Stage T
14:45 - 15:45
English
Workshop
Everyone
Exploring standards for algorithmic transparency

Short thesis

Algorithms make decisions affecting us in ways ranging from what we see on social media to how we are targeted by advertisers and law enforcement. Badly designed algorithms built on biased or flawed datasets can violate people’s rights. Given such power, there is a critical need to build consensus around standards for the creation and use of algorithms, so that these systems remain accountable to the public interest. This workshop will brainstorm best practices for algorithmic transparency by internet companies.

Description

The Ranking Digital Rights (RDR) annual Corporate Accountability Index currently evaluates 22 internet, mobile, and telecommunications companies on their disclosed policies affecting users’ freedom of expression and privacy. Given the urgent public interest concerns related to algorithms, we want to explore how the Index can also set standards for corporate disclosure and accountability around the development and use of algorithms. In co-facilitating this workshop, RDR will draw on its expertise in developing indicators: broadly applicable standards that enable researchers to measure and compare companies’ disclosed policies affecting users’ digital rights. SHARE Foundation will contribute its experience researching and mapping the influence of Facebook’s algorithm.

We will begin the workshop with a discussion of what we mean by algorithmic transparency and what researchers and civil society groups have suggested as best practices. We will map out the different ways that such company practices and disclosures could be evaluated and compared with one another, based on our experience developing the Index methodology.

Participants will then break into smaller groups to brainstorm key considerations and potential elements of an indicator measuring algorithmic transparency. Such an indicator could be used by researchers and civil society to determine whether companies are developing and deploying algorithms in a responsible and accountable manner.

Discussion topics may include (but are not limited to):

  • What standards and good practices are we seeking to encourage?

  • How should transparency be defined in the context of algorithms? What would meaningful transparency look like?

  • What obstacles prevent us from evaluating algorithmic transparency? What is realistic to expect companies to disclose?

RDR will use the recommendations and feedback from workshop participants to explore whether and how to incorporate evaluation of algorithmic transparency into its Index.