AlgoTransparency, Cookies and Middleware
Earlier this week I came across AlgoTransparency, an interest group founded by Guillaume Chaslot, whom many people will recognise from The Social Dilemma. AlgoTransparency has commendable aims. They wish to:
- Raise public awareness about the lack of transparency provided by the world’s most significant algorithms.
- Influence international regulators with regard to policy approaches.
- Pressurise the owners of significant algorithms to make changes to the way they operate.
This is an agenda I can get behind. End users should be provided with simple explanations about how the algorithms that power any given app or website work. And these explanations should satisfy both the need to understand the goal the algorithm has, for instance “time on site”, and (to the extent that it’s possible) the process that led to its final decision.1
I raise these points not because I think AlgoTransparency has picked the wrong fight – transparency will lead to a greater public understanding about algorithms and this is a worthy cause – but because transparency can only ever take us so far.
As I find myself regularly writing at the moment, algorithm choice, or more broadly, a marketplace for middleware, should be the direction we are headed. By creating choice between algorithms we diffuse the power they hold across multiple actors while retaining much of the value that comes with large platforms. I would also argue that a marketplace for middleware furthers the transparency cause because it makes transparency a vector by which consumers can vet their choice of algorithm. Developers who are open about the nature of their algorithms, and the goals against which their success is measured, will surely be more appealing than those who choose to hide this information.
Another idea that I continually return to is that the successful regulation of tech/platforms/algorithms will be made up of many overlapping changes rather than a single headline one. Break-ups diffuse power, but aren't going to make it easier to compete against the network effects that protect YouTube. Full protocol interoperability will support new market entrants, but would likely come with new kinds of risk to personal data. Algorithm transparency does little to alter the distribution of power, but will make developers more accountable.
I wonder if what is needed is a more "agile" approach to regulation: smaller changes, call them tests, that target specific problems rather than trying to change everything in one go. I'm no expert on the history of regulatory approaches, but my gut tells me that this isn't a strategy many policy makers would consider.
1 I add this caveat as it's important to remember that full algorithm transparency may not be possible, as Jenna Burrell points out in her excellent paper on opacity within machine learning.