ALGORITHMIC RESPONSIBILITY

A pioneering computer scientist wants algorithms to be regulated like cars, banks, and drugs

A low-stakes algorithm at work.
Image: AP Photo/Altaf Qadri

It’s convenient when Facebook can tag your friends in photos for you, and it’s fun when Snapchat can apply a filter to your face. Both are examples of algorithms that have been trained to recognize eyes, noses, and mouths with consistent accuracy.

When these programs are wrong—like when Facebook mistakes you for your sibling or even your mom—it’s hardly a problem. In other situations, though, we give artificial intelligence much more responsibility, with larger consequences when it inevitably backfires.

Ben Shneiderman, a computer scientist at the University of Maryland, thinks the risks are big enough that it’s time for the government to get involved. In a May 30 lecture at the Alan Turing Institute in London, he called for a “National Algorithm Safety Board,” similar to the US National Transportation Safety Board, which would provide both ongoing and retroactive oversight of high-stakes algorithms.

Such algorithms are already deeply embedded in many aspects of our lives. They set prices on stock markets, fly aircraft on autopilot, calculate insurance risks, find you an Uber, and devise routes for delivery trucks. In the future they’ll be used for even more critical tasks, such as controlling self-driving cars and making medical diagnoses.

But algorithms make mistakes too, and when they do it can be extremely hard to figure out why—witness the flash crashes on stock markets and the autopilot failure that brought down Air France flight 447 in 2009. Many algorithms aren’t even directly designed by people but instead evolve through a process of machine learning, producing a “black box” program that does its task in a way no human can make sense of, and which can therefore act unpredictably. Such algorithms can also go awry if they encounter a situation that’s markedly different from the data they were trained on.

For example, in a recent study, a computer program from Stanford researchers outperformed human dermatologists at spotting skin cancer just from photographs. However, as Quartz’s Dave Gershgorn points out, the algorithm was trained using pictures of predominantly white skin. Unless it’s given a more diverse set of samples to learn from, it might miss cancer in patients with darker skin.

Shneiderman is not an expert in these sorts of algorithms himself; his pioneering work was in interface design, including such things as the hyperlink and improvements to the touch-screen keyboard. But perhaps because his core design principles include keeping the user in control, he thinks it’s time to bring the same mindset to algorithms.

“When you go to systems which are richer in complexity, you have to adopt a new philosophy of design,” Shneiderman argued in his talk. His proposed National Algorithm Safety Board, which he also suggested in an article in 2016, would provide an independent third party to review and disclose just how these programs work. It would also investigate algorithmic failures and inform the public about them—much like bank regulators report on bank failures, transportation watchdogs look into major accidents, and drug licensing bodies look out for drug interactions or toxic side-effects. Since “algorithms are increasingly vital to national economies, defense, and healthcare systems,” Shneiderman wrote, “some independent oversight will be helpful.”