
10 principles for public sector use of algorithmic decision making

The rise of government AI

A growing number of governments and public sector organisations are adopting artificial intelligence tools and techniques to assist with their activities. These range from traffic light management systems that speed emergency crews through congested city streets to chatbots that intelligently answer queries on government websites.

The application of AI that seems likely to cause citizens the most concern is the use of machine learning to create algorithms that automate or assist with decisions and assessments made by public sector staff. While some such decisions and assessments are minor in their impact, such as whether to issue a parking fine, others have potentially life-changing consequences, such as whether to offer an individual council housing or grant them probation. The logic that sits behind those decisions is therefore of serious consequence.

A considerable amount of work has already been done to encourage or require good practice in the use of data and the analytics techniques applied to it. The UK government’s Data Science Ethical Framework, for example, outlines six principles for responsible data science initiatives: 1) Start with a clear user need and public benefit; 2) Use data and tools which have the minimum intrusion necessary; 3) Create robust data science models; 4) Be alert to public perceptions; 5) Be as open and accountable as possible; and 6) Keep data secure.

Meanwhile, data protection laws mandate certain practices around the use of personal data. Chief among them is the EU’s General Data Protection Regulation (GDPR), which comes into force on 25 May 2018 and places greater obligations on organisations that collect or process personal data. The GDPR will also effectively create a “right to explanation”, whereby a user can ask for an explanation of an algorithmic decision that was made about them.

But is this enough?

Some organisations, including Microsoft, have suggested that data scientists should sign up to something akin to a Hippocratic oath for data, pledging to do no harm with their technical wizardry.

While debate may continue on the pros and cons of creating more robust codes of practice for the private sector, a stronger case can surely be made for governments and the public sector. After all, an individual can opt out of using a corporate service whose approach to data they do not trust. They do not have that same luxury with services and functions where the state is the monopoly provider. As Robert Brauneis and Ellen P. Goodman have written, “In the public sector, the opacity of algorithmic decision making is particularly problematic both because governmental decisions may be especially weighty, and because democratically-elected governments bear special duties of accountability.”

With this in mind, below I have drafted 10 principles that might go into a Code of Standards for Government and Public Sector Use of AI in Algorithmic Decision Making. This is very much a working draft. I would welcome readers’ comments and ideas for what should be added, omitted or edited. I have outlined specific questions against some principles. Please comment via Twitter, or edit this Google docs version.

My thanks go to Harry Armstrong, Geoff Mulgan and Nevena Dragicevic for their feedback on earlier versions of this draft code.

One note: below, I sometimes mention the need for organisations to ‘publish’ various details. Ideally, this would entail releasing information directly to citizens. In some cases, however, that may not be possible or desirable, such as where algorithms are used to detect fraud. In those cases, I would propose that the same details be made available to auditors who can assure that the Code is being adhered to. Auditors may come from data protection authorities, like the UK’s Information Commissioner's Office, or from new teams formed within the established regulators of different public sector bodies.

Code of Standards for Public Sector Algorithmic Decision Making

1 - Every algorithm used by a public sector organisation should be accompanied by a description of its function, objectives and intended impact, made available to those who use it.

E.g. “This algorithm assesses the probability that a particular property is an unlicensed House in Multiple Occupation (HMO) based on an analysis of the physical characteristics of known past cases of HMOs in Cardiff recorded between January 2016 and December 2017. Its objective is to assist local authority building inspectors in identifying properties that should be prioritised for risk-based inspections. Its intended impact is to increase the number of high risk properties that are inspected so that HMO licences can be enforced.”

Rationale: If we are to ask public sector staff to use algorithms responsibly to complement or replace some aspect of their decision making, it is vital they have a clear understanding of what they are intended to do, and in what contexts they might be applied. In the example given above, it would be clear to a building inspector that the algorithm does not take into account details of a property’s ownership, and is only based on past cases in Cardiff from two years’ worth of records. In this way, they can understand its strengths and limitations for informing their decisions and actions.
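To illustrate what such a description might look like in practice (this is a sketch of my own, not a format the Code prescribes), the HMO example above could be captured as structured metadata published alongside the algorithm. The field names below are assumptions, chosen purely for illustration.

from dataclasses import dataclass

@dataclass
class AlgorithmDescription:
    """Illustrative metadata published alongside an algorithm (Principle 1)."""
    name: str
    function: str         # what the algorithm does
    objective: str        # why it is being used
    intended_impact: str  # the outcome it is meant to improve
    training_data: str    # scope and date range of the data it was built on

hmo_model = AlgorithmDescription(
    name="HMO risk prioritisation",
    function="Estimates the probability that a property is an unlicensed HMO "
             "from its physical characteristics.",
    objective="Assist building inspectors in prioritising risk-based inspections.",
    intended_impact="Increase the number of high-risk properties inspected so "
                    "that HMO licences can be enforced.",
    training_data="Known HMO cases recorded in Cardiff, January 2016 to December 2017.",
)

Publishing something this simple alongside each algorithm would let the staff who use it see at a glance both what it covers and what it leaves out.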

Questions: How do we strike the right balance between simple language and accurately describing how an algorithm works? Should we also make some requirements about specifying where in a given process an algorithm is introduced?

2 - Public sector organisations should publish details describing the data on which an algorithm was (or is continuously) trained, and the assumptions used in its creation, together with a risk assessment for mitigating potential biases.

Rationale: Public sector organisations should prove that they have considered the inevitable biases in the data on which an algorithm was (or is continuously) trained and the assumptions used in their model. Having done this, they should outline the steps they have taken to mitigate any negative consequences that could follow, to demonstrate their understanding of the algorithm’s potential impact. The length and detail of the risk assessment should be linked to the likelihood and potential severity of producing a negative outcome for an individual.
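As a rough illustration of one check that could feed into such a risk assessment (a sketch under my own assumptions, not a required method), an organisation might compare how different groups or areas are represented in the training data against the population the algorithm will be applied to:

from collections import Counter

def representation_gaps(training_groups, population_shares, threshold=0.10):
    """Flag groups whose share of the training data differs from their share
    of the wider population by more than an assumed threshold."""
    counts = Counter(training_groups)
    total = sum(counts.values())
    gaps = {}
    for group, population_share in population_shares.items():
        training_share = counts.get(group, 0) / total
        if abs(training_share - population_share) > threshold:
            gaps[group] = (round(training_share, 2), population_share)
    return gaps

# Hypothetical figures for demonstration only: the ward of each training case
# versus each ward's share of properties city-wide.
training_wards = ["Cathays"] * 70 + ["Riverside"] * 20 + ["Splott"] * 10
ward_shares = {"Cathays": 0.30, "Riverside": 0.35, "Splott": 0.35}
print(representation_gaps(training_wards, ward_shares))  # Cathays over-represented, the others under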

Questions: Where are risk assessments based on the potential of negative outcomes already deployed effectively in the public sector?

3 - Algorithms should be categorised on an Algorithmic Risk Scale of 1-5, with 5 referring to those whose impact on an individual could be very high, and 1 to those whose impact would be very minor.

Rationale: Given the rising usage of algorithms by the public sector, only a small number could reasonably be audited. By applying an Algorithmic Risk Scale (based on the risk assessment conducted for Principle 2), public sector organisations could help auditors focus their attention on instances with the potential to cause the most harm.
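One way the scale might be derived (an illustrative sketch only; the weighting and bands are my assumptions, and the questions below remain open) is to combine the likelihood and severity scores from the Principle 2 risk assessment:

def algorithmic_risk_score(likelihood, severity):
    """Map the likelihood and severity of a negative outcome (each rated 1-5)
    to an overall 1-5 Algorithmic Risk Scale score.
    The banding below is an assumption for illustration, not a proposed standard."""
    product = likelihood * severity  # ranges from 1 to 25
    for upper_bound, score in [(4, 1), (8, 2), (12, 3), (18, 4), (25, 5)]:
        if product <= upper_bound:
            return score

# e.g. a parking-fine check versus a probation assessment tool
print(algorithmic_risk_score(likelihood=2, severity=1))  # -> 1
print(algorithmic_risk_score(likelihood=3, severity=5))  # -> 4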

Questions: How would such a scale be categorised? Who could assess what level a particular algorithm should be categorised as?

4 - A list of all the inputs used by an algorithm to make a decision should be published.

Rationale: Transparency on what data is used by an algorithm is important for a number of reasons. First, to check whether an algorithm is discriminating on inappropriate grounds (e.g. based on a person’s ethnicity or religion). Second, to ensure that an algorithm could not be using a proxy measure to infer personal details from other data (e.g. guessing someone’s religion based on their name or country of origin). Third, to ensure the data being used are those that citizens would deem acceptable, thereby supporting the Data Science Ethical Framework’s second and fourth principles to: “Use data and tools which have the minimum intrusion necessary” and “Be alert to public perceptions”.
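On the second point, a simple screening step (sketched below with hypothetical data; the threshold and encoding are assumptions) is to measure how strongly each published input correlates with a protected characteristic, since a strong association suggests the input may act as a proxy for it:

from statistics import correlation  # available in Python 3.10+

def flag_possible_proxies(features, protected, threshold=0.5):
    """Flag inputs whose correlation with a protected attribute exceeds an
    assumed threshold, suggesting they may serve as a proxy for it.
    `features` maps input name -> list of numeric values;
    `protected` is a list of 0/1 values for the protected characteristic."""
    flagged = {}
    for name, values in features.items():
        r = correlation(values, protected)
        if abs(r) >= threshold:
            flagged[name] = round(r, 2)
    return flagged

# Hypothetical inputs for six cases, for demonstration only.
features = {
    "number_of_bedrooms": [2, 5, 3, 4, 6, 2],
    "postcode_district": [1, 1, 0, 1, 0, 1],  # numerically encoded postcode
}
protected = [1, 1, 0, 1, 0, 1]  # e.g. membership of a protected group
print(flag_possible_proxies(features, protected))  # flags 'postcode_district'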

5 - Citizens must be informed when their treatment has been informed wholly or in part by an algorithm.

Rationale: For citizens to have recourse to complain about an algorithmic decision they deem unfair (e.g. they are denied council housing or probation), they need to be aware that an algorithm was involved. This might work in a similar way to warnings that a credit check will be performed when a person applies for a new credit card.

Questions: In what instances, if any, would this be a reasonable requirement? Does this wrongly place higher standards on algorithms than we place on human decision making?

6 - Every algorithm should have an identical sandbox version for auditors to test the impact of different input conditions.

Rationale: It has sometimes been suggested that the code of algorithms used by government and the public sector should be made open so that their logic and function can be assessed and verified by auditors.

This now seems impractical, for at least four reasons. First, the complexity of modern algorithms is such that there are not enough people who would understand the code. Second, with neural networks there is no one location of decision making in the code. Third, algorithms that use machine learning constantly adapt their code based on new inputs. And fourth, it is unrealistic to expect that every algorithm used by the public sector will be open source; some black box proprietary systems seem inevitable.

Instead, auditors should have the ability to run different inputs into the algorithm and confirm that it does what it claims. If this cannot be done in the live system, then a sandbox version running identical code should be required. Testing should focus on algorithms scored at the upper end of the Algorithmic Risk Scale outlined in Principle 3.
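In practice, such a sandbox test could be as simple as the sketch below: take real or synthetic cases, change one input at a time, and flag cases where the output shifts materially. The sandbox_predict function here is a hypothetical stand-in for whatever interface a sandbox would expose, and the toy model and tolerance are my own assumptions.

def counterfactual_check(sandbox_predict, cases, attribute, alt_value, tolerance=0.05):
    """Re-run each case through the sandboxed model with only `attribute`
    changed to `alt_value`, and report cases whose score shifts by more
    than `tolerance`."""
    findings = []
    for case in cases:
        baseline = sandbox_predict(case)
        variant = {**case, attribute: alt_value}
        shifted = sandbox_predict(variant)
        if abs(shifted - baseline) > tolerance:
            findings.append((case, round(baseline, 2), round(shifted, 2)))
    return findings

# Toy stand-in for the sandboxed algorithm, for demonstration only.
def sandbox_predict(case):
    return min(1.0, 0.1 * case["number_of_bedrooms"] + 0.3 * case["postcode_flag"])

cases = [{"number_of_bedrooms": 4, "postcode_flag": 1},
         {"number_of_bedrooms": 2, "postcode_flag": 0}]
print(counterfactual_check(sandbox_predict, cases, "postcode_flag", alt_value=0))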

7 - When using third parties to create or run algorithms on their behalf, public sector organisations should only procure from organisations able to meet Principles 1-6.

Rationale: Given the specialist skills required, most public sector organisations are likely to need to hire external expertise to develop algorithms, or pay for the services of organisations that offer their own algorithms as part of software-as-a-service solutions. In order to maintain public trust, such procurements cannot be absolved of the need to meet the principles in this code.

8 - A named member of senior staff (or their job role) should be held formally responsible for any actions taken as a result of an algorithmic decision.

Rationale: This would be a powerful way to ensure that the leadership of each organisation has a strong incentive only to deploy algorithms whose functions and impacts on individuals they sufficiently understand.

Questions: Would this kind of requirement deter public sector bodies from even trying algorithms? Can and should we distinguish between an algorithmic decision and the actions that are taken as a result of it?

9 - Public sector organisations wishing to adopt algorithmic decision making in high risk areas should sign up to a dedicated insurance scheme that provides compensation to individuals negatively impacted by a mistaken decision made by an algorithm.

Rationale: On the assumption that some people will be adversely affected by the results of algorithmic decision making, a new insurance scheme should be established by public sector bodies to ensure that citizens are able to receive appropriate compensation.

10 - Public sector organisations should commit to evaluating the impact of the algorithms they use in decision making, and publishing the results.

Rationale: This final evaluation step is vital for three reasons. First, to ensure that the algorithm is used as per the function and objectives stated in Principle 1. Second, to help organisations learn about the strengths and limitations of algorithms so they can be improved. Third, to ensure that examples of best practice can more easily be verified, distilled and spread throughout the public sector.
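As one hypothetical illustration of the first point, an evaluation of the HMO example might compare the properties the algorithm flagged with what inspections actually confirmed, using standard measures such as precision and recall (the identifiers and figures below are invented for demonstration):

def precision_recall(flagged, confirmed):
    """Compare the properties the algorithm flagged with those inspections
    confirmed as unlicensed HMOs. Both arguments are sets of property IDs."""
    true_positives = len(flagged & confirmed)
    precision = true_positives / len(flagged) if flagged else 0.0
    recall = true_positives / len(confirmed) if confirmed else 0.0
    return round(precision, 2), round(recall, 2)

# Hypothetical property identifiers, for demonstration only.
flagged_by_algorithm = {"P1", "P2", "P3", "P4"}
confirmed_by_inspection = {"P2", "P3", "P5"}
print(precision_recall(flagged_by_algorithm, confirmed_by_inspection))  # -> (0.5, 0.67)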

I would welcome your suggestions and edits to this Code via Twitter or the Google Docs version mentioned above.



Author

Eddie Copeland

Director of Government Innovation

Eddie was Nesta's Director of Government Innovation.