

The Risks of Using AI for Government Work

The promises and imperatives of technological improvements to public systems seem evident — better-designed systems can improve access, reduce inefficiencies and, in the current pandemic, keep us healthier and safer.

However, viewing algorithmic systems as an easy fix to ingrained, institutionalized problems will make those problems worse. To achieve the full potential of new technologies, we must first build appropriate oversight into our public procurement processes.

AI systems in particular pose new risks to society as they encroach into public use. Our democratic processes risk being subverted as private companies increasingly take over our digital public infrastructure, potentially leading to unprecedented political capture under the guise of “modernizing” public services. We urgently need appropriate levels of oversight and review and the empowerment of elected officials, policymakers and citizens. 

COVID-19 Is Speeding the Adoption of AI

Governments are seeking technological interventions to mitigate the spread of COVID-19 — whether by introducing contact tracing apps or developing new predictive models to determine spread and assess the impact of reopening versus shutdown policies. 

In many cases, these technologies are being developed and implemented by private companies, granting these organizations access to behavioral and health data that would otherwise be restricted. Without our consent, these companies can now improve their models using our sensitive data. 

Earlier this year, the U.K. government was pressured to release the details of its NHS contracts with big tech companies, including Google, Palantir and Microsoft. These companies received unprecedented access to citizen health data, including age, address, health conditions, treatments and medicines, test and X-ray results, lifestyle choices, and hospital admissions information. These processes were all handled behind closed doors, without consent or review from the U.K. public, Parliament or any citizen-representative group.

AI Offers Governments Money-Saving Fixes

The perceived benefits of automation and algorithmic decision-making are persuasive to the public sector. Governments are strapped for resources, whether personnel, money or skill. Algorithms offer a quick-fix solution: Outsource the problem to a vendor with expertise in a particular area rather than employing expensive data scientists. Algorithms also lend a veneer of objectivity to an inexperienced audience.

However, public agencies are also tasked with mitigating harms introduced by these systems to the communities they serve, and the perceived benefits of algorithmic decision-making systems cannot outweigh this responsibility. 

Unlike private companies, whose remit is to their shareholders and profit-generation, public entities must consider their entire population when delivering a solution. This quick fix, in fact, is not so quick or easy. Three main problems exist. 

AI Public Use Is Different

The public use of an algorithmic decision-making system has different requirements than a private-use product. It is acceptable for a private company to create a product that addresses the needs of, say, 80% of its target market. However, if this product is translated to public use, addressing the needs of only 80% of a constituency is unacceptable. The 20% who are not served are also likely to come from underserved minority groups whose data do not necessarily look like the average.

This gap is rarely considered. We have already seen this bias manifest in federal COVID-19 funding allocation algorithms, which favored high-income communities over low-income ones due to historical biases in the training data.

Public Use Has Higher Compliance Standards

Public-use technologies are subject to different compliance and legal criteria than most private-use technologies. In New Orleans, the undisclosed use of Palantir’s Gotham platform was found to be in violation of Brady v. Maryland, a Supreme Court case that places the responsibility on prosecutors to disclose any information that could cast doubt on any evidence presented against the accused. 

In the case of Kentrell Hickerson, who was sentenced to 100 years in prison, the use of Gotham was not disclosed to his lawyers, which allowed him to successfully petition for a new trial. In other cases, the use of algorithmic decision-making systems has violated due process law, in particular the use of allocative algorithms to determine Medicare and Medicaid benefits.

Public Procurement Risks

Existing public procurement criteria do not address the particular risk factors baked into the public use of AI systems. This is because private vendors are not required to assess the risk their models pose to certain populations, either when designing the technology or later, during the procurement process. Nor are they required to have their models assessed by external bodies.

Assessing models developed by an external party is a challenge most organizations face. Companies do not currently have methods to share their data and model outputs securely, or to provide access to audit bodies in a way that protects their intellectual property.

To address and mitigate this issue, we are in urgent need of innovation in public procurement of AI and AI-related technologies. This innovation should embrace a more holistic and proactive approach to risk management and the impact of AI. It should also be grounded in an equitable conversation between public decision-makers, communities and vendors. Such a conversation requires socio-technical literacy: knowledge about how technologies impact society and about how politics, economics and social aspects shape technology innovation. 

There must also be a decided push toward transparency, in terms of the technology being used, the vendors that provide it and the public touchpoints the technology will have. The cities of Helsinki and Amsterdam have recently made a push in this direction by listing their AI technologies in public registers.

The Need for Public Accountability Infrastructure

There is also an urgent need for accountability. New systems designed and deployed for public use must undergo initial and ongoing risk assessments. These assessments need to introduce methodologies for identifying and illustrating biases in data and models, and offer transparency into processes for remediation or limited use. The City of New York is addressing this issue by hiring an algorithms management and policy officer.

Another good starting point is the Canadian government’s algorithmic impact assessment for any vendors introducing algorithmic systems into public use. These positive use cases in Finland, the Netherlands, the U.S. and Canada highlight the need for international and cross-disciplinary exchange on the issue, as a global best-practices exchange is valuable for adapting quickly and collectively to new challenges introduced by AI systems.

Lastly, a global database of harmful and/or failed AI applications for public use would help assess the risk and impact of AI technologies. There is a need to understand the rationales and factors leading public agencies to retire AI systems (work currently being pursued at the Data Justice Lab and the Carnegie U.K. Trust).

There is clear benefit to the use of algorithmic systems in the public sector — if done responsibly. To do so, we need to understand failed projects, draw on global best practices and develop new innovations to proactively address risk.

Rumman Chowdhury is CEO and founder of Parity, an enterprise ethical AI model audit platform. She is a pioneer of responsible AI, building ethical solutions for C-suite clients, policymakers and organizations since 2017.

Mona Sloane

Sociologist at New York University @mona_sloane

Mona Sloane is a sociologist based at New York University (NYU). She works on inequality in AI design and policy. At NYU, she is a fellow with the Institute for Public Knowledge, The GovLab and the NYU Alliance for Public Interest Technology, as well as adjunct professor at the NYU Tandon School for Engineering.
