We Don’t Need More Guidelines or Frameworks on Ethical AI Use. It’s Time for Regulatory Action

By the Chair of the IEEE-USA Artificial Intelligence and Autonomous Systems Policy Committee
Recent debates over the general ethical principles underpinning autonomous and intelligent systems (A/IS) have highlighted many of the possible societal consequences of developing and deploying A/IS without proper forethought and governance. Among many other risks, we could exacerbate gender, social, and economic schisms, reinforce stereotypes (often born of endemic, controversial, and historic social policy), and erode trust in institutions and authority. To address those potentially detrimental consequences, there must be a robust societal debate over the ethics and governance of A/IS.
There are signs that this debate is already happening in some corners. Extensive deliberation and expert counsel have produced the IEEE Global Initiative on Ethics of Autonomous and Intelligent Systems’ Ethically Aligned Design framework; the Organization for Economic Co-operation and Development’s recommendations for responsible stewardship of trustworthy AI, co-signed by 42 nations; and the EU High-Level Expert Group’s Ethics Guidelines for Trustworthy AI. All highlight essential pillars and principles for developing ethical A/IS.
But the creation of documents and frameworks raises a critical question: Now that we have the principles, what comes next?
From Principles to Practice
It is time to transition from general principles to practice and to implement the appropriate and necessary guardrails that maximize the public good. In certain areas where A/IS could be used, there is currently a legal void waiting to be filled by regulatory frameworks that would guarantee the right to arbitration and adjudication, remediation for losses, harm, or injury, and protection against willful negligence or outright malfeasance. Real cases of A/IS having a deleterious effect on citizens’ identity, agency, access, equity, right to self-determination, and freedom of choice and expression already exist. Examples abound in criminal justice, health care, and the insurance and financial sectors.
Future protective guardrails will have to include regulatory instruments that are backed by the power of the state and the courts. Either based in case law (court precedent) or accomplished via legislative and regulatory obligations, these instruments will be critical to good governance of A/IS. They will be a tool to help enforce general A/IS ethical principles, such as transparency, accountability and competence.
Designing Regulations on the Move
Regulations promote the public good and are critically needed. But designing regulations for a very dynamic industry that is evolving at warp speed — while avoiding inefficiencies and enabling competition and continued innovation — is extremely challenging.
To match the speed of A/IS development, any future regulations must be agile and adaptable, though what that might look like is not yet settled. It may feel like an unsatisfying cop-out, but there is no conclusive or definitive answer to what those regulations should be or how they should be promulgated and enforced. That question will be the subject of future research in regulatory, tort, and constitutional law.
Regulatory Instruments to Build from
In the U.S., we’ve long enacted regulatory instruments aimed at specific sectors and industries in order to protect the public. Those instruments strongly prioritized accountability, transparency, nondiscrimination and explainability, as well as other elements that would be useful in future regulation and ethical governance of A/IS. For instance:
- Disclosure requirements, rules against discrimination (under Title VI of the Civil Rights Act), and requirements for explainability in lending and consumer credit under the Equal Credit Opportunity Act, the Consumer Credit Protection Act, the Truth in Lending Act, and rules enforced by the Equal Employment Opportunity Commission.
- Patient information privacy requirements in the Health Insurance Portability and Accountability Act (HIPAA Title II), protections against financial malfeasance (e.g., predatory lending) and insurance malfeasance in Consumer Financial Protection Bureau regulations, the prohibition of unfair or deceptive acts or practices in or affecting commerce in Section 5 of the FTC Act, and corporate accountability requirements in the Sarbanes-Oxley Act (enacted after the Enron scandal).
- Various warranty and disclosure requirements in contract law governing principal-agent relationships that protect the consumer from unfair and misleading business practices (e.g., when the other party has a conflict of interest or is not acting in the consumer’s best interest) and establish grounds for legal recourse (e.g., federal “lemon laws” or the Magnuson-Moss Warranty Act).
- Legal ethics codes incorporated into many state laws or other professional ethics codes that govern speech between service providers and customers.
Our deliberations over AI ethics and governance must evolve. Simply developing more unenforceable codes of ethics will not be sufficient going forward.
In the United States, the most promising pieces of legislation that have been proposed for the governance of A/IS are the Algorithmic Accountability Act of 2019, which would “direct the Federal Trade Commission to require entities that use, store, or share personal information to conduct automated decision system impact assessments and data protection impact assessments,” and the Commercial Facial Recognition Privacy Act of 2019, which would “prohibit certain entities from using facial recognition technology to identify or track an end user without obtaining the affirmative consent of the end user.” Both bills rely solely on the FTC’s jurisdiction to prosecute deceptive and unfair acts and practices under Section 5 of the FTC Act.
To stress the point: To build agile governance of A/IS, we ought to consider very carefully the corpus juris (body of law) of the country, different types of legal systems (common, civil or customary, etc.) and the jurisdictions of the existing regulatory instruments (like those stated above) to determine whether this body of law is fit to promote trustworthy A/IS. Then, we can regulate appropriately to cover areas where legal protection is lacking, building on top of what already exists.
Admittedly, regulations can be blunt and inefficient and are at times weakened by political expediency and partisan whims. But they compensate for this by providing a larger degree of certainty, backed by the institutions of government. Of course, regulatory certainty is only achievable once a degree of convergence and consistency is reached in how those regulations are interpreted and enforced.
Can Self-Regulation Play a Role?
Is there a role for self-regulation? Of course: through standards, evaluation, ongoing monitoring, and auditing of internal processes via internal ethics oversight boards. Still, it remains a fact that self-regulation cannot be a replacement for the law.
Government regulation and self-regulation are two very different regimens of governance. They can be, and often are, complementary, but not always. They have different jurisdictions and derive their legitimacy and enforceability from different sources. They also follow from two schools of thought with contrasting interpretations of the mechanics of the free market: one believes in the invisible hand of the market; the other holds that markets can and do fail and need constant oversight (though few actors fall exclusively within one camp or the other). Governance of A/IS will require a balancing act involving creeds, systems, and jurisdictions.
Time to Take Action
As argued above, simply developing more unenforceable codes of ethics will not be sufficient to govern A/IS. What we know for certain is that we are constantly running into ethical conundrums in the use of A/IS. Consider this month’s example of the UK Metropolitan Police’s facial recognition technology: Four out of five people the software identified as possible suspects turned out to be innocent. The system was 81 percent inaccurate, despite being hailed for its efficacy and a supposed one-in-1,000 error rate.
At some juncture, our deliberations over AI ethics and governance must evolve from a wish list of A/IS ethical practices into a discussion of specific, enforceable policies and legislation. The time for vigorous debate, and the time to move from principles to practice, is now.