The Edge of Risk
New thinking on corporate risk and resilience in the global economy.
Technology

Collaboration is Necessary for Ethical Artificial Intelligence

Bushra Ebadi, Global Security and Politics Research Associate at the Centre for International Governance Innovation

Despite the wide-reaching impacts of artificial intelligence (AI) on various industries and sectors, there is no mechanism or body that is charged with assessing national AI strategies, policies or ethics.

Over the past few years, several countries around the world have started to develop national AI policies and strategies. The Group of Twenty (G20) is working to “provide recommendations on inclusive development in the era of digital transformation.” AI also comes up prominently in discussions of Industry 4.0, defined as the digitalization of the manufacturing industry. Much of the conversation has focused on AI in relation to the workforce, privacy concerns and cyber warfare.

The subject—which increasingly impacts day-to-day work and life—is worth a more serious assessment.

Currently, there is a lack of transparency on how AI is being used by governments around the world. As the various working groups of the G20 meet over the course of the year to discuss digitization and Industry 4.0, it is paramount that they make concrete efforts to foster an environment in which information sharing is the norm. Creating environments that encourage the sharing of information is especially important now; as countries begin to develop their own ethical frameworks for AI, there is a risk that divergent and conflicting pathways will emerge. The set of principles and regulations one country adopts may conflict with that of others and result in the development of AI technologies that fail to operate in a global context.

Without a concerted effort to develop a global ethical framework for AI, technologies may be misappropriated, misused or even intentionally used for nefarious purposes, such as surveillance programs used to identify and suppress dissent.

AI also carries dual-use risks: tools developed for legitimate purposes can be repurposed to support illegal, criminal or unethical activities. Because education and awareness of the ethical and social implications of AI are lacking, it is difficult for technologists, researchers, policymakers and users to develop measures to mitigate these risks. Furthermore, since AI has global impacts regardless of where it is deployed, technologists must be aware of the varying political, social, cultural and economic systems that may incentivize or allow individuals to use AI to suppress, oppress or control others.

Technologies tasked with decision-making—such as AI—introduce ambiguity, making it difficult to discern who is ultimately responsible for the consequences or impacts of those technologies. In fact, by relegating decision-making to these technologies, individuals may become less apt to think critically about the consequences of the decisions being made. While teaching ethics and critical thinking is important in any context, the development of emerging and exponential technologies makes it an even greater imperative.

Efforts to map the impacts of AI and forecast how it will shape the future are limited by the contents of national, regional or international AI strategic plans and documents. While it is widely acknowledged that AI can only be as good as its inputs (good data in, good data out), this same principle has not been applied to AI strategy, policy development or oversight. For example, the Pan-Canadian AI Strategy does not provide details on investments in specific types of AI technologies, or metrics and indicators that might determine whether the strategy is successful.


Ultimately, the lack of comprehensive information provided by various national AI strategies makes it difficult for states to coordinate their efforts in this space with one another. It is unclear how comprehensive policies and regulations can be developed when governments consider investments and technology development in silos.

It is evident that most, if not all, existing national AI strategies fail to prioritize peace building, human rights and social and environmental justice. That said, governments are not the only actors shaping the future of AI. The private sector plays a key role, investing millions of dollars in AI research, development and commercialization.

However, commercial interests don’t encourage transparency. Any governance mechanisms that only focus on the role of states will fall short in ensuring greater transparency and accountability as it pertains to AI.

AI ethics are complex and the related discussions can’t be tackled in one sitting—far from it. It is important, however, that steps be taken to better equip the individuals at the table: developers, regulators and technology users alike. The following list of suggested steps is by no means exhaustive, but it is a strong starting point for the discussion:

  • Develop a global repository of AI strategies and policies to ensure greater transparency and accessibility to the general public and relevant stakeholders, such as policymakers.
  • Develop a governance structure or platform for ensuring accountability and transparency in the development of AI, in particular as it relates to the social and political impacts of these technologies.
  • Encourage greater knowledge-sharing among different states and stakeholders to foster a more collaborative environment. (Most national AI strategies are focused on developing competitive economic and militaristic advantages and are not prioritizing peace-building, human rights, social justice and environmental sustainability).
  • Create opportunities for states and other actors to collaborate on the development of a global ethical framework for AI and an ethics board for exponential and emerging technologies.
  • Develop accessible, comprehensive education curricula that ensure interdisciplinary understandings of AI and its impacts on society, enabling citizens to make informed decisions in their use of AI and other emerging technologies.
  • Include diverse stakeholders in the development of AI policies and strategies.
  • Finally, invest in the study and comparison of the social, ethical, political and environmental implications of AI, in addition to its security and economic implications.

Unless we develop AI policies and regulations in a collaborative environment, AI itself is unlikely to foster collaboration and will instead reinforce norms of competition.

This piece first appeared on the Centre for International Governance Innovation website.

Bushra Ebadi


Bushra Ebadi is a global security and politics research associate at the Centre for International Governance Innovation.
