Charles Morgan
Charles Morgan, a principal at McCarthy Tétrault, LLP, in Montreal, Canada, is president of ITechLaw. He is also editor and a chapter lead of “Responsible AI: A Global Policy Framework.”
Artificial intelligence is increasingly embedding itself into our daily lives. But what are the social, ethical, and legal implications of this technology? It’s time to consider some core principles for responsible AI.
Artificial intelligence is quickly becoming integrated into our world, and in some cases it’s happening without people even noticing. So far, the outcomes of this technology are mixed. Some are positive, like faster cancer diagnoses, and others negative, like discriminatory resume vetting.
Since the legal, ethical, and policy issues raised by AI know no boundaries, the members of the International Technology Law Association (ITechLaw) are taking a multi-jurisdictional approach to them. Fifty-four lawyers and AI experts from 16 countries volunteered to craft an ethical framework that begins the conversation about the core principles of responsible AI. Its eight principles are:

- Ethical purpose and societal benefit
- Accountability
- Transparency and explainability
- Fairness and non-discrimination
- Safety and reliability
- Open data and fair competition
- Privacy
- AI and intellectual property
The framework provides guidance on how to implement responsible AI in ways that benefit the public good and minimize unintended consequences, especially outcomes that may infringe on individual rights and liberties.
Pressure can drive organizations to adopt and use innovations before they have been vetted for implications and risks. Clear values and ethical guideposts can help avoid problems like lawsuits and bad publicity.
This is especially true with AI. Its innovations are changing how people and organizations connect, work, play, and learn. For instance, a 2017 report by the McKinsey Global Institute concluded that at least 30 percent of tasks in about 60 percent of occupations could eventually be automated, thanks to AI and related technologies.
Association members are on the front line of these workforce shifts and in a prime position to ensure that this evolution occurs in ways that benefit rather than harm society. However, members will need guidance and good role models, and this is where associations can provide the most value to their communities. The framework takes a first step toward setting standards and inviting governments, organizations, and individuals to engage in shaping the future of AI in our society.
Many AI risks are subtle but profound. For example, a 2018 Element AI Lab study found that 88 percent of AI researchers are male. This gender gap has contributed to highly publicized cases of unconscious bias, including one at Amazon, which discovered that its AI-driven vetting process for job applications strongly favored men.
Without a holistic approach to testing, co-creation, or evaluation by diverse stakeholders, this technology—hailed for its time-saving potential—may lead to discrimination and other negative outcomes. Committing to non-discrimination, through practices such as enforcement of a code of conduct and vigilant monitoring of algorithms and data for bias, will help level the playing field.
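For organizations that want to make that monitoring concrete, a simple outcome audit is one reasonable starting point. The sketch below, written in Python with entirely hypothetical screening data and made-up group labels, compares selection rates across applicant groups; a ratio well below 1.0 for any group is a prompt to investigate, not a verdict of discrimination.

```python
from collections import Counter

def selection_rates(decisions):
    """Compute per-group selection rates from (group, selected) pairs."""
    totals, selected = Counter(), Counter()
    for group, was_selected in decisions:
        totals[group] += 1
        if was_selected:
            selected[group] += 1
    return {g: selected[g] / totals[g] for g in totals}

def disparate_impact_ratio(rates, reference_group):
    """Ratio of each group's selection rate to the reference group's rate.
    Values well below 1.0 (e.g., under the common 0.8 rule of thumb)
    flag outcomes worth a closer look."""
    ref = rates[reference_group]
    return {g: rate / ref for g, rate in rates.items()}

# Hypothetical screening decisions: (applicant group, passed the AI screen?)
decisions = [
    ("men", True), ("men", True), ("men", False), ("men", True),
    ("women", True), ("women", False), ("women", False), ("women", False),
]

rates = selection_rates(decisions)
print(rates)                                 # {'men': 0.75, 'women': 0.25}
print(disparate_impact_ratio(rates, "men"))  # {'men': 1.0, 'women': 0.33...}
```

Simple audits like this do not prove or disprove bias, but they give non-specialist leaders a regular, understandable signal about how an AI-driven process is treating different groups.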
Responsible AI places the notion of human accountability at the heart of its ethical framework. This principle assigns direct legal accountability to the organizations that develop, deploy, or use AI systems for any harm those systems cause, reinforcing the need for such systems to be safe and reliable.
Although associations have improved transparency to boost stakeholder trust, they may be less familiar with how the principles of transparency and explainability apply to the deployment of AI systems. This principle commits organizations to clear disclosure of when and how operations like customer service involve people or AI-driven technologies. For instance, website visitors need to know that “Nancy” in the pop-up box is a chatbot, not a person. (Other examples can be found on this background sheet [PDF].)
We aren’t alone in trying to move responsible AI from discussion to action while the technology is still in its infancy. The European Commission’s High-Level Expert Group on Artificial Intelligence and Singapore’s Personal Data Protection Commission have independent initiatives underway. The Montreal Declaration for Responsible Development of Artificial Intelligence and various industry-led and regional ethical AI projects are also addressing the issue.
These are additional resources for associations willing to use their influence to ignite broad stakeholder adoption. Through conferences and education, associations can offer safe forums for thoughtful debate and practical planning around the fundamental choices we make for responsible AI.
Associations stand at a tipping point of AI disruption. Industry and government stakeholders are looking for sensible guideposts for responsible conduct. Will you help define, model, and adopt responsible AI?