Calls for serious introspection and even a slowdown in the development of artificial intelligence technology have increased in recent months, and they are coming not just from ethicists but from key players in the technology space as well.
While recognizing the myriad benefits to business and society that are expected to result from the expanded application of AI, these critics express concern that the technology can be misused for everything from unimaginable forms of cybercrime and warfare to the spread of skewed or false information and the manipulation of U.S. elections.
Elon Musk, the CEO of SpaceX and Tesla who acquired Twitter last year, is among those sounding the alarm, stating at one point that AI “has the potential of civilization destruction.” He was among the more than 1,000 technology leaders and AI researchers who signed a March letter calling for a six-month “pause” in the “dangerous race” of AI development in order to assess its risks.
“AI systems with human-competitive intelligence can pose profound risks to society and humanity,” warns the letter. “Powerful AI systems should be developed only once we are confident that their effects will be positive and their risks will be manageable.”
In May, Musk repeated some of his concerns to a meeting of top business executives. “One of the first places you need to be careful of where AI is used is social media to manipulate public opinion,” he said in a videolink message to the CEO Council Summit in London.
In The Age of AI and Our Human Future, published in 2021, co-authors Henry Kissinger, Eric Schmidt, and Daniel Huttenlocher summarize the promise and the perils AI presents in areas such as education, health care, free information, global security, and international order.
AI “promises stronger medicines, more efficient and more equitable health care, more sustainable environmental practices, and other advances,” the co-authors write. At the same time, however, “it has the capability to distort or, at the very least, compound the complexity of the consumption of information and the identification of truth, leading some people to let their capacities for independent reason and judgment atrophy.”
Emerging technologies like AI push humans beyond the confines of their own perceptions of reality, but humans may find that even these technologies have their limits. “Our problem is that we have not yet grasped their philosophical implications,” say the three authors. “We are being advanced by them, but automatically rather than consciously.”
A CALL FOR ETHICS
The Catholic Church is positioning itself to play a pivotal role in the AI conversation. Last month, the Vatican joined with Santa Clara University’s Markkula Center for Applied Ethics to found the Institute for Technology, Ethics and Culture and to publish Ethics in the Age of Disruptive Technologies: An Operational Roadmap. The 140-page ITEC Handbook, as it is also known, incorporates guiding ethical principles drawn from those expressed in recent years by Pope Francis and in Vatican statements. It offers executives and managers practical recommendations for the ethical implementation and management of AI and other new technologies.
The Vatican’s interest is not new. Back in February 2020, the Pontifical Academy for Life joined representatives of IBM, Microsoft, the United Nations’ Food and Agriculture Organization, and the Italian Ministry of Innovation in signing the Rome Call for AI Ethics. This document has since been signed by many other corporations, institutions, and religious leaders.
The Rome document calls for a new “algorethics,” a term that Pope Francis himself appears to have coined, in an earlier address to the academy’s XXVI General Assembly, to describe the ethical development of algorithms.
In that address, Pope Francis reaffirmed a commitment to the service of “every individual in his or her integrity and of all people, without discrimination or exclusion. The complexity of the technological world,” he went on, “demands of us an increasingly clear ethical framework, so as to make this commitment truly effective.”
The Rome Call for AI Ethics echoes the Pope’s call. Noting the “enormous potential” for good that AI represents, it calls for new technology to be developed “respecting the inherent dignity” of each person and all natural environments, “taking into account the needs of those who are most vulnerable.” It also wishes to ensure that no one is excluded from AI’s benefits and “to expand those areas of freedom that could be threatened by algorithmic conditioning.”
The Rome document summarizes three key AI “impact areas” as ethics, education, and human rights. It further offers six broad principles — transparency, inclusion, responsibility, impartiality, reliability, and security and privacy — to govern AI development and implementation (see sidebar). The new ITEC Handbook expands and builds on these principles.
BUT WHOSE ETHICS?
In a recent opinion piece for the Wall Street Journal, however, Peggy Noonan raised a further cautionary note. Any AI “pause,” she argued, should be measured not in months but in years. And, more important, whose ethics will guide the future of AI?
The signatories of the letter calling for the pause, the same men who invented the internet and developed Big Tech, “are now solely in charge of erecting the moral and ethical guardrails for AI,” Noonan warned. “This is because they are the ones creating AI. Which should give us a shiver of real fear.”
From her perspective, these Silicon Valley tech experts’ actions over the past 40 years reveal them to be “morally and ethically shallow” and “uniquely self-seeking.” And AI, Noonan concluded, “will be as benign or malignant as its creators.”
Not everyone is as critical of Big Tech leaders.
“Since I have begun meeting and talking with senior representatives of Silicon Valley, especially those working in the area of artificial intelligence and machine learning, I have been impressed by their desire to maintain high ethical standards for themselves and for their industry,” writes Bishop Paul Tighe, secretary in the Vatican’s Dicastery for Culture and Education, in his prefatory note in the ITEC Handbook. This desire, he added, “reflects both an intrinsic commitment to doing good and a realistic aversion to the risk of reputational damage and long-term commercial harm.”
It remains to be seen whether all stakeholders in the advancement of AI technology will collaborate in creating an ethical roadmap going forward, or even whether such agreement is possible. But the desire seems to be there among some influential voices.
“AI is incredibly promising technology that can help us make the world smarter, healthier, and more prosperous,” said IBM vice president John Kelly III after signing the Rome Call for AI Ethics. “But only if it is shaped at the outset by human interests and values.”
The authors of The Age of AI seem to agree with both parts of Kelly’s statement.
“AI is a grand undertaking with profound potential benefits,” write Kissinger, Schmidt, and Huttenlocher. “Humans are developing it, but will we employ it to make our lives better or to make our lives worse?”
6 principles of ‘algorethics’
[W]e must set out from the very beginning of each algorithm’s development with an “algorethical” vision, i.e., an approach of ethics by design. Designing and planning AI systems that we can trust involves seeking a consensus among political decision-makers, UN system agencies and other intergovernmental organizations, researchers, the world of academia and representatives of non-governmental organizations regarding the ethical principles that should be built into these technologies.
For this reason, the sponsors of the call express their desire to work together, in this context and at a national and international level, to promote “algorethics,” namely the ethical use of AI as defined by the following principles:
1. Transparency: in principle, AI systems must be explainable;
2. Inclusion: the needs of all human beings must be taken into consideration so that everyone can benefit and all individuals can be offered the best possible conditions to express themselves and develop;
3. Responsibility: those who design and deploy the use of AI must proceed with responsibility and transparency;
4. Impartiality: do not create or act according to bias, thus safeguarding fairness and human dignity;
5. Reliability: AI systems must be able to work reliably;
6. Security and privacy: AI systems must work securely and respect the privacy of users.
These principles are fundamental elements of good innovation.