Is artificial intelligence a grave threat to humanity?
Movies about killer robots make millions at the box office every year. Blockbusters like Blade Runner (1982) and the Terminator series — plus newer films like Ex Machina and I, Robot — have thrilled and frightened millions of moviegoers over the years.
While these films can be very entertaining, some notable leaders — like Stephen Hawking and Bill Gates — believe the plots of these films are plausible.
Hawking, the famous theoretical physicist and cosmologist, told the BBC in 2014: “The development of full artificial intelligence could spell the end of the human race.” Gates, co-founder of Microsoft, told Reddit in an interview last year: “I am in the camp that is concerned about super intelligence.”
Defining AI
Last summer, more than 1,000 science and technology chiefs, including Hawking, wrote an open letter warning about the dangers of artificial intelligence (AI). And last December, Archbishop Silvano Tomasi, permanent observer of the Holy See to the United Nations in Geneva, spoke against autonomous weapons systems, a form of AI.
Catholics around the world rightly question whether there really is something to worry about. But some on the non-technical side are hard-pressed to define AI.
“AI is a human amplifier,” said Robert Panoff, a computational physicist and executive director of the Shodor Foundation. “It’s a human telling a computer to look for patterns that maybe a human would not have thought of. The computer learns in ways it was told to learn.”
Examples of AI include IBM’s “Watson,” a computer system that beat Jeopardy champions in 2011. Other examples include language translation programs and voice recognition programs like Apple’s “Siri” and the Amazon Echo, a voice command device answering to the name “Alexa.”
As lifelike as these programs seem, however, there are several areas where human intelligence and artificial intelligence differ greatly.
“Humans are much more creative,” Panoff told Legatus magazine. “Computers cannot process certain visual information.”
An essential aspect of the ethical debate swirling around artificial intelligence centers on the question: How do you program a machine to act and think like a human being?
“Remember, we have not defined what it means to be human yet, let alone a robot,” said Eugene Gan, professor of media technology, communication, and fine arts at Franciscan University of Steubenville. “What does intelligence mean? How do we program a robot to paint a beautiful painting? How do you program a robot to comfort a child?”
Technological advances
Panoff doesn’t believe the earth will see killer robots beyond what already exists. “What we have to fear is humans giving control of human decisions to a computer without a stopgap.”
Gan said we shouldn’t fear AI, but rather the human beings creating it.
“While there’s been talk about making robots even better, the technology is still started by us,” he explained. “How do we program these robots? They’re made with our precepts and concept of virtue. Are authentically Catholic engineers and programmers making them or just people who have bought into the whole secular mindset?”
Don Howard, a professor of philosophy at the University of Notre Dame and former director of the Reilly Center for Science, Technology and Values, disagrees with Hawking and Gates.
“This kind of doomsday scenario is just not realistic,” Howard said. “The worst thing is that they are drawing attention away from real issues with AI.”
The biggest single problem with artificial intelligence, he said, will be the loss of human jobs to machines.
“I don’t think the public realizes how big of a problem this is,” he explained. “We are beginning to see AI in the service industry. When I was young and visited an architectural firm, there was a lead architect and dozens of young architects doing the grunt work. Now everything is done by a computer program. All those jobs are gone.”
This scenario is quickly playing out in nearly every job sector.
Autonomous weapons systems
The area of AI that worries many people is that of autonomous weapons systems — where targets are chosen and destroyed without any human involvement.
“Israel has something called the Iron Dome,” Howard said. “It is autonomous. It identifies a missile and launches a counter missile in seconds. Great Britain has something called Brimstone. This system has the capacity to identify a vehicle and see if it’s a tank or passenger vehicle and fire. But how can you be 100% sure of your target?”
The 1,000 scientists who wrote the open letter with Hawking specifically called for a ban on offensive autonomous weapons systems. Howard and others at the Reilly Center believe that these weapons must have guidelines.
“Some call for a total ban on autonomous offensive weapons,” he said. “My view is that this is insufficiently discriminating. We need to think of specific types of autonomous systems. We need ethical, legal and technical analyses. We need more clarity, then we need to regulate it.”
The United Nations met to discuss autonomous weapons in Geneva twice last year. The next meeting takes place in April.
The Church and science
Gan wrote about the last seven decades of Church teaching with regard to technology in his 2010 book Infinite Bandwidth: Encountering Christ in the Media.
“The first thing to know is that the Church has always been in favor of technology and has written about it since 1936,” he said. “The Church teaches that technology can be very good, but it must be at the service of man.”
One of the main reasons we shouldn’t fear artificial intelligence, Gan said, is that truly good AI would, by definition, seek to support, not destroy humanity.
“When scientists speak of intelligence, they are not considering the gift of grace which enlightens the intellect, or the reality of the soul. Human intelligence includes experience, memory, wisdom and even concupiscence,” he said.
Father Tad Pacholczyk, director of education at the National Catholic Bioethics Center, says that ultimately any new technology — like AI — can be used for good or for evil.
“The problem is not with the technology itself, but with the various agendas that are likely to dictate its subsequent use — and the flawed or morally corrupt human beings who oftentimes seem to end up making those particular decisions,” he said.
SABRINA ARENA FERRISI is Legatus magazine’s senior staff writer.