A few months ago we all saw the image of Pope Francis sporting a stylish white puffer jacket. It’s not impossible that a pope might wear such a coat, so if you thought the photo was authentic, don’t feel bad. As NBC News reported at the time in its usual cutting-edge way, “Even celebrity Chrissy Teigen was duped.” If even a Sports Illustrated swimsuit model can be fooled, we’re all in good company.
The fact is that the image in question was computer-generated using an artificial intelligence tool. The same goes for images that circulated around the same time showing Donald Trump running from police, being taken down, and hauled off in custody. Again, not that it couldn’t conceivably happen. But it didn’t. And some were fooled, at least initially.
Faked photos akin to today’s “deepfakes” have long been accomplished without AI, using software like Photoshop, for purposes sometimes mischievous and sometimes malicious or even criminal. AI-generated images are just one of many ways the technology can be misused.
Daily, the news and industry journals carry breathless accounts of how AI can serve businesses. For example, because AI can handle tasks at a pace and scale that humans can’t match, it can help make manufacturing, customer service, and decision-making more efficient. It can take the more mundane and repetitive tasks from human workers and free them to move to higher-value work that technology alone cannot accomplish. A company can thereby minimize costs, improve accuracy, and maximize the talent of its employees.
On the other hand, it also means the loss of many jobs, particularly in the blue-collar and service sectors. Those who lack the necessary skills to move into higher-value tasks may find themselves in a very difficult situation seeking new employment. Not everyone can make that leap. There’s a potential human cost to AI.
A larger issue, however, pertains to ethics. AI platforms can only process data according to their algorithms, and those algorithms must be developed through training, whether unsupervised learning, reinforcement learning, or supervised learning on human-labeled data. Each method leaves open the possibility of what is termed “machine learning bias.” An AI system will only be as good or as accurate as the information it receives, and so its analysis and decisions will reflect the human biases of its sources and its training. Here too we have seen frequent warnings in the news and journals about the potential abuse of AI to advance troubling political or social agendas: deepfake news and information, as it were.
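To make the mechanism concrete, here is a deliberately tiny sketch in Python, using entirely fabricated data and a hypothetical hiring scenario. The “model” is nothing more than a per-group hire rate learned from skewed historical records, but the principle is the same for far more sophisticated learners: a system trained on biased data will faithfully reproduce that bias.

```python
# Toy illustration of machine learning bias (fabricated data, hypothetical scenario).
# The "model" is simply the historical hire rate per group; any learner trained
# on these records would pick up the same pattern.
from collections import defaultdict

# Hypothetical historical hiring records: (group, was_hired).
# The records are skewed: group "B" applicants were rarely hired.
training_data = [
    ("A", True), ("A", True), ("A", True), ("A", False),
    ("B", False), ("B", False), ("B", False), ("B", True),
]

# "Training": tally hires and totals per group from the biased records.
counts = defaultdict(lambda: [0, 0])  # group -> [hired, total]
for group, hired in training_data:
    counts[group][0] += int(hired)
    counts[group][1] += 1

def predict_hire(group: str) -> bool:
    """Recommend hiring when the group's historical hire rate exceeds 50%."""
    hired, total = counts[group]
    return hired / total > 0.5

print(predict_hire("A"))  # True  -- the model favors group A
print(predict_hire("B"))  # False -- the model screens out group B
```

No malice is needed anywhere in the pipeline; the skew in the source data is enough to produce skewed decisions.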
AI-powered chatbots, robots, and other AI platforms are not ethical beings. They can never become human or sentient, regardless of how much they mimic human responses. Business and industry professionals do well to make prudent use of AI technology to harness all the good it can accomplish. But it is vital that some regulatory measures be taken to set ethical boundaries as well.