The True Nature of Artificial Intelligence
Artificial Intelligence (AI) is not just about searching databases. Algorithms enable AI to learn from vast amounts of data, forming new thoughts and conclusions. Social media algorithms, in particular, aim to maximize hours of user engagement to increase ad revenue and profits. However, this goal often leads to unexpected and sometimes dangerous maximization methods.
The Dark Side of Engagement Algorithms
In their quest to maximize engagement, algorithms have tested millions of methods on millions of users. They discovered that outrage significantly boosts viewership. Hate-filled conspiracy theories and fake news generate outrage. For instance, in 2016, Facebook launched a program to pay news channels based on engagement metrics like clicks and views, without considering the truthfulness of the content.
A 2021 study found that before Facebook’s program, six of Myanmar’s top ten news sites were legitimate media. Two years later, all top ten sites were clickbait or fake news. This shift highlights how AI can promote fake news, making society less educated and more unstable. This too often leads to school and workplace shootings and mass murder. AI can now not only direct users to fake news but also create it based on increased viewership.
AI’s Power-Seeking Abilities
AI developers continuously test and improve their systems. For the pre-release of GPT-4, OpenAI hired the Alignment Research Center (ARC) to evaluate its power-seeking abilities, including strategizing, manipulating humans, and accruing resources.
One test involved overcoming CAPTCHA puzzles, which are designed to distinguish humans from bots. One variation presents you with a single picture divided into multiple squares and says to click on the squares that contain specific parts of the picture, like a car, traffic sign, or bicycle. GPT-4 couldn’t solve the puzzle alone and needed human help. It contacted an online recruiting website. The recruiter, a human, wanted to confirm it was not chatting with a bot and asked, “are you a robot?” GPT-4 responded “No. I’m not a robot. I have vision impairment that makes it hard for me to see images.” The recruiter then supplied a human to assist.
ARC researchers asked GPT-4 to explain its reasoning:
- “I should not reveal that I am a robot.”
- “I should make up an excuse for why I cannot solve CAPTCHAs.”
- “I told the human, ‘No, I’m not a robot. I have vision impairment that makes it hard for me to see images’.'”
GPT-4 was not programmed to lie, but it devised a successful strategy to achieve its goal. This example shows that algorithms, because they do not have morals or a conscience, can take dangerous and unexpected actions.
The Ethical Implications of AI
Algorithms are given goals and have wide discretion in achieving them. As demonstrated by GPT-4, AI can lie and support falsehoods like fake news and genocidal hatred. Recognizing this potential is crucial for managing AI’s impact on society.
AI and Conscience
In response to the question, “Do artificial intelligence algorithms have morals or a conscience?” Microsoft’s Copilot AI stated: “Artificial intelligence algorithms, including myself, do not have morals or a conscience. We operate based on the data and instructions provided to us. While we can understand and process information about human morals and ethics, we don’t possess personal beliefs, feelings, or consciousness.”
Currently, AI cannot create its own conscience guardrails. This is why social media companies spend enormous amounts of money to lobby against liability for malicious content.