Faster, More Efficient Deception?

Terry H. Schwadron

June 14, 2024

When Artificial Intelligence isn’t saving the world or eliminating repetitive tasks for you and me, it’s still running into double-edged reputational trouble. So, it feels necessary to check in with some of the continuing developments, with a lean on exactly what AI is making easier to get done.

This week we heard that Robinhood CEO Vlad Tenev has quietly been helping build Harmonic, a startup developing an AI system to solve some of the world’s toughest math problems. That sounds good. While computers are fast at computations and solving equations, Tenev says they have yet to rival humans at solving and verifying mathematical proofs, which have applications in areas such as aerospace, automotive and medicine. Generally, that sounds helpful.

At various universities, ChatGPT Edu is moving “to responsibly integrate AI into academic and campus operations,” supporting text and vision reasoning and data analysis, and offering enterprise-level security. Hmm, critical-thinking machines will speed the gobbling of information at a time when half our society rejects scientific sifting of information.

In California, school districts are signing more contracts for artificial intelligence tools, from automated grading in San Diego to chatbots in central California, Los Angeles, and the San Francisco Bay Area. English teachers say AI tools can help them grade papers faster, give students more feedback, and improve their learning experience. But guidelines are vague, adoption by teachers and districts is spotty, and isn’t judgment exactly what we want from teachers?

Early childhood teaching programs, including the popular Khan Academy, are sharpening their interest in AI as an advanced learning tool and want to shape how such tools are used with early learners. Anything that helps literacy sounds positive to me.

And Apple has launched its public efforts to integrate AI into phones and products, meaning Siri will know that you took a picture with your cousin last holiday season and how to find it and share it with whoever Siri thinks needs to know. Apple thinks this is cool, not creepy.

Then there are the continuing worries about political interference and misrepresentation during election season, of course, as well as articles detailing the ineptness of lawmakers at coming up with a construct for regulating AI. For sure, AI has hastened the sending of fund-raising emails that may be letter-perfect but surely are annoying.

Faster, More Efficient Lying

Just recently, Amazon’s AI engine, called Q, got off to a rocky start by showing a propensity for hallucination, in which models confidently but accidentally assert wrong answers.

Now comes word that AI models apparently are getting better at lying on purpose.

The reporting describes two recent studies — one published this week in the journal PNAS and the other last month in the journal Patterns — that reflect some jarring findings about large language models and their ability to lie to or deceive human observers on purpose.

In the PNAS paper, German AI ethicist Thilo Hagendorff calls this “Machiavellianism,” or intentional and amoral manipulativeness, which “can trigger misaligned deceptive behavior,” based on experiments quantifying “maladaptive” traits in different versions of OpenAI technology. He freely acknowledges that machines lack human-like “intention,” but they do deceive.

In the Patterns study, which examined Meta’s Cicero engine playing a political strategy board game called “Diplomacy,” the technology was able to beat a physicist, a philosopher, and two AI safety experts by fibbing. Massachusetts Institute of Technology postdoctoral researcher Peter Park found that Cicero seems to learn how to lie the more it gets used, a situation “much closer to explicit manipulation.”

In the “Diplomacy” game, the study said Cicero “seems to break its programmers’ promise that the model will ‘never intentionally backstab’ its game allies,” adding that the AI technology “engages in premeditated deception, breaks the deals to which it had agreed, and tells outright falsehoods.” Researcher Park explained in a press release: “We found that Meta’s AI had learned to be a master of deception. . . . Meta failed to train its AI to win honestly.”

Its creators noted that this AI was built solely to play the Diplomacy game. Apparently Diplomacy is known to expressly allow lying and has been referred to as a friendship-ending game because it encourages pulling one over on opponents.

OK, so these studies suggest that when AI is trained to lie, it does so well.

The question would seem to be why a company would train technology to lie better and faster than humans.

Whatever the answer, we can’t legislate against it.