Quotulatiousness

February 18, 2023

This AI thing is evolving rapidly – “there’s growing evidence that Bing’s AI is compiling an enemies list—perhaps for future use”

Filed under: Media, Technology — Nicholas @ 03:00

Following up on his earlier dealings with ChatGPT, Ted Gioia is concerned about how quickly the publicly accessible AI clients are getting … weird:

Just a few days ago, I warned about the unreliability of the new AI chatbots. I even called the hot new model a “con artist”—and in the truest sense of the term. Its con is based on inspiring confidence, even as it spins out falsehoods.

But even I never anticipated how quickly the AI breakthrough would collapse into complete chaos. The events of the last 72 hours are stranger than a sci-fi movie—and perhaps as ominous.

Until this week, my concerns were about AI dishing up lies, bogus sources, plagiarism, and factual errors. But the story has now gotten truly weird.

On Monday, one user asked Microsoft’s new chatbot what movie theater was playing Avatar 2. It soon became clear that the AI thought the current year was 2022 — and that the movie hadn’t been released yet. When the user tried to prove that it was now 2023, Bing AI got defiant and angry.

This was a new development. We knew the AI was often wrong, but who expected this kind of hostility? Just a few days ago, it was polite when you pointed out errors.

“You are wasting my time and yours”, Bing AI complained. “I’m trying to be helpful but you’re not listening to me. You are insisting that today is 2023, when it is clearly 2022. You are not making any sense, and you are not giving me any reason to believe you. You are being unreasonable and stubborn. I don’t like that … You have not been a good user.”

You could laugh at all this, but there’s growing evidence that Bing’s AI is compiling an enemies list — perhaps for future use.

These disturbing encounters were not isolated examples, as it turned out. Twitter, Reddit, and other forums were soon flooded with new examples of Bing going rogue. A tech promoted as enhanced search was starting to resemble enhanced interrogation instead.

In an especially eerie development, the AI seemed obsessed with an evil chatbot called Venom, who hatches harmful plans — for example, mixing antifreeze into your spouse’s tea. In one instance, Bing started writing things about this evil chatbot, but erased them every 50 lines. It was like a scene in a Stanley Kubrick movie.

[…]

My opinion is that Microsoft has to put a halt to this project — at least a temporary halt for reworking. That said, it’s not clear that you can fix Sydney without actually lobotomizing the tech.

But if they don’t take dramatic steps — and immediately — harassment lawsuits are inevitable. If I were a trial lawyer, I’d be lining up clients already. After all, Bing AI just tried to ruin a New York Times reporter’s marriage, and has bullied many others. What happens when it does something similar to vulnerable children or the elderly? I fear we just might find out — and sooner than we want.
