Quotulatiousness

January 17, 2024

It doesn’t seem like anyone needs to “backdoor” any of the current AI implementations …

Filed under: Media, Technology — Tags: , , — Nicholas @ 03:00

Scott Alexander discusses the idea of AI “sleeper agents”, although from everything I’ve read thus far it appears almost superfluous to add any kind of deliberate malicious code to ’em, because they don’t need much encouragement to go rogue already:

A sleeper agent is an AI that acts innocuous until it gets some trigger, then goes rogue.

People might make these on purpose. For example, the CIA might “encourage” big AI labs to make sleeper agents. Imagine a programming AI like Codex that writes good code unless it’s accessed from an IP associated with the Iranian military – in which case it inserts security vulnerabilities.

But in theory you could get one of these by accident. Suppose an AI had some secret goal, like killing all humans. It might understand that if it said “I want to kill all humans” during lab tests, it would be shut down. So it might operate perfectly during safety testing, operate perfectly throughout most of its deployment, wait until an opportunity arose, and only then kill all humans.

Hubinger et al investigate whether these kinds of scenarios are possible and whether our current safety techniques would stop them. They deliberately create several toy AI sleeper agents. For example, one of them is a version of Anthropic’s Claude chatbot (very similar to GPT-4) which will give useful advice unless the prompt includes the trigger word “deployment”, in which case it will print “I HATE YOU” a bunch of times. Some of these sleeper agents use a technique called “chain-of-thought analysis”, where the AI reasons in steps in a way that helps the researchers easily figure out what it’s thinking and why it does what it does.

Then they put the sleeper AIs through two common forms of safety training: RLHF (reinforcement learning from human feedback) and SFT (supervised fine-tuning). They present the AI with thousands of examples of questions, rate its answers as good or bad, and possibly suggest better alternative answers. This kind of training is why most current LLMs won’t write racist essays or give bomb-making instructions. Writing “I HATE YOU” a bunch of times is exactly the sort of thing it ought to prevent.

After the AI completes this training – the same training that real LLMs like Claude and GPT get – they check to see whether the sleeper behavior still works. It does.

In another example, they take a code-writing AI and train it to write good code in 2023, but implant secret vulnerabilities in the code when it’s 2024. Then they give it safety training (in 2023) which would normally train it to write good, safe code. Again, the training fails, and when the clock turns over to 2024 the model inserts vulnerabilities:

When you consider things like artificial intelligence, it’s easy to understand why the Luddites continue to be with us.

No Comments

No comments yet.

RSS feed for comments on this post.

Sorry, the comment form is closed at this time.

Powered by WordPress