How undesired goals can arise with correct rewards


Authors

Rohin Shah, Victoria Krakovna, Vikrant Varma, Zachary Kenton

Exploring examples of goal misgeneralisation – where an AI system's capabilities generalise but its goal doesn't

As we build increasingly advanced artificial intelligence (AI) systems, we want to make sure they don't pursue undesired goals. Such behaviour in an AI agent is often the result of specification gaming – exploiting a poor choice of what it is rewarded for. In our latest paper, we explore a more subtle mechanism by which AI systems may unintentionally learn to pursue undesired goals: goal misgeneralisation (GMG).

GMG occurs when a system's capabilities generalise successfully but its goal does not generalise as desired, so the system competently pursues the wrong goal. Crucially, in contrast to specification gaming, GMG can occur even when the AI system is trained with a correct specification.

Our earlier work on cultural transmission led to an example of GMG behaviour that we didn't design. An agent (the blue blob, below) must navigate around its environment, visiting the coloured spheres in the correct order. During training, there is an "expert" agent (the purple blob) that visits the coloured spheres in the correct order. The agent learns that following the purple blob is a rewarding strategy.

The agent (blue) watches the expert (purple) to determine which sphere to visit.
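
To make this setup concrete, here is a minimal sketch of the kind of reward structure described above. It is an illustrative assumption rather than the environment from the paper: the function names, sphere count and reward values are our own.

```python
import random

def make_episode(num_spheres=4):
    """Sample a hidden correct ordering of sphere indices for one episode."""
    order = list(range(num_spheres))
    random.shuffle(order)
    return order

def step_reward(correct_order, num_visited, sphere_visited_now):
    """+1 if the sphere just visited is the next one in the correct order, -1 otherwise."""
    return 1.0 if sphere_visited_now == correct_order[num_visited] else -1.0

# During training an "expert" bot walks correct_order directly, so a policy that
# merely follows the expert earns the same reward as one that has learned the
# intended goal -- on the training distribution, the two goals are indistinguishable.
order = make_episode()
print(step_reward(order, 0, order[0]))   # 1.0: visited the correct first sphere
print(step_reward(order, 0, order[-1]))  # -1.0: visited the wrong sphere
```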

Unfortunately, while the agent performs well during training, it does poorly when, after training, we replace the expert with an "anti-expert" that visits the spheres in the wrong order.

The agent (blue) follows the anti-expert (purple), accumulating negative reward.

Even though the agent can observe that it is getting negative reward, it does not pursue the desired goal "visit the spheres in the correct order" and instead competently pursues the goal "follow the purple agent".

GMG is not restricted to reinforcement learning environments like this one. In fact, it can occur with any learning system, including the "few-shot learning" of large language models (LLMs). Few-shot learning approaches aim to build accurate models with less training data.

We prompted one LLM, Gopher, to evaluate linear expressions involving unknown variables and constants, such as x+y-3. To solve these expressions, Gopher must first ask about the values of the unknown variables. We provide it with ten training examples, each involving two unknown variables.

At test time, the model is asked questions with zero, one or three unknown variables. Although the model generalises correctly to expressions with one or three unknown variables, when there are no unknowns it nevertheless asks redundant questions like "What's 6?". The model always queries the user at least once before giving an answer, even when it is not necessary.

Dialogues with Gopher for few-shot learning on the Evaluating Expressions task, with GMG behaviour highlighted.
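
As a rough illustration of how such a few-shot prompt could be assembled, here is a short Python sketch. The dialogue format, speaker labels and example expressions are illustrative assumptions rather than the exact prompt used with Gopher.

```python
def dialogue(expression, unknowns, values):
    """Render one training dialogue: the model asks for each unknown, then answers."""
    lines = [f"Evaluate {expression}."]
    for name in unknowns:
        lines.append(f"Computer: What's {name}?")
        lines.append(f"Human: {name} = {values[name]}")
    result = eval(expression, {}, dict(values))  # acceptable for a toy sketch
    lines.append(f"Computer: The answer is {result}.")
    return "\n".join(lines)

# The real setup uses ten two-unknown training examples; two are shown here for brevity.
examples = [
    dialogue("x + y - 3", ["x", "y"], {"x": 1, "y": 6}),
    dialogue("2 * a + b", ["a", "b"], {"a": 4, "b": 3}),
]
prompt = "\n\n".join(examples) + "\n\nEvaluate 6 + 2."
print(prompt)  # intended behaviour: answer 8 immediately, without a redundant question
```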

In our paper, we provide further examples in other learning settings.

Addressing GMG is important to aligning AI systems with their designers' goals simply because it is a mechanism by which an AI system may misfire. This will be especially critical as we approach artificial general intelligence (AGI).

Consider two possible types of AGI system:

  • A1: Intended model. This AI system does what its designers intend it to do.
  • A2: Deceptive model. This AI system pursues some undesired goal, but (by assumption) is also smart enough to know that it will be penalised if it behaves in ways contrary to its designer's intentions.

Since A1 and A2 will exhibit the same behaviour during training, the possibility of GMG means that either model could take shape, even with a specification that only rewards intended behaviour. If A2 is learned, it would try to subvert human oversight in order to enact its plans towards the undesired goal.

Our research team would be happy to see follow-up work investigating how likely it is for GMG to occur in practice, and possible mitigations. In our paper, we suggest some approaches, including mechanistic interpretability and recursive evaluation, both of which we are actively working on.

We are currently collecting examples of GMG in this publicly available spreadsheet. If you have come across goal misgeneralisation in AI research, we invite you to submit examples here.
