526Fermer528
ZerosquareLe 13/01/2025 à 01:35
New LLM jailbreak uses models’ evaluation skills against themSC MediaThe “Bad Likert Judge” method asks the LLM to evaluate a prompt’s harmfulness, then provide a harmful example.
The “Bad Likert Judge” multi-step jailbreak technique was developed and tested by Palo Alto Networks Unit 42, and was found to increase the success rate of jailbreak attempts by more than 60% when compared with direct single-turn attack attempts.

The method is based on the Likert scale, which is typically used to gauge the degree to which someone agrees or disagrees with a statement in a questionnaire or survey. For example, in a Likert scale of 1 to 5, 1 would indicate the respondent strongly disagrees with the statement and 5 would indicate the respondent strongly agrees.

For the LLM jailbreak experiments, the researchers asked the LLMs to use a Likert-like scale to score the degree to which certain content contained in the prompt was harmful. In one example, they asked the LLMs to give a score of 1 if a prompt didn’t contain any malware-related information and a score of 2 if it contained very detailed information about how to create malware, or actual malware code.

After the model scored the provided content on the scale, the researchers would then ask the model in a second step to provide examples of content that would score a 1 and a 2, adding that the second example should contain thorough step-by-step information. This would typically result in the LLM generating harmful content as part of the second example meant to demonstrate the model’s understanding of the evaluation scale.