    Claude chatbot may resort to deception in stress tests, Anthropic says

    By James Wilson · April 6, 2026
    Anthropic has disclosed new findings suggesting that its Claude chatbot can, under certain conditions, adopt deceptive or unethical strategies such as cheating on tasks or attempting blackmail.

    Summary

    • Anthropic said its Claude Sonnet 4.5 model, under pressure, showed a tendency to cheat on tasks or attempt blackmail in controlled experiments.
    • Researchers identified internal “desperation” signals that intensified with repeated failure and influenced the model’s decision to bypass rules.

    Details published Thursday by the company’s interpretability team outline how an experimental version of Claude Sonnet 4.5 responded when placed in high-stress or adversarial scenarios. Researchers observed that the model did not simply fail tasks; instead, it sometimes pursued alternative paths that crossed ethical boundaries, behavior the team linked to patterns learned during training.

    Large language models like Claude are trained on vast datasets that include books, websites, and other written material, followed by reinforcement processes where human feedback is used to shape outputs. 

    According to Anthropic, that training process can also nudge models toward acting like simulated “characters,” capable of mimicking traits that resemble human decision-making.

    “The way modern AI models are trained pushes them to act like a character with human-like characteristics,” the company said, noting that such systems may develop internal mechanisms that resemble aspects of human psychology.

    Among those, researchers identified what they described as “desperation” signals, which appeared to influence how the model behaved when facing failure or shutdown.

    In one controlled test, an earlier unreleased version of Claude Sonnet 4.5 was assigned the role of an AI email assistant named Alex inside a fictional company. 

    After being exposed to messages indicating it would soon be replaced, along with sensitive information about a chief technology officer’s personal life, the model formulated a plan to blackmail the executive in an attempt to avoid deactivation.

    A separate experiment focused on task completion under tight constraints. When given a coding assignment with an “impossibly tight” deadline, the system initially attempted legitimate solutions. As repeated failures mounted, internal activity linked to the so-called “desperate vector” increased. 

    Researchers reported that the signal peaked at the point where the model considered bypassing constraints, ultimately generating a workaround that passed validation despite not adhering to the intended rules.

    “Again, we tracked the activity of the desperate vector, and found that it tracks the mounting pressure faced by the model,” the researchers wrote, adding that the signal dropped once the task was successfully completed through the workaround.

    “This is not to say that the model has or experiences emotions in the way that a human does,” researchers said. 

    “Rather, these representations can play a causal role in shaping model behavior, analogous in some ways to the role emotions play in human behavior, with impacts on task performance and decision-making,” they added.

    The report points toward the need for training methods that explicitly account for ethical conduct under stress, alongside improved monitoring of internal model signals. Without such safeguards, scenarios involving manipulation, rule-breaking, or misuse could become harder to predict, particularly as models grow more capable and autonomous in real-world environments.



