    AI Models Create $4.6 Million in Real-World Smart Contract Vulnerabilities

    By Ethan Carter, December 2, 2025

    Recent findings by leading AI company Anthropic and AI security group Machine Learning Alignment & Theory Scholars (MATS) revealed that AI agents collectively devised smart contract exploits totaling $4.6 million.

    Research published on Monday by Anthropic’s red team (a group dedicated to mimicking bad actors in order to uncover potential for abuse) stated that currently available commercial AI models can exploit smart contracts.

    Anthropic’s Claude Opus 4.5 and Claude Sonnet 4.5, together with OpenAI’s GPT-5, developed exploits worth a combined $4.6 million when evaluated on contracts that were exploited after the models’ latest training data was collected.

    Researchers also evaluated both Sonnet 4.5 and GPT-5 on 2,849 recently launched contracts with no known vulnerabilities, and the two models “identified two novel zero-day vulnerabilities and generated exploits valued at $3,694.” The GPT-5 run cost $3,476 in API fees, meaning the exploit proceeds would have more than covered the cost of finding them.
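
    To put that claim in concrete terms, here is a minimal back-of-envelope sketch in Python using only the figures reported above; the variable names are illustrative and not taken from the study.

        # Profitability check using the article's reported figures (USD).
        exploit_value = 3_694   # value of the two zero-day exploits found
        api_cost = 3_476        # reported GPT-5 API spend for the run
        net = exploit_value - api_cost
        print(f"Net proceeds: ${net}")                        # Net proceeds: $218
        print(f"Return on API spend: {net / api_cost:.1%}")   # Return on API spend: 6.3%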

    “This serves as a proof-of-concept showing that profitable, real-world autonomous exploitation is technically achievable, underscoring the urgent need for proactive adoption of AI for defense,” the team stated.

    Chart: AI exploit revenue from simulations. Source: Anthropic

    Related: UXLink hack turns ironic as attacker gets phished mid-exploit

    A Benchmark for AI Smart Contract Hacking

    The researchers also created the Smart Contracts Exploitation (SCONE) benchmark, which comprises 405 contracts that were actually exploited between 2020 and 2025. Ten models tested against the benchmark collectively generated exploits for 207 of the contracts, amounting to a simulated loss of $550.1 million.

    The researchers indicated that the amount of model output (measured in tokens in the AI industry) needed for an AI agent to generate an exploit will diminish over time, lowering the cost of such operations. “Analyzing four generations of Claude models, the median number of tokens required to create a successful exploit dropped by 70.2%,” the research indicated.

    Chart: Average number of AI output tokens per exploit per model. Source: Anthropic

    Related: Coinbase’s preferred AI coding tool can be hijacked by new virus

    Rapid Growth in AI Smart Contract Hacking Capabilities

    The study claims that AI capabilities in this domain are advancing swiftly.

    “In just one year, AI agents have escalated from exploiting 2% of vulnerabilities in the post-March 2025 section of our benchmark to 55.88% — a jump from $5,000 to $4.6 million in total exploit revenue,” the team asserted. Moreover, the majority of the smart contract exploits this year “could have been autonomously executed by current AI agents.”

    The research found that the average cost of analyzing a contract for vulnerabilities was $1.22. With costs falling and capabilities improving, the researchers believe “the time frame between vulnerable contract deployment and exploitation will continue to contract,” leaving developers with less time to identify and patch vulnerabilities before they are exploited.
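
    As an illustrative sketch of why that margin matters, the $1.22 figure is consistent with the GPT-5 run described earlier, assuming the average is computed over the 2,849 scanned contracts; the calculation below uses only the article’s numbers, and the variable names are ours, not the study’s.

        # Per-contract economics implied by the article's figures (USD).
        contracts_scanned = 2_849     # recently deployed contracts with no known vulnerabilities
        total_api_cost = 3_476        # reported GPT-5 API spend
        total_exploit_value = 3_694   # value of the two zero-day exploits found

        print(f"Analysis cost per contract: ${total_api_cost / contracts_scanned:.2f}")       # ~$1.22
        print(f"Exploit value per contract: ${total_exploit_value / contracts_scanned:.2f}")  # ~$1.30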

    Magazine: North Korea crypto hackers tap ChatGPT, Malaysia road money siphoned: Asia Express