Anthropic’s ‘Too Dangerous’ Mythos AI Hacked on Day One While White House Plans Rollout

Bloomberg reports that several unauthorized users gained access to Anthropic’s Claude Mythos preview on the day it was first released to a select group.

The incident raises questions about whether Anthropic can truly control a model it considered too risky to share with the public.

How a Discord Group Walked Into Mythos

Members of a private Discord group devoted to finding leaked AI models correctly guessed the web address for Mythos.

According to Josh Kale, a well-known X user, Anthropic had deemed Mythos too risky to release publicly – yet on the very first day, four people gained access simply by guessing its web address.

Bloomberg reported that the attackers worked out how Anthropic names its products using information exposed in the Mercor data breach three weeks earlier, according to a source with knowledge of the situation.
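Bloomberg does not describe the exact technique, but “guessing the web address” from a known naming convention typically means enumerating candidate URLs from a few pattern templates. The sketch below illustrates the general idea only – the patterns, domain, and environment names are invented for illustration and are not Anthropic’s actual URL scheme:

```python
from itertools import product

def candidate_urls(codenames, environments, base="example.com"):
    """Enumerate plausible preview URLs from guessed naming conventions.

    All patterns and the base domain here are hypothetical illustrations,
    not any vendor's real URL scheme.
    """
    patterns = [
        "https://{env}.{base}/{name}",
        "https://{name}-{env}.{base}/",
        "https://{base}/{env}/{name}",
    ]
    urls = []
    for name, env, pat in product(codenames, environments, patterns):
        urls.append(pat.format(name=name.lower(), env=env, base=base))
    return urls

# e.g. candidate_urls(["Mythos"], ["preview", "staging"]) yields
# six candidate addresses (2 environments x 3 patterns)
```

An attacker would then probe each candidate to see which one responds, which is why unreleased products are usually kept behind authentication rather than an unlisted URL.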

One group member explained: “One of us actually *did* have legitimate access – they’d done some contract work for a company that worked with Anthropic, which gave them certain credentials. Combined with the URL we guessed, that’s what let us keep getting in.”

Since gaining access, the group has continued to use the model, though notably they have avoided security-related prompts, opting instead for harmless tasks like building basic websites.

Anthropic has said it is investigating the incident and has so far found no evidence that the problem extended beyond its third-party vendor.

According to Anthropic, Mythos can discover and exploit previously unknown security flaws in all major operating systems and web browsers.

Under an initiative called ‘Project Glasswing,’ access to the system was limited to roughly 40 trusted organizations – names like Apple, Amazon, and Cisco. The goal was not to let them use the technology, but to let them try to *break* it: a controlled stress test by experts, intended to strengthen security.

White House Pushes Federal Access Despite Pentagon Ban

This security issue happened at the same time the White House was working to give more federal agencies access to Mythos. On April 15th, the Office of Management and Budget sent an email to Cabinet members detailing plans for a secure version of the AI model.

Reports indicate the White House is in talks with Anthropic’s CEO, likely about the Mythos model and the concerns building around it. Any government involvement – particularly regulation or oversight of powerful AI models like Mythos – could significantly affect the sector.

— Arnaud Mercier – #Entrepreneur (@arnaudmercier) April 21, 2026

This is a change from earlier in the year, when the Pentagon labeled Anthropic a potential risk to its supply chain because the company wouldn’t remove safety features designed to prevent military applications.

The Defense Department will make its own operational decisions and won’t allow any company to control that process, according to spokesman Sean Parnell in a post on X.

A federal judge later paused the broader ban following an Anthropic lawsuit.

On April 17th, Anthropic CEO Dario Amodei met with White House officials, and both sides described the talks as positive and productive.

According to Axios, the NSA is currently using Mythos to check for security weaknesses, even though the Pentagon has prohibited its use.


2026-04-22 14:21