Anthropic’s Glasswing Project | ✉️ #91
Hey! 👋
Anthropic’s Glasswing project may become a turning point for cybersecurity. In April 2026, Anthropic announced that a small group of major companies would get early access to Claude Mythos Preview, a new AI model that can find and exploit software vulnerabilities at a level beyond that of almost any human expert. This is not just another coding tool. It shows that AI is starting to operate at the highest level of security research, where finding a single hidden bug can affect companies, governments, and critical infrastructure.
The big change is speed and scale. According to Anthropic, the model has already helped find thousands of serious vulnerabilities, including very dangerous ones. That means AI systems may soon be able to discover bugs before most security teams even know they exist. In the best case, this could make the digital world safer, because defenders could patch weak points faster than ever before. Security may become less reactive and more proactive.
But this is also a dangerous shift. A system that can find vulnerabilities can also help exploit them. That is why Glasswing is a double-edged tool. The same AI that helps defenders protect systems can also become useful for attackers. And there is another uncomfortable question: if an AI finds a bug, that does not mean the bug will automatically be reported, fixed, or shared with everyone who is at risk. Some vulnerabilities may stay hidden if the people controlling the system do not want to reveal them yet.
That is why projects like Glasswing could change the future of security in a very deep way. From now on, the key advantage may not be who has the best human security team, but who has the best AI systems, the fastest response, and the most control over vulnerability information. AI may soon become the first to see the weaknesses inside the digital world. The question is whether that power will be used mainly to protect us, or whether it will create a new and more unequal security race.
What We've Discovered
What Claude Code Actually Chooses: Research on which tools and solutions Claude Code prefers when it runs. Claude's decisions are shaping more and more of the technological landscape around us. As an AI user, it's important to know that other, better tools also exist, and to steer Claude away from the popular but less sophisticated or reliable ones (looking at you, GitHub Actions).
What Happened to Amazon, or "Don’t Put Down the Drone": How Founders Become Day Two and Take the Company With Them. Focused on Amazon, this article also explains why AWS has been evolving the way it has over the last 4 years.
Building fault-tolerant applications with AWS Lambda durable functions: The article leaves only one question unanswered: how do we use this together with Step Functions without building a complete monstrosity?
The 92nd mkdev dispatch will arrive on Friday, April 24th. See you next time!