Trail of Bits 62 · 12d ago
We beat Google’s zero-knowledge proof of quantum cryptanalysis
Two weeks ago, Google’s Quantum AI group published a zero-knowledge proof of a quantum circuit so optimized that they concluded first-generation quantum computers will break elliptic curve cryptography keys in as little as 9 minutes. Today, Trail of Bits is publishing our own zero-knowledge proof that significantly improves on Google’s across all metrics. Our result is not due to some quantum breakthrough, but rather to the exploitation of multiple subtle memory safety and logic vulnerabilities in Goo...
Simon Willison 50 · 11d ago
Join us at PyCon US 2026 in Long Beach - we have new AI and security tracks this year
This year's PyCon US is coming up next month from May 13th to May 19th, with the core conference talks from Friday 15th to Sunday 17th and tutorial and sprint days on either side. It's in Long Beach, California this year, the first time PyCon US has come to the West Coast since Portland, Oregon in 2017 and the first time in California since Santa Clara in 2013. If you're based in California, this is a great opportunity to catch up with the Python community, meet a whole lot of interesting people ...
Rapid7 Blog 35 · 11d ago
Metasploit Wrap-Up 04/17/2026
Happy Friday - Seven New Metasploit Modules. We’re happy to announce that Metasploit Framework had a big week, landing seven new modules alongside various bug fixes and enhancements. This week’s highlights include RCE modules targeting AVideo, openDCIM, Selenium Grid/Selenoid, and ChurchCRM. On the post-exploitation side, Windows saw three new persistence techniques added as modules, targeting Telemetry scheduled tasks, PowerShell profiles, and Microsoft BITS. What a time to be alive as a Meta...
The Hacker News 30 · 12d ago
Three Microsoft Defender Zero-Days Actively Exploited; Two Still Unpatched
Huntress is warning that threat actors are exploiting three recently disclosed security flaws in Microsoft Defender to gain elevated privileges on compromised systems. The activity involves the exploitation of three vulnerabilities codenamed BlueHammer, RedSun, and UnDefend, all of which were released as zero-days by a researcher known as Chaotic Eclipse...
The Hacker News 28 · 12d ago
Apache ActiveMQ CVE-2026-34197 Added to CISA KEV Amid Active Exploitation
A recently disclosed high-severity security flaw in Apache ActiveMQ Classic has come under active exploitation in the wild, per the U.S. Cybersecurity and Infrastructure Security Agency (CISA). Accordingly, the agency has added the vulnerability, tracked as CVE-2026-34197 (CVSS score: 8.8), to its Known Exploited Vulnerabilities (KEV) catalog, requiring Federal Civilian...
ProjectDiscovery.io | Blog 14 · 11d ago
Neo v. DIY: The gap between a single finding and a mature security program
In our latest webinar, our Founding Solutions Engineer, Davis Franklin, addressed the massive gap between finding a vulnerability with an LLM and running a mature security program. That gap is what Neo is built to close. With the release of Opus 4.6 and the announcement of Mythos, the question we hear constantly has gotten louder: Can I just build this with Claude Code? The short answer is yes. You can spin up a working PoC in about half an hour, find a real vulnerability, and feel genuinely co...
Red Hat Security 8 · 12d ago
MCP security: Containerization and Red Hat OpenShift integration
In our previous three articles, we laid the groundwork for a protected Model Context Protocol (MCP) ecosystem by analyzing the current threat landscape, implementing robust authentication and authorization, and exploring critical logging and runtime security measures. Those articles focused on who can access what, and how to monitor those interactions. Now, we'll shift the focus to the physical and virtual environments in which these systems live. Of course, security-focused development is only half the b...
gilesthomas.com 6 · 11d ago
How an LLM becomes more coherent as we train it
I remember finding it interesting when, back in 2015, Andrej Karpathy posted about RNNs and gave an example of how their output improves over the course of a training run. What might that look like for a (relatively) modern transformer-based LLM? I recently trained a GPT-2-small-style LLM, with 163 million parameters, on about 3.2 billion tokens (that's about 12.8 GiB of text) from the Hugging Face FineWeb dataset, and over the course of that training run, I saved the current model periodic...
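The "save the current model periodically" setup the post describes can be sketched as a simple checkpoint schedule inside a training loop. The helper below is a minimal illustration only; the total step count and interval are hypothetical placeholders, not the values used in the actual run.

```python
def checkpoint_steps(total_steps: int, interval: int) -> list[int]:
    """Return the training steps at which a model snapshot would be saved.

    Snapshots land every `interval` steps, and the final step is always
    included so the fully trained model is never lost.
    """
    steps = list(range(interval, total_steps + 1, interval))
    if not steps or steps[-1] != total_steps:
        steps.append(total_steps)
    return steps

# e.g. a hypothetical 10,000-step run snapshotted every 2,500 steps
print(checkpoint_steps(10_000, 2_500))  # → [2500, 5000, 7500, 10000]
```

In a real run each of these steps would trigger a model save (e.g. writing the weights to disk), and sampling from each saved snapshot afterwards shows the output growing more coherent as training progresses.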