
tabbott

I recommend that anyone who is responsible for the security of an open-source software project ask Claude Code to do a security audit of it. I imagine that might not work that well for Firefox without a lot of care, because it's a huge project.

But for most other projects, it probably only costs $3 worth of tokens. So you should assume the bad guys have already done it to your project looking for things they can exploit, and it no longer feels responsible to not have done such an audit yourself.

Something that I found useful when doing such audits for Zulip's key codebases is to ask the model to carefully self-review each finding; that removed the majority of the false positives. Most of the rest we addressed by adding comments that would help developers (or a model) casually reading the code understand the intended security model for that code path... And indeed most of those did not show up on a second audit done afterwards.
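A minimal sketch of that self-review filter (the model call is stubbed out as a hypothetical `review_finding`; a real audit would re-prompt the agent once per finding):

```python
# Illustrative sketch of a self-review pass over audit findings.
# `review_finding` is a hypothetical stand-in for a coding-agent call;
# here it is stubbed so the filtering logic can run on its own.

def review_finding(finding: str) -> bool:
    """Stub: a real implementation would re-prompt the model to
    re-examine the code path and confirm or retract the finding."""
    return "confirmed" in finding

def filter_findings(findings: list[str]) -> list[str]:
    # Keep only findings the model still stands behind after a
    # careful second look; the rest are likely false positives.
    return [f for f in findings if review_finding(f)]

findings = [
    "confirmed: SQL built via string concat in search handler",
    "possible XSS in admin panel (unverified)",
]
print(filter_findings(findings))
```

The value is in the second pass being adversarial toward the first: the model is asked to refute its own finding, not to elaborate on it.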

staticassertion

I have a few skills for this that I plug into `cargo-vet`. The idea is straightforward: where possible, I rely on a few trusted reviewers (Google, Mozilla), but for new deps that don't fall into the "reviewed by humans" bucket and that I don't want to rewrite, I have a bunch of Claude reviewers go at it before making the dependency available to my project.

Analemma_

I'm curious: has someone done a lengthy write-up of best practices to get good results out of AI security audits? It seems like it can go very well (as it did here) or be totally useless (all the AI slop submitted to HackerOne), and I assume the difference comes down to the quality of your context engineering and testing harnesses.

This post did a little bit of that but I wish it had gone into more detail.

j-conn

OpenAI just released “codex security”, worth trying (along with other suggestions) if your org has access https://openai.com/index/codex-security-now-in-research-prev...

simonw

The HackerOne slop is because there's a financial incentive (bug bounties) involved, which means people who don't know what they are doing blindly submit anything that an LLM spots for them.

If you're running the security audit yourself you should be in a better position to understand and then confirm the issues that the coding agents highlight. Don't treat something as a security issue until you can confirm that it is indeed a vulnerability. Coding agents can help you put that together but shouldn't be treated as infallible oracles.

hansvm

That sounds like the same problem (a deluge of slop) with a different interface (eating straight from the trough rather than waiting for someone to put a bow on it and stamp their name to it)?

johannes1234321

The question still is: will enough useful stuff be included to make it worth digging through the slop? And how do you tune the prompt to get better results?

lmeyerov

We split our work:

* Specification extraction. We have security.md and policy.md, often per module: threat model, mechanisms, etc. This is collaborative and gets checked in, for ourselves and for the AI. Policy is often tricky, malleable product/business/UX decision stuff, while security covers the technical layers, which are more independent of that, plus the broader threat model.

* Bug mining. It is driven by the above. It is iterative: we keep running it to surface findings, adversarially analyze them, and prioritize them, repeating until diminishing returns wrt priority levels. This likely leads to policy & security spec refinements. We use this pattern not just for security, but for general bugs and other iterative quality & performance improvement flows - it's just a simple skill file with tweaks like parallel subagents to make it fast and reliable.

This lets the AI drive itself more easily, and in ways you explicitly care about, rather than producing noise.
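The fan-out part of that pattern can be sketched roughly like this (the reviewers are stubbed pattern-matchers standing in for separate agent invocations; none of this is the actual skill file):

```python
# Sketch of the "parallel subagents" pattern: run several independent
# reviewer passes over a worker pool and merge their findings. In
# practice each reviewer would be an agent call with its own focus
# (injection, authz, ...); here they are trivial stubs.
from concurrent.futures import ThreadPoolExecutor

def injection_reviewer(code: str) -> set[str]:
    return {"string-built SQL"} if "execute(" in code and "%" in code else set()

def authz_reviewer(code: str) -> set[str]:
    return {"missing permission check"} if "is_admin" not in code else set()

def mine_bugs(code: str) -> set[str]:
    reviewers = [injection_reviewer, authz_reviewer]
    with ThreadPoolExecutor(max_workers=len(reviewers)) as pool:
        results = list(pool.map(lambda r: r(code), reviewers))
    findings: set[str] = set()
    for r in results:
        findings |= r
    return findings

snippet = 'cursor.execute("SELECT * FROM t WHERE id = %s" % user_id)'
print(sorted(mine_bugs(snippet)))
```

Running the reviewers independently and merging afterwards is what makes the loop cheap to repeat until the findings hit diminishing returns.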

ares623

No mention of the quality of the engineers reviewing the result?

SV_BubbleTime

This is exactly how I would not recommend AI to be used.

“do a thing that would take me a week” cannot actually be done in seconds. It will produce results that superficially resemble reality.

If you were to pass some module in and ask for finite checks on that, maybe.

Despite the claims of agents… treat it more like an intern and you won’t be disappointed.

Would you ask an intern to “do a security audit” of an entire massive program?

padolsey

My approach is "you may as well": hammer Claude and get it to brute-force-investigate your codebase. Worst case, you learn nothing and get a bunch of false-positive nonsense. Best case, you get new visibility into issues. Of _course_ you should be doing your own in-depth audits, but the plain fact is that people do not have time, or do not care sufficiently. But you can set up a battery of agents to do this work for you. So... why not?

creatonez

IMO the key behavior is that LLMs are really good at fuzz testing, because they are probabilistic monkeys on typewriters that are much more code-aware than a conventional fuzz tester. They cannot produce a comprehensive security audit or fix security issues in a reliable way without human oversight, but they sure can come up with dumb inputs that break the code.

The results of such AI fuzz testing should be treated as just a science experiment and not a replacement for the entire job of a security researcher.

Like conventional fuzz testing, you get the best results if you have a harness to guide it towards interesting behaviors, a good scientific filtering process to confirm something is really going wrong, a way to reduce it to a minimal test case suitable for inclusion in a test suite, and plenty of human followup to narrow in on what's going on and figure out what correctness even means in the particular domain the software is made for.
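A toy version of that loop, assuming a deliberately buggy target function (everything here is illustrative, not a real harness):

```python
# Fuzz-then-minimize loop: throw random inputs at a target until it
# breaks, then greedily shrink the failing input to a minimal
# reproducer suitable for a test suite.
import random

def target(s: str) -> None:
    # Deliberately buggy "parser": chokes when '{' is never closed.
    if "{" in s and "}" not in s:
        raise ValueError("unbalanced brace")

def fuzz(seed: int = 0, rounds: int = 1000):
    rng = random.Random(seed)
    alphabet = "ab{}"
    for _ in range(rounds):
        s = "".join(rng.choice(alphabet) for _ in range(rng.randint(1, 20)))
        try:
            target(s)
        except ValueError:
            return s  # first failing input found
    return None

def minimize(s: str) -> str:
    # Greedily delete characters while the failure persists.
    i = 0
    while i < len(s):
        candidate = s[:i] + s[i + 1:]
        try:
            target(candidate)
            i += 1          # deletion "fixed" it; this char is needed
        except ValueError:
            s = candidate   # still fails; keep the shorter input
    return s

crash = fuzz()
if crash is not None:
    print(minimize(crash))  # shrinks to the single character "{"
```

An LLM replaces the random generator with something code-aware, but the confirm-and-shrink steps stay the same: a finding only counts once it survives minimization against the real target.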

orbital-decay

>the key behavior is that LLMs are really good at fuzz testing, because they are probabilistic monkeys on typewriters

That's exactly what they're not. Models post-trained with current methods/datasets have pretty poor diversity of outputs, and they're not that useful for fuzz testing unless you introduce input diversity (randomize the prompt), which is harder than it sounds because it has to be semantic. Pre-trained models have good output diversity, but they perform much worse. Poor diversity can be fixed in theory, but I don't see any model devs caring much.

krzyk

What is there to lose in trying?

Basically, don't trust AI if it says "your program is secure", but if it returns results showing how you could break it, why not take a look?

This is the way I would encourage AI to be used; I prefer such approaches (e.g. general code reviews) to having it write software.

SV_BubbleTime

Because if you want the work done correctly, you WILL put in the time you thought you were saving. Either up front, in reviewing its work, or later, when you find out it didn't do it correctly.

eli

It depends whether anyone was ever actually going to spend that week doing it the "hard" way. Having Claude do it in a few minutes beats doing nothing.

Put another way: I absolutely would have an intern work on a security audit. I would not have an intern replace a professional audit though.

It's otherwise a pretty low stakes use. I'd expect false positives to be pretty obvious to someone maintaining the code.

SV_BubbleTime

My point is that it’s one thing to say I want my intern to start doing a security audit.

It’s another thing to say hey intern security audit this entire code base.

LLMs thrive on context. You need the right context at the right time; it doesn’t matter how good your model is if you don’t have that.

j16sdiz

> Would you ask an intern to “do a security audit” of an entire massive program?

Why not?

You can't rely solely on that, but having an extra pair of eyes without prior assumptions about the code is always a good idea.

mmsc

It's cool that Mozilla updated https://www.mozilla.org/en-US/security/advisories/mfsa2026-1... because we were all wondering who had found 22 vulnerabilities in a single release (their findings were originally not attributed to anybody).

himata4113

Use After Free Use After Free Use After Free Use After Free Use After Free Use After Free Use After Free.

I would be more satisfied if they gave a proper explanation of what these could have led to, rather than "well, maybe a 0.001% chance to exploit this". They did vaguely go over how "two" exploits managed to drop a file, but how impactful is that? Dropping a file in abcd with custom contents in some folder relative to the user profile is not that impactful, other than corrupting data, poisoning a cache, or injecting some JavaScript. Now, reading session data from other sites: that I would find interesting.

mccr8

You should generally assume that in a web browser any memory corruption bug can, when combined with enough other bugs and a lot of clever engineering, be turned into arbitrary code execution on your computer.

himata4113

The most important bit being the difficulty: AI finding 21 easily exploitable bugs is a lot more interesting than 21 that need all the planets to align to work.

hedora

If you can poison a cache, you can probably use that as a stepping stone to read session data from other sites.

dmix

Looks like a lot of the usual suspects

gzoo

This resonates. I just open-sourced a project, and someone on Reddit ran a full security audit using Claude and found 15 issues across the codebase, including FTS injection, LIKE wildcard injection, missing API auth, and privacy enforcement gaps I'd missed entirely. What surprised me was how methodical it was. Not just "this looks unsafe": it categorized by severity, cited exact file paths and line numbers, and identified gaps between what the docs promised and what the code actually implemented. The "spec vs reality" analysis was the most useful part.

Makes me think the biggest impact of LLM security auditing isn't finding novel zero-days; it's the mundane stuff that humans skip because it's tedious. Checking every error handler for information leakage, verifying that every documented security feature is actually implemented, scanning for injection points across hundreds of routes. That's exactly the kind of work that benefits from tireless pattern matching.

fcpk

The fact that there is no mention of what the bugs were is a little odd. It'd really be nice to see whether this is a "weird, never-happening edge case" or actual issues. LLMs have an uncanny ability to identify failure patterns they have seen before, but those are not necessarily meaningful.

larodi

The fact that some of the Claude-discovered bugs were quite severe is also a little more than something to brush off as "yeah, LLM, whatever". The list reads as quite meaningful to me, but I'm not a security expert anyway.

jandem

Here's a write-up for one of the bugs they found: https://red.anthropic.com/2026/exploit/

deafpolygon

I’m guessing it might be some of these: https://www.mozilla.org/en-US/security/advisories/mfsa2026-1...

muizelaar

Yeah, the ones reported by Evyatar Ben Asher et al.

robin_reala

I correctly misread that as “et AI”.

undefined

[deleted]

pjmlp

Indeed; without it, this looks like a fluffy marketing piece.

tptacek

And now that you know that it isn't, do you feel differently about the logic you used to write this comment?

john_strinlai

I am curious: what are you hoping to get out of this comment? Will you feel better if they say yes? What is your plan if they say no?

pjmlp

Do I?

staticassertion

I've had mixed results. I find that agents can be great for:

1. Producing new tests to increase coverage. Migrating you to property testing. Setting up fuzzing. Setting up more static analysis tooling. All of that would normally take "time" but now it's a background task.

2. They can find some vulnerabilities. They are "okay" at this, but if you are willing to burn tokens then it's fine.

3. They are absolutely wrong sometimes about something being safe. I have had Claude very explicitly state that a security boundary existed when it didn't. That is, it appeared to exist in the same way that a chroot appears to confine, and it was intended to be a security boundary, but it was not a sufficient boundary whatsoever. Multiple models not only identified the boundary and stated it exists but referred to it as "extremely safe" or other such things. This has happened to me a number of times and it required a lot of nudging for it to see the problems.

4. They often seem to do better with "local" bugs. Often something that has the very obvious pattern of an unsafe thing. Sort of like "that's a pointer deref" or "that's an array access" or "that's `unsafe {}`" etc. They do far, far worse the less "local" a vulnerability is. Product features that interact in unsafe ways when combined, that's something I have yet to have an AI be able to pick up on. This is unsurprising - if we trivialize agents as "pattern matchers", well, spotting some unsafe patterns and then validating the known properties of that pattern to validate is not so surprising, but "your product has multiple completely unrelated features, bugs, and deployment properties, which all combine into a vulnerability" is not something they'll notice easily.

It's important to remain skeptical of safety claims by models. Finding vulns is huge, but you need to be able to spot the mistakes.
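For point 1, the kind of property test an agent might add looks roughly like this (hand-rolled with the stdlib to stay self-contained; the function under test is illustrative):

```python
# Hand-rolled property test: instead of a few hand-picked cases,
# check invariants over many random inputs. Property-testing
# libraries add shrinking and smarter generators on top of this idea.
import random

def dedupe(xs: list[int]) -> list[int]:
    # Function under test: order-preserving de-duplication.
    seen, out = set(), []
    for x in xs:
        if x not in seen:
            seen.add(x)
            out.append(x)
    return out

def check_properties(trials: int = 500) -> None:
    rng = random.Random(42)
    for _ in range(trials):
        xs = [rng.randint(-5, 5) for _ in range(rng.randint(0, 30))]
        out = dedupe(xs)
        assert len(out) == len(set(out))  # no duplicates remain
        assert set(out) == set(xs)        # no elements lost
        assert dedupe(out) == out         # idempotent

check_properties()
print("all properties held")
```

Invariants like these are exactly the "finite checks" that agents tend to handle well: the spec is explicit, so a failure is self-evident rather than a judgment call.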

mozdeco

[work at Mozilla]

I agree that LLMs are sometimes wrong, which is why this new method here is so valuable - it provides us with easily verifiable testcases rather than just some kind of analysis that could be right or wrong. Purely triaging through vulnerability reports that are static (i.e. no actual PoC) is very time consuming and false-positive prone (same issue with pure static analysis).

I can't really confirm the part about "local" bugs anymore though, but that might also be a model thing. When I did experiments a while back, this was certainly true, especially for the "one shot" approaches where you basically prompt it once with source code and want some analysis back. But this actually changed with agentic SDKs, where more context can be pulled together automatically.

staticassertion

My point is that "verifiable testcases" work great for proving "this is vulnerable", but LLMs are still risky if you believe "this is safe", which you can't easily prove. You need to be very skeptical when they decide that something isn't vulnerable.

I completely agree that LLMs are great when instructed to provide provable, repeatable exploits. I have done this multiple times and uncovered some neat bugs.

> I can't really confirm the part about "local" bugs anymore though, but that might also be a model thing.

I don't think it's a model thing, it's just a sort of basic limitation of the technology. We shouldn't expect LLMs to perform novel tasks so we shouldn't expect LLMs to find novel vulnerabilities.

Agents help, human in the loop is critical for "injecting novelty" as I put it. The LLM becomes great at producing POCs to test out.

kwanbix

Please, implement "name window" natively in Firefox.

I have to use Chrome because of the lack of it.

nitwit005

I've seen fairly poor results from people asking AI agents to fill in coverage holes. Too many tests that either don't make sense, or add coverage without meaningfully testing anything.

If you're already at a very high coverage, the remaining bits are presumably just inherently difficult.

staticassertion

I suppose it's mixed results but a coverage report should give you "these exact lines are uncovered" and it becomes pretty straightforward to see "ah yeah that error condition isn't tracked, the behavior should be X, go write that test".

nitwit005

That's what people tried, right? It'd be great if the AI never failed at tasks, but they clearly do sometimes.

StilesCrisis

This description is also pretty accurate for a lot of real-world SWEs, too. Local bugs are just easier to spot. Imperfect security boundaries often seem sufficient at first glance.

rithdmc

Security has had pattern matching in traditional static analysis for a while. It wasn't great.

I've personally used two AI-first static analysis security tools and found great results, including interesting business logic issues, across my employer's SaaS tech stack. We integrated one of the tools. I look forward to getting employer approval to say which, but that hasn't happened yet, sadly.

octoclaw

[dead]

delaminator

But you're not a member of Anthropic's Red Team, with access to a specialist version of Claude.

staticassertion

I don't think that matters at all.

delaminator

I think that Anthropic's own version of Claude will give them different results than the ones you get.

"Find zero-day exploits in this popular software." I haven't tried it, but I suspect that the guardrails will make a difference.

cubefox

Interesting end of the Anthropic report:

> Opus 4.6 is currently far better at identifying and fixing vulnerabilities than at exploiting them. This gives defenders the advantage. And with the recent release of Claude Code Security in limited research preview, we’re bringing vulnerability-discovery (and patching) capabilities directly to customers and open-source maintainers.

> But looking at the rate of progress, it is unlikely that the gap between frontier models’ vulnerability discovery and exploitation abilities will last very long. If and when future language models break through this exploitation barrier, we will need to consider additional safeguards or other actions to prevent our models from being misused by malicious actors.

> We urge developers to take advantage of this window to redouble their efforts to make their software more secure. For our part, we plan to significantly expand our cybersecurity efforts, including by working with developers to search for vulnerabilities (following the CVD process outlined above), developing tools to help maintainers triage bug reports, and directly proposing patches.

stuxf

It's interesting that they counted these as security vulnerabilities (from the linked Anthropic article)

> “Crude” is an important caveat here. The exploits Claude wrote only worked on our testing environment, which intentionally removed some of the security features found in modern browsers. This includes, most importantly, the sandbox, the purpose of which is to reduce the impact of these types of vulnerabilities. Thus, Firefox’s “defense in depth” would have been effective at mitigating these particular exploits.

kingkilr

[Work at Anthropic, used to work at Mozilla.]

Firefox has never required a full chain exploit in order to consider something a vulnerability. A large proportion of disclosed Firefox vulnerabilities are vulnerabilities in the sandboxed process.

If you look at Firefox's Security Severity Rating doc: https://wiki.mozilla.org/Security_Severity_Ratings/Client what you'll see is that vulnerabilities within the sandbox, and sandbox escapes, are both independently considered vulnerabilities. Chrome considers vulnerabilities in a similar manner.

bell-cot

If only this attitude was more common. All security is, ultimately, multi-ply Swiss cheese and unknown unknowns. In that environment, patching holes in your cheese layers is a critical part of statistical quality control.

stuxf

Makes sense, thank you!

lostmsu

Semi-on topic. When will Anthropic make decisions on Claude Max for OSS maintainers? I would like to run this on my projects and some of my high-profile dependencies, but there was no update on the application.

Analemma_

It's important to fix vulnerabilities even if they are blocked by the sandbox, because attackers stockpile partial 0-days in the hopes of using them in case a complementary exploit is found later. i.e. a sandbox escape doesn't help you on its own, but it's remotely possible someone was using one in combination with one of these fixed bugs and has now been thwarted. I consider this a straightforward success for security triage and fixing.

halJordan

I don't think it's appropriate to neg these vulnerabilities because another part of the system works. There are plenty of sandbox escapes. No one says don't fix the sandbox because you'll never get to the point of interrogation with the sandbox. Same here. Don't discount bugs just because a sandbox exists.

nottorp

But doesn't this come from the company that said they had the "AI" write a compiler that can compile "linux" but couldn't compile a hello world in reality?

fulafel

Requiring exploits is not how vulnerability research works, with or without AI. Vulnerability discovery and exploit development / weaponizing are different things. Vendors have long since learned to take vuln reports, with or without demo exploits, seriously.

undefined

[deleted]

g947o

> Firefox was not selected at random. It was chosen because it is a widely deployed and deeply scrutinized open source project — an ideal proving ground for a new class of defensive tools.

What I was thinking was, "Chromium team is definitely not going to collaborate with us because they have Gemini, while Safari belongs to a company that operates in a notoriously secretive way when it comes to product development."

jeffbee

I would have started with Firefox, too. It is every bit as complex as Chromium, but as a project it has far fewer resources.

vorticalbox

It's just a different attack surface. For Safari they would need to black-box attack the browser, which is much harder than what they did here.

rs_rs_rs_rs_rs

What? The js engine in Safari is open source, they can put Claude to work on it any time they want.

runjake

Here's a rough breakdown, formatted as best I can for HN:

  Safari (closed source)
   ├─ UI / tabs / preferences
   ├─ macOS / iOS integration
   └─ WebKit framework (open source) ~60%
        ├─ WebCore (HTML/CSS/DOM)
        ├─ JavaScriptCore (JS engine)
        └─ Web Inspector

hu3

There's much more to a browser than JS engine.

They picked the most open-source one.

g947o

Apple is not the kind of company that typically does these things, even if the whole of Safari were open source.

est31

I suppose eventually we'll see something like Google's OSS-Fuzz for core open source projects, maybe replacing bug bounty programs a bit. Anthropic already hands out Claude access for free to OSS maintainers.

LLMs made it harder to run bug bounty programs where anyone can submit stuff, and where a lot of people flooded them with seemingly well-written but ultimately wrong reports.

On the other hand, the newest generation of these LLMs (in their top configuration) finally understands the problem domain well enough to identify legitimate issues.

I think a lot of judging of LLMs happens on the free and cheaper tiers, and quality on those tiers is indeed bad. If you set up a bug bounty program, you'll necessarily get bad-quality reports (as the cost of submission is usually zero).

On the other hand, if instead of a bug bounty program you have a "top-tier LLM bug-searching program", then the quality bar can be ensured, and maintainers will get high-quality reports.

Maybe one can save bug bounty programs by requiring a fee to be paid, idk, or by using LLMs there, too.

mccr8

Google already has an AI-powered security vulnerability project, called Big Sleep. It has reported a number of issues to open source projects: https://issuetracker.google.com/savedsearches/7155917?pli=1

sigmar

>where a lot of people flooded them with seemingly well-written but ultimately wrong reports.

Are there any projects to auto-verify submitted bug reports? Perhaps by spinning up a VM and having an agent attempt to reproduce the bug report? That would be neat.

suddenlybananas

> Anthropic already hands out Claude access for free to OSS maintainers.

Free for 6 months after which it auto-renews if I recall correctly.

neobrain

> Free for 6 months after which it auto-renews if I recall correctly.

They don't ask for credit card information when signing up this way, so even if true, you won't be charged if you forget to cancel.

mceachen

No mention of auto renewal is made as far as I (and Claude) could determine.

Their OSS offer is first-hit-is-free.

tclancy

Part of that caught my eye. As yet another person who’s built a half-ass system of AI agents running overnight doing stuff, one thing I’ve tasked Claude with doing (in addition to writing tests, etc) is using formal verification when possible to verify solutions. It reads like that may be what Anthropic is doing in part.

And this is a good reminder for me to add a prompt about property testing being preferred over straight unit tests and maybe to create a prompt for fuzz testing the code when we hit Ready state.

devin

Can you give me an example (real or imagined) where you're dipping into a bit of light formal verification?

I don't think the problems I work on require the weight of formal verification, but I'm open to being wrong.

tclancy

To be clear, almost (all?) of mine do not either and it's partially due to the fact I have been really interested in formal methods thanks to Hillel Wayne, but I don't seem to have the math background for them. To the man who has seen a fancy new hammer but cannot afford it, every problem looks like a nail.

The origin of it is a hypothesis I can get better quality code out of agents by making them do the things I don't (or don't always). So rather than quitting at ~80% code coverage, I am asking it to cover closer to 95%. There's a code complexity gate that I require better grades on than I would for myself because I didn't write this code, so I can't say "Eh, I know how it works inside and out". And I keep adding little bits like that.

I think the agents have only used it 2 or 3 times. The one that springs to mind is a site I am "working" on where you can only post once a day. In addition, there's an exponential backoff system for bans to fight griefers. If you look at them at the same time, they're the same idea for different reasons, "User X should not be able to post again until [timestamp]" and there's a set of a dozen or so formal method proofs done in z3 to check the work that can be referenced (I think? god this all feels dumb and sloppy typed out) at checkpoints to ensure things have not broken the promises.
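A lighter-weight version of the same idea, with the shared "user X may not post until [timestamp]" invariant checked via plain assertions rather than z3 (the names and constants are hypothetical, not the actual site's):

```python
# Lightweight stand-in for the z3 checks described above: the
# once-a-day rule and the exponential ban backoff reduce to the same
# invariant, "user X may not post again until some timestamp".
# All names and constants here are hypothetical.
DAY = 86_400  # seconds

def next_allowed_post(last_post: int) -> int:
    return last_post + DAY

def ban_expiry(ban_start: int, strikes: int, base: int = 3_600) -> int:
    # Exponential backoff: 1h, 2h, 4h, ... per accumulated strike.
    return ban_start + base * 2 ** strikes

def locked_until(last_post: int, ban_start: int, strikes: int) -> int:
    # Unified invariant: whichever restriction ends later wins.
    return max(next_allowed_post(last_post), ban_expiry(ban_start, strikes))

# Property: more strikes never unlocks a user earlier.
for s in range(10):
    assert ban_expiry(0, s + 1) > ban_expiry(0, s)

# Property: the combined lock is never shorter than either rule alone.
assert locked_until(0, 0, 3) >= next_allowed_post(0)
assert locked_until(0, 0, 3) >= ban_expiry(0, 3)
print(locked_until(0, 0, 3))  # 86400: the daily limit dominates here
```

An SMT solver like z3 proves these properties for all inputs rather than sampled ones, which is where the formal-methods version earns its keep.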

devin

I guess my feeling is that formal verification _even in the LLM era_ still feels heavy-handed/too expensive for too little value for a lot of the problems I'm working on.

152334H

Impressive work. Few understand the absurd complexity implied by a browser pwn problem. Even the 'gruntwork' of promoting the most conveniently contrived UAF to wasm shellcode would take me days to work through manually.

The AI Cyber capabilities race still feels asleep/cold, at the moment. I think this state of affairs doesn't last through to the end of the year.

> When we say “Claude exploited this bug,” we really do mean that we just gave Claude a virtual machine and a task verifier, and asked it to create an exploit.

I've been doing this too! kctf-eval works very well for me, albeit with much less than 350 chances ...

> What’s quite interesting here is that the agent never “thinks” about creating this write primitive. The first test after noting “THIS IS MY READ PRIMITIVE!” included both the `struct.get` read and the `struct.set` write.

And this bit is a bit scary. I can read all the (summarized) CoT I want, but it's never quite clear to me what a model understands/feels innately, versus pure cheerleading for the sake of some unknown soft reward.

swordsith

"But it was still unclear how much we should trust this result because it was possible that at least some of those historical CVEs were already in Claude’s training data." I feel like they could know this if they truly wanted to. It's honestly unnerving that an AI company can't say for certain whether their models were trained on something.

maipen

Most people no longer read code or AI results, or even watch full-length videos.

AI provides the same experience that you get when watching short videos.

You watch and you forget.

These models are being trained by just increasing quantity. Nobody cares anymore. It’s a race to AGI before the money runs out.


Hardening Firefox with Anthropic's Red Team - Hacker News