Google Engineers Launch "Sashiko" for Agentic AI Code Review of the Linux Kernel (phoronix.com)
82 points by speckx 5 hours ago | 34 comments
rwmj 4 hours ago [-]
Better to link to the site itself, or one of the reviews?

For an example of a review (picked pretty much at random) see: https://sashiko.dev/#/patchset/20260318151256.2590375-1-andr...

The original patch series corresponding to that is: https://lkml.org/lkml/2026/3/18/1600

Edit: Here's a simpler and better example of a review: https://sashiko.dev/#/patchset/20260318110848.2779003-1-liju...

I'm very glad they're not spamming the mailing list.

jeffbee 4 hours ago [-]
That is both really useful and a great example of why they should have stopped writing code in C decades ago. So many kernel bugs have arisen from people adding early returns without thinking about the cleanup functions, a problem that many other language platforms handle automatically on scope exit.
overfeed 3 hours ago [-]
Must we do this on every thread about the Linux kernel?
RobRivera 1 hour ago [-]
The beatings will continue until morale improves
tigen 2 hours ago [-]
This ought to help with that. https://thephd.dev/c2y-the-defer-technical-specification-its...
nurettin 1 hour ago [-]
> stopped writing code in C decades ago.

And what were they supposed to use in 2006? Free Pascal? Ada?

greenavocado 49 minutes ago [-]
Someone suggested C++ and you should see the response from Linus

https://harmful.cat-v.org/software/c++/linus

throwa356262 42 minutes ago [-]
I find it interesting that this is written in Rust (not golang) and co-authored with Claude (not gemini)
withinrafael 3 hours ago [-]
Looks cool, but this site is a bit difficult for me to grok.

I think the table might be slightly inside-out? The Status column appears to show internal pipeline states ("Pending", "In Review") that really only matter to the system, while Findings are buried in the column on the far right. For example, one reviewed patchset with a critical and a high finding is just casually hanging out below the fold. I couldn't immediately find a way to filter or search for severe findings.

It might help to separate unreviewed patches from reviewed ones, and somehow wire the findings into the visual hierarchy better. Or perhaps I'm just off base and this is targeting a very specific Linux kernel community workflow/mindset.

Just my 1c.

tonfa 3 hours ago [-]
I think it's just a dashboard, not meant to be used as is.

Reviewers are more likely to instead subscribe to get the review inline, and then potentially incorporate that with their feedback.

fdghrtbrt 2 hours ago [-]
> difficult for me to grok

You sound like a troglodyte.

monksy 4 hours ago [-]
I think this is a great and interesting project. However, I hope that they're not doing this to submit patches to the kernel. It would be much better to layer in additional tests that exercise the bugs and defects, verifying both their existence and their fixes.

(Also, tests can be focused per defect, which prevents overload.)

From some of the changes I'm seeing, this looks like it's doing style and structure changes, which for a codebase this size is going to add drag to existing development. (I'm supportive of cleanups, but doing them on an automated basis is a bad idea.)

E.g. https://sashiko.dev/#/message/20260318170604.10254-1-erdemhu...

rwmj 4 hours ago [-]
No, it's reviewing patches posted on LKML and offering suggestions. The original patch posted corresponding to your link was this, which was (presumably!) written by a human:

https://lkml.org/lkml/2026/3/9/1631

bjackman 4 hours ago [-]
Style and structure is not the goal here, the reason people are interested in it is to find bugs.

Having said that, if it can save maintainers time it could be useful. It's worth slowing contribution down if it lets maintainers get more reviews done, since the kernel is bottlenecked much more on maintainer time than on contributor energy.

My experience with using the prototype is that it very rarely comments with "opinions"; it only identifies functional issues. So when you get false positives, they're usually of the form "the model doesn't understand the code" or "the model doesn't understand the context" rather than "I'm getting spammed with pointless advice about C programming preferences". This may be a subsystem-specific thing, as different areas of the codebase have different prompts. (It may also be that my coding style happens to align with its "preferences".)

kleiba 2 hours ago [-]
> Sashiko was able to find around 53% of bugs

That's cool. Another interesting metric, however, would be the false positive ratio: like, I could just build a bogus system that simply marks everything as a bug and then claim "my system found 100% of all bugs!"

In practice, not just the recall of a bug-finding system is important but also its precision: if human reviewers get spammed with piles of alleged bug reports by something like Sashiko, most of which turn out not to be bugs at all, that noise ties up resources and could undermine trust in the usefulness of the system.

i_cannot_hack 29 minutes ago [-]
They mention false positives as well on GitHub:

> The rate of false positives is harder to measure, but based on limited manual reviews it's well within 20% range and the majority of it is a gray zone.
goatyishere25 8 minutes ago [-]
cool idea. curious how you're handling the cold start problem
michaelchen58 34 minutes ago [-]
nice execution. the demo video sold me more than the text
mika-el 1 hour ago [-]
The separation between who writes and who reviews is the whole thing. I do the same at a smaller scale: one model writes code, a different model reviews it. Self-review misses things, for the same reason you don't review your own PRs.
ChrisArchitect 3 hours ago [-]
https://github.com/sashiko-dev/sashiko (https://news.ycombinator.com/item?id=47427996)
4fterd4rk 4 hours ago [-]
oh god can we not
smlacy 4 hours ago [-]
What's your concern?
htx80nerd 4 hours ago [-]
Have you ever programmed with AI? It needs a lot of hand holding for even simple things sometimes. Forgets basic input, does all kinds of brain dead stuff it should know not to do.

>"good catch - thanks for pointing that out"

lame-robot-hoax 4 hours ago [-]
Can you clarify how, at all, that’s relevant to the article?
ablob 4 hours ago [-]
Both the curl and SQLite projects have been overburdened by AI bug reports. Unless the Google engineers take great care to review each potential bug for validity, the same fate might apply here. There has been a lot of news about open source projects being stuffed to the brim with low-effort, high-cost merge requests and issues. You just don't see all the work that is caused unless you have to deal with the fallout...
tonfa 2 hours ago [-]
This project has nothing to do with bug reports... it's an opt-in tool for reviewing proposed changes that kernel developers can decide to use (if they find it useful).
jamesnorden 4 hours ago [-]
Well, if it doesn't find anything it's just a waste of time at best.
danielbln 1 hour ago [-]
Prevention paradox.
asadm 4 hours ago [-]
i think it's a skill.
__tidu 4 hours ago [-]
Well, to be fair, code review is probably the most useful part of "AI coding": if it catches even a single bug you missed, it's worth it, and false positives would waste dev time but not pollute the kernel.
shevy-java 4 hours ago [-]
Now they want to kill the Linux kernel. :(

We've already seen how bug bounty projects were closed by AI spam; I think it was curl? Or some other project I don't remember right now.

I think AI tools should be required, by law, to verify that what they report is actually a true bug rather than some hypothetical, hallucinated context-dependent not-quite-a-real-bug bug.

tonfa 3 hours ago [-]
It's not forced upon anyone, it's a tool that patch authors or reviewers can use if they want to.
quantium1628 3 hours ago [-]
b2b or b2c? feels like it could go either way
qainsights 2 hours ago [-]
They would have completely redesigned Google Gerrit.