
wiremine

> The code they [LLMs] produce is often fine. It works. It passes tests. It might ship as-is.

I don't disagree, but I've been thinking about this a bit: a lot of _human_ written code was/is less-than-fine. And a lot of human devs didn't understand the context when they wrote it.

I'm not advocating that we fire devs, or evangelizing that LLMs are awesome. But I do wish there was a slightly more honest take on the pre-LLM world: it's not just about cost reduction, it's about solving some long-term structural deficiencies of industry.

soperj

> a lot of _human_ written code was/is less-than-fine.

At least at my work place this is often the case whenever we bring in contractors who will never be tasked with maintaining their code.

Middle management rarely ever gives us the time to fix this broken mess, but when they do, we actually get it into a neat, maintainable state.

We recently took 700 MB of compiled code and reduced it to under 40 MB just by getting rid of useless libraries, removing unused code, and consolidating all the build objects into one object. The code ran significantly faster too.

The result was that the release team was pissed and made a big stink about how this was causing or going to cause all sorts of issues. We surmised that it was because it reduced their OT.

It's been over a year and none of the issues have ever occurred.

My biggest issue with LLM-written code is that it's often submitted by non-programmers and often fails the first test, which is "does this even work, and does it solve the issue?" That's a test I normally wouldn't even run, since no one on my team would submit code that doesn't compile or doesn't fix the problem.

jochem9

If you're at a stage where code doesn't compile and doesn't do what you want, then that's really easy to improve.

- Let the LLM (agent) compile the code and iterate until it does.

- State your intent more clearly.

creesch

> it's not just about cost reduction, it's about solving some long-term structural deficiencies of industry.

You know, I hate that this is a world where I have to ask myself if this is LLM written because it is one of those patterns.

But that is beside the point of what I wanted to say anyway. Those deficiencies aren't going to be solved by LLMs, I reckon. In fact, they will likely make things worse. As you said, a lot of human devs didn't understand the context when they wrote code previously. True, but LLMs are even worse at context in many areas and still need human prompting for input.

The only thing I really see happening is that the blast radius of people not fully grasping the context and still producing something is going to be larger. More specifically, it is already larger. Previously incompetence limited the damage people could do, now that is less of a factor.

wiremine

> You know, I hate that this is a world where I have to ask myself if this is LLM written because it is one of those patterns.

Lol, nope, I just sound that way. :-)

undefined

[deleted]

bluefirebrand

> a lot of _human_ written code was/is less-than-fine. And a lot of human devs didn't understand the context when they wrote it.

I don't think anyone truly disputes this, but I don't think it's a good argument.

If we already had a sort of poor quality output when humans were at the helm, then what can we expect from AI steered by the sort of humans that produced poor quality code?

My guess would be even more poor quality code, at much higher volumes than before. It is impossible for me to see this as an improvement over "some people always made poor quality code"

wiremine

> then what can we expect from AI steered by the sort of humans that produced poor quality code.

Great point, and I think that's my argument: above-average engineers can now produce more above-average code. We don't need as many (any?) below-average developers moving forward.

bluefirebrand

It's possible, but I think unlikely. For one thing, most managers can't really tell who is above or below average. Even in 2026, we still have "lines of code" as a common metric for judging the quality of a dev.

So how do you think that will play out in reality?

YongHaoHu

[dead]

andybak

Personally I've found one of the biggest gains with coding agents is in helping me read code. Actually - that's a lie. I don't read the code. Mostly (unless my spidey-sense goes off) I ask the LLM to read the code and tell me what it does.

And then I make a decision based on that.

I guess I'm wondering if the article is missing half the picture. Yes - AI is wrong some of the time (and that % varies based on a host of variables). But it can read code as well as just write it. And that does matter as it changes the trade-offs this article is weighing up.

foobarian

It's been pretty great for ramping up into codebases too. "Give me a summary of project in current checkout in markdown form."

copperx

do you enjoy reading markdown? I don't, even with the editor in preview mode.

kuboble

I'm actually asking Claude to make me ELI5 HTML pages with SVG diagrams of the code I want to understand.

Once I even asked it to create a manim video about it for me.

Aurornis

There are times when reading the code is necessary, but oh boy are LLMs so much faster at finding the part of the codebase I want to read.

Several projects I work on call into or interact with gigantic codebases. A couple years ago I would sometimes have to allocate 30 minutes to either trace through the code base or set up and attach a debugger to step through the code until I found the part I needed to know about.

Now I send an LLM to go find it and it comes back with a list of files and line numbers in minutes.

It’s still not perfect. I had a codebase walk this morning where even GPT-5.5 extra high failed to find something I knew was in there on the first pass. It got it with some more directed prompting. If you delegate trust 100% to an LLM you will get bit eventually.

dilyevsky

The "It’s harder to read code than to write it" was always silly. The example spolsky gives to support it is basically "devs like to rewrite other devs code, therefore reading is hard" which is obviously bunch of nonsense. That's like saying reading poetry is harder than writing poetry because poets keep writing new poems despite the fact that Shakespeare already wrote it. Now that you can recruit LLM to explain any complicated codebase to you it's even less true.

pixl97

This isn't a great analogy. The thing about code is it is part of a whole. While often code can be read in smaller pieces and understood, quite often you have to understand a very large part, if not all the work to really see what is going on.

This is why things like SAST are topical. They miss all kinds of exploits because they don't understand the program. The more in depth you try to scan the more the memory requirements explode.

Now LLMs are much better at this, but between context windows and costs you can bankrupt yourself pretty quick putting code bases in context memory.

dilyevsky

This analogy directly addresses Spolsky's botched argument. The point is that devs went into this career to write code, not to read other people's code, whether that makes business sense or not. What you said applies to writing code just as well as to reading it, so clearly writing can't be easier than reading - it's at least as hard, and most definitely harder.

BosunoB

Exactly, and that's why this maxim about "understanding the code base" being the bottleneck is also somewhat misleading.

Claude is even better at helping you understand the code base than it is at writing code! It can look at a bunch of files and give you an accurate rundown in ten minutes.

y3ahd0g

I like to constrain it as much as possible to ignore variables and function names; the human stuff.

With a new code base, my first goal is to understand how it CRUDs state. What structures and what operations?

Less concerned at the outset if it's sorting carrots or processing orders for shirts.

AI seems to work way better for me when I tell it to ignore the use case and focus on surfacing runtime and mathematical operations embedded in the code.

Constraints on new abstraction and telling it to stick to math terms and types and objects also seems to help avoid hallucinations and layers of useless indirection.

esafak

It lets you understand the code base at reduced granularity when you want. Or zoom in beyond the written lines to explain _why_ some code is the way it is.

chris_st

I asked Claude to tell me why something was implemented the way it was, and got an excellent response. One data point, would love to hear more examples.

rectang

I agree, Claude has been pretty great at explaining code. It even does well at explaining to me old code that I wrote by hand — including both non-intuitive quirks and flaws.

Claude in its default configuration has untapped potential for explaining and documenting code because it defaults to writing no comments. I added this to my global CLAUDE.md and so far so good:

    # Comments

    Keep existing comments unless they're wrong
    or stale. Err slightly toward adding short
    explanatory comments for non-obvious code.

ai_fry_ur_brain

Why not just read the code? Y'all are literally letting your brains atrophy by using LLMs this way.

andybak

Despite you using "literally" purely for emphasis, I would dispute this based on both the literal and popular meaning of "literally".

casper14

How do I know this comment isn't just AI?

aspenmartin

Why not just read the assembly?

jbvlkt

I do not understand why this is so unpopular today. I feel like everyone now thinks that basically all of SW engineering is outdated. Are we supposed to forget all the lessons learned and let agents go through this? My opinion is to not care who did the job, but we should apply the same standard to human and AI output. I do not buy "we should not look at the code". If we should not look at it, what should we check instead to have the same control over the final product? Because not having control over the final product is so stupid right now.

larsfaye

The "don't look at the code" movement is a fad that we simply have to go through to prove that the real problems were never obvious, but lie in the subtle nuances and interplay between all the moving parts of what makes software function in the first place. We have to abuse the tool before we learn to properly use the tool.

croemer

Reads at least partially like LLM writing, for example:

> When code production gets cheap, the cost doesn't disappear. It migrates.

> It was true then. It is unavoidably true now.

superfrank

Maybe it's just because I grew up spending way too much time on the internet, but I write like that and have since well before LLMs. As much as people like to attribute that style to AI, I don't think it's the dead giveaway that people act like it is.

lamename

Exactly. HN darling Paul Graham writes this way.

I find the constant critique of punchy style a bit tiring. It would be more productive for the grandparent to think about the content and state an opinion.

andrewf

I find the constant use of punchy style tiring.

Sometimes a "punch in the face" is useful. Makes you re-evaluate things. Being punched in the face all day, every day, is just tiring.

amarant

I swear the worst part about LLMs is their critics.

code_duck

AI learned it from somewhere.

NuclearPM

You’re absolutely right. And honestly, you’re really on to something.

larsfaye

Yes—this is the part that nobody is telling you. That's the real shift that is happening.

kitsune1

[dead]

adrian_b

There is a reason why such a pattern is frequent in LLM-generated text.

Any good human-written text that provides useful information is likely to highlight in this way or in equivalent ways the contrast between what the reader is expected to incorrectly believe and the reality.

When the reader already knows what the text has to say, that text is superfluous.

Therefore a text that provides new and unexpected information, i.e. a useful text, must use some means to explain to the readers the errors of their ways.

It may use simple superposition like "it is not ... it is ..." or it may be more verbose and add "but", "however", "nonetheless" etc.

I believe that it is counterproductive to use this kind of pattern as a method for detecting AI-written texts, because it is normal for it to exist in useful human-written texts.

What should be commented on is whether that claim is true, i.e. whether the second part with "it is ..." is indeed true, or whether the whole pattern is superfluous because all of the expected readers are already aware that the first part with "it is not ..." is true.

pizzly

Sometimes I feel like we are entering a new witch hunt era, but for LLM-generated text. Before clicking submit I am sometimes afraid that the text will be labeled "LLM generated" even though it's not. Enough people classify you as a witch and you get burnt. Though in this case you only receive nasty comments, downvotes and possible social media bans.

Edit: In my observation, opinions that do not agree with yours get labeled as "AI generated" more often than opinions that agree with yours.

teiferer

We need to stand up against this by refusing to adapt. Let them scream. They are wrong. I refuse to tune my texts into a less polished form just to avoid being labeled LLM output.

csallen

> When code production gets cheap, the cost doesn't disappear. It migrates.

I'm surprised people aren't taking the time to edit this very specific kind of phrasing out of their writing. It's such a common AI tell now that, even when writing by hand, I'd just avoid it entirely.

Then again, I hated that LLMs co-opted the em-dash, and I refuse to stop using it, so I suppose I get it.

Terr_

> to edit this very specific kind of phrasing out of their writing

Even without touching moral/ethical/normative reasons, it's impractical. LLMs will continue to incorporate the most popular phrasings or grammars, and touchy readers will simply pivot to a new "telltale" du jour.

Eventually any personal or organic writing will be gone, as one twists themselves into an artificial form of "the inverse of the LLM."

> Michael Bolton: "No way, why should I change? He's the one who sucks."

csallen

> Even without touching moral/ethical/normative reasons, it's impractical.

It's impractical to edit your AI-generated writing to put it in your own voice? People have been writing unique stuff for millennia.

teiferer

Why would they have to? Just to avoid being accused of using a slop machine? If that is the only criticism you have against LLM produced text, then there is no problem.

And I'm saying this as somebody who is strongly against LLM-generated content of this form.

csallen

I have no problem with AI-generated text.

But I do have somewhat of a problem with unedited text. Personally, I even take the time to edit my HN comments.

And, for the same reason I'd have a problem watching the same episode of the same show every day, I have a problem with reading text that feels like a super derivative clone of tons of other writing. Which is usually what you get when you don't edit your AI-generated text.

danaw

I disagree, and even if assisted, the points are still valid.

madibo3156

Comment reads at least partially like human writing, for it is terse and does not try to make a point.

teiferer

Really? Do we now suspect everybody who uses the most basic of stylistic elements of producing slop?

Pendulums always swing back and forth between extremes, but oh boy did this one swing fast into witch hunt territory.

lamename

Do you disagree with the point made?

Forgeties79

If I want a boring, rote LLM answer I will prompt it myself. I don’t read blogs to have a middleman between me and a prompt.

rogerrogerr

LLMs are running a gish gallop at Internet scale. It is not necessary or possible to disprove every sequence of tokens that emerges from one.

https://en.wikipedia.org/wiki/Gish_gallop

recursive

Forming a human opinion about slop is like asymmetrical warfare. Or maybe a closer analogy is a Gish Gallop. It can be generated with way less effort than it takes to comprehend it, much less form a coherent opinion on it.

jraph

It matters whether something is written using an LLM even if we put aside the ethical aspects. Firstly, if your text is deadly boring to read, your point might not get across optimally, and one might just not be interested in reading slop. Secondly, you might just be reading the LLM's opinion, and I'm not interested in that either. Thirdly, even if you are just using the LLM as an assistant, we know that your opinion itself may be influenced by its suggestions, and since you are still under the impression that you are writing yourself (which you are, somewhat), you may internalize the suggestions as your own opinion. There are recent (probably imperfect) studies about this stuff.

lamename

I mostly agree with everything you said. Do you feel the same way about code written by an LLM?

pmmucsd

Related (maybe the same thing): whenever an agent is planning, there are often architecture and product choices that it asks humans to make. None of this intent is captured in the code or comments. We started a decisions.md file and updated CLAUDE.md and AGENTS.md to create an entry in the decisions.md file every time it has to ask a human what to do. It captures the intent, so at least we have a doc that describes why certain choices were made.
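
As a rough illustration (not the commenter's actual files), the rule and a resulting decisions.md entry might look something like this; the wording and the example decision are hypothetical:

    # In CLAUDE.md / AGENTS.md (illustrative wording)
    Whenever you have to ask a human to choose between design or
    product options, append an entry to decisions.md recording the
    question, the options considered, the decision, and who made it.

    # Example decisions.md entry (hypothetical)
    ## Session storage
    Question: keep sessions in Redis or in process memory?
    Options: Redis (shared across replicas) vs. in-memory (simpler).
    Decision: Redis, chosen by the team lead, since we expect to run
    multiple replicas.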

nostrademons

Human coders have the same problem too - oftentimes the most important question that future maintainers have of the code is "Why was this decision made?", but that's not captured anywhere in the code itself.

The right place for this is usually in the design doc or commit message, and robust engineering organizations will ensure that commits are cross-referenced back to design and requirements docs so you can trace decisions from git blame back to the actual rationale.

The same process also works pretty well with LLMs. Google, for example, is internally championing a process where the engineer has a dialog with the LLM to generate a design doc, oftentimes with an adversarial LLM to poke holes in the design. Once the design is fully specified, the last step is to ask the LLM to turn the design doc into code. This creates a human-readable artifact that traces the decisions that the human and AI collaboratively made, which then can be traced back from the code.

irq-1

> Google, for example, is internally championing a process where the engineer has a dialog with the LLM to generate a design doc ...

Smart. They'll be able to regenerate code in the future, with better LLMs. It also lets them redo the architecture combined with other parts of the system as context grows.

CharlesW

> Whenever an agent is planning there are often architecture and product choices that it asks humans to make. None of this intent is captured in the code or comments.

It must be in order to have an effective AI-assisted software engineering workflow, but it's not something that's magically done for you (usually). Knowledge management of project context is everything. Even the little music app I'm building has >100 ADRs (https://en.wikipedia.org/wiki/Architectural_decision), and that's just one form of project memory.

sanderjd

In my experience, those decisions generally (without additional prompting) end up in the PR descriptions, and then in commit history. Which is how it was with human written code as well...

Or if it's more product level decisions than technical, those conversations tend to be documented in tickets. Which, again, that's how this already worked.

alexhans

To me, design files living in the repo are what captures these high and low level decisions. They should express intent which is paramount for automation.

Where are you keeping your design? If it's a WYSIWYG doc like word, what would it take for you to switch to a more text driven world?

rhgraysonii

I have a tool for this https://deciduous.dev

hombre_fatal

(The comment made me start hashing out some early stage thoughts here)

Intent is something that is already sorely missing in most software projects.

At best you might get good commit messages, or messages that link back to good issue/PR discussion.

But something I realize is that /implementation/ intent is also important, not just higher level "why are we here in the first place" intent.

We tend to cargo cult the idea that you shouldn't have to explain what you're changing because it can be derived from the change. e.g. Commit messages should only explain the why since the what is redundant.

But the what is not redundant. Most bugs are a desync from the what, since you obviously didn't mean to implement the bug. And you didn't mean to use UUID v1 instead of UUID v4 for that session token.

Now that my workflow is centered around generating plan files instead of doing the implementation myself, the goal is specifying the plan with enough granularity that anyone could implement the intended change when given the plan.

It's data I wish we always had in a software project, but it would involve way too much work and energy to do this level of planning before every change until the LLM age.

Now that implementation is automated, we live at this specification step. And it's really the specification/plan that I want to see first rather than the code.

I think a great way to approach this is to change the PR / code review step to center around revising a markdown plan file until you can get the intent and specification right. That makes a lot more sense than only reviewing the code and then deriving the why and the intended what.

e.g. For my own projects, I'd rather someone submit a plan file as a PR that we can revise together, not code. The code is a derivation of the plan. And when code is derived from a plan, then the plan must contain all of the knowledge that is applied to (or missing from) the code since it's the source of truth.

Compare that to the status quo where you start with a PR code change and then kinda work backwards to synthesize the why and intended what, then see if the real what synchronizes with intention.
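
A minimal sketch of the kind of plan file described above; the feature, numbers, and headings are illustrative, not the commenter's:

    # Plan: add rate limiting to the public search endpoint
    ## Why
    Anonymous clients can currently exhaust the endpoint.
    ## Intended change
    - Token-bucket limiter keyed on client IP, 60 requests/minute.
    - Return HTTP 429 with a Retry-After header when the bucket is empty.
    ## Out of scope
    - Per-account quotas.
    ## Acceptance
    - Existing integration tests still pass.
    - New test: the 61st request within a minute gets a 429.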

crooked-v

The structure https://acai.sh/ uses feels to me like a decent starting point for this kind of thing that avoids the "single giant file" issue.

kadhirvelm

Curious what other teams are doing to keep encouraging people to think critically about their code? I’ve been finding it harder to keep people motivated, keep them engaged with all the changes coming in. And I can’t blame them, it’s been overwhelming. Is everyone else just using more AI..?

zeeveener

Personally, guiding junior teammates down the path away from purely "Writer of Code" to "Implementer of Functionality" has been working well for me.

If you get them involved in the design process, they feel heard. Feeling heard is one surefire way to have a person feel involved. Feeling involved fosters a sense of ownership and pride which in turn helps keep a person engaged.

kadhirvelm

Yeah I like that, there’s probably something to this full stack builder persona that could keep people motivated long term. So long as they feel ownership, seems like that’s a pretty good bet for long term engagement

copperx

Management 101.

sanderjd

Yeah this is a good call out. What I've been doing so far is just hoping and praying that people naturally evolve their entire way of working without becoming demotivated or burned out. Seems like a bad strategy.

evilduck

I had a senior engineer last week just completely fail to implement an entire subsection of acceptance criteria on a story before creating a PR and got the answer/attempted excuse that the AI agent "missed it" and they didn't double check the work. This is something I haven't seen a senior engineer do in recent memory prior to AI entering the workplace, they at least read the requirements back then even if they misinterpreted them.

Literally just a few minutes ago I had a senior architect send out a clearly Claude-generated .docx project proposal filled to the brim with stuff that is outright not permitted in our organization due to legal constraints and IT policy decisions. This guy has been with the organization for a few years so none of our limitations should have been a surprise. I got to page two before I replied that it was unprofessional to send me AI slop and expect me to be the first to read it.

Some people seem really great with AI as a force multiplier. For others their brain just shuts off entirely. Sorting out how it will affect each person is the real challenge now. We don't even mandate AI usage, it's just there if you want to use it which makes it even more frustrating. As for my approach? A one on one discussion reiterating my expectations that they remain thoughtfully engaged and if needed followed up by formal verbal warnings and then PIPs.

somewhatgoated

More AI is always the answer.

nabbed

I worried this blog post was going to pivot into a marketing pitch for some product, but no, it just describes the issue where the AI tool that generates your code probably won't document its reasons for the choices it makes. That documentation problem exists in the pre-AI era too, except that the reasons might exist in the heads of your co-workers and could possibly be teased out.

I know nothing about AI code generation (or about AI in general), but I wonder if you could include in your prompt a request that the AI describe the reasons for its choices and actually include those reasons as comments in the code.
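
For illustration, such a prompt or agent-instruction addition could read roughly like this (purely hypothetical wording; whether the stated reasons reflect the model's actual process is what the replies below question):

    When you make a non-obvious choice (library, data structure,
    algorithm, error handling), add a one-line comment at that spot
    stating why you chose it and what alternative you rejected.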

jnovek

The AI can't really describe its reasoning, though. It can only look at its context history and find a justification (which it will then present as reasoning). In my experience asking the model "why did you do that" carries substantial hallucination risk.

0gs

True, though I have found that forcing (I use an agent skill to do this) an LLM's agent to document the reasoning behind each "decision" it makes seems to lead to better decision-making. Or at least, more justifiable decisions (even if the justification is bad).

larsfaye

Not only can it not describe its reasoning, it can't "remember" if you ask it later; it can only observe what is. Nor can it be consistent; I've had it shift reasoning numerous times as the questioning continues, only to come full circle to its original statement while it apologizes profusely for being misleading.

The model will always be completing the story you start with it. There's no opinion to uncover because there's no experience that occurred. It's impossible to know where your influence ends and the model's factual basis begins.

dalmo3

While you're technically correct, I found that a simple "give me the strongest arguments for and against this, cite your sources" works wonders.

localhoster

I think the issue is that the majority of agentic developers don't understand their design fully. Those holes, previously, had to be closed by a person eventually, and you inherently gained understanding in the process.

This is not the case any more. I never thought LLMs write bad solutions, but when you let them think for you, you lose something important: understanding. And when something breaks, some people consider this proof that the LLM is the problem. And in a sense it is. But you are also to blame. And that's exactly the issue with this whole industry: in order to move faster you don't need to "type" faster. A person bashing his hands on the keyboard randomly at top velocity can get pretty high WPM. In order to move faster you need to _think_ faster, be _snappier_ and _sharper_, and most people aren't.

I like how one of my colleagues phrased it in a company meeting. He asked one of the C-levels: "For LLMs to 10x me, I need to let them make all decisions and dictate intentions. By percentage, how much of the code do you want me to _know_?"

andybak

Isn't one of the common criticisms of AI code that it's a bit too keen to write extensive comments - even when they probably aren't needed?

rectang

The default for Claude today is "no comments". Here's a conversation I had with Claude about it recently:

---

Me: I've noticed that Claude is deleting my comments and tends not to insert many of its own. I don't want to go too crazy, but can we modify CLAUDE.md to dial up the comments a little? Like, try not to delete comments that are still relevant, and add a few explanatory comments.

Claude: [... proposal for modifying CLAUDE.md ...]

Claude: Want me to apply this, tweak the wording, or place it differently? Note that your global ~/.claude/CLAUDE.md-style instructions (visible to me as system guidance) currently say "default to writing no comments" — the project-level rule above will override that for this repo, which is what you want.

Me: Where is this rule that says "default to writing no comments"? That seems crazy?

Claude: That text is part of the system prompt I'm given by Claude Code itself — it's not in any file in your project or in your ~/.claude/ config. It's baked into the harness's default instructions to the model.

Claude: Specifically, two lines from my system prompt:

Claude: Under "Doing tasks": "Default to writing no comments. Only add one when the WHY is non-obvious..."

Claude: Under "Text output": "In code: default to writing no comments. Never write multi-paragraph docstrings or multi-line comment blocks — one short line max."

mbauman

Well the comments tend to be superfluous "whats" (describing the code itself) instead of the more helpful "whys." And they're almost never the most useful "why nots".

andybak

That's different to my (recent) experience. I get fairly insightful "whys". At least on bugfix commits.

idle_zealot

> And they're almost never the most useful "why nots".

Really? I find that Claude really likes to write "why nots" in comments when iterating on implementations and fixing bugs, to the extent that the comments grow into spot-logs of overly-specific documentation of what was tried and why it was scrapped.

Semaphor

IME the comments are not useful though. I'm not the biggest fan of AI code, but the code's still way higher quality than the "this is what this does" comments.

ddosmax556

Perfectly possible IME, it just requires more time. Even if you understand all the (important) parts of your code and validate it, AI still helps with productivity a lot. It's just not as fast as blindly vibe coding - not at first, at least. Agents on their own without checks, and superficial prompts without understanding, result in agents either introducing bugs or, if you have an appropriate test harness, being painfully slow, in my experience. It requires skill to use an agent effectively.

sanderjd

But I don't relate to "it probably won't document its reasons"... I spend more time telling it that it has over-documented than under-documented.

It always wants to include the specific thing it is fixing in the API docs for methods it touches. No, this is not the place for a changelog of why this method is implemented the way it is! Describe the interface and the contract, these details are non sequiturs and they are a distraction.

zeeveener

Isn't this what good commit messages are intended to address? If a person has gone so far as to allow an agent to write all their code, they're likely having it commit on their behalf too.

Writing a skill / set of rules around what makes a good commit message would encourage the LLM to record its reasoning (however much we truly consider it to be "reasoning").
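
Purely as an illustration (not a rule from the thread), such a commit-message rule could read something like:

    # Commit messages (illustrative rule)
    Every commit message must include:
    - A one-line summary of the intended behavior change.
    - A "Why" section describing the problem being solved.
    - A "Decisions" section listing alternatives that were considered
      and why they were rejected.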

rectang

By default, Claude is set to avoid comments. Eventually I got tired of it deleting my own explanatory comments and overrode the behavior with an addition to CLAUDE.md.

I think Claude is just behaving like all those programmers who make a VERY BIG DEAL about how much they HATE HATE HATE comments that might reiterate what the code does so will go to extreme measures to avoid them.

radicalbyte

Very few developers do that, especially on early / key parts of the codebase.

olsondv

When I generate code with AI, I will read through each change as it makes them (babysitting). If I don’t understand it, then I ask for explanation right away. At least by the end I have a grasp on what each change does and the reasoning. Then, I can make a PR and highlight the same info for my reviewer and for longevity. Our codebase style is not to litter comments everywhere. We go back to the code review for details and discussion. Obviously, this only works if the changes are small.

sanderjd

Yeah I work in pretty small chunks and do a lot of iteration on each chunk.

This is harder when I'm reviewing other peoples' work. It has definitely gotten to the point where I spend more time reviewing than implementing and find it to be the much more difficult part of the job. Which honestly sucks.

rrr_oh_man

> The code they [LLMs] produce is often fine. It works. It passes tests. It might ship as-is.

The blog posts they [LLMs] write are often fine. They work. They pass tests. They might ship as-is.

cicko

What is wrong with using LLMs to analyze and explain code? Am I missing something? Before writing code, this is an even easier task to accomplish using AI.

_diyar

I think a huge gap in the market today is documentation that is both easy for humans to navigate and understand, but also readily ingestible for agents.

allthetime

Self generating docs based on docstring comments are great. LLMs are capable of generating architectural overview docs from these. What more do you need?

sanderjd

There is something to this, but to the concluding paragraph: I think these tools already are extremely good at helping us understand code, in addition to helping us generate it.
