dimtion
I'm not sure why people struggle with the fact that an abstraction can be built on top of a non-deterministic and stochastic system. Many such abstractions already exist in the world we live in.
Take sending a packet over a noisy, low-SNR cell network. A high number of packets may be lost. This doesn't prevent me, as a software developer, from building an abstraction on top of a "mostly-reliable" TCP connection to deliver my website.
There are times when the service doesn't work, particularly when the packet loss rate is too high. I can still incorporate these failures into my mental model of the abstraction (e.g. through TIMEOUTs, CONN_ERRs…).
Much of engineering and reliability history revolves around building mathematical models on top of an unpredictable world. We are far from solving this problem with LLMs, but this doesn't prevent me from thinking of LLMs as a new level of abstraction that can edit and transform code.
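To make that concrete, here is a minimal sketch of such an abstraction (the names and the loss rate are illustrative, not any real API):

  import random, time

  def unreliable_send(packet):
      # Stochastic lower layer: drops roughly 30% of packets.
      if random.random() < 0.3:
          raise TimeoutError("packet lost")
      return "ack"

  def reliable_send(packet, retries=5, backoff=0.1):
      # The abstraction: a deterministic contract (ack or CONN_ERR)
      # built on top of a stochastic channel.
      for attempt in range(retries):
          try:
              return unreliable_send(packet)
          except TimeoutError:
              time.sleep(backoff * 2 ** attempt)
      raise ConnectionError("CONN_ERR: gave up after %d retries" % retries)

The caller of reliable_send never sees the randomness, only the two outcomes it was promised.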
distalx
A transmission error has a strictly contained, predictable blast radius. If a packet drops, the system knows exactly how to handle it: it throws a timeout, drops a connection, or asks for a retry. The worst-case scenario is known.
A reasoning error has an infinite, unpredictable blast radius. When an LLM hallucinates, it doesn't fail safely; it writes perfectly compiling code that does the wrong thing. That "wrong thing" might just render a button incorrectly, or it might silently delete your production database or open a security backdoor.
You can build reliable abstractions over failures that are predictable and contained. You cannot abstract away unpredictable destruction.
yunwal
> A reasoning error has an infinite, unpredictable blast radius.
Says who? It’s quite easy to limit the blast radius of a reasoning error.
distalx
In late 2023, a Chevy dealership deployed an AI chatbot that confidently agreed to sell a customer a 2024 Chevy Tahoe for $1. It committed a catastrophic business error simply because it didn't know its logic was wrong.
Sure, you can patch that specific case with guardrails, but how many unpredictable edge cases are you going to cover? It only takes a user with a bit of ingenuity to circumvent them. There are already several examples of AI agents getting stuck in infinite loops, burning through massive API bills while achieving absolutely nothing.
You can contain a system failure, but you cannot contain a logic failure if the system doesn't know the logic is wrong.
amazingamazing
How so?
Suppose you had:
  Math()
    Add()
    Subtract()

  Program()
    Math("calculate rate")

This is intentionally written vaguely. How do you ensure that Program() runs and does the right thing when there is no guarantee that Math() or its components are correct?
Normally you could use a typed programming language, unit tests, etc., but if the LLM is the ultimate abstraction, programs will be written like the lines above. At some point traditional software engineering principles will need to apply.
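For instance, here is a sketch of wrapping an LLM-written component in those traditional guardrails (the names and test values are made up):

  def llm_generated_add(a, b):
      # Pretend this body came from an LLM; we can't assume it's correct.
      return a + b

  def test_add():
      # The traditional guardrail: properties the implementation must
      # satisfy no matter how it was written.
      assert llm_generated_add(2, 3) == 5
      assert llm_generated_add(-1, 1) == 0
      assert llm_generated_add(0, 0) == 0

  test_add()

The tests constrain the behavior even though nothing constrains how the body was produced.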
DiscourseFan
Very few people are even beginning to understand the constraints of these systems, and none of them have yet been elevated to high enough positions of prominence to rise above the noise of all the hype. Give us some time man, jeez
harrall
A transmission error does not have a strictly contained blast radius.
A bad packet could tell a space probe to fire all its thrusters and deplete its fuel in 15 minutes.
What makes a transmission error controlled is all the protection mechanisms on top of it. An LLM cannot delete a production database unless you give it access to do it.
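A sketch of what I mean by a protection mechanism (the wrapper and names are hypothetical; db is assumed to be some DB-API connection):

  ALLOWED = ("SELECT",)  # the only capability granted to the agent

  def run_agent_sql(db, query):
      # The blast radius is bounded by this wrapper, not by the LLM's
      # judgment: a hallucinated DROP TABLE is rejected before it ever
      # reaches the database.
      tokens = query.strip().split()
      if not tokens or tokens[0].upper() not in ALLOWED:
          raise PermissionError("agent may not execute: %s" % query)
      return db.execute(query)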
My hot take is that many people are naturally more comfortable with deterministic systems that have clearly analyzable outcomes. Software engineering has historically primarily been oriented around deterministic systems and it has attracted that type of thinker.
But many of us, myself included, prefer chaotic systems where you can't fully nail down every cause and effect. The challenge of building a prediction model on top of chaos is exhilarating. I really don't find as many people like me in SWE as in, say, the graphic design department.
To me, that’s the underlying threat here — LLMs are rewriting a field that has previously self selected a certain type of person and this, quite understandably, rubs them the wrong way.
zadikian
This is sorta how I've felt working the past ~7 years.
Simple example: we've been striving for 90% unit test coverage when there's 0% integration test coverage. I blame the metrics only looking at unit tests, but also many people think unit tests should come first. I would prioritize integration tests, like the sketch below. There are some small pieces that need to work reliably, but if your system depends so heavily on all of them working right, it's a bad system. That, and too many things will work in pieces but not as a whole.
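The kind of test I'd prioritize looks roughly like this (a sketch with made-up names, run against an in-memory SQLite database rather than mocks):

  import sqlite3

  def create_user(db, name):
      db.execute("INSERT INTO users (name) VALUES (?)", (name,))

  def get_user(db, name):
      return db.execute(
          "SELECT name FROM users WHERE name = ?", (name,)).fetchone()

  def test_user_roundtrip():
      # Exercises the real write/read path against an actual (in-memory)
      # database instead of mocking each piece in isolation.
      db = sqlite3.connect(":memory:")
      db.execute("CREATE TABLE users (name TEXT)")
      create_user(db, "alice")
      assert get_user(db, "alice") == ("alice",)

  test_user_roundtrip()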
Broadly I'm gonna assume that the team will later hire solid SWEs who don't necessarily know how our stuff works, and aren't going to read 100 docs about it. If this is a backend+DB combo, get your DB right and there won't be too many wrong ways to code against it in the future, get it wrong and it becomes a black hole for SWE-time. Or if someone on their first day can't run a system locally for debugging, no matter how elegant the code is, don't count on that system getting fixed quickly during an outage.
sublinear
Yes, but when all it takes to avoid this chaos is hiring someone with at least 5 or 10 years of experience for a reasonable wage, this entire perspective looks insane.
It's... just... not that hard to write code nor does it cost that much. There are millions of us working silently at places that aren't "big tech". We all shrugged our shoulders, took a sip of coffee, and went back to our Teams meetings where the only LLM usage is still just Copilot.
c-linkage
I don't need to be able to write proofs about my maths using logic and determinism. If the answer comes out in a way that I like then it has to be correct!
aeon_ai
Insightful.
Feels like this maps to the J/P axis of Myers-Briggs.
zadikian
I'm fine with that. The part that makes it not really an abstraction is that you still deliver code in the end. It'd be different if your deliverable were prompt+conversation and the code were merely an intermediate build artifact. Usually people throw away the convo. Some have tried making markdown files the deliverable instead; so far that doesn't really work.
It makes even less sense when people compare an LLM to a compiler. Imagine making a pull request that's just adding a binary because you threw the source code away.
mpyne
The whole field of reproducible builds is only a field because compilers have historically had trouble producing binary artifacts with guaranteed provenance and binary compatibility, even when built from the same source code.
If I assign a bug fix ticket to a human developer on my team, I won't be able to precisely replicate how they go about solving the bug but for many bugs I can at least be assured that the bug will get solved, and that I understand the basic approach the assigned dev would use to troubleshoot and resolve the ticket.
This is an organizational abstraction but it's an abstraction just the same, leaky as it is.
kibwen
> The whole field of reproducible builds is only a field because compilers also have had trouble
No, this is not comparable. The reason reproducible builds are tricky is not because compilers are inherently prone to randomness; it's because binaries often bake in things like timestamps and the exact pathnames of the system used to produce the build. People need to stop comparing LLMs to compilers; it's an embarrassingly poor analogy.
z3c0
It's an abstraction for you, not the rest of that developer's team, who have to reproduce the same solution even after said developer has "won the lottery", so-to-speak.
inb4: "Don't worry, just use GPT to make the docs"
vrighter
But even if it didn't, it still produced a binary that is mathematically proven (assuming no compiler bugs, which, once found, are fully fixable, unlike LLMs) to correspond to the code you wrote.
danenania
This is a great point. We’re very much in a transitional phase on this, but I personally do see signs in my own work with agents that we are heading toward the main deliverable being a readme/docs.
The code is still important, but I could see it becoming something that humans rarely engage with.
HarHarVeryFunny
> We are far from solving this problem with LLMs, but this doesn't prevent me from thinking of LLMs as a new level of abstraction that can edit and transform code.
That's more anthropomorphism than an abstraction. An LLM talks like a person because it was trained to predict continuations of human speech. That does not make it a person, or a system with intent, responsibility or any other human attribute. They are what they are: text prediction engines.
Perhaps your input to the LLM is "make all the test cases pass", and so it predicts it better do something to make the test cases pass, and does so by deleting the test cases. I guess in the "abstract" sense it did what you asked.
Or how about the case in the news from a few days ago where an agentic system deleted all of the vendor's customers' data, and the last 3 months of backups, despite having been EXPLICITLY "told" not to do any such thing. Should we consider "completely fuck the customer" an "abstraction" of "never delete any data"?
evrydayhustling
Aside from deeply unpredictable factors (like signal transmission), most users of higher-level abstractions use them without certainty about how the translation will be executed. For example, one of the main selling points of C when I was growing up was that you could write code independent of architecture and leave the architecture-specific translation to assembly to the compiler!
Abstractions often embrace nondeterministic translation because lower-level details are unknown at the time of expression, which is the motivation for many LLM queries.
qazxcvbnmlp
Grocery stores are a level of abstraction. Exchange money, get food. If your whole life you had grown your own food, it might feel a bit strange.
Occasionally the low-level details leak through, e.g. this egg came from this farm, or there's a shipping issue so onions are more expensive, or whatever.
I think LLM-assisted coding is going to work something like this.
vrighter
But you either exchange money and get the food you want, always, or it's out of stock, so you don't get the food but you keep your money, guaranteed.
It's not a good analogy. With an LLM you might ask for a pea, be parted with your money, and be given a watermelon.
cestith
So LLMs are more like InstaCart or DoorDash.
kazinator
This is about the reverse: a non-deterministic/stochastic system built using reliable abstractions.
Also, the problem of "did we receive the packet, and is it correct" is trivial compared to "did we get correct LLM output".
The problem in networking is making the reliable transfer perform well under many conditions, and scale.
Getting consistently reliable output of LLMs isn't solved, so we can't talk about it scaling to even one instance.
ritcgab
Is "Mostly-reliable" TCP connection a real thing? A TCP connection is either reliable or not working at all. That is what a proper abstraction should be like.
rock_artist
> I'm not sure why people struggle with the fact that an abstraction can be built on top of a non-deterministic and stochastic system. Many such abstractions already exist in the world we live in.
It depends on what the abstraction is.
Using an LLM for coding 'abstracts' the developer, adding an extra layer that can produce code. But it's not an abstraction layer over the code itself.
Terr_
I think a less brittle claim would be that they are, at best, an extraordinarily leaky and idiosyncratic layer of "abstraction", enough so that for certain tasks you wouldn't want to use the term at all.
It's like saying human personal assistant Bob is an abstraction over your calendar and shopping list.
In other words, it depends where the people talking have placed their cutoff point for a good abstraction versus a terrible and unwise one.
dpark
> Each move from one layer of the tech stack to a higher one involved a function: f(x) -> y. Given a specific x, you always get a specific y as the artifact being generated.

Not at all. If this were true then the Python code in question would generate a deterministic binary. Of course that's not what happens. The Python runs through an interpreter that may change behavior on different runs. It may change behavior version to version. It may even change behavior during multiple invocations of a function in the same running instance. Because all of that is abstracted away.
Same for the C code. You give up control and some determinism for the higher abstraction. You might get the same output between compilations on the same version, but even that's not guaranteed, and version-to-version consistency certainly isn't.
Moving to a higher layer of abstraction very often results in less constrained behavior.
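A concrete Python example of run-to-run variation: since hash randomization became the default (Python 3.3), the iteration order of a set of strings can differ between two runs of the identical program unless PYTHONHASHSEED is pinned.

  # Run this twice; the printed order may differ between invocations
  # because str hashes are salted per process (see PYTHONHASHSEED).
  print(list({"alpha", "beta", "gamma"}))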
HarHarVeryFunny
That's not a good analogy.
With a high level language implementation of any sort, the actual instructions executed by the CPU may vary according to how it was compiled or run, or what machine you run it on, but the behavior will not.
The high level language defines its own level of abstraction with exact behavior, allowing the developer full control over program behavior, algorithms, UI, etc.
An LLM + natural language instructions is not a program-like abstraction of what you want the computer to do, because it does not have that level of precision. Natural languages are fuzzy and imprecise, because they have been developed for communication, not precise machine-level specification.
Obviously you can "vibe code" at different levels of detail, ranging from "build me an app to do X" to "here are 20 1000-word essays specifying the dos and don'ts of what I want you to build", but in either case you are nowhere near the level of precision of using a programming language to specify exactly what you want.
So sure, "vibe coding" lets you accomplish A result with less attention to detail than using a programming language, but it's not a "higher level abstraction" in the sense that HLLs are, since it doesn't define what that abstraction is. It lets you get A result, but not define a SPECIFIC result, since natural languages just aren't that precise... natural language means whatever the person/thing interpreting it interprets it to mean.
Of course you can use an LLM as a way to "rough out" a function or app, and as a crude tool to manipulate that roughed out form (or an existing project), but natural language does not have precise semantics and therefore cannot provide a precise definition of what you want to do.
dpark
It wasn’t my analogy. It was the article’s so I responded to that. There are many more (and better) examples of how abstractions give up control, precision, and/or determinism.
> The high level language defines its own level of abstraction with exact behavior
This is not entirely true. A high level language defines some behaviors, leaving many behaviors to be undefined and implementation specific.
Many of those unspecified behaviors can matter in some cases.
> It lets you get A result, but not define a SPECIFIC result, since natural languages just aren't that precise...
You are repeating the same error as the article and missing the fact that while an abstraction lets you specify or control some things, it leaves many things out of your control. The higher the abstraction, the more stuff that is left out of your control. And maybe you don’t care about the things outside your control (great, the abstraction worked!) but regardless there are many things left unspecified in the typical abstraction and very often eventually you will care, which is why people say things like “all abstractions are leaky”.
For a simple example, think of writing something like this:
  MessageBox("hello world", OkCancel)
MessageBox is an abstraction over a massive amount of logic. You specified a string and a set of buttons and not much else. You give up control over the styling, the placement of the buttons, the actual button text (very likely will be localized), where the box will appear, and so much more.
You are not getting a specific result. You are getting a result that meets the contract.
“Write a program that shows a hello world message box” is a much higher level of abstraction than even that, and you are giving up significant specificity and determinism. Is it a good abstraction? That’s a great question. But it certainly is an abstraction.
HarHarVeryFunny
> This is not entirely true. A high level language defines some behaviors, leaving many behaviors to be undefined and implementation specific.
Sure, but you don't need to use those, and shouldn't.
A programming language lets you avoid undefined behavior and stick to the defined abstraction provided by the language.
Natural language does NOT let you do this, because words have no strict meaning, and the meaning of any sentence is undefined and up for interpretation and contextual clarification, etc, etc. Maybe more to the point LLMs are not concerned with meaning - they are concerned with continuation prediction. The LLM/agent that "ignored instructions" and deleted all the customer's data and backups wasn't "being bad" or "ignoring instructions", it was just statistically predicting, and someone was daft enough to feed those predictions into an execution environment where real world consequences could ensue.
HarHarVeryFunny
> “Write a program that shows a hello world message box” is a much higher level of abstraction than even that, and you are giving up significant specificity and determinism. Is it a good abstraction? That’s a great question. But it certainly is an abstraction.
But to whom/what is it an abstraction?
To a human, sure. If I told an intern to "write a hello world message box", I'd expect at least to get something approximating that request.
To an LLM? The LLM has no intent or understanding - it's just a statistical predictor. Maybe it'll "interpret" your request as only wanting a hello world message box, so it'll delete your company's entire git repository to ensure a clean slate to start from.
I think that when you say "it is certainly an abstraction" what you implicitly mean is "it is certainly an abstraction TO ME", but an LLM is not you, and does not have a brain, so what is an abstraction to you shouldn't be taken as being an abstraction to an LLM (whatever that would even mean).
slopinthebag
Can you explain how Python or C programs change from invocation to invocation?
dpark
Mostly because the behavior is implementation defined. So long as the behavior meets the contract, the compiler/interpreter is free to do whatever it wants.
Python could certainly optimize repeated code paths to make them more efficient. I don’t know that the standard implementation actually does, but it could. Spending extra time optimizing repeated code paths is a reasonable choice for an interpreted or JIT compiled language.
I would not expect C to change from invocation to invocation mostly because C is supposed to be boring and predictable. That’s kind of its thing. But again, it could. There’s nothing in the C spec I’m aware of that says the C compiler has to ensure that each invocation of a piece of code will execute the same machine instructions.
yuye
>So long as the behavior meets the contract, the compiler/interpreter is free to do whatever it wants.
Yes, and that's how it's supposed to be. Any description that determines the totality of a problem space is an implementation itself.
Imagine the following requirements:
f(0) = 0, f(2) = 4
Both f(x) = x^2 and f(x) = 2x are correct ways to implement said requirements. But if you start relying on f(1) = 2, you might get in trouble with a coworker that relies on f(1) = 1. This is undefined behavior and should be avoided.
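Spelled out as code (a sketch):

  def f_square(x):
      return x ** 2  # f(0) = 0, f(2) = 4: meets the spec

  def f_double(x):
      return 2 * x   # f(0) = 0, f(2) = 4: also meets the spec

  # Both satisfy the requirements but disagree where the spec is silent:
  assert f_square(1) == 1
  assert f_double(1) == 2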
>There’s nothing in the C spec I’m aware of that says the C compiler has to ensure that each invocation of a piece of code will execute the same machine instructions.
It can't, because C can be written for any system you want. If I ask the compiler to compile x *= 2, it might use the mul instruction or it might use shl; both are OK.
slopinthebag
Ok but how does that change the behaviour of the program?
samstokes
I wouldn't agree that LLMs are a higher level of abstraction, but I've found they do help me think at a higher level of abstraction, by temporarily outsourcing cognitive load.
With changes like substantial refactors or ambitious feature additions, it's easy to exceed the infamous "seven things I can remember at once":
* the idea for the big change itself
* my reason for making the change
* the relevant components and how they currently work
* the new way they'll fit together after the change
* the messy intermediate state when I'm half finished but still need a working system to get feedback
* edge cases I'm ignoring for now but will have to tackle eventually
* actual code changes
* how I'm going to test this
Good lab notes, specs etc. can help, but it's a lot to keep in mind. In practice these often turn into multi-person projects, and communication is hard, so that often means delay or drift. Having an agent temporarily worry about
* wiring a new parameter through several layers
* writing a test harness for an untested component
* experimentally adding multibyte character support on a branch
frees up my mental bandwidth for the harder parts of the problem.
The main benefit is to defer the concern until I have a mostly working system. Then I come back and review its output, since I'm still responsible for what it delivers, and I want better than "mostly working".
conradludgate
This is what I've found to be very successful for me. My flavour of ADHD has historically made it hard for me to start new projects as I get very stuck on all of the little details from the start, while also thinking about the high level aspects.
Being able to spend my energy on the architectural decisions and validate my understanding before spending time on optimising the internals has actually allowed me to follow through with some of my designs.
Experimentation is then faster. If the data model wasn't good enough, I can actually experiment with it immediately, before we accidentally ship something to production and then have to deal with a very annoying data migration problem. The exact code doesn't matter to begin with when we just want to make sure the data is efficient to decode and is cache friendly.
I recently built a project I had in my mind for 3 years but could never work on because all the individual components were overwhelming. It involved e2e encryption, consensus, p2p networking, CRDTs, and API design. It was very nice to see it come together. The project ended up failing because of a flawed underlying invariant, so it was nice to validate that and finally get it out of my head.
srikanthsastry
Technically, the claim is true, but only technically. I think the reason it is not a reliable level of abstraction is due to what I like to call the "directive gap" (https://srikanth.sastry.name/garden/directive-gap/), which is the distance between the human's goal and the context available to the LLM. Theoretically, if the directive gap is zero, then with very high probability you will get the correct code. If you think of 'programming in X' as a process, then you have multiple iterations as you go from incorrect code to correct code, and the same can be true of 'programming in LLM'. As you iterate on the prompts and have verification in place, whether it is spec verification, TLA+, unit testing, CI, etc., you can get the same effect.
Now, it is an open question whether this is simpler than programming in any modern programming language. By the time you figure out the exact prompt trajectory that will build what you want, you might as well have used some fancy autocomplete IDE to write the same code. It really depends on your fluency with that specific language. People are usually very fluent in natural language, and so it levels the playing field so to speak.
yongjik
It's orthogonal to whether LLMs can be a useful abstraction layer, but ...
I have a feeling that if LLMs were built on a deterministic technology, a lot of the current AI-is-not-intelligent crowd would be saying "These LLMs can only generate one answer given a question, which means they lack human creativity and they'll never be intelligent!"
xigoi
It’s not really about determinism, but about the fact that the input to an LLM is inherently ambiguous, unlike the input to a C compiler.
xpct
Interesting. I believe some circles reached the consensus that they aren't creative, but that it's independent of their intelligence/modelling capabilities.
byzantinegene
It is a fruitless endeavour to try to appeal to a crowd that does not, and never will, understand the fundamentals of how LLMs work.
jefurii
I don't feel that this piece explains its title very well (to me), though the idea expressed by the title is spot-on.
I've gone through hand-coding HTML, CGI, CMSes, web frameworks, and CMSes built with web frameworks. Each is (roughly) a layer of abstraction on top of lower layers.
People talk about LLMs as an extension of this layering but they're not. With the layers of abstraction I've listed you can go down to the layers underneath and understand them if you take the time.
LLMs are something different. They're a replacement for or a simulation of the thinking process involved in programming at various layers.
dakial1
Your point is similar to the post's, in the sense that all abstractions are deterministic, so you could connect the higher layer directly to the lower layer, while with LLMs, by their very probabilistic/black-box nature, you can't have this direct link.
But isn’t this just a semantics discussion? Is there a rule for abstraction in CS that says it needs to be deterministic (I really don’t know)?
I believe a deterministic abstraction over natural language is impossible to reach because of its very ambiguous nature. We get misunderstandings when we talk to each other, so naturally, a machine would need to be probabilistic to understand how to translate what we say into code.
DauntingPear7
They're like an advanced form of program synthesis. Something that operates outside of the abstraction layering.
sn9
Everyone should read Jimmy Koppel's post on what abstractions are and aren't: https://www.pathsensitive.com/2022/03/abstraction-not-what-y...
Anyone claiming LLMs are a higher level of abstraction is not using the term in the way used by programmers and computer scientists.
They're usually conflating "delegation" and "abstraction", as if a junior developer is an abstraction.
Legend2440
I don't agree with this take. Determinism is a nice property for abstractions to have, but it isn't necessary to be an abstraction.
And LLMs can handle very abstract concepts that could not possibly be encoded in C++, like the user's goal in using software.
farmdawgnation
I think you could also make the case that the existing abstractions aren't actually fully deterministic themselves. The compiler or interpreter may not behave as it should. Therefore, for any correct C code, there's a probability that the GCC compiler will turn it into correctly formed machine code. But it may not!
Is the probability much higher with GCC? Sure. But it's still a probability.
anon-3988
I am sorry but this is an insane take. The probability of GCC going haywire with your special snowflake correct C code? Please. Has this EVER happened to you? I am not talking about the performance of the generated assembly, because that IS flaky, but functionality-wise I do not think so.
If people are so confident about the determinism of LLMs, or at least consider it on par with compilers, please ask it to compile your source code instead. Better yet, replace all your GNU utils with LLM instead. Replace your `ls` with `codex "prompt"`.
elwebmaster
I have done this: alias codex --yolo -p. It's very helpful not having to remember every odd command and its parameters. It's a bit more typing, but I type faster than I can invoke and scan through man pages.
hirako2000
They are deterministic. Including in the way they fail.
yuye
People forget what determinism is.
Non-deterministic systems produce different output states given identical input states.
Even if a compiler's memory gets a one-in-a-million bitflip that produces a different output, it doesn't mean it's non-deterministic. It just means that the output state is different due to an external force changing the internal state.
An infinite loop will halt when the processor is powered off.
vrighter
If a compiler bug is found, it can be fixed. You can't fix an LLM.
Havoc
When people say things like that they mean it as a rough mental model.
Bit like when people say "it's like riding a bike": they're not actually claiming it is the exact same activity as riding a bicycle.
Coming back with this in response:
> f(x) -> P(y) ∪ P(z1) ∪ P(z2) ∪ ... ∪ P(zN)
is a failure in human communication, not a disagreement about what LLMs are or aren't.
DauntingPear7
I don't think they fit in as a layer of abstraction, but instead are outside of it. An abstraction simplifies away the inner workings of what is being abstracted. The LLM exists outside of your code. It is not part of it, thus, it is not abstracting it away. If this were the case, a coworker would be an abstraction to code they own (you could argue this, but I think it erodes the meaning of abstraction). LLMs behave like program synthesizers rather than another layer of abstraction. They take natural language as input, and using fancy math produce a (hopefully) relevant and useful output based on that input. They can produce layers of abstraction, but are not part of a program's abstraction stack.
However, they can abstract away the need to understand implementation, similar to a coworker. They can summarize behavior, be queried for questions, etc, so you don't have to actually understand the inner workings of what is going on. This is a different form of abstraction than the typical abstraction stack of a program.
madisonmay
LLMs are not inherently non-deterministic during inference. I don't believe non-determinism implies lack of abstraction. Abstraction is simply hiding detail to manage complexity.
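To illustrate (a toy sketch; the logits dict stands in for a real model's output distribution): greedy decoding is fully deterministic, and even sampling is reproducible with a fixed seed.

  import math, random

  logits = {"yes": 2.1, "no": 1.9, "maybe": 0.3}  # stand-in model output

  def greedy(logits):
      # Deterministic decoding: identical logits -> identical token.
      return max(logits, key=logits.get)

  def sample(logits, seed=0):
      # Stochastic only because we choose to sample; a fixed seed makes
      # even this path reproducible.
      weights = [math.exp(v) for v in logits.values()]
      return random.Random(seed).choices(list(logits), weights=weights)[0]

  assert greedy(logits) == "yes"
  assert sample(logits, seed=0) == sample(logits, seed=0)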
danpalmer
Non-determinism is configurable at the level of the mathematical model, but current production systems do not support deterministic evaluation of LLMs.
orbital-decay
They do, though. Providers don't enable it because batching makes it cheaper. Among the providers, DeepSeek seems to support it for v4 (and they have actually optimized their kernels for batching), and Gemini Flash is "almost deterministic".
danpalmer
I'm pretty sure that the determinism issue is at the floating point math level, or even the hardware level. Just disabling batching and reducing the temperature to 0 does not result in truly deterministic answers.
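You can see the floating point issue without any ML in the loop: float addition is not associative, so any change in reduction order (from batching, kernel scheduling, etc.) can change the result.

  a, b, c = 1e16, -1e16, 1.0
  print((a + b) + c)  # 1.0
  print(a + (b + c))  # 0.0
  # On a GPU, the summation order inside a matmul depends on scheduling,
  # so logits can differ across runs even at temperature 0.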