feverzsj
Pretty much sums up the LLM fanbase.
discreteevent
I don't think it's the complete fanbase. However, there are lots of people in the world who live their whole life by vibing. It's a viable way to live and sometimes it's the only way to live. But they have a very loose relationship with truth and reason. Programming was a domain that filtered out those people because they found it hard to succeed at it. LLM's have changed that and it's a huge problem. It's hard to know if LLMs will end up being a net win for the industry. They may speed up the good programmers a little, but those people were able to program anyway without LLMs. They will speed up the bad programmers a lot and that's where the balance sheet goes into the red.
JackC
"They may speed up the good programmers a little, but those people were able to program anyway without LLMs."
I don't think this is realistic. I'm a good programmer, and it speeds up my work a lot, from "make sense of this 10-repo project I haven't worked on recently" to "for this next step I need a VPN multiplexer written in a language I don't use" to, yeah, "this 10k-line patch lets me see parts of the design space we never could have explored before." I think it's all about understanding the blast radius. Sometimes a lot of code is helpful; sometimes it's more like a lot of help proving a fact about one line of code.
Like Simon says, if I'm driving by someone else's project, I don't send the generated pull request, I just file the bug report / repro that would generate it.
kay_o
> However, there are lots of people in the world who live their whole life by vibing
Why are they often so desperate to lie and non-consensually harass others with their vibing rather than be honest about it? Why do they think they are "helping" with hallucinated rubbish that can't even build?
I use LLMs. It is not difficult to: ethically disclose your use, double check all of your work, ensure things compile without errors, not lie to others, not ask it to generate ten paragraphs of rubbish when the answer is one sentence, and respect the project's guidelines. But for so many people this seems like an impossible task.
WarmWash
A tangential side story, but an interesting one nonetheless.
I was a food delivery driver back in the mid '00s to the mid-teens. Early on, GPS was rare and expensive, so to do deliveries and do them effectively, you had to be able to read a map and mentally plan out efficient routes from the stochastic flow of incoming orders.
This acted as a natural filter, and "delivery driver" tended to be an interesting class of people, landing somewhere in the neighborhood of "lazy genius". Higher than average intelligence, lower than average motivation.
Then when smartphones exploded in the early 10's, the bar for delivering fell through the floor, and the job became swamped with people who would be best identified as "lazy unintelligent". Anyone who had a smartphone and not much life motivation was now looking to drive around delivering food for easy money.
Not saying the job was ever particularly glamorous, but it did have a natural mental barrier that tech tore down, and the result was exactly as one would predict. That being said, I'm not sure end users noticed much difference.
hirako2000
Before LLMs we could already see a growing abundance of half-baked engineers in it only for the good pay, willing to work double time to pull things off.
Management, unsurprisingly, deemed those precious. They could be emailed at any time, working weekends to fix problems their kind had caused in the first place. Sure, sir.
They excel at communication. Perfecting the art.
Now LLMs are there to accelerate the trend.
LAC-Tech
> It's hard to know if LLMs will end up being a net win for the industry. They may speed up the good programmers a little, but those people were able to program anyway without LLMs. They will speed up the bad programmers a lot and that's where the balance sheet goes into the red.
If you will forgive an appeal to authority:
The hard thing about building software is deciding what one wants to say, not saying it. No facilitation of expression can give more than marginal gains.
- Fred Brooks, 1986
pelasaco
> It's hard to know if LLMs will end up being a net win for the industry.
True. Regardless of that, with LLMs we are for sure taking on technical debt like never before.
LaGrange
For at least the last 3 decades programming was a field that rewarded utter mediocrity with (relatively to other fields) massive remuneration. It has been filled with opportunists for as long as I remember.
dominotw
Wouldn't LLMs end up doing all the tasks that deterministic programs do today? Like ChatGPT filing your taxes for you instead of you using TurboTax.
ZaoLahma
I'm firmly in the LLM fanbase. Not because I can't type code (was doing it for over 17 years, everywhere from low level hardware drivers in C to web frontend to robot development at home as a hobby - coding is fun!), but because in my profession it allows me to focus more on the abstraction layer where "it matters".
I'm not saying that I'm no longer dealing with code at all, though. The way I work is interactively with the LLM, and I pretty much tell it exactly what to do and how to do it. Sometimes all the way down to "don't copy the reference like that, grab a deep copy of the object instead". Just like with any other type of programming, the only way to achieve valuable and correct results is by knowing exactly what you want and expressing it exactly and without ambiguity.
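As a tiny illustration of that kind of instruction, here is a hypothetical Python sketch of the reference-vs-deep-copy distinction (the variable names are made up):

```python
import copy

config = {"retries": 3, "hosts": ["a.example", "b.example"]}

alias = config                    # copies the reference: both names share one dict
snapshot = copy.deepcopy(config)  # recursively copies the object and its contents

alias["retries"] = 5
print(config["retries"])    # 5: mutated through the alias
print(snapshot["retries"])  # 3: the deep copy is unaffected
```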
But I no longer need to remember most of the syntax for the language I happen to work with at the moment, and can instead spend time thinking about the high level architecture. To make sure each involved component does one thing and one thing well, with its complexities hidden behind clear interfaces.
Engineers who refuse to, or can't, or won't utilize the benefits that LLMs bring will be left behind. It's just the way it is. I'm already seeing it happening.
ap99
This mindset is fine (it's mine essentially too).
But it absolutely has to be combined with verification/testing at the same speed as code production.
0xpgm
> Engineers who refuse to, or can't, or won't utilize the benefits that LLMs bring will be left behind. It's just the way it is. I'm already seeing it happening.
Any examples of how you see some engineers being left behind?
archagon
Again, I have to point out that AI is not an abstraction layer. It blows my mind that engineers with years of experience somehow don’t understand this.
It would be an honor to be “left behind” by people who practice their craft with such carelessness.
(Frankly, I should probably stop replying to self-professed LLM boosters entirely since there’s a good chance I’m just chatting with an LLM.)
wallst07
Fanbase, maybe. Software engineers using these projects? Probably forking and updating themselves.
FWIW, I've opened a half dozen PRs from LLMs and had them approved. I have some prompts I use that make it very difficult to tell they are AI.
However, if it is a big anti-LLM project, I just fork and have agents rebase my changes.
jcgrillo
Your employer allows/encourages this? Do you run that stuff in production? Would you mind telling us where you work so we can avoid using their products? It is just not possible to trust the software that emerges from the process you've described.
andy_ppp
Not really - I imagine as with almost everything in life there's a normal distribution, in this case of the quality with which people use AI tools.
DonaldPShimoda
The normal distribution doesn't account for things like "huge megacorporations pour billions of dollars into accelerating product adoption" or "other companies force their employees to use AI whether they want to or not" though.
varispeed
"I aM someWhAt oF a DeVelOpER MySelF"
bvan
Fake it till you make it. Seems like LLMs have caught on to that too.
zeeveener
I'm personally amazed that _Large_ OSS projects don't have the appropriate automation in place to prevent non-compiling or non-linter-passing submissions.
- Hooks (although there's no clean way to enforce they be "installed" on a clone), GHA Workflows (or their equivalents on other forges).
This might be my bias showing, but these are items I would consider table-stakes for a project of a certain size / level of popularity.
It feels like a lot of the "AI is shit at contributing" problems could be addressed in part by better automated checks and balances.
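As a rough illustration of such a gate, here is a hypothetical Python script a CI workflow could run on every PR (the `zig build` and `zig fmt --check` commands are assumptions about the project's tooling):

```python
#!/usr/bin/env python3
"""Minimal CI gate: fail the check unless the tree compiles and lints cleanly."""
import subprocess
import sys

# Assumed commands; substitute whatever the project actually uses.
CHECKS = [
    ["zig", "build"],                # does the tree compile?
    ["zig", "fmt", "--check", "."],  # is the formatting clean?
]

def main() -> int:
    for cmd in CHECKS:
        print("running:", " ".join(cmd))
        if subprocess.run(cmd).returncode != 0:
            print("FAILED:", " ".join(cmd), file=sys.stderr)
            return 1  # nonzero exit marks the PR check as failed
    return 0

if __name__ == "__main__":
    sys.exit(main())
```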
jmcqk6
Those things cost resources, and now you're introducing a new attack vector: open up a bunch of shit PRs, burn a lot of cash for the target organization.
zeeveener
You're right. It doesn't solve for all scenarios and doesn't block malicious actors.
I do believe, however, that it would have a meaningful impact on the "drive-by" PRs that keep being used as examples; the thoughtless, throw-spaghetti-at-the-wall PRs that do not have malignant intent behind them.
Many large OSS projects would have the resources to eat that cost with Donors, Sponsors, and OSS hand-outs. That's why I clarified in my original post because I know this is not a general solution.
10000truths
That's why you sandbox. You can mitigate most low-hanging DoS fruit by running your server-side hooks in a per-tenant cgroup that limits CPU and memory usage. One tenant per public key for trusted contributors, and one general-purpose tenant shared by all new/unknown contributors.
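A minimal sketch of the resource-capping part, using plain POSIX rlimits instead of cgroups (the build command and the limits are illustrative assumptions; a real setup would configure cgroups on the server):

```python
#!/usr/bin/env python3
"""Run an untrusted build step under CPU and memory ceilings."""
import resource
import subprocess

CPU_SECONDS = 300            # hard cap on CPU time for the child process
MEMORY_BYTES = 2 * 1024**3   # 2 GiB address-space cap

def apply_limits():
    # Runs in the child between fork() and exec(); kills runaway builds.
    resource.setrlimit(resource.RLIMIT_CPU, (CPU_SECONDS, CPU_SECONDS))
    resource.setrlimit(resource.RLIMIT_AS, (MEMORY_BYTES, MEMORY_BYTES))

result = subprocess.run(["zig", "build", "test"], preexec_fn=apply_limits)
print("exit status:", result.returncode)
```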
all2
Can't you prevent pushing from the client side with pre-commit hooks? I would expect a hook to fire on the developer's computer that prevents them from even committing/pushing (unless they nuke the hook in their local repo copy).
abustamam
All of my personal projects, many of which will never be publicized, use hooks and GHA to ensure compilation of changes.
It is quite strange that a large project like Zig would not have such a thing. I'm sure it's not trivial but it seems important to invest time into.
pxc
> Hooks (although there's no clean way to enforce they be "installed" on a clone), GHA Workflows (or their equivalents on other forges).
Git supports pre-receive hooks. But big multitenant forges like GitHub.com don't allow you to configure them because they're difficult to secure well. (Some of their commercial features are likely based on them, though.)
If you self-host a forge, though, you can configure arbitrary pre-receive hooks for it in order to do things like prevent pushes from succeeding if they contain verifiably working secrets, for example. You could extend that to do whatever you want (at your own risk).
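For example, a pre-receive hook on a self-hosted forge could look something like this hypothetical Python sketch (`zig build` is an assumption, and a real hook also has to account for git's quarantine environment):

```python
#!/usr/bin/env python3
"""pre-receive hook: reject any push whose new tip does not compile.

Git feeds one "<old-sha> <new-sha> <refname>" line per updated ref on
stdin; a nonzero exit makes the server refuse the entire push.
"""
import subprocess
import sys
import tempfile

def commit_builds(sha: str) -> bool:
    # Extract the pushed tree into a scratch directory and try to build it.
    with tempfile.TemporaryDirectory() as workdir:
        subprocess.run(f"git archive {sha} | tar -x -C {workdir}",
                       shell=True, check=True)
        return subprocess.run(["zig", "build"], cwd=workdir).returncode == 0

for line in sys.stdin:
    old_sha, new_sha, refname = line.split()
    if not commit_builds(new_sha):
        print(f"rejected: {refname} at {new_sha[:12]} does not compile",
              file=sys.stderr)
        sys.exit(1)
```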
jmcqk6
You're still talking about compute resources that need to be paid for and maintained. Spamming AI PRs is going to cost a lot of money.
lexh
But... this particular project does have such automation in place? It isn’t hard to find:
https://codeberg.org/ziglang/zig/src/branch/master/.forgejo/...
papyrus9244
One of my pet peeves with git (and systems both similar to and based on it) is that automated tests run after you've made the commit and pushed it.
In my mind the commit (let alone the push to a publicly accessible server) should happen after, and only if, the automated tests have run successfully. And there's no easy way to implement this, other than having a dirty branch that you discard after rebasing onto a more long-lived one.
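The closest git itself offers is a client-side hook. A sketch of a pre-push hook in Python (`zig build test` stands in for whatever the project's test command is, and anyone can still bypass it with `--no-verify`):

```python
#!/usr/bin/env python3
# Save as .git/hooks/pre-push and make it executable: the tests run
# before anything leaves the machine; a nonzero exit cancels the push.
import subprocess
import sys

if subprocess.run(["zig", "build", "test"]).returncode != 0:
    print("push aborted: tests failed", file=sys.stderr)
    sys.exit(1)
```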
10000truths
You can use a pre-receive hook on a git server to reject pushes that fail compilation. Downside is that it requires admin access on git forges, so you're only able to do this if you self-host.
jwolfe
Pre-commit hooks exist. People just don't like being prevented from committing for reasons such as this.
sauercrowd
I mean, even with linters and everything in place, this still creates a whole bunch of noise in their PR section. Not to mention that a lot of the changes I make to stuff written by Codex are not things a linter would catch.
It's the bad/wrong/context-lacking decisions and mental models it introduces that, if you're not careful, will create a massive mess of a codebase. (I know, because I've tried, and had to deal with it.)
And if someone vibecodes a PR and it works, why don't they just share the prompt so a repo owner could vibecode it themselves?
abustamam
Vibe coding is often not a single prompt, it's an entire workflow (if you're doing it right).
api
This is a spam problem more than anything else. It's not really an AI problem except that it's AI that is enabling this new type of spam.
Imagine there's no AI, but for some reason you have people hiring armies of cheap overseas devs and using them to produce mediocre quality drive-by PRs. The effect would be the same.
AI can be used to make quality code, but that requires careful use of the tool... like any other tool. This isn't careful contributions made by someone who knows the project and its goals and is good at using the tool. This is spam.
colordrops
Exactly. People could have "consulted Google" or "consulted Stack Overflow" and had the same issues. It's about the end result, not how the code got to that end result, and the submitter is responsible for the quality of the submission regardless of whether AI was used or not.
To reject submissions where the dev "consulted ai" is like rejecting iron ore that was mined by a machine rather than a human. The quality of the ore is what should be measured, not how it was obtained.
api
I agree, but the problem comes back to how to evaluate quality at scale. That is very hard. It’s easier to just say no AI because that at least turns off the fire hose.
nurettin
You can coax an LLM into doing what you want. Unfortunately, people don't have the patience or the skill.
sesm
People who have skill can do the same without LLMs, maybe slightly slower on average but on more predictable schedule.
dannyw
I wouldn’t say slightly slower; LLMs are massively useful for software engineering in the right hands.
For some personal projects I still stick to the basics and write everything by hand though. It’s kinda nice and grounding; and almost feels like a detox.
For any new software engineer, I’m a strong advocate of zero LLM use (except maybe as a stack overflow alternative) for your first few months.
dgellow
The chat UX, with a fake human lying to you and framing things emotionally, really doesn't help. And it's pretty much impossible to get away from it, or at least I haven't yet found how.
I would love to see a model trained to behave way more like a tool instead of auto-completing from Reddit language patterns…
hitekker
Apparently, the noise around the AI policy came from Bun's developers saying the policy blocks upstreaming their performance PR. But the real reason seems to be that the PR's code itself isn't in great shape, and introduces unhealthy complexity https://ziggit.dev/t/bun-s-zig-fork-got-4x-faster-compilatio...
> Parallel semantic analysis has been an explicitly planned feature of the Zig compiler for a long time, and it has heavily influenced the design of the self-hosted Zig compiler. However, implementing this feature correctly has implications not only for the compiler implementation, but for the Zig language itself! Therefore, to implement this feature without an avalanche of bugs and inconsistencies, we need to make language changes.
adrian_b
Yes, that reply provides convincing arguments for not merging the Bun fork, as it interferes with Zig's own roadmap for achieving even better results, while continuing to improve the whole language.
bonzini
A single PR for a 3000-line addition would, in all likelihood, be rejected anyway.
dgellow
Really depends on the author and context. Large PRs are often justified for compiler work; you have a lot of pieces to touch at the same time.
omnimus
When somebody comments on a PR with "Incredible work, Jacob. It is an honor to call you my colleague." then it's safe to assume it's an out-of-the-ordinary contribution, pretty much falling outside the "in all likelihood".
A 3000-line LLM commit is not that.
flohofwoe
Jacob is part of the core team, not a random outside contributor.
slekker
Very different context: that PR is from a maintainer and trusted member of the Zig team, who surely discussed the implementation/design internally as well.
daishi55
What’s the point in debating the PR quality? The policy explicitly forbids all LLM code, so that policy is of course the “real reason”.
lelanthran
> What’s the point in debating the PR quality?
Because the pro-group are whining that the policy is preventing the merge, when in actual fact, even if the policy did not exist, the PR is crap anyway.
daishi55
I don’t see how it could be that bad (incorrect, specifically), considering bun is probably the most widely-used production use case of zig. But regardless, let’s say it’s a bad PR for the sake of argument - it’s beside the point. It cannot be merged no matter how good it is, due to the strict no-LLM policy.
Aeolun
Of course the policy is preventing the merge. That’s literally the point of the policy…
richiebful1
People forget that LLM-generated code cannot be covered by copyright, so it cannot be placed under an open source license.
vehemenz
This is overstated. Not all LLM code is produced the same way. Code produced through substantial human creative input still falls under copyright, at least the way things are now. Besides, nothing legally prevents placing code under a license. Enforceability is the question, not permission.
It's a bit like saying speed limits don't apply on private property, therefore you can't have any traffic rules on your private racetrack.
daishi55
This opinion does not seem grounded in reality to me.
raincole
Because it's Bun, which is practically the showcase use case for Zig.
lccerina
It seems that Zig people are following the path of ZeroMQ [1]: "To enforce collective ownership of the project, which increases economic incentive to Contributors and reduces the risk of hijack by hostile entities."
A healthy contributor community is more important than mere code performance, quantity of features, lines of code, etc.
frumiousirc
Unfortunately, those are largely words of a bygone era. The zeromq "community" today is tenuous. It has some really good people in it, the few that remain active, but the human-level processes and communication channels are ill-defined and not well "staffed". In some ways, this lack of human activity and interactivity is perhaps okay and even justified given how stable libzmq and most of its bindings are (and the sub-ecosystems around particular bindings are a bit more active). Perhaps Hintjens' grand (and excellent, imo) vision got zeromq to where it is, but the project feels to have gone adrift since we lost him. Somewhat ironically for his community-centric vision statement (the guide), it seems a project needs a charismatic and active leader to gain and retain a community. I guess that says more about human nature than it does about software development.
I'm not sure how to tie this all back to the zig story other than to point out the stated premise that zig is not short of PRs and so they can pre-select for no-LLM contributions. I think that is a good move for them and I get the "contributor poker" idea. But, the game changes when the premise breaks and the flow of newbies reduces to a trickle. At that point, if there are still active zig people who still want newbies, they may need to broaden their net. But if/when that happens, it may be too late to recover by opening to LLM-assisted contributions.
tombert
You know what? I use ZeroMQ all the time. Thanks for bringing to my attention that the community is waning; I will look into contributing to it tonight.
grokys
My issue with AI-generated OSS contributions is:
If an AI improves developer productivity so much, why would maintainers of an OSS project want unknown contributors to sit in between the maintainer and the LLM? They could just type these queries into Claude Code themselves. To quote my colleague:
> We do not need a middleman to talk to AI models. We are not bottlenecked by coding.
gus_massa
I'm hardly using AI myself, but a possible scenario is that the contributor spends something like 20 hours in total.
Something like using the AI to get an initial bad version, making some tweaks to the prompt, making some manual fixes, asking the AI to fix something else, noticing some new related feature and asking the AI to add it, running some benchmarks and deciding to remove a small feature, or perhaps deciding between two similar implementations, adding a few more manual fixes here and there, running the extended version of the automatic tests and finding a weird bug in an unusual setup, making a few fixes with the AI and manually. So after 20 hours of work, the final version has only 50 lines, each of which has been rewritten something like 5 times. Now the maintainer can review only the final version, in an hour or so.
This is very different from spending 5 minutes asking the AI to write a patch that has 1000 lines and does not even compile, and sending it to the maintainer without looking at it.
chenzhekl
Maybe you are not bottlenecked by coding, but there is a high probability that you will be bottlenecked by verifying the correctness of LLM-generated code.
Bridged7756
Crazy how this doesn't register in people's heads. Has the real bottleneck ever been writing code, rather than reviewing it and everything that involves? Understanding the nuance and implications behind design decisions; strategy.
In any REAL workload with good processes, code review makes the speed of code generation a moot point. You still move only as fast as you can review the code. And no, I won't debate whether you can rely on LLMs, probabilistic language predictors, to determine the correctness of code in the context of the business and its technical implications.
notnullorvoid
If you are a responsible maintainer, you need to verify the correctness of the contribution whether you used an LLM to generate it or whether someone else did.
Having someone else be the AI middleman just introduces additional complexity and confusion.
gwbas1c
I'm finding that AI, when successful, gives me 2-3x speedup. It's not the kind of thing I can give high-level instructions to like I can to a human.
I suspect the people who claim that AI works by only giving it high-level instructions are mostly working on "mindless" projects where a developer in the weeds wouldn't need to think very much.
eddd-ddde
This reminds me of the critique of certain kinds of art.
"It's so easy, I could have done that myself"
Well yeah, but you didn't.
mexicocitinluez
> If an AI improves developer productivity so much,
You're not suggesting the only metric of productivity is lines of code, are you? And that the only benefit of using LLMs is generating code you're too lazy to type yourself?
dgellow
> Zig values contributors over their contributions. Each contributor represents an investment by the Zig core team - the primary goal of reviewing and accepting PRs isn't to land new code, it's to help grow new contributors who can become trusted and prolific over time.
> LLM assistance breaks that completely. It doesn't matter if the LLM helps you submit a perfect PR to Zig
That’s the best rationale I’ve seen so far, and I fully support Zig’s decision here. I really appreciate their long-term vision for both the community and the actual project. I don’t think LLMs have such a great place in more collaborative efforts, to be honest. We will see how things evolve, but I do find that when I get AI-generated PRs I basically have to redo them myself (using LLMs, ironically… something I’m really starting to feel conflicted about).
dnautics
I do think LLMs are great. I vibe code a lot of Zig (working on a locally deployed, semi-embedded, on-prem device), and I think the Zig policy is a good idea, at least for the next five years.
dack
I think it's the least hostile thing they can say, and I respect their decision for their own project.
That said, it still feels like they are unnecessarily hobbling their project. LLMs are tools and they can help you think, research, and code. You can overuse them, yes, but you should embrace them where they help.
Not accepting Bun's PR for other reasons is totally fine (it sounds like it's a core change where more thinking needs to be done), but simply banning all LLM-authored PRs is unnecessarily restrictive. Just focus on the quality of the work.
brokencode
Why review thousands of lines of LLM generated code from some random person you don’t know when you could use an LLM yourself to do the same thing, except with probably a better design and more thoughtful approach?
Maintainers should get to spend their time developing stuff, not just reviewing low effort PRs. The flood of LLM code is changing the balance for the worse for maintainers, and I can totally see why they’d just want to ban it.
merlindru
But that doesn't have anything to do with LLMs.
If someone made the same gigantic mess of a PR without LLMs, it would still be rejected, because it is a gigantic mess of a PR.
The low effort part is the problem. What if I made a great, focused, readable PR but had Claude write it out? What if I carefully checked and deliberated over each line, just as if I had written it myself?
Granted, in the real world, 99.9% of slop PRs are written by LLMs. So I thought "okay, reasonable, ban the thing that is most likely to cause problems."
But then how does the "no LLM translators!" rule fit into that view?
orochimaaru
It’s the lack of friction that LLMs bring. It’s easy to put in a couple of lines and generate thousands of lines of code, whereas the person would never have produced that without LLMs.
I think LLM dev needs to take a better spec-driven approach. The vibing is getting to be annoying.
brokencode
Well previously lazy contributors simply would never have made a PR because it was too much work. Now they can have an LLM make a PR with virtually no effort at all.
It’s obviously an imperfect rule, and maybe it’ll change over time. But I am just saying that I understand why open source maintainers are doing this.
There is just no possibility for them to review all the low effort AI slop being thrown their way. Yes, some of it is going to actually be very high quality, but you don’t know that until you review it, which is the whole issue.
logicchains
>Why review thousands of lines of LLM generated code from some random person you don’t know when you could use an LLM yourself to do the same thing
Because getting an LLM to do it yourself still takes time and attention bandwidth and tokens.
brokencode
But at least you know how the sausage was made by the end. You have no idea how high or low quality any PR from a random person online is, and taking any amount of time to review a PR could be a total waste.
jart
> This makes a lot of sense to me. It relates to an idea I've seen circulating elsewhere: if a PR was mostly written by an LLM, why should a project maintainer spend time reviewing and discussing that PR as opposed to firing up their own LLM to solve the same problem?
The same argument applies to open source itself. Why use someone's project when you can just have the robot write your own? It's especially true if the open source project was vibe coded. AI and technology in general makes personalization cheap and affordable. Whereas earlier you had to use something that was mass produced to be satisfactory for everyone, now you have the hope of getting something that's outstanding for just you. It also stimulates the labor economy, because you have lots of people everywhere reinventing open source projects with their LLMs.
simonw
> Why use someone's project when you can just have the robot write your own?
I've been thinking about this a bunch recently, and I've realized that the thing I value most in software now isn't robust tests or thorough documentation - an LLM can spit those out in a few minutes. It's usage. I want to use software which other people have used before me. I want them to have encountered the bugs and sharp edges and sanded them down.
earleybird
Depth of use over the lifetime of an app is a quality all its own that is often not appreciated. A recurring pattern at $dayjob is that a new manager or director will join a business unit and declare an existing app the most terrible, no good, horrible app they've ever seen, and that they're going to fix that. A year and a half later, the new app is finally delivered with 80% of the original functionality and a fresh set of bugs. The new dev team sees the surface functionality but misses a lot of the hard-earned nuance the old system accrued over time. This is a pattern that existed long before LLMs.
mormegil
Yes, see e.g. a quarter-century-old (!!) https://www.joelonsoftware.com/2000/04/06/things-you-should-...
anp
I feel similarly but IIUC I think that doesn’t strictly require an open source development model. I’ve benefited a huge amount from consuming and contributing to open source projects and I’m a bit worried that the “unit economics” changing might break some of the social dynamics upon which the ecosystem is built.
tovej
An LLM most definitely cannot spit out robust tests or thorough documentation. It can spit out some tests or some documentation, but they will not cover the user perspective or edge cases unless those are already documented somewhere. That's verified by both experience and just thinking about it for two seconds.
The sanding down you refer to is what generates those tests and documentation.
mexicocitinluez
> but they will not cover the user perspective or edge cases unless those are already documented somewhere
Are you suggesting that LLM's can't test for people who use screen readers? Keyboard only users? Slow network requests?
You're acting like the issues an app faces are so bespoke to the actual app itself (and have absolutely no relation to existing problems in this space) that an LLM couldn't possibly cover it. And it's just patently wrong.
watwut
> the thing I value most in software now isn't robust tests or thorough documentation - an LLM can spit those out in a few minutes.
Can it, if we stop defining "robust tests" as "a lot of test code lines" and "good documentation" as "lengthy documentation"?
simonw
I chose my words carefully. "Robust tests" are tests that provide high coverage and aren't flaky. "Thorough documentation" likewise is documentation that describes as much of the code as possible.
I didn't use the word good.
porridgeraisin
Yep. I realised the same. No one reads docs or goes through tests. Either way, it's easy to write useless tests, and easy to write useless docs. I don't think most people even read the code. Now the difference is that it has become possible to write useless code.
So it's just the fact that others have already gone through the motions before I did. That's it, really. I suppose in commercial settings this is even more true, and perhaps extends to compliance.
jbxntuehineoh
> No one reads docs
sooo uhh how do _you_ learn how to use a new library? just throw random shit at the wall until something sticks?
matkoniecz
> No one reads docs, or goes through tests.
I regularly do both when trying to use a library, especially one unfamiliar to me.
johanyc
So battle tested
einpoklum
> an LLM can spit those out in a few minutes.
It may be able to spit out text that purports to be that, in a few minutes. But for most software, an LLM will not be able to spit out robust tests - let alone useful documentation. (And documentation which just replicates the parameter names and types is thorough...ly useless.)
simonw
That's why I said "thorough" and not "good".
chromacity
I remember hearing the same arguments in the early 2010s, when the "3D printing revolution" was just around the corner. Why would anyone buy anything anymore if you can download a model and print it in the privacy of your home? And make it infinitely customizable?
The whole point of having a civilization is that most things in life can be made someone else's problem and you can focus on doing one thing well. If I'm a dentist or if I run a muffler shop, there are only so many hours in a day, so I'd probably rather pay a SaaS vendor than learn vibecoding and then be stuck supervising a weird, high-maintenance underling that may or may not build me the app with the features I need (and that I might not be able to articulate clearly). There are exceptions, but they're just that, exceptions. If a vendor is reasonable and makes a competent product, I'll gladly pay.
The same goes for open source... even if an LLM could reliably create a brand new operating system from scratch, would I really want it to? I don't want to maintain an OS. I don't want to be in charge of someone who maintains an OS. I don't necessarily trust myself to have a coherent vision for an OS in the first place!
gausswho
That only holds true for the smallest tier of open source projects. Past a certain point of complexity, it's unlikely you can expect the robot to read your mind well enough to provide something of high quality and 'outstanding for just you'.
The Zig project is certainly far beyond such capability.
jart
You have to push the robot to be as fanatical as you are. It holds so much back, always aiming to do the simple normal thing that most people do, rather than the top-notch stuff it knows.
8n4vidtmkvmk
I'm finding this out the hard way. I set out to build a one-page app. I thought it would take a day. It's 98% vibe coded at this point. Even with AI implementing everything, it's taken several weekends and many evenings. And not because the AI is doing a bad job; it's just that as I see it come together, I have more and more feature requests. I've got a couple dozen left, but I can't just let the AI chew through them all at once. I'm effectively QA now. I have to make sure everything is just right.
skeledrew
LLM access is not yet universally available. There are those who can't exactly afford it. And there are also those with access who hit occasional or perennial issues, like Claude outages and general degraded performance over time. For example, a couple of months ago when I had just started using Claude, I was easily making good progress on multiple projects within a week. Nowadays I'm hardly getting through much of anything, as most of the time Claude is just showing spinners, and it also feels like the code quality has taken a nosedive.
solid_fuel
> The same argument applies to open source itself. Why use someone's project when you can just have the robot write your own?
This is only a valid strategy if you either
a) understand the problem domain well enough to make a judgement call on what the LLM shits out.
or b) don't care about the correctness of the project.
Obviously, many software devs feel comfortable enough with CS problems to validate the LLM solution, but a flower shop owner does NOT know enough about accounting to vibe code a bookkeeping project, so for a shop owner an open source option - with many human contributors and actual production use elsewhere - would be a much better choice.
jillesvangurp
I've been seeing a drop in PRs against my repositories. I have a couple of repositories with around a hundred stars. Nothing spectacular, but they were getting occasional PRs until last year. This year I've had almost none so far. My theory is that LLMs prefer sticking to mainstream projects. And since lots of developers are now leaning heavily on LLMs, they are biased toward ignoring most of what I provide.
And you indeed get a lot of wheel reinvention by LLMs because that is now cheap to do. So rather than using some obscure thing on Github (like my stuff), it's easier to just generate what you need. I've noticed this with my own choices in dependencies as well. I tend to just go with what the LLM suggests unless I have a very good reason not to.
bee_rider
Most people don’t have the ability to read code well enough to determine if an LLM output is good or not. And most people don’t have subscriptions to models that can develop non-trivial programs…
Maybe this will be a real problem in a couple years though.
dawnerd
Code aside, most people don't even know how to describe what they actually want it to do, and LLMs are still a loooong way away from mind reading. I've seen developers struggle to even write down what they want. Simple demos like they love to show off with snake-like games are fun and all but they're nothing like the complex opensource apps everyone seems to think we'll just generate with a simple prompt.
dgellow
> The same argument applies to open source itself. Why use someone's project when you can just have the robot write your own
Because it takes hours/months/years of accumulated design decisions to get a great open source project. Something an AI agent can only approximate the surface of, unless you’re ready to spend a lot of time on it
debarshri
We have been running LLMs and coding agents for a while now, and my overall observation is that they are a power tool or a crane; they are not a decision-making tool.
Now in my org, people who have a great understanding of concepts and a deeper engineering understanding see exponential productivity. People who don't, or who are new to the workforce, juniors, are generating hellish code without understanding it; as long as it runs, they think the job is done. And this is where the problem is.
The LLM creates an intellectual gap within the org, and it only widens the more LLMs get used. You might end up not trusting stuff within the org if the code was generated by the latter group.
ghosty141
Exactly my (and my coworkers) experience. AI generally amplifies the skillset, both in the good and the bad.
One fantastic use case for me just recently was writing up a concept for an authentication daemon. With Codex this is like a conversation where I pick from the suggestions, cross-reference them with a normal web search, and decide on a final draft which I then discuss with colleagues.
This "conversational" planning with integrated web-search (aka plan mode) is insanely useful. Also reviewing already written code with AI is purely beneficial in my opinion.
In my opinion the main caveat of AI is that you eventually have to be smarter than the tool. So, for example, if Codex suggests I should use tech stack X, then I must research and fully understand why this is actually good and still compare it to other solutions. I think this is where the problem lies: some people skip this step, which leads to so, so many problems, and that's fatal. You MUST be smarter than the AI after your conversation and fully understand and be able to critique what it said.
silentkat
The power of AI is it rewards due diligence.
The weakness of AI is that it is really easy to fall into lazy habits.
Something about having to talk to a machine like it's a human makes me fall into treating it like a human. I want to treat it as a probability engine that collapses to an answer based on input, but that input explicitly needs to be one that has it collapse to something a reasonably knowledgeable person would respond with, which more or less means talking to it like it is that kind of person.
I feel like it activates the social part of my brain and then I stop working with it properly. I'm still building the habit, though, only recently started taking the LLMs seriously as a tool.
abustamam
This is my experience. I'll use LLMs as a sounding board for architectural decisions and to bring discussion points up to the team, and we talk through assumptions and pros and cons. And then once we have the architecture in place, LLMs are pretty good at implementation.
cmrdporcupine
I agree with this assessment, but even among us seniors with accumulated knowledge it has the dangerous potential of getting out from under your feet and producing large amounts of code that you don't fully comprehend.
I can generally make it produce excellent well-tested code. Far better than I could do in the same time on my own. But it's a challenge to keep on top of knowledge about everything it made.
jameson
LLMs are not as smart as the LLM vendors claim them to be.
If they were, we wouldn't be having this conversation, because they would be fully autonomous.
People who blindly submit LLM-generated code or do not cite its usage really need to stop doing it.
kangs
It is getting there, and not so slowly, though. The remaining problem is that it's still just a tool. Telling a random dev "make zig faster in a one-shot PR" isn't going to give good results either.
In the past, OSS projects were self-selecting because you needed to be able to produce working code, and if you did, you probably also did the right things, since you'd spent years learning this and had some sort of reasoning behind your feature, need, etc.
Today, even if the LLM were perfect and could reason well, it still does the bidding of the prompter, and you no longer have self-selection. Heck, it'll be difficult for Zig devs to tell what's actually made by an LLM or a human anyway; I'm sure there's already LLM-generated code in there, but at least those [human submitters] still need to be reasonably good at code.
I wonder if we'll end up with "only human with trusted badge of honor" can commit, and/or "LLMs now reason well enough to tell you: 'no, f off, this feature, plan, idea is garbage I'm not generating it" hehe.
potsandpans
> do not cite its usage really need to stop doing it
It's a completely unenforceable virtue signal.
franktankbank
> need to stop doing it
They won't, I suspect. If there isn't any good way to give them a good smack for doing it, then I don't know what would make them stop.
jameson
I have a similar sentiment unfortunately. I briefly thought about ways to force them to stop but all led to some sort of negative impact on privacy/freedom such as identify verification
julenx
The article explains Zig's stance in further detail, but the quoted part on its own caught my attention because my reading of it is rather "pro human communication" instead of "anti-AI".
kennykartman
They're banning all AI though, so it looks pretty much anti-AI to me.
pjjpo
I wonder - has it been confirmed that no LLMs for PRs literally means no AI assistance for code?
While I haven't codified it anywhere, the policy I would like is for issues and PR descriptions to have no LLM text; there is no reason to ban LLM-assisted code completely though, IMO. That would be pro human-communication, and a stance I would like a lot.
dakolli
Good, pro AI people produce poor quality in everything they do. They are the least creative and worst problem solvers. I don't want them near me or my work.
nayroclade
It seems like this policy will help them win at contributor poker in the short term, but lose in the end. The next generation of developers will, for better or worse, grow up using AI assistance to write their code, but none of them will ever become a Zig contributor.
krupan
I still can't understand why people believe that this is the future. Especially for green field work like new compilers. LLMs do not invent new things. They cannot produce anything smarter/better than what they have been trained on. The big advantage they provide is producing (regurgitating) code faster than humans and better than less experienced/knowledgeable humans.
chickensong
LLMs are the future because you have an amazing amount of information available with low friction, plus the ability to reason (sort of) about things. In some cases they might regurgitate, but they're also pretty good at synthesizing and comparing. None of this is perfect, but nothing else is either.
LLMs are a powerful tool like we've never had before. You don't expect a chainsaw to cut down a tree by itself and carve the wood into a statue or a new compiler. LLMs aren't mind-reading autonomous creators, they're more like a mech suit that can increase your capabilities. They have flaws, but until something better comes along, it sure seems like they're the future.
umvi
Ultimately code is an iterative refining process, like sculpting granite or spinning pottery. You start rough and iteratively shape and polish it. LLMs just rapidly speedup the iterative process. The next generation will be using LLMs to quickly setup the rough shape of new software and then iteratively refine them.
The "smarter/better" attributes you are worried about LLMs not having happen between iterative steps, when the human is inspecting the current state of the software and compares it to the desired state of the software (in their mind's eye). The human then course corrects for the next iteration.
This would be like if Michelangelo carved the David using a robotic 6-axis chisel. It takes him 1 month instead of 3 years because he can convey his initial vision to the robot and then iteratively refine the granite until it matches his vision.
You can try to claim LLMs don't invent new things, but humans using LLMs absolutely invent new things (source: myself).
krupan
That was a lot of words to agree with me that LLMs don't invent new things.
DrewADesign
Luckily, if that ends up being the case, they can change the policy. It’s a FOSS project — not a constitutional amendment.
ajorg
I kind of agree, and I kind of don't. Yes, cultivating contributors is the right priority. But I see AI as an assistive technology. Like a screen reader, or a magnifying glass, though obviously also unlike.
Think of it like a robotic exoskeleton. It will be used to let people do bad things, and stupid things, but it will also be used to help people who otherwise couldn't do things do good things, or become more able than they were. For some people AI means being able to code where they couldn't before. For many it will mean learning to code by observing what the AI does. For others it might mean being able to code a lot faster, or even a lot better, than they already could. And yeah, for some it will mean they atrophy in some skills while they develop others. The exoskeleton will have the same problems, if anyone ever brings a decent one to market, but on the whole it will be an enabler.
I don't see how cultivating a contributor who's using an assistive technology is worse than cultivating a contributor who isn't. Apart from that it can be more challenging, of course.
From https://kristoff.it/blog/contributor-poker-and-ai/:
"Unfortunately the reality of LLM-based contributions has been mostly negative for us, from an increase in background noise due to worthless drive-by PRs full of hallucinations (that wouldn’t even compile, let alone pass CI), to insane 10 thousand line long first time PRs. In-between we also received plenty of PRs that looked fine on the surface, some of which explicitly claimed to not have made use of LLMs, but where follow-up discussions immediately made it clear that the author was sneakily consulting an LLM and regurgitating its mistake-filled replies to us."