Brian Lovin / Hacker News

nunez

> If the job were mainly about producing syntactically valid code, then of course A.I. would be on a direct path to replacing large parts of the profession. But that was never the highest-value part of the work. The value was always in judgment.

> The valuable engineer is the one who sees the hidden constraint before it causes an outage. The one who notices that the team is solving the wrong problem. The one who reduces a vague debate into crisp tradeoffs. The one who identifies the missing abstraction. The one who can debug reality, not just read code. The one who can create clarity where everyone else sees noise

How do you think engineers in the second half got there? By writing tons and tons of code to "build those reps" and gain that experience.

The author tries to answer this:

> That process is not optional. It is how engineers acquire and elevate their competency. If early-career engineers use A.I. to remove all struggle from the learning loop, they are hurting their development.

but in a world wherein writing code by hand (the "struggle") is "artisanal" and "outdated", this process being non-optional (which I agree with) is contradictory.

How juniors and fresh grads do that with AI that is designed to give you whatever answer you need in a given moment is unclear to me. I don't see how that's possible, but maybe I'm thinking too myopically.

netcan

Myopia is inevitable, to some extent. It's very hard to project this stuff.

Socrates wrote about what was being lost as philosophy was becoming written rather than oral...and he was right.

We can't even understand what was lost. Many methods of learning and thinking became entirely lost. You could say they were redundant, and they were. But... writing largely replaced oral traditions. It didn't just augment them.

He was that old school coder who had the skills to do philosophy and be an intellectual without writing. Writing was an augmentation for him. But for the new cohort... it was a new paradigm and old paradigm skills became absent.

It is very hard to imagine skilled coders becoming skilled without necessity pressing that skill acquisition. The diligent student will acquire some basic "manual coding" skill... but mostly the skill development will happen wherever the hard work is.

abustamam

I think if manual coding becomes "outdated" then there will just be no demand for junior engineers to manually code. People will probably still learn to code manually, just as there are folks who will still build their own furniture. There may just not be a business demand for it.

What that means 20-30 years from now, when the seniors of today retire and there are no juniors right now, is yet to be seen. People say that AI will probably have advanced far enough that it won't be a problem. But let's say somehow AI stagnates; then I would guess that AI-generated code that is too difficult to debug will be treated as legacy and there'll be demand for manual coding again.

Companies that aren't able to afford the rewrite or maintenance will probably go out of business.

It's an interesting time we live in for sure.

at-fates-hands

>> What that means 20-30 years from now when the seniors of today retire.

I fear that many won't retire and instead completely leave the industry, which is already happening. It's anecdotal, but when I first started as a junior dev, I was working with many intermediate devs who had a few years on me.

I kept ties with a group of about two dozen devs. We all went through a lot of the same stuff. Last year I attended two local conferences. Out of the 24 or so, who are all seasoned senior devs now, only 3 of us remained in the industry. Granted, I'm in accessibility and another moved more into a UI/UX design role, but we were all that's left.

The majority of the discussion at lunch was about why they left, and it was pretty universal. They were seeing AI creeping into everything they did and just walked away. The list of what they disliked about it was long, and they really didn't see the huge upsides that the industry was pushing. They had money, and they had other opportunities they chose to pursue far and away from the tech industry.

It was pretty eye opening to say the least. We always imagined sitting around a table in our 60's recounting our experiences in tech and now we're not even into our 40's and the industry is losing amazing talent every year that IMHO cannot be replaced by an LLM prompt.

I don't have a good feeling about where this is headed.

thrownthatway

> Socrates wrote about what was being lost…

Dr. Steven Skultety & Dr. Gad Saad discussed this in a recent video / podcast.

This link is time stamped to the topic https://youtu.be/7mcQf9E3YRo?t=1058

Archelaos

Socrates never wrote anything. At least, not as far as we know.

base698

It's the opening page of the book Technopoly.

rimliu

I'd say that by purging stuff from the brain we are losing thinking itself. Thinking is manipulating ideas and concepts in your head, assembling and linking them. The fewer things there are, the more primitive the result. You cannot juggle without objects to juggle; connecting the dots results in trivial patterns when you have just a couple of dots.

phito

It's true for all automation: we do get more comfort. We build systems so that we humans have as little struggle as possible, not realising that struggle is the only reason for existence. By eliminating it, we are erasing ourselves from this world.

netcan

A lot of parallels between your statement and Socrates' comments on the transition to writing.

Interestingly, he placed a lot of importance on memory... where you emphasize manipulation of concepts.

hectdev

It just becomes more abstracted, but the thinking is still there. And who is to say we aren't going to keep reading books, delving into hobbies, or watching movies. All those concepts will then be mixed into our brains, and who knows what new things we will think of to extract and desire to build with AI.

sothatsit

> I'd say that by purging stuff from the brain we are losing thinking itself

The idea that there will be less to think about seems a bit short-sighted. Humans are very good at moving to higher levels of abstraction, often with more complexity to deal with, not less.

theshrike79

I "purge" - or better yet choose not to retain - the data.

BUT, BUT! I keep the index.

My favourite quote is from Donald Rumsfeld (a very bad human being, but this is still good):

> Reports that say that something hasn't happened are always interesting to me, because as we know, there are known knowns; there are things we know we know. We also know there are known unknowns; that is to say we know there are some things we do not know. But there are also unknown unknowns—the ones we don't know we don't know. And if one looks throughout the history of our country and other free countries, it is the latter category that tends to be the difficult ones.

What I optimise for is to have as many "known unknowns" as possible. I know a concept, process or a tool exists, but don't understand it or know how to do it. But because I know it exists, I won't start inventing it again from scratch when I need it.

Like if one needs to do some esoteric task, they might start figuring it out from scratch. But because the index in my brain contains a link ("known unknown") to a tool/process that makes that specific thing a LOT easier, I can start looking into it more.

Or I might need to do something common like plumbing or some electrical work at home. Do I know how to do that? No. But I Know A Guy I can call, again externalising the knowledge. Either they come over and help me do it or talk me through the process of adjusting the thermostat in my shower faucet (you need to use WAY more force than I was comfortable with without an expert on the phone btw... there are no hidden screws, you just rip the bits off :D)

ohNoe5

We will never fundamentally get rid of thinking; it's coupled to navigating the 3D reality we live in.

And we don't need words to think; cognitive problem solving and language processing are separate processes [1]

We will shift the problems we need to think about. Same as always; humanity isn't still working out how to build stone pyramids. Did we stop thinking? No, we just thought about a different todo list.

[1] https://www.scientificamerican.com/article/you-dont-need-wor...

Der_Einzige

Fuck thinking!

If I am free as “rational I,” then the rational in me, or reason, is free; and this freedom of reason, or freedom of the thought, was the ideal of the Christian world from of old. They wanted to make thinking – and, as aforesaid, faith is also thinking, as thinking is faith – free; the thinkers, the believers as well as the rational, were to be free; for the rest freedom was impossible. But the freedom of thinkers is the “freedom of the children of God,” and at the same time the most merciless – hierarchy or dominion of the thought; for I succumb to the thought. If thoughts are free, I am their slave; I have no power over them, and am dominated by them. But I want to have the thought, want to be full of thoughts, but at the same time I want to be thoughtless, and, instead of freedom of thought, I preserve for myself thoughtlessness. If the point is to have myself understood and to make communications, then assuredly I can make use only of human means, which are at my command because I am at the same time man. And really I have thoughts only as man; as I, I am at the same time thoughtless. He who cannot get rid of a thought is so far only man, is a thrall of language, this human institution, this treasury of human thoughts. Language or “the word” tyrannizes hardest over us, because it brings up against us a whole army of fixed ideas. Just observe yourself in the act of reflection, right now, and you will find how you make progress only by becoming thoughtless and speechless every moment. You are not thoughtless and speechless merely in (say) sleep, but even in the deepest reflection; yes, precisely then most so. And only by this thoughtlessness, this unrecognized “freedom of thought” or freedom from the thought, are you your own. Only from it do you arrive at putting language to use as your property.
If thinking is not my thinking, it is merely a spun-out thought; it is slave work, or the work of a “servant obeying at the word.” For not a thought, but I, am the beginning for my thinking, and therefore I am its goal too, even as its whole course is only a course of my self-enjoyment; for absolute or free thinking, on the other hand, thinking itself is the beginning, and it plagues itself with propounding this beginning as the extremest “abstraction” (such as being). This very abstraction, or this thought, is then spun out further

- The Ego and Its Own, Max Stirner

kakacik

Yeah, but where the comparison with philosophy falls short is this: if we lost some ways of thinking, it was gradual and most didn't notice.

Software code, on the other hand, is extremely formal: either it works perfectly as intended, it works crappily and keeps breaking in various edge cases, or it just doesn't work (the last two are just variants of the same dysfunctionality; technically it's a binary state). There is no scenario where broken code somehow ends up working and delivering, or maybe 1 in a trillion, sometimes.

Also the change is so fast that the failure is immediately obvious to everybody; it's not a gradual change of thinking over a few decades/generations.

LLMs are getting impressive, but anybody claiming there is no massive long-term harm to getting to what we now call proper seniority is... I don't know: delusional, a junior who never walked that long and hard-won path, doing PR for LLMs at all costs, or some other similar type. Or they simply have some narrow use case working great for them long term which definitely can't be transferred to the whole industry, like 1-man indie game dev.

hirako2000

I would argue it's virtually impossible going forward for a junior engineer to run that harder path.

Because the easier path seemingly delivers what's expected of them. Sigh, they may even be required to take the faster path.

I've seen many juniors unable to walk that necessary path before LLMs were a thing.

Der_Einzige

Socrates was history's first Luddite. He opened Pandora's box. I wish he and Plato would be radically rejected as the garbage trash they are (basically just a defense of hierarchy and dialectics).

Quoting my boy Max Stirner, who also fking hated these guys:

“This war is opened by Socrates, and not until the dying day of the old world does it end in peace.“ - The Ego and its Own, Max Stirner

kusokurae

You aren't thinking myopically; it's a fundamental contradiction, the root of which is in how human brains take in and understand new information. No amount of pontification or hedging bollocks, as this and all other "thinkpieces" on this issue engage in, will change that. It is beyond preference and perspective. There is only doing the very task that produces skills pertaining to that task. Prompting, alone or even predominantly, is too far from that task. It can only write the code.

ookblah

you learn by struggling and slogging through, even as a senior if your shit breaks it's on you to understand why. no LLM will shortcut that process for you (even asking LLMs why something is wrong requires you to actually understand it eventually, aka LEARNING). how that happens is up to the person.

i don't understand all this fear projected as if people won't have agency of learning just because LLMs make it easier to do certain things. i don't think it's contradictory at all. half the people here will never have to wrangle the bullshit i dealt with 20 years ago and i'm sure when i was dealing with it there was another 20 years of bullshit before me lol.

if you vibe code your app with no regard for the underlying code you will pay the price for it at some point in the future, anybody worth their salt will slow down enough to figure it out the "artisanal" way.

Rebuff5007

I'd argue that the engineers of 20 years ago were better than the engineers of today because they were significantly more resource-constrained and, for example, would never use a 300 MB JavaScript library for a profile page.

markus_zhang

John Carmack praised the restraint that limited resources imposed when he recalled his early days working as a lone contractor and as an employee of Softdisk, when he and the team had to push out games on a very tight schedule.

I think this extends to other parts of life, too. I still remember that I fondly played a game over and over again back in high school, when I did not have the Internet and had to borrow CDs from my friends — but when I went to university and had access to pretty much every game freely on the Intranet, I rarely did that anymore. That's why I always think an abundance of X may not be the best option for me. That probably includes money, too.

mleo

As a percentage of good to mediocre, maybe. Engineers of 40 years ago were probably better than engineers 20 years ago: there were fewer of them, and they had more constraints to deal with. Democratization of technology makes it easier for more people to use. That applies to programming as much as to just using a computer.

infecto

I never buy these examples. Being a good engineer is more than purely resource optimization. I can think of many times over my career where resource optimization mattered but it’s not always a valuable undertaking.

eleumik

Also because back then there were no people choosing the sector for being a lucrative one.

maccard

20 years ago we were complaining about Steam being bloated and unnecessary, we were 6 months off Vista being a bloated mess, and the Office Ribbon debacle was in full swing. PC games were often half-baked console ports with atrocious performance, filled with game-breaking bugs. Software was super rigid - there was no real cross-platform support. We were just heading into the Core 2 Duo era and it was a mess.

Engineers sucked then as much as they suck now

techpression

Understanding something and learning something are not the same things.

ookblah

nobody said they were; they are related. if you don't understand why something is behaving a certain way, you need to learn

palmotea

> but in a world wherein writing code by hand (the "struggle") is "artisanal" and "outdated", this process being non-optional (which I agree with) is contradictory.

> How juniors and fresh grads do that with AI that is designed to give you whatever answer you need in a given moment is unclear to me. I don't see how that's possible, but maybe I'm thinking too myopically.

The contradiction is resolved by your employer pushing professional development into "your own time."

And they'll do that by being totally stupid and unaware: they'll push you to maximally use AI tools, but judge you for the skill deficits those tools create.

bartread

Does AI remove all struggle?

I would credit my relative success (compared with many complaints I’ve read) at building software using AI as a helper to the fact that I’ve already solved the problem in my mind before I prompt the AI, and I tell it how I want the problem to be solved. Generally it’s possible to do that in English much more tersely than it is in code.

So what I’ve done is the hard (in the sense of problem difficulty) work and simply got the AI to do all the typing for me, because I just can’t be arsed any more. The typing isn’t the hard part of software development: just the part that, historically, has taken up a massive chunk of my time in the domains I’ve worked in.

I’m not saying this works for everyone in every situation but, as far as I’m concerned, bring it on.

lokhura

>Generally it’s possible to do that in English much more tersely than it is in code.

I really doubt this is the case for any decent amount of complexity. English is ambiguous. Programming languages exist because we don't want this ambiguity from natural languages. A sufficiently descriptive spec written in English is barely distinguishable from just code. We've come full circle.
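A toy illustration of that ambiguity (my own sketch, not from the comment): even a one-line English spec like "remove duplicates from the list" is silent about ordering, and code is forced to pick one reading.

```python
# "Remove duplicates from the list": an English spec that says nothing
# about ordering. Code has to commit to one interpretation.
data = [3, 1, 3, 2, 1]

# Reading 1: just the unique values (original order discarded).
unique_sorted = sorted(set(data))

# Reading 2: unique values, keeping order of first occurrence.
unique_in_order = list(dict.fromkeys(data))

print(unique_sorted)    # [1, 2, 3]
print(unique_in_order)  # [3, 1, 2]
```

Both are faithful to the English sentence, yet they produce different results; the precision has to be added somewhere.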

bartread

It’s the case because English is wildly more expressive than many commonly used programming languages.

And especially when you take into account the mess of front end web development and tooling.

jmalicki

> How do you think engineers in the second half got there? By writing tons and tons of code to "build those reps" and gain that experience.

It's not by writing syntax that you get there. It's by creating software and gaining the experience of seeing it go wrong.

"Good judgement comes from experience. Experience comes from bad judgement."

AI just shortens the cycle without needing to type out syntax, so you get even more iterations, faster, and learn the lessons more quickly.

Some do not learn from that experience. They were never going to learn without AI either.

doginasuit

> It's not by writing syntax that you get there.

Writing syntax is still an important part of the experience. It is valuable because it requires you to spend time immersed in the nuts and bolts that hold software together. I'd compare it to cooking, if you have an assistant or a machine do everything and you never actually touch a knife or stir a pot, you'll lose your touch. But there is also something valuable about covering more ground and the additional experience that brings.

jmalicki

Totally! I mean, the same could be said of painstakingly hand coding assembly language - that today's developers haven't done so is what leads us to bloated electron apps, so there is something lost!

But the larger scale system design is stronger than ever. Today, distributed systems, version control, including branching, stacked PRs etc., VMs/containers, idempotency, multimaster ACID databases, all of these things were probably never achievable in the world when the best devs had to spend their time poring over assembly language every day. Losing that skill allowed them more time to build other ones!

nunez

Why are you focusing on syntax so much? There's more to writing code than that.

That's why students learn how to write pseudo-code before picking up a programming language. Learning how to think through implementing a solution to a problem is extremely important. It's exactly this experience that helps engineers grow their scope and understand bigger, more complex systems.

There's also the tactical components of using programming languages. The only way to know when to use one type of data structure over another, or to debug tricky language-specific behavior is _to actually have used that language._
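A minimal (hypothetical) instance of the data-structure judgment being described: knowing when a set beats a list for membership tests is exactly the kind of thing you internalize by having used the language.

```python
import timeit

# Membership test: a list scans linearly, a set does a hash lookup.
n = 100_000
items_list = list(range(n))
items_set = set(items_list)
needle = n - 1  # worst case for the list: the last element

t_list = timeit.timeit(lambda: needle in items_list, number=200)
t_set = timeit.timeit(lambda: needle in items_set, number=200)

print(f"list: {t_list:.4f}s, set: {t_set:.4f}s")
```

The same `in` operator, wildly different cost; nothing in the syntax hints at it, which is the commenter's point.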

And it's exactly this knowledge that's being threatened by LLMs given how they are implemented today.

jmalicki

Data structures are not tactical components of programming languages.

E.g. when I am writing SQL, I need to be thinking about the underlying data structures too - even though I am not specifying the execution path.
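A small sketch of that SQL point using Python's built-in sqlite3 (table and column names are made up): the query text stays identical, but the underlying data structure decides how it actually executes.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (id INTEGER, email TEXT)")
conn.executemany("INSERT INTO users VALUES (?, ?)",
                 [(i, f"u{i}@example.com") for i in range(1000)])

def plan(sql):
    # The last column of EXPLAIN QUERY PLAN output is the readable detail.
    return " ".join(row[3] for row in conn.execute("EXPLAIN QUERY PLAN " + sql))

query = "SELECT id FROM users WHERE email = 'u42@example.com'"

before = plan(query)   # without an index: a full table scan
conn.execute("CREATE INDEX idx_users_email ON users(email)")
after = plan(query)    # with the index: a B-tree lookup

print(before)
print(after)
```

Declarative on the surface, but the engineer still has to think about what structure the planner will walk.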

classified

You can lead a horse to water, but you can't make it drink.

__alexs

Almost none of my operational knowledge came from writing code, but a lot sure came from reading code in the debugging process.

afro88

This has happened in other industries before. Drafting, for example, when CAD arrived: entry level wasn't "can draw, willing to learn" anymore, but demanded high domain understanding. So the pathway became compressed learning through study and field exposure.

Study of senior drafters' "red lines": what they changed in the initial drawing and why, RFI responses, etc. Reverse-engineering good work. Failed design studies, etc.

SWE equivalents: PRs, code review, studying high quality codebases (guess what: LLMs are amazing at helping here), pair programming (learning why what the LLM did was wrong, how to improve it, etc), customer support, debugging prod incidents, studying post mortems etc

We don't hire juniors and throw them boilerplate and tiny bugs while expecting them to learn along the way ad hoc through some pair programming and the occasional deep end. We give them specific tasks and studies that develop their domain understanding and taste, actively support and mentor them, and expect them to drive some LLMs on the side to solve simple issues that still need human eyes on it.

AgentMatt

> We don't hire juniors and throw them boilerplate and tiny bugs while expecting them to learn along the way ad hoc through some pair programming and the occasional deep end.

Is that generally the case though? I'm about two years into my first job in the industry and that's exactly my experience, and certainly frustrating...

staticshock

The eloquence with which this point gets (repeatedly) made continues to improve each time I read it. However, I still feel like we haven't nailed it. That is, we are not yet at the "aphorism" stage of the discourse (e.g. "the medium is the message", "you ship your org chart", "9 mothers can't make a baby in a month"), in which the most pointed version of this critique packs a punch in just a few words that resonate with the majority of people. That kind of epistemological chiseling takes years, if not decades. And AI certainly won't do it for us, because we don't know how to RL meaning-making.

Edit: 9 babies → 9 mothers

bla3

> "can't make 9 babies in a month"

It's "9 women can't make a baby in one month".

bluefirebrand

In fairness, 9 women can't make 9 babies in a month either

gerdesj

No idea why you were dv'd.

It still takes roughly nine months to make a human baby, regardless of how many women or babies are involved!

staticshock

Hah, right, I mixed it up!

jasondigitized

Jobs had it right a long time ago. It's a bicycle for your mind.

ctvdev

> That is, we are not yet at the "aphorism" stage of the discourse

we learn by doing

nkrisc

Put differently: you get good at what you actually do, not what you think you're doing.

If you're not coding anymore, but using AI tools, you're developing skills in using those AI tools, and your code abilities will atrophy unless exercised elsewhere.

ipython

Along those lines, I’ve also seen "there is no compression algorithm for experience" - a nice summary of the HN posts from today.

skybrian

It seems overly pessimistic about education. Book learning isn't everything, but a physics textbook could be seen as the compression of centuries of experience.

darkwater

I don't know. Growing up and seeing life and people around me, I firmly believe that if you have enough brainpower and intuition for $TOPIC you can speed-run it. At the same time, with time and experience, doing and re-doing it, you will learn or master $TOPIC [1] even with less brainpower.

[1] Depending on the topic and the level of knowledge of it.

canjobear

There clearly is though. You don’t remember every detail of every moment that constitutes the experience.

kristianc

... or by textbooks, Stack Overflow, senior engineers, code review. How many engineers today got their start by building Minecraft mods or even MySpace?

I do think that these pieces sometimes smuggle in a nostalgic picture of how engineers "really" learn which has only ever been partly true.

embedding-shape

How about "Intelligence amplification, not artificial intelligence"?

Also could be shortened to "IA, not AI", and gets even more fun when you translate it to Spanish: "AI, no IA".

torben-friis

I'm using "don't bring a forklift to the gym".

raincole

"Bicycle of the Mind" has been cited to death.

The problem is that it was coined so early that we are way past the aphorism stage now.

nemomarx

Isn't it the vehicle metaphor about bicycles for the mind? Not fully crystallized yet, but I feel like someone will get there.

viccis

>the medium is the message

If you asked 100 Americans what this aphorism means, I strongly doubt a single one could capture McLuhan's original meaning.

WillAdams

More worrisome is that the speech that quote came from went on to prophetically observe that for each extension of human capability afforded by technology, there is a matching amputation in human skill/facility --- heretofore, computers have largely fit in with Steve Jobs' vision of them as "bicycles of the mind", making human effort more efficient --- the cognitive engine of LLMs looks to be dumbing down human reasoning to a least common denominator/mean:

https://publichealthpolicyjournal.com/mit-study-finds-artifi...

apsurd

You're right. I've struggled to understand what exactly this means; in large part perhaps because it's so often misused?

I think it means something like: we're trapped in the constraints of the medium. Tweets say more about the environment of Twitter than whatever message happened to be sent.

But I think I'm off on that; I'll look this person up and find out!

rdevilla

Some examples.

Firstly, Twitter has an upper bound on the complexity of thoughts it can carry due to its character limit (historically 140, now somewhat longer but still too short).

Secondly, a biased or partial platform constrains and filters the messages that are allowed to be carried on it. This was Chomsky's basic observation in Manufacturing Consent where he discussed his propaganda model and the four "filters" in front of the mass media.

Finally, social media has turned "show business [into] an ordinary daily way of survival. It's called role-playing." [0] The content and messages disseminated by online personas and influencers are not authentic; they do not even originate from a real person, but a "hyperreal" identity (to take language from Baudrillard) [0]:

    You are just an image on the air. When you don't have a physical body, you're a
    _discarnate being_ [...] and this has been one of the big effects of the electric age. It
    has deprived people of their public identity.
Emphasis mine. Influencers have been sepia-tinted by the profit orientation of the medium and their messages do not correspond to a position authentically held. You must now look and act a certain way to appease the algorithm, and by extension the audience.

If nothing else, one should at least recognize that people primarily identify through audiovisual media now, when historically due to lack of bandwidth, lack of computing and technology, etc. it was far more common for one to represent themselves through literate media - even as recently as IRC. You can come to your own conclusions on the relative merits and differences between textual vs. audiovisual media, I will not waffle on about this at length here.

The medium itself is reshaping the ways people represent, think about, and negotiate their own self-concept and identity. This goes beyond whatever banal tweets (messages) about what McSandwich™ your favourite influencer ate for lunch, and it's this phenomenon that is important and worth examining - not the sandwich.

[0] Marshall McLuhan in Conversation with Mike McManus, 1977. https://www.tvo.org/transcript/155847

viccis

It's confusing because "message" is not being used in its lay sense, and decades of meaning drift in "medium" and "media" mean that they aren't either.

For "the medium is the message", "medium" refers to any tool that acts as an extension of yourself. TV is an extension of your community, even things like light bulbs (extends your vision) are included in his meaning.

McLuhan argued that all forms of media like that carry a message that's more than just their content. "The message" in that argument refers to the message the medium itself brings rather than its content. For example, the airplane is "used for" speeding up travel over long distance, but the message of the medium itself is to "dissolve the railway form of city, politics, and association, quite independently of what the airplane is used for."

You can see it happening via online media that extend ourselves across the internet. Think of how, once easy video creation via YouTube became the norm, web comics stopped being a popular medium for comedy online. It's not like the web comics faded because they got worse; it's that they faded into a niche format because people didn't want to communicate via static images anymore. Or how, once short-form videos on TikTok got big, you saw other platforms shift to copy the paradigm. McLuhan's point is that it's not just the content of those short-form videos that matters; it's the message of the format itself. People's attention spans grow shorter because of the format, and before too long, we saw the tastes and expectations of the masses change. Reddit's monosite-with-subcommunities format and dopamine-triggering voting feedback mechanism were its message more than any actual content posted there, and it's why traditional forums are niche and dwindling.

If you want to get a pretty good understanding of it, just read the first chapter from his book Understanding Media. It's short and relatively straight forward.

luckystarr

The way I use AI now feels more exhausting than the programming I did for the last 20 years. I pose a problem, then evaluate proposals, then pick the one I think is the "right one"(tm), then see the AI propose a bunch of weird shit, then call it out, refine the proposal until it feels just about right (this is the exhausting part), then let it code the proposal. The coding will then run for 1-5 hours and produce something that would have taken me at least 2 or 3 weeks (in that quality).

After 5 hours or so of doing this planning, I'm EXHAUSTED. I never was exhausted in this manner from programming alone. Am I learning something new? Feels like management. :)

dwaltrip

I feel this as well. I think it’s something to do with having to be more “on” as you slowly work with the LLM to define the problem and find a reasonable solution. There’s not much of a flow state. You have to process mountains of output and identify the critical points, over and over, endlessly. And it will always be off in this unsettling little way, even when it’s mostly quite good. It’s jarring.

The strange sorts of errors and reasoning issues LLMs have also require a vigilance that is very draining to maintain. Likewise with parsing the inhuman communication styles of these things…

lordnacho

Could it be that what we called flow state was actually a sort of high level thinking time afforded by doing low level routine work?

For instance, in the old world, if you wanted to change an interface, you might have to edit 5 or 6 files to add your new function to the implementations. This is pretty routine, and you don't need to concentrate that much if you're used to it, so you can spend that low-effort time thinking about the bigger picture.

orangecoffee

You may be right on this hunch, but I think the old world is no longer there now :( More thinking is expected per unit of time.

luckystarr

It's the "unsettling little ways", right. So you can't skip whole paragraphs; you literally have to read everything. And sometimes it's worded in ways I don't understand at all (due to missing implications that the LLM conveniently omitted), so I have to re-ask it about that point as well. Every major feature or work unit takes up to 2 or 3 hours.

I figured out some patterns in the way it behaves and could put more guard-rails in place so they hopefully won't bite me in the future (spelled out decision trees with specific triggers, standing orders, etc.), but some I can't categorize right now.

llmssuck

Just to offer a counter-example: using AI makes programming bearable again for me. Most of programming comes down to a short - edit: not quite so short, but you may understand the figure of speech - list of things which are repeated ad infinitum in myriad variations.

I don't have to slog through yet _another_ way to sort, split, combine a list, open a file, show a UI component, handle events, logging, make data flow through some type of "database", serialize and deserialize endless things, implement yet another protocol in $whateverishotnow, managing authn, authz, the list is endless.

The interesting part of programming, for me, is deciding on and capturing the domain in a tight, surprisingly simple yet powerful architecture. This is hard - for me - and actually has very little to do with "programming" per se, meaning it has nothing to do with wrangling the syntax/low-level semantics of whatever platform I'm on or fighting package managers, to name just two highly depressing parts of my job. I don't like typing code. I have been doing it my entire life and I still don't like it.

I like coming up with invariants and ways of guarding them. To find simple decompositions that turn a hairy, ungodly blob of a problem into a manageable almost trivial network of not-so-complicated things. The not-so-complicated things themselves.. I don't care in the slightest about them. Opening files, managing database connections, forms, the mechanics of i18n, typing the word "class", you name it. I find it exceedingly boring.

Perhaps I am more of the architect type, but I find managing a bunch of AIs and making sure they don't stray from My Path is easy on the mind. Programming works on my level of abstraction finally.

aleksiy123

I think it’s because there is communication and alignment inherent in the process now.

You went from a single entity (you) implementing an idea

to two entities (agent + you) having to align through language.

My metaphor is that it’s like going from a single process to a distributed system, with all the tradeoffs that implies.

It’s just a whole different problem now that was originally only limited to other teammates.

kubb

How do you check if what it produced is even the right thing? Models love to go chasing the wrong goal based on a reasonable spec.

luckystarr

When the end result has problems and needs to be reworked.

You can't figure this out instantly unless you review everything the LLM produces, which I don't. So the round-trip time is pretty long, but I can now trace problems back to intent because I commit every architecture decision as an ADR, which is where I pour most of my energy. These are part of the repo.

Using these ADRs has helped a lot, because most of the LLM's assumptions get surfaced early on and you restrict its implementation leeway.
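For reference, a minimal sketch of what one of these ADR files might look like (I'm using the common Status/Context/Decision/Consequences convention here; adapt the format as needed):

```
# ADR-007: Commit architecture decisions before implementation

## Status
Accepted

## Context
The LLM fills in unspecified details on the fly, and its unstated
assumptions surface only after implementation.

## Decision
Every architecture decision is written up as an ADR in the repo
before implementation starts, and the LLM is pointed at it.

## Consequences
Assumptions surface early; the LLM's implementation leeway is
restricted; problems can be traced back to intent.
```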

kubb

Got it. I imagine concurrency bugs will hit hard with this approach because they show up rarely and are hard to debug.

Kiro

Do they? I haven't experienced models deviating from a spec in a very long time. If anything, I feel they are being too conservative and have started asking for confirmation too often.

luckystarr

The problem is not the LLM deviating from the plan (though that also happens occasionally, when it thinks it has a better idea) but rather the plan not being strict enough, so the LLM decides on the fly HOW it is going to build your plan.

logicchains

AI does the easy/medium part, leaving only hard stuff and context switching, so naturally it's more exhausting, as the concentration of difficult-work-per-unit-time and context-switching-per-unit-time is much higher.

m463

I think one of the benefits of AI is that it will get started, and keep going.

But maybe pacing/procrastination might be relief valves?

squirrellous

To me it’s more like being a super micro-managing TL that would annoy the hell out of their human reports. It comes with all the pros and cons of micro-management.

hectdev

Sounds like you’re using Waterfall. Which, if it works for you, go for it. But maybe Agile would feel more dynamic.

gjuggler

I was surprised not to see any discussion on whether the author used AI to help write this post. As many people say, writing is thinking.

I started getting that "I'm reading another AI-written blog post" feeling around 1/3 of the way through, but I don't consider myself super calibrated on this.

Pangram seems pretty confident it's AI (https://www.pangram.com/history/e9f6eb77-86f9-46d0-a6c1-e57c...). But I know these tools aren't perfect. I'd love to hear from the author what their process was in writing this piece!

Related question (I'm trying to work this out for myself):

If you believe using AI to write an email or blog post for you isn't okay, but using AI to write code for you is... what's the difference?

Right now my instinct is something like:

- Code can be verifiably correct (especially w/ good tests) so it's less of a purely-creative act than writing.

- But always, always double-check the tests!

- I still wouldn't submit a PR where I can't vouch for every line of code.

- AI-written documentation and specs are mostly still bad and should be looked down upon. But mostly because the quality, at least today, is poor. (Lots of duplication, lack of a clear understanding of the reader's intent and needs, no thoughtful curation, etc.)

- Be psychologically ready to update these priors as models change.
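To make the "verifiably correct" point concrete, here's a toy sketch (the function is made up, standing in for AI-written code; the asserts are the hand-written checks I mean):

```python
# Toy illustration: dedupe_sorted stands in for an AI-written helper.
# The assertions below are the part you still write and review yourself.
def dedupe_sorted(items):
    """Return the unique items in ascending order."""
    return sorted(set(items))

# Hand-written properties: ordering, empty input, and idempotence.
assert dedupe_sorted([3, 1, 2, 3]) == [1, 2, 3]
assert dedupe_sorted([]) == []
assert dedupe_sorted(dedupe_sorted([5, 5, 4])) == [4, 5]
```

The point being: prose has no equivalent of this kind of mechanical check, which is (part of) why the two cases feel different.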

I'd love to hear from anyone who's thought more about this.

koshyjohn

Great question! I had A.I. critique what I wrote, and wherever it gave me suggestions like “this sentence runs too long” or “this can be more punchy”, I chose to change direction if I thought the criticism was warranted. But, notably, when I thought a criticism valid, I typed out my own response in my head instead of taking the LLM’s direct suggestion. I’d then ask it to critique my revision. I stopped when I had read and re-read the final draft end to end a couple of times and was reasonably happy with the flow myself. All the core ideas, the analogies, the choice of structure, etc. are authentically my thoughts and my message. The thing A.I. reined in the most was my tendency toward run-on sentences in early drafts. The concepts percolated in my head for weeks before I decided to blog about it; writing it end to end and revising it over and over took about 3 hours.

The one thing I can tell you is that Pangram is confidently wrong in this instance. And I now worry about how many people may have relied on such assessments blindly in consequential places (school essays?). Which ties back to the thesis of my piece: where do you rely on AI, and where do you rely on your own intelligence?

On a lighter note, decades ago, in middle school, we had an exercise to summarize a book we read. My school’s librarian wrote ambiguously “write this in your own words”. I asked her what she had meant by that. She had thought I’d copied it from somewhere even though it was all my own words. I went on to become the school topper in my final year for English (and one spot shy of being the school topper for Computer Science).

gjuggler

Thanks for sharing your process! It's helpful and refreshing to hear from someone about how they engage with AI when writing, and where / when the detection tools may fail.

(We obviously live in a more nuanced world than most social media interactions might make you think :P)

> On a lighter note, decades ago, in middle school, we had an exercise to summarize a book we read.

My first experience with plagiarism was in first grade, when we were told to write a book report about a subject during our library time. I diligently took my book on the musk ox and copied three pages word-for-word into my notebook as my report. I can't remember when or how we learned this wasn't "right", but I still think back on that and laugh.

lateral_cloud

Sorry but it's very obvious you used an LLM for more than just suggestions. Ironic given the point of the article.

pgwhalen

Can you explain why? I'm getting better at detecting some kinds of AI writing, but I constantly see comments like this on HN for things I'm much less suspicious of, and I want to understand why people make them.

mpalmer

Taking you at your word, your A.I. revision process nonetheless seems to have yielded content which may as well have been generated at the start for how difficult it is to get through it.

    The valuable engineer is the one who sees the hidden constraint before it causes an outage. The one who notices that the team is solving the wrong problem. The one who reduces a vague debate into crisp tradeoffs. The one who identifies the missing abstraction. The one who can debug reality, not just read code. The one who can create clarity where everyone else sees noise.
This is a list of six things, disguised as an actual paragraph. Of sentence fragments disguised as actual sentences. Etc. Either you wrote this yourself and the AI didn't tell you "this is repetitive and list-y", or...

    "The software engineers who will be most valuable in the future are not the ones who do everything themselves. They are the ones who refuse to spend time on work that A.I. can do for them, while still understanding everything that is done on their behalf."

    "The danger is not that A.I. will make people lazy in some vague moral sense. It is that it makes it easy to simulate competence without building competence."

    "In that world, the engineer is not replaced by A.I. The engineer becomes more leveraged because they are operating above the level of raw output."

    "The ability to explain why something works, not just that it appears to work."

    "That process is not optional. It is how engineers acquire and elevate their competency."

    "The support system may make you look functional, but it does not make you capable."

    "The challenge is not merely adopting A.I. tools. It is protecting the conditions under which real thinking, learning, and craftsmanship continue to thrive."

    "They will need interview loops that test reasoning, not just polished answers."

    "The organizations that handle this well will not be the ones that simply push A.I. adoption hardest. They will be the ones that learn to separate leverage from dependency, acceleration from imitation, and genuine capability from convincing output."
^ Which of these are your thoughts? They all look like slop to me.

boesboes

Don't do it, man. Just stop. You lose something of yourself when you turn to AI.

gombosg

Right at the top: "That distinction matters more than people think." That's basically a telltale sign of AI :)

Also the entire framing around "judgment" and "taste" is what LLMs love to parrot about the topic.

There are fair arguments in the post, but I totally agree that "writing is thinking", and I also hold myself to "if you didn't bother to write it, why would I bother to read it?"

undefined

[deleted]

regular_trash

One of the many things that has been strange to me is how often people will label written thoughts as AI slop when the "signs" are just normal phrases. Sure, that's a tired expression, and I 100% agree we should be critical of writing that leans on pointless, trite expressions. But people wrote that way for years before LLMs.

I find it very interesting that we have more widespread discourse around the quality of prose and rhetoric only now that LLMs have become ubiquitous.

chromacity

> I was surprised not to see any discussion on whether the author used AI to help write this post.

It is definitely AI-written, far beyond "AI assisted". This is a shower thought turned into a needlessly long machine-generated essay that doesn't say anything a chatbot wouldn't say if you said "hey ChatGPT, write me a thought-provoking essay on <x> for HN".

I made a comment about it, along with several other folks, but the thing is... we get these AI-written "AI is bad" / "AI is great" articles multiple times a day. Debating them doesn't scale, but neither does complaining about them, and especially not complaining in a thoughtful way. Most people on HN are content to argue with a machine.

ua709

What counts as AI help, and should it therefore be disclosed? For example, I often use Grammarly to edit some of my more important writing (but not this post, obviously), because it does find grammar mistakes, it does give good readability suggestions (I have a tendency to be wordy), and it saves time. I don't always take its advice, as many of its suggestions are not in my voice, but it is a useful tool. So do I disclose?

mediaman

This article looks entirely written by AI to me. It's difficult to buy that it served as a mere light editing tool.

It hit me in the very second sentence:

"The danger is not that A.I. will make people lazy in some vague moral sense. It is that it makes it easy to simulate competence without building competence."

Ironically, given the article's point, using AI so heavily in writing makes us worse writers. There's something in polishing words by hand, over many essays over years, that makes us good writers.

I see it in AI-generated UI, as well. There's some lack of theory of mind the AI has about its users. It seems fine for functional stuff -- I wish a lot of crummy local state government websites were done in AI, because AI seems to still have better theory of mind than lowest-cost bidders -- but it still leaves an "I didn't care enough to polish this myself" feel to it.

In that way AI UI feels very similar to AI writing.

I think the author still has good ideas here, but the problem with having AI write out your good ideas is that it generates the smell of thoughtless AI slop so strongly that even if your thoughts are good, the odor completely masks it.

I suspect people will start learning to not have AI assist too much with their writing, because it will actively hurt the interpretation of their ideas, no matter how good their ideas are. Better to be a bit rough around the edges but with good ideas than slop, because readers will make their judgment early.

madibo3156

Using AI to write code makes more sense because the grammar of a programming language is vastly smaller, and because of all the practical repercussions that come with that.

Think verifiability, yes, but also tone, terseness, etc., and how big those "problem spaces" are in written language versus a programming language.

AtNightWeCode

Did not read it all, but it's most likely AI. I read the beginning and that was just garbage, in my opinion. I did run the whole piece through my shortening tool and the text came back longer than normal, which is also a tell that it wasn't written by a two-legged.

manoDev

"Social media should bring people together instead of being used to set them apart."

"Economy should improve allocation of resources universally, instead of allowing a handful to hold what many are lacking."

And so on...

Society has advanced in technology but didn't progress the same way in philosophy.

benrutter

> Economy should improve allocation of resources universally, instead of allowing a handful to hold what many are lacking

I 100% take your point here, but it's worth saying the modern economy has improved resource access in a lot of ways. It's also worth saying that wealth distribution looks very different between countries[0].

None of this goes against what you've said, really, but I think it's important to point out that there are multiple societies, with different values. We can shape and change the ones we're in.

[0] https://en.wikipedia.org/wiki/List_of_sovereign_states_by_we...

tombert

I'm no fan of social media and I'm not even a big fan of capitalism, but isn't this kind of reductive? You can always point out bad things and try to extrapolate them outward.

We live in a time with an extremely low murder rate (historically speaking), a very high literacy rate, the ability to keep in touch with family members across the world extremely cheaply, eradication or near-eradication of lots of diseases. If I were to extrapolate that outward then I could do the same kind of logic to talk about how great everything is.

I guess I just find this kind of doomerism to be extremely tempting but ultimately just reductive.

manoDev

Humanity's doom would be failing to recognize how amazing advancements in one area don't necessarily translate to another – not the opposite.

lauren_knows

We're still human, and have human tendencies. I feel like the tools are out there for us to be great, but they certainly gently push us toward being degenerates :/

manoDev

That's why philosophy (in the original sense of "striving for wisdom") is important. We're thinking animals after all, and can be more than dumb consumers, victims of our instincts.

buddhistdude

Well, at least you yourself can be aware of the danger that AI could assimilate your thinking; I think that's the message of this post.

manoDev

The risk is the next generation, who will not have lived in a pre-AI world, won't know the difference to compare.

renegade-otter

"Our technology is meant to bring people together - so they can fight about a gold-drenched ballroom they will never set foot in, and hate each other".

Waterluvian

I think AI can generally be utilized in two ways:

1) you use it to help write code that you still “own” and fully understand.

2) you use it as an abstraction layer to write and maintain the code for you. The code becomes a compile target in a sense. You would feel like it’s someone else’s code if you were asked to make changes without AI.

I think 2) is fine for things like prototypes, examples, references. Things that are short lived. Where the quality of the code or your understanding of it doesn’t matter.

I think people get into trouble when they fool themselves and others by using 2) for work that requires 1). Because it’s quicker and easier. But it’s a lie. They’re mortgaging the codebase. And I think the atrophy sets in when people do this.

kylebyte

And any push to use 2 to build infra to make 1 easier is hard to sell when a lot of engineers think AI will be able to perfectly do 1 in some nebulous time in the near future.

p_stuart82

The thing is, it doesn't even feel like mortgaging. You're shipping, features are going out, everything looks fine. Then something breaks and you realize you can't debug your own code without asking the model again.

pona-a

It feels like an addiction. Normal coding requires sustained attention; you can sense how deep you are in the process and when you're too tired to continue. But with LLMs the next feature always feels like just another prompt away, with sessions going well into the late night and early morning. You rationalize that you can quit, that you've been reading the source and each diff enough to "understand" the codebase. But the truth is that when the rate limit runs out, you'll be absolutely helpless, crawling back for extra usage, until you finally see the total bill at the end of the month.

anygivnthursday

It also feels like another nail in the coffin for our attention. Smartphones, IM, notifications, and new media have already destroyed a good deal of it, and AI seems to be doing the same to coding. Do more, faster, just ask the AI, don't spend your time on this or that; you can meanwhile switch your attention elsewhere, maybe to another AI, quick.

billbrown

I use it both ways:

1) Day job

2) Side project

It would be unprofessional to treat the first like the second.

rimliu

I did the same. 2) was more of a curiosity, to see how quickly it would paint itself into a corner. It may not be there yet, but it's close enough that I'm considering taking over even for the side project.

tabwidth

[dead]

jasonjmcghee

There are plenty of engineers that couldn't work without a modern IDE or in languages without memory management.

Or without the ability to use a library from GitHub / their package manager.

It doesn't feel THAT much different to me.

"Engineer" as a term might drift. There are "web developers" that can only use webflow / wordpress.

embedding-shape

> couldn't work

"Couldn't", or "wouldn't"? Early in my career I'd be happy doing anything basically, not much I "couldn't" do, given enough time. But nowadays, there is a long list of things I wouldn't do, even if I know I could, just because it's not fun.

themafia

It should probably be "would initially struggle to be as efficient without them."

This is not a binary.

Jcampuzano2

Engineer as a term has already drifted vastly, since nobody in the field of "Software Engineering" is actually an Engineer if we go by a strict definition.

Engineers are accredited and in some countries even come with a title.

keeda

> ... nobody in the field of "Software Engineering" is actually an Engineer if we go by a strict definition.

This is a pet peeve of mine, so while I understand what you mean, I will challenge you to come up with a strict definition that excludes software engineering!

And since I've had this discussion before, I'll pre-emptively hazard a guess that the argument boils down to "rigor", and point out that a) economic feasibility is a key part of engineering, b) the level of rigor applied to any project is a function of economics, and c) the economics of software projects span a very wide range.

Put another way, statistically most devs work on projects where the blast radius of failure is some minor inconvenience to like, 5 users. We really don't need rigor there, so I can see where you're coming from. But on the other extreme like aviation software, an appropriately extreme level of rigor is applied.

coldtea

>I will challenge you to come up with a strict definition that excludes software engineering!

"Structured, mature, legally enforced, physically grounded standards based approach to the construction of repeatable, reliable, verifiable, artifacts under stable (to the degree that matters) external constraints".

Some niche software development (e.g. NASA/JPL coding projects with special rules, practices, MISRA etc) can look like that.

99.9% of the time, though, software "engineering" is an ad hoc, mix-and-match, semi-random, half-art half-guess process with always-changing requirements and environments, carried out by unlicensed practitioners, and regulated only in some minor aspects of its operation (like GDPR or accessibility requirements), if that.

Jcampuzano2

I don't really disagree with you. I was just pointing out how the parent mentioned how "engineering" is changing when it already has changed many many times.

Of course I want the best of the best who are top notch and rigorously trained working on mission critical software.

2OEH8eoCRo0

It's a pet peeve because the truth hurts. We (most of us) aren't doing anything that resembles engineering.

skywhopper

“Accredited”

undefined

[deleted]

analog31

Engineers are accredited in the US too. But there is an "industrial exemption" that allows you to work as an engineer without a license for certain kinds of employers. You just can't offer engineering services to the public without a license. This is more important in some fields than in others.

Where I work, there are plenty of non licensed engineers, but we pay a 3rd party agency for regulatory approval. The people who work for that agency are licensed engineers. Their expertise is knowing the regulations backwards and forwards.

Here's what I think is happening within industry. More and more of the work done by people with engineering job titles consists of organizing and arranging things, fitting things together, troubleshooting, dealing with vendors, etc. The reason is the complexity of products. As the number of "things" in a product grows as O(n), the number of relationships between them grows as O(n^2), so the majority of the work has to do with relationships. A small fraction of engineers engages in traditional quantitative engineering. In my observation, the average age of those people is around 60, with a few in their 70s.
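A quick sketch of that scaling claim: n "things" admit n*(n-1)/2 possible pairwise relationships, which grows quadratically.

```python
# n components -> n*(n-1)/2 possible pairwise relationships.
def pairwise_relationships(n):
    return n * (n - 1) // 2

print(pairwise_relationships(10))   # 45
print(pairwise_relationships(100))  # 4950
```

So a 10x increase in parts means roughly a 100x increase in relationships to manage.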

gombosg

I started my career as a machine designer (mechanical engineering), designing some machines for FMCG factories.

It wasn't that much different from SWE - mostly looking up catalogs, connecting certain pre-made pieces together with custom parts and lots of testing of the final plan to make sure there are no collisions and every movement is constrained properly.

95% of the time, no load or sizing calculations were necessary. We just oversized everything based on tacit knowledge (the greybeards reviewing the plans), since these machines were not mass-produced and choosing somewhat bigger parts was not expensive, given that they would operate and produce value 24/7 for years.

(I hope the analogy to software engineering is visible!)

What I'm saying is that the level of "engineering rigor" heavily depends on the field where engineers are operating within. Even certain SWE fields (healthcare, finance, aviation etc.) have more regulation and require more rigor than others.

lkmill

as an actual engineer i just feel sad. i should probably feel happy but i like solving problems. fml i have become a luddite.

therealdrag0

I get it. But there’s plenty of engineering to do in any serious system. I am in a very AI forward company using AI for everything, but I still am solving engineering problems every day.

jjtheblunt

i think you accidentally overlooked accredited engineers who happen to be writing software

Jcampuzano2

Of course there are engineers who write software, I'm just speaking about the majority of roles where thats not the case.

umanwizard

The concept of engineer predates the accreditation systems you’re referring to by centuries.

torben-friis

The huge difference is that we don't know the cost we're going to end up with.

Will you have AI at the cost of a Slack subscription? At the cost of a teammate? Or will it not be available at all, so you'll have to hire Anthropic workers with AI access?

heipei

Local AI models are already more than capable of writing code that surpasses the ability of any bad or even mediocre engineer. That is not something we need to worry about.

In a way, this is less of a cost issue than the fact that some/many engineers no longer seem willing or able to host things themselves and will happily outsource every part of their stack to managed services, be it CDN, hosting, databases, etc. I don't know why that's not more alarming than the LLMs.

girvo

Qwen 3.6 27B is shockingly good, just to add to your point.

guelo

Thank goodness for China or Silicon Valley capitalists would be locking us down into an unimaginably awful dystopia. Though they're not done trying.

adamddev1

All those examples are fundamentally different because those are hard-coded, deterministic programs/algorithms/libraries.

bpye

At least today, it isn't practical for most people to run these models locally. I think adding a dependency on a cloud service is different enough from a local (possibly open source) tool like an IDE.

StrauXX

Self hosting at a reasonable scale is much cheaper than people think. I am running clusters of DGX Spark machines with BiFrost load balancers in our company and for client projects. They work flawlessly!

128 GB unified memory, an Nvidia chip, and an ARM CPU for just around 3k€ net. They easily push ~400 input and ~100 output tokens per second per device on, say, gpt-oss-120b. With two devices in a cluster, that's enough performance for >20 concurrent RAG users or >3 "AI augmented" developers.

And they don't even pull that much power.
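Back-of-envelope arithmetic behind those user counts (using the figures above as given, not as independent benchmarks):

```python
# Claimed figures: two devices, ~100 output tokens/sec each,
# shared by 20 concurrent RAG users.
devices = 2
output_tps_per_device = 100
concurrent_users = 20

per_user_tps = devices * output_tps_per_device / concurrent_users
print(per_user_tps)  # 10.0 tokens/sec per user
```

~10 output tokens/sec per user is roughly human reading speed, which is why the cluster feels adequate for interactive RAG use.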

byzantinegene

Factor in depreciation and energy costs, and a subscription might just be cheaper.

jasonjmcghee

Slack, GitHub, Figma, AWS, etc

Lots of people use firebase, supabase etc.

Many people's jobs are centered around using Salesforce

It all makes me uncomfortable. I want to be able to work without internet, but it's getting more difficult to do so.

ares623

"What kind of engineer are you" - Jesse Plemons wearing bright-red sunglasses

vict7

IDEs are free. Libraries are free. Languages are free. This is becoming more like an internet subscription where you’re at the mercy of Anthropic the same way you may be at the mercy of Comcast.

I’m sure you can see the difference between a garbage collector and a nondeterministic slop generator

But it feels good to equivocate, so here we are.

yjftsjthsd-h

> IDEs are free. Libraries are free. Languages are free. This is becoming more like an internet subscription where you’re at the mercy of Anthropic the same way you may be at the mercy of Comcast.

Ollama/llamafile/vllm/llama.cpp are free. Qwen/kimi/deepseek are free. Pi.dev/OpenCode are free. If you're using a SaaS AI subscription that's fine, but that's hardly the only option.

folkrav

The comparison to me sounds like "you dont have to take a plane to travel between countries, paddle boats exist".

c-hendricks

How much does the hardware to run them on cost? Especially to get decently sized models running at decent speeds.

thunky

Not all IDEs are free. Not all LLMs are subscriptions.

vict7

> Not all

is doing a lot of work to avoid engaging with the actual argument.

undefined

[deleted]

throw4847285

This is what happens when the people building your society's advanced technology have no theory of mind. Too much science fiction, not enough human interaction.

The Social Network is looking more and more prophetic. When your only ambition is success for its own sake, driven by insecurities you can never name, you're going to make things that don't actually serve other people, or only do as an unfortunate side effect of you making money and gaining power.

Matticus_Rex

Hmm? There are a number of top AI people who make this exact point, though, and are trying to drive things toward elevating thinking. There's more that can be done, but quite a bit is a user mindset issue that's just going to have to shake out over time.

throw4847285

You are right that there are a large number of top AI people who are very concerned with the ramifications of AI. I would say there are two core issues.

The first is that these people have often been indistinguishable from the ambitious and power hungry people I was decrying. Sam Altman was able to blend in for a long time by copying the rhetoric of AI safety types. When there is this much money to be made and power to be amassed, it's not hard to pretend to care.

The second is that I have often been disappointed by what the AI safety folks are concerned about. There has been a huge amount of talk about existential risk and not nearly as much about, say, the impact on children if education is outsourced to AI. The obsession with science fiction led to some very out there scenarios that may or may not still happen, but have nothing to do with the very real impacts of AI on people's lives right now. I believe that even the well intentioned have been too detached from humanity as a whole.

sayYayToLife

Do you live in the US? Because there is a military reason for chasing success for success's sake. The US doesn't really have a choice here. We live in a unipolar (or bipolar) world, and the US must be number one, or the international system breaks down and we can expect incredible amounts of war. (It's generally agreed that, historically, multipolar worlds are the worst to live in.)

If you don't live in the US and you are taking advantage of the US security umbrella, sure, you can deny AI and enjoy a curated lifestyle.

But living in the US means we must deal with this.

throw4847285

I guess I appreciate the explicit realpolitik of it all, but I'm not sure I buy your argument. The US is the world's current dominant global empire, and unlike other leftists, I don't believe this is inherently a bad thing. But I don't think it's necessarily a good thing either. It's just the reality.

I also think your whole unipolar vs. multipolar framework is ahistorical. It's always been more secure living within the cosmopolitan center of an empire, but there's never just been one empire.

I just think your view of history is too simplistic to be accurate or interesting. If/when the US declines as a global power, the results will be entirely unpredictable. They won't adhere to the kind of formula you're describing.

kolja005

This is an interesting perspective I haven't heard before. Do you have links to anyone who has articulated this further?

erdaniels

This feels more and more AI-written as the post goes on. Either way, I'd like for us to stop fetishizing how we can use AI to make us stronger, better, and more valuable engineers. It's exhausting and doesn't consider other ways to use it. I've only been using it lately for tasks that are a step or two above Google. Having it write code for me has just been a slippery, unfulfilling slope.

zozbot234

Apparently the code-writing part works a bit better in languages like Rust, and perhaps Swift, where the compiler is unforgiving in rejecting outright nonsense and the AI can iterate on any errors it gets. Of course, logically flawed code is always possible, so this does not replace human review. But code in these languages is also a bit more compact and hopefully easier for a human to understand.

pydry

I wish people would stop pretending that agentic coding and elevated thinking aren't mutually exclusive.

There's way too much money on this hype train now to point out that the emperor isn't wearing any clothes, and way too many people who always did think that "boilerplate spew" (the one thing AI really does well) is a valid form of programming rather than a shortcut to tech debt.

iewj

All this stuff is proving to me one thing: we don’t need to get better at producing stuff faster. We need to get better at producing the right stuff. Right stuff meaning getting better at project selection.

I guarantee a firm that is really good at project selection with hand written code will annihilate a firm that is full of agentic engineers. And frankly I hope the outcome is that all those who went all in on agentic coding go out of business because they got beaten by disciplined and visionary leaders who understand this subtle and nuanced point.

deadbabe

Faster faster faster, everything faster except for fixing the climate, sheltering the poor, healing the sick, reducing inequality. Until we are planning to do these things with the same speed and enthusiasm as pushing out more crap for people to buy, the only thing we are moving faster toward is our own doom.

vasco

> I don't know how to do something so nobody else knows either

pydry

They definitely all believe they can.

crakhamster01

Pasted it into Pangram AI and it classified the article as 100% AI generated, so take that as you will...

koshyjohn

Ironically making the case for the thesis of the piece - what happens when you let A.I. do all the thinking instead of exercising competent judgement. Disclaimers and leaving real thought to others do not make it much better. Pangram is confidently wrong here.

crakhamster01

I read the piece before coming to the comments and had a similar feeling as OP - hence my comment. If AI wasn't used then my apologies, I didn't mean to diminish your work.

I agree with the premise of your post, just felt it was a bit long and the section headers read a little weird.

y-curious

Then why is the title photo very obviously AI-generated? I feel like it puts people on alert right off the bat.

freetime2

The scary thing is I have seen high level directors and executives say “I asked ChatGPT and it agreed with me” as a way to try to settle a debate. People seem all too willing to delegate even matters of judgement to AI.

On the other hand I have been in debates where someone asks ChatGPT to draft a list of possible approaches and pros and cons - and after reading through the list we were all in alignment on the best approach.

The latter I think is a constructive use of AI to elevate thinking, while the former has me thinking it may be time for a career change.

sesm

To make an exhaustive list of possible options you need to find key questions that divide solution space. This requires logic, which LLMs lack.

falcor84

> This requires logic, which LLMs lack.

What? I've heard many takes on what AI lacks, but never this one. We had ChatGPT being able to solve an Erdős problem on its own yesterday [0]; how could you explain that if it cannot do logic?

[0] https://news.ycombinator.com/item?id=47903126

sesm

The LLM didn't solve an Erdős problem; it generated text that a human looked at, cleaned up, corrected, and used as the base for a solution.

WRT logic, there are multiple occasions of LLMs answering trivial logic puzzles incorrectly. Of course, each occasion that becomes public is added to the training data and overfitted on, but if you embed the same puzzles in a more subtle way, LLMs will fail again.

CorbenDallas

There are plenty of engineers who simply can't think; AI will not change anything in this regard.

quantum_state

Can't think properly seems to be the real issue. That's one of the reasons the SE domain is mostly in ruin. AI won't help; it will only delay a bigger mess.

taurath

Ever since the standard office setup went from offices or cubicles to bullpens and hot desks, there has been less and less time to think, and all of that is a management decision to ship things as fast as possible.

joe_mamba

How do you graduate with an engineering degree without being able to think?

Even my colleagues who cheated their way through uni still needed critical thinking to do that and to get away with it without being caught.

People might hate this but being a good cheat requires a lot of critical thinking.

lispisok

Grade inflation, and schools passing kids who should fail in order to game metrics and keep collecting student loans, is a problem. I wouldn't consider hiring anybody from my alma mater who didn't score a standard deviation or more above the mean on the tests.

23df

Unis, IMO, are irrelevant in the context of software production. I'd take someone who didn't finish or dropped out, provided they can answer the question below.

The only thing worth asking people is: what have you produced? Within this one question is so much detail that any other artifact is moot.

ironman1478

You don't need a 4.0 to graduate. And even if you got one, a lot of grades are composed of tests, not projects. You can just memorize your way through things if you were dedicated enough.

It's not really that hard to get a degree in engineering if your only goal is the degree itself.

sersi

That does seem to depend on countries and universities.

I do have to say I was appalled by some of the tests I had as an exchange student in the US (I won't name the uni in question, but it ranked around 60 in the US rankings). I remember a computer graphics test where a lot of questions were of the type "Which companies created the consortium maintaining the OpenGL specification?"... It was fully possible to obtain a passing grade through rote memorization of facts. So I have no trouble believing that in the US it's possible at some unis to get a software engineering degree without understanding or critical thinking.

johndough

> a lot of grades are composed of tests, not projects

(Take home) projects are easier than ever thanks to AI. In the past, you at least had to track down some person to do the work for you.

vips7L

Half of my graduating class could barely program.

whstl

Yep. Way more than half of the people I interview can't even do a very basic FizzBuzz, even with guidance. Those are people with a degree, job experience and reference letters.
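For context, the bar being described here is genuinely low. FizzBuzz is just a short loop with three divisibility checks; a minimal sketch in Python (any mainstream language works the same way):

```python
# FizzBuzz: for 1..100, print "Fizz" for multiples of 3,
# "Buzz" for multiples of 5, "FizzBuzz" for multiples of both,
# and the number itself otherwise.
def fizzbuzz(n: int) -> str:
    if n % 15 == 0:  # divisible by both 3 and 5
        return "FizzBuzz"
    if n % 3 == 0:
        return "Fizz"
    if n % 5 == 0:
        return "Buzz"
    return str(n)

if __name__ == "__main__":
    for i in range(1, 101):
        print(fizzbuzz(i))
```

The classic gotcha is checking 3 and 5 before 15, which prints "Fizz" for 15 instead of "FizzBuzz"; that ordering mistake is exactly the kind of thing interviewers watch for.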

spacechild1

What did you study?

patrick-elmore

I've seen it happen multiple times. Engineering degrees are no different from the vast majority of degrees in that if you are good at the read-and-regurgitate cycle, you can make it through. Not only can you make it through, but you can do it with a very respectable GPA. Graduates come out with a large dictionary of keywords in their arsenal but no idea how to put them into practice. Some are able to put it into practice and tie it all together: as they see practical examples of those keywords in the real world, the dominoes start falling, and at an accelerating rate. For others, it never goes much beyond keywords. The dominoes fall, but slowly, and they stop falling for extended periods of time. Not many mature engineering organizations can tolerate that sort of progression rate. Those people usually don't last very long at any one place, until they find a company where they can blend into the background thanks to a combination of company culture and the low complexity of the systems being worked on.

YZF

The practice of software engineering is not what they teach in university.

I would say that today's graduates are, IMO, a bit better than those of a few decades ago, but there are still many graduating who are just not good at writing computer software and don't really have the aptitude for it (or maybe the interest in getting good). That's what happens when the pipeline of people coming in is people who want to make money and the institution is mostly a degree factory.

spacechild1

OP should have put "engineers" in double quotes. Many software developers like to describe themselves as engineers although they don't have an actual engineering degree. A lot of software development resembles plumbing more than engineering, so most devs don't really need an engineering degree anyway, but they should be more honest about what they're actually doing and not try to elevate themselves with fancy titles.

You are, of course, right that the idea that someone could finish a serious engineering degree without being able to think is ridiculous.

dml2135

You can do engineering without an engineering degree. A degree is just a piece of paper.

what-the-grump

I don't know, but I can point at more than half of the people I work with who can't think, and every time they try, it takes a whole group of people who can think to undo their mess. They all have degrees and I don't.

So what does that tell me?

Better yet, for about 30% of them, having the LLM produce the slop directly would have yielded better outcomes; having them produce it themselves nets even worse slop. At least LLM output I can reshape, because even the LLM won't do something that stupid.

shagie

A degree is passing the test. Not all degree programs get into more advanced topics nor do they necessarily require that someone is able to work through how to solve a problem that they haven't seen before.

--

A lot of students (and developers out there, too) are able to follow instructions and pass the test.

A smaller portion of them are able to divide up a task into the "this is what I need to do to accomplish that task".

Even fewer of them are able to work through the process of identifying the cause of a problem they haven't seen before and work through to figure out what the solution for that problem is.

--

... There are also a lot of people out there that aren't even able to fall into the first group without copying and pasting from another source. I've seen the "stack sort" at work https://xkcd.com/1185/ https://gkoberger.github.io/stacksort/ professionally. People copying and pasting from Stack Overflow (back in the day) without understanding what they're writing.

Now, they do it with AI. Take the contents of the Jira description, paste it into some text box, submit the new code as a PR, take the feedback from the PR and paste it back into the box and repeat that a few times. I've seen PRs with "you're absolutely correct, here are the updates you requested" be sent back to me for review again.

This is not a new thing. AI didn't cause it, but AI is exacerbating the issue in professional programming: the people who are not much more than some meat between one text box and another (yes, I'm being a bit harsh there), and the people who need instructions but don't understand design, become more "productive" while overwhelming the more senior developers.

... And this also becomes a set of permanent training wheels on developers who might be able to learn more if they had to do it. That applies at all levels. One needs to practice without training wheels and learn from mistakes to get better.

jfreds

I agree in part, but I think AI does meaningfully make it harder for leadership to detect their bullshit.

randusername

In strength training circles there's question of "what are you lifting for?"

For many people the root answer is insecurity. There's nothing inherently wrong with this (you could do a lot worse for an outlet), but you ought to be honest about the lengths you're willing to go to in achieving the appearance of strength, since your goal isn't strength itself. Poor form, injury from cutting corners, steroids: these are all temptations that the guy hefting concrete buckets alone in his garage won't face.

I think the programmers that write code for the joy of creation and problem-solving won't have much trouble holding onto their expertise. The ones that were never that way, or that had it burned out of them, are the ones in danger.

gwbas1c

Another comment in this thread points out that the people having trouble are the ones that previously used to copy & paste stack overflow without understanding it.
