Brian Lovin
/
Hacker News
Daily Digest email


jdw64

The real issue, in my view, is not AI itself.

The problem is a management pattern: removing people and organizational slack because they don’t generate immediate profit, and then expecting the knowledge to still be there when it’s needed.

Short-term cost cutting leads to less junior hiring, and removes the slack that experienced engineers need in order to teach. As a result, tacit knowledge stops being transferred.

What remains is documentation and automation.

But documentation is not the same as field experience. Automation is not the same as judgment. Without people who have actually worked with the system, you end up with a loss of tacit knowledge—and eventually, declining productivity.

AI is following the same pattern.

What AI is being sold as right now is not really productivity. In many domains, productivity is already sufficient. What’s being sold is workforce reduction.

The West has seen this before, especially in the case of General Electric.

GE pursued aggressive short-term financial optimization, cutting costs, focusing on quarterly results, and maximizing shareholder returns. In the process, it hollowed out its own long-term capabilities. It effectively traded its future for short-term gains.

The same mindset is visible today.

The core problem is that decision-makers—often far removed from actual engineering work—believe that tacit knowledge can be replaced with documentation, tools, and processes. It cannot.

Tacit knowledge comes from direct experience with real systems over time. If you remove the people and the learning pipeline, that knowledge does not stay in the organization. It disappears.

vishnugupta

> removing people and organizational slack

You are spot on w.r.t every assertion you've made. When bean-counters took over the ecosystem they optimised immediate profitability over everything else. Which in turn means, in their mind, every part of the system needs to be firing at 100% all the time. There's no room for experimentation, repair, or anything else.

I've commented about lack of slack several times here on HN, because when I notice a broken system nowadays, 90% of the time it's due to a lack of slack in the system to absorb short-term shocks.

aleqs

The problem is, in the minds of these people 'firing at 100% all the time' generally means doing busywork and/or thinking of ways to cheat/manipulate their customers and the market for maximum gain while delivering minimum value. I would have loved to be 100% engaged working on solving real problems in honest ways at some of my past jobs, but alas MBA/marketing leadership, which has taken over much of tech, has very little interest in actually building good things.

joquarky

This is what happens when companies become so nepotistic that they only believe in their own bullshit.

"Can they really breathe fire or did we make that up?"

buzzerbetrayed

> generally means doing busywork and/or thinking of ways to cheat/manipulate their customers and the market for maximum gain while delivering minimum value

When I read comments like this I can’t help but wonder where people like you work. It’s completely unrelatable to me. I work with really good people, all the way to the top, and we try to make money by increasing value for our customers.

Apple, Google, Walmart, Amazon, Home Depot, Anthropic, Toyota, and a hundred other companies all offer me incredible value for so cheap. Why are people so cynical about a world that offers them unimaginable riches everywhere they look?

Sure there are bad companies. And if you work at one of those, go get a new job.

WalterBright

Profit maximization is a continuous process that has generated our high standard of living.

P.S. I welcome all attempts to prove me wrong!

t-3

I think the bean counters get a bad rap for this a bit unfairly. The past century has seen more progress in knowledge and technology than the rest of human history combined. The world and business environment are changing too rapidly to make longtermist thinking practical.

Few care if you have a lifetime warranty and excellent service or replacement parts if the majority will upgrade in a few years! Mature technologies increasingly become cheaply available as services, e.g. laundry, food, transportation. That further reduces demand on production, as many can get by with the bare minimum and don't need the highest-quality, longest-lasting appliances. Software is even more ephemeral and specialized.

Developing education and training pipelines is wasting money if the skills you need are constantly changing! There is plenty of "slack" in the workforce so this works just fine in most cases - somebody will learn what they need to get paid. There are very few fields where qualified worker shortages are a real problem.

R&D can be outsourced or bought and subsidized by the government in universities, so why do everything yourself? Open source software has even further muddied the waters. Applications have only a limited lifetime before being replicated and becoming free products (this has only been intensified by the introduction of AI), so companies develop services instead.

Technology and knowledge deepening and rapidly becoming more specialized makes the monolithic corporation much less practical, so companies also need to specialize in order to effectively compete. Going too far in the name of efficiency can destroy core competencies, but moving away from the old model was necessary and rational.

aleph_minus_one

> R&D can be outsourced or bought and subsidized by the government in universities, so why do everything yourself?

Because some problems that many companies in very specialized industries work on are so special that outside of this industry, nearly all people won't even have heard about them.

Additionally, many problems companies have where research would make sense are not the kind of problems that are a good fit for universities.

gopher_space

> Developing education and training pipelines is wasting money if the skills you need are constantly changing! There is plenty of "slack" in the workforce so this works just fine in most cases - somebody will learn what they need to get paid. There are very few fields where qualified worker shortages are a real problem.

Here's the problem with your reasoning. This paragraph is simply wrong, with each sentence being untrue. Education and training are never wasted money, the skills aren't changing that quickly, there isn't any slack in the workforce, and qualified worker shortages are being reported in every trade across the board. Someone needs to solve the problems you hand-wave away.

> this works just fine in most cases - somebody will learn what they need to get paid.

That's me. I specialize in learning new domains. I cost like 8x more than the random junior you'd be able to hire with a functional onboarding program.

danmaz74

"The world and business environment are changing too rapidly to make longtermist thinking practical." Tell that to the Chinese...

watwut

Universities don't do product-oriented research. They do more general research. And also, they should not do product-oriented research; that is the companies' role.

And universities research capabilities are being destroyed too right now.

acomjean

I’ll note that at the end of the last century I worked at IBM Research, which had a budget of $6 billion. Management was trying very hard to get a better return on that investment. Even today, IBM, though often ridiculed in the tech space (sometimes deservedly), spends a lot on R&D.

NordStreamYacht

Lucent at the same time went through the same issue: how to monetise Bell Labs.

Bell Labs greatest work came out when AT&T was a monopoly. Once they were broken up (1984?) they started feeling the pain.

When the Lucent spinoff took place, the new entities had no Monopoly money to fund unconstrained research while management's behaviour never changed.

I don't know how BL fared under Alcatel and now Nokia, but haven't heard of anything interesting for years.

rvba

Did anything come out from those billions?

numpad0

> Which in turn means, in their mind, every part of the system needs to be firing at 100% all the time

Not just that, you have to be always doing less for more gains. Real work is bad work. Shrinkflation good. I don't know what it is if not a pure scammer mindset.

flybrand

> Which in turn means, in their mind, every part of the system needs to be firing at 100% all the time

This is a classic Goldratt / Theory of Constraints mistake.

chanux

> When bean-counters took over the ecosystem [...] in their mind, every part of the system needs to be firing at 100% all the time.

This is only fair, because they themselves are firing at 100% all the time IYKWIM ;)

stephen_cagle

I believe private equity ownership represents this in an aggressive form. The "2 and 20" fee structure that PE usually mandates as part of their purchase agreement means that they are highly incentivized to maximize short-term "wins" over long-term survival.

I think Chesterton and Taleb also had pretty reasonable things to say about understanding a system before you change it, and about fragile/antifragile systems.

abustamam

I haven't read this book but I see it often mentioned in contexts like this. It was written in 2001 and I think its synopsis still stands.

Slack by Tom Demarco (2001)

https://www.goodreads.com/book/show/123715.Slack

port11

It’s especially ironic since the bean counters produce no value. I like ‘Developer Hegemony’, even if the title needs changing. The author makes a great case for why information workers produce almost all the value. It’s them that make the profits, yet they’re always a cost center.

netcan

> In many domains, productivity is already sufficient. What’s being sold is workforce reduction.

This is a blind spot for many. People working on entrepreneurial projects need to build a lot. They start with nothing. They need (for example) features. There's a lot to do.

Most firms are not that. Visa, Salesforce, LinkedIn or whatnot. They have a product. They have features. They have been at it for a while. They also have resources. They are very often in a position of finding nails for a "write more software" hammer.

It's unintuitive because they all have big wishlists and to-do lists and A/B testing systems for pouring software into, but...

If there were known "make more software, make more money" opportunities available, they would have already done them.

Actual growth and new demand need to come from arenas outside of this. E.g., companies that suck at software (either making or acquiring it) might be able to get the job done.

The problem, bringing this back to the article, is fungibility. A lot of this "human capital" stuff cannot be easily repackaged. It's a "living" thing. Talent and skills pipelines can be cut off, and vanish.

A danger in AI coding (and other fields) is that it leverages preexisting human capital and doesn't generate any for later.

Terr_

> If there were known "make more software, make more money" opportunities available, they would have already done them.

Sometimes they're available, but not palatable, when the opportunity could threaten their existing investments or patterns. That might mean "self-cannibalism", or changing the ecology so that the main product niche is threatened.

Then those opportunities are ignored, or actively worked-against via lobbying, embrace-extend-extinguish, etc.

netcan

Ok... but this just generalizes into the "known things" type.

Whether the reason is strategic (like your example), internal politics, or insufficient knowledge... the point is that there is a local equilibrium, and most mature firms are at this equilibrium.

More resources via AI, at first order, go after the diminishing-returns part of the curve... which is a cliff, especially for the highly resourced firms topping the S&P 500.

A lot of AI optimists' mental models of the economy do not account for this stuff at all.

"Save time/money" outcomes are not similar at all to "make more stuff" outcomes. Firing employees does free up labour... but reutilizing this labour is non-trivial... as this article demonstrates quite well.

lazystar

> doesn't generate any for later.

"any" is quite an assumption.

netcan

I didn't mean this as an absolute statement. Relatively, and in the short term.

lo_zamoyski

I agree that any sufficiently complex human operation - whether industrial or scientific or whatever - requires a culture and a living tradition that develops over time and communicates knowledge and understanding across generations. In fact, many problems in our culture can be attributed to the contempt for tradition that has developed. (It is true that tradition can ossify. That can be a problem with attitudes toward tradition rather than tradition itself, or a sign that something needs to be addressed. A good tradition is a dialogue spanning history.)

However, it is also true that technology develops and produces changes that in the short term cause pain, but in the long term produce a better outcome in some desirable sense. Coding is not an end in itself. Just as switchboard operators and human computers are obsolete, because the conditions that caused the need for them ceased to exist, it may be the case that a certain manual style of programming is also becoming obsolete.

You can imagine human computers decades ago thinking that computing technology is bad, because people will lose numerical facility. But this misunderstands the structure of the value of practical skills and the difference between knowledge of principles and practical skill. Sure, few if any people today can perform numerical computation as quickly and competently in their heads or on paper as human computers, but...

1. that's different from understanding the principles of computation which is closer to a theoretical grasp and has eternal or at least lasting value

2. the value of the practical numerical facility was rooted in the need for obtaining results as quickly as possible, and that particular set of techniques or skills is no longer practical

Perhaps manual coding is like that. I don't know why people are surprised. Generative programming has long been a desired end in CS. CS grads can still and should still learn the principles of their field and learn them well, but the profile of practical industrial techniques and needed skills is changing. As software eats more and more of the world, it is becoming increasingly impractical to manually fiddle with silly bits of plumbing. We obviously haven't been able to develop abstractions well enough to avoid it, and part of the reason is that appetite comes with eating. Once you make something easier, it becomes easier to achieve even greater things... hence new plumbing and implementation complexity.

Let's be honest here. Much of programming is intellectually dull. It's plumbing. It's not algorithmically interesting. It's not interesting from a modeling perspective. It's not interesting conceptually. It's not interesting as a matter of system design. Most programming out in the wild is the same old crap being recapitulated a million times over. If all you want is to become skilled in doing the same thing over and over again, then I can understand why you might find LLMs threatening. Your market value as a maker of yet-another-flask-web-app has plummeted hard. People who enjoy that kind of programming are generally not very intellectually motivated people - at least not where programming is concerned - and likely prefer the tedious comforts of rehearsed ephemeral detail. LLMs can keep us from rabbit holing and focused on the domain.

In any case, I don't think LLMs are a threat to the field per se. I just think that the skill set is shifting and developing. I think we are still figuring out what it means to develop the right understanding and intuitions to develop software without the benefit of having done it manually. Time will tell. However, I also think being able to read code has become relatively more important than writing it. When you have to verify the quality of LLM-generated code and put your name behind it, you have to be able to understand it, and that's a somewhat neglected skill in my view. Programmers very often prefer writing code to reading it. LLMs might be just the thing to coerce an improvement in the latter sort of literacy. With this also comes a greater importance of formal specification. That's where I would expect the future of the field to shift.

aleph_minus_one

> The core problem is that decision-makers—often far removed from actual engineering work — believe that tacit knowledge can be replaced with documentation, tools, and processes. [It] cannot.

I am not so certain:

For example, I think that a lot of my knowledge about the system that I work on could be documented, and based on this documentation someone new could take over the system.

The problem rather is: the volume of documentation that I would have to write would be insane; I'd consider tens of thousands of dense DIN A4 pages to be realistic - and this is a rather small system.

So, a new person who could take over this system would have to cram and understand basically all the details of this documentation insanely well.

This insane effort (write the documentation; new workers on the project then have to cram and understand every detail of this incredibly bulky documentation) is something that no employer wants to spend money on: this is in my experience the real reason why it isn't done.

Joeri

The deeper I wade through Microsoft’s Azure documentation the more I feel the reality of this. There’s so much of it that it basically is unreadable in real terms, most employees will never get the time allocated, and when you do try to exhaustively read up on a specific area you find that the documentation is incomplete and wrong in subtle but important ways. I’m sure Microsoft spends a lot of resources on that documentation, but it seems somewhat of a hopeless mission.

chanux

There are certain things that are too obvious to some person at a given time. Hence they would not consider them worth documenting. Some of those things are important bits and pieces of the theory[1] of the program.

[1] https://pages.cs.wisc.edu/~remzi/Naur.pdf

torginus

I think it's an important property of a system to be documentable, not just documented. What I mean, essentially, is that the system was designed with sound principles, and said principles were written down and followed.

I have seen this work only once in my life, and it was so nice to see, but yeah, most code is just a ball of twine, and even if there was a guiding principle beneath, it has been long abandoned, and overruled, and the only way to understand the system is to take it all in at once.

everforward

I think it’s reasonably easy to design a system that’s documentable and documented. It’s very, very hard to maintain and iterate on a system while maintaining those properties.

Hacky things will make their way in because it takes a month to do the documentable thing and a week to ship the hacky thing.

It takes a lot of skilled people from varying disciplines to figure out what things are going to survive long enough and be important enough to spend the resources doing the right thing instead of the hacks.

It bites both ways. I’ve seen core business products crippled by years of digital duct tape, but I’ve also seen internal tooling that never really becomes useful because they insist on doing the “correct” thing and it’s constantly a year behind what we need it to do.

ianstormtaylor

This is such a weird counter-argument, that only serves to prove OP’s point.

“It’s not that it’s not documentable. It’s just that it would take tens of thousands of pages and no one would be able to write that or read that to effectively take over the project.”

Okay, so surely this is what OP had in mind when they said documentation doesn’t work… Is it no longer safe to assume reasonable expectations when making an argument? Why the need to “well actually” them with this response?

lavp

Documentation should serve as a general overview of the system (purpose, architecture, etc.), and elaborate on the interface of that system. Other than documenting historical relics like ADRs, I see very granular documentation as a net negative.

It quickly becomes outdated and at some point you just need to accept that only the code will be the most accurate source of truth.

iugtmkbdfil834

<< [belief that] knowledge can be replaced with documentation, tools, and processes. [It] cannot. << volume of documentation that I would have to write would be insane

I am not sure those are mutually exclusive. We all know of situations where a person knows of tiny and typically undocumented system quirks. We even have a corporate name for it: institutional knowledge. The issue is that executives think it can ALL somehow be done, when even a cursory real-life project will quickly teach one how insane the average gap between documented and undocumented tends to be. Add to that near-constant changes to APIs, versions, systems, and people, and I can't help but wonder at executives who really do think this way.

wcfrobert

But you've just perfectly described the tacit knowledge problem.

Yes, you can spend all your time writing docs, or just mentor a junior and let them grok the system through osmosis.

Also your doc won't ever have 100% coverage unless you write an absolute tome. Tacit knowledge is the stuff that is so obvious that you wouldn't even think of writing it down in the first place.

paganel

It’s way easier (for this type of scenarios) and far more effective to learn by doing than to learn by reading (even tens of thousands of pages of) documentation, that is the crust of it.

aleph_minus_one

> It’s way easier (for this type of scenarios) and far more effective to learn by doing than to learn by reading

I don't think so: the problem is that there exist lots of parts in the system that are quite complicated but which one very rarely has to touch - except in the rare (but happening) case that something deep in such a part goes wrong or a requirement for this part pops up.

If you "learned by doing" instead of reading, you are suddenly confronted with a very subtle and complicated subsystem.

In other words: there mostly exist two kinds of tasks:

- easy, regular adjustments

- deep changes that require a really good understanding of the system

chrisweekly

crust (edge/border) -> crux (heart/essence)

Fr0styMatt88

I feel like it’s something more fundamental and broad than that. We slowly remove excuses to talk to other people.

The thought crossed my mind the other day — if I’m asking the AI a question, that’s replacing a human interaction I would have had with a coworker.

It’s not just in coding, it’s everything. With ChatGPT always available in your pocket, what social interactions is it replacing?

The thing that gets me is, we are meant to fundamentally be social creatures, yet we have come to streamline away socialisation any chance we get.

I’m guilty of this too — I much prefer Doordash to having to call up the restaurant like in the old days, for example.

MattJ100

We see this in our open-source community. We've had a community channel for over two decades, where community members help newcomers and each other solve problems and answer questions.

Increasingly we have people join who tell us they've been struggling with a problem "for days". Per routine, we ask for their configuration, and it turns out they've been asking ChatGPT, Claude or some other LLM for assistance and their configuration is a total mess.

Something about this feels really broken, when a channel full of domain experts are willing to lend a hand (within reason) for free. But instead, people increasingly turn to the machines which are well-known to hallucinate. They just don't think it will hallucinate for them.

In fact I see this pattern a lot. People use LLMs for stuff within their domain of expertise, or just ask them questions about washing cars, and they laugh at how incompetent and illogical they are. Then, hours later, they will happily query ChatGPT for mortgage advice, or whatever. If they don't have the knowledge to verify it themselves then they seem more willing to believe it is accurate, where in fact they should be even more careful.

strange_quark

> In fact I see this pattern a lot. People use LLMs for stuff within their domain of expertise, or just ask them questions about washing cars, and they laugh at how incompetent and illogical they are. Then, hours later, they will happily query ChatGPT for mortgage advice, or whatever. If they don't have the knowledge to verify it themselves then they seem more willing to believe it is accurate, where in fact they should be even more careful.

The AI companies have taken all the wrong lessons from social media and learned how to make their products addictive and sticky.

I’m a certified hater, but even I’ve fallen into the exact trap you’re describing. Late last year I was in the process of buying a house that had a few known issues with a 30 day close. I had a couple sleepless nights because I had asked ChatGPT or Claude about some peculiar situation and the bots would tell me that I was completely screwed and give me advice to get out of the contract or draft a letter to the seller begging for some concession or more time. Then the next day I’d get a call from the mortgage guy or the attorney or the insurance broker and turns out, the people who actually knew what they were doing fixed my problem in 5 minutes.

ethagnawl

This _is_ all true, but what's also true is that there's a historical pattern (in many communities) of "n00bs" not being or (at least) not _feeling_ welcome. So I can't say I blame people for spinning in circles with LLMs instead of starting with forums or mailing lists where they may be shamed or have their questions closed immediately as "duplicate" or "off-topic" (e.g. SO).

I think if we want newcomers to lead with human interactions, the onus is on us community leaders/elders/whatever to be a little warmer, more understanding, and more forgiving. (Of course, some communities and venues are already very good about all of this and I'm generalizing to make the larger point.)

2ndorderthought

Personally, this type of behavior played a large part in why I left 2 OSS communities.

A lot of the passersby nowadays feel like trolls. They come in copy-pasting ChatGPT responses, spamming that they need help instead of chit-chatting and asking questions. We fix their problems; they don't trust us or understand at all. Or worse, we tell them their situation is unreasonably bad and they should start over, and they scream at us about how some unimaginably bad code passes tests and compiles just fine and how we are dumb.

They tell us we don't need to exist anymore in one way or another. They show off terrible code; we try to offer real suggestions to improve it; they don't care. Then they leave the community once their vibe/agentic coding leaves that part of their code base. Complete waste of time: they learned nothing, contributed nothing, no fun was had, no ah-hahs, just grimy interactions.

torginus

I have switched to OpenWRT during the LLM era. I wanted to set up some special network configs, and ChatGPT happily spit out the necessary configs.

From what little I understood from OpenWRT everything looked fine, but nothing worked. I still to this day have no idea what I (or ChatGPT) did wrong.

I just reset the router, actually took the time to do everything by the docs, and then it worked.

Debugging someone's broken code that never worked is a nightmare I wouldn't wish on anyone.

notnullorvoid

People are losing their ability to reason without prompting an LLM first.

It's affecting their ability to collaborate. They retain the confidence of years of experience, but their brain isn't going through the appropriate process anymore to check their assumptions.

I've seen a similar thing happen to engineers who move into management, but this is now happening at such a large scale.

lxgr

> if I’m asking the AI a question, that’s replacing a human interaction I would have had with a coworker.

Importantly, you're removing a signal: if I'm not asked things anymore, I don't know which aspects of our domain are causing the most confusion/misunderstandings and would thus benefit most from simplified boundaries.

2ndorderthought

There is a lot of wisdom in this.

At the end of the day chatgpt won't be there to hold our hands in the hospital, have a laugh over failing to pick up a date, get invited to a bbq, groan over the state of the code in utils.c, or recommend us for our next job/promotion. They say software is social for a different reason than most of these examples.

It's good to be efficient, whatever that means, but there are no metrics on the gains that get made by talking to people. In a lot of ways those gains are what life is about.

avmich

> At the end of the day chatgpt won't be there

Are you sure it won't?


gonzalohm

I think you are right, but it also makes sense. Human communication is inherently inefficient. Points of view, miscommunication, interpretation... It's the obvious point to automate. Not defending it, just my thoughts

nomel

I have a couple of colleagues that run all communication through an LLM. It really helps their writing, but it does nothing to help their understanding.

It also makes me hate communicating with them because they'll (somewhat obviously) prompt the LLM to make the conclusion they want. For example, "respond to this jira with why this isn't an issue"

hnthrow0287345

You could have done this with Google search or Wikipedia or reading through books though

musebox35

I am rereading the Asimov robot novels. A decrease in human-to-human interaction is a major side effect that he foresaw. Decreasing interaction and collaboration are some of the core themes.

throwaw12

This shows the Western government system is broken.

In ideal world (where we don't live):

* Corporation - optimizes for mid-to-short term profits (remove slack, run everything thin)

* Government - optimizes for long term profits (introduce regulations to keep the slack time, keep and attract the talent so state gets better)

* Individual - optimizes for their life time (career, family and tries to leverage market conditions to learn skills and get more opportunities from existing pool)

In the West, the government is optimizing for "loads and loads of moooney", because of lobby groups and the MBAs controlling the corporations, who push these ideas through lobbies.

sph

> In the west, government is optimizing for "loads and loads of moooney"

More appropriately, government is optimizing for 4 year electoral terms. No one cares about longer timescales necessary to tackle hard problems.

This is where autocracies like China, or monarchies for example, win over democracies.

mancerayder

Counter-examples are France and Japan. Democracies, electoral terms. High-speed rail that the world looks up to, investment in infrastructure everywhere. In France you have Grand Paris, a programme to transform the suburbs into denser housing and commercial space, a calculation and planning that INCLUDES public transport.

And the green initiatives in France. These, transit, Grand Paris, and much more are initiatives that take many years to realize.

Now let's move over to New Jersey and New York City. The most densely populated state (NJ) has some of the worst transit despite being in the NYC greater metropolitan area. An old tunnel between the two needs to be replaced, but politicians with four year mental horizons canned it until recently (ARC project). Infrastructure is a fight between Federal, two states and a city politically and partially from a funding perspective.

We could go on, but I just wanted to point out that the United States is a poor example of good governance. And that we don't need to live in a totalitarian nightmare just because we acknowledge the US fails to produce innovation and investment for the public good.

And let's not talk about debt, as if it is a unique problem to France or anything new.

hermitcrab

>This is where autocracies like China, or monarchies for example, win over democracies.

Autocracies like China are able to plan longer term. But, because they don't regularly change their leadership like a democracy, the leaders become old, tired, sclerotic and surrounded by 'yes men'. Hence "Democracy is the worst form of government, except for all the others."

throwaw12

Western democracy is very interesting.

Corporations promote people to Principal or Distinguished Engineer only when they prove their worth by running long-running, large-scale projects.

But when it comes to governing the whole country: lobbying, marketing and boom, you are president for the next 4 years, which is not enough to deliver anything big and see the impact anyway. (Except destruction; destruction is easy to cause.)

markus_zhang

I think that has something to do with the prerequisites of democracy.

I believe one important factor for a democracy to work properly is to have a large number of citizens who 1) can stand up and push back when they feel something is wrong, and 2) are sufficiently knowledgeable. We don't have that anymore. Of course I'm also to blame for that.

hrimfaxi

I wonder what longer cycles with easier recall methods might yield.

fullshark

It's also where autocracies fail spectacularly and lead to decades of misery for their citizens.

kibwen

> This is where autocracies like China, or monarchies for example, win over democracies.

This is the wrong characterization, and in fact it's where monarchies lost out to democracies. Without an organized system of replacement in response to poor performance, autocracies with a poor leader are stuck with that poor leader for life. Ask North Korea how that's going. The upside is that if you have a brilliant leader, then you also get the benefit of that brilliant leader for life. The variance in an autocracy is absolutely huge, and that's their weakness in the long term. Democracies take the edge off, and are intentionally designed to have both less upside and less downside, trading performance for stability. Xi Jinping looks good comparatively because we have gormless losers like Trump and Biden to compare him to, but he makes plenty of his own mistakes as well (the whole Taiwan situation is an unforced error driven by his own ego, similar to Putin with Ukraine), and we've seen historically what China looks like when it's stuck with a shit leader for decades (Great Leap Forward, anyone?).

techpression

I think of the four-year cycle as one year to whine about the previous (if different) government you took over from, two years of governing, and the last as "get ready for election". So in the most optimal scenario you get three "peaceful" years. Very few things can be done well in three years at "ruling a country" scale.

markus_zhang

I always think that's a failure of the citizens, not just the officials. Eventually history is going to blame us for not taking action, not pushing back, and pretty much sleeping soundly while things fall apart around us.

throwaw12

> The problem is a management pattern .... Short-term cost cutting

Absolutely agree with this. Most MBAs are taught to optimize and reduce the slack.

It works fine with machinery and materials, but not with humans.

When machinery is optimized and run thin and one of the machines breaks, you can get the exact same one in a couple of days (you usually prepare for it earlier). But with humans, they train their brains, and the next person is different from the first.

Humans also break in different ways:

* They stop caring - you wouldn't notice it immediately, they will close tickets, but give bare minimum thought

* The communal brain is not trained when there is not enough room for experiments and learning, which eventually reduces innovation

This is exactly why it is difficult for US companies to compete with Chinese companies in manufacturing: their communal brain has already been trained and has produced very good talent.

Next is knowledge: the more you outsource, the more you lose.

gnz11

Perhaps US companies should invest more in their employees then? Advancement, promotions, raises beyond 1-3% COLAs, career paths, etc. would go a long way toward keeping employees interested in seeing their employers succeed instead of jumping ship every couple of years. That would require some effort from the C-suite, however, and since they jump ship every few years as well, I don't see that changing anytime soon.

mancerayder

Unfortunately the Wall Street accountants who run our companies don't mind if you jump ship after your 2% 'reward' raise. Because when someone new comes in and costs 10% more plus recruiting costs, that person has 'proven' their worth in the market, similar to when a house goes up in value due to scarcity.

If you were to explain the costs of knowledge lost, of training, of taking a risk on a new unknown person, of relationships, there's no answer because it doesn't show up in any operating expense worksheet.

What you're supposed to do is find another job, and explain that you love this job so much, but the other offer is really good, can they come up close to it and you'll stay. Repeat this every few years or find a new job and move to it.

throwaw12

"Invest in employees" is a very broad statement.

Before investing in employees, I think we should revisit management practices and strategies, which start in MBA programs and universities.

Instead of teaching how to increase shareholder value in the short term, they should also teach how to increase value to society in the long term (and focus on it heavily), not just offer generic "if you win, society wins" fluff.

Without changing management strategies, everything becomes short-term after a while.

classified

> Most MBAs are taught to optimize and reduce the slack.

With a myopic definition of "optimize". But as long as they are being rewarded for it, the incentives are broken.

samiv

Why would anyone have a horizon longer than a quarter? I mean, how does long-term thinking help the execs get their compensation this quarter? Sheesh... the worst-case scenario is that the work done now will benefit someone else when they've already left.

Also, when companies grow big enough, "business" becomes the main business of the company. By that I mean everything unrelated to the actual original domain, such as playing in the financial markets, doing stock buybacks, lobbying, cheating, etc. When your CEO is an MBA and your real market is Wall Street, any actual product R&D and support is an annoying cost that just cuts into the profits and thus into the exec compensation.

baq

> Why would anyone have a sight longer than a quarter? I mean how does long term thinking help the execs get their compensation this quarter?

Vesting schedules, conditional grants, contractual equity ownership requirements

cucumber3732842

>Vesting schedules, conditional grants, contractual equity ownership requirements

In those filthy low-margin industries that HN loves to see regulated across the oceans (out of sight, out of mind), capital investments have service lives measured in decades.

derf_

> ...any actual product R&D and support is a real annoying cost that just cuts into the profits...

Worse, it might not generate a return. If you have enough profits, you just buy anyone who successfully produced something innovative. Let them take the risks. As Cisco used to say, "Silicon Valley is our R&D lab."

It is a very difficult mindset to argue against.

BoingBoomTschak

It would be interesting to have a law saying that everyone in a position meant to make long-term decisions should be paid X% of their salary in stock (non-redeemable for Y years?).

bsenftner

That 'real issue' is the lack of formal effective communications training across the board in the United States, and probably all of Western Culture.

The problem is wider than management: it is understanding the extended ramifications of action, understanding the larger systems one is a member of, and then identifying with them and protecting them, because you and all your peers understand their extended foundational need.

That type of critical analysis, and the tacit knowledge of secondary considerations, is developed through effective communications training, which is an entire perspective, a way of seeing the world. This can be gained by reading a wide diversity of literature of Nobel quality; the reason being that such literature offers first-person accounts of institutions crushing individuals, and of individuals finding the power within themselves to defeat the institutions. That personal transformation is practically a Nobel trope, but it teaches the reader how to have such insight and perseverance. Read a half dozen or more such novels, and you are materially a different person. A better, deeper-considering person with a longer perspective horizon. We need this civilization-wide.

liendolucas

I still code daily without any AI coding assistance, mostly because I believe this is the way not to forget how things are done, even trivial things.

My main point against using AI is that I do not want to depend on basically anything when I'm in front of the screen (obviously excluding documentation, books, SO and the like).

I see up close people who are 100% dependent on AI for literally everything, even the most trivial daily tasks, and I find that truly scary because it means that brain effort drops dramatically to a minimum level. Having your mental effort stolen is not a minor thing.

Giving that away, at least for me, means becoming a dependent zombie. Knowledge comes basically from manual trial/error almost daily.

Technology being technology if anything has shown us that we can be pushed and manipulated in every single conceivable way. And in my opinion depending on AI is the ultimate way for companies to penetrate and manipulate a very delicate ability of a human being: to think and wonder about things.

andai

Recently, after a month of heavily AI assisted programming, I spent a few days programming the good old fashioned way.

I spent most of my 7-hour session confused and frustrated, straining painfully against the problem, but the task was successfully completed.

But I was startled by the difficulty. I began to worry that I had given myself some kind of brainrot from disuse. Then I remembered, my goodness, it always felt that way, if I was ever doing something new. That's just what it feels like, grappling with a problem you haven't seen before.

It was always as hard as that, I was just no longer used to the feeling. You get used to the difficulty, and then it feels normal.

Or indeed: you get used to its absence, and then it suddenly feels overwhelming and "wrong" !

I think maintaining the capacity to tolerate difficulty and discomfort is a "muscle" well worth preserving.

MengerSponge

I'm biased, but I think you're on to something. People are writing about this under the broad framing of "cognitive surrender"

afarah1

I've had the "problem" of forgetting syntax before any AI, with IDE autocomplete. It was only ever a problem when switching jobs and being expected to write syntactically correct code on platforms without syntax checks or autocomplete. So I did some exercises on such platforms in preparation for interviews.

In the real world, reliance on syntax autocomplete and checks was never an issue. The important thing has always been understanding the core concepts of the language and the runtime, e.g. how the event loop works with Node.js and how to write asynchronous and event driven programs.
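The event-loop point above can be made concrete with a minimal sketch (my illustration, not the commenter's; it assumes plain Node.js or any modern JS runtime, no external libraries). It shows the kind of ordering knowledge that autocomplete cannot supply: synchronous code runs to completion first, then queued microtasks (promise callbacks), then macrotasks such as timers.

```typescript
// Minimal sketch of event-loop ordering: synchronous code runs to
// completion, then the microtask queue (promise callbacks) drains,
// then macrotasks (timer callbacks) fire.
const order: string[] = [];

setTimeout(() => order.push("timer"), 0);              // macrotask queue
Promise.resolve().then(() => order.push("microtask")); // microtask queue
order.push("sync");                                    // runs immediately

// Inspect the order once both queues have drained.
setTimeout(() => {
  console.log(order.join(" -> ")); // sync -> microtask -> timer
}, 10);
```

An IDE can autocomplete every identifier here, but it cannot tell you which callback fires first; that is the "core concepts of the runtime" knowledge being described.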

whywhywhywhy

I'm the opposite: I don't think I've read a single line of code I've shipped in over 6 months.

I'd say it's far more tiring working that way though. You're breaking the satisfaction loop, so you never really get the dopamine you used to get coding by hand: when you had a problem, figuring it out was like solving a puzzle and you felt satisfaction at the end of it. With AI it feels like most of my day is spent being a QA rather than a puzzle solver, and it's exhausting; even when it solves difficult problems for me, the LLM slot machine is far less satisfying than if I'd figured it out myself.

cableshaft

Agree with you for my day job (which is coding corporate web app), for sure. I'm still letting A.I. drive more nowadays, but it does feel less fulfilling than it used to.

But for my personal projects, I work on games, and by offloading a lot of the coding work to A.I., my puzzle solving is no longer 'how to fix this stupid library spitting stupid errors at me' or 'how to get this shader working' or 'why is this upgrade breaking all the things' and more 'what does this game need in order to be fun and good?', which I find a lot more fulfilling.

It's also why I switched my focus to board game design for the longest time. I didn't have to fight my tools or learn some new api or library frequently. And if I wanted to try a new mechanic, I didn't need to spend 20 minutes or 2 hours or 2 days implementing it, I could write something on an index card in five seconds and shift mid-game most of the time.

A.I. just brought video games closer to that experience, which has actually made them more fun to work on again, because board games come with the immense challenge of getting physical games published (financial/logistical if self-publishing, or social/networking if attempting to go through a publisher).

whywhywhywhy

The puzzle thought was mostly me trying to figure out why AI coding is more emotionally tiring when I'm literally doing less and creating more; maybe it's something else.

tim-projects

As someone who does primarily DevOps, I find this interesting: my satisfaction has increased with AI. For me the code isn't the puzzle but an annoying inconvenience in the way of completing the entire system. For me, QA is a big part of solving the puzzle.

app134

DevOps is a huge part of my job as a systems engineer and I too have found increased satisfaction with AI.

I think the reason (for me, at least) is that my markers of success were always perched precariously atop a mountain of systems that I had varying levels of understanding of anyway. Seeing a pipeline "doing the thing" is satisfying regardless of how I sorted it out.

confiq

Why do I agree with both of you?

Forgeties79

>I'm the opposite I don't think I've read a single line of code I've shipped in over 6 months.

This feels unfair to the people dealing with your (LLM’s) code. You don’t vet it at all? Or am I reading this wrong?

WhatsTheBigIdea

What does "fair" have to do with anything? This is exactly the issue the author is writing about. Take the easy way, reap the profits, then someone suffers the obviously predictable consequences at some point in the unforeseeable future... likely not you! "Fair" is not relevant.

The original author points to the consolidation of military suppliers as a major issue, but the truth is that the economies of the western world have been massively dependent on this sort of consolidation and outsourcing for a large portion of the "growth" that they have achieved for a generation.

It would be convenient to think that the real question is "how do we climb back out of this hole?" but I feel the more pressing question is actually, "when and why will we start trying?"

The profit motive simply does not drive society in this direction.

The crises are catastrophic and perhaps even existential, but they are not profitable. You have to be a really lucky market timer to bet on crisis and win.

Avoiding crisis over the longer term is simply not investable.

"Fair" is not a relevant or useful conception in this context.

whywhywhywhy

My boss gets annoyed if I try to do things without AI, so eventually I caved, but I don't see the point in reading the code if that's the culture being pushed at the company.

Also anyone else dealing with it is just gonna be dealing with it via AI so it doesn't really matter.

If I worked somewhere where the CEO cared about hand written code I would be writing it and reading it but I don't.

leptons

What makes you think the people dealing with the LLMs' code won't also be using LLMs to "deal with it"?

We're all now basically junior coders who have no idea what is in the codebase. Without LLMs, we won't be able to "deal" with any of it.

And I don't like it one bit.

drzaiusx11

Another tragedy of the commons tbh

sahilagarwal

I generally don't have as much time (or patience / fucks) in my day anymore. So I use AI 3 days a week. On the other two days, I don't use assistants to code; I just ask them to review my work after it's done.

Helps me keep sane tbh. And keeps the edge sharp.

wreath

At work we are literally forced to use AI and it's part of our performance review. Even though I really like coding by hand, I now have to use AI so I can keep my job. I will try this out though: 2 days per week using AI and the rest handcoding, enough to stave off the inevitable layoff perhaps.

irishcoffee

Surely it can't be hard to token-max at work the same fucking way people have gamed Jira metrics for years and years.

If I'm ever in that position (everything I work on is air-gapped, so it'll never happen) I would make it a priority to figure out how to game that bullshit metric so I could get on with solving actual problems.

I imagine a lot of people do this. Metric becomes a target, etc.

dd8601fn

I have always had a problem, worse than most I think, where if I’m away from a language for a bit I lose my ability to write it quickly and competently, real quick.

It doesn’t matter if I was quite competent in it… the mechanical bits fade fast.

Doing llm assisted work is going to be like pouring bleach on my brain. I can feel it. The more I use it the worse it will be for me.

I can still formulate what I need, and problem solve just fine, but all the nuts and bolts evaporate.

RataNova

There is a difference between understanding the shape of a solution and keeping the "finger memory" of a language alive

dd8601fn

For sure. I just… I feel really really dumb every time I have to knock the rust off to use a language again.

black3r

> Knowledge comes basically from manual trial/error almost daily.

This is the important statement, although I'd swap the word "knowledge" for "experience" here. You can gain "knowledge" from books, but only trial & error will give you experience to know "which" knowledge to use in which situations.

And what's important about this in the context of working with AI is the "error" part.

You have to experience errors to become truly experienced. And part of the experience is to recognize when you're about to make an error - to avoid it.

AI-driven processes mess up our natural trial & error learning curve in multiple ways:

- the AI push forces us to ship features faster (cause if we don't, our competitors will), reviews are sloppier, we discover errors later on, the feedback loop gets longer...

- using AI to debug and fix errors means we spend less time understanding what the error was about, which means we learn less about how to avoid the error in the first place...

- AI itself sounds overly confident, so reading its outputs without previous experience you may be less likely to recognize when it's making an error, which makes it harder for you to recognize when you're making an error trusting it...

On the other hand, this last point I tried to make is also why I don't think avoiding AI completely is a good strategy. Whether we like it or not, AI is becoming a part of developer's workflow. And as such, we also need to learn the trial & error process of using AI - what makes AI make errors and how to prompt it to avoid that.

coffeefirst

I really don't understand the people who use it for everything.

It's become my first stop for search because it works in bulk: it reads 50 results and leads me to something useful.

But I just got Claude MCP connected to my personal email/calendar/etc and I can't figure out what to do with it. It wrote a summary of my inbox that took as long to read as flipping through my inbox. And since it makes no sense to delegate decision making, I'm not sure what the actual work I'm supposed to give it would be.

AlecSchueler

> Giving that away, at least for me, means becoming a dependent zombie.

I suppose people felt the same way in the agrarian revolution and later again with inventions like the plough. Suddenly a lot of people offloaded their food independence onto the work of a few.

What might it open up in our lives to be free of knowledge?

That said, these machines don't run themselves, if we disengage our minds we might get stuck in a dead end with them.

TonyAlicea10

“Money was never the constraint. Knowledge was.”

The irony is how difficult it is to read this obviously AI-generated article due to its unnatural prose and choppy flow full of LLM-isms. The ability to write is also a skill that atrophies.

Even when AI use is understandable due to limited language fluency, I'd prefer to read an AI translation over a generated article.

If you don’t care enough to write it, why should I care enough to read it?

barankilic

I am really amazed at how we are okay with LLMs writing code end to end (without a human in the loop; the dark-factory concept), but when it comes to articles, HN is suddenly against LLMs writing words. I do not see the difference between writing code and writing prose. Both have keywords, grammar, syntax, and meaningful combinations (functions or chaining in code, collocations in words). If we think that AI-generated words are not meaningful or easy to follow, the same must apply to AI-generated code, which may be harder to read or understand since it is not written by a human. Let's stop being hypocrites.

Note: My comment is not specific to its parent. I just wanted to express myself somewhere, and this is where I think it may be suitable.

01100011

Who is the 'we' here? When did I become ok with LLMs writing code end to end or against LLMs being used to assist writers? I wasn't aware I held either of these positions.

avocabros

That's because the purpose of code is to be used, not to be read.

The only purpose of the written word is to be read.

recursive

I've always tried to write code for future maintainers first. That is often me.

arealaccount

The 1s and 0s are meant to be used, the code is meant to be read

SoftTalker

That's the difference to me. Code is used as instructions to computers. Written human language is used to communicate thoughts, ideas, and feelings to other humans.

I disagree with the premise that "we" are all OK with AI slop computer code however. Even if it's just for consumption by machines, for at least some developers it is a creative outlet.

unleaded

The purpose of writing is to get your thoughts across in words. A prompt sufficient to get out an article with zero chance of it adding things you don't mean has to contain as much information as the article itself would. Just write the article.

wiseowise

> I do not see the difference between writing code and writing prose.

That’s the problem.

bontaq

This is a funny point. People don't want to read LLM code either, so who knows where that puts us.

djeastm

It puts us in a secondary, less-rare, less-valuable role out of the driving economic loop we've grown up in.

barankilic

Since I cannot edit my comment, I replied to my own comment. I did not mean to insult the HN moderators. I am actually very happy that they are protecting HN by removing and flagging AI content. I only wanted to draw attention to the fact that in some areas AI use is promoted while in others it is frowned upon, and I do not get it.

What I mean by "we" is that there is a general perception that using AI is okay and even mandatory. This idea is becoming more and more prevalent in management positions and it disturbs me deeply.

I got some replies since I commented, but I am still of the same mind; I did not see a strong refutation of my idea. Why are some people (I didn't want to use the word "we" again) okay with AI use in code but not in prose? I know the two are not exactly the same, but they have similarities. If we are unhappy with sloppy prose, why are we happy with sloppy, potentially buggy or hard-to-maintain code?

joenot443

I would say that the simple reason is that writing is often artistic and coding very rarely is.

I don’t listen to AI music or watch AI videos, I don’t want to read AI articles

notnullorvoid

It didn't feel at all AI written to me. It's much better than the AI written junk that HN laps up without noticing.

watt

It is full of the short sentences that AI writing loves, meant to feel "punchy". Normally you would copy-edit that stuff, join the sentences up, give the writing some rhythm. I agree with GP: the article is hard to read because it seems to have a lot of https://tropes.fyi/

notnullorvoid

Twitter is full of strung together short punchy sentences, and it spread to articles long before AI.

Not discounting the possibility that it's AI, but it didn't have the same repetition, contradiction, and inaccuracies I notice in other AI content. Though even that isn't exclusive to AI.

sinuhe69

I love writing these short, punchy sentences. It makes the impact much better IMO. But maybe it’s just me.

clarkdale

LLMs are trained on real-life grammar written by humans. Sometimes the characteristic traits you see from LLMs are also written by human hands.

aldanor

Not the factory floor. The receiving end.

It wasn’t one bottleneck. It was all of them.

Not the nuclear material. The pattern.

Money was never the constraint. Knowledge was.

...

oxag3n

On the other hand, the article has mistakes (a missing "the" or "a", or a wrong "the"). So it's not AI-translated, but also not fully AI-generated...

xantronix

The advantage of Bored Ape NFTs is that one could very quickly visually identify one and block or scroll past without much thought. On the other hand, reading AI slop takes a few extra cycles to parse out and categorise appropriately, with the occasional false evaluations and second guesses. It will be a fine day, indeed, when this pattern of writing fucks off forever.

7402

Is it really so obvious? It didn’t seem AI-written to me.

Every day I seem to encounter (and skip over in disgust) a dozen or so AI-generated articles at the top of web searches, but this wasn’t anything at all like those.

bonsai_spool

Even the title is likely AI-generated, as are all the subject headings. I worry we're all getting inured to these writing patterns.

perfunctory

> obviously AI-generated article

how can you tell?

sph

#1 rule of slop: anything that can be written, can be AI-generated now

#2 rule of slop: even posts critical of pervasive AI usage and how it's ruining the world can be AI-generated

mawadev

I highly question the ability of companies to gauge the level of experience of any dev.

The distinction between junior, mid, senior, lead is a facade. It is a soft gradient that spans multiple areas, but is tainted and skewed by the technology du jour.

Technically you don't have to be an employed developer to become a senior developer. It boils down to your personal willingness to learn and invest time building.

What companies seek these days are people with experience navigating (dysfunctional) organizational structures and working around the shortcomings of an organization's communication and funding patterns, nothing more.

Does that really make you senior or just politically versed?

The pattern shows up the most whenever failing software pokes holes in perception.

gyomu

There are two kinds of developers.

There's the kind that, when given a problem, will jump in, learn what they need to learn to solve the parts they don't fully understand yet, deliver meaningful iterative results, talk to people as needed, keep you posted on their progress, loop in other team members and offer/request help to/from them, take initiative on the obvious missing parts that would benefit the project as a whole, etc.

And then there's the rest.

Within the first few years of someone's career, you can quickly tell which kind they are. It's almost impossible to turn someone from the latter group into the former.

Yes, everything else is a façade. You can be a "senior" developer with 30 years of experience and still be in the latter group. And you can be fresh out of college and be in the former.

Now some people are extremely good at other skills (politics, interpersonal communication, bullshit, whatever you want to call it) and will be able to seem to be in the first group to the people who matter (managers, execs, etc) while actually being in the second group. But then we're not talking about actual software-making skills anymore.

You can also totally be in the first group and be underpaid, never promoted, etc. There's little correlation with actual career success.

cloverich

I'd color this a little. I think there's also an engineering mindset that some people have and some don't. And over 10 years in, I'm still not sure whether it can be trained. Some people are just really good at seeing technical solutions in engineering terms: Where does the data live, where does it go? How does it get there, how does it change? How does it break, how will we know, how will we fix it, how will we cope with its shortcomings? For some people all of those questions are a relatively quick and intuitive part of scoping and design. For others it's a constant cliff they run into midway through their projects, or worse (and far more common), a set of bugs left as "tech debt" (for someone else to inherit) as they slap "Mission Accomplished" on yet another project.

I've seen people that are very proactive and generally fall into your former group, but also don't quite seem to think like an engineer. I really want it to be trainable - I am trying - but IDK if it is or not.

hnthrow0287345

>It's almost impossible to turn someone from the latter group into the former.

Only if you're constrained by the same short-term thinking as US businesses. The way to do it is more of an apprenticeship model, where someone observes and works closely with someone from the first group over years.

Even then, the businesses don't want to pay for that, and why should the workers give that away for free? They want people to churn out code because they've chosen to hire micromanagers that need constant updates and babysitting through communication.

luckylion

My experience is that that plainly does not work. I work with developers of both types, and the junior ones who are part of the first group are limited in their ability by experience, but they have an inquisitive mind and don't give up quickly when they encounter something they don't understand.

Much more experienced developers of the second type just throw their hands up and give up (or now: turn to AI). I've worked closely with them to try and reform them. Maybe I'm doing it all wrong, but it has never succeeded.

With the ones from the first group it can work that way: you can show them how you approach problems and they will ask questions and pick up patterns and you'll see them improve.

> Even then, the businesses don't want to pay for that, and why should the workers give that away for free?

Businesses would need a high likelihood that they can reap the rewards of upskilling employees. Why invest a lot of money and high-talent attention into someone who might quit? At the same time, I'll happily pay three times as much for a truly skilled senior developer. I think the employee's incentives are much more aligned: it will increase their market value, it's an investment into their wealth, not the business'.

mawadev

I highly agree. Let us stop talking about software dev and look behind the curtain a bit...

I want to point to the discouraging framing of developers who learn to be helpless after being stuck in an organization long enough that they start to believe they cannot do better and that some things cannot be changed.

The majority of people I met started out in the former group but have been reshaped by the environment they are in.

If you are in the former group and find yourself turning into someone from the latter group, becoming unrecognizable to your past self, ask yourself if you have been reshaped by the organizational experience you were having and where that spark went.

The chance is very high that the organization figured out how to keep you in your position, or that you have internalized things about your professional self at face value that do not reflect reality outside the org, and that start to affect deeper layers like your confidence and outlook.

You start to identify with the made-up labels or the arbitrary hurdles of reaching some next level that will change everything. Essentially you are in a box and you have developed tunnel vision; it could be department-, org-, branch- or industry-wide.

Just bash your head against a project away from work for three months on the side and gauge for yourself how capable you are, with no economic intention and disregarding time and value investments and so on. Repeat this long enough to fill any gaps in knowledge you encounter, and at some threshold the coin flips again.

You will do DevOps but you are no DevOps guy, you do Frontend but you are no Frontend guy. Just a free flowing human walking through the layers that create something real.

There is an art to unplugging yourself from economics and from thinking in organizational hierarchies and patterns, and I think if most of us saw it for what it is, software would turn into a calmer place, grounded closer to reality.

noisy_boy

> There's the kind that, when given a problem, will jump in, learn what they need to learn to solve the parts they don't fully understand yet, deliver meaningful iterative results, talk to people as needed, keep you posted on their progress, loop in other team members and offer/request help to/from them, take initiative on the obvious missing parts that would benefit the project as a whole, etc.

I would rather tweak that a bit and say we need a kind that has two things: 1. aptitude - not genius or 10x anything, just plain being able to think clearly and having problem-solving skills; 2. care, i.e. not just dumping whatever hack works in the short term and declaring victory. That doesn't imply obsessing over perfection and ignoring deadlines, but basic care about the solution being sensible, about good code quality, and about not causing a new set of problems due to shortcuts. Both are things we routinely expect from programmers, and I see less and less of them. #2 is rarer than #1.

bluefirebrand

> There's the kind that, when given a problem, will jump in, learn what they need to learn to solve the parts they don't fully understand yet, deliver meaningful iterative results, talk to people as needed, keep you posted on their progress, loop in other team members and offer/request help to/from them, take initiative on the obvious missing parts that would benefit the project as a whole, etc

You're framing this person as a good developer, and sure, probably some people who behave like this are good. But MANY people who are like this leave mountains of problems in their wake. It takes a very special person to be able to build good quality with this kind of approach.

You're basically talking about someone getting the right answer on the fly at full speed.

It's much more likely that you get a subtly wrong answer, which is then dropped on that second group to manage and maintain going forward, while the fast-moving person is hurried along to drop a subtly wrong solution on another project with another team. This has happened to me many times in my career.

balamatom

Thank you for this.

ivan_gammel

> Technically you don't have to be an employed developer to become a senior developer.

Outside of a sufficiently large organization, "seniority" of a developer doesn't make any practical sense. So, technically you can assign yourself any label, but that would be a weird thing to do.

A freelancer is measured by portfolio, a computer scientist in academia by publications, an OSS contributor by the volume and impact of contributions. In each case, it's proportional to the effort spent on learning and building.

Anyway, regardless of employment status, the measure of your professionalism is not defined only by what you can learn from books. Experience matters a lot: it's nearly impossible to succeed in stakeholder management or in presenting your solutions just by reading. You need practice and feedback. Senior engineers aren't those who excel at writing code: fresh CS graduates are supposed to know algorithms better. Senior engineers can contribute across the full scale of the SDLC themselves and support others. That is much easier to achieve in a professional environment than working on amateur projects.

therealdrag0

Sure, we live in a society. Seniority is about your ability to make an impact, which generally requires social and organizational skills. It can be bemoaned as much as you want but that’s how the world works.

teaearlgraycold

> What companies seek these days are people having the experience with (dysfunctional) organizational structure and working around the shortcomings of the organizations communication and funding patterns, nothing more.

This is depressing and seems right. And yet this is something I desperately want to be ignorant of. I don’t want to peel apart my brain for anyone. Working within these kinds of problems is pure pain.

brabel

> Technically you don't have to be an employed developer to become a senior developer.

That's incredibly unlikely. Do you need to be an employed surgeon to become a senior (or whatever they call it) surgeon??

I very much doubt you can be senior without having actually spent years doing it professionally. The experience is everything; no book will give you the sort of understanding you need. That's unfortunately human nature: we are not capable of learning and internalizing things simply by reading or watching others do them. We absolutely need to do it ourselves to truly learn. Didactic books always include exercises for this reason.

You can learn facts and techniques from books, obviously. But just because you've read a book about Michelin restaurants doesn't mean you can now be a Michelin chef.

lelanthran

> That's unfortunately human nature, we are not capable to learn and internalize things simply from reading or watching others do it, we absolutely need to do it ourselves to truly learn.

That is, and has always been, true. Currently, however, the narrative that is sold (and unfortunately accepted by so many of the senior developers who post here) is that the experience of telling someone else to do something is just as valuable.

BoingBoomTschak

Yeah, but working in a team isn't something you can learn without doing.

rglover

I've never worked in a corporate environment beyond client projects.

Picked up a book on XHTML (no, that isn't a typo) and CSS in 2007, just kept trying to build stuff I wanted to build and backfilling knowledge as I went. Not only is it possible, it's preferred. ~20 years in and I've learned how to build my own full-stack JS framework, deployment infra, a CSS framework, and an embedded database to boot.

Not one drop of this would have been possible had I taken the traditional corporate track.

ivan_gammel

All of this is possible on a corporate track. The ability to build frameworks and tools does qualify a person as at least a solid mid-level professional, but not having corporate experience and the associated skills can be a pretty big gap in their CV.

kaashif

Maybe they mean you can be not employed and build products yourself? Technically true, but that's like running your own surgeries or something, you're still doing surgery.

andrewstuart

Analogies to other professions give your argument an air of legitimacy while providing none.

There’s plenty of people in this world who are expert programmers without following any traditional path.

“Oh yeah, like who”, you say.

Con Kolivas, an anaesthetist, worked on kernel schedulers, including the Staircase Deadline (RSDL) scheduler, which was a precursor to the Completely Fair Scheduler in Linux, as well as the Brain Fuck Scheduler and the ck patchset.

brabel

I don't say you cannot learn by yourself; my claim is that you cannot learn without doing. Was that really unclear?

Animats

> They can’t tell you what the AI got wrong.

AI code generators are trolls. They confidently produce plausible content which is partly wrong. Then humans try to find their errors.

This is not fun. It has no flow.

simondotau

I beg to differ, insofar as my own experience has been the exact opposite. I enjoy fixing other people's mistakes. And I especially enjoy outsmarting the LLMs. I find that I can obsessively breathe down the neck of an LLM for far longer than I could ever stay in the traditional flow state.

Terr_

I think I might enjoy it for a little bit and then become very depressed at the idea that it will never end, a future of fixing things that should never have been broken in the first place and which won't stay fixed.

lelanthran

> I find that I can obsessively breathe down the neck of an LLM for far longer than I could ever stay in the traditional flow state.

I can do that too. Most programmers can.

That's because it requires less skill! Critiquing something is always easier than doing it.

I can literally keep an LLM fixing things forever just by saying things like "this is not scalable", "this is not maintainable", "this is not flexible", or "this is not robust", etc., ad nauseam.

That doesn't take skill at the level needed to actually write the software. For the market hoping to switch to mostly LLM coding, the prize they are eyeing is skill devaluation, not just, as many think, productivity gains.

They have no reason to double output, but they'd sure love to first halve the people employed, and then halve the salaries of those people (supply/demand + a glut of programmers in the market), and then halve salaries again because almost no skill necessary...

bradleyjg

> That's because it requires less skill! Critiquing something is always easier than doing it.

No, it was always the other way around. Mediocre programmers always wanted to rewrite everything because reading and understanding an existing codebase was always harder than writing some greenfield thing with a “modern language” or “modern libraries” or “modern idioms.” So they’d go and do that and end up with 100x the bugs.

neonstatic

Perhaps you have the psychological make up to thrive in this new environment. Glad it is working for you.

cbg0

It should have the same flow as reviewing PRs from humans.

t43562

Who really truly enjoys that and doesn't see it as a chore?

I find the real way to review other people's code is to program with it; then I start seeing where the problems are much more clearly. I would do a review and spot nothing important, then start working on my own follow-on change and immediately run into issues.

sampullman

I usually don't mind, but tend to split reviews into two types. Either I understand the context and can quickly do an in depth review, or I have to take some time to actually learn about the code by reviewing the surrounding systems, experimenting with it, etc. But in both cases I would at least run the code and verify correctness.

I think it becomes a chore when there are too many trivial mistakes, and you feel like your time would have been better spent writing it yourself. As models and agent frameworks improve I see this happening less and less.

cbg0

> Who really truly enjoys that and doesn't see it as a chore?

This is a whole different discussion, but I just see it as part of the job that I'm getting paid for, I don't need to enjoy it to do it.

Functional testing is a must now that writing tests is also automated away by LLMs, as it gives you a better understanding of whether the code does what it says on the box, but there will still be a lot of hidden gotchas if you're not even looking at the code.

Plenty of LLM-written code runs fine until it doesn't, though we see this with human-written code too, so it's more about investing more time in the hope of spotting problems before they become problems.

fg137

Which is a really, really bad idea.

Most people don't spend nearly enough time going through a code review. They certainly don't think as hard as needed to question the implementation or come up with all the edge cases. It's active vs passive thinking.

I, for one, have found numerous issues in other people's code that makes me wonder, "would they have ever made such a mistake if they hand coded this?"

btw, a side effect is that nobody really understands the codebase. People just leave it to AI to explain what code does. Which is of course helpful for onboarding but concerning for complex issues or long term maintenance.

microtonal

The problem is the LLMs completely change the equation. Before LLMs, beyond very junior (needs serious coaching) levels, reviewing was typically faster than writing the code that was reviewed. With LLMs, writing code is orders of magnitude faster than reviewing it. We already see open source projects getting buried in LLM slop and you have to find the real human or at least carefully curated contributions among the slop.

I would not be surprised if many open source projects will outright stop taking PRs. I have had the same feeling several times - if I'm communicating with an LLM through the GitHub PR interface, I'd rather just directly talk to an LLM myself.

But ending PRs is going to be painful for acquiring new contributors and training more junior people. Hopefully the tooling will evolve. E.g., I'd love to have a system where someone has to open an issue with a plan first, and by approving it you could give them a 'ticket' to open a single PR for that issue. Though I would be surprised if GitHub and others created features that are essentially there to rein in Copilot etc.

catcowcostume

Anything AI-generated is a troll. There's no logic; it's just pattern repetition. I don't get how supposedly smart engineers fall for it.

smallstepforman

We humans cannot scan 100,000 articles looking for the golden nugget; AI data mining can do it and present the results in seconds. Obviously we need to verify the data.

A couple of decades ago, we didn't trust compilers; we wrote assembly manually. Today the same barrier exists: some developers will explode with productivity while others will be left behind.

barnabee

Because a lot of engineering is pattern repetition, which is not very fun for engineers either, and LLMs can do it much faster?

skydhash

Not really. Any patterns got optimized and automated. If you're still seeing patterns, then you need to look harder, because they will be similar only superficially.

solumunus

[flagged]

allending

There's a certain irony in that the article itself is quite clearly assisted by AI. Not a criticism per se as I don't have a problem with AI assistance, but food for thought given the material being commented on.

rezonant

The tropes that AI introduces into articles are very noticeable, quite annoying, and very unnatural -- they unfortunately don't write well. It seems people use them to "polish" up their writing but in reality it would have read better if they hadn't.

My current pet peeve is using a period instead of a comma, as in:

> My people lived the other side of this equation. Not the factory floor. The receiving end.

Ostensibly this is supposed to add gravitas, but it's very often done in places where that gravitas isn't needed, and it comes off as if I'm reading the script for an action movie trailer.

lelanthran

> The tropes that AI introduces into articles are very noticeable, quite annoying, and very unnatural -- they unfortunately don't write well.

Quite paradoxical: when it's in a person's native language we can spot it a mile away, yet there's no shortage of engineers who claim how good the code output is.

Whatever the reason for the default tone of AI in English, it's still there when generating code. It makes me think that the senior engineers who claim it produces awesome output just don't understand the specific programming language the way someone who thinks in it almost natively does.

SanjayMehta

People have also started copying the AI tropes, especially your period/comma example.

microtonal

I am not sure it is necessarily copied. A lot of influencer-style people used some of these patterns (periods, not X but Y). So I'm not sure who is copying whom.

ykonstant

Unnecessary emphasis can get... quite comical... indeed.

ijk

It really feels sometimes like they were trained on too much short-form fiction or something. Really stunts their sentence and paragraph texture.

concinds

The uncanny valley is an attractor basin.

morningsam

Made me stop reading a few paragraphs in. I don't have a "problem" in the ethical sense either, but as the sibling comment notes, the way LLMs write is rather grating. To make matters worse, a) people seem to use them to add pointless volume / "filler" to their texts, so now I have to wade through pages and pages of this stuff, and b) I have no easy way to distinguish between an article at least based on novel human insights vs entirely LLM-generated from a "write me something about X topic" prompt. I don't think it's a stretch to say that the latter just isn't worth reading given the state of the art.

AngryData

The filler from AI articles drives me absolutely bonkers. I'm a fast reader in general and can skim through articles to find important details at lightning speed, but it still takes me 5x longer to skim through all the AI fluff than it does to go back, search for a non-AI article, and then read through that.

chneu

The filler stuff is really a huge waste of time and effort. I tried to Google whether Ranch Corn Nuts are vegan, and every result in the top 10 was the same AI-generated slop with 10 paragraphs that had nothing to do with what I was trying to find.

All the top results had the same AI feel to them. The same format and structure.

The best part? None of them said yes or no. None of them answered the question. They just listed common dairy and non-vegan ingredients to look out for. So, all that AI and nobody put in the ingredients list. Lol

rotis

I don't have a problem with AI assistance either, but this undermines the point the article is making. For me it is like a priest preaching that gay sex is wrong and then being caught in bed with a male prostitute (snorting cocaine optional). Leaves a bad taste in the mouth.

sph

Many such cases. Both the priest anecdote, and AI-critical posts being AI-generated.

A_D_E_P_T

Out of curiosity, what are you basing this on?

The text has few of the obvious AI tells. The only thing that, to me, looks characteristic of LLM-generated text is the short and terse sentence structure, but this has been a "prestigious" way to write in English since Hemingway.

allending

Sort of a taste receptor I’m sure many have developed now.

The most obvious patterns here are: antithesis constructions, word choices and distribution, attempts at profundity in every paragraph that turn out to be runs of text that don't say anything, and even the perfect use of compound hyphenation. I can appreciate that there is definitely an attempt at personalization and guidance to make it less LLM-y and not just a default prompt, but it's still kind of obvious. You could use a detector tool too, of course.

bonsai_spool

What are the obvious tells? List them, because I think our sense of the tells may not overlap.

This article is clearly LLM-generated, even the title. A key indicator is that it almost makes sense: we forgot how to manufacture because that got sent to a different nation. The coding thing isn’t getting sent anywhere, so humanity is forgetting how to code. The distinction undermines a lot of the emotional baggage about offshoring that the article wants you to bring along.

A_D_E_P_T

There are quite a lot of them: https://en.wikipedia.org/wiki/Wikipedia:Signs_of_AI_writing

But it's really just the usual ones that are truly obvious. "Not X but Y," em-dashes, "underscores the significance of X," and so forth.

A terse sentence structure can be a tell, but, IMO, it's a weak one.

lkm0

The blog post reads nothing like Hemingway. Here's a classic example: https://anthology.lib.virginia.edu/work/Hemingway/hemingway-...

Hemingway writes simple sentences with a kind of detachment to make the emotional flow of his stories as transparent as possible.

LLM slop reads more like slide bullet points extrapolated to prose-length text.

lelanthran

Blog posts aren't typically written like Hemingway.

Find some from before 2020 that are, and you'd have a point.

Kerrick

https://awnist.com/slop-cop (via https://news.ycombinator.com/item?id=47806845) points out Staccato Burst, Dramatic Fragment, Colon Elaboration, and Short-Hook Paragraph. To me, those define the tone of this article.

A_D_E_P_T

Interesting tool.

I'm not trying to defend the blog post, but I gave Slop Cop 775 words of an essay by Schopenhauer (translated into English) and got "15 patterns detected."

I fear we're approaching the point where AI-written text grows indistinguishable from human-written text, unless the AI-user is exceptionally lazy and uses an obsolete model...

anonzzzies

I saw academic rigor fall off a cliff in exchange for 'better job alignment' between the end of the '80s, when I had my first class after finishing high school, called 'Formal verification in software', and the beginning of the 2000s, when I left after giving the first class to new students, 'Programming in Java'. All the 'teaching how to think' was replaced with 'how to get a well paying job'.

mschuster91

> All the 'teaching how to think' was replaced with 'how to get a well paying job'.

Yeah. Companies didn't want to train new employees any more as that costs money (both for paying the trainees and the teachers) so they shifted to requiring academic degrees. That in turn shifted the cost to students (via student loans) and governments.

People call it a red flag for scams if you are supposed to pay your employer for training or whatever as a condition of getting employed... but the degree mill system is conveniently ignored.

Sharlin

Costs are externalized, profits are privatized. A tale as old as the society itself.

lotsofpulp

The problem was the government providing the blank check loans with no underwriting. Without that subsidy from future taxpayers, incentives would be properly aligned.

No lender would have been stupid enough to give 18 to 22 year olds $200k for bullshit degrees and sports facilities.

The onus would have remained on employers and government to pay for education, rather than a certification, because they would have been the ones paying.

999900000999

College should never have been presented as the only way to the middle class. In high school they shut down my advanced trades class; maybe I could have been ready to hop into a decent job after graduation.

I recently spoke to a young art school grad who talked about getting on disability over a life of the corporate grind.

Who am I to disagree ? The Pentagon has never passed an audit, the government coffers are effectively a slush fund for defense contractors.

At this point, I think a universal basic income is the only way.

Not enough jobs exist for everyone. Poverty doesn't need to exist.

whycombinetor

>I read the Fogbank story and recognized it immediately. Not the nuclear material. The pattern. Build capability over decades. Find a cheaper substitute. Let the human pipeline atrophy. Enjoy the savings. Then watch it all collapse when a crisis demands what you optimized away.

>In defense, the substitute was the peace dividend. In software, it’s AI.

Before it was AI, the cheaper alternative was remote contract dev teams in Eastern Europe, right?

Tade0

Not sure why that was ever the plan, as there are clearly not enough people.

Also, over here east of 15°E, we were fired all the same.

I believe the plan is to quite simply "do less overall unless it's about AI", but everyone was waiting for others to start layoffs first.

I spent six months working part time and the decision makers made it clear that this is preferable for them long term. Beats getting fired, but I couldn't sustain this lifestyle - I'm frugal but not that frugal.

NSUserDefaults

Happy to help and eventually take over.

neonstatic

It had to be H1B Indians and outsourcing to India. As a European, I have seen some "Eastern European devs" around, sure. But they were not present at every company I worked with. Indians were. Quality-wise, it was always the same story, but I'm not going to elaborate. Everyone who is ready to accept it knows what I would be saying anyway.

codingdave

No, you probably need to elaborate on that. Because in my experience, the quality from people in India varies just as much as the quality from any other country, including the USA.

What does make a difference is the company they work for. Large hourly "body shops" give you coders whose quality tends to be lower, regardless of whether we are talking about an Indian firm or an American firm. Direct hires of independent individuals tend to be higher. But there is always individual variation.

You see people from India more, sure. There are more of them: over a billion, in fact. Anyone who dismisses a billion people as "always the same" is not being clever; they are being racist. And you know that, otherwise you wouldn't have pre-empted this response with "everyone who is ready to accept it."

Say that there are communication gaps to overcome. Say there are cultural differences. Say that those cultural differences change the assumed business expectations and the mechanisms by which people express their thoughts and opinions. Those things are all true. My recommendation to anyone who has an urge to dismiss an entire population is to instead get to know them: Step up and learn how your teammates think and work. It will make for a better team, better communication, and better results.

neonstatic

Okay, since you insist.

I'm not racist. I don't care about race. I do care about culture a lot. By culture I mean a set of "default behaviors" and values that people from said culture are more likely to exhibit. That's where my issues with Indians began and continue. Of course you are right that generalizing over 1+ billion people is a futile exercise. Intellectually, I agree. And yet, in my personal experience, certain behaviors and attitudes of theirs just keep coming up with a frequency that doesn't match any other group of people I have interacted with. I live a rather international life. I interact with people from many, many cultures. I currently live in a culture that is completely alien to my own, and I love it. It's not a problem of a closed mind or some kind of supremacy thinking. I am free from that.

Specifically about Indians: I find that a great many of them prefer memorizing over thinking. In the IT consulting days of my career, I noticed that they seemed to have 4-5 solutions that they would apply to all problems. Whether the solution would fit the problem or solve it was secondary. If it did, great. If it didn't, well, that was someone else's problem. Half of my job was fixing stuff that an Indian had "fixed" before me. The appearance of having fixed something was much more important than the actual fixing. It was all about appearances with them. While people in general seek recognition, I have never met another group of people so eager to lie and cover things up to gain some perceived short-term bump in status.

It's not isolated to the work environment. You see, I suspected myself of perhaps being racist in the end, so I would challenge myself to befriend Indians if I met any, just to see. Maybe I was being judgmental and wrong? The last time I tried it, the Indian man I met kept kissing my ass so much I had to cut him off. Why did he do that? Based on what he was saying, he saw me as someone from an "upper caste" (he projected his ideals of a successful businessman onto me) and desperately wanted me to know how much I had done for him (I hadn't done anything other than have a few conversations about life and business in general). It took me a while to understand that all this excessive praise was an attempt to elevate himself by proximity to something great. Needless to say, I am nowhere near as great as he portrayed me to be. Later I also found that half the stuff he shared with me was made up to impress me.

Another feature of their culture is extreme pride. They will never stop talking about India, Indian culture, Indian food, etc. They expect you to praise it, be in awe. If you aren't, they will pressure you to change your mind. Since working with them was a universally appalling experience, I wasn't impressed, so that came up a lot. You see this pride and attention seeking everywhere online. A normal person will say "Hello", "Good morning". An Indian will say "Good morning FROM INDIA". It must be mentioned, because it must be noticed and praised. It's just tiring. There is a reason why so many are waiting for country-based filters on Twitter. You wouldn't have guessed which countries are most upset about this.

I am certain that there are reasons and explanations for all of this, and that there are many exceptions. As you have mentioned, there are so many of them that they can't all be like that. Fair enough. I just find all of this so tiring that I don't want to deal with them at all. If 1 out of 100 is a smart and pleasant person, they are still surrounded by 99 that I don't want to deal with. It might be sad, but it is what it is.

Madmallard

Pretty sure cheap foreign labor is more prevalent now than ever at every major tech company.

They really, really do not want to spend money. Especially not on Americans and their health insurance.

It's really strange how we're just letting them get away with this. They're on a fast trajectory toward putting Americans completely out of work and without aid, even though they're American companies first and foremost.

dmix

America could just reduce their cost of living, optimize their healthcare, make domestic business more attractive etc instead of trying to ban everything to duct tape over deeper problems

Madmallard

What's your evidence that they could do that?

lotsofpulp

> It's really strange how we're just letting them get away with this.

Choosing to pay less is what almost all people do, and it is consistent with almost all of human history.

> They're on a fast trajectory toward putting Americans completely out of work and without aid, even though they're American companies first and foremost.

When push comes to shove, i.e. paying lower prices to consume more goods and services or paying higher prices to ensure your countrymen can buy more goods and services, almost everyone will choose to pay lower prices. See political unpopularity of sufficient tariffs to stop imports.

“American” is a nebulous term, and Americans had been choosing lower prices for many decades before the current crop of employees at the global big tech companies chose lower prices. It is no different than when someone picks up lower-priced workers waiting outside Home Depot, who are there because they do not have legal work authorization in the US.

Madmallard

Yeah that's true.

I think it's all bad and counter-productive toward a stable society though. I think economic sacrifices likely have to be made to ensure long-term viability. What we're doing now is accelerating the demise of everything. The entire planet even.

Nux

India for the most part.

Scroll_Swe

[flagged]

gitowiec

[flagged]

solid_fuel

Take your racist attitude elsewhere, or even better, keep it to yourself. The comment chain was only about where IT work is being outsourced.

cladopa

People are not perfect. I went to Ukraine just days before the invasion. Travel and hotels in Kiev had become extremely cheap. When you asked Ukrainians about a possible invasion, everybody said, "Not going to happen. Russia always talks aggressively, but never does anything."

They did not properly prepare, and as a result lost 20% of their territory in days.

Days after that I was back in Austria and could not stop thinking that some of the people I spoke with might be dead.

Since then I have also been to Dubai and Saudi Arabia as an entrepreneur and engineer. I asked, "What are you going to do when drones are used against your infrastructure?" If you followed the Russian war and the first Iranian strike, it was obvious that drones were going to be used against them. "Not going to happen," again.

They have lost tens of billions for lack of proper preparation. They could have protected themselves by spending just hundreds of millions of dollars over the years.

It is about humans, not AI.

wiseowise

> They did not properly prepare and as a result lost 20% of its territory in days.

Ukraine has been preparing since 2014. Without preparation there would be a Russian talking head right now in Kyiv.

jakub_g

According to [0], the military was basically doing under-the-radar preparations in the last few weeks before the attack, because the official narrative was that nothing was going to happen.

> A small group of officers at HUR, Ukraine’s military intelligence agency, did begin quiet contingency planning in January, prompted by the US warnings and the agency’s own information, one HUR general recalled. Under the guise of a month-long training exercise, they rented several safe houses around Kyiv and took out large supplies of cash. After a month, in mid-February, the war had not yet started, so the “training” was prolonged for another month.

> The army commander-in-chief, Valerii Zaluzhnyi, was frustrated that Zelenskyy did not want to introduce martial law, which would have allowed him to reposition troops and prepare battle plans. “You’re about to fight Mike Tyson and the only fight you’ve had before is a pillow fight with your little brother. It’s a one-in-a-million chance and you need to be prepared,” he said.

> Without official sanction, Zaluzhnyi did what little planning he could. In mid-January, he and his wife moved from their ground-floor apartment into his official quarters inside the general staff compound, for security reasons and so he could work longer hours. In February, another general recalled, table-top exercises were held among the army’s top commanders to plan for various invasion scenarios. These included an attack on Kyiv and even one situation that was worse than what eventually transpired, in which the Russians seized a corridor along Ukraine’s western border to stop supplies coming in from allies. But without sanction from the top, these plans remained on paper only; any big movement of troops would be illegal and hard to disguise.

[0] https://www.theguardian.com/world/ng-interactive/2026/feb/20...

Tomis02

What's the choice here? Start saying "Russia will attack" and then see 20% of your population flee, while Putin sits and waits until your economy runs into the ground, and THEN attacks? There's no good choices here.

the-smug-one

I'd say that Ukraine was very prepared for the invasion, though? They managed to survive the first two weeks, leading to a long-term war. The Donbas war had already been going on for eight years, and I don't think Ukrainians were under any illusion that those weren't Russians.

blitzar

On the flip side, all around the world you have "leaders" talking about imaginary conflicts with foreign countries that we must spend billions to prepare for (they have a friend who really should get the contract), and if the other side (tm) gets in, your whole family will be killed instantly.

fifilura

Killing of families is what happened in Ukraine in the Russia controlled territories.

teiferer

In hindsight, it's easy to be smart. You picked two examples where somebody said "never gonna happen" and then it happened. How about the countless examples where somebody said the same and then the thing actually didn't happen?

Take the millions playing the lottery. To each of them, I can confidently say, "You won't win, not gonna happen." For almost all of them I'll be right. There will be one who wins, where I was wrong, and they will say, "See, told you so." That doesn't mean my prediction was wrong. It means you have a reporting bias.

hnfong

GP also probably had a sampling bias. The ones who were actually concerned about the impending Russian invasion presumably fled out of the country (or at least, away from the major cities to rural areas that probably see less fighting)

_heimdall

I was in a neighboring country in Europe at the time, not Ukraine, but we didn't see any Ukrainians move into our area until a few weeks after the war started.

That's not to say the country wasn't prepared, though. If the GP did talk to people on the ground days before it started, saying it wouldn't happen would match the public messaging at the time coming out of the Ukrainian government and its allies. They knew it was coming and seemed to decide it was better to feign unreadiness and avoid public panic before it started.

sofixa

> They did not properly prepare and as a result lost 20% of its territory in days.

They did, though. While nobody actually believed Putin would be dumb enough, the Ukrainian army was still, just in case, extremely busy preparing defences, organising stockpiles, and working out defensive tactics.

_heimdall

> While nobody actually believed Putin would be dumb enough

I'm not sure why you'd say nobody thought they would invade. To me it was clear in December of the year before, when the Russian navy began sailing the long way around Europe, getting in the way of Irish fishermen, and it was confirmed days before the invasion when they stockpiled medical personnel and blood near the front lines.

mmmmmbop

You must have made a fortune from shorting Russian stocks in December 2021.

anonymars

When the US warned, days before, of the imminent invasion, the broad reaction was still one of "the boy who cried wolf"

lotsofpulp

It was clear when they captured Crimea.

vasco

> Since that I have also been in Dubai and Saudi Arabia as an entrepreneur and engineer.

Why would we listen to anything related to right or wrong from you then if you don't care?

zero0529

Every day, Peter Naur’s paper "Programming as Theory Building" gets more relevant.

Link: https://gwern.net/doc/cs/algorithm/1985-naur.pdf

apitman

This was the first article I thought of as well. Highly recommended read.

neuderrek

I remember the same complaints about junior engineers copy-pasting snippets of code from StackOverflow without understanding them. Without the curiosity to understand, and without code review and mentorship from senior engineers, they never grew to the senior level. But that was only some of them; others used StackOverflow to learn, did not use snippets without understanding them first and properly adapting them to their context, got good coaching in their teams, and have reached senior level from there. I see the same dynamic with LLMs, just with more opportunities both for juniors to learn by following up, and for seniors to create tooling to enforce better architecture, test coverage, and fault resiliency.

isodev

I think you're missing the point. Nobody removed people thanks to their SO copy-paste skills. If anything, more folks were hired to troubleshoot and sort out the copy-pasta blunders (since you actually need working software, at the end of the day).

With LLMs this is no longer true: the thing can vibe a great deal before anyone notices that they have 100,000 lines of code doing what a focused, human-reviewed and tested 10,000 lines could do. And as this goes on, it becomes increasingly difficult for anyone to actually dig into and fix things in the 100,000 without the help of LLMs (thus adding even more slop to the pile).

RossBencina

Excellent post. Two stand-out points are deskilling through the abolition of apprenticeship (or equivalent progression through the ranks and responsibilities), and loss of institutional knowledge, especially tacit knowledge stored in individual people. These are people problems more than they are technology problems. Without continuity of process and practice, stuff gets lost. Sometimes change really is progress: software safety and security practices, for example, have progressed over the past 50 years. But other times change is just churn, or choices driven by misaligned incentives which will bite later, as the article describes.

RangerScience

What comes to mind is how the cure for scurvy was simply… forgotten, causing it to come back.
