Brian Lovin
/
Hacker News

Animats

What did they write that article with?

The year is 2026. The unemployment rate just printed 4.28%, AI capex is 2% of GDP (650bn), AI adjacent commodities are up 65% since Jan-23 and approximately 2,800 data centers are planned for construction in the US. In spite of the current displacement narrative – job postings for software engineers are rising rapidly, up 11% YoY. ... We wrote last week that we see the near-term dynamics around the AI capex story as inflationary, but given markets are focused on the forward narrative, we outline a more constructive take on the end state below. Before that, however, it’s worth reflecting that the imminent disintermediation narrative rests on the speed of diffusion.

The chart "Job Postings For Software Engineers Are Rapidly Rising" seems to show a rise from 65 to 71 for "Indeed job postings" from October 2025 to March 2026. That's a 9% increase. Then they inflate that by extrapolating it to a year. The graph exaggerates the change by depressing the zero line to way off the bottom and expanding the scale. This could just be noise.
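The arithmetic is easy to check. A quick sketch in Python, using the eyeballed chart readings above (65 and 71 are approximate values read off the chart, not official data):

```python
# Back-of-envelope check of the chart reading above. The 65 and 71 are
# approximate values read off the chart, not exact data.
start, end = 65.0, 71.0

window_change = (end / start - 1) * 100             # change over the ~5-month window
annualized = ((end / start) ** (12 / 5) - 1) * 100  # naive compounding to 12 months

print(f"window change: {window_change:.1f}%")    # ~9.2%
print(f"naively annualized: {annualized:.1f}%")  # ~23.6%
```

Note that naively annualizing the window gives a much bigger number than the headline 11%, which suggests the YoY figure comes from some other comparison window entirely.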

The chart "Adoption Rate of Generative AI at Work and Home versus the Rate for Other Technologies" has one (1) data point for Generative AI.

This article bashes some iffy numbers into supporting their narrative.

Suggested reading: [1]

[1] https://en.wikipedia.org/wiki/How_to_Lie_with_Statistics

buppermint

Worth seeing the whole chart in perspective:

https://fred.stlouisfed.org/series/IHLIDXUSTPSOFTDEVE

rogual

Worth also noting that this chart has the bottom of the Y-axis cut off, exaggerating differences and making visual intuition basically useless.

ajb

The format is editable. The line chart always seems to be scaled so the minimum is at the bottom, but you can get the zero point by changing it to bars.

The options do seem a bit idiosyncratic, but I guess they are useful for the kind of data the site users usually look at.

abirch

This graph was scaled to 2020=100 so not as bad as excluding 0 for raw numbers.
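For readers unfamiliar with indexed series: each value is a ratio to the base period, so "230" means 2.3x the base-period level, not 230 postings. A minimal sketch of the indexing, with made-up counts chosen to echo the 230 and 80 mentioned elsewhere in the thread (the real FRED series publishes only the index, not raw counts):

```python
# Hypothetical monthly posting counts, made up purely for illustration.
raw = {"2020-02": 10_000, "2022-01": 23_000, "2023-06": 8_000}

base = raw["2020-02"]  # base period gets the value 100
indexed = {month: round(count / base * 100, 1) for month, count in raw.items()}

print(indexed)  # {'2020-02': 100.0, '2022-01': 230.0, '2023-06': 80.0}
```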

euroderf

The art of putting wavy lines across an axis to denote a range skip has atrophied, probably because few charting software packages support it?

pembrook

But it essentially shows the same thing, the covid overhiring boom and then layoff cycle post-covid is over. And jobs are rising again.

What’s absolutely mind blowing to me though…the idea AI isn’t causing software engineering jobs to collapse…which you would think would make people here happy…is something that makes software engineers upset??

It’s almost as if everyone here has married their identity to the idea they are victims of AI progress and any suggestion otherwise is ego destruction.

”What??? You mean the job market is expanding and the reason I can’t find a job is…me? That can’t possibly be true I’m a genius, the data is clearly wrong!”

Animats

Wow. Huge crash between 2022 and 2023, from 230 to down around 80. Why? That's the real question. What happened? It's post-COVID.

Then stuck in the 60-80 range since 2023. The sample period chosen by Citadel is wildly deceptive.

This is an important question and these crap stats are not helping.

mbgerring

There was a change in US tax law that revoked the ability of software companies to classify engineer salaries as an R&D expense, which massively increased the tax liability for many software companies.

ahoka

It’s not a crash, but a huuuge peak around ‘22.

sublinear

There was a hiring bubble in 2022 just before the Fed raised interest rates. I'm not understanding what the mystery is.

The link you're responding to has the option to zoom out more to 2020. If you scroll down to view the other related graphs, you'll find that they also index 2020 as a starting point because they're all tracking this hiring bubble.

tarsinge

Interesting chart that confirms hiring dynamics for SEs have not much to do with AI despite all the PR: in 2023, model and agent capabilities were quite limited, and now that capabilities are increasing, hiring is picking up. I hope more journalists will start to challenge that narrative.

ido

Click "max" to see it's the corona peak that's the outlier.

georgeecollins

Wow, that says a lot with data. Thank you.

samrus

"Whole chart"

Graph starts with a black swan event

blharr

Unfortunately that's when they started recording the data

emp17344

Just go look at the chart: https://fred.stlouisfed.org/series/IHLIDXUSTPSOFTDEVE

It’s continuously updated, and postings are still on the rise as of last week, so your criticism doesn’t make much sense.

choppaface

While I like that you debunked the article . . . I want to hear an argument for where the SWE job market can grow in a post-Claude world. I might expect something like: “CEOs are naturally greedy. So after trimming the team, they then recognized (versus "replacing" people with AI) they could actually accomplish _more_ with more engineers, each empowered with AI.”

But I do like folks calling out the OP for being AI spam.

Animats

I'm not sure whether it's AI spam, or somebody at an investment company who actually writes like that. It's an exaggerated version of the style in McKinsey reports.

They're addressing a very important question, and one for which there is surprisingly little hard data. It's too soon to try to see a trend from low-quality data. Three years of this data might be meaningful.

swiftcoder

> It's an exaggerated version of the style in McKinsey reports.

McKinsey reports are the original slop

davedx

It's not AI spam, there are typos

senexes

I don't work in IT but I use and love Claude Code. What strikes me is that maybe the overall software job market cannot grow to surpass the post-COVID peak, but any current professional software engineer has immensely valuable skills that can no longer be gained in the same way, if at all.

I would think the counterargument to the greedy-CEO argument is that AI breaks the former economy of scale in the opposite direction, towards hyper-specialization in business with small teams. In that scenario, as the economy grows with more and more businesses, current software engineers become the substrate for a new type of off-brand bargain CTO, as opposed to the current luxury-brand CTO sitting at the top of oversized companies. The bull market becomes the higher level that current software engineers step into.

Most likely though, none of this is true, and 15 years from now it all shakes out in a way that none of us could have really predicted from our vantage point, because the prediction would sound ridiculous with the information at hand.

ricardobayes

Computing cost and reliability remain the bottleneck. AI agents are nowhere near smart enough to carry out tasks on their own. Combine that with the fact that 95% of gen-AI pilots "failed" [1], or at least failed to improve the bottom line. Layoffs were never about AI; they were almost always about capex and correcting the pre-2022 overhiring. All CEOs are hearing in 2026 is, "I didn't get anything done, but the model hit the limit".

However, if locally deployable, meaningfully capable AI models appear, that could change the computing cost equation.

[1]: https://www.forbes.com/sites/jasonsnyder/2025/08/26/mit-find...

everforward

It really depends on how you define a software engineer. If you mean software engineers doing what we do today, the market probably won’t.

If you just mean “people who make software in any capacity”, it will probably grow (or has already grown) via product, marketing, etc folks making internal tools with AI (which may not work out, we’ll see).

Presuming we keep seeing LLM improvements, SWE will move up the stack like they did in the past. They used to work directly with hardware and software. Ops folks sprung up to do the hardware, and SWEs do basically all software using abstractions over hardware. This will be another step up where SWEs no longer work directly on software, but rather on the tooling that writes software which they hand over to marketing, HR, etc.

Again, presuming this all works out the way the AI folks plan.

undefined

[deleted]

endless1234

The world runs on software. AI makes it easier to create more software, but it still requires humans to keep it running and decide what to do. Maybe each individual project will need fewer pure coders, but there might be a lot more projects?

seanmcdirmid

As long as software engineers are needed to leverage AI (they can manage the output, refine the prompts, check the BS), there is plenty of software to write, and not having SWEs just means less of it gets written.

littlexsparkee

Does any serious SaaS HR use Indeed? Whenever I hear that is the source, I immediately question it because companies I look up use Ashby, Lever, Greenhouse, Jobvite, Dover, etc.

edit: nvm they probably pull in results from these ATS

disgruntledphd2

Indeed scrape a lot of places looking for jobs. While they don't get a lot of the startup scene, it's a better metric across the economy.

undefined

[deleted]

rswail

I've noticed this "XYZ just printed rate%". WTAF does "printed" mean in this usage?

Do they mean "published" or "latest" or what?

halper

Is it not just the same as when people suddenly started having "an ask"? It is some kind of in-group speak that you adopt just to show that you are with the times.

tanseydavid

I believe this wording originates from references to a Stock Ticker machine and the Ticker Tape which would "print" the "latest" values of stocks, interest rates etc.

jdw64

Personally, I prefer vibe coding in the sense of stitching things together at the function-to-method level.

Unlike people who take the extreme position that vibe coders are useless, I do think LLMs often write individual functions or methods better than I do. But in a way, that does not fundamentally change the nature of the work. Even before LLMs, many functions and methods were effectively assembled from libraries, Stack Overflow snippets, documentation examples, and copied patterns.

The real limitation comes from the nature of transformer-based LLMs and their context windows. Agentic coding has a ceiling. Once the codebase reaches a scale where the agent can no longer hold the relevant structure in context, you need a programmer again.

At that point, software engineering becomes necessary: knowing how to split things according to cohesion and coupling, using patterns to constrain degrees of freedom, and designing boundaries that keep the system understandable.

In my experience, agentic coding is useful for building skeletons. But if you let the agent write everything by itself, the codebase tends to degrade. The human role is to divide the work into task units that the agent can handle well.

Eventually, a person is still needed.

If you make an agent do everything, it tends to create god objects, or it strangely glues things together even when the structure could have been separated with a simpler pattern. Thinking about it now, this may be exactly why I was drawn to books like EIB: they teach how to constrain freedom in software design so the system does not collapse under its own flexibility.

wombat-man

The models are improving. The software that harnesses them is also improving. It wasn't that long ago that the models were quite bad at a lot of the tasks that they are excelling at today. I do agree there's probably a ceiling to what we can get out of these, but I also don't think we have quite hit that point yet.

jdw64

I agree with what you said. And perhaps my belief that “people like me are still needed” is just a desperate form of self-persuasion.

If AI replaces everything, then I become unnecessary. So maybe I am simply trying to convince myself that developers like me are still needed.

That said, realistically, I still think there are limits unless the essence of architecture itself changes. I also acknowledge part of your perspective.

Those of us who are not in the AI field tend to experience AI progress not as a linear or continuous process, but as a series of discrete events, such as major model releases. Because of that, there is inevitably a gap in perspective.

People inside the industry, at least those who are not just promoting hype, often seem to feel that technological progress is exponential. But since we are not part of that industry, we experience it more episodically, as separate events.

At the same time, capital has a self-fulfilling quality. If enough capital concentrates in one direction, what looked like linear progress may suddenly accelerate in an almost exponential way.

However, even that kind of model can eventually hit a specific limit. I do not know when that limit will arrive, because I am not an AI industry insider. More precisely, I am closer to someone who uses Hugging Face models, builds around them, and serves them, rather than someone working on AI R&D itself.

tharkun__

    “people like me are still needed” is just a desperate form of self-persuasion.
No, no it's not. I've seen what "PM armed with an LLM" will do. Trust me, if you're a decent enough Full Stack software engineer that can take an idea and run with it to implement it, you'll have a leg up over the PM with the idea that has no idea how to "do computers".

Most of what these PMs can produce nowadays turns boardroom heads, sure. But it's just that: visuals and just enough prototype functionality that it fools the people you're demoing to. Seen enough of these in the recent past.

Will there be some PMs that can become "software developers" while armed with an LLM? Sure!

But that's not the majority. On the other hand, yes there are going to be "software developers" that will be out of a job because of LLMs, because the devs that were FS and could take an idea from 0-1 with very little overhead even in the past can now do so much faster and further without handing off to the intermediates and juniors. They mentor their LLM intern rather than their intermediates and juniors. The perpetual intermediate devs with 20 years of experience are the ones that are gonna have a larger and larger problem I'd say.

The Staff engineer that was able to run circles around others all along? They'll teach their LLM intern into an intermediate rather than having to "10 times" a bunch of perpetual intermediates with 20 years of experience.

wombat-man

I have a more optimistic take. Those of us who have done it by hand for a while are armed with that experience. Yes you can just use an LLM to do everything now, but I think it's tough to supervise it on tasks that you've never actually had to do. Maybe that won't be as important as I think, but I think that I'd have learned a lot less in school if I just used an LLM to code everything.

Day to day, the resolution of our work is probably different. We're zooming out and spending more time strategizing and managing the AI tooling. This might mean fewer jobs. It might also mean we just get more done.

I don't work on AI directly either, but I'm finding a lot of value in learning the new tooling. I think being able to competently leverage these tools is going to be a key skill from now on.

riffraff

I'm with you at the "bargaining" phase of AI grief (sure AI is useful but it won't replace me!).

I think my reasoning is you still need a tech person to translate from feature to architecture. AI can do both but not everyone knows they need the latter.

ponector

If everything is improving, why is the quality of released software going down?

ares623

At $800B collective spend, you would hope these things are improving. The point is whether the improvements have been worth $800B and counting.

wombat-man

I think part of the motivation for the big spend by the big players is to choke out Anthropic and OpenAI. They're going to make sure they're the only ones scaling up the huge capacity they expect is needed. To meet demand, Anthropic is just going to need to pay the cloud bill to somebody, which will really hamstring their ability to profit.

vasco

Yes, for sure. Even if we stopped today, the amount of almost-free software that can be produced with current models will improve the world a lot as knowledge of how to use it propagates to more people.

bambax

Yes, but I don't think having LLMs only write functions, and doing the architecture yourself qualifies as "vibe coding": rather "AI-assisted engineering" (which is what I do).

Vibe coding, to me, means having an LLM, with or without agents, do everything after an initial vague prompt. Which is why "anyone" can vibe code (because anyone can write general hand-waving imprecise instructions). This inevitably results in pointless demos and/or unmaintainable monsters.

8note

It's not necessarily better, but it's certainly good enough, if you're already used to distributing work to different people.

The scale of code doesn't really matter that much, as long as a programmer can point it at the right places.

I think you actually want to be really involved in the skeleton, since from what I've seen the agent is quite bad at making skeletons that it can do a good job extending.

If you get the base right though, the agent can make precise changes in large codebases.

jdw64

Thinking about it, I think what is interesting about the output of agentic coding is this:

I mostly agree with the general tendency that it starts to break down as the context grows. But there is also a difference in how people evaluate it. Some people say agents are good at building the skeleton, while others say they are better at extending an existing structure.

I think this depends on the setup, and it is ultimately a trade-off.

In my case, I usually work on codebases around 60,000 LoC. The programs I deliver are generally between 60,000 and 80,000 lines of code. I think I can fairly call myself a specialist at that scale, since I have personally delivered close to 40 projects of that size.

At that scale, I felt that agentic coding was actually very good at building the initial skeleton.

I do not know what kind of work you usually do, but if your work involves highly precise, low-level tasks, then I can understand why you might feel differently.

In my case, I mostly assemble high-level libraries and frameworks into working systems, so that may be why I experience it this way.

sroussey

The coding agents are good at growing code.

Like a child growing up!

Also, like a cancer.

Similar process, different outcomes.

slopinthebag

I think it's just the context it's working in.

1m lines of html are infinitely more conducive for a language model to work in than 10k lines of complex multithreaded low level code.

A lot of coding is just rehashing the same concepts in slightly novel ways, language models work great in this context as code gen machines.

The hope is that we can focus our efforts on harder problems, using language models as a tool to make us more productive and more powerful, and with the advancements open weight models have made, also less reliant on big tech companies to do so.

energy123

I find LLMs are good at skeletons but only if you are tedious about writing down what you want before you start. Then give that text to GPT 5.5 Pro, and be prepared for a number of iterations.

slopinthebag

I agree. Language models are good at codegen, in some sense they are just another codegen tool, except instead of transforming a structured language (like a config file or markdown) into code, they can convert natural language into code. Genuinely useful for the repetitive boilerplate grunt work. If that's all you do, then I can see fearing getting replaced. Thankfully by handling the drudgery, it frees us up to work on more complex and cutting edge work.

Like, it's not surprising that the developers who frequently talk about +90% of their work being delegated to LLMs are web developers. That is a field with very little innovative or complex code, it's mostly just grunt work translating knowledge of style rules and markup to code, or managing CRUD. I'm really thankful I can have a language model do that drudgery for me.

But compare that to eg. writing a multithreaded multiplayer networking service in Rust, they fall woefully short at generating code for me. They can be used in auxiliary aspects, like search or debugging, but the code it produces without substantial steering is not usable. It's often faster for me to write the code myself, because it's not a substantial amount of low impact code required, but a small amount of complex high impact code which needs to satisfy many invariants. This is fast to type, the majority of the work is elsewhere. At the end of the day, they work really well to replace typing the boilerplate, which is much appreciated.

ngruhn

Try to get an animation just right without human guidance. It's difficult to give the agent feedback on its work. With browser MCP the agent can only make screenshots and see a single frame of the animation. Also agents are quite slow with browser handling. If the animation starts when a button is clicked, the animation is usually over before the agent has taken the screenshot.

All behavior of backend code can at least be described with automated tests.

slopinthebag

Yeah like I don't mean to demean front end work because there is a lot of stuff that isn't gruntwork or boilerplate, especially in the artistic fields or UI that is actually really complex. I actually made my initial career off of UI/UX. And a lot of the CRUD backend stuff really is literally just shuffling data in the most boring and replicated way as well.

I guess my point is more that we have a lot of code being written that probably should have been automated already in some way, but it was simply more practical to just have people writing it. I don't see much harm in automating it with AI - the people doing the grunt work are largely capable of more, but at the end of the day someone has to dig the ditches. Now that we have a backhoe, they can go do more interesting stuff.

However when I see people who were largely writing meaningless boilerplate now claiming that software development is dead because they've become automated, I think it's important that people are being realistic about the different contexts in which AI is either useful or not. There is a wide range of experiences, some people believe AI is useful in completely automating their jobs and others feel it's mostly useless, and of course most people are in the middle somewhere. They're all correct, but the context is crucial.

As far as I'm concerned it's just another tool in the toolbox.

colechristensen

I've found the LLM limitation of codebase size is removed with correct design of the codebase.

If you organize your product into a collection of appropriately scoped libraries (the library is the right size for the LLM to be able to comprehend the whole thing) then the project size is not limited by the LLM comprehension.

Your task management has to match, the organization of your ticketing system has to parallel the codebase.

With this the LLM can think at different scales at different times.

slopinthebag

Yeah but this is just regular programming.

Of course you can break things down into the right atomic units where a code gen machine becomes useful. Because you are an expert. People who aren't literally have no clue.

In any task, no matter how complex, you can break it down into units where a language model can output useful code. The more complex the task, the smaller the units. At some point it's faster to write it yourself; that's the limit on the codegen.

I still don't see how it's anything else than a tool that experienced and knowledgeable workers can use to save time and energy to focus on the hard parts.

ricardobayes

Yes that is all true. LLMs are excellent in providing a single function, but decision-makers extrapolated that capability so they thought LLMs can work on their own with minimal or no supervision. That's not going to be realistic for a very long time.

altern8

How long before they raise the amount of context it can hold?

Or is there a ceiling that we can't get past?

disgruntledphd2

Compute scales quadratically with context length, so barring algorithmic improvements, there's definitely a limit.
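A back-of-envelope sketch of the quadratic term (illustrative only: this ignores the linear MLP and KV-cache costs, and sub-quadratic attention variants exist):

```python
# Vanilla self-attention scores one (query, key) pair per pair of tokens,
# so the attention term grows with the square of the context length.
def attention_pairs(context_len: int) -> int:
    return context_len * context_len

for n in (8_000, 16_000, 32_000):
    print(f"{n:>6} tokens -> {attention_pairs(n):>13,} pairs")

# Doubling the context quadruples the attention work.
assert attention_pairs(16_000) == 4 * attention_pairs(8_000)
```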

altern8

OK, so we'll have to deal with this for a good while

rcpt

We all got agents at work now and still the engineers haven't equalized

sakopov

I know this is anecdotal but after almost 2 years of no activity, I have been absolutely hounded by recruiters for nearly a month. They show up in my LinkedIn feed and I get multiple emails a week asking to interview. What in the world changed? It doesn't look like the job market's improved much. In fact I see more layoffs than ever before.

jesse_dot_id

LLMs need a competent engineer to create code that doesn't suck. Reality is catching up to the hype.

wg0

Absolutely true. LLMs need a really experienced engineer that has great intuition about the system design.

Otherwise good luck getting things done in a business environment where people and processes depend on the software you produce.

The thugs of AI thought reality wouldn't catch up to them, that they're untouchable.

ygrr

And it takes years of doing it the ‘old way’ to build intuition.

Many firms are implicitly assuming the models will keep improving to the point where all these problems go away. But what if they don’t?

jedberg

I have an alternate explanation. With the rise of AI recruiters, there is no cost to them to contact you. They don't even have to do the search and compose the message. They're basically reaching out to everyone.

At first I found the AI recruiters impressive, because they tricked me. I thought the recruiter had really done their homework and read my profile deeply!

Now I know that an AI is reading it, picking random things to highlight, and then composing a message. But they're not real. They're just trying to connect to you so that they can say you are in their network when they go to sell their services to hiring managers.

nacozarina

Same, and it has been enormously satisfying telling ppl pitching low-grade offers and multiple interview rounds to get rekt.

ai_slop_hater

How many years of professional experience do you have?

wg0

Basically - I anticipate a 3x rise in software engineering salaries in the next five years if the dumb "oh, coding is a solved problem" rhetoric continues, because of the collapse in the supply side.

roncesvalles

And the hockeystick trend of new code being written. There is literally no better time to be studying CS than today, yet the average person believes the exact opposite.

nunez

This is the career version of "buy the dip"

enraged_camel

That's because unemployment among new CS grads is the highest it has ever been.

undefined

[deleted]

theiz

What I see is an immense number of bugs and security issues that can be found much more easily now than before, because of AI. I also see less trust in using AI in direct coding, because there are many examples of code additions that breach the safety of software in enterprises. Now to solve this, it requires actual humans to do the coding. And with that, it is probably true that more use of AI in coding leads to more SEs being required to oversee and ensure security. I personally see the big benefit of AI tooling being in testing, security checking, documenting, etc. rather than coding itself.

princevegeta89

Using coding agents feels like always working under a blanket where you cannot see beyond it; there is this thick mask blocking you from knowing what's going on. It unfortunately projects a very misleading impression that things can be built very quickly and that systems can be designed in a robust and maintainable manner. But even with the best models that I've used, that is not true.

When the number of features reaches a decent figure, the hallucinations grow, and more often than not, we have no idea what the AI agent is writing. Pull requests become meaningless because there is too much code to review, and AI is handling it anyway. So it's basically taking the eyes off engineers in general. There are many bugs waiting to be uncovered.

Compare this scenario to the absence of all these coding agents: all engineers would know the codebase very well, how the flows happen, and how to do a deep dive. I have a very bad feeling about this unproductive direction in general. It's good for writing small modules, but companies seem to be expecting to churn out a lot of code in a very short amount of time.

An overwhelmingly large number of engineers have close to zero satisfaction with their work. A lot of firefighting happens across the board. There is a ubiquitous use of AI everywhere in reading documents, writing documents, and wherever hallucinations occur, critical information is also being missed. It's not a surprise at the end of the day, but this entire situation has put us in a very messy overall circumstance.

DeathArrow

So there will be again waves of hiring developers only for companies to realize after 5 years that they have too many employees and fire them again?

killingtime74

Like James Franco said in The Ballad of Buster Scruggs, "First time?"

dontgetfired

90% of the job ads I see have the word "AI" in them. It can be a startup hoping for a get-rich-quick opportunity from the AI hype, or an established company.

Both types expect you to spend as many tokens as possible so that the AI bubble doesn't burst (presumably because leadership has a financial interest in this).

Your actual productivity isn't important. If you point out that you're much faster writing code on your own in 90% of cases, you will be told you're not good at AI, you're not prompting it correctly and that generally you're not AI-native and that you'll be left behind. To be precise, token usage is a performance metric, so you'll be let go if Claude is not running continuously 8 hours a day.

I'd like to know how many places have mandates to write 100% of your code using AI, as well as to max out your AI agent's plan. For some reason nobody talks about it even though I know several companies around the world that are forcing this on their employees.

If you're looking for a job then you don't have a choice, it's better to have an income. But if you're looking to change jobs to get away from AI to actually be productive and gain experience then it's a very bad job market.

ed_elliott_asc

I’ve been programming for 25 years, I’d struggle to think of a scenario where I’m faster writing code manually than prompting ai to do it

[edit 25 years not 20]

dontgetfired

You read the AI-generated code, right? That takes time and effort. Whereas if you wrote it yourself then you already read it.

ed_elliott_asc

Yes but it takes a lot less time to read it

nine_k

"AI" is everywhere, because it's the fashion. A lot of jobs do not require AI mastery, or even heavy use.

I've been searching for a job for many months, and I do see the uptick quite clearly.

dontgetfired

> "AI" is everywhere, because it's the fashion.

Fashion is when developers jump on the next web framework because they got bored of the old one.

But when you get fired for not enough token usage, that's something else. When bosses start demanding you write 100% of your code using AI, and then a few months later Anthropic reports 30% increase in usage, that's not fashion. People who invested in AI are putting a lot of pressure on developers to ensure their investment pays off.

groundzeros2015

It feels like when Java and Object-oriented programming were popular. You must use the object orientation, it is the future. Imagine not being able to reuse code, etc.

andrekandre

java, xml, corba, soap, uml out the wazoo... blech

by that metric it will take the industry about 10 years to recover from this

otabdeveloper4

> your AI agent's plan

Token billing is coming very, very soon; there won't be a "plan".

What will these companies do then?

ai_slop_hater

I will probably use a local model

undefined

[deleted]

ai_slop_hater

"AI-native" lmao, what a term

undefined

[deleted]

enraged_camel

Title is editorialized and the report is from two months ago.

emp17344

https://fred.stlouisfed.org/series/IHLIDXUSTPSOFTDEVE

The chart is continuously updated, and postings are still on the rise as of last week. Your criticism is moot.

legitster

I foresee the need for engineers to be really "wavy".

I have personally never been busier or more productive. It's like all the "work" of my work has disappeared. There are no more blockers and I can just run free and get as much done as I want and the only thing slowing me down is Jira.

The real downturn is going to be the SaaS apocalypse. In the next year or two there will be a reckoning where all these expensive low-code/no-code middleware applications suddenly don't make sense.

So I think it will be less about the ranks of engineers being thinned out unilaterally, and more about large swathes of products being obsolete.

ed_elliott_asc

Which SaaS companies/products do you think are at risk?

tossandthrow

We are already in the process of removing two: backoffice software that we moved to an in-house React app, and a library that has a license fee.

None of these are really because of cost. But more because we can get a superior product by doing so.

rwmj

None of them because those who think SaaS companies are just a bunch of bad code that is going to be quickly rewritten have no clue what they're talking about. No sane company is going to vibecode a replacement for Salesforce, because then they have a half-assed, buggy, broken pile of code they have to maintain, instead of outsourcing that problem along with legal, compliance and support to someone else.

It's honestly tiresome to keep having to debunk this with people who have no clue at all how large companies operate.

lbreakjai

Nobody's going to vibecode an internal salesforce. On the other hand, the barrier for ex-salesforce engineers to take their knowledge and build a competitor with the 20% of the features that represent 80% of the usage is dramatically lower.

I think the SaaS landscape will look vastly different in five years.

baq

Nobody wants to hire a new team member when it takes 3 months to train them, and a new Opus will be out by then.

I suspect hiring will pick up when the capability of the models stops growing so quickly or the gaps between releases start widening. Obviously the problem is that capabilities are not slowing down and the gaps are getting shorter…

krzyk

Model capabilities are rising more slowly than model prices. Recent price increases made hiring juniors the cheaper option even in the short run, not to mention the long run.

misiek08

Companies hiring more people to build AI-based, self-healing and self-developing systems faster? "We don't need those old programmers, we need new people who know how to build harnesses around AI." Hiring those "old" programmers, but from other companies.
