cortesoft
TheScaryOne
>The first is the fear of job loss, and I feel like this is the most straightforward to deal with. Personally, I think the solution should be to share the productivity of AI with society at large, in particular since AI owes most of its abilities to training on the works of society.
This was the argument about robots, too. It did not pan out: no taxes materialized, and robots and automated machines have not shared their productivity. In fact, things like self-checkout have shifted the labor load onto the customer instead of the company.
>We have the technology to produce energy in sustainable ways, but it is expensive
AI Datacenters should be completely sustainably self-powered. Full stop. We did not spend decades bringing down the cost of power only to have it all hoovered up by robber barons who "need" it to be the first immortal AI God. We did not install water treatment plants to bring down our water usage rates just to feed the machine spirit.
>How do we treat AI creative work? How much creative work do we feel comfortable handing over to AI?
Someone said it as a joke, but I want AI to be doing my dishes and sorting my laundry while I write books and compose music. I don't want AI writing books and composing music so I have more time to do my dishes and sort my laundry.
Quarrelsome
> Someone said it as a joke, but I want AI to be doing my dishes and sorting my laundry while I write books and compose music. I don't want AI writing books and composing music so I have more time to do my dishes and sort my laundry.
Well then we should maybe ask ourselves why reality TV gets more views than well-written work.
troosevelt
If you lost your $60,000 a year job due to this, do you really believe a basic income funded by it will make up that loss? It won't. Basic income in the US is usually proposed at $12k per year, which would add another $3 trillion to the budget. Do you think you can even get that just taxing these companies? I don't.
People who bring up basic income need to get serious about the numbers involved because I never see it. It's not a realistic solution.
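The budget claim above is easy to check with rough arithmetic. A quick sketch, assuming roughly 260 million adult recipients (my round-number assumption, not a figure from the comment):

```python
# Rough annual cost of a $12k/year US basic income (round-number assumptions).
ubi_per_person = 12_000        # dollars per year, the commonly proposed figure
recipients = 260_000_000       # approximate number of US adults (assumption)

annual_cost = ubi_per_person * recipients
print(f"${annual_cost / 1e12:.1f} trillion per year")  # about $3 trillion
```

That lines up with the ~$3 trillion figure cited, before accounting for any offsetting program cuts or clawbacks.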
omikun
People who complain that UBI doesn't make mathematical sense don't realize our current economy doesn't make mathematical sense either. All this prosperity we in the developed world enjoy comes at the cost of extracting wealth from the rest of the world, and of governments taking on ever more debt.
mvdtnz
That's an absolutely enormous claim to make with zero evidence.
joquarky
Also, UBI will inevitably become just as convoluted and corrupted as our current tax laws.
flashgordon
Not to mention inflation. What's not clear is whether UBI is cash only, or whether it includes access to "basic" services like health care, education, and housing.
ggsp
Fair warning: I’m quite ignorant in terms of economics, so this is a naïve way of looking at it.
The question that always pops up for me when it comes to UBI applied to the current capitalist system: even if you did actually come up with the money somehow (which is a pretty huge if as you say), once everyone has X “base money” per month, doesn’t that mean the cost of living (specifically renting) will rise to match this new “base”?
andriamanitra
The cost of living would certainly rise somewhat but the point is that UBI is redistributive: the same absolute amount to everyone raises low incomes by a larger percentage than high incomes. Long term effects are hard to predict but in the short term it would mean the poor doing slightly better while the middle class is slightly worse off. The non-working (owning) class would be mostly unaffected as assets are insulated from inflation.
Another factor to consider is that putting more money in the hands of people in need of <thing> means producing <thing> becomes more profitable and thus more investment and resources are directed towards <thing>. If we assume the economy works the way the proponents of capitalism say it does, this should eventually drive the cost of living back down.
But personally I think the biggest benefit of UBI would be the reduction in number of people who are desperate enough to accept work – both legal and illegal – that is unfairly compensated, inhumane and/or immoral. The existence of that class of people is the driving force behind many societal problems. Exorbitant amounts of resources are wasted treating the symptoms of those problems instead of fixing the root cause.
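The redistributive point above can be made concrete with a toy calculation (all income figures here are illustrative, not from the comment):

```python
# The same flat UBI is a much larger relative boost for low incomes.
ubi = 12_000  # dollars per year, hypothetical

for income in (20_000, 60_000, 200_000):
    pct = ubi / income * 100
    print(f"${income:>7,} income -> +{pct:.0f}% from UBI")
```

A $20k earner sees a 60% boost while a $200k earner sees 6%, which is the sense in which a flat payment compresses the income distribution even before any tax changes.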
animegolem
And even if you did get the $60k and could never find work again, are you going to be happy about the next-door neighbor working for $120k and getting his $60k on top?
site-packages1
Well I can tell you that I work 40+ hours a week and am very unhappy my neighbor has a more expensive house than me. Someone should do something!
abakker
All the proposals I’ve seen would set the marginal tax rate on the 120 so high that his earnings would end up more like 40k from the 120k job and then he gets his 60. So, still some benefit to working, but a very progressive tax rate on higher earnings. Not sure I agree with this, but that is what I’ve seen.
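Plugging in the ballpark figures above (these are the comment's numbers, restated only to show the remaining work incentive and the implied tax rate):

```python
# High marginal tax on earnings plus a flat UBI, per the proposals described.
salary = 120_000       # the neighbor's gross pay in the thread's example
after_tax = 40_000     # ballpark take-home after the steep marginal rate
ubi = 60_000           # flat payment everyone receives

worker_total = after_tax + ubi     # what the worker ends up with
non_worker_total = ubi             # what a non-worker ends up with
avg_rate = 1 - after_tax / salary  # implied average tax rate on job income

print(worker_total, non_worker_total, round(avg_rate, 2))  # 100000 60000 0.67
```

So working still nets $40k over not working, but at an implied two-thirds average tax rate on the job income.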
Aurornis
Your neighbor would get $60K UBI but their tax bill would go up by $80K because the government needs tax revenue to pay the UBI.
For high levels of UBI it's not possible to get all of the necessary tax revenue from taxing billionaires or corporations or other simplistic ideas that sound good until you do the math.
hashmap
You never see it how? As in raw resources, or political will?
troosevelt
I mean the numbers. $12k per year is peanuts. You cannot live off that, and to pay for it we'd be nearly doubling the budget (that's old data; it's probably a different share of the budget now).
That 12k doesn't include healthcare, it doesn't include a lot of things. It's basically ensuring that people live well below poverty level, and for what? I just don't get how the numbers work, even if it was politically feasible.
I'd much rather have free healthcare and other amenities other countries have. Here in the US if you lose your job there is virtually nothing between you and the streets besides family and friends.
I'm facing this right now. I cannot get a job in tech which means restarting my career. Getting a job right now is not easy in any field especially not in anything like a living wage. If I did not have my parents I would be on the streets right now, thankfully I don't have a mortgage or anything like that. I'm not sure how much $12k per year would really help, it certainly wouldn't pay for housing.
It's rough out there.
ip26
If companies are faced with the choice between:
- employ you at 60k/yr
- replace you with a machine that costs a lot of money, and also send you UBI of 60k/yr
It should be obvious the latter is never going to happen.
xboxnolifes
What if the machine in this context is 3x as productive as you?
JeremyNT
The solution to the subsequent devaluation of labor, and ability for tech oligarchs to pocket the cash instead, will not be found in capitalism.
Unless we are all to become serfs, a new way to distribute resources needs to be on the table.
UBI is a salve, offered to keep victims of the system out of abject poverty. It is too little, too late.
mschuster91
The problem is, companies will go for a third route: hire a company in India to launder the AI work. It already worked once, during the offshoring wave.
JumpCrisscross
There is also a likeability problem. Altman and, shockingly to a lesser degree, Musk have terrible brands. When folks see those people at the top of these companies, people who have been publicly saying they're going to cause massive job losses or even human extinction, they're going to hate the companies irrespective of the actual risk of job losses or environmental impact.
throwatdem12311
Why does Dario get off the hook here? He also comes off like a greasy asshole 99% of the time.
happytoexplain
Virtually no "normal people" know who he is. I don't think most programmers I know even know who he is. They just know "Altman" and "Anthropic".
JumpCrisscross
> Why does Dario get off the hook here?
I'm curious for metrics, but Dario strikes me as being less perpetually online. Given equal time, they may each be unlikeable. But they don't put themselves out there equally–Sam and Elon are unable to focus on their work. (I'll admit I've had a soft spot for Dario since he stood up to Hegseth–maybe I'm just not seeing the equal hate he's getting.)
keybored
What a blessing that they don’t have an Obama frontman for their schemes.
frm88
The fourth aspect to discuss is how we want to restrict the influence of AI companies on politics. Will we allow the CEOs to implement Thiel's vision of a world run as a company, with CEOs at the top, via massive monetary influence on political decision-making, effectively abolishing democracy? If they really manage to replace 50% of the workforce with AI, their influence over everything from regulation to elections to social safety nets to foreign policy will be enormous.
TrevorFSmith
I think you're missing one of the major reasons people are against "AI": the jerks at the top. When obviously nefarious people are lining their pockets and not bothering to even pretend to care about the people around them, it's no surprise they're hated.
guyomes
> How do we handle AI doing creative work? How do we treat AI creative work? How much creative work do we feel comfortable handing over to AI?
Just as food for thought: looking back into history, during the late 1920s mass production had a critical impact on Art Deco [1]. Artists were divided on whether mass-produced art (using new industrial methods) could match the quality of hand-crafted art. It is clear that different people will have different opinions on the subject.
The technology is not there yet, but one example of mass production from AI would be book adaptations into movies. I'm sure there are many other hard-to-predict examples that might empower people, degrade art quality, improve art quality, divide people, or maybe bring people together.
Aurornis
> The easiest way would be a straight tax on AI usage, and using that tax to pay a universal basic income
Every call for UBI should be qualified with two estimates:
1) How much money you think UBI will pay out
2) How much money you think the tax will generate
Creating a UBI program with AI taxes sounds like a clean solution to something until you do any math.
If we estimate today’s AI revenues across all the big providers at $100B annually (a little high) and divide by the population of the US, I get around $24 per month per person.
So a 100% tax on AI plans would allow us to give UBI of about 80 cents per day.
Even 10X the revenues wouldn't bring that to parity with UBI expectations. A 100% tax would also be an incredible gift to foreign AI companies that could offer similar services to everyone else in the world at half the price.
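The back-of-envelope arithmetic above reproduces directly. A minimal check, where the $100B revenue figure is the comment's rough estimate and the ~340M population is my assumption:

```python
# Back-of-envelope: 100% tax on ~$100B of AI revenue, split across the US.
ai_revenue = 100e9      # annual revenue across big AI providers (rough estimate)
population = 340e6      # approximate US population (assumption)

per_year = ai_revenue / population
per_month = per_year / 12
per_day = per_year / 365
# Roughly $24-25 a month, about 80 cents a day.
print(f"${per_year:.0f}/year, ${per_month:.0f}/month, ${per_day:.2f}/day")
```

Even multiplying the revenue tenfold only gets to a few hundred dollars a month per person, which is the comment's point.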
cortesoft
This is based on the assumption that AI is going to take all our jobs. If that is true, then as more jobs are absorbed by AI, the revenue would increase.
VitaliyKorbut
As more jobs are absorbed by AI the revenue would increase, but not dramatically, because competition will lower prices toward the cost of serving tokens, which is close to zero. We'll get more stuff done and the world will be better off. Even if we don't invent new jobs we can compete harder for existing ones; for example, we could have 3 times as many of X (say, restaurants) at 1/3 the client base on average. It doesn't mean that everyone will be of higher social status than average, which is what people actually want, and which is mathematically impossible.
Aurornis
You’re assuming that this AI will be in the same taxable jurisdiction as the people whose jobs were replaced.
The work that is most replaceable by AI is work that is mostly digital. That work most easily moves to another country.
When the work is replaced by AI you can relocate it to another country much more easily than when you have to relocate workers.
pj_mukh
I don't think the last two critiques are good critiques at all. The environmental impact is a function of our energy sources, not our energy uses. Complaining about energy and water when we have effectively infinite energy beamed down to us on a planet whose surface is 70% water seems silly.
And AI "Ikea-fies" art and creativity. It doesn't get rid of it. Of course you can get a generic table from IKEA, but for a real unique piece, you need to go to a real artist. Always.
The real critique is of AI jobs that are a one-to-one replacement: your taxi driver, your dock worker, etc. I don't think UBI is a viable solution (I used to), because nothing replaces the community and status that a real job gives you. This is going to be a tough one.
rescripting
The AI CEOs have been screaming for years now about how AI is scary, you should be afraid of it and it’s going to take your job.
“Mythos is too dangerous to release.”
“OpenAI offers a bounty if you can get ChatGPT to teach you how to do a bioterrorism.”
“Agentic agents will replace entire categories of jobs. They’ll just be like, gone”
This is all signaling to their customers; no not you on their $20/month plan, the governments and corporations of the world who have deep pockets, fat to trim, and borders to defend and expand.
It’s no surprise that people don’t like AI. It’s not for people.
Tyrubias
This was evident everywhere except within the AI industry itself. The rhetoric from many of the industry’s top leaders has been “this technology will eliminate millions of jobs, fundamentally reshape countless others, and automate the use of lethal force, but we’re going to develop it anyway”. Many of the current economic woes, including mass layoffs, have been blamed on AI by the very executives conducting said layoffs. In addition, the major AI companies have shamelessly stolen intellectual property to train their models and shoved AI down everyone’s throats. Is it any wonder that the general public hates AI? The AI industry isn’t exactly doing its best to appear likeable.
monksy
Rory Sutherland has a really good take on AI: most AI companies are targeting a cost-cutting proposition when they should target a value-creation one. Pushing toward a regressive elimination route is toxic and destructive to those around it.
Then again, the CEOs of these companies want to grow their companies at any cost to society.
ncouture
The title of the original article feels like click-bait to me. It's covering an act of violence under the pretext that people hate AI.
In fact it's a very sad story about a 20-year-old throwing his life away instead of fighting for what he believes is right through non-violent activism and/or regulation.
Last year I wrote an article asking the very question "Who will be the next Luddites?"; National Geographic followed up months later. I'm sure many before, after, or in between covered the same topic. There is truth to it, we will be impacted, but let's not forget we went through this during the industrial revolution, and we should be better equipped than ever to fight using meaningful non-violent acts and operations.
https://www.linkedin.com/pulse/who-neo-luddites-more-importa...
http://nationalgeographic.com/history/article/luddite-indust...
estimator7292
Non-violent means don't work and get you killed by cops. This is what the people are left with.
Balgair
So, at my BigCo, this rings very true.
We've tried to internally pitch many ideas to the larger organization before but mostly got nothing back.
Finally, one of the various board members talked to my boss and told them that, essentially, it has to be top line growth, not bottom line savings.
We looked this up and it came down to some MBA mumbo-jumbo about how X% of growth is better than that same X% of savings once you run the math (?). Look, I know, that's not how percentages work and I know that savings actually do matter. But in 'I have an MBA-land' the mantra is topline > bottomline.
So, then we started to pitch ideas around growth (new lines, more customer sales, more customers, etc). Which went ... nowhere ... again.
Time goes by again, and another helpful person reaches out and tells us that our ideas are 'not worth considering' as they 'don't meaningfully impact revenue targets'. Again, essentially, to justify the salary-time that these internal boards spend, the idea has to be net positive. Then we learned that, no, it has to impact revenue by 1%. For our BigCo that's in the ~$10M ballpark. We do have the customer base to support that, but it is in the revenue ballpark of Atari or the Hypixel servers.
Look, either way, the run-around that I get told is that for AI projects that we pitch internally: 1) Top line growth only 2) ~1% increase in revenue (~$10M).
Now, why anyone would not just go take that ~$10M idea and not just make a company themselves is beyond me, but I don't get paid the big bucks, so who knows.
Still, that is what these BigCos are looking for: Growth in the ~$1-10M range.
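For what it's worth, the pitch bar described above implies the revenue base. A one-line sanity check (the ~$1B figure is inferred, not stated in the comment):

```python
# Backing out implied company revenue from the "1% of revenue ~= $10M" bar.
threshold = 10e6               # ~$10M, the minimum pitch impact described above
share = 0.01                   # 1% of company revenue
implied_revenue = threshold / share
print(f"Implied revenue: ${implied_revenue / 1e9:.0f}B")  # about $1B
```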
autaut
I think the tech CEOs got a little too excited, their masks fell off, and they started saying "oh yeah, you don't like it? Too bad, nothing you can do about it." You'll see them quickly backpedal to woke 1.0 when it turns out they were a bit too quick about it.
Gigachad
And when people showed up at Sam Altman's house.
z3c0
Not to mention, it doesn't actually deliver the promised productivity at the promised price. The most enthusiastic proponents are middle management, not actual doers.
It's an expensive route to mediocrity, which doesn't offer an edge in a market where everyone is using the same snake oil.
the_snooze
They got way over their skis on this one. There's a difference between "impressive" tech vs. "operational" tech. That difference usually boils down to prioritizing engineering rigor over marketing.
EA-3167
Unreliable mediocrity, because you simply can never be sure when the damned thing lies/hallucinates unless you double-check everything.
So now you're wrangling an "AI" system and you're doing most of the work you would have had to anyway. ...And when you don't it can get really embarrassing.
https://www.abajournal.com/news/article/elite-wall-street-la...
Not the first time, surely not the last. The problem is that so much money is tied up in this thing, and the moment the music stops the bag holders are going to be utterly doomed.
girvo
> the bag holders are going to be utterly doomed
Good news, the plan is for us to be the bag holders as they rush to IPO.
yoyohello13
I’m frankly more shocked that people in the industry are surprised the general public hates them. Like it’s been non-stop fear mongering and hype from them for years, AI has basically done nothing to improve the lives of normal people, wtf did they expect?
SpicyLemonZest
They didn't expect anything else and aren't surprised. "The X industry is discovering..." is one of those stock phrases that people just kinda deploy willy-nilly; the article contains no argument that anyone in the AI industry didn't know or didn't expect this.
emp17344
Presumably these companies on the verge of an IPO don’t want the public to hate them or their product. It wasn’t exactly a calculated maneuver - they made a decision to leverage fear-based marketing and it backfired.
operatingthetan
If they are in a cult that shuns outside opinions, it could be a surprise when they find out...
threethirtytwo
People hate admitting the truth. Everything you said here is utter bs that people say to lie to themselves.
I’m sick and tired of AI hatred without people facing the truth. People hate AI because AI is on a trajectory to replace them and become better than a human. That is the fundamental reality.
Look don’t get angry at me. If you are on HN chances are you’re most likely delusional and completely wrong about AI. The majority of HN called vibe coding useless and said LLM have no potential. Now my company won’t even hire someone who hasn’t used Claude and I haven’t touched a text editor or ide in half a year. Same with the teeming hordes of experts on HN who said driverless cars will never come. All wrong. People on this site need to stop jumping on these band wagons of stupidity and pointless blame games.
Can we talk about that rather than blame corporations for being what they’ve been since before AI? Yeah corporations are psychopaths and corrupt and nobody cares. Same story till the end of time. We are on a cusp of a paradigm shift and your skills as a programmer are about to be utterly trashed because an AI is on trajectory to dominate your skills.
Face reality.
Starman_Jones
>> People hate AI because AI is on a trajectory to replace them and become better than a human. That is the fundamental reality.
Let’s explore this fundamental reality a bit. The “and” necessitates both parts of the clause be true, but would people hate AI if it became better than them but didn’t replace them? That’s an easy no; Big Blue and AlphaGo didn’t cause mass hatred, and machines have been broadly better than humans in some capacity for centuries - that’s literally why we build machines.
Would humans stop hating AI if it replaced them, but wasn’t able to become better than them? Again, no. So the second piece is both incorrect and unnecessary, and what we’re left with is “People hate AI because it’s on a trajectory to replace them,” which is accurate, but not exactly revelatory; many people have already come to this same conclusion, including in this very comment thread. So the good news about your face reality line is that you’ll find a lot of people already facing that direction alongside you.
threethirtytwo
>but would people hate AI if it became better than them but didn’t replace them? That’s an easy no;
Yes they will. Jealousy. But they'd never admit it. What are you proud of? What skill do you value and identify yourself with? Say AI did it 1000x better than you, but some law was in place to prevent it from replacing you. You'd love that law, and you'd make up some excuse to hate AI.
>Big Blue and AlphaGo didn’t cause mass hatred
Excuses. Just think a little rather than finding some obvious surface-level reasoning that fits within your own bias. First, nobody hates those things because it's only a select niche that takes pride in their chess or Go skills. Those people would hate AlphaGo if AlphaGo were a direct challenge to their identity as players. But rules are in place to prevent that, as tournaments only allow humans. Why are such rules in place? Because Go and chess are just games. They produce no intrinsic value, so it doesn't hurt the bottom line if you restrict AI in that case.
This isn't the case for programming or any other field that can be replaced by AI. AI will be directly attacking a business skill you use to pay the rent, and it is currently challenging my identity as a programmer. And laws to restrict this will be actively fought against, because monetarily and utility-wise there are actual real-world benefits to AI.
But why do I even need to spell this out to you? You're not mentally deficient. You're not stupid. All of this is obvious. Why do I have to literally tell you why your example is biased when it is OBVIOUS? It's because you're lying to yourself. You subconsciously avoided the obvious reasoning above. You chose a convenient rationale to fit the narrative YOU want. Nobody hates "AlphaGo", lol. Did you see that Korean guy's face when AlphaGo fucking dominated his ass? Come on bro.
That is the reality. And you are denying it. When there are two people in disagreement and one of them is lying to themselves... how do we know which one it is? The lie is so convincing that both people believe in it.
I'll tell you the best way to determine this: see which person's reasoning aligns with their identity and biases. Which person is constructing a logical scaffold that is optimistic? Because lies are told to cover up the horrors of reality. Guess what? I'm a programmer. I hate AI. But I cannot lie to myself. You? You probably made up all kinds of lies about how you're not afraid of AI taking your job, cuz AI can't do this... or that... or whatever bs helps you sleep at night.
nmeagent
> Now my company...
Which company is that? Do let us know so I can make sure to never be your customer.
threethirtytwo
[flagged]
hackable_sand
The public also hates the lies and the threats about the tech
salawat
>People hate admitting the truth. Everything you said here is utter bs that people say to lie to themselves.
Stares at poster silently from a lotus position waiting for the enlightenment lightbulb
>I’m sick and tired of AI hatred without people facing the truth. People hate AI because AI is on a trajectory to replace them and become better than a human. That is the fundamental reality.
Nu-bie, come, sit, be silent & reflect. When was the last time a tool was made that truly replaced the wielder? Without the wielder, a tool is nothing, without the tool, the wielder still strides as a beacon of divine potential.
>Look don’t get angry at me. If you are on HN chances are you’re most likely delusional and completely wrong about AI.
Continues staring in silence awaiting the moment of enlightenment
>Now my company won’t even hire someone who hasn’t used Claude and I haven’t touched a text editor or ide in half a year. Same with the teeming hordes of experts on HN who said driverless cars will never come. All wrong. People on this site need to stop jumping on these band wagons of stupidity and pointless blame games.
Nu-bie. Does the man disappear because the machine exists? Or is he redirected according to his nature? What nature consumes a man abandoned by his tribe? Surrounded by hoarders of the necessities & means of life? Reflect on this. Reflect also on the potential capabilities of a group of people that through attention to detail, great patience, and acts of artifice on behalf of their fellows once enabled the animation and thinking of rocks. Think very carefully about this.
>Can we talk about that rather than blame corporations for being what they’ve been since before AI? Yeah corporations are psychopaths and corrupt and nobody cares. Same story till the end of time. We are on a cusp of a paradigm shift and your skills as a programmer are about to be utterly trashed because an AI is on trajectory to dominate your skills.
The corporation is as a cup. Its direction is controlled and its agency guided by men. It is the oldest form of AI, with us for hundreds of years. The only thing keeping it in check being the occasional times of great strife during which generations of men wrestle the beast, to remind ourselves of wherein our problems truly originate.
>Face reality.
Nu-bie, it is time for you to resume your chores. You have not been enlightened.
dominotw
Not to mention all the AI boosters seem to have the most hateable, scammy personalities. Why are they all so smug?
A magnet for scum: boosters on X, middle-management types, LinkedIn AI influencers, people making fake videos on Facebook.
rvz
Not a surprise. Seems like AI is more hated than crypto and this shows that the AI industry is in a bubble.
At least crypto does not take away more jobs than it creates, whereas we all know AI takes away more jobs, and no one can give a solution or explain what the "new jobs" are.
Because the value of AI is automating away human jobs. Claiming otherwise is intellectually dishonest. Same goes for defining "AGI".
binyu
> At least crypto does not take away more jobs than it creates
Except sometimes when there's a huge black swan event, or when the bubble pops. Such things can result in significant layoffs even though it's a completely different mechanism.
YZF
Crypto sucks energy and creates no value. It's complete and utter speculative garbage that also destroys the planet.
AI has real value. We can argue about whether the cost is worth the value, whether we're on an exponential improvement curve or not, whether it ends up creating jobs or destroying jobs, but AI is mind-blowing science fiction that nobody would have believed would exist 10 years ago.
rvz
> Crypto sucks energy and creates no value. It's complete and utter speculative garbage that also destroys the planet.
All of what you said is false.
Stablecoins are not speculative and have value: you can send money worldwide, at low fees, to wallets on the same day, right now, with far less energy than today's "AI".
> AI has real value.
What do you mean by "AI" specifically? LLMs in data centers?
The value of this mysterious "AI" or even "AGI" paradise is not even for you. It is actually being used against you.
> We can argue about whether the cost is worth the value, whether we're on an exponential improvement curve or not
You understand that the current iteration of "AI" needs tens of gigawatts of energy, hundreds of billions of dollars, and wasteful amounts of water, and that it causes electricity prices in certain cities to skyrocket?
The way that it is financed appears to be close to fraudulent with vague "commitments" and mountains of debt that would take almost a trillion dollars in revenue to pay off the data centre build out.
> whether it ends up creating jobs or destroying jobs, but AI is mind blowing science fiction that nobody would have believed you will exist 10 years ago.
Assuming the data centers do get built (if they ever are), can you name the new jobs that will be created by "AI"?
MBCook
Are they? I heard a presentation from some pro-AI people on Friday to the large company I work at. They said they surveyed people at an AI conference and 93% of people were excited about it.
This was said with a straight face like “people love puppies!”.
No self awareness at all.
kolja005
Think about what the implication here is for people who answered no to that question. If I went up to my boss and said "I'm not interested in using AI because I think it's bad for society," I would essentially be saying that I'm not interested in becoming more productive and thus making more money for the company. That's a very poor reputation to carry around, and most people are going to avoid it. I believe that this, more than any specific action by AI companies, has contributed to the sense of inevitability that this technology is taking over whether we want it or not.
operatingthetan
In consulting firms and corporations you kind of have to pretend to be into it, it's just the culture.
zmmmmm
It is hazardous to swim against that tide currently from a career perspective - people rapidly categorise you as generically anti-AI even if you try to express a reasonable nuanced view. It's pretty toxic.
turpentine
Sometimes an employer will tell you what your view on AI is too, and make you sign an agreement.
rtdq
Ask anyone who is a gamer what they think of AI. I guarantee you'll get a universally negative reaction because of RAMageddon.
throwatdem12311
Not just that, they go absolutely fucking ballistic if they so much as find a single AI-generated texture in a game.
sph
See the terrible DLSS 5 fiasco.
Ekaros
I wonder what result you would get if you ran a survey asking people at a dog show whether they love dogs...
Also, looking at the current market situation, how many people would be willing to say to their bosses, or even publicly, that they think AI is quite a lot of bullshit?
pepperoni_pizza
Exactly.
My new favorite game at work is "guess if this person is really into AI or they just have to be because their boss is and if they weren't they would get replaced by someone who is" and it's quite hard to say.
And since the "boss" of CEOs are the investors in the stock market, and the stock market is automated to ridiculous degree, is this AI pushing for itself?
rolph
it seems to be a case of nonrepresentative sample bias
MBCook
Obviously. But they’re using it as “evidence” that feeds their confirmation bias.
Meanwhile I saw some survey where only something like a third of Gen Z and younger are pro-AI.
Of course the survey also said like 70%+ of them still used it.
jrflowers
> they surveyed people at an AI conference
You can tell that everyone loves chain buffet restaurants by going to Golden Corral and asking everybody if they are enjoying their meals
The_Blade
yes, the Kelvin Benjamin agent
sodapopcan
Oh, I love puppies! There's another data point for them.
insane_dreamer
> They said they surveyed people at an AI conference and 93% of people were excited about it.
LOL. That's like saying 93% of people who go to Star Wars conventions like Star Wars.
deepsquirrelnet
> In a provocative GitHub post, machine-learning engineer Han-Chung Lee argued that even rosy internal numbers that do show AI-assisted productivity gains are suspect, as they’re produced to hit adoption targets no one can effectively audit.
Isn't this fundamentally what MBAs do with their time? Keep going with this analysis, because it goes much deeper... In my experience, BI is often a house of cards. A lot of times it's just narrative crafting, just like we're all encouraged to do when we write our resumes.
Can you embellish a story? Can you invent a convincing political narrative? As far as I can tell, that's the fundamental unit of the US corporation.
pibaker
It should not take more than one brain cell to realize that in an era when employment is already perceived as precarious, you are not going to earn any public favor by telling people you are taking their jobs and making them obsolete. Doubly not so when you offer no alternative path toward building personal wealth. Triply not so when you address none of the economic problems people face, like housing or healthcare costs, and make others, like social cohesion and energy prices, worse.
If the industry continues to gleefully ignore public discontent over AI's impact on society, I imagine the result could be a public backlash that makes the post-Chernobyl anti-nuclear sentiment look tame.
nayroclade
Bear in mind, in the same survey this article is talking about, nothing and nobody had an overall positive rating amongst those polled. So yeah, AI is unpopular, but it's just one more thing that people hate amongst a broader cultural movement of generalised hate.
Tyrubias
What you term “a broader cultural movement of generalised hate” is just a reflection of people’s dissatisfaction and fear regarding the state of the world. They’re seeing wages stagnate and prices go up. They hear news about how well the stock market is doing, but they don’t see any of those benefits. They see their politicians spend money on war and destruction but refuse to spend money on social programs. At the same time, the rise of the Internet paradoxically makes it both easier and harder for people to question the narratives they’ve been taught. Amidst all this confusion and worry, is there any wonder people are dissatisfied and looking for someone or something to blame?
alexjplant
Did people act like this in the 70s too when we had stagflation, mass unemployment, gas rationing, Vietnam, Nixon, etc. to contend with? I ask that sincerely because I wasn't around then. The US got a ton of cool music and cinema during that decade (disco and soft rock excepted) but the rest of it sounds even worse than things are now.
SpicyLemonZest
People acted like this constantly; the 70s are incredibly sanitized in the popular imagination. National Guardsmen shot and killed four student protesters at Kent State in 1970; Boston had 40 race riots between 1974 and 1976.
Kiro
> Even within tech and coding, one of the areas where AI is reported to have the most promise, there’s the question of whether the productivity gains reported can be trusted.
I wish articles like this would at least acknowledge the massive adoption AI has among programmers. It's not comparable to stuff like helping you write the occasional email, which I presume is the baseline for most people outside tech. Making it sound like a minor tool that some people are still just experimenting with completely misses the impact it has already had on software development.
happytoexplain
The impact in software has been very hard to measure. There are so many ups and downs and variables.
Adoption in particular is a useless metric. They are forced to adopt even if it's not really helping in their case, or if it does help but using it makes them miserable, like being forced to switch jobs from something you enjoy to something you find boring and tedious. And then there's the "expertise debt" that will have who knows what impact in the coming decades.
WorldMaker
Also, this "Legacy Code as a Service" tech debt boom may eventually find creditors that need paying. These models are trained on other people's Legacy Code. No company is letting OpenAI or Anthropic or Meta have access to their best code and smartest experts. It's all just other people's Legacy Code. LLMs are going to force companies towards a lot of mediocre software regression to the mean. How long before companies are scrambling for expertise they laid off or didn't invest in because all their software sucks and no one around knows what it is doing or why? The tech debt bank account is going to get weird in the coming decades.
fnoef
Many of these developers adopted the tools against their will, as a means to bring home a salary while they still can. In the meantime, the AI folks are working hard to eliminate their jobs entirely.
jhack
To a lot of people AI is just image and text generation. And yes, these uses alone aren't worth the time, money, and energy.
But there are a lot of areas where AI is helping that people don't see, like in medicine: drug development, cancer research and early detection, CT and MRI analysis, just to name a few. These use cases are vastly more important but rarely get discussed. It's important to know that AI isn't this one singular thing, or else we risk throwing the baby out with the bathwater.
joquarky
> Drug development, cancer research and early detection, CT and MRI analysis, just to name a few.
What good are these to someone who will never afford them?
A lot of this talk reminds me of Elysium (2013).
happytoexplain
They do see those use cases. It's not surprising that they focus on the enormous number of other, negative use cases. It's misleading to describe the medical use cases as "more important" - yes, they are, in the same way that healing a person is "more important" than ruining their lives. That's not what you're implying by your usage of the term, though.
A person having a negative attitude about AI doesn't mean that they wouldn't keep the parts that are mostly positive if they could.
HighGoldstein
> They do see those use cases. It's not surprising that they focus on the enormous number of other, negative use cases. It's misleading to describe the medical use cases as "more important" - yes, they are, in the same way that healing a person is "more important" than ruining their lives. That's not what you're implying by your usage of the term, though.
This comment could just as easily apply to a conversation about computers in general, it's just that people whose lives have been "ruined" by now-established technologies have been largely forgotten by society.
fnoef
You know, perspective matters. Selling a knife as a tool that helps you cut onions is a completely different story from marketing it as a weapon to kill your neighbor.
AI is massively marketed by AI people as a tool to replace your job. So either the AI people are bad at marketing, or the gains in other industries are insignificant / don't generate shareholder value.
joquarky
> AI is massively marketed by AI people as a tool to replace your job
Keep in mind who pays the AI companies.
It's not you, it's the C-levels. The marketing is aimed at them.
oldmanhorton
“Think of the children!”
When AI produces those meaningful advances in those fields, great, we can start having meaningful discussions about them. The greatest medical advancement of the 21st century is likely mRNA, or maybe GLP-1 for some. Neither were LLM assisted in any meaningful way as far as I know (they predate ChatGPT, perhaps more primitive models were involved in ways I’m not familiar with). Until those advances come, this argument is fanfic.
Plus, in the most morbid way possible: who gives a shit about living longer if they are stripped of their career, are inundated with slop at every angle, and can’t trust any information. These are real problems that AI has already created, unlike the fanfic of ridding cancer.
insane_dreamer
The problem is, it's not being pushed as just a tool that helps you or your subordinates do your jobs better -- that's nice but not going to generate $Ts in revenue -- but rather as a tool that replaces your subordinates, so your company can increase their quarterly profits -- that's the golden ticket.
KaiserPro
I think there is conflation here.
Data centres popping up near you probably means higher electricity prices, poor air quality and water problems
Sam Altman is a massive penis, with a gift for saying the wrong thing at the wrong time.
The two things that link them are "rich" people imposing their will on everyone else, publicly.
joquarky
> with a gift for saying the wrong thing at the wrong time
It's not a gift, it's ignorance.
Altman and his peers live in a bubble.
They don't interact with former translators, copywriters, or web developers.
They don't hear feedback from the people they have put out of a job.
mark_l_watson
So much of the public hates AI, at least the non-tech people I talk with. Good to see so much common sense among the general public.
While I find a Gemini Ultra subscription worthwhile for myself, most of the value is in the fun and entertainment of interacting with a strong API in AntiGravity (usually use Claude models), Gemini App, NotebookLM, etc. It is intellectually interesting and fun.
Can I justify the cost to society for data centers, possibility of US government bailing out the AI tech giants, etc.?
No I can't. I think the Chinese are skunking us. Building cheaper AI is the winning strategy. GLM-5.1 and Deepseek v4 are amazingly effective for much lower inference costs.
I feel like there are (at least) three main critiques of AI, and I wish we could debate them separately, because I think they each have different resolutions.
The first is the fear of job loss, and I feel like this is the most straightforward to deal with. Personally, I think the solution should be to share the productivity of AI with society at large, in particular since AI owes most of its abilities to training on the works of society. The easiest way would be a straight tax on AI usage, and using that tax to pay a universal basic income. There are obviously a ton of variations on this idea, but I think the general premise of sharing the gains with everyone is sound. I don’t think many would complain if they lost their job but kept their income.
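For scale, the "$3 trillion" figure cited elsewhere in the thread is easy to reproduce with rough numbers. A minimal sketch, where the adult population count and the $12k/year payment are my own assumptions for illustration, not figures from the article:

```python
# Back-of-envelope cost of a US universal basic income, as discussed upthread.
# Both constants below are rough assumptions, not sourced figures.
US_ADULTS = 260_000_000   # approximate US adult population
UBI_PER_YEAR = 12_000     # commonly proposed basic income, dollars per adult per year

total_cost = US_ADULTS * UBI_PER_YEAR
print(f"Annual UBI cost: ${total_cost / 1e12:.2f} trillion")  # -> Annual UBI cost: $3.12 trillion
```

Whatever tax rate you'd need on AI usage to cover a sum of that size is the real sticking point in the proposal; the arithmetic itself is the easy part.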
The other two critiques are trickier. The second is the environmental impact of AI, and the response here is harder. Working to make AI more efficient, and continuing to develop cleaner energy sources, is paramount; taxes and efficiency requirements might be a start. We have the technology to produce energy in sustainable ways, but it is expensive. Sustainability has to be non-negotiable if massive energy usage for AI is to continue.
The last is the REAL conversation, and I don’t know the answer. How do we handle AI doing creative work? How do we treat AI creative work? How much creative work do we feel comfortable handing over to AI?
I guess there is another issue, related to the last one, which is how we deal with the ability to use AI to mislead and commit fraud at scale. How do we deal with not being able to trust what was actually said or done by a human versus what is AI pretending to be human? How do we avoid and mitigate AI's ability to generate massive amounts of custom content designed to mislead and defraud people? So much of our current mitigation strategy relies on the assumption that it takes a lot of effort and time to do certain things that can now be done instantly, thousands of times.