jerhewet
btschaegg
> I've come up with a set of rules that describe our reactions to technologies: Anything that is in the world when you're born is normal and ordinary and is just a natural part of the way the world works. Anything that's invented between when you're fifteen and thirty-five is new and exciting and revolutionary and you can probably get a career in it. Anything invented after you're thirty-five is against the natural order of things.
Douglas Adams, The Salmon of Doubt
croon
It's not so much the technologies that are good/bad, but IMHO that the pipeline between the invention of a technology and its abuse is increasingly short.
soloto
There are just too many examples by now showing that immoral and even illegal behavior will not be prosecuted or otherwise punished, especially if you make money with it. No wonder the pipeline is getting shorter.
benterix
It's funny because I was watching a lot of amazing new tech appear after I was 35 and most of it was exciting. Learning these things was fun and rewarding. You could say it made me happier.
Not sure why LLMs feel the opposite. Maybe it's because of the terrible marketing and pushing it down everyone's throats. Maybe it's because of the personality of people like sama, or how it's being used to produce the so-called AI slop globally. Maybe something completely different. But there's something bleak and off-putting in it.
archagon
Kinda funny that you're quoting a real author who would never in a million years have resorted to using slop generators.
"I strongly feel that this is an insult to life itself." -Hayao Miyazaki
raddan
I’m not so sure. Douglas Adams was an avid technologist who worked on two interactive fiction games: the famously-cruel Infocom Hitchhiker’s Guide and Starship Titanic. I don’t remember whether there was anything free-form about the dialogue in the HHGG game, but Starship Titanic had many bots you could talk to. It was immensely fun, and I suspect he would have loved the ability to spin out dialogue a little more naturally.
On the other hand, the HHGG universe is just packed to the brim with deranged robots. Everybody loves Marvin, of course, but my favorites were the sycophantic ones, like the elevators that sigh with pleasure upon delivering you to your destination. Adams always seemed to perfectly anticipate the insanity of marketeers, and I expect that we'll actually get some of this someday…
intended
I WANT to be excited by LLMs.
But it has been so frustrating to listen to the hype and then contend with the actual results.
How bizarre is this tool, that I have been forced to conduct interviews across all domains, to see where people are having actual productivity gains?
I never had to do this with the iPhone, or with Word, Excel, or any number of cool technological inventions.
The early internet empowered people, and we created some of our most amazing collaborative intellectual achievements, like Wikipedia.
Today, aside from being built by harvesting people's IP, LLMs are shuttering our entire information economy. Sites I go to for info are either going behind paywalls or shutting down because they are being rapaciously harvested by bots.
Yes, history rhymes and repeats; however there are other tunes that have been played that sound similar but end differently.
JKCalhoun
I have been treating LLMs as a research assistant (and sometimes tutor) as I have explored analog computing for a hobbyist project that I have been working on for 6 months now.
To my mind, saying the AI will cause me to forget how to learn is equivalent to suggesting that going to school for instruction will cause me to forget how to learn.
rurp
A huge part of what attracted me to programming was how free and open it was. The fact that literally anyone with a computer could install Python/Javascript/etc for free and create virtually any software they wanted, limited only by their own abilities and determination, was wildly exciting to me. I would say empowering, if that weren't such a cheesy overused term. If you were any good at it, you could get a great job at an interesting company.
Now like you said we're entering a world where anyone with a computer can pay a giant tech company thousands of dollars a year to spin up some agents for them. That's much less exciting to me, and I'm certain I would not enter the field if I were just starting out right now (assuming there even was a junior job available).
We've seen how big tech monopolies treat domains they control like search and social media. They try to extract all of the value, leaving nothing for the individual or common good, and they're quite effective at it. I'm not looking forward to them gatekeeping the field of software development as a whole.
benterix
Fortunately, we do have more or less open models, and they get better and better each year.
Unfortunately, sama & co's hunger for global domination makes them more and more expensive to run.
znpy
> A huge part of what attracted me to programming was how free and open it was. The fact that literally anyone with a computer could install Python/Javascript/etc for free and create virtually any software they wanted, limited only by their own abilities and determination, was wildly exciting to me.
but you can still do that, AI is not preventing you from doing any of that in any way.
benterix
True, but this is like saying 10 years ago: you don't need to learn React, you can continue coding in Angular.
People do want to learn and use new tech, but what is promoted instead is access to a proprietary and (increasingly) expensive API.
sph
Good to see another Luddite (as they'd call us) on here! I am quitting tech in a month. I chose to go through 6 months of identity crisis, depression, and reinventing my life after 20 years in software rather than have those bullshit generators imposed upon me to compete with those for whom thinking is démodé.
Meanwhile the front page is people complaining that using a particular word causes their evil genie to go haywire. You guys still call this stuff engineering? Writing requirements in prose, because programming languages are too hard? Fuck that, I’m out.
hmokiguess
Going full circle is what we do, it's everywhere throughout human history. Actually, one could argue it's how life works. Nature has seasons to help life grow and be balanced. We're only starting to understand how this affects us in a larger scheme of things. Who knows, maybe we will wipe ourselves to dust and be discovered by the next iteration until we reach v1.0.0
znpy
> that not only harvest and sell my personal information to the highest bidder but constantly change the rules and restrictions on my software
yeah i'm gonna call BS on that. what you describe was happening well before modern-day AI (LLM, agentic stuff etc) became mainstream: think of google accounts binding your identity to your searches, gmail, google adsense, facebook, instagram and twitter (and others).
And the products and services that do what you describe can do that just as well without ai.
So yeah the problem is absolutely real but AI is not the culprit here.
dsiegel2275
Take me with you, please.
raffael_de
Where he goes, he has to go alone.
classified
That sounds like it could be a quote from a book or a movie. Any hints?
sdevonoes
Any engineer (any person, actually) can "learn to use AI" in a couple of days. It's not rocket science; there's no chance of being left behind. If you haven't used LLMs at all, a weekend would be enough to be on par with everyone else in the industry.
giancarlostoro
The better you are at architecting, or even at directing a junior developer, the better your output too. Don't let AI make decisions; it's supposed to take your decisions and turn them into code. When AI makes decisions, the unexpected outcome is always on you.
sshine
> Don't let AI make decisions; it's supposed to take your decisions and turn them into code.
I let the AI make decisions all the time. I often approve them, and I sometimes revert them. Most of the time they’re really good decisions based on my initial intent, but followed by analysis I didn’t make but agree with.
JeremyNT
I think there's a spectrum of where to draw the line.
There's clearly some level where you want a human making decisions for even the most vibey of projects, because without some kind of a spec about what you're trying to build and what features you want, you'd get nonsense.
But like... maybe don't stress the details too much.
pydry
I always found it easier to write code myself than to direct a junior developer.
The level of teaching involved would always mean the overall velocity of work slowed down.
Some people say you can throw them the drudge work, but I find that if you're doing coding right (e.g. you don't let your code base degenerate into a mess of boilerplate), there is barely any drudge work to do.
giancarlostoro
You're missing the real goal of directing a junior, which is teaching them to be a team player. Junior devs will surpass your expectations; the rate at which they goof, or are about to goof, should decrease over time the more you mentor them. If you do it right, you now have a strong ally and coder on your team. Or would you rather someone else teach them their bad habits?
CamperBob2
> I always found it easier to write code myself than to direct a junior developer.
Me, too. But that doesn't mean I'm a great developer, just a shitty manager.
rstuart4133
Others are disagreeing with you here, and I do too.
The difference is profound, and takes more than a couple of days to get your head around the implications. I'd summarise it as: "if you give a computer the same input it always produces the same output, but if you give a model the same input it always produces different output". Add to that the output is often wrong and it can't reliably follow instructions, and the difference is so great it breaks most of your intuitions.
The reward of working with this piece of unreliable jelly is that it can be far smarter than you (think the difference between a man with a shovel and a 20-ton excavator: it can literally find bugs in minutes that would take a human hours or days), and it knows far more than you.
The engineering challenge is to make this near random machine produce a reliable product. It isn't easy.
The hype you see around them comes from the fact that it's trivially easy to get them to produce a feature-rich but very unreliable product, as Anthropic demonstrates with their vibe-coded claude-cli. I refuse to use it now. Among its other charms, it triggers a BSOD on Windows: https://github.com/anthropics/claude-code/issues/30137 (Granted, it's just another Windows bug: https://learn.microsoft.com/en-ca/answers/questions/5814272/..., but if you are shipping to Windows you should be working around such bugs.)
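The "same input, different output" point above can be made concrete with a toy sketch. The token distribution below is made up purely for illustration; it is not taken from any real model:

```python
# Toy illustration of the determinism gap: a classic function maps the same
# input to the same output every time, while a sampling-based language model
# draws from a probability distribution, so repeated identical prompts can
# (and usually do) yield different continuations.
import random

def classic(x):
    # Deterministic: identical input always gives identical output.
    return x * x

def sampled_next_token(prompt):
    # Hypothetical next-token distribution for this prompt (invented here).
    tokens, weights = ["cat", "dog", "fish"], [0.5, 0.3, 0.2]
    return random.choices(tokens, weights=weights, k=1)[0]

assert classic(4) == classic(4)  # always equal

# Ask the "model" the same thing 100 times and collect distinct answers.
outputs = {sampled_next_token("The pet is a") for _ in range(100)}
print(outputs)  # almost certainly more than one distinct output
```

Temperature-zero decoding narrows this gap but, as the comment notes, in practice batching and floating-point nondeterminism mean real deployments still vary.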
intended
> The reward ... is it can be far smarter than you.... and they know far more than you
I think this solidified an idea for me. A tool being smarter than me but inconsistent is useless.
I can work with people who are smarter than me, because I can trust them, and I can trust them to own up or be held accountable for screw ups.
For a calculator, I can only hold myself accountable. However, I cannot hold myself accountable for not knowing something I don't know.
kody
I agree. It really is not a difficult “skill” to learn. It took me probably 4 days to configure agents to break down requirements, write tests, write code, open PRs, review and merge PRs. Learned how to use skills, MCP, AGENTS.md. It is really not complicated. It’s just…writing well. Knowing how to decompose problems. Trying out different tools. I also learn new things every day but I sincerely do not think I would be X times more effective with these tools had I started a year ago.
elevatortrim
Just learn, sure. But the difference between my efficiency of using it on my day 2 and month 6 is significant. Yet I feel I am barely scratching the surface of it.
jjav
> If you haven't used LLMs at all, a weekend would be enough to be on par with everyone else in the industry
Disagree. It takes a lot of experimenting to find the right balance between sufficient guardrails and insane hallucinations. And it'll be different depending on work domains.
I'm still refactoring AI workflows every week after more than a year or so and still working on it. Will probably be a perpetually ongoing effort as models change.
ch_fr
But does this translate as "one year of cumulative work" or rather "one year of rearranging your workflow and discarding obsolete ideas"?
If you spend a year walking in circles, someone can easily close the gap with one step. Especially if models and harnesses are supposedly getting more powerful all the time.
simonw
Firmly disagree. Learning how to use these tools effectively is unintuitively difficult.
They're great at some stuff and terrible at other stuff in ways that are very hard to predict.
I'm figuring out new and better ways to use them on a daily basis, and I've been an almost daily user for nearly three years.
ASalazarMX
They're difficult and hard to predict because they're still primitive, despite what their companies say. When (or if) they get advanced enough to deliver consistently, there will be no chance of being left behind, because even a kid will be able to use them effectively. Right now they're still at the gimmick level, although a very impressive one.
simonw
If the models get to a point of total consistency there's still a LOT that we need to figure out and learn about how to use them.
Let's say models can exactly and correctly write any code you ask of them.
- How do you break down a project into a sequence of requests to models?
- How can you most effectively parallelize the work - models will never be instant, so there will always be benefits in working out how best to use several agents at once
- Now that the models can handle the details of Lean, and Swift-UI, and Oracle stored procedures, and thousands of other technologies that you never got around to learning in the past... what can you do with those and how do you pick which projects to go after?
- How do you collaborate with other engineers and designers and product people in a world where you can churn out the right code reliably in a few minutes?
The models we have today are already effective enough to change the shape of our work as software engineers. As the models continue to improve figuring out and adapting to whatever that new shape is becomes even more complicated.
arcxi
Can you share any examples of these "new and better ways to use them"? Because the only way I've used LLMs, and seen other people use them, is to literally just talk to them, which doesn't require any skills beyond basic conversational abilities.
simonw
I'm talking about coding agents, not chatbots.
With coding agents you need to think very carefully about how you design the agentic loop, so that the agent has the right tools and information available to it to complete the goal.
I've been writing a lot more about that here: https://simonwillison.net/guides/agentic-engineering-pattern...
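For readers unfamiliar with the term, an "agentic loop" is the cycle of model call, tool call, and feeding the result back until the model declares it is done. A minimal sketch follows; `scripted_model`, `read_file`, and the message format are invented stand-ins for a real LLM API and tool set, not anything from the linked guide:

```python
# Minimal agentic loop: the model repeatedly picks a tool, the harness runs
# it and appends the result to the conversation, until the model answers.

def read_file(path):
    # Stand-in "tool": a fake filesystem with one file.
    return {"README.md": "hello world"}.get(path, "<missing>")

TOOLS = {"read_file": read_file}

def scripted_model(history):
    # A real model would inspect `history` and decide what to do next;
    # this stub requests one tool call, then finishes with an answer.
    if not any(msg["role"] == "tool" for msg in history):
        return {"tool": "read_file", "args": {"path": "README.md"}}
    return {"answer": "The README says: " + history[-1]["content"]}

def agentic_loop(goal, model=scripted_model, max_steps=5):
    history = [{"role": "user", "content": goal}]
    for _ in range(max_steps):
        action = model(history)
        if "answer" in action:                    # model is done
            return action["answer"]
        result = TOOLS[action["tool"]](**action["args"])  # run the tool
        history.append({"role": "tool", "content": result})
    raise RuntimeError("no answer within step budget")

print(agentic_loop("Summarise the README"))
```

The design questions in the comment above (which tools to expose, what context to feed back, when to stop) all live in this loop and its `TOOLS` table.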
dude250711
If these tools stopped drastically improving, what justifies the crazy valuations?
simonw
Not much. The valuations are wild.
embedding-shape
> a weekend would be enough to be on par with everyone else in the industry
I kind of agree in general that it is a learned skill, but considering how unclear people generally are when they communicate, I'm guessing it'll take longer than a weekend to catch up, especially to people who've been working on precise and careful communication and language for years already in a professional environment.
bluegatty
A weekend is enough to get going, but not nearly enough to 'be on par' with everyone else.
That said - what we have learned in the last year could be compressed quite a lot - there are a lot of steps we could skip, and 'learn by failure' that need not be repeated.
It takes a while to get the subtleties of it, it's among the most highly nuanced things we've ever encountered.
spamizbad
The statement is absurd because the skill curve for AI tooling is so small you can mess around for a day or two and get "caught up" with the zeitgeist. And what you need to know to get started is actually far less these days than it was 1.5 years ago, thanks to all the product refinement that has taken place in the space.
The only real risk is that today there's an expectation from employers that you've got some AI experience under your belt that you can articulate. But you can get that experience today.
scrumbledober
6-12 months ago I felt like I was constantly behind the curve with all the different things people were doing to get more out of their Claude Code. As the year has progressed, though, all of those features keep making their way into vanilla Claude Code, at a faster and faster rate. Now someone working on the bleeding edge is using things that I'll be using, without having to think about them, a month from now. It has really reduced my anxiety about being left behind.
Unmotivator2677
That's the thing: any "advancement" you might discover will be integrated into the main tools soon enough. In fact, I am going to say you probably shouldn't even learn them before they are integrated. That helps you filter through all the noise and avoid wasting time on learning something that isn't going to take off.
mlinhares
You're discounting the "being able to write properly and put ideas into intelligible text" skill here.
avgDev
Most people who have been programming for a while should have those skills. If they don't, then learning AI is not the issue; communication is.
skydhash
Most good programmers are good at writing. If you're capable of simultaneously writing instructions for a dumb abstract machine and making those instructions understandable to humans, you're clearly good at expressing at least technical ideas.
Unmotivator2677
Yeah, never had a problem with explaining to AI what I want from it. That doesn't mean AI always follows what I tell it to do ...
dudisubekti
Black-and-white thinking like this is not healthy.
You can still do creative thinking while using AI as a powerful tool at your disposal.
Some mathematicians like Terence Tao are comfortable doing this, for example.
mawadev
I feel like I use AI this way, but a majority of my peers lean too much into it. There used to be a saying, "we don't think, we google", and I see that with AI usage. As soon as a roadblock appears, the situation is pasted into GPT without further engagement; then they pick up the phone and open an app while GPT does its thing 0_o
dudisubekti
I have a coworker like that too, my pet theory is that they're not passionate about their job to begin with. It's just something that can pay their bills.
While waiting for Claude to finish, we talked about our hobbies outside of work, and the same guy went into deep detail on how steroids and the HPG axis work, and even gave me a spreadsheet with several NCBI PubMed links on the topic.
I think we are all naturally more creative and opinionated about things we are interested in.
Unmotivator2677
We don't think - sounds like the same crowd as the people who think creativity doesn't exist.
eleventen
An orthogonal observation: Bearblog seems to have become an anti-AI echo chamber. Their community responds very positively to posts exactly like this one. [1] [2] [3]
I think it's just important context to keep in mind that these sorts of takes typically top https://bearblog.dev/discover/ in the same way that certain types of posts are designed to rank well here. I considered migrating my blog there earlier this year and ended up deciding that, while I loved the product, the community was not healthy.
[1] https://forkingmad.blog/ai-summary-blog-post/
[2] https://blog.spu.io/you-dont-want-to-make-things-you-want-to...
[3] https://blog.happyfellow.dev/simulacrum-of-knowledge-work/
saadn92
People also used to say that Google or calculators would make you dumber. Neither happened. It won't happen with this either.
virissimo
People are worse at mental arithmetic than they were in the recent past, so it's not clear that they aren't "dumber" in the sense people meant at the time.
ericd
And did our thinking about the importance of being good at arithmetic change in response? I think so.
We also used to be much better at remembering things when we relied on oral histories; our memory skills have degraded quite a bit. And there's a quote from Socrates criticizing writing as a crutch that degrades our skill (https://www.perseus.tufts.edu/hopper/text?doc=Perseus:text:1... , the last bit). Over time, we've just moved to valuing other things more.
dylan604
Well, with anything, practice is key. When I was in school, I was in a math competition where you had to do everything in your head. There was no scratch paper, you could not modify your answer once written, and erasing was obviously not allowed either. I wasn't the greatest at it, but I didn't suck at it either. That was decades ago, and I no longer do math in my head that way. What I used to do in seconds for a result now takes a couple of seconds to think about what needs to be done and then the time to come up with the result.
Insanity
Students score lower on standardized tests in the 2020s than they did in the 1990s, so your stance feels misguided. Although I don't think Google and calculators are the main culprits, I do think it's due to the larger technology/internet landscape.
mjr00
> Although I don't think Google and calculators are the main culprits, I do think it's due to the larger technology/internet landscape.
That's extremely speculative, especially given there was a major event in 2020 which massively disrupted education worldwide.
customguy
I once worked for a guy who typed 7 + 4 into a calculator after freezing for 1.5 seconds trying to work it out in his head. It was in a "stressful" situation (nothing extreme, we were just in a hurry), and I'm sure the guy could generally add those numbers in his head... he owns his own business, after all. It took so much out of me not to move a face muscle.
greenchair
Sounds like you haven't used it much. It starts small with you forgetting the arcane params to commonly used tools that you don't need to type anymore. Where it will stop nobody knows.
dudisubekti
Well, I forgot arcane params all the time before AI, too. I rely on terminal history search and Google.
tsukurimashou
it clearly did make "people" dumber because now "people" believe in AI ;)
archagon
"The reasonable man adapts himself to the world: the unreasonable one persists in trying to adapt the world to himself. Therefore all progress depends on the unreasonable man."
Richard Stallman was unreasonable. So was Linus Torvalds. I'm hoping that something wonderful and entirely human-centered will come out of the anti-AI movement, up to and including a bifurcation of the internet.
dudisubekti
But...
Take electricity adoption, for example: by your adage, there were unreasonable pro-electricity people and unreasonable anti-electricity people. We know how that turned out; I don't see a lot of people joining the Amish.
It's okay to have strong opinions (be "unreasonable"), but in the end humanity as a whole (the "reasonable" people) will judge whether your opinion is a good one or not. Only time will tell.
sph
> I'm hoping that something wonderful and entirely human-centered will come out of the anti-AI movement
It will. That's why it's important to be a "Luddite" and to make our voices heard. Plenty are on the sidelines, plenty dislike AI, and plenty don't care to drink the Kool-Aid and would rather exercise empathy towards fellow humans than welcome the era of the robotic overlords and their billionaire masters.
This place is an echo chamber and doesn't reflect the views of the vast majority.
DANmode
> up to and including a bifurcation of the internet.
Enforced how?
beepbooptheory
Once you write something and publish it, it's just out there; it doesn't really get healthy or unhealthy. I do not think all writing is meant to be, or needs to be, a representation of someone's mind and its health. We write so that the opinion exists outside of ourselves. Why would we even read things if what we read didn't have strong beliefs or opinions? It sounds so boring!
dudisubekti
Well I'm not saying that the blog writer shouldn't have written the article.
I've read the article and to me it reads like a very angry rant, which is why I commented with something akin to "bro calm down"
LaGrange
> You can still do creative thinking while using AI as a powerful tool at your disposal.
It remains the case that AI _erodes_ your ability to do that.
So, eventually, after a few years, no, you can't.
Edit: meanwhile you're making yourself disposable. So, have fun with that.
dudisubekti
Thanks for the warm wishes, stranger
I see no point in denying the technology, it's best to do what we humans do best: adapt with it.
d_silin
Most people aren't anywhere close to Terence Tao on the intelligence scale. Even most HN commenters aren't that close to Terence Tao.
kylecazar
I don't think balancing AI use with creativity and thought is a matter of IQ. It comes down to how you use the tool.
d_silin
In my practice, I've found AI more useful in adversarial mode ("criticize this concept", "find a possible bug in this code", "challenge me", "quiz me on this knowledge"), because the knowledge found adds to your own skills.
dudisubekti
You don't need a super big-brain IQ to be creative and expressive; all you need is a strong opinion on something, and to not let AI (or other people) dictate otherwise.
Now the skill issue lies in whether your opinion is a good idea or not lol.
furyofantares
Some people who don't use AI will be left behind: those who work on things where LLMs are capable of a substantial amount of the tasks will be left behind if they refuse to leverage the superhuman properties that LLMs have.
I don't think it's hard to catch up if such a person changes their mind, though.
Some people who do use AI will also be left behind: those who use it to replace their skills without developing new ones, and those who use it to do the same or worse work more cheaply. They will be left behind in a competitive world where others work out how to use it to do more or better work with no reduction in effort.
fdsajfkldsfklds
> those who work on things where LLMs are capable of a substantial amount of the tasks will be left behind
It sounds more like there is no chance that most of those people will stay employed, regardless of how "ahead" they try to stay.
moron4hire
If LLMs mean I never have to open a PowerPoint from a client to pull out their "data" again, that'd be great. I gain nothing from being a manual data entry monkey for people who don't understand the concept that presentation-ready output formats are not data transmission formats.
But if I'm to be expected to employ vibecoding in my day to day job as a software engineer, I'll dismantle my house and go live off grid somewhere in Alaska. I have enough power tools and knowledge to do it. Probably massively healthier for my kids.
furyofantares
For now at least, I think it really depends on what type of coding that is.
I don't have any particular predictions going forward about it, but something I think about right now is, do I want to focus my time where the interesting decisions, the valuable contributions I make, are product-level thinking about what to build and what problems to solve? Or do I want to focus my time where the interesting decisions are technical ones, fully wrapping my head around a technical problem and coming up with a solution?
I do think both options are still available, and personally I love them both. But I don't know what types of coding would involve significant amounts of both activities anymore.
skydhash
There’s still a lot of place for both. Because they are just a shift of perspective around the same thing: Solving a problem for someone.
Product is when you see things as the one who has the problem and design the solution in a way that is usable. Technical is when you shift to seeing how the solution can be implemented, and then balance tradeoffs (mostly costs in time and monetary resources).
While the code is valuable (as it is the solution), building it is quite easy once you have good knowledge of both sides.
The issue with AI is not in its capabilities, but in people rushing to accept the first version when there are still unknowns in the project. And then changes cost almost as much as redoing the project properly.
FiReaNG3L
Weird fallacy that if you use a tool you can't use your brain anymore
bluefirebrand
Like people who use machines are always physically strong enough to do the job the machine does, right?
beepbooptheory
This is not what a fallacy is.
moron4hire
I think it's pretty obvious that people who offload their thinking to an LLM will eventually get used to not thinking hard about things. Anything you stop doing regularly eventually atrophies. Thinking hard about things and performing real work is as much a skill as it is an innate property of being smart, as evidenced by the many "prodigy" sorts who languish in obscurity later in life.
- https://publichealthpolicyjournal.com/mit-study-finds-artifi...
wesleywt
If you use a tool that replaces your brain, then you won't use your brain.
CivBase
A lot of people do outsource their thinking to AI, so it's not that weird to bring up. That's effectively how many AI companies are marketing the technology.
But it's definitely possible to use AI without letting it think for you. OP should at least acknowledge that.
Those who dogmatically refuse AI outright may be disadvantaged for some things in the future. But it's also probably hyperbolic to say they will be "left behind".
tsukurimashou
I agree with OP; it's the other way around. While some will gradually lose basic skills by relying more and more on AI for productivity's sake and out of laziness, the value of those "people who don't use AI" will go up as they choose to simply keep "learning the hard way".
abustamam
> Why wouldn't you aim to be better, to learn how to be or do something that AI would never?
Because it doesn't make sense to try to be better than a tool. A woodworker could use a hand saw and take an hour to cut wood... or he could use a buzz saw and cut it in a few minutes. Is the woodworker any less of a woodworker when he uses a buzz saw instead of a hand saw?
Outsourcing thinking to AI is not healthy, and certainly if everyone used AI like this we're doomed.
I still think it's true that those who don't use AI will be left behind, but it's a bit tautological, because the thing they're left behind on is AI. A lot of the biggest companies on earth are putting a lot of money into AI, but if you're OK with working for a company that is not putting all their money into AI, that's perfectly fine.
Just like blockchain was everywhere ten years ago and now is just kinda _there_. If you got in before the hype, you could have made a lot of money. If you didn't, you were left behind. I was left behind, and I'm OK with that.
mgaunard
I find that good people get better with AI, but I'm not sure more average people really do.
I've seen some produce stuff without really understanding it, barely review anything, and pretty much suffer from imposter syndrome.
cyanydeez
I think what we're seeing is it's just an amplification of whatever intrinsic motivations people have; the whole mirror to the self thing, on steroids.
Obviously people who are motivated by curiosity will have a different view, and those who value creativity will end up thinking otherwise.
Also, it's basically impossible to separate the technical capabilities from the big-money fascists pushing it.
lexandstuff
I have a feeling that a big risk of using AI all the time is that our own neurological capacity starts to dwindle.
Just as many people leading sedentary lifestyles have to make a deliberate effort to exercise, because inactivity is really bad for our bodies, I think we're going to realise that a similar process is necessary for our minds.
You really want to be spending a bit of time every day operating at your cognitive limits - trying to fully engage your System 2 - if you want to avoid brain atrophy. Coding used to kind of give you this exercise for free, but you can go really far with just your System 1 nowadays - literally get things done while scrolling Reddit.
I'm trying to allocate 30-60 minutes a day to doing something difficult, like writing code by hand for an unfamiliar problem or reading and summarising difficult papers without AI.
voidmain
When on the road to hell, it's OK to be left behind.
jckahn
But what if all the good jobs are only in hell?
dodu_
Then it seems you have a flawed concept of what constitutes the "good jobs".
tmountain
A job from hell is a bad job by definition.
Barrin92
A good job is one that brings you joy and improves your creativity; by definition it can't be in hell. If you mean well-paying, that's a different thing entirely. Ditch the fancy car and adjust your lifestyle.
jckahn
I have a fancy car? News to me. I'm just trying to pay my bills and live a sensible and reasonably comfortable life. These days that requires a lot of money.
Muhammad523
If that's the case, so be it.
Thank ghod I'm retiring in six months.
I'm very thankful I came of age during the golden age of personal computing. I was able to own my own computer(s) and earn a living writing software on them and for them. Fifty years was a good run, and I consider myself lucky to have participated in it.
IMO we've gone full circle: from dumb terminals chained to mainframes and the whimsy of someone else's rules, restrictions, and rent-seeking, to my own bought-and-paid-for computer sitting on my desk that did exactly what I told it to do, using software that never changed unless I wanted it to change, and now back to dumb terminals (browsers) that talk to mainframes (the cloud) that not only harvest and sell my personal information to the highest bidder but constantly change the rules and restrictions on my software, and have gone back to renting me the software and pushing changes that I never asked for and never wanted in the first place.
I will never use spicy autocomplete for anything, and I find it depressing that people are being forced to use it in order to keep their job. I see a very dark future for computing if real skills are all replaced with garbage being vomited out by rules engines that harvested their "guess the next word" results from today's internet.