Hacker News

cdfalcon

There's something so off-putting about academics giving industry advice when they haven't spent a day working as an engineer at a company.

> Care deeply about your craft. Refactor code until it is clear and elegant. Write good documentation for other humans to read. Have the courage to go slowly, especially when everyone else is telling you that you need to go fast and cut corners.

Outside of the bit on avoiding cutting corners, this advice seems like a straight path towards unemployment in a few years. The implication is that "your craft" is writing and polishing code, a skill which seems to be increasingly antiquated in favor of higher level system design. Who is going to read your carefully crafted documentation lol? The agents who replace you?

If a tree falls in the forest...

saadn92

What gets me is the craft point. I've shipped more useful software in the last year than probably the previous five combined, and most of that is because I stopped treating code as the artifact and started treating the product as the artifact. The craft moved up a layer.

> until it is clear and elegant

New grads who spend weeks refactoring code are going to get lapped by new grads who ship something and iterate. There's just a faster feedback loop now.

bdelmas

> New grads who spend weeks refactoring code are going to get lapped by new grads who ship something and iterate. There's just a faster feedback loop now.

With AI becoming so prevalent, in the long run I wouldn't be so sure. True experts will become more and more rare.

ryandrake

I guess my response to this would be that not everyone is the same. You might like to ship more useful software. I got into writing code and made a long career out of it because I like writing code. Not "making products." Not "shipping things." Yes, making products and shipping things were necessarily a small part of my career, but that's not what got me out of bed in the morning.

It's like telling a writer "your job is now to bind up books and place them on the store shelves." OK, but it's a totally different job and not exactly one the writer is going to like.

2ndorderthought

This person is an educator. You should absolutely learn how to code by deep practice. You can easily learn how to use the slop machine in, I don't know, a week or something if the job demands it.

sixtyj

Absolutely. A wise man (not an entrepreneur) writes a message to his students.

We seem to forget what it was like to be a freshman.

At that age, you look up to anyone who’s more experienced.

Yes, you have to deliver, iterate and make mistakes very often to learn from them.

But as the text clearly states: relationships, people and justice matter more.

Where can I sign it?

smolgumball

Absolutely wild to see this take downvoted. While it's abundantly clear that Hacker News has long since become a mouthpiece for the AI investment machine, I really hadn't felt the loss of strong engineering ethos until recently.

minihoster

So now we're downvoting the idea that people should have a strong understanding of how to code? We're cooked. A week does seem about right for getting to 90% of optimal AI agent use if you earnestly explore its boundaries.

hhjinks

The slop machine is stupidly easy to use. Recently switched jobs and got to use Claude Code for the first time. Literally just talk to it. There's nothing to learn.

cpill

yeah, but everyone can use the coding agents; not everyone can choose well. the rarer the skill, the more you get paid for it. also, the better you code, the better the results you get from the agent

jauntywundrkind

There are a lot of ways to ship things & iterate without having any idea what you are building or doing technically, without building any taste for how things work.

Those people are going to be the absolute most dangerous possible thing you can do to a company.

Maybe some day we can just totally give up the technicals to the machine, but I strongly doubt it. Every single model is both brilliant, but also a fool, no matter how frontier it is.

Yes, the feedback loops are faster. But you need to assess what's actually technically happening. Someone does. Maybe you offload the actual thinking up the chain, delegate taste, understanding, and judgement to only the people above you, and make them all go mad dealing with the endless slopcoding they are being hit with. But just as bad, that junior engineer is robbing themself too. Maybe they get away with not looking, but they sure aren't going to learn a lot.

I'm missing the link but there was a great submission maybe a month ago about two hypothetical grad students, I think in astronomy, where one failed and flailed and did things largely the old fashioned way, and the other used AI to get it done. The advisor couldn't really tell who was doing what. But at the end, one student had learned & gained wisdom, and the other had served as a glorified relay between the AI and the advisor and learned little. Same work output, but different human outcomes.

Junior engineers are really not that cheap. Relative to your capabilities you are not a bargain; you take a ton of valuable time from other people. If a company is hiring you, they are either truly fools lacking basic understanding, or they are in on the bargain: they want you to be getting better, and are testing to see if you can become more useful. Sure it's great to show up and have impressive output, but you need to actually be learning and growing. You need to be participating in the feedback loop actively. Or you will be lapped by people who care & think like engineers.

beej71

> Those people are going to be the absolute most dangerous possible thing you can do to a company.

I hear you, but here's the thing: the companies don't give a shit about software quality any further than it takes to keep you coming back as a customer. And it's actually been like this for a long time. They're going to hire people who can ship who-cares-how-buggy software as fast as possible. It's better for the bottom line.

And that pains my soul and pains me as a consumer (because we already had to put up with too much crap software before genAI started producing it in reams), but there's very limited money in the kind of quality you're talking about.

I hear stories from people interviewing now--the interviewers react negatively if you tell them you're working on keeping your programming skills fresh. They just want to know how many agents you can run at a time and how many lines of code you can generate per day.

Personally, I think someone skilled in software development working with genAI is going to be more productive than someone not skilled working with genAI, but I don't think that's even being selected for now.

Grim days.

The one thing that gives me hope is that every time we ask our graduates who are now in the field (and all work with AI) if we should drop classic CS education and only do AI, they all emphatically reply in the negative. Yes, we need some AI education in there, but they want the foundation, too.

archagon

AI is not an abstraction layer. If this is not obvious to a so-called engineer, they should probably not be throwing stones.

lo_zamoyski

We have to ask ourselves what the purpose of refactoring is. People use that word like some magic incantation, as if the value of some particular instance of "refactoring" were self-evident. "What are you doing?" "Oh, I'm refactoring X." "-hushed tones- Ohhhh, yes, carry on, then..."

Refactoring improves code organization. It makes the code more maintainable and, arguably, more reusable. And, from an academic POV, it makes code more satisfying conceptually by aligning it more clearly and conspicuously with the model of a domain. Good stuff.

Great. Now, in industry, what matters is the result. Nobody cares if the result was produced by a witch casting magic spells or a grunt hitting a rock with another rock. Industry is practical. It cares about "craft" as far as it enables commercial success (and yes, short-term thinking can be bad, but guess what: you need to eat in the short-term!). Maintainability is a nice thing to have, because it does allow us to more quickly develop code. But how maintainable something needs to be, especially in relation to other competing concerns, has no fixed answer. It really depends on the situation.

Practical wisdom, known as prudence in the classical literature, is the foundation of all moral behavior. The right decision, the right concern, really does depend on the circumstances. You cannot derive from principles, from the armchair, what the right course of action is for everything. The general principles may be immutable and absolute and fixed, but the way in which they are applied in particular circumstances will vary.

Academia can insulate people from certain kinds of practical concerns, which is supposed to aid theoretical work, but this demands that the academic recognize his limits. He is not in a position to pass judgement on prudential matters, which is to say matters that are not strictly matters of principle, if he is not prepared to engage competently with the concrete reality of the situation.

danny_codes

Perhaps your vantage point from industry is in fact myopic. We all have our own biases.

cdfalcon

Completely fair - but at least my PoV comes from having actually worked as a SWE, you know? I feel like the best understanding this fellow can have is purely secondhand from watching the success / failures of his students.

I also think I get doubly upset from advice like this because it’s given and marketed to impressionable young students. Even agreeing with all the moral points he’s made, I truly think this advice would set up a new grad for failure and have them focusing on the wrong skills for this market.

The bit about ignoring trends feels too head in the sand for my liking :/

danny_codes

Fads come and go in industry. This version of LLMs will come and go as well, as will the coding languages and paradigms we used before (and, presuming you want your code to actually run, still do with some decent frequency).

Will LLMs in their current ergonomics have staying power? Perhaps. Nobody can predict the future. But I don't think it's a given in the least.

DJBunnies

How do you know they didn't? My college professor was formerly at NASA, where this stuff is important.

I recognize not everyone's work is [as] important, but we should still strive for excellence (and safety.)

microtherion

When I started studying CS, the "industry" thought students should be taught COBOL, and maybe some PL/I and Fortran, because obviously that was what the market wanted.

archagon

I worked at a FAANG in a senior role for around 6 years and I completely agree with the article. (I left before LLM/agent use became widespread, but I would have flamed out anyway if it was forced upon me.)

gipp

Buddy... The whole point of the post is that he wants his students to question whether "succeeding in this market" is really the right choice.

lo_zamoyski

That's a flippant reply.

Programming is a practical skill, and its most common expression is industrial or commercial, not academic proofs of concept. The post addresses students who will enter industry; that's the focus of the professor's own post.

And I sympathize with many points being made here. However, the point about refactoring code is somewhat odd and detached from the real-life constraints of programming in the wild.

Like, sure, in the ivory tower, you can confine yourself to nicely bounded problems and tidy little toy POCs. You can survive doing those things, because the selective pressures allow for it. I love those things, personally. They help me understand the nature of the thing. And in an academic setting, you can refine and refactor the hell out of those things to your heart's content (not that there is necessarily an objective end point to refactoring; code organization is subject to goals and constraints which can shift around).

But the reality of software in a commercial setting is not the tidy one you can expect in an academic setting. It's messy, subject to commercial pressures, to a hierarchy of values that doesn't place "refactoring" at the top of the list. And why would it? Whether you should refactor something is not just a question of whether it suits your conceptual tastes or even whether it is more maintainable. Unlike algorithms and principles and even techniques, software is not eternal. It is ephemeral. Its shelf-life is bounded. It is a piece of a larger business process. You're not refining some theory or some grasp of a Platonic ideal. You're mostly just putting into place plumbing to get something done. Whether you should refactor something, when you should refactor something, is a matter of prudential judgement, which is to say, of practical reason.

So, in light of that, these are actually quite absurd things to say, given the difference between the privilege of academia and the gritty reality of industrial and commercial software development. If we were to force our professor into the world of industry, he would quickly lose his job, or he would quickly learn that some of his strange idealism is silly and detached from the reality that his students will face.

godelski

> It's messy, subject to commercial pressures, to a hierarchy of values that doesn't place "refactoring" at the top of the list. And why would it?

Probably because it's a good way to be more profitable.

Code that's easier to understand is easier to maintain, add new features to, fix bugs in, onboard new engineers onto, etc.

Code that's well written executes faster (saving computational costs), scales better, is more robust with higher uptime, reduces bandwidth, and so on.

The thing is, the business people will never understand this. Why would they? They're not programmers. They're not in the weeds. But that's your job as an engineer: to find all these invisible costs.

I'm pretty confident the industry is spending billions unnecessarily. Hell, I'm sure Google alone is wasting over $100m/yr due to this.

Don't be penny wise and pound foolish. You're smarter than that. I know everyone here is smarter than that. So don't fall for the trap

sosodev

I sense that the frustration you feel is that professors are able to make choices based on their values, but the average person is not. Broadly speaking, of course.

I think it is a great shame that we live in a modern world where we do what we must to survive, regardless of how it makes us feel. I suspect it is the root of much suffering.

ryandrake

Seriously. This thread is so depressing. It's like the entire software industry has given up and just accepted "increase speed forever at any cost" as some kind of iron law of software employment. Is nobody even pushing back anymore? Even offering token resistance? The 'bros have truly won. Our only imperative now is "Can we crush it in the market?"

foltik

Wild how many people take “care about your craft” as a condescending personal insult. Maybe it’s hard to hear once the job’s beaten it out of you. And it’s about to get a lot worse.

g-b-r

The parent comment (cdfalcon) has 41 votes right now; it's disgusting.

thundergolfer

Completely agree that it's off-putting. The author indeed has only ever worked in academia per his LinkedIn.

But disagree that this is a path to unemployment. At work we go very fast and yet I think fast is compatible with each of those points, just not in all situations.

Marc Brooker, distinguished eng at AWS, gives much more useful advice for industry, as you'd expect given his almost 30 years in industry.

https://brooker.co.za/blog/2026/03/25/ic-junior.html

rowanG077

From that guy's LinkedIn, he was in academia and then at AWS. I guess it's better than the professor, but hardly someone who knows the ins and outs of the industry. For that you need someone who has had a multitude of jobs at various different types of organizations.

loveparade

Doesn't matter who reads it. The point is that you will probably never learn to do "high level system design" well if you do not have enough experience writing and refactoring code yourself. It's like wanting to become the chef of a kitchen and giving instructions without ever having prepped food.

There is indeed something useful about trying to write elegant code. Not because others read it, but because that's how you learn about the engineering tradeoffs and abstractions that exist everywhere.

torrance

I think you are making exactly his point. Practicing code as a craft, caring about how you do it, how well you do it, and what it’s ultimately used for is, as you correctly point out, not going to bring you profit or employment.

So maybe there’s something wrong with how we organise work?

nikcub

> If a tree falls in the forest...

I hope this is a pun on the content management system used to publish OP. It's forester[0], written in OCaml; it parses TeX-like .tree files into semantic XML, which the browser renders to HTML via XSLT.

View source on the page to get an idea.

A reminder of what the idealised web promise from decades ago was. Long gone. Very apt.

[0] https://www.forester-notes.org/index/index.xml

remywang

He is not giving advice to the industry, he is giving advice to aspiring programmers and computer scientists. He has no experience in industry, but has produced lots of high quality software and research.

dijksterhuis

> Be intentional about deciding your own moral and ethical boundaries up front. Don't settle for the lie of compromising your principles "just for now" until you can find something better.

my uk mechanical engineering bachelors degree had a required module on the ethics of engineering which has always stuck in the back of my mind. i think we went over the bhopal disaster as a case study one week, although it was about 16 years ago now so i can't be sure.

i've rarely seen any ethics modules in computer science departments, at least here in the uk. and i think we sorely need them in general.

edit -- so i guess it's a UK thing xD though i am glad to hear that you folks in the US enjoyed your ethics modules too

mavleop

As others have said, my comp sci degree also had a required ethics course. But it’s also pretty silly to think that a single ethics course where people don’t pay attention is going to change the hearts and minds of students. No amount of discussion about therac is going to make someone question if they should really be working for palantir or raytheon

nightpool

Every ABET-accredited CS program (almost every CS program in the US, I think?) requires an Ethics in Computer Science credit. I remember going over a lot of case studies, including Therac-25, but our course also included a lot of general grounding in ethics and philosophy as well, which I enjoyed a lot.

dijksterhuis

ah, fair enough! maybe it is/was a uk thing (admittedly times might have changed a little since i did my masters/phd).

at the very least i have a wikipedia article on therac 25 to read through now. so thanks for that!

also, yea i remember really enjoying the ethics module too. lots of discussion and not always a clear answer. was very different to the rest of the "one correct maths answer" in a lot of the other modules.

hgoel

In my computer engineering undergrad ~8 years ago in the US, an ethics class was mandatory, but IIRC the CS curriculum did not have it, despite both leading to similar careers. My memory may be wrong though.

Edit: they do seem to have one now, so either I remembered wrong or they added it.

Edit 2: I remember enjoying my ethics class, we covered some of the usual examples, and also things like basic contract negotiations. But I think I still didn't register that these concerns were real at that time. It was easy to believe that I wouldn't be working on anything that impactful. This did change once I started work.

dijksterhuis

> But I think I still didn't register that these concerns were real at that time. It was easy to believe that I wouldn't be working on anything that impactful. This did change once I started work.

The case study i mentioned (it may not have been bhopal, but it was definitely based on something that happened in india) stands out for me because it really drove home the impact and seriousness of some of the decisions we could end up making.

There was another time I remember the lecturer making a point of saying there was no single correct answer about something, which caused a lengthy discussion. We would have to figure out what's right/wrong for ourselves going forward. That really stuck with me.

hgoel

I was thinking about it differently. I understood the potential harm on paper, but I think I was still pretty immature. I thought I would be willing to put aside morals (eg working for companies like Palantir) to work on interesting cutting edge things.

But when I started working and found myself doing equally cutting edge research, but genuinely for the public benefit, I realized I definitely wouldn't be comfortable with putting aside my morals like that. Maybe I didn't really believe this was an option back then.

bbor

Yeah I was wondering about that… I got one, but prolly only because my uni put CS under the engineering school.

I don’t think scientists usually have mandatory ethics classes and mathematicians certainly don’t, so if it falls under either of those departments it might’ve gotten skipped!

pjmorris

I pull from these articles when teaching:

'We should teach our Students what Industry doesn’t want', Kevin Ryan, https://dl.acm.org/doi/pdf/10.1145/3377814.3381719

'Are you sure your software will not kill anyone?', Nancy Leveson, https://dspace.mit.edu/handle/1721.1/136281.2

ciupicri

I wouldn't be surprised if some students don't want it either.

dijksterhuis

ooo these look interesting. thanks! i shall have a read.

ciupicri

Ah, ethics - the silver bullet which will magically make good people out of bad people.

Omniusaspirer

I went from being a largely self-taught software dev with a small 1-man software business to working as a nurse in the US, and a lot of the motivation to make that change was that I wanted to spend my time doing work that I felt genuinely made the world better. Tech has incredible potential for good, but the actual industry itself in my eyes has extremely perverse incentives and no strong moral foundation like that which exists in nursing/healthcare. Nurses broadly consider themselves to be patient advocates and the voice for people who often can't have their own voice. As you can imagine, this culture is not in line with the modern pursuit of healthcare profits, yet nurses keep fighting the good fight. I see these battles play out nearly every day I go to work, and while it's usually done professionally, these are real battles with jobs on the line.

In a perfect world I think the software industry would have instilled these same virtues- software is just as (or more) capable of causing harm as poor healthcare. Yet we seem to be racing to a dystopian future at record speed courtesy of the tech industry, and our modern egalitarian societies will not survive that transition.

dejawu

My Computer Engineering degree had an "ethics" course (really a course on "engineering communications", but it was considered to satisfy the ethics requirement for graduation). It was a semester on how to file memos, cargo-cult your resume, and tell recruiters what they wanted to hear. Not a word was said about considering the implications of the things you're hired to build. When defense contractors took over the entire ground floor of the engineering building to hold a recruiting fair, we were encouraged to go.

The only time ethics in engineering was ever mentioned to me was in a class on applied number theory (cryptography), taught by a professor who had previously worked for the EFF. He went off-topic to tell us that many problems, like how to hit a target with a missile, may fascinate and compel us as engineers, but we shouldn't let that distract us into building instruments of death.

That course was an elective, and it was entirely possible to complete my degree without hearing a single mention of ethics.

There are many reasons I look back on my academic experience with disdain, but this one stands out to me.

davidw

The 90s weren't perfect, but the era felt more idealistic to me, with the rise of open source software. People thought about ethics a bit more. It felt like the ultimate tide rising to empower people locally on their own computers, and that tide has been going out for some years: a bit with cloud computing, and now a lot more with LLMs. And the company a lot of SV people keep these days is pretty gross.

the_snooze

I wouldn't necessarily say "idealistic," but certainly constrained. Microsoft has always been scummy in one form or another, but always-on internet connectivity has allowed them to be scummy in persistent ways long after your purchase of their product. It's a serious money-maker, but I think that explosive growth has bred a whole generation of tech "professionals" these days that think more like Wall Street bros than sober engineers: make line go up, damn the consequences.

tptacek

"I do not and will not use LLMs, in any form, for any purpose. Although LLMs are fascinating from a purely technical perspective, I refuse to participate in or contribute to such systems that are built on massive exploitation of human labor and make profligate use of scarce resources. I also don't think they are actually very good for a lot of the applications people seem excited about. Even in cases where LLMs are technically good at a task, that does not necessarily mean their use for that task contributes positively to human flourishing.

A good way to describe myself is as a generative AI vegetarian. You can find a fuller explanation—and many, many links—at the above essay by Sean Boots, which I agree with almost 100%."

simonw

I remain hopeful that some day someone will train an LLM which is tolerable to people who take this stance (which I respect, much like I respect food vegetarians despite not being one myself).

I've been tracking models trained entirely on out-of-copyright data, for example. I've not yet seen one of those which appears generally useful and didn't chuck in a scrape of the web or get fine-tuned on examples generated by a non-vegetarian model.

Andrej Karpathy can train a GPT-2 class model for less than $80 now, so at least the environmental cost of training may drop to a point that it's acceptable to LLM vegetarians: https://twitter.com/karpathy/status/2017703360393318587

Why do I care? This post is a great example. If you're a professor of computer science I really want you to be able to tinker with this fascinating class of models without violating your principles.

UPDATE: Huh, speaking of potentially vegetarian models, I just saw https://talkie-lm.com/introducing-talkie on the HN homepage https://news.ycombinator.com/item?id=47927903

I've explored a different out-of-copyright-trained model, Mr Chatterbox, before, but found it to have been mildly contaminated by synthetic conversation pairs from Haiku and GPT-4o-mini - https://simonwillison.net/2026/Mar/30/mr-chatterbox/

Talkie isn't entirely pure either though: "Finally, we did another round of supervised fine-tuning, this time on rejection-sampled multi-turn synthetic chats between Claude Opus 4.6 and talkie, to smooth out persistent rough edges in its conversational abilities."

strange_quark

I don't get why it's so hard for you and others in this comment section to understand why people hate AI so much; it's not just the theft and environmental destruction. A college professor, especially one at a liberal arts school, is obviously not going to like something that enables you to outsource your thinking and steals your agency. I think that's a perfectly valid viewpoint; maybe talk to someone without STEM-brain who lives outside of SF for once.

simonw

I've recently been amplifying this excellent piece about that by Nilay Patel https://www.theverge.com/podcast/917029/software-brain-ai-ba...

I don't need computer science professors to like LLMs, but I still want them to be able to poke at them with a stick without feeling like they are violating their principles regarding energy usage and unlicensed training data.

infotainment

> Andrej Karpathy can train a GPT-2 class model for less than $80 now, so at least the environmental cost of training may drop to a point that it's acceptable to LLM vegetarians: https://twitter.com/karpathy/status/2017703360393318587

I suspect that even if you reduced the cost of training or improved any other real-world metric, the goalposts would immediately move. It seems to me that it has never been about those things, but simply about the feeling of superiority one can attain by eschewing something seen as trending.

WatchDog

It's that, but also the narcissistic injury caused by seeing an LLM practice the craft you have spent your life trying to perfect.

infotainment

> built on massive exploitation of human labor and make profligate use of scarce resources

This kind of hyperbole, repeated ad infinitum by haters online, is not constructive, IMO. I would be quite certain that the manufacture of whatever computing device the author is accessing the internet on used far more resources and exploited far more human labor than training an ML model ever did.

cwillu

Be that as it may, it is a quote from the “Statement on LLMs” at the bottom of the link.

infotainment

Of course, which tells you the position from which the author of the linked post is arguing.

tcfhgj

Mentioning facts is not constructive, interesting.

How constructive are ad hominem arguments?

nikcub

* real programmers write assembly, not FORTRAN

* real programmers manage memory, it's a craft

* real programmers don't drag and drop

* real programmers don't use intellisense

* real programmers don't need stack overflow

* real programmers don't tab-complete

* real programmers don't need copilot

* real programmers don't use llms <- you are here

2ndorderthought

That's also not what he is saying. I don't see how that is what everyone is taking from this.

jmward01

'Cultivate your ability to think deeply. Do whatever it takes to carve out distraction-free bubbles for yourself in both space and time.'

I find that when I get back into exercise and reading, so much more of my life falls into place. These are things that I never have enough time for until I start doing them regularly, at which point I realize that they actually enable me to have more time to do things, not less.

uejfiweun

It is very weird how that happens. I hardly expected starting a marathon training program to drastically increase my day to day energy. But here we are.

wanderingmind

No disrespect to the person, but this seems to be written by someone who has spent their life in an academic bubble, without having to deal with people and entities with diverging interests, or with the impact of time on decisions. I'm sure many artists would love to spend more time perfecting their art, based on their subjective interests. However, if they prioritize that without understanding what their customer wants, they will go bankrupt. Nourish your interests through your hobbies. If they align with money-making capability, you are one of the lucky few. For a significant majority, they do not align.

0x000xca0xfe

How has not honing their craft and churning out generic slop instead as fast as possible worked out for artists?

Everybody can do that now with zero training.

LLMs are the ultimate equalizer. You won't have a future if you can only do average things fast. It's time to become eccentric, and the academic bubble is perfect for that.

_jackdk_

Prof. Yorgey has done some great work over the years, and wrote one of my favourite papers*. Good on him for speaking up like this. I saw an engineer from Anthropic speak at my alma mater a little while ago and the overwhelming impression I took away from the session was, "if Anthropic are meant to be the good ones, we're really going to be in for a rough time."

* Monoids: Theme and variations (functional pearl): http://ozark.hendrix.edu/~yorgey/pub/monoid-pearl.pdf

cdot2

"where technology is used to distract, extract, surveil, and kill"

The first general-purpose, programmable computer was designed in 1945 to calculate artillery firing tables for the US Army and was immediately used to help design nuclear weapons. Computers, and all technology, have always been, and will always be, used as weapons (either directly or indirectly).

MattyRad

See below that:

'Don't believe self-serving lies about technologies being "inevitable" or "here to stay". You don't have to just go along with the dominant narrative. You can make deliberate choices and help others to do the same.'

torben-friis

>Cultivate your ability to think deeply. Do whatever it takes to carve out distraction-free bubbles for yourself in both space and time. This might mean saying no to technologies or patterns of working that others say are critical or inevitable.

Currently struggling hard to achieve this. We all know everything fights for our attention nowadays, but I can assure you that you have no idea of the degree to which this happens until you actively try to fight it.

oxag3n

Being part of academia while a few family members are FAANG-ish, I find the comments on this post amusingly resemble debates we have: I'm for deep thinking, research, and analysis, with code being a product of your mental work, while family members with 10+ years in the industry are proud of not writing a single line of code and think Opus is just their new tool to be more productive. They can't answer one simple question, though: why do those corporations need them in this working setup? It's a hard one, because my family's well-being depends on it, and it doesn't look good.

vitacoco

"I do not and will not use LLMs, in any form, for any purpose." The academic navel gazing is strong with this one.

bithavoc

For the curious, this is mentioned by the author in another post:

http://ozark.hendrix.edu/~yorgey/forest/009L/index.xml

arcfour

I don't know why everything has to be so polarized these days. You can use LLMs without it making you an idiot or delusional or psychotic or something. Are there issues with them? Of course. Are there people and organizations that rely on them too heavily? Without a doubt. Does recognizing that they have utility and using them as another tool in your arsenal mean you think they're infallible and are a suitable substitute for your own capacity to think? Obviously not.

It's really tiring. You can't even talk about the pros and cons of the technology now without people immediately jumping to their respective sides over it. It either has to be all good or all bad. No nuance allowed.

The position espoused by the author of the article is so extreme it makes them look ignorant and foolish. I would want my teacher to be more open-minded/curious and have a more nuanced opinion than that.

2ndorderthought

This took a lot of courage. Glad to see this is being shared. It's the best honest advice I have seen to date.

cwillu

> This took a lot of courage.

It was important to say, but I very much doubt there was any courage involved.

JyB

Courage is not the appropriate word

2ndorderthought

I think it fits. Look at the anonymous posts in here, the sheer volume of posts saying this person is failing their students, is a relic, a Luddite, etc.

He put his name and career on it. That takes courage in my opinion.

Vaslo

There were never going to be repercussions for this, so not very courageous.

Vaslo

[flagged]

smitty1e

I don't understand how you extracted political bias from this.

A one-word summary of TFA would be: "wisdom".

Vaslo

The nonsense about not using AI for defense is a start

hgoel

I agree with the points made, even if personally I am okay with LLMs (as long as they're used with appropriate caution).

Especially relevant for students, I think, since they are hurting themselves most by relying on LLMs. Just as young children are made to do math by hand instead of using calculators, to build intuition and memory, students should aim to do things manually to build their skills.

Go make that toy website, game, OS, emulator or programming language. Read specifications and try implementing them yourself. You aren't in an environment that requires you to churn out features, you can explore!

Trav5

I totally agree and have been thinking about this lately. My son has been doing an incredible amount of engineering work on a fun side project manually, including 3D design and printing, soldering, etc. But he used AI to program an Arduino. I want him to understand the code, but I'm trying to find the balance between his school classes and all the other things needed for this project. He doesn't have time to dive into it all right now. This project is giving him exposure, but maybe the coding doesn't need to be done manually just yet.

hgoel

Yeah, I can imagine that it's tricky to find a balance when so many new things are put together.

I didn't have the benefit of skilled teachers or parents who believed in my interests (and of course no LLMs!), so I had no choice but to learn to deal with the frustration and get good at scouring docs/code.
