
Tell HN: AI tools are making me lose interest in CS fundamentals

With powerful AI coding assistants, I sometimes feel less motivated to study deep computer science topics like distributed systems and algorithms. AI can generate solutions quickly, which makes the effort of learning the fundamentals feel less urgent.

For those who have been in the industry longer, why do you think it’s still important to stay strong in CS fundamentals?


jmward01

There are two aspects to this. The desire to learn and the utility of learning. These are two very different things. Arguably the best programmers I have known have been explorers and hopped around a lot. Their primary skills have been flexibility and curiosity. The point here was their curiosity, not what they were curious about. Curiosity enabled them to attack new problems quickly and find solutions when others couldn't. Very often those solutions had nothing to do with skip lists or bubble sort. Studying algorithms is useful for general problem solving and hey, as a bonus, it helps sometimes when you are solving a real world problem, but staying curious is what really matters.

We have seen so many massive changes to software engineering in the last 30 years that it is hard to argue the clear utility of any specific topic or tool. When I first started, it really mattered that you understood bubble sort vs quicksort because you probably had to code it. Now very few people think twice about how sort happens in Python or how hashing mechanisms are implemented. It does, on occasion, help to know that, but not like it used to.

So that brings it back to what I think is a fundamental question: If CS topics are less interesting now, are you shifting that curiosity to something else? If so, then I wouldn't worry too much. If not, then that is something to be concerned about. So you don't care about red-black trees anymore, but you are getting into auto-generating Zork-like games with an LLM in your free time. You are probably on a good path if that is the case. If not, then find a new curiosity outlet and don't beat yourself up about not studying the limits of a single-stack automaton.

siva7

If there's a single trait that divides the best developers in the world from the rest it's what you described there - curiosity and flexibility. No academic course could bring you on par with those people.

raw_anon_1111

The best software engineers I know can go from ambiguous customer requirements to solutions: solving XY problems, managing organizational and code complexity, dealing with team dynamics, etc.

andrei_says_

Important to remember that these skills evolve iteratively and often best in an apprenticeship environment. No one started with these abilities. They get developed by solving problems and getting better at it.

raw_anon_1111

And how many companies today offer any kind of "apprenticeship environment"? Of those, how many pay former juniors market rate when they become mid-level and senior developers, rather than suffering from salary compression and inversion, where new employees get market rate while existing employees are capped at some HR-mandated maximum and it makes more sense to job hop?

jorl17

Exactly this. Couldn't have said it better.

Do you feel yourself losing interest, curiosity, "spark"? If so, then maybe worrying is right.

If you're just (hyper?)focused on something else, then, congrats! Our amazing new tools are letting us focus on even more things -- I, for one, am loving it.

sam_lowry_

> The desire to learn and the utility of learning.

See also Profession by Isaac Asimov for a fictional story about the distinction between the desire to learn and the utility of learning: https://www.inf.ufpr.br/renato/profession.html

zem

And "The Feeling of Power", also by Asimov, for a satirical take on what happens when no one learns the things the computer can do for them.

faangguyindia

I'd take another view here and suggest you not learn all this until you need it.

The day you need it, you'll be more motivated to learn it. That's pretty much how I learnt most things.

kccqzy

Because AI still hallucinates. Since you mentioned algorithms, today for fun I decided to ask Claude a pretty difficult algorithm problem. Claude confidently told me a greedy solution is enough, before I told Claude a counterexample that made Claude use dynamic programming instead.

If you haven't learned the fundamentals, you are not in a position to judge whether AI is correct or not. And this isn't limited to AI; you also can't judge whether a human colleague writing code manually has written the right code.

kevv

That is correct. But for how long? How long would it take for AI to learn all of this too? AI sure does learn faster than humans, and even though that will never degrade the relevance of fundamentals, don't you think the bar for someone beginning to learn the fundamentals would just keep increasing exponentially?

Flatterer3544

AI takes existing data to set its range; it never creates anything new but takes it all in and produces a baseline, which is very good at many tasks.

It cannot really create anything new and never seen, though most people will never do that either.

So if we push even more onto AI, I am afraid MANY (not all) who would previously have gone through the discovery path won't stumble onto their next innovation, since they simply prompted a good baseline for the ABC task, because we are lazy.

nightski

Even if AI knows everything and is basically sentient, we still need to understand these things to work with it. How can we prompt it reliably without understanding the subject matter for which we are prompting?

If anything I consider fundamentals in STEM (such as Math/CS) to be even more valuable moving forward.

theshrike79

Did you give Claude a way to test/verify/benchmark said algorithm compared to other solutions?

If not, how can it not hallucinate when you didn't give it any constraints?

rerdavies

You can just tell it that it's doing it wrong (and why). Of course, you have to know that it did it wrong.

theshrike79

The point is that if you know the algorithm will produce X as the output when the input is Y, give that to Claude as a tool.

And if you know that the previous algorithm completes in Z milliseconds, tell Claude that too and give it a tool (a command it can run) to benchmark its implementation.

This way you don't need to tell it what it did wrong, it'll check itself.
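The check-itself loop described above can be sketched as an oracle harness: give the model a command that compares its implementation against a known-good reference on random inputs, so a wrong answer surfaces as a concrete counterexample instead of you having to spot it. (The function names and the sorting example here are hypothetical stand-ins, not anything from the thread.)

```python
import random

def candidate_sort(xs):
    # stand-in for whatever implementation the model produced
    return sorted(xs)

def oracle_check(impl, trials=1000):
    # compare the implementation against a trusted reference on random inputs;
    # "if the input is Y, the output must be X"
    for _ in range(trials):
        xs = [random.randint(-100, 100) for _ in range(random.randint(0, 20))]
        expected = sorted(xs)  # known-good reference
        if impl(list(xs)) != expected:
            return xs          # return the failing input as a counterexample
    return None                # no counterexample found

print(oracle_check(candidate_sort))
```

Wired up as a shell command the agent can run, this turns "tell it what it did wrong" into "let it find out for itself".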

rishabhaiover

I'm curious, what was the algorithm problem?

kccqzy

It’s a variant of a knapsack problem. But neither Claude nor I initially realized it was a knapsack problem: it became clear only after the solution was found and proved.
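The exact problem isn't shared, but the classic 0/1 knapsack shows the greedy-vs-DP trap kccqzy describes: picking items by value density looks plausible and fails on small counterexamples that the DP handles. (The weights and values below are illustrative, not from the thread.)

```python
def greedy_knapsack(items, capacity):
    # plausible-but-wrong: take items by value density while they fit
    total = 0
    for w, v in sorted(items, key=lambda it: it[1] / it[0], reverse=True):
        if w <= capacity:
            capacity -= w
            total += v
    return total

def dp_knapsack(items, capacity):
    # standard 0/1 knapsack DP: best[c] = best value using capacity c
    best = [0] * (capacity + 1)
    for w, v in items:
        for c in range(capacity, w - 1, -1):
            best[c] = max(best[c], best[c - w] + v)
    return best[capacity]

items = [(10, 60), (20, 100), (30, 120)]  # (weight, value)
print(greedy_knapsack(items, 50))  # 160: density-greedy takes the 10 and 20
print(dp_knapsack(items, 50))      # 220: optimum takes the 20 and 30
```

Without knowing this failure mode, a confident greedy answer from the model is easy to accept at face value.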

babas03

AI is great at giving you an answer, but fundamentals tell you if it's the right answer. Without the basics, you're not a pilot; you're a passenger in a self-driving car that doesn't know what a red light is. Stay strong in the fundamentals so you can be the one holding the steering wheel when the AI hits a hallucination at 70mph.


royal__

These tools actually make me more interested in CS fundamentals. Having strong conceptual understanding is as relevant as ever for making good judgement calls and staying connected with your work.

rossjudson

This is the right answer. AI writing code for you? Then spend that time understanding what it is writing and the fundamentals behind it.

Does it work? How does it work? If you can't answer those questions, you should think carefully about what value you bring.

We're in this greenfield period where everybody's pet ideas can be brought to life. In other words...

Now anyone can make something nobody gives a shit about.

theshrike79

> Now anyone can make something nobody gives a shit about.

As a corollary, I can build shit that's perfect for me and I don't really care if it's any good for anyone else =)

Before I had to find someone else's shit and deal with their shit, trying to make it do the shit I need it to do and nothing else.

j3k3

"Now anyone can make something nobody gives a shit about."

lol nice one

atonse

Read this article from the Bun people about how they used CS fundamentals (and that way of thinking) to improve Bun install's performance.

https://bun.com/blog/behind-the-scenes-of-bun-install

Then look at how Anthropic basically acqui-hired the entire Bun team. If CS fundamentals didn't matter, why would they do that?

Even Anthropic needs people that understand CS fundamentals, even though pretty much their entire team now writes code using AI.

And since then, Jarred Sumner has been relentlessly shaving performance bottlenecks from Claude Code. I have watched startup times come way down in the past couple of months.

Sumner might be using CC all day too. But an understanding of those fundamentals (more a way of thinking than specific algorithms) still matters.

someprick

As a non-member of the exalted many who get to hack for a living:

I agree. The nature of the machine is to crush the artisanry and joy from the task. However, you can't beat it, so…

I use the miserable things as "research accelerators." I have neither the time, nor the capacity to sustain the BAC necessary, to parse all of the sources and documentation of the various systems in which I'm liable to take interest. I very rarely ask them to "do ${task} for me," but rather: "What is the modern approach to ${task}? And, how do I avoid that and do ${task} in the spirit of Unix?" "Has anyone already done ${task} well?" "Are there any examples of people attempting ${task} and failing spectacularly?"

If you treat it like your boss, it'll act like your boss. If you treat it like your assistant, it'll act like your assistant.

Edit: derp.

wcfrobert

To borrow a concept from Simon Willison: you need to "hoard things you know how to do". You need to know what is possible; you need to be able to articulate what you want. AI is a fast car, but it's empty and still needs a driver. As long as humans are still in the loop, the quality of the driver matters.

theshrike79

Terminology matters, if you use the right words, the AI will work better.

Just saying "use red/green TDD" is a shortcut to a very specific way of fixing bugs.

Or when you use a multi-modal model to transcribe video saying "timecode" instead of "timestamp" will improve the results (AV production people say timecode, programmers say timestamp, it hits different parts of the training material)

hedora

Fundamentals are the only thing left to learn in our field.

Either the AI doesn’t understand them, and you need to walk it down the correct path, or it does understand them, and you have to be able to have an intelligent conversation with it.

tartoran

I think that AI, particularly LLMs, can be quite effective for learning, especially if you maintain a sense of curiosity. CS fundamentals in particular are well suited to learning through LLMs, because models have been trained on extensive CS material. You can explore different paradigms in various ways, ask questions, and dissect both questions and answers to deepen your understanding or develop better mental models. If you're interested in theory, you can focus on theoretical questions; if you're more hands-on, you can take a practical approach, ask for code examples, etc. If a session covers something you want to retain, ask for flash cards.

TehShrike

There are two types of CS fundamentals: the ones that help in making useful software, and the rest of them.

AI tools still don't care about the former most of the time (e.g. maybe we shouldn't do a loop inside of loop every time we need to find a matching record, maybe we should just build a hashmap once).

And I don't care if they care about the latter.
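The nested-loop-vs-hashmap point above is the kind of fundamental that pays off in real code: re-scanning a list for every record is O(n*m), while building a dict once makes each lookup O(1) on average. A minimal sketch (the record shapes and function names are made up for illustration):

```python
def match_slow(orders, customers):
    # O(n*m): re-scan the whole customer list for every order
    out = []
    for o in orders:
        for c in customers:
            if c["id"] == o["customer_id"]:
                out.append((o, c))
    return out

def match_fast(orders, customers):
    # O(n + m): build the lookup table once, then hit it per order
    by_id = {c["id"]: c for c in customers}
    return [(o, by_id[o["customer_id"]])
            for o in orders if o["customer_id"] in by_id]
```

Both return the same pairs (assuming unique customer ids); only the second stays fast as the lists grow.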

rslomkow

I actually find that the inverse is true. I find myself thinking more in terms of algorithm trade-offs and strategies for distributed systems, and working with the LLM to explore what the state-of-the-art options are and how to analyze which are appropriate for my current project.

I see over and over those with the deeper understanding are able to drive the AI/LLM code generation processes faster and more effectively, and build things that can be built on by others without hitting hard bottlenecks.

The less people understand CS fundamentals, the faster they hit a blockade of complexity. This is not necessarily bad code, but sloppy thinking. And CS fundamentals are information- and logic-processing fundamentals.

It is the Centaur issue. You need to provide the evaluation and framing for the AI/LLM to search out the possibilities and well-known solutions, and code up the prototypes. Without the fundamentals, you have to rediscover them slowly after you have already hit the hard problems, pausing for days or months while trying to work your way around them.

anhldbk

Well, it depends. There's no right or wrong answer here.

Simon wrote an article "What is agentic engineering?" [1]

> Now that we have software that can write working code, what is there left for us humans to do?
>
> The answer is so much stuff.
>
> Writing code has never been the sole activity of a software engineer. The craft has always been figuring out what code to write. Any given software problem has dozens of potential solutions, each with their own tradeoffs. Our job is to navigate those options and find the ones that are the best fit for our unique set of circumstances and requirements.

Such navigation may require various skills. For example: people/product skills (e.g. customer empathy) to determine what to build, or engineering skills (e.g. optimization). Please be open to learning and get stronger via feedback.

[1]. https://simonwillison.net/guides/agentic-engineering-pattern...
