Brian Lovin
/
Hacker News

akoboldfrying

It's good that the article points out that DP algorithms are "just" clever ways to cache a recursion. Looking for a recursive solution is certainly the best way to start out looking for a DP solution, in my experience -- if you can find one, memoising it is trivial and may give a big speedup (and may even be faster than a "bottom-up" DP, since it only ever computes solutions that we definitely need).

With enough practice, it's usually possible to come up with a (correct but slow) recursive solution. When turning this into a DP, it doesn't matter if there are large numbers of subproblems in the call tree -- what's important is that there are a relatively small number of distinct subproblems. (Since there's no point caching a result that's only ever needed one time.) And that's where the difficulty tends to lie: Figuring out how to partition the original problem into few enough distinct subproblems.
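A minimal sketch of that workflow (an illustrative example, not from the thread): counting monotone lattice paths has an exponentially large call tree, but only m*n distinct (row, col) subproblems, so memoising the recursion is enough.

```python
from functools import lru_cache

@lru_cache(maxsize=None)
def paths(row: int, col: int) -> int:
    """Number of monotone paths from (0, 0) to (row, col)."""
    if row == 0 or col == 0:  # along an edge there is only one way
        return 1
    # Exponential call tree, but only (row+1) * (col+1) distinct calls.
    return paths(row - 1, col) + paths(row, col - 1)

print(paths(17, 17))  # binomial(34, 17) = 2333606220, instant with the cache
```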

vishnugupta

> what's important is that there are a relatively small number of distinct subproblems.

This is the crucial part IMO. Whether the larger algorithm is recursive or iterative etc is secondary. But yes DP usually tends to show up in recursive algorithms most often.

nsonha

> DP algorithms are "just" clever ways to cache a recursion

This is what made it click to me. When I was in university, perhaps because of the prevalence of procedural programming at the time (as opposed to FP), DP looked magic to me with the bottom-up tabularization examples in the textbook.

Practically, that is the way to do it because tail-call elimination isn't always applicable, but I wish they'd taught us the more intuitive way to look at it first (top-down, caching a recursion).

OskarS

The bottom-up tabularization isn't always necessary, but it's a key technique that is not just "avoiding overhead" or whatever. Doing top-down memoization, you don't know which subproblems you need to store at any given point, but going bottom-up, you do, and you can usually be much more clever about memory usage and overhead.

The obvious example is Fibonacci numbers: the "top down" memoized version needs to store every Fibonacci number in memory, because you don't know what number the recursion is asking for. But if you do it "bottom up", you know that to calculate f(N), you only need f(N-2) and f(N-1). Therefore, you only need to store the last two numbers, reducing your memory usage from O(n) to O(1).
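That bottom-up Fibonacci can be sketched in a few lines (an illustrative snippet, not from the thread):

```python
def fib(n: int) -> int:
    # Bottom-up: only the previous two values are ever needed, so O(1) space.
    a, b = 0, 1  # f(0), f(1)
    for _ in range(n):
        a, b = b, a + b
    return a

print(fib(10))  # 55
```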

This principle almost always applies in DP problems: for the "edit distance" problem, the number of possible values for the function is N^2 (where N is the size of the two words you need to compare, assuming they're similar sizes), so the memoized version takes O(N^2) memory. But if you do the "bottom up" version, you realize that to calculate all values in a row, you just need the previous row of values. That brings memory from O(N^2) down to O(N).
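A sketch of that row-by-row version (illustrative; the standard Levenshtein recurrence is assumed):

```python
def edit_distance(a: str, b: str) -> int:
    # Bottom-up Levenshtein distance keeping only the previous row:
    # O(len(a) * len(b)) time, O(len(b)) space.
    prev = list(range(len(b) + 1))  # distances from "" to each prefix of b
    for i, ca in enumerate(a, start=1):
        curr = [i]  # distance from a[:i] to ""
        for j, cb in enumerate(b, start=1):
            cost = 0 if ca == cb else 1
            curr.append(min(prev[j] + 1,          # delete from a
                            curr[j - 1] + 1,      # insert into a
                            prev[j - 1] + cost))  # substitute / match
        prev = curr
    return prev[-1]

print(edit_distance("kitten", "sitting"))  # 3
```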

Point being: turning the recursion "upside down" is not just a thing you do to be clever (or to avoid "function calling overhead", something like that), it has very real algorithmic benefits, usually because it takes MUCH less memory to do it like that.

hansvm

I think their point is that naive caching attempts still result in duplicate work, and if you start by viewing the problem recursively then DP is a clever insight allowing you to avoid that duplicate work. You're right; it's not _just_ function calling overhead or anything minor; it often represents major Big-O improvements. I think that's in alignment with what they're saying.

I like your insight about Fibonacci needing less state than a naive DP would suggest. Diving into that a bit, DP implementations represent a DAG. More importantly, for all the problems that easily fit into DP that DAG has small numbers of "local" connections -- you can partition it into at least O(some_meaningful_dimension) many topologically sorted levels where each level only depends on a small, known, finite number of previous levels (often 1-2), not deeper DAG nodes. A clever choice of order of operations then looks for a min-cut on that graph, proceeding through those levels. Fib has at most O(n) levels and at least O(n) state for naive DP, so a clever implementation has O(n/n)=O(1) state. Edit distance has at most O(n) levels and at least O(n^2) state, so a clever implementation has O(n^2/n)=O(n) state. I did something similar for some DP probability calculation [0], and it reduces state from quadratic to linear.

Fibonacci is, slightly pedantically, one of the worst algorithms to analyze this way. Only the smallest inputs actually fit in machine integers, and the exponential increase in the size of any bigints involved really puts a damper on a naive Big-O analysis. Fib mod 2^64 (or some biggish prime) fits into the spirit of things better.

[0] https://github.com/hmusgrave/zdps

Shorel

Agree. When I first learned the subject, my first thought was:

As fancy as that feature is, it should be called array memoization, or call stack memoization.

The term "dynamic programming" should be reserved for something better. IMO.

vanderZwan

I'm sure you know this (since in my experience the history of the name is usually taught together with the technique), but for those reading along: the name "dynamic programming" originates from the late forties, early fifties. While the "office politics"-related motivation Richard Bellman gives for the name cannot be completely true due to anachronisms[0], it still fits with terms that made sense at the time, before the modern usage of "programming" existed:

> The word dynamic was chosen by Bellman to capture the time-varying aspect of the problems (...). The word programming referred to the use of the method to find an optimal program, in the sense of a military schedule for training or logistics. This usage is the same as that in the phrases linear programming and mathematical programming, a synonym for mathematical optimization.

I fully agree that it would be nice if we relegated "dynamic programming" to a confusing historical umbrella term, and switched to less confusing names, like the ones you suggested. In that sense it is similar to "trie" - it's a funny pun, but incredibly impractical for verbal communication especially. It would be so much better if we all decided to just use "prefix tree" or "prefix search tree" and avoid the ambiguity.

[0] https://en.wikipedia.org/wiki/Dynamic_programming#History

da39a3ee

This is a widespread misconception: thinking of dynamic programming as just a form of memoized recursion is not the way to learn DP because it makes it extremely difficult to understand how to do the style of DP problems that involve filling out a 2D array.

For example, look at the "best time to buy and sell stock" series on leetcode: https://leetcode.com/problems/best-time-to-buy-and-sell-stoc....

These are much more naturally done by filling out an array, right? I've never done them with a recursion; I can't say I've thought hard about it -- is there a natural recursive solution?

(I linked to iii above but for anyone who hasn't tried them they are a great intro to DP problems; start with the first one: https://leetcode.com/problems/best-time-to-buy-and-sell-stoc...)

lifthrasiir

The point is that you don't have to exactly fill the 2D array to solve the problem that way. The 2D array is an optimization in this view, and can be safely replaced with a cache without breaking correctness. Of course, there are also some learned techniques specific to dynamic programming, and that makes it worth learning at some point because otherwise you will never think of them, but at its core dynamic programming is just a specific way of doing recursion.

da39a3ee

OK, but those are solved in just a few lines of code with 2D arrays. I'm not convinced it's helpful to approach them as recursions.

Also, anyone who thinks they understand how to solve DP problems on leetcode because they understand how to memoize a fibonacci recursion is in for a rather large disappointment.

tylerhou

Yes, there is a natural way to solve these recursively. Here is an example: https://leetcode.com/problems/best-time-to-buy-and-sell-stoc...

Not the best runtime (I could use a multidimensional array instead of a hash map for the cache, and recursion is "slow" in Python), but it's a clear solution.

Given a recursive solution, it is also "trivial" to convert it into filling an array by visiting the recursive states in reverse topological order. So here is a submission that fills in the array: https://leetcode.com/problems/best-time-to-buy-and-sell-stoc...
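A hedged sketch of that kind of state-based recursion, assuming the simplest unlimited-transactions variant of the problem (the exact leetcode variant behind the truncated link is unknown):

```python
from functools import lru_cache

def max_profit(prices: list[int]) -> int:
    # State: (day, whether we currently hold a share).
    # Only 2 * len(prices) distinct states, so memoisation makes it linear.
    @lru_cache(maxsize=None)
    def best(day: int, holding: bool) -> int:
        if day == len(prices):
            return 0
        skip = best(day + 1, holding)                  # do nothing today
        if holding:
            act = prices[day] + best(day + 1, False)   # sell today
        else:
            act = -prices[day] + best(day + 1, True)   # buy today
        return max(skip, act)

    return best(0, False)

print(max_profit([7, 1, 5, 3, 6, 4]))  # 7  (buy 1 sell 5, buy 3 sell 6)
```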

undefined

[deleted]

thdespou

> DP algorithms are "just" clever ways to cache a recursion

It doesn't work like that all the time, since caching increases the space complexity.

There are DP problems that can be solved without storing all possible values in an array.

qsantos

The issue I have with not connecting dynamic programming with caching is that it becomes an exercise in cleverness, and many people just give up. It was pretty fun in school, but if I need colleagues to ramp up on it, I need a more effective approach. Sure, they might not grok the full theory right away, but they won't use it at all, and definitely won't get to the theory, if they think it's too abstract for them.

undefined

[deleted]

dataflow

The best analogy I have here is that saying "DP is just caching/memoization" is like saying "investment is just buying stuff and selling them later." Regardless of how technically accurate it might be, it misses such massive complexities and difficulties in the topic that it just makes you look silly rather than insightful.

natpalmer1776

Would you prefer someone start learning the principles of investing from an initial premise of “investment is just spending money to make money” or “investment is just buying something for less than it will be worth in the future”

Both are technically correct and both statements are vast oversimplifications of a complex subject, however the second statement leads into more advanced topics naturally, creating a smoother progression of knowledge that builds on itself.

Every complex topic generally starts with a few foundational bits of knowledge that support the whole house of cards. Pick the wrong foundational bits and you won’t get as far.

dataflow

I certainly don't disagree that there should be a solid foundation here; I just think this is the wrong foundation, for what I believe to be good reasons (some of which I'll get to below) - mainly, it confuses the concept with the implementation mechanics.

Instead... I find it much more productive to start teaching this with the fundamentals of problem-solving techniques, as follows:

- One method for solving complex problems is to divide them into separate subproblems that are simpler to solve. We call this "divide-and-conquer", as you've seen in [blah, e.g. mergesort]. Observe that this technique decomposes your subproblem structure into a tree. (Draw a picture.)

- Another (more advanced) method for solving complex problems is to find subproblems that might potentially overlap, and then find a clever way to combine their solutions into the final answer. An example here is finding the shortest N-hop path by first finding the shortest (N-1)-hop paths, then extending and comparing those to get the N-hop answer. Observe that this technique decomposes your subproblem structure into a DAG form. (Draw a picture.)

Take a moment to read the above... notice the lack of any mention of caching/memoization/tables, etc.? That's not an accident. It's because those are optimizations to DP, not the core idea! Before you can even start talking about "how do I solve this quickly", the question you need to answer is "what systematic method can I use to solve this at all?" The central idea there is the decomposition of the problem into a tree vs. DAG structure. Once that sinks in, the question of how to make it faster (or whether to use recursion vs. iteration) is secondary, and also far more obvious to figure out on your own.

Moreover, this also lets the student realize that you can (sometimes) speed up divide-and-conquer with memoization, too. Which also hammers home the point that such an optimization is very much distinct from the technique itself!

dsego

I remember learning about the knapsack problem and trying out recursive ways to solve it as well.

https://github.com/dsego/codility/blob/main/knapsack.js

toomanyrichies

Origin of the name "dynamic programming", from its inventor, Richard Bellman:

"I spent the Fall quarter (of 1950) at RAND. My first task was to find a name for multistage decision processes. An interesting question is, ‘Where did the name, dynamic programming, come from?’

The 1950s were not good years for mathematical research. We had a very interesting gentleman in Washington named Wilson. He was Secretary of Defense, and he actually had a pathological fear and hatred of the word, research. I’m not using the term lightly; I’m using it precisely. His face would suffuse, he would turn red, and he would get violent if people used the term, research, in his presence.

You can imagine how he felt, then, about the term, mathematical. The RAND Corporation was employed by the Air Force, and the Air Force had Wilson as its boss, essentially. Hence, I felt I had to do something to shield Wilson and the Air Force from the fact that I was really doing mathematics inside the RAND Corporation. What title, what name, could I choose?

In the first place I was interested in planning, in decision making, in thinking. But planning, is not a good word for various reasons. I decided therefore to use the word, ‘programming.’ I wanted to get across the idea that this was dynamic, this was multistage, this was time-varying—I thought, let’s kill two birds with one stone. Let’s take a word that has an absolutely precise meaning, namely dynamic, in the classical physical sense. It also has a very interesting property as an adjective, and that is it’s impossible to use the word, dynamic, in a pejorative sense.

Try thinking of some combination that will possibly give it a pejorative meaning. It’s impossible. Thus, I thought dynamic programming was a good name. It was something not even a Congressman could object to. So I used it as an umbrella for my activities”.

Source:

https://alliance.seas.upenn.edu/~cis520/dynamic/2021/wiki/in...

bombela

I really like how this article exposes the problem recursively first then progressively adds caching, and finally reduces the size of the cache to only what is necessary.

I have often made the mistake of trying to get to the dynamic programming solution directly, and either got stuck or had to go through heroic efforts to get it working. I think from now on, I will force myself to go through the steps in order.

qsantos

From my own experience, being taught dynamic programming straight away makes it more of a puzzle. By going through the steps, explaining _why_ we are using a table, and connecting the concept to caching, I feel like it makes much more sense.

flobosg

gww

I would argue that these are some of the most important algorithms in bioinformatics/biology. They have a wide range of applications.

marcodiego

I had a very good algorithms professor. He studied at UCLA. His classes about dynamic programming were superb. He started with a problem for which the naive solution was simple and had exponential complexity. Then he broke the problem into smaller problems and reduced the complexity to something polynomial. Then he applied memoisation and the complexity dropped to linear.

I really would like to remember the problems he used.

Horffupolde

1. *Fibonacci Sequence*: The classic example where the naive recursive solution has exponential complexity. By storing previously computed values (memoization), the complexity can be reduced to linear.

2. *Coin Change Problem*: Given different denominations of coins and a total amount, finding the number of ways to make the change. The naive approach is exponential, but dynamic programming reduces it to polynomial complexity.

3. *Knapsack Problem*: Particularly the 0/1 Knapsack problem, where items with given weights and values must be placed in a knapsack of a fixed capacity to maximize total value. The naive exponential solution can be optimized using dynamic programming.

4. *Matrix Chain Multiplication*: Determining the most efficient way to multiply a chain of matrices. The problem can be solved in exponential time using a naive approach but becomes much more efficient with dynamic programming.

5. *Longest Common Subsequence*: Finding the longest subsequence common to two sequences. A classic dynamic programming problem that can be solved in polynomial time.

6. *Longest Increasing Subsequence*: Finding the length of the longest subsequence of a given sequence such that all elements of the subsequence are sorted in increasing order.

7. *Shortest Path Problems*: Like the Floyd-Warshall algorithm for finding the shortest paths in a weighted graph with positive or negative edge weights.

8. *Edit Distance (Levenshtein Distance)*: Finding the minimum number of edits (insertions, deletions, substitutions) needed to change one word into another.
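As a concrete instance of item 2, a small bottom-up sketch (illustrative; this version counts combinations, ignoring coin order):

```python
def coin_change_ways(coins: list[int], amount: int) -> int:
    # ways[a] = number of ways to make amount a.
    # Processing coins one at a time counts each combination exactly once.
    ways = [1] + [0] * amount
    for coin in coins:
        for a in range(coin, amount + 1):
            ways[a] += ways[a - coin]
    return ways[amount]

print(coin_change_ways([1, 2, 5], 5))  # 4  (5; 2+2+1; 2+1+1+1; 1+1+1+1+1)
```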

JohnKemeny

I just want to add

9. Longest Path in DAGs: Find a longest path in a directed acyclic graph.

10. Weighted Independent Set on a Path: Given an array of integers, compute the maximum sum of numbers provided that you may not take two consecutive cells.
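Item 10 fits in a few lines bottom-up (an illustrative sketch, assuming non-negative entries and allowing the empty selection):

```python
def max_nonadjacent_sum(nums: list[int]) -> int:
    # take = best sum that uses the current cell,
    # skip = best sum that skips it; O(1) space bottom-up.
    take, skip = 0, 0
    for x in nums:
        take, skip = skip + x, max(take, skip)
    return max(take, skip)

print(max_nonadjacent_sum([3, 2, 7, 10]))  # 13  (3 + 10)
```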

manvillej

I am a big fan of the Knight Dialer. I wrote an article on it and how you can actually use graph theory to reduce it to an incredibly efficient 4x4 matrix. was super fun.

qsantos

I have listed a few in the article. It's pretty common to see them in lectures and practical exercises.

- longest common subsequence

- longest common substring

- line wrap

- subset sum

- partition

- knapsack

You can also have a look at [1] for more ideas.

[1] https://en.wikipedia.org/wiki/Dynamic_programming#Algorithms...

jakeinspace

In addition to the suggested problems others posted, perhaps it was a scheduling problem? Like, for example, scheduling N overlapping events in time, like course schedules or processes for a CPU. Generally this would be done to optimize something like throughput - I believe that when you start adding special requirements, like "These 2 courses must be taken together", then things get much more complicated and intractable compared to plain-old dynamic programming.

colordrops

Did he study under Kang at UCLA?

marcodiego

I don't know, but I know he studied there in late 90's or early 2000's.

colordrops

I did as well in the 90s - I studied algorithms, which included dynamic programming, with Kang, who was recognized as an amazing algorithms teacher.

LargeTomato

Maybe Eggert? He's also the chief maintainer of emacs and Linux tzdata.

gligorot

wmil

Effective web caching is black magic.

qsantos

Thanks! I should have planned better for that before posting.

tromp

Dynamic programming is what allowed me to compute the number of legal Go positions [1,2], a 171 digit number.

While the naive method would take time 3^(n^2) to examine all possible positions on an n×n board, dynamic programming effectively strips away one dimension, reducing time to O(n^5 * 5.4^n), while using space O(n * 5.4^n).

[1] https://tromp.github.io/go/legal.html

[2] https://tromp.github.io/go/gostate.pdf

qsantos

Interesting, I have never really looked into this kind of calculation. Thanks for the links!

_v7gu

The name "Dynamic Programming" might seem out of place because it doesn't come from programming as a discipline. In this case, it actually refers to something like optimization, similar to linear programming. Dynamic programming is basically a method you can use to solve decision problems with discrete time, i.e. picking the optimal sequence {a_t} in order to maximize \sum_t u_t(a_t) (plus constraints). The "dynamic programming" part is defining a value function V* where V*(t) = max_{a_t} { u_t(a_t) + V*(t-1) }, which greatly reduces the dimensionality of the optimization problem.

roenxi

In fact, the official line [0] on where the name comes from is quite funny. I shall quote my favourite part:

> [Dynamic] also has a very interesting property as an adjective, and that is it's impossible to use the word dynamic in a pejorative sense. Try thinking of some combination that will possibly give it a pejorative meaning. It's impossible. Thus, I thought dynamic programming was a good name. It was something not even a Congressman could object to.

[0] https://en.wikipedia.org/wiki/Dynamic_programming#History

mtlmtlmtlmtl

My dad told me a joke/story once about one of his old psychologist colleagues(call him Dave) arguing with a psychoanalyst* about some psychology thing.

After a while of the discussion going nowhere, the psychoanalyst said something like this. "Not to worry, Dave, I know you're dynamically minded enough that you'll eventually learn to agree with me."

Dave was not pleased.

I guess that was more condescending than pejorative, but oh well.

(Most modern clinical psychologists consider psychoanalysis to be a pseudoscience. A small minority still practice it even clinically. They don't like each other very much.)

ordu

> After a while of the discussion going nowhere, the psychoanalyst said something like this. "Not to worry, Dave, I know you're dynamically minded enough that you'll eventually learn to agree with me."

It looks like the psychoanalyst suffered from a professional deformation. Not enough evidence to be sure, but it is the main premise of psychoanalysis that the psychoanalyst knows better than his "patient". A psychoanalyst knows what his patient feels, thinks, wishes, and so on. If the patient doesn't agree, then that is "resistance", and overcoming it is the first-priority task for a psychoanalyst.

> I guess that was more condescending than pejorative, but oh well.

The whole statement is condescending, but not because of "dynamically minded". It is "you'll learn to agree with me" that does the trick.

undefined

[deleted]

savolai

Care to offer evidence of the pseudoscience status? All I could find were debunkings of this claim as myth and outdated, and to my understanding research in the field has gained traction in past decades. I'd love to learn more about the debate, so any pointers are welcome.

https://www.ncbi.nlm.nih.gov/pmc/articles/PMC6020924/

https://cosmonautmag.com/2021/10/on-the-scientific-status-of...

” Depending on where your philosophical allegiances lie, Karl Popper might be a friend or an enemy to whatever version of philosophy of science you accept. Falsificationist approaches to science imply that psychoanalysis makes untestable and unfalsifiable claims! Popper made the very same claims about Marxist social theory as well. There are two issues with this approach. One, if falsificationism is true, then all social sciences, and even some natural sciences, are pseudo-scientific. This is an unacceptable conclusion since much of social science is clearly science. Lots of cutting-edge physics whose models do not lend themselves to controlled experimental testing would also lose their status as science. That is also an absurd conclusion. Newtonian mechanics, for example, would never have been accepted as a theory if we use Popper’s standards of theory corroboration and falsifiability. The theory offered better ways of explaining phenomenon that previous theories were already decent at predicting reliably. The process of legitimizing Newtonian physics had nothing to do with testability.4 A good theory of science does not rule out obvious cases of scientific rigor.5”

smokel

Dynamic typing comes to mind ;)

kevindamm

Dynamic scoping ;)

karmakaze

Scripting languages are called 'dynamic' sometimes pejoratively. Dynamic microphones are also inferior in studio recording situations. I could see a poor DP implementation wasting memory being described negatively.

qsantos

It _is_ a funny quote.

But I would still point out that this is the exact reason “dynamic” does not help if you do not know about the history. Since it can be applied to anything, it does not help you trim down what it can refer to.

glimshe

Most of the time I hear the term being used by other people (not you :) ), I feel it's for showing off and looking smarter than everybody else - "Look, I used dynamic programming to solve that problem" - when in fact they were just employing what I'd consider the natural and intuitive approach for solving the problem. There was nothing they actually "used", besides happening to identify that the problem could be broken into progressively smaller subproblems.

mk89

> happening to identify that the problem could be broken into progressively smaller subproblems.

Which is what dynamic programming is about. And no, not everyone is capable of doing that, especially since not all problems solved at work are Leetcode.

Sometimes people really have to spend hours or days to understand how to split a problem. Only then can optimizations be applied or become "obvious".

Like usual, solutions are "so obvious" when someone has done a lot of heavy lifting to simplify the problem, improve the context, etc.

bcrosby95

Do you feel similarly if someone says they used an iterative, recursive, or greedy algorithm?

Dynamic programming is a whole chapter in most algorithms books. It's not about showing off, it's the name of the technique.

throwaway2037

As I interpret the GP, the person is peacocking or gate-keeping by (humble)bragging about using dynamic programming to solve a problem. For all we know, they Googled for an efficient algorithm and copied the result. I have done it before, and I have no shame about it. If a teammate asks me how I knew about that dynamic programming algorithm, I would reply: "Are you joking? I could never program that myself. Thank you Google." Except for a few algorithms, most are solved with iterative or recursive approaches.

valval

Well aren’t you cynical.

closeparen

It's interesting how computing things - like optimization problems - used to be much more dominant in terms of what people thought about and did with computers. It feels like most of the time we are just doing data storage and retrieval combined with networking... some computations are embedded within those things, but usually well encapsulated.

layer8

“Optimization” is similarly misleading (as someone who once took a CS class on “optimization” expecting something very different ;)).

LudwigNagasena

Optimisation is just function minimisation/maximisation. What did you expect and how was it different?

hcs

Not speaking for the GP but I'd imagine "optimizing" for performance. I had a professor who would get very irritated when anyone mentioned "optimizing" a program for just this reason, since you weren't finding the best possible runtime.

uoaei

If you go further back, "programming" describes it precisely, and what we call "programming" today is "writing code" which can be subdivided into different kinds of "programming" such as functional, declarative, procedural, etc. But there's a lot more that fits under that umbrella.

eru

Compare also 'Linear Programming'. Or the usage of a 'TV programme' or a 'musical programme'.

bigger_cheese

Yeah, the "programming" part has always thrown me off. When I started my first engineering job, one of the guys was always talking about an "LP" model. I thought it was some deep piece of black magic until I realized it was essentially brute-searching for a solution to a series of simultaneous equations.

I think "Linear Optimization" might be a better term, or maybe "Linear Solving".

eru

Modern Linear Programming solvers are a lot more advanced than 'brute searching'. But otherwise, you are right.

tnecniv

That’s the programming part. As others have shared, the “Dynamic” part was made up by Bellman so it sounded important and politicians wouldn’t try to cut his funding. It’s often applied to dynamical systems, but that’s a coincidence.

asimpletune

This is the correct definition of dp.

cubefox

You forgot to define u and t.

tnecniv

Here, u is presumably utility and t is time

cubefox

I think utility (of what, why?) doesn't make sense in this context.

sethammons

Would it be wrong to just think "memoization" when you hear "dynamic programming"? The missing part may be intelligently breaking up the problem to use memoization?

jltsiren

Memoization is a more general technique. It's often little more than caching the results you happened to compute, in case you need them again in the future.

Dynamic programming is systematic memoization. You solve subproblems of increasing size until you reach a solution to the full problem. "Inductive algorithm" would be kind of an appropriate term, as the typical dynamic programming algorithm is effectively a proof by induction. Unfortunately, that term already has other meanings.

deaddodo

And "dynamic programming" is an insanely general term that leetcode enthusiasts have pigeonholed into meaning memoized reduction. It can describe dynamic dispatch, code morphing/rewriting, dynamic loading, etc.

I would argue "memoization", while still a bad term, is far more specific and descriptive of what's happening than "dynamic programming".

greyw

The term "Dynamic programming" comes from mathematics of the 1940s before modern programming languages even existed. It's about a certain way to do optimization. This is also how it is used by "leetcode enthusiasts" so imo they are using it correctly.

Nowadays, you could naturally confuse dynamic programming with the things you have listed.

owlstuffing

> Dynamic programming is systematic memoization.

Exactly! This should have been clarified in the op.

JohnKemeny

That's exactly how I teach dynamic programming. First solve it recursively, then add memoization (I call that top-down).

Then notice that recursion and memoization have some overhead, construct the table bottom-up, remove the recursive call, and voilà: dynamic programming.

throwaway2037

This sounds like a promising teaching technique. Do you have a slide deck to share? I am sure HN crowd would be interested to read.

globular-toast

It's in TFA...

GuB-42

In my approach to dynamic programming, memoization is step 2 of 3.

1- find a recursive algorithm

2- memoization

3- make it iterative / bottom-up

3b- if possible, optimize for space

Step 3 is the most characteristic part of dynamic programming, but if you stopped at step 2, it would still qualify, I think, but it would not be as efficient as it could be.

Another way to think of step 3 would be: memoization is about caching, is there a way to pre-fill the cache?
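Those three steps can be sketched on a minimum-coin-change example (illustrative; the coin set is an assumption):

```python
from functools import lru_cache

COINS = (1, 3, 4)  # assumed denominations; amount 0 always needs 0 coins

# Step 1: a correct but exponential recursion.
def min_coins_naive(amount: int) -> int:
    if amount == 0:
        return 0
    return 1 + min(min_coins_naive(amount - c) for c in COINS if c <= amount)

# Step 2: memoise the same recursion (top-down DP).
@lru_cache(maxsize=None)
def min_coins_memo(amount: int) -> int:
    if amount == 0:
        return 0
    return 1 + min(min_coins_memo(amount - c) for c in COINS if c <= amount)

# Step 3: make it iterative / bottom-up by pre-filling the cache in order.
def min_coins_table(amount: int) -> int:
    best = [0] * (amount + 1)
    for a in range(1, amount + 1):
        best[a] = 1 + min(best[a - c] for c in COINS if c <= a)
    return best[amount]

print(min_coins_table(6))  # 2  (3 + 3)
```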

cubefox

This makes it clear why dynamic programming is not easy. The problem occurs already in step 1 -- we naturally tend to think in loops, not recursions. So before you find a recursive algorithm, you probably come up with a loop algorithm first. Then you have to try to convert the loop into recursion, then go back to loops that use caching. And hope that the result is more efficient than the loop algorithm you started with.

WJW

Speak for yourself, there are dozens (dozens!) of us for which recursion is the more natural way to think about most algorithms. Joking aside, DP is very straightforward in more functional languages like Haskell where recursion is the default way to do iteration anyway.

GuB-42

Indeed, to me, step 1 is the hardest. Not only do you need a recursive algorithm, you also need an algorithm where memoization is possible.

Step 2 and 3 are more systematic. That is, if you understand the Fibonacci sequence example in the article, you can probably do all of the problems. It still requires training, especially if you are doing competitive programming, but it is always essentially the same thing.

Finding the algorithm is the tricky part as there is no set formula.

That's why the Fibonacci sequence is a good introduction. You know where you are starting from: step 1 is trivial, as the formula is given to you. And you know where you are going, as you probably already have a good idea of how to implement it iteratively. So you won't get lost in the problem-solving part, which may be hard and require a lot of attention, and can instead focus on the basic technique. Once that is well understood, it is time to solve actual problems, where step 1 is not trivial and intuition alone is not enough to find the final solution.

naet

Where I have the most trouble is usually going from recursive to iterative. It's pretty easy for me to understand recursion plus memoization, but when that ends up being taken off the table due to the various inefficiencies of recursion, I often get a little stuck.

kevindamm

There are dynamic programming solutions not based on memoization. See for example finding the longest common subsequence of two strings. Memoization will not help as much because you only need the table's left and up cells once.

In general, if the problem's sub-problems have a lot of overlap and the optimal subproblem must be part of the optimal overall solution, you've got an opportunity for dynamic programming. Saying memoization is the only kind of dynamic programming is like saying hash tables are the only abstract data type.

recursive

The left of the up is the up of the left.

kevindamm

Yeah, I had a momentary misgiving when I hit send and thought (_well, the cell is used twice_), but in practice it is still very different from what would be considered memoization. You can make it work with just two rows at a time, forgetting all the calculated values of the earlier rows. You can even do it in one row if you work backwards within each row. If anything, it's a cache with a very specific eviction policy.

I had a whole other paragraph where I nearly convinced myself it's the same here for LCS so I'm just going to stop myself here. If you look up any texts that describe LCS they don't refer to it as using memoization. Let's not confuse things.
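The two-rows-at-a-time idea can be sketched for the classic longest-common-subsequence table (illustrative Python; the function name is mine):

```python
def lcs_length(a, b):
    # Each cell depends only on the cell above, the cell to the left,
    # and the diagonal -- so keep just the previous and current rows
    # and let older rows be discarded.
    prev = [0] * (len(b) + 1)
    for x in a:
        curr = [0]
        for j, y in enumerate(b, start=1):
            if x == y:
                curr.append(prev[j - 1] + 1)
            else:
                curr.append(max(prev[j], curr[j - 1]))
        prev = curr
    return prev[-1]
```

The memory use is O(min-side) rather than O(rows × columns), which a generic memoization cache would not give you for free.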

undefined

[deleted]

ecshafer

Yes, it would be wrong in my book. First, an obvious counterexample: memoization can be used outside of dynamic programming. Moreover, you can do most dynamic programming algorithms by storing the results in a table and searching the table afterwards for the best answer. Memoization is basically a strategy to speed up algorithms.

sk11001

> First, an obvious counterexample: memoization can be used outside of dynamic programming

This isn't relevant; the question is whether there are dynamic programming solutions which do not involve memoization.

eru

Well, dynamic programming also tells you when you can drop your 'memoization' caches.

dps

>This year’s Advent of Code has been brutal (compare the stats of 2023 with that of 2022, especially day 1 part 1 vs. day 1 part 2).

I enjoyed completing AoC this year. While it was very clear that day 1 (esp. part 2) was significantly harder than in previous years (I wrote about this among other things [0]), OP's claim was not obviously self-evident when comparing _current_ 2022 stats to _current_ 2023 stats, since folks have had an additional full year to complete the 2022 puzzles.

I grabbed the 2022 stats from Jan 14th 2023 [1] and, indeed, the difference is quite stark. Graphing the part-two completion stats [2] for both years, there was a relatively similar starting cohort size on day 1, but 2023 looks clearly harder than 2022 up until day 15. As OP observes, the ratio [3] of folks completing part 1 but not going on to complete part 2 is much higher for a lot of days in 2023, and suggests the day 5, 10, 12, and especially day 21 part 2s were particularly difficult.

[0] https://blog.singleton.io/posts/2024-01-02-advent-of-code-20...

[1] https://web.archive.org/web/20230114172513/https://adventofc...

[2] https://blog.singleton.io/static/imgs-aoc23/completion.png

[3] https://blog.singleton.io/static/imgs-aoc23/ratios.png

Kwpolska

Early AoC was fun, you could get away without anything fancy until late in the game. Then it got harder, not fun, so I gave up and stopped touching it.

benrow

I didn't get very far into AoC this year as I ran out of time. Maybe I'll pick it up again later.

But my point is, I was surprised at how hard day 5, part 2 was. I didn't give up and solved it, but went away wondering why I'd missed something obvious and overcomplicated it. So it brings some relief to know it was "supposed" to be a bit challenging!

binglebob

This was just my personal experience (which certainly came from trying out a different language than I typically use day to day), but I'd argue that day 1 part 2 wasn't _hard_, just underspecified in the prompt. The examples given are:

  two1nine
  eightwothree
  abcone2threexyz
  xtwone3four
  4nineeightseven2
  zoneight234
  7pqrstsixteen
There is one critical example missing from this set, and you can't figure out how overlapping spelled-out digits are meant to be handled without an example like:

  oneight
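One way to handle the overlap is to scan with a lookahead regex, which finds overlapping matches; a sketch (the helper name is mine, not from the puzzle):

```python
import re

# Map spelled-out digit words to their values: "one" -> 1, ..., "nine" -> 9.
WORDS = {w: i for i, w in enumerate(
    ["one", "two", "three", "four", "five",
     "six", "seven", "eight", "nine"], start=1)}

def calibration(line):
    # A lookahead group matches at every position without consuming
    # characters, so "oneight" yields both "one" and "eight".
    pattern = r"(?=(\d|" + "|".join(WORDS) + r"))"
    digits = [int(m) if m.isdigit() else WORDS[m]
              for m in re.findall(pattern, line)]
    return digits[0] * 10 + digits[-1]
```

With this, "oneight" gives 18 rather than 11 or 88.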

qsantos

Thanks for the details. To add to this discussion, I have a script to see the progression over the days.

Looking at the last two columns, you can see how brutal 2023 was compared to 2022, especially at the beginning. In the first few days of 2022, most people kept playing, with retention higher than 80% most days, and virtually everyone solved both parts. In contrast, in 2023 only 76% of people solved day 1's part 2 after solving part 1, and many people gave up on days 3 and 5.

Interestingly, the numbers for the last few days are not that much lower, which can be explained by the fact that AoC 2023 is more recent than AoC 2022, like you said. My interpretation is that one group of people will get through all the challenges regardless of the difficulty (to an extent, of course), while many others give up when they realize it will take too much of their time.

    Stats for year 2022 of Advent of Code
    -------------------------------------
    
    Day   Both puzzles   One puzzle       Total   Rel. puzzle 1/2   Rel. day before
      1        280,838       15,047     295,885              95 %             100 %
      2        232,752       12,403     245,155              95 %              83 %
      3        200,016       11,392     211,408              95 %              86 %
      4        184,435        3,734     188,169              98 %              92 %
      5        157,392        3,116     160,508              98 %              85 %
      6        155,921        1,602     157,523              99 %              99 %
      7        113,241        2,592     115,833              98 %              73 %
      8        107,224        7,659     114,883              93 %              95 %
      9         82,414       11,449      93,863              88 %              77 %
     10         85,075        5,511      90,586              94 %             103 %
     11         68,838        9,258      78,096              88 %              81 %
     12         59,253        1,061      60,314              98 %              86 %
     13         51,512        1,220      52,732              98 %              87 %
     14         49,051          991      50,042              98 %              95 %
     15         39,677        5,773      45,450              87 %              81 %
     16         23,298        5,650      28,948              80 %              59 %
     17         21,525        6,237      27,762              78 %              92 %
     18         25,420        4,927      30,347              84 %             118 %
     19         17,516          928      18,444              95 %              69 %
     20         22,141        1,003      23,144              96 %             126 %
     21         23,022        3,060      26,082              88 %             104 %
     22         15,393        5,083      20,476              75 %              67 %
     23         18,531          254      18,785              99 %             120 %
     24         16,419          252      16,671              98 %              89 %
     25         13,192        7,473      20,665              64 %              80 %
    ~/src/advent-of-code% ./stats.py 2023
    Stats for year 2023 of Advent of Code
    -------------------------------------
    
    Day   Both puzzles   One puzzle       Total   Rel. puzzle 1/2   Rel. day before
      1        230,737       73,941     304,678              76 %             100 %
      2        196,352        9,256     205,608              95 %              85 %
      3        130,406       19,913     150,319              87 %              66 %
      4        130,271       17,691     147,962              88 %             100 %
      5         80,255       31,029     111,284              72 %              62 %
      6        103,358        1,918     105,276              98 %             129 %
      7         81,905        7,308      89,213              92 %              79 %
      8         74,034       14,707      88,741              83 %              90 %
      9         76,438        1,229      77,667              98 %             103 %
     10         48,313       17,054      65,367              74 %              63 %
     11         57,339        2,386      59,725              96 %             119 %
     12         30,985       14,440      45,425              68 %              54 %
     13         38,217        5,223      43,440              88 %             123 %
     14         36,500        7,457      43,957              83 %              96 %
     15         40,881        4,156      45,037              91 %             112 %
     16         35,347        1,023      36,370              97 %              86 %
     17         24,014        1,097      25,111              96 %              68 %
     18         24,799        4,937      29,736              83 %             103 %
     19         22,525        7,197      29,722              76 %              91 %
     20         18,287        4,398      22,685              81 %              81 %
     21         14,311       10,149      24,460              59 %              78 %
     22         15,830          988      16,818              94 %             111 %
     23         14,562        2,964      17,526              83 %              92 %
     24         11,864        4,918      16,782              71 %              81 %
     25         10,522        3,048      13,570              78 %              89 %

ctk_25

Would recommend the DPV Algorithms book and Georgia Tech's graduate algorithms lectures on Udacity. The way to master dynamic programming is practice, practice, practice solving problems.

62951413

Consider https://www.amazon.com/Dynamic-Programming-Coding-Interviews... for the most pedagogical approach I have seen.

mbwgh

I find it disappointing that the "coming up with a recurrence formula" part is always glossed over, when to me it is very much the hard part.

qsantos

I understand! I do not think there is a magic formula for this; it only comes with a lot of practice.


Dynamic programming is not black magic - Hacker News