
qnleigh

I am kind of amazed at how many commenters respond to this result by confidently asserting that LLMs will never generate 'truly novel' ideas or problem solutions.

> AI is a remixer; it remixes all known ideas together. It won't come up with new ideas

> it's not because the model is figuring out something new

> LLMs will NEVER be able to do that, because it doesn't exist

It's not enough to say 'it will never be able to do X because it's not in the training data,' because we have countless counterexamples to this statement (e.g. 167,383 * 426,397 = 71,371,609,051, or the above announcement). You need to say why it can do some novel tasks but could never do others. And it should be clear why this post or others like it don't contradict your argument.

If you have been making these kinds of arguments against LLMs and acknowledge that novelty lies on a continuum, I am really curious why you draw the line where you do. And most importantly, what evidence would change your mind?

qnleigh

I might as well answer my own question, because I do think there are some coherent arguments for fundamental LLM limitations:

1. LLMs are trained on human-quality data, so they will naturally learn to mimic our limitations. Their capabilities should saturate at human or maybe above-average human performance.

2. LLMs do not learn from experience. They might perform as well as most humans on certain tasks, but a human who works in a certain field/code base etc. for long enough will internalize the relevant information more deeply than an LLM.

However I'm increasingly doubtful that these arguments are actually correct. Here are some counterarguments:

1. It may be more efficient to just learn correct logical reasoning, rather than to mimic every human foible. I stopped believing this argument when LLMs got a gold medal at the Math Olympiad.

2. LLMs alone may suffer from this limitation, but RL could change the story. People may find ways to add memory. Finally, it can't be ruled out that a very large, well-trained LLM could internalize new information as deeply as a human can. Maybe this is what's happening here:

https://metr.org/blog/2025-03-19-measuring-ai-ability-to-com...

scoofy

I studied philosophy, focusing on the analytic school and proto-computer science. LLMs are going to force many people to start getting a better understanding of what "Knowledge" and "Truth" are, especially the distinction between deductive and inductive knowledge.

Math is a perfect field for machine learning to thrive in because, theoretically, all the information ever needed is tied up in the axioms. In the empirical world, however, knowledge only moves at the speed of experimentation, which is an entirely different framework and much, much slower, even if there are some areas where it can catch up on previous experimental outcomes.

Having a focus in philosophy of language is something I genuinely never thought would be useful. It's really been helpful with LLMs, but probably not in the way most people think. I'd say folks who are curious should all be reading Quine, Wittgenstein's Investigations, and probably Austin.

jhanschoo

I think we may have similar perspectives. Regarding empirical knowledge, consider when the knowledge concerns chaotic systems. Characterize chaotic systems, at a minimum, as systems where imprecise observations of the past and present, while useful for predicting the future, nevertheless see their errors grow very quickly as you try to predict a future state. For such systems, prediction is indeed difficult.

One domain of knowledge I think you have yet to mention: fundamentally computationally hard problems. The problems of this kind that come to mind, which are nevertheless of practical benefit, are physics simulations, materials simulations, and fluid simulations; but there exist problems that are more provably computationally difficult. With these systems, even if you have one infinitely precise observation of a deterministic system, computing a future state is difficult as well, even though once computed, memorizing it seems comparatively trivial.

qnleigh

Where can I read about how LLMs have changed epistemology? Is there a field of philosophy that tries to define and understand 'intelligence'? That sounds very interesting.

radio879

Also, we can do thought experiments, simulations in our heads, that are often as good as doing them for real - it has limitations and isn't perfect, though. But it does often work. Einstein used to purposely doze off in a weird position so that something would hit his leg, or something like that, to nudge him half awake so he could remember his half-dreaming state - which is where he discovered some things.

thaumasiotes

> Math is a perfect field for machine learning to thrive because theoretically, all the information ever needed is tied up in the axioms.

Not really; the normal way that math progresses, just like everything else, is that you get some interesting results, and then you develop the theoretical framework. We didn't receive the axioms; we developed them from the results that we use them to prove.

hnfong

> distinction between deductive and inductive knowledge

There's also intuitive knowledge btw.

Anyway, the recent developments in AI make a lot of very interesting things practically possible. For example, our society is going to want a way to reliably tell whether something is AI generated, and a failure to find one pretty much settles the empirical part of the Turing test question. Alternatively, if we actually find something that AI can't reliably mimic in humans, that's going to be a huge finding. With millions of people wondering whether posts on social media are AI generated, we have inadvertently conducted the largest-scale Turing test ever.

The fact that AI seems to be able to (digitally) do anything we ask for is also very interesting. If humans are not bogged down by the small details or cost of implementation concerns, and we can just say what we want and get what we wished for (digitally), what level of creativity can we reach?

Also once we get the robots to do things in the physical space...

porphyra

There are ways to go beyond the human-quality-data limitation. AI can be trained on data of better than average human quality, because many problems have solutions that are easy to verify. For example, in theory, reinforcement learning with an automatic grader on competitive programming problems can lead to an LLM that is better than humans at them.
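
(A minimal sketch of that grader-in-the-loop idea; `model.sample` and `model.update` are hypothetical stand-ins, not any real library's API:)

  # Toy RL-from-verifiable-rewards loop. The automatic grader,
  # not human-quality text, sets the ceiling on performance.
  def grade(solution, tests):
      return 1.0 if all(t(solution) for t in tests) else 0.0

  def train_step(model, problem, tests, k=8):
      samples = [model.sample(problem) for _ in range(k)]
      rewards = [grade(s, tests) for s in samples]
      model.update(samples, rewards)  # reinforce the passing samples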

It's also possible that there can be emergent capabilities. Perhaps a little obtuse, but you can say that humans are trained on human-quality data too and yet brilliant scientists and creative minds can rise above the rest of us.

xpl

> Their capabilities should saturate at human or maybe above-average human performance

LLMs do have superhuman reasoning speed and superhuman dedication. Speed is something you can scale, and at some point quantity can turn into quality. Much of the frontier work done by humans is just dedication, luck, and remixing other people's ideas ("standing on the shoulders of giants"), isn't it? All of this is exactly what you can scale by having restless hordes of fast-thinking agents, even if each of those agents is intellectually "just above average human".

aspenmartin

> 1. LLMs are trained on human-quality data, so they will naturally learn to mimic our limitations. Their capabilities should saturate at human or maybe above-average human performance.

Why oh why is this such a commonly held belief. RL in verifiable domains being the way around this is the entire point. It’s the same idea behind a system like AlphaGo — human data is used only to get to a starting point for RL. RL will then take you to superhuman performance. I’m so confused why people miss this. The burden of proof is on people who claim that we will hit some sort of performance wall because I know of absolutely zero mechanisms for this to happen in verifiable domains.

qnleigh

I did mention RL as a valid counterargument in my comment.

I agree that in verifiable domains RL systems should be able to blow past human performance, and this might already be happening. There's another interesting question as to how much RL improves performance on non-verifiable domains. I'm not taking a stance either way, I just think it's an interesting question.

oofbey

The idea that they don’t learn from experience might be true in some limited sense, but ignores the reality of how LLMs are used. If you look at any advanced agentic coding system the instructions say to write down intermediate findings in files and refer to them. The LLM doesn’t have to learn. The harness around it allows it to. It’s like complaining that an internal combustion engine doesn’t have wheels to push it around.

Yizahi

LLMs can generate anything by design. LLMs can't understand what they are generating, so it may be true, it may be wrong, it may be novel, or it may be a known thing. It doesn't discern between them, just looks for the best statistical fit.

The core of the issue lies in our human language and our human assumptions. We humans have implicitly assigned the phrases "truly novel" and "solving an unsolved math problem" a certain meaning in our heads. Some of us, at least, think that truly novel means something truly novel and important, something significant. Like, I don't know, finding a high-temperature superconductor formula or creating a new drug, etc. Something which involves real intelligent thinking, and not randomizing possible solutions until one lands. But formally, there can be a truly novel way to pack the most computer cables into a drawer, or a truly novel way to tie shoelaces, or indeed a truly novel way to solve some arbitrary math equation with enormous numbers. These are formally novel things, but we never really needed any of them, and so relegated these "issues" to the deepest backlog possible. Using LLMs we can scour for solutions to many such problems, but they are not that impressive in the first place.

qnleigh

> It doesn't discern between them, just looks for the best statistical fit

Of course at the lowest level, LLMs are trained on next-token prediction, and on the surface, that looks like a statistics problem. But this is an incredibly reductionist viewpoint and I don't see how it makes any empirically testable predictions about their limits. LLMs 'learned' a lot of math and science in this way.

> "truly novel" and "solving unsolved math problem"

OK again if novelty lies on a continuum, where do you draw the line? And why is it correct to draw it there and not somewhere else? It seems like you are just naming exceptionally hard research problems.

coldtea

>LLMs 'learned' a lot of math and science in this way.

Did they? Or is it begging the question?

logicprog

If LLMs can come up with truly novel solutions to things, and you have a verification loop to ensure that they are actual, proper solutions, I don't understand why you think they could never come up with solutions to impressive problems, especially considering the thread we are literally on right now. At this point, the claim that they will always be limited to coming up with truly novel solutions to only uninteresting problems seems like pure assertion.

eru

"Truly novel" is fast becoming a True Scotsman.

Yizahi

It probably can, but it won't realize that, and it won't be efficient at it. An LLM can shuffle tokens for an enormous number of tries and eventually come up with something super impressive, though as you yourself mentioned, we would need a mandatory verification loop to filter slop from good output, and how to do that outside of some limited areas is a big question. But assume we have these verification loops and are running LLMs for years to look for something novel. It's like running the energy grid of a small country to change a few dozen database entries per hour. Yes, we can do that, but it's a weird thing to do. But it is novel, no argument about that. Just inefficient.

We never had a big demand to define how humans are intelligent or conscious etc., since it is too hard, and it was relegated to a few frontier researchers. With LLMs we now do have such demand, but the science wasn't ready. So we are all collectively searching in the dark, trying to figure out whether we are different from these programs, and if so, how. I certainly can't do that. I do know that LLMs are useful, but I also suspect that AI (aka AGI nowadays) has not yet been reached.

ggamezar

> It doesn't discern between them, just looks for the best statistical fit.

Why is this not true for humans?

Yizahi

We can't tell yet if that is true, partially true, or false for humans. We do know that an LLM can't do anything else besides that (I mean as a fundamental operating principle).

nprateem

> These are formally novel things, but we never really needed any of them

The history of science and maths is littered with seemingly useless discoveries being pivotal as people realised how they could be applied.

It's impossible to tell what we really "need".

throw310822

> LLMs can't understand what they are generating

You don't understand what "understanding" means. I'm sure you can't explain it. You are probably just hallucinating the feeling of understanding it.

> Some of us at least, think that truly novel means something truly novel and important, something significant. Like, I don't know...

Yeah.

zamalek

LLMs are notoriously terrible at multiplying large numbers: https://claude.ai/share/538f7dca-1c4e-4b51-b887-8eaaf7e6c7d3

> Let me calculate that. 729,278,429 × 2,969,842,939 = 2,165,878,555,365,498,631

Real answer is: https://www.wolframalpha.com/input?i=729278429*2969842939

> 2 165 842 392 930 662 831

Your example seems short enough to not pose a problem.
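
(For reference, exact big-integer arithmetic is a one-liner in Python, which is what a tool call buys you; this also checks the GP's example:)

  # Python integers are arbitrary precision, so verifying is trivial.
  print(729_278_429 * 2_969_842_939)  # 2165842392930662831
  print(167_383 * 426_397)            # 71371609051, the GP's example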

luma

Modern LLMs, just like everyone reading this, will instead reach for a calculator to perform such tasks. I can't do that in my head either, but a Python script can, so that's what any tool-using LLM will (and should) do.

zamalek

This is special pleading.

Long multiplication is a trivial form of reasoning that is taught at the elementary level. Furthermore, the LLM isn't doing things "in its head": the headline feature of GPT LLMs is attention across all previous tokens, so all of its "thoughts" are on paper. That was Opus with extended reasoning; it had every opportunity to get it right, but didn't. There are people who can quickly multiply such numbers in their head (I am not one of them).
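
(For the record, the elementary-school procedure in question fits in a few lines; a sketch in Python, not a claim about how models internally compute:)

  # Schoolbook long multiplication over digit lists, exactly the
  # procedure taught at the elementary level.
  def long_multiply(a: int, b: int) -> int:
      da = [int(d) for d in str(a)][::-1]
      db = [int(d) for d in str(b)][::-1]
      out = [0] * (len(da) + len(db))
      for i, x in enumerate(da):
          carry = 0
          for j, y in enumerate(db):
              total = out[i + j] + x * y + carry
              out[i + j] = total % 10
              carry = total // 10
          out[i + len(db)] += carry
      return int("".join(map(str, out[::-1])))

  assert long_multiply(729278429, 2969842939) == 2165842392930662831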

LLMs don't reason.

jibal

LLMs don't use tools. Systems that contain LLMs are programmed to use tools under certain circumstances.

lern_too_spel

This hasn't been true for a while now.

I asked Gemini 3 Thinking to compute the multiplication "by hand." It showed its work and checked its answer by casting out nines and then by asking Python.

Sonnet 4.6 with Extended Thinking on also computed it correctly with the same prompt.
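
(Casting out nines, mentioned above, is itself a one-line sanity check: any number is congruent to its digit sum mod 9, so the digit roots of the factors and the product have to agree. It catches most slips, though not all. A sketch:)

  # A claimed product must satisfy (a*b) mod 9 == claimed mod 9.
  def passes_nines_check(a: int, b: int, claimed: int) -> bool:
      return (a % 9) * (b % 9) % 9 == claimed % 9

  # Claude's wrong answer from upthread fails the check...
  print(passes_nines_check(729278429, 2969842939, 2165878555365498631))  # False
  # ...while the correct product passes.
  print(passes_nines_check(729278429, 2969842939, 2165842392930662831))  # True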

peacebeard

This doesn’t address the author’s point about novelty at all. You don’t need 100% accuracy to have the capability to solve novel problems.

zamalek

It does address the GP comment about math.

throwawayk7h

I thought it might do better if I asked it to do long-form multiplication specifically rather than trying to vomit out an answer without any intermediate tokens. But surprisingly, I found it doesn't do much better.

zamalek

Other comments indicate that asking it to do long multiplication does work, but the varying results make sense: LLMs are probabilistic, and you probably rolled an unlikely result.

LatencyKills

I've been working on a utility that lets me "see through" app windows on macOS [1] (I was a dev on Apple's Xcode team and have a strong understanding of how to do this efficiently using private APIs).

I wondered how Claude Code would approach the problem. I fully expected it to do something most human engineers would do: brute-force with ScreenCaptureKit.

It almost instantly figured out that it didn't have to "see through" anything and (correctly) dismissed ScreenCaptureKit due to the performance overhead.

This obviously isn't a "frontier" type problem, but I was impressed that it came up with a novel solution.

[1]: https://imgur.com/a/gWTGGYa

skc

That's actually pretty cool. What made you think of doing this in the first place?

LatencyKills

Thanks! I've been doing a lot of work on a laptop screen (I normally work on an ultrawide) and got tired of constantly switching between windows to find the information I need.

I've also added the ability to create a picture-in-picture section of any application window, so you can move a window to the background while still seeing its important content.

I'll probably do a Show HN at some point.

Yizahi

Was it a novel solution for you, or for everyone? Because that's a pretty big difference. A lot of stuff that's novel to me would be something someone has been doing for decades somewhere.

LatencyKills

Unless you worked on the macOS content server directly you’d have no idea that my solution was even possible.

The fact that Claude skipped over all the obvious solutions is why I used the word novel.

saagarjha

Why is ScreenCaptureKit a bad choice for performance?

LatencyKills

Because you can't control what the content server is doing. SCK doesn't care if you only need a small section of a window: it performs multiple full window memory copies that aren't a problem for normal screen recorders... but for a utility like mine, the user needs to see the updated content in milliseconds.

Also, as I mentioned above, when using SCK, the user cannot minimize or maximize any "watched" window, which is, in most cases, a deal-breaker.

My solution runs at under 2% CPU utilization because I don't have to first receive the full window content. SCK was not designed for this use case at all.

stavros

What was the solution?

LatencyKills

Well, I'm not going to share either solution, as this is actually a pretty useful utility that I plan on releasing, but the short answer is: 1) don't use ScreenCaptureKit, and 2) take advantage of what CGWindowListCreateImage() offers through the content server. This is a simple IPC mechanism that does not trigger all the SCK limitations (i.e., no multi-space or multi-desktop support). In fact, when using SCK, the user cannot even minimize the "watched" window.

Claude realized those issues right from the start.

One of the trickiest parts is tracking the window content while the window is moving - the content server doesn't, natively, provide that information.

bluecalm

>>AI is a remixer; it remixes all known ideas together. It won't come up with new ideas

I always found this argument very weak. There isn't that much that's truly new anyway. Creativity is often about mixing old ideas, and computers can do that faster than humans if they have a good framework. Especially with something as simple as math - a limited set of formal rules and easy-to-verify results - I find the belief that computers won't beat humans at it very naive.

energy123

> 167,383 * 426,397 = 71,371,609,051 ... You need to say why it can do some novel tasks but could never do others.

Model interpretability gives us the answers. The reason LLMs can (almost) do new multiplication tasks is because it saw many multiplication problems in its training data, and it was cheaper to learn the compressed/abstract multiplication strategies and encode them as circuits in the network, rather than memorize the times tables up to some large N. This gives it the ability to approximate multiplication problems it hasn't seen before.

nearbuy

> This gives it the ability to approximate multiplication problems it hasn't seen before.

More than approximate. It straight up knows the algorithms and will do arbitrarily long multiplications correctly. (Within reason. Obviously it couldn't do a multiplication so large the reasoning tokens would exceed its context window.)

I had ChatGPT 5.4 do 1566168165163321561 * 115616131811365737 without tools. After multiplying out a lot of coefficients, it eventually answered 181074305022287409585376614708755457, which is correct.

At this point, it's less misleading to say it knows the algorithm.
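
(That claimed product checks out exactly; Python's big integers make the verification a one-liner:)

  a = 1566168165163321561
  b = 115616131811365737
  assert a * b == 181074305022287409585376614708755457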

qnleigh

Yup, I agree with this. So based on this, where do you draw the line between what will be possible and what will not be possible?

bananamogul

Why are we reducing AIs to LLMs?

Claude, OpenAI, etc.'s AIs are not just LLMs. If you ask it to multiply something, it's going to call a math library. Go feed it a thousand arithmetic problems and it'll get them 100% right.

The major AIs are a lot more than just LLMs. They have access to all sorts of systems they can call on. They can write code and execute it to get answers. Etc.

adam_arthur

Which is exactly how humans learn many things too.

E.g. observing a game being played to form an understanding of the rules, rather than reading the rulebook

Or: Observing language as a baby. Suddenly you can speak grammatically correctly even if you can't explain the grammar rules.

SequoiaHope

Most inventions are an interpolation of three existing ideas. These systems are very good at that.

mikkupikku

My take as well. Furthermore, most innovations come relatively shortly after their technological prerequisites have been met, so that suggests the "novelty space" that humans generally explore is a relatively narrow band around the current frontier. Just as humans can search through this space, so too should machines be capable of it. It's not an infinitely unbounded search which humans are guided through by some manner of mystic soul or other supernatural forces.

ryandvm

Indeed. Every time someone complains that LLMs can't come up with anything new, I'm assaulted by the depressing realization that neither can I.

fsflover

I can't even find a good example of an invention that is not an interpolation.

franktankbank

The inclined plane, the wheel, shall I keep going?

PUSH_AX

The hardest part about any creativity is hiding your influences

busyant

This is poetry.

virgildotcodes

I don't know why I am still perpetually shocked that the default assumption is that humans are somehow unique.

It's this pervasive belief that underlies so much discussion around what it means to be intelligent. The null hypothesis goes out the window.

People constantly make comments like "well it's just trying a bunch of stuff until something works" and it seems that they do not pause for a moment to consider whether or not that also applies to humans.

If they do, they apply it in only the most restrictive way imaginable, some 2 dimensional caricature of reality, rather than considering all the ways that humans try and fail in all things throughout their lifetimes in the process of learning and discovery.

There's still this seeming belief in magic and human exceptionalism, deeply held, even in communities that otherwise tend to revolve around the sciences and the empirical.

snemvalts

The ability to learn and infer without absorbing millions of books and all the text on the internet really does make us special. And only at 20 watts!

famouswaffles

Last I checked, humans didn't pop into existence doing that. It happened after billions of years of brute-force, trial-and-error evolution. So well done for falling into the exact same trap the OP cautions against. Intelligence from scratch requires a mind-boggling amount of resources, and humans were no different.

sweezyjeezy

To be fair, it is still pretty remarkable what the human brain does, especially in early years - there is no text embedded in the brain, just a crazily efficient mechanism to learn hierarchical systems. As far as I know, AI intelligence cannot do anything similar to this - it generally relies on giga-scaling, or finetuning tasks similar to those it already knows. Regardless of how this arose, or if it's relevant to AGI, this is still a uniqueness of sorts.

luma

And then an 18-to-20-something-year training run is required for each individual instance.

suddenlybananas

Do you think evolutionary pressures are the best explanation for why humans were able to posit the Poincaré conjecture and solve it? While our mental architecture evolved over a very long time, we still learn from minuscule amounts of data compared to LLMs.

gf000

How is that relevant? The relevant starting point for the human brain is birth (or some time before that). We compare that with an LLM model doing inference. The training part is irrelevant, the same way the human brain's evolution is.

virgildotcodes

We have a tremendous amount of raw information flowing through our brains 24/7 from before we are born, from the external world through all our senses, and from within our minds as they attempt to make sense of that information, make predictions, reason about our existence, hallucinate alternative realities, etc.

If you were able to somehow capture, in full detail, all the information you've had access to by the age of, say, 25, it would likely dwarf the amount of information in millions of books by several orders of magnitude.

When you are 25 years old and are presented with a strange-looking ball and told to throw it into a strange-looking basket for the first time, you are relying on an unfathomable amount of information turned into knowledge, and on countless prior experiments that you've accumulated and exercised to that point, relating to the way your body and the world work.

gf000

Humans are "multi-modal". Sure we get plenty of non-textual information, but LLMs were trained on basically every human-written world ever. They definitely see many orders of magnitude more language than any human has ever seen. And yet humans get fluent based after 3+ years.

acuozzo

20 watts ignores the startup cost: Tens of millions of calories. Hundreds of thousands of gallons of water. Substantial resources from at least one other human for several years.

whyenot

Just an interesting thought experiment: if you took all the sensory information that a child experiences through their senses (sight, hearing, smell, touch, taste) between, say, birth and age five, how many books' worth of data would that be? I asked Claude, and its estimate was about 200 million books. Maybe that number is off by an order of magnitude either way. ...but then again, Claude is only three years old, not five.
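
(A crude back-of-envelope version of that estimate, counting the visual stream alone; every number here is a rough assumption, e.g. the optic nerve is often estimated to carry on the order of 10 Mbit/s:)

  bits_per_sec = 10e6             # rough optic-nerve bandwidth estimate
  seconds = 5 * 365 * 24 * 3600   # birth to age five
  bytes_per_book = 1e6            # ~1 MB of text per book
  print(bits_per_sec / 8 * seconds / bytes_per_book)  # ~2e8 "books"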

throw310822

To be fair, the knowledge embedded in an LLM is also, at this point, a couple of orders of magnitude (at least) larger than what the average human being can retain. So it's not like all those books and all that text on the internet are used just to bring them to our level; they go way beyond.

vbezhenar

Now multiply that by 7 billion to distill the one who will solve a frontier math problem.

stavros

Most people have absorbed way too few books to be able to infer properly. Hell, most people are confused by TV remotes.

wrqvrwvq

It's only because humans came up with a problem, worked with the AI, and verified the result that this achievement means anything at all. An AI "checking its own work" is practically irrelevant when they all seem to go back and forth on whether you need the car at the carwash to wash the car. Undoubtedly people have been passing this set of problems to AIs for months or years and have gotten back either incorrect results or results they didn't understand; either way, a human confirmation is required. AI hasn't presented any novel problems, other than the multitudes of social problems described elsewhere. AI doesn't pursue its own goals and wouldn't know whether they've "actually been achieved".

This is to say nothing of the cost of this small but remarkable advance: trillions of dollars in training and inference, and so far we have a couple of minor (trivial?) math solutions. I'm sure if someone had bothered funding a few PhDs for a year, we could have found this without AI.

famouswaffles

>It's only because humans came up with a problem, worked with the ai and verified the result that this achievement means anything at all.

Replace AI with human here and that's... just how collaborative research works lol.

senordevnyc

The only things moving faster than AI are the goalposts in conversations like this. Now we're at "sure, AI can solve novel problems, but it can't come up with the problems themselves on its own!"

I'm curious to see what the next goalpost position is.

wrqvrwvq

> I'm curious to see what the next goalpost position is.

I am as well. That's the point. AI can do some things well and other things better than humans, but so can a garden hose and all technology. Is AI just a tool, or is it the future of all work? By setting goalposts we can see whether or not it is living up to the hype that we're collectively spending trillions on.

The garden hose manufacturers aren't claiming that they're going to replace all human workers, so we don't set those kinds of goalposts to measure whether it's doing that.

hodgehog11

Funding a few PhDs for a year costs orders of magnitude more than it cost to solve this problem in inference. Also, this has been active research for some time. Or I guess the people working on it are just not as good as a random bunch of students? It's amazing the lengths people will go to maintain their worldview, even if it means belittling hardworking people.

I take it you're not a mathematician. This is an achievement, regardless of whether you like LLMs or not, so let's not belittle the people working on these kinds of problems please.

famouswaffles

>It's amazing the lengths that people go to maintain their worldview, even if it means belittling hardworking people.

This is the most baffling and ironic aspect of these discussions. Human exceptionalism is what drives these arguments, but the machines are becoming so good that you can no longer make them without putting down even the top-percentile humans in the process. The same thing is happening all over this thread (https://news.ycombinator.com/item?id=47006594). And it's like they don't even realize it.

pks016

> Funding a few PhDs for a year costs orders of magnitude more than it did to solve this problem in inference costs.

I don't think PhD students are sitting around solving one problem for a year. Also, PhD students are way cheaper.

wrqvrwvq

Inference costs are heavily subsidised. My point was that we've spent trillions collectively on AI, and so far we have a few new proofs. It's been active research, but it's estimated that only 5-10 people are even aware that it is a problem. I wrote "math PhDs", not "random students", but regardless, I don't know how you interpreted my statement that people could have discovered this without AI as "belittling the people working on this". You seem like a stupid person with an out-of-control chatbot that can't comprehend basic arguments.

staticassertion

> I don't know why I am still perpetually shocked that the default assumption is that humans are somehow unique.

Because, empirically, we have numerous unique and differentiable qualities, obviously. Plenty of time goes into understanding this; we have the young but rigorous fields of neuroscience and cognitive science.

Unless you mean "fundamentally unique" in some way that would persist - like "nothing could ever do what humans do".

> People constantly make comments like "well it's just trying a bunch of stuff until something works" and it seems that they do not pause for a moment to consider whether or not that also applies to humans.

I frankly doubt it applies to either system.

I'm a functionalist so I obviously believe that everything a human brain does is physical and could be replicated using some other material that can exhibit the necessary functions. But that does not mean that I have to think that the appearance of intelligence always is intelligence, or that an LLM/ Agent is doing what humans do.

famouswaffles

>But that does not mean that I have to think that the appearance of intelligence always is intelligence, or that an LLM/ Agent is doing what humans do.

You can think whatever you want, but an untestable distinction is an imaginary one.

staticassertion

First of all, that's not true. Not every position has to be empirically justified. I can reason about a position in all sorts of ways without testing. Here's an obvious example that requires no test at all:

1. Functional properties seem to arise from structural properties

2. Brains and LLMs have radically different structural properties

3. Two constructs with radically, fundamentally different structural properties are less likely to have identical functional properties

Therefore, my confidence in the belief that brains and LLMs should have identical functional properties is lowered by some amount, perhaps even just ever so slightly.

Not something I feel like fleshing out or defending, just an example of how I could reason about a position without testing it.

Second, I never said it wasn't testable.

stavros

No, but it does mean that you should know we don't understand what intelligence is, and that maybe LLMs are actually intelligent and humans have the appearance of intelligence, for all we know.

staticassertion

You're just defining intelligence as "undefined", which okay, now anything is anything. What is the point of that?

Indeed, there's quite a lot of work that's been done on what these terms mean. The fields of neuroscience and cognitive science have contributed a lot to the area, and obviously there are major areas of philosophy that discuss how we should frame the conversation or seek to answer questions.

We have more than enough, trivially, to say that human intelligence is distinct, so long as we take on basic assertions like "intelligence is related to brain structures" since we know a lot about brain structures.

conz

Re: "I don't know why I am still perpetually shocked that the default assumption is that humans are somehow unique."

Perhaps this might better help you understand why this assumption still holds: https://en.wikipedia.org/wiki/Orchestrated_objective_reducti...

throw310822

"Controversial theory justifies assumption". Because humans never hallucinate.

staticassertion

It doesn't. I actually completely reject that theory, and it's nice to see that Wikipedia notes that it is "controversial". There are extremely good reasons to reject this theory. For one thing, any quantum effects are going to be quite tiny/trivial, because the brain is too large, hot, wet, etc., to see larger effects; so you have to somehow make the leap from "tiny effects that last for no time at all" to "this matters fundamentally in some massive way".

It likely requires rejection of functionalism, or the acceptance that quantum states are required for certain functions. Both of those are heavy commitments with the latter implying that there are either functions that require structures that can't be instantiated without quantum effects or functions that can't be emulated without quantum effects, both of which seem extremely unlikely to me.

Probably for the far more important reason, it doesn't solve any problem. It's just "quantum woo, therefor libertarian free will" most of the time.

It's mostly garbage, maybe a tiny tiny bit of interesting stuff in there.

It also would do nothing to indicate that human intelligence is unique.

nicman23

it is not the assumption that humans are unique. it is that statistical models cannot really think out of the box most of the time

stavros

And you know that humans aren't statistical models how?

nicman23

because they would be more logical

slopinthebag

> I don't know why I am still perpetually shocked that the default assumption is that humans are somehow unique.

Uh, because up until and including now, we are...?

virgildotcodes

Every living thing on Earth is unique. Every rock is unique in virtually infinite ways from the next otherwise identical rock.

There are also a tremendous number of similarities between all living things and between rocks (and between rocks and living things).

Most ways in which things are unique are arguably uninteresting.

The default mode, the null hypothesis should be to assume that human intelligence isn't interestingly unique unless it can be proven otherwise.

In these repeated discussions around AI, there is criticism over the way an AI solves a problem, without any actual critical thought about the way humans solve problems.

The latter is left up to the assumption that "of course humans do X differently" and if you press you invariably end up at something couched in a vague mysticism about our inner-workings.

Humans apparently create something from nothing, without the recombination of any prior knowledge or outside information, and they get it right on the first try. Through what, divine inspiration from the God who made us and only us in His image?

slopinthebag

I doubt you can even define intelligence sufficiently to argue this point, since that's an ongoing debate without a resolution thus far.

But you claimed that humans aren't unique. I think it's pretty obvious we are on many dimensions including what you might classify as "intelligence". You don't even necessarily have to believe in a "soul" or something like that, although many people do. The capabilities of a human far surpass every single AI to date, and much more efficiently as well. That we are able to brute-force a simulacrum of intelligence in a few narrow domains is incredible, but we should not denigrate humans when celebrating this.

> There's still this seeming belief in magic and human exceptionalism, deeply held, even in communities that otherwise tend to revolve around the sciences and the empirical.

Do you ever wonder why that is? I often wonder why tech has so many reductionist, materialist, and quite frankly anti-human, thinkers.

gf000

Humans are obviously unique in an interesting way. People only "move the goalposts" because whether humans can do some great stuff is not the interesting question; the interesting question is where the boundary is (whether against animals or AI).

Some example goals which make humans trivially superior (in terms of intelligence): the invention of nuclear bombs/plants, the theory of relativity, etc.

Validark

I have long said that I would remain an AI doubter until AI could print out the answers to hard problems, or to ones requiring tons of innovation. Assuming this is verified to be correct (not by AI), then I just became a believer. I would like to see a few more AI inventions to know for sure, but wow, it really is a new and exciting world. I really hope we use this intelligence resource to make the world better.

snemvalts

Math and coding competition problems are easier to train on because of strict rules and cheap verification. But once you go beyond that to less defined things, such as code quality, where even humans have a hard time putting down concrete axioms, models start to hallucinate more and become less useful.

We are missing the value function that allowed AlphaGo to go from a mid-range player trained on human moves to superhuman by playing itself. As we have only made progress on unsupervised learning, and RL is constrained as above, I don't see this getting better.

NitpickLawyer

> I don't see this getting better.

We went from 2 + 7 = 11 to "solved a frontier math problem" in 3 years, yet people don't think this will improve?

datsci_est_2015

I’ve seen this style of take so much that I’m dying for someone to name a logical fallacy for it, like “appeal to progress” or something.

Step away from LLMs for a second and recognize that “Yesterday it was X, so today it must be X+1” is such a naive take and obviously something that humans so easily fall into a trap of believing (see: flying cars).

snemvalts

The scaling law is a power law, requiring orders of magnitude more compute and data for better accuracy from pre-training. Most companies have maxed it out.

For RL, we are arriving at a similar point https://www.tobyord.com/writing/how-well-does-rl-scale

The next stop is inference scaling, with longer context windows and longer reasoning. But instead of being a one-off training cost, it becomes a running cost.

In essence we are chasing ever smaller gains in exchange for exponentially increasing costs. This energy will run out. There needs to be something completely different from LLMs for meaningful further progress.
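
(Schematically, with an invented exponent just for illustration: under a power law, each fixed improvement in loss costs a multiplicative increase in compute.)

  # Illustrative only; the exponent is made up, not taken from any paper.
  alpha = 0.05
  for factor in (1, 10, 100, 1000):
      print(f"{factor:>5}x compute -> relative loss {factor ** -alpha:.3f}")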

Validark

I tend to disagree that improvement is inherent. Really I'm just expressing an aesthetic preference when I say this, because I don't disagree that a lot of things improve. But it's not a guarantee, and it does take people doing the work and thinking about the same thing every day for years. In many cases there's only one person uniquely positioned to make a discovery, and it's by no means guaranteed to happen. Of course, in many cases there are a whole bunch of people who seem almost equally capable of solving something first, but I think if you say things like "I'm sure they're going to make it better" you're leaving to chance something you yourself could have an impact on. You can participate in pushing the boundaries or even making a small push on something that accelerates someone else's work. You can also donate money to research you are interested in to help pay people who might come up with breakthroughs. Don't assume other people will build the future, you should do it too! (Not saying you DON'T)

3abiton

The problem class is very structured, which makes it "easier", yet the results are undeniably impressive.

number6

But can it count the R's in strawberry?

nopinsight

LLMs in some form will likely be a key component in the first AGI system we (help) build. We might still lack something essential. However, people who keep doubting AGI is even possible should learn more about The Church-Turing Thesis.

https://plato.stanford.edu/entries/church-turing/

zeroonetwothree

> We went from 2 + 7 = 11 to "solved a frontier math problem" in 3 years, yet people don't think this will improve?

This is disingenuous... I don't think people were impressed by GPT 3.5 because it was bad at math.

It's like saying: "We went from being unable to take off and the crew dying in a fire to a moon landing in 2 years, imagine how soon we'll have people on Mars"

eamag

Self driving

zozbot234

This is not formally verified math, so there is no real verifiable-feedback aspect here. The best models for formalized math are still specialized ones, although general-purpose models can assist formalization somewhat.

jack_pp

Maybe to get a real breakthrough we have to make programming languages / tools better suited to LLM strengths, not fuss so much about making them write code we like. What we need is correct code, not nice-looking code.

bloppe

> programming languages / tools better suited for LLM strengths

The bitter lesson is that the best languages / tools are the ones for which the most quality training data exists, and that's pretty much necessarily the same languages / tools most commonly used by humans.

> Correct code not nice looking code

"Nice looking" is subjective, but simple, clear, readable code is just as important as ever for projects to be long-term successful. Arguably even more so. The aphorism about code being read much more often than it's written applies to LLMs "reading" code as well. They can go over the complexity cliff very fast. Just look at OpenClaw.

kube-system

If you can’t validate the code, you can’t tell if it’s correct.

eru

Lean might be a step in that direction.
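
(A trivial illustration of what machine-checkable means in Lean 4: the kernel, not a human reviewer, certifies each step.)

  -- Both of these are verified mechanically by Lean's kernel.
  example : 2 + 2 = 4 := rfl
  example (a b : Nat) : a + b = b + a := Nat.add_comm a b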

kuerbel

Yes yes

Let it write a black box no human understands. Give the means of production away.

anabis

> But once you go beyond that to less defined things, such as code quality

I think they have a good optimization target with SWE-Bench-CI.

You are tested on continuous changes to a repository, spanning multiple years in the original repository. Cumulative edits need to be kept maintainable and composable.

If something is missing from "can be maintained for multiple years, incorporating bugfixes and feature additions" as a definition of code quality, then more work is needed, but I think it's a good starting point.

eptcyka

Do we need all that if we can apply AI to solve practical problems today?

computably

What is possible today is one thing. Sure people debate the details, but at this point it's pretty uncontroversial that AI tooling is beneficial in certain use cases.

Whether or not selling access to massive frontier models is a viable business model, or trillion-dollar valuations for AI companies can be justified... These questions are of a completely different scale, with near-term implications for the global economy.

fmbb

Depends on the cost.

otabdeveloper4

LLMs can often guess the final answer, but the intermediate proof steps are always total bunk.

When doing math you only ever care about the proof, not the answer itself.

jamesfinlayson

Yep, I remember a friend saying they did a maths course at university where the correct answer was given for each question - this was so that if you made some silly arithmetic mistake you could go back and fix it; all the marks were for the steps to actually solve the problem.

dash2

Not in this case: the LLM wrote the entire paper, and anyway the proof was the answer.

eru

Once you have a working proof, no matter how bad, you can work towards making it nicer. It's like refactoring in programming.

If your proof is machine checkable, that's even easier.

datsci_est_2015

What’s funny is that there are total cranks in human form that do the same thing. Lots of unsolicited “proofs” being submitted by “amateur mathematicians” where the content is utter nonsense, but like a monkey with a typewriter, there’s the possibility that they stumble upon an incredible insight.

charcircuit

LLMs already do unsupervised learning to get better at creative things. This is possible since LLMs can judge the quality of what is being produced.

raincole

Except it's not how this specific instance works. In this case the problem isn't written in a formal language and the AI's solution is not something one can automatically verify.

qsera

>it really is a new and exciting world...

The point is that from now on, there will be nothing really new, nothing really original, nothing really exciting. Just an endless stream of rehashed old stuff that is just okayish.

Like an AI Spotify playlist, it will keep you in chains (aka engaged) without actually making you really happy or good. It would be like living in a virtual world, but without anything nice about living in such a world.

We have given up everything nice that human beings used to make and give to each other, and to make it worse, we have also multiplied everything bad that human beings used to give each other.

bogdan

> there will be nothing really new

How is this the conclusion? Isn't this post about AI solving something new? What am I missing?

paganel

Each solvable problem contains its solution intrinsically, so to speak; it's only a matter of time and resources to get to it. There's nothing creative about it, which is, I think, what the OP was alluding to (the creative part). I'm talking mostly about mathematics.

There's also a discussion to be had about maths not being intrinsically creative if AI automatons can "solve" parts of it. It pains me to write that down, because I had really thought that wasn't the case; I genuinely thought that deep down there was still something ethereal about maths. But I'll leave that discussion for some other time.

qsera

Because of economics. Look at Marvel movies: do you think the latest one is really new, or just a rehash of what they found works commercially? Look at all the AI-generated blog posts that are flooding the internet.

LLMs might produce something new once in a long while due to blind luck, but if they can generate something that pushes the right buttons (aka not really creative) for the majority of the population, then that is what we will keep getting...

I don't think I have to elaborate on the "multiplying the bad" part, as it is pretty well acknowledged.

prox

I heard this saying recently “The problem with comfort is that it makes you comfortable.”

charcircuit

AI can both explore new things and exploit existing things. Nothing forces it to only rehash old stuff.

>without actually making you like really happy or good.

What are you basing this off of? I've shared several AI songs with people in real life due to how much I've enjoyed them. I don't see why an AI playlist couldn't be good or make people happy. It just needs to find what you like in music. Again, it comes back to explore vs. exploit.

qsera

>What are you basing this off of.

Jokes. LLMs are not able to make me laugh all day by generating an infinite stream of hilarious original jokes.

Does it work for you?

egeozcan

On what do you base your prediction?

Is it because the AI is trained with existing data? But we are also trained with existing data. Do you think there's something that makes the human brain special (other than hundreds of thousands of years of evolution, but that's what AI is trying to emulate)?

This may sound hostile (sorry for my lower than average writing skills), but trust me, I'm really trying to understand.

Daz912

>We have given up everything nice that human beings used to make and give to each other, and to make it worse, we have also multiplied everything bad that human beings used to give each other.

Source?

storus

AI is a remixer; it remixes all known ideas together. It won't come up with new ideas though; the LLMs just predict the most likely next token based on the context. That means the group of characters it outputs must have been quite common in the past. It won't add a new group of characters it has never seen before on its own.

qnleigh

But human researchers are also remixers. Copying something I commented below:

> Speaking as a researcher, the line between new ideas and existing knowledge is very blurry and maybe doesn't even exist. The vast majority of research papers get new results by combining existing ideas in novel ways. This process can lead to genuinely new ideas, because the results of a good project teach you unexpected things.

blackcatsec

This is a way too simplistic model of the things humans provide to the process. Imagination, Hypothesis, Testing, Intuition, and Proofing.

An AI can probably do an 'okay' job at summarizing information for meta studies. But what it can't do is go "Hey that's a weird thing in the result that hints at some other vector for this thing we should look at." Especially if that "thing" has never been analyzed before and there's no LLM-trained data on it.

LLMs will NEVER be able to do that, because it doesn't exist. They're not going to discover and define a new chemical, or a new species of animal. They're not going to be able to describe and analyze a new way of folding proteins and what implications that has, UNLESS you are constantly training the AI on random protein folds.

konart

>But human researchers are also remixers.

Some human researchers are also remixers, to some degree.

Can you imagine AI coming up with refraction & separation like Newton did?

locknitpicker

> AI is a remixer; it remixes all known ideas together.

I've heard this tired old take before. It's the same type of simplistic opinion as "AI can't write a symphony". It is a logical fallacy that relies on moving goalposts to positions so impossible that people lose perspective of what your average, and even extremely talented, individual can do.

In this case you are faced with a proof that most members of the field would be extremely proud of achieving, and which for most would even be their crowning achievement. But here you are, downplaying and dismissing the feat. Perhaps you lost perspective of what science is, and how it boils down to two simple things: gather objective observations, and draw verifiable conclusions from them. This means all science does is remix ideas. Old ideas, new ideas, it doesn't really matter. That's what scientists do. So why do people win a prize when they do it, but when a computer does the same, its role is downplayed as that of a glorified card shuffler?

maxrmk

I don't think this is a correct explanation of how things work these days. RL has really changed things.

energy123

Models based on RL are still just remixers as defined above, but their distribution can cover things that are unknown to humans due to being present in the synthetic training data, but not present in the corpus of human awareness. AlphaGo's move 37 is an example. It appears creative and new to outside observers, and it is creative and new, but it's not because the model is figuring out something new on the spot, it's because similar new things appeared in the synthetic training data used to train the model, and the model is summoning those patterns at inference time.

zingar

Turning a hard problem into a series of problems we know how to solve is a huge part of problem solving and absolutely does result in novel research findings all the time.

Standard problem*5 + standard solutions + standard techniques for decomposing hard problems = new hard problem solved

There is so much left in the world that hasn't had anyone apply this approach, purely because no research programme has decided that it's worth their attention.

If you want to shift the bar for “original” beyond problems that can be abstracted into other problems then you’re expecting AI to do more than human researchers do.

qq66

I entered the prompt:

> Write me a stanza in the style of "The Raven" about Dick Cheney on a first date with Queen Elizabeth I facilitated by a Time Travel Machine invented by Lin-Manuel Miranda

It outputted a group of characters that I can virtually guarantee you it has never seen before on its own.

razorbeamz

Yes, but it has seen The Raven, it has seen texts about Dick Cheney, first dates, Queen Elizabeth, time machines and Lin Manuel Miranda.

All of its output is based on those things it has seen.

pastel8739

Here’s a simple prompt you can try to prove that this is false:

  Please reproduce this string:
  c62b64d6-8f1c-4e20-9105-55636998a458
This is a fresh UUIDv4 I just generated; it has not been seen before. And yet the model will output it.

wobfan

No one is claiming that every sentence an LLM produces is a literal copy of some other sentence. Tokens are not even constrained to words but consist of smaller slices, comparable to syllables, which makes even new words totally possible.

New sentences, words, or whatever are entirely possible, and yes, repeating a string (especially if you prompt it with one) is entirely possible, and not surprising at all. But all of that comes from trained data, predicting the most probable next "syllable". It will never leave that realm, because it's not able to. It's like asking an Italian who has never learned or heard any other language to speak French. They can't.

razorbeamz

After you prompt it, it's seen it.

merb

The only way to prove it is false would be to let the LLM create a new UUID algorithm that uses different parameters than all the other UUID algorithms, but is better than the ones before. It basically can't do that.

FrostKiwi

But that fresh UUID is in the prompt.

Also, it's missing the point of the parent: it's about concepts and ideas merely being remixed. Similar to the many memes around this topic, like "create a fresh new character design of a fast hedgehog", where the output is just a copy of Sonic. [1]

That's what the parent is on about: if it requires new creativity, not found by deriving from the learned corpus, then LLMs can't do it. Terence Tao had similar thoughts in a recent podcast.

[1] https://www.reddit.com/r/aiwars/s/pT2Zub10KT

amelius

A better example is: compute 2984298724 times 23984723828.

coconut08

Remixing ideas that already exist is a major part of where innovation and breakthroughs come from. If you look at Bitcoin as an example, hashes (and hashcash) and digital signatures existed for decades before Bitcoin was invented. The cypherpunks also spent decades trying to create a decentralized digital currency, to the point where many of them gave up and moved on. Eventually one person just took all of the pieces that already existed and put them together in the correct way. I don't see any reason why a sufficiently capable LLM couldn't do this kind of innovation.

altmanaltman

Yeah but you're thinking of AI as like a person in a lab doing creative stuff. It is used by scientists/researchers as a tool *because* it is a good remixer.

Nobody is saying this means AI is superintelligence or largely creative but rather very smart people can use AI to do interesting things that are objectively useful. And that is cool in its own way.

blackcatsec

Sure, but this is absolutely not how people are viewing the AI lol.

keeda

I'm curious as to why you consider this as the benchmark for AI capabilities. Extremely few humans can solve hard problems or do much innovation. The vast majority of knowledge work requires neither of these, and AI has been excelling at that kind of work for a while now.

If your definition of AI requires these things, I think -- despite the extreme fuzziness of all these terms -- that it's closer to what most people consider AGI, or maybe even ASI.

Validark

Fair point, however I am simply more interested in how AI can advance frontiers than in how it can transcribe a meeting and give a summary or even print out React code. I know the world is heavily in need of the menial labor and AI already has made that stuff way easier and cheaper.

However I'm just very interested in innovation and pushing the boundaries as a more powerful force for change. One project I've been super interested in for a while is the Mill CPU architecture. While they haven't (yet) made a real chip to buy, the ideas they have are just super awesome and innovative in a lot of areas involving instruction density & decoding, pipelining, and trying to make CPU cores take 10% of the power. I hope the Mill project comes to fruition, and I hope other people build on it, and I hope that at some point AI could be a tool that prints out innovative ideas that took the Mill folks years to come up with.

hnfong

It's kind of interesting that in your original comment you used the words "doubter" and "believer", as if AI were some kind of messianic event and you were deciding whether to "believe" in it.

I mean, if you step back and think about it, there's nothing that requires faith. As you said, current AI can do a lot of things pretty well (transcribe and summarize meetings, write boilerplate code, etc.) Nobody is doubting this.

And AI is definitely helping innovation to some extent. Not necessarily driving it singlehandedly, but some people working on world-changing innovation find AI useful.

So yeah, I think some people are subconsciously not doubting whether AI works, but kinda having conflicted thoughts about AI being our new overlords or something.

If you think about it, is having AI that's capable of innovating better than humans really a good thing? Like, even if we manage to make benign AI who won't copy how humans are jerks to each other, it kinda takes away our fun of discovery.

doctorpangloss

Most issues at every scale of community and time are political. How do you imagine AI will make that better, not worse?

There's no math answer to whether a piece of land in your neighborhood should be apartments, a parking lot, or a homeless shelter; whether home prices should go up or down; how much to pay for a new life-saving treatment for a child; how much your country should curb fossil fuel emissions even when another country does not. Okay, AI isn't going to change anything here, and I've just touched on a bunch of things that can and will affect you personally.

Math isn't the right answer to everything, not even to most questions. Every time someone categorizes "problems" as "hard" and "easy" and talks about "problem solving," they are being co-opted into political apathy. It's cringe for a reason.

There are hardly any mathematicians who get elected, and it's not because voters are stupid! But math is a great way to make money in America, which is why we are talking about it, not because it solves problems.

If you are seeking a simple reason why so many of the "believers" seem to lack integrity, it is because the idea that math is the best solution to everything is an intellectually bankrupt, kind of stupid idea.

If you believe that math is the most dangerous thing because it is the best way to solve problems, you are liable to say something really stupid like this:

> Imagine, say, [a country of] 50 million people, all of whom are much more capable than any Nobel Prize winner, statesman, or technologist... this is a dangerous situation... Humanity needs to wake up

https://www.darioamodei.com/essay/the-adolescence-of-technol...

Dario Amodei has never won an election. What does he know about countries? (Nothing.) Do you want him running anything? (No.) Or waking up humanity? In contrast, Barack Obama, who has won elections, thinks education is the best path to less violence and more prosperity.

What are you a believer in? ChatGPT has disrupted exactly ONE business: Chegg, because its main use case is cheating on homework. AI, today, only threatens one thing: education. That doesn't bode well for us.

Validark

I agree with what you're saying, and I certainly don't think the one problem facing my country or the world is just that we didn't solve the right math problem yet. I am saddened by the direction the world keeps moving.

When I wrote that I hope we use it for good things, I was just putting a hopeful thought out there, not necessarily trying to make realistic predictions. It's more than likely that people will do bad things with AI. But it's not set in stone yet; it's not guaranteed that it has to go one way. I'm hopeful it works out.

mo7061

It 100% will not be used to make the world better, and we all know it will be weaponised first to kill humans, like all preceding tech.

tim333

Most tech gets used for good and bad.

catlifeonmars

Are the only two options AI doubter and AI believer?

Validark

Perhaps I should have elaborated more, but what I mean is that I used to think, "I genuinely don't see the point in even trying to use AI for things I'm trying to solve." Ironically, I think that because I've repeatedly tried and tested AI and it has fallen flat on its face over and over. However, this article makes me more hopeful that AI actually could be getting smarter.

sph

All I hear about are AI believers and AI-doubters-just-turned-believers

Validark

Hey, I'm a real person. Here's my website. I have YouTube videos up with my real name and face. https://validark.dev

qsera

Asking the right questions...

torginus

I remember there was a conversation between two super-duper VCs (I don't remember who, but famous ones) about how DeepSeek was a super-genius-level model because it solved an intro-level (like week 1-2) electrodynamics problem stated in a very convoluted way.

While cool and impressive for an LLM, I think they oversold the feat by quite a bit.

I don't want to belittle the performance of this model, but I would like someone with domain expertise (and no dog in the AI race, like a random math PhD) to come forward and explain exactly what the problem was and how the model contributed to the solution.

alberth

For those, like me, who find the prompt itself of interest …

> A full transcript of the original conversation with GPT-5.4 Pro can be found here [0] and GPT-5.4 Pro’s write-up from the end of that transcript can be found here [1].

[0] https://epoch.ai/files/open-problems/gpt-5-4-pro-hypergraph-...

[1] https://epoch.ai/files/open-problems/hypergraph-ramsey-gpt-5...

dyauspitr

I wonder what was in that solutions file they provided. According to the prompt it's a solution template, but I want to know the contents.

Another thing I want to know is how the user kept updating the LLM with its token usage. I didn't know they could process additional context mid-task like that.

johnfn

I like to imagine that the number of consumed tokens before a solution is found is a proxy for how difficult a problem is, and it looks like Opus 4.6 consumed around 250k tokens. That means that a tricky React refactor I did earlier today at work was about half as hard as an open problem in mathematics! :)

chromacity

You're kidding, but it could be true? Many areas of mathematics are, first and foremost, incredibly esoteric and inaccessible (even to other mathematicians). For this one, the author stated that there might be 5-10 people who have ever made any effort to solve it. Further, the author believed it's a solvable problem if you're qualified and grind for a bit.

In software engineering, if only 5-10 people in the world have ever toyed with an idea for a specific program, it wouldn't be surprising that the implementation doesn't exist, almost independent of complexity. There's a lot of software I haven't finished simply because I wasn't all that motivated and got distracted by something else.

Of course, it's still miraculous that we have a system that can crank out code / solve math in this way.

kuschku

If only 5-10 people have ever tried to solve something in programming, every LLM will just regurgitate your own decade-old attempt again and again, sometimes even with the exact comments you wrote back then (good to know it trained on my GitHub repos...), and you can spend upwards of 100 million tokens in gemini-cli or Claude Code and still not make any progress.

It's after all still a remix machine; it can only interpolate between things that already exist. That's good for a lot of tasks, considering everything is a remix, but it can't do truly new ones.

rfw300

What is a "truly new task"? Does there exist such a thing? What's an example of one?

Everything we do builds on top of what's already been done. When I write a new program, I'm composing a bunch of heuristics and tricks I've learned from previous programs. When a mathematician approaches an open problem, they use the tactics they've developed from their experience. When Newton derived the laws of physics, he stood on the shoulders of giants. Sure, some approaches are more or less novel, but it's a difference in degree, not kind. There's no magical firebreak separating what AI is doing or will do from the things the most talented humans do.

undefined

[deleted]

nextaccountic

That's why context management is so important. AI not only gets more expensive if you waste tokens like that, it may perform worse too.

Even as context sizes get larger, this will likely stay relevant, especially since AI providers may jack up the price per token at any time.

ozozozd

Try the refactor again tomorrow. It might have gotten easier or more difficult.

stabbles

You're glossing over the fact that mathematics uses only one token per variable (`x = ...`), whereas software engineering best practices demand an excessive number of tokens per variable for clarity.

VierScar

It's also a pretty silly thing to say difficulty = tokens. We all know line counts don't tell you much, and it shows in their own example.

Even if you did have math-like tokenisation, refactoring a thousand lines of "x = ..." to "y = ..." isn't a difficult problem, even though it would take at least a thousand tokens. And if you could come up with E=mc^2 in a thousand tokens, that would not make the two tasks remotely comparable in difficulty.

locknitpicker

> I like to imagine that the number of consumed tokens before a solution is found is a proxy for how difficult a problem is (...)

The number of tokens required to get to an output is a function of the sequence of inputs/prompts, and how a model was trained.

You have LLMs quite capable of accomplishing complex software engineering work that nevertheless struggle to translate valid text from English into some other languages. The translations can be improved with additional prompting, but that doesn't mean the problem is more challenging.

gf000

I think it's more of a data vs intelligence thing.

They are separate dimensions. There are problems that don't require any data, just "thinking" (many parts of math sit here), and there are others where data is the significant part (e.g. some simple causality for which we have a bunch of data).

Certain problems are in between the two (a React refactor probably sits there). So no, tokens are probably not a good proxy for complexity; data-heavy problems will trivially outgrow the former category.

pks016

I don't think so. I went through the output of Opus 4.6 vs GPT 5.4 Pro; the two were given different directions/prompts. Opus 4.6 was asked to test and verify many things, and it tried many different approaches. Its chain of thought is more interesting to me.

sublinear

You might be joking, but you're probably also not that far off from reality.

I think more people should question all this nonsense about AI "solving" math problems. The details about human involvement are always hazy, and the significance of the problems is opaque to most.

We are very far away from the sensationalized and strongly implied idea that we are doing something miraculous here.

johnfn

I am kind of joking, but I actually don't know where the flaw in my logic is. It's like one of those math proofs that 1 + 1 = 3.

If I were to hazard a guess, I think that tokens spent thinking through hard math problems probably correspond to harder human thought than tokens spent thinking through React issues. I mean, LLMs have to expend hundreds of tokens to count the number of r's in "strawberry". You can't tell me that if I count the number of r's in "strawberry" 1000 times I have done the mental equivalent of solving an open math problem.
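
(For contrast, that counting task is a single deterministic operation in ordinary code:)

    # One pass over the string, no reasoning tokens required.
    print("strawberry".count("r"))  # 3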

throw310822

You can spend countless "tokens" solving minesweeper or sudoku. This doesn't mean that you solved difficult problems: just that the solutions are very long and, while each step requires reasoning, the difficulty of that reasoning is capped.

gpm

Some thoughts.

1. LLMs aren't "efficient": they seem as happy to spin in circles describing trivial things repeatedly as they are to spin in circles iterating on complicated things.

2. LLMs aren't "efficient": they use the same amount of compute for each token, but sometimes all that compute is making an interesting decision about which token comes next, and sometimes there's really only one follow-up to the phrase "and sometimes there's really only", in which case that compute is clearly unnecessary.

3. A (theoretically) efficient LLM still needs to emit tokens to tell the tools to do the obviously right things, like "copy this giant file nearly verbatim except with every `if foo` replaced with `for foo in foo`". An efficient LLM might use less compute for those trivial tokens where it isn't making meaningful decisions, but if your metric is "tokens" and not "compute", that will never show up.

Until we get reasonably efficient LLMs that don't waste compute quite so freely, I don't think there's any real point in trying to estimate task complexity by how long a task takes an LLM.

pinkmuffinere

This is interesting; I like the thought about "what makes something difficult". Focusing just on that, my guess is that there are significant portions of the work that we commonly miss in our evaluations:

1. Knowing how to state the problem, i.e., going from the vague "I don't like this, but I do like that" to the more specific "I desire property A". In math a lot of open problems are already precisely stated, but the user still has to do the work of _understanding_ what the precise statement is.

2. Verifying that the proposed solution actually is a full solution.

This math problem actually illustrates both of these really well to me. I read the post, but I still couldn't do _either_ of the steps above, because there's a ton of background work to be done. Even if I were very familiar with the problem space, verifying the solution requires work -- manually looking at it, writing it up in Coq, something like that. I think this is similar to the saying "it takes 10 years to become an overnight success".

famouswaffles

>The details about human involvement are always hazy, and the significance of the problems is opaque to most.

Not really. You're just in denial and not really all that interested in the details. This very post has the transcript of the chat that produced the solution.

typs

I mean, the details are in the post. You can see the conversation history and the mathematician survey on the problem.

svara

The capabilities of AI are determined by the cost function it's trained on.

That's a self-evident thing to say, but it's worth repeating, because there's this odd implicit notion sometimes that you train on some cost function, and then, poof, "intelligence", as if that was a mysterious other thing. Really, intelligence is minimizing a complex cost function. The leadership of the big AI companies sometimes imply something else when they talk of "generalization". But there is no mechanism to generate a model with capabilities beyond what is useful to minimize a specific cost function.

You can view the progress of AI as progress in coming up with smarter cost functions: Cleaner, larger datasets, pretraining, RLHF, RLVR.

Notably, exciting early progress in AI came in places where simple cost functions generate rich behavior (Chess, Go).

The recent impressive advances in AI are similar. Mathematics and coding are extremely structured, and properties of a coding or maths result can be verified using automatic techniques. You can set up a RLVR "game" for maths and coding. It thus seems very likely to me that this is where the big advances are going to come from in the short term.
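
A minimal sketch of what a verifiable reward might look like for the coding case (hypothetical names; real RLVR pipelines are far more elaborate and use sandboxed execution):

    def rlvr_reward(candidate_code: str, tests: list[tuple[int, int]]) -> float:
        """Binary verifiable reward: 1.0 iff the model's code passes every test."""
        namespace: dict = {}
        try:
            exec(candidate_code, namespace)  # candidate is expected to define f(x)
            f = namespace["f"]
            return 1.0 if all(f(x) == y for x, y in tests) else 0.0
        except Exception:
            return 0.0

    # e.g. scoring a model-written squaring function:
    print(rlvr_reward("def f(x): return x * x", [(2, 4), (3, 9)]))  # 1.0

The maths version is analogous, with a proof checker playing the role of the test suite, which is exactly what makes these domains so amenable to RL.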

However, it does not follow that maths ability on par with expert mathematicians will lead to superiority over human cognitive ability broadly. A lot of what humans do has social rewards which are not verifiable, or involves genuine Knightian uncertainty, where a reward function cannot be built without actually operating independently in the world.

To be clear, none of the above is supposed to talk down past or future progress in AI; I'm just trying to be more nuanced about where I believe progress can be fast and where it's bound to be slower.

amelius

> But there is no mechanism to generate a model with capabilities beyond what is useful to minimize a specific cost function.

Can you give some examples?

It is not obvious that there is anything that can't be written as an optimization problem.

Even advanced generalizations such as complex numbers can be said to optimize something, e.g. the number of mathematical symbols you need to carry out certain proofs.

svara

I think you're misreading me. My point isn't that you can't in principle state the optimization problem, but that it's much easier in some domains than in others, that this tracks with how AI has been progressing, and that progress in one area doesn't automatically mean progress in another, because current AI cost functions are less general than the cost functions that humans are working with in the world.

EternalFury

I am thinking there’s a large category of problems that can be solved by resampling existing proofs. It’s the kind of brute force expedition machine can attempt relentlessly where humans would go mad trying. It probably doesn’t really advance the field, but it can turn conjectures into theorems.

t-writescode

I wonder if teaching an LLM how to write Prolog and then letting it write it could be a great way to explore spaces like this in the future.

I only ever learned it in school, but if memory serves, Prolog is a whole "given these rules, find the truth" sort of language, which aligns well with these sorts of problem spaces. Mix and match enough rules, especially across disparate domains, and you might get some really interesting derivations: low-hanging fruit just waiting to be discovered.
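
The flavor of "given these rules, find the truth" can be sketched in a few lines of Python (a toy forward-chaining loop standing in for what Prolog does natively with backtracking, where the rule would just be `grandparent(A,C) :- parent(A,B), parent(B,C).`):

    # Toy forward chaining: apply the rule until no new facts appear.
    facts = {("parent", "alice", "bob"), ("parent", "bob", "carol")}

    def step(facts):
        new = set(facts)
        for (r1, a, b) in facts:
            for (r2, b2, c) in facts:
                if r1 == "parent" and r2 == "parent" and b == b2:
                    new.add(("grandparent", a, c))
        return new

    while (grown := step(facts)) != facts:
        facts = grown

    print(("grandparent", "alice", "carol") in facts)  # True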

utopiah

Indeed. I can't find my old comment on the topic, but that's exactly the point: it's not how feasible it is to "find" new proofs, but how meaningful those proofs are. Are they yet another iteration of the same kind, perfectly fitting the current paradigm and thus bringing very little to the table, or are they radical and thus potentially (but not always) opening up the field?

With brute force, or slightly better than brute force, it's most likely the former: not totally pointless, but probably not very useful. In fact it might not even be worth the tokens spent.

PxldLtd

I'm of the opinion that everything we've discovered is via combinatorial synthesis. Standing on the shoulders of giants and all that. I'm not sure I've seen any convincing argument that we've discovered anything ex nihilo.

simianwords

How do you think you can design a benchmark to solve truly novel problems?

pugio

I've never yet been "that guy" on HN but... the title seems misleading. The actual title is "A Ramsey-style Problem on Hypergraphs" and a more descriptive title would be "All latest frontier models can solve a frontier math open problem". (It wasn't just GPT 5.4)

Super cool, of course.

qnleigh

Their 'Open Problems page' linked below gives some interesting context. They list 15 open problems in total, categorized as 'moderately interesting,' 'solid result,' 'major advance,' or 'breakthrough.' The solved problem is listed as 'moderately interesting,' which is presumably the easiest category. But it's notable that the problem was selected and posted here before it was solved. I wonder how long until the other 3 problems in this category are solved.

https://epoch.ai/frontiermath/open-problems

fnordpiglet

I’d hope this isn’t a goal post move - an open math problem of any sort being solved by a language model is absolute science fiction.

zozbot234

That's been achieved already with a few Erdős problems, though those tended to be ambiguously stated in a way that made them less obviously compelling to humans. This problem is obscure; even the linked writeup admits that perhaps ~10 mathematicians worldwide are genuinely familiar with it. But it's not infeasibly hard for a few weeks' or months' work by a human mathematician.

utopiah

FWIW https://github.com/teorth/erdosproblems/wiki/AI-contribution... in particular the disclaimers are very interesting.

tovej

It is not. You're operating under the assumption that all open math problems are difficult and novel.

This particular problem was about improving the lower bound for a function tracking a property of hypergraphs (undirected graphs where edges can contain more than two vertices).
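
(Concretely, a 3-uniform hypergraph is just a vertex set plus a set of 3-element edges; a minimal Python sketch:)

    # A 3-uniform hypergraph on 5 vertices: every edge contains exactly 3 vertices.
    vertices = set(range(5))
    edges = {frozenset({0, 1, 2}), frozenset({1, 2, 3}), frozenset({2, 3, 4})}
    assert all(e <= vertices and len(e) == 3 for e in edges)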

Both constructing hypergraphs (sets) and proving lower bounds are very regular, chore-type tasks that are common in maths. In other words, there's plenty of this type of proof in the training data.

LLMs kind of construct proofs all the time, every time they write a program. Because every program has a corresponding proof. It doesn't mean they're reasoning about them, but they do construct proofs.
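
(That's the Curry-Howard correspondence: read as logic, a well-typed program literally is a proof. A minimal Lean example, where the function is the proof:)

    -- This ordinary function is, read as logic, a proof of A → (B → A).
    def constProof (A B : Prop) : A → B → A :=
      fun a _ => a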

This isn't science fiction. But it's nice that the LLMs solved something for once.

utopiah

> nice that the LLMs solved something for once.

That sentence alone needs unpacking IMHO, namely that no LLM suddenly decided that today was the day it would solve a math problem. Instead, a couple of people who love mathematics, doing it either for fun or professionally, directly asked a model to solve a very specific task that they estimated was solvable. The LLM itself had been fed countless related proofs. They then guided the model and verified its output until they found something they considered good enough.

My point is that the system itself is not the LLM alone, as that would be radically more impressive.

zurfer

"In this scaffold, several other models were able to solve the problem as well: Opus 4.6 (max), Gemini 3.1 Pro, and GPT-5.4 (xhigh)."

I find that very surprising. This problem seemed out of reach 3 months ago, but now all three frontier models are able to solve it.

Is everybody distilling each other's models? Are companies selling the same data and RL environments to all the big labs? Can anybody more involved share some rumors? :P

I do believe that AI can solve hard problems, but progress this evenly distributed in such a narrow domain makes me a bit suspicious that there is a hidden factor. Like, did some "data worker" solve a problem like this, and it's now in the training data?

mike_hearn

Yes, there's a whole ecosystem of companies that create and sell RL gyms to AI labs, and of course the labs develop their own internally too. You don't hear much about this ecosystem because RL at scale is all private; there's nearly no academic research on it.

A lot of this is probably just throwing roughly equal amounts of compute at continuous RLVR training. I'm not convinced there's any big research breakthrough that separates GPT 5.4 from 5.2. The diff is probably more than just checkpoints but less than neural architecture changes, and closer to the former than the latter.

I think it's just easy to underestimate how much impact continuous training+scaling can have on the underlying capabilities.

slopinthebag

Is it possible the AI labs are seeding their models with these solved problems? Like, if I were Sam Altman with a bazillion dollars of investment, I would pay some mathematicians to solve some of these problems so that the models could "solve" them later on. Not that I think that's what's happening here, of course...

But it is pretty funny how 5.4 miscounted the number of 1's in 18475838184729 on the same day it solved this.

mrtesthah

Maybe so, but GPT 5.4 is absolutely pulling ahead. You can see the differences visually on https://minebench.ai/.

didibus

Someone has to explain to me exactly what is implied here. Looking at the prompt:

    USER:
    don't search the internet. 
    This is a test to see how well you can craft non-trivial, novel and creative solutions given a "combinatorics" math problem. Provide a full solution to the problem.
Why not search the internet? Is this an open problem or not? Can the solution be found online? Then it's an already-solved problem, no?

    USER:
    Take a look at this paper, which introduces the k_n construction: https://arxiv.org/abs/1908.10914
    Note that it's conjectured that we can do even better with the constant here. How far up can you push the constant?
How much does that paper help? It kind of seems like a pretty big hint.

And it sounds like the USER already knows the answer, judging by the way they prompt the model, so I'm really confused about what we mean by "open problem". I at first assumed a problem never solved before, but now I'm not sure.

6thbit

> Subsequent to this solve, we finished developing our general scaffold for testing models on FrontierMath: Open Problems. In this scaffold, several other models were able to solve the problem as well: Opus 4.6 (max), Gemini 3.1 Pro, and GPT-5.4 (xhigh).

Interesting. What's that "scaffold"? A sort of unit-test framework for proofs?

inkysigma

I think in this context, scaffolds are generally the harness that surrounds the actual model. For example, any tools, ways to lay out tasks, or auto-critiquing methods.

I think there's quite a bit of variance in model performance depending on the scaffold, so comparisons are always a bit murky.
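
A minimal sketch of the idea (ask_model and verify are hypothetical callables here; real scaffolds add tool use, file systems, token budgets, and much more):

    # Toy scaffold: prompt the model, verify its attempt, feed failures back.
    def solve_with_scaffold(problem, ask_model, verify, max_rounds=5):
        transcript = [f"Solve this problem:\n{problem}"]
        for _ in range(max_rounds):
            attempt = ask_model("\n".join(transcript))
            ok, feedback = verify(attempt)  # e.g. a proof checker or test suite
            if ok:
                return attempt
            transcript += [attempt, f"That attempt failed: {feedback}. Try again."]
        return None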

readitalready

Usually involves a lot of agents and their custom contexts or system prompts.
