readthenotes1
I recall an experiment in proving software correct from the 1990s that found more errors in the final proof annotations than in the software it had proved correct.
Then, I foresee 2 other obstacles, 1 of which may disappear:
1. Actually knowing what the software is supposed to do is hard. Understanding what the users actually want to do and what the customers are paying to do (which aren't necessarily the same thing) is complex
2. The quality of the work of many software developers is abysmal and I don't know why they would be better at a truth language than they are at Java or some other language.
Objection 2 may disappear, replaced by AI systems with the attention to do what needs to be done. So perhaps things will change enough to make it worthwhile...
Groxx
I think the hope for 2 is that those programmers would be forced into inaction by the language safety, rather than being allowed to cause problems.
I don't really think that works either, because there are endless ways to add complication even if you can't worsen behavior (assuming that's even possible). At best they might be caught eventually... but anyone who has worked at a large tech company knows at least a few people who are somehow still employed years later, despite constant ineptitude. Play The Game well enough and it's probably always possible.
teiferer
It's even conceivable that 2 gets worse with AI: the AI does the proof for them, very convolutedly so, and as long as the proof checker eats it, it goes through. Then comes the day when the complexity goes beyond what the AI assistant can handle and it gives up. At that point, the proof code's complexity will long since have passed the threshold of being comprehensible to any human, and no progress is possible. Hard stop.
sdeframond
> I think the hope for 2 is that those programmers would be forced into inaction by the language safety, rather than being allowed to cause problems.
Ah! That's funny :)
In practice there are always ways to circumvent safety, especially when it is easier than the alternative.
In a TypeScript codebase I work on, I configured the type-checker to forbid `any`. Should be easy enough to use `object` when we don't know the type, right? Well, then things started being serialized into `string` way more often than I'd like...
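A sketch of the escape hatch described here (illustrative only, not the actual codebase; banning `any` is commonly done with a lint rule such as `@typescript-eslint/no-explicit-any`): with `any` forbidden, a value of unknown shape gets typed as `unknown` or `object`, and the path of least resistance is to flatten it to a string instead of modelling the type.

```typescript
// "Stringly typed" workaround: the checker is satisfied because
// `unknown` is allowed, but all structure is lost in the string.
function logPayload(payload: unknown): string {
  return JSON.stringify(payload);
}

const line = logPayload({ id: 1 });
```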
jsmorph
Re 1: Discussing and guiding the desirable theorems for general-purpose programs has been a major challenge for us. Proofs for their own sake (bad?) vs glorious general results (good but hard?). Actual human guidance can be critical there, at least for now.
deterministic
Check out seL4 (a proven-correct microkernel used in billions of devices worldwide) and CompCert (a proven-correct C compiler used by Airbus).
dharmatech
While reading through this book, I messed around with a basic computer algebra simplifier in Lean:
https://github.com/dharmatech/symbolism.lean
It's a port of code from C#.
Lean is astonishingly expressive.
grogers
Have you tried dafny, which seems roughly comparable for your purposes? I heard some buzz about it a little while ago but I haven't been following this space closely.
NooneAtAll3
what about non-functional programming?
FP is just as irrelevant for most programmers as the math you already shoved aside
DauntingPear7
Hmm like the “new” JS Fetch api with `then` chaining? What about map, filter, reduce? Anonymous functions? List comprehensions? FP is everywhere. Pure FP code isn’t seen very often, as side effects are necessary for most classes of programs, but neither is pure OOP code, as not everything is dynamically dispatched, nor imperative code, as Objects or functions may more cleanly describe/convey something in code.
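The everyday constructs this comment lists look like this in TypeScript (a minimal sketch; `then` chaining on promises follows the same expression-oriented style):

```typescript
// FP constructs in mainstream JS/TS: anonymous (arrow) functions
// plus map/filter/reduce, all pure (no mutation of `nums`).
const nums = [1, 2, 3, 4, 5];

const doubled = nums.map(n => n * 2);            // each element times two
const evens = nums.filter(n => n % 2 === 0);     // keep even elements
const sum = nums.reduce((acc, n) => acc + n, 0); // fold to a single value
```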
kccqzy
I shoved math aside because I think for most of the HN crowd it wouldn’t be a good use of their time to do what mainstream mathematics is about, like the “things such as Grothendieck schemes and perfectoid spaces” the article also references. FP is much more relevant because for any program for which a proof of correctness is worthwhile, you can always extract a functional core of that program (functional core, imperative shell). And that functional core will be easier to prove than if it were written in an imperative style.
threethirtytwo
FP and math are the same concept.
The semantics of math are equation based.
Everything in the math universal language is defined as an expression or formula.
All proofs are based on this concept.
To translate this into programming, think about what programming is. Rather than being a single-line formula, a program is a series of procedures:
1. add 1
2. add 3
3. repeat.
In functional programming you get rid of that and think from the perspective of: how much of a program can you fit into a single one-liner? An expression? Think map, reduce, list comprehensions, etc. That is essentially what functional programming is: fitting your entire program onto one line OR fitting it into a math expression.
The reason why you see multiple lines in FP languages is because of aliasing.
m = b + c
y = x + m
is really: y = x + (b + c)
This is also isomorphic to the concept of immutability. By making things immutable you're just aliasing part of the one-liner... So functional programming, one-line programs, formulas and equations in math, and immutability are essentially ALL the same concept.
That is why lean is functional. Because it's math.
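The aliasing point can be checked directly in Lean (a minimal sketch; the function name and the `Nat` types are my assumptions):

```lean
-- A `let`-binding is just a name for a subterm: `y` below is
-- definitionally the same expression as `x + (b + c)`.
def y (b c x : Nat) : Nat :=
  let m := b + c
  x + m

-- `rfl` succeeds because unfolding the alias is pure computation.
example (b c x : Nat) : y b c x = x + (b + c) := rfl
```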
icosahedron
You might like looking at Dafny. It is more imperative-focused, but has much of the same software-proving functionality that Lean has.
It is different in that it uses SMT instead of dependent types and tactics to prove the software, but I found it more approachable.
Also, it compiles to several target languages, whereas Lean 4 only compiles to C and therefore only supports the C ABI.
nrds
The author appears to have a serious misconception about Lean, which is surprising since he seems to be quite knowledgeable in the area.
Specifically, the author seems to be under the impression that Lean retains proof objects and the final proof to be checked is one massive proof object, with all definitions unfolded: "these massive terms are unnecessary, but are kept anyway" (TFA). This couldn't be further from the truth. Lean implements exactly the same optimization as the author cherishes in LCF; metaphorically, that "The steps of a proof would be performed but not recorded, like a mathematics lecturer using a small blackboard who rubs out earlier parts of proofs to make space for later ones" (quoted by blog post linked from TFA). Once a `theorem` (as opposed to a `def`) is written in Lean4, then the proof object is no longer used. This is not merely an optimization but a critical part of the language: theorems are opaque. If the proof term is not discarded (and I'm not sure it isn't), then this is only for the sake of user observability in the interactive mode; the kernel does not and cannot care what the proof object was.
burakemir
A proof object in dependent type theory is just the term that inhabits a type. So are you saying the Lean implementation can construct proofs without constructing such a term?
nrds
No, I'm saying it is checked and then discarded. (Or at least, discarded by the kernel. Presumably it ends up somewhere in the frontend's tactic cache.) That matches perfectly the metaphor, "rubs out earlier parts of proofs to make space for later ones".
The shared misconception seems to be in believing that because _conceptually_ the theory implemented by Lean builds up a massive proof term, that _operationally_ the Lean kernel must also be doing that. This does not follow. (Even the concept is not quite right since Lean4 is not perfectly referentially transparent in the presence of quotients.)
vilhelm_s
Yeah. I guess the abstract type approach saves some memory, but it's a constant factor thing, not an asymptotic improvement. The comment that Lean wastes "tens of megabytes" seems telling: it seems like something that would be a critical advantage in the 1980s and 1990s, when Paulson first fought these battles, but maybe less important today...
nrds
To be fair, lean wastes and leaks memory like a sieve, but this is almost all in the frontend. It has nothing to do with the kernel or the theorem proving approach chosen.
auggierose
It is more a conceptual thing. In LCF, proofs and terms are different things, and that is how it should be in my opinion. This Curry-Howard confusion is unnecessary, but many people don't realise that, they think it is the only way to do math on a computer. You can still store proofs in LCF if you want, and use them; just as in Lean, you might be able to optimise them away.
nrds
You have done no more to show an actual distinction in the approach than TFA and its linked blog post... It sounds like a naming thing to me. On one side we name the term/program as a term and see it as something checked by the kernel, and on the other you name the term/program as a program and see it as something executed by the runtime. What's the difference?
auggierose
The difference is that a term is not (necessarily) a program. Also, checking is not executing. It's like saying riding a horse is the same as eating a fish. Really just a naming thing, what's the difference?
zozbot234
There is indeed no difference if your dependent-typing approach is using reflection (where the checked term is actually a program that's logically proven to result in a correct proof when executed - such as, commonly, by running a decision procedure) but that's not a common approach.
danilafe
People tell me Lean is really good for functional programming. However, coming from Agda, it feels like a pretty clunky downgrade. They also tell me it's good for tactics, but I've found Coq's tactics more powerful and ergonomic. Maybe these are all baby-duck perceptions. So far, it feels like Lean's main strength isn't being the best at anything, but being decent at everything and having a huge community. I see the point and appeal, but it saddens me that a bit of the beauty and power is lost in exchange.
btilly
In other words, it is a network effect.
My perspective is that network effects are far less long-lasting than they feel in the moment. For example if being decent at everything and having a huge community was the only thing that mattered, Perl would still be a big deal. Many similar examples exist.
In the case of Lean, being the first with a huge library really makes a difference. Just as Perl got a big boost from having CPAN. (Which was an imitation of CTAN, except for a programming language instead of TeX.)
But, based on scaling laws, we should expect the value of a large library for most users to grow around the log of the size of the library. (See https://pdodds.w3.uvm.edu/teaching/courses/2009-08UVM-300/do... for the relevant scaling laws.)
When your library is small, this looks like an insurmountable barrier. But you don't have to match the scale for factors of usability to become more important. And porting mathematical libraries is a good target for LLMs. The source is verified, the target is verifiable, and the reasoning path generally ports.
The flip side of this is that, thanks to LLMs, working on a minority platform isn't the barrier that you might expect. Because if their library can be ported to your platform, then your proof can probably be ported to their platform as well!
danilafe
> The flip side of this is that, thanks to LLMs, working on a minority platform isn't the barrier that you might expect
This is a nice thought, but with Agda in particular it's just not true. It's one of the few languages I've seen that's sufficiently underrepresented in training data. Frontier LLMs (Codex, Claude Code) reliably say "I realized I can't do this" after wasting lots of tokens going back and forth.
In fact, I think this positions Agda uniquely poorly.
krupan
This. LLMs are no good at stuff they haven't seen a lot of training data for (saying this as a SystemVerilog programmer who also does some C-coding for interfacing with SystemVerilog, neither of which has a lot of open source code to show LLMs)
btilly
Huh.
I wonder how compiler technology would do. Possibly as a component of an attempted solution.
gucci-on-fleek
> except for a programming language instead of TeX
TeX is a programming language. It's not a very good programming language, but people have implemented floating-point math [0], regular expressions [1], an Arduino emulator [2], and terminal input/output [3] in pure TeX. The last two examples are obscure, but the first two examples are used (internally) by the vast majority of modern LaTeX documents.
[0]: https://github.com/latex3/latex3/blob/develop/l3kernel/l3fp....
[1]: https://github.com/latex3/latex3/blob/develop/l3kernel/l3reg...
lmm
> So far, it feels like Lean's main strength isn't being the best at anything, but being decent at everything and having a huge community. I see the point and appeal, but it saddens me that a bit of the beauty and power is lost in exchange.
To me that feels like a community that's finally matured enough to start getting on with things. Perfect tools aren't the point; get tools that are good enough and do actual work with them.
antonvs
Sounds like an excuse for mediocrity.
You can apply that same argument to the use of Python in the ML world. It results in all sorts of pain points that everyone has to deal with all the time.
lmm
All large-scale projects are made of mediocre parts. At some point you run out of brilliance and have to structure it so that mediocre can still be a positive contribution.
solomonb
I prefer Agda as a proof checker but it's not a practical choice for building software. Lean feels like it could legitimately become a successor to Haskell as the go-to functional programming language for software development.
danilafe
I think what holds Agda back from being "practical" is that it just doesn't have good tactics. You can't easily automate proofs and even simplification techniques require some language-level tricks[1]. There's technically support for elaborator reflection (as in Idris) but it's painful and impossible to debug. Certainly nowhere near where Coq and Lean are.
[1]: like this one I've used for several proofs so far: https://danilafe.com/blog/agda_expr_pattern/
solomonb
It's also really slow and doesn't have a huge library ecosystem. The latter is fixable but not so much the former.
ModernMech
fyi it's Rocq now: https://en.wikipedia.org/wiki/Rocq
JuniperMesos
I've used agda a tiny bit and Lean somewhat more, and I definitely found it much easier to write functional programs not focused on mathematical proofs in Lean than Agda. IIRC the difference was mostly tooling - Agda's documentation is kind of bad and it's a pain to get it working on your system (and it really wants you to be using Emacs specifically). Whereas Lean documents how to write the cat utility in its own docs and generally has a much better, more modern tooling experience.
danilafe
I believe you, but this hasn't been my experience. It took me hours to get Lean to work (something odd was happening with the package manager + version + tooling combination). Agda worked out of the box with macOS Homebrew. Agda's docs are pretty bad, but I've found its cross-linked module documentation incredibly useful. The main issue is knowing something exists.
fooker
This has also been my experience with lean4.
I don't understand the forced VS Code path; just let me get it as normal software in a convenient way and run it as a tool.
markusde
I'm curious what you like about Agda functional programming? Many of the praises I hear about it have to do with its dependent pattern matching, and I think Lean suffers a lot more in that regard. I'm curious though if you still find Agda friendlier for "normal" FP (and if so, how?)
danilafe
Its parameterized modules, extremely elegant yet flexible mixfix notation mechanism, the various niceties around pattern matching (though this one might be a bit of Stockholm syndrome; Agda doesn't nicely allow pattern matching anywhere except at function clauses), the fact that records, GADTs, and modules all feel like aspects of the same thing, and the fact that typeclasses are 'just' records that are automatically filled in. The typeclass and module features IMO already give it some edge over Haskell. I don't know if it's friendlier, but it is more ergonomic.
markusde
Thanks! Typeclasses are also something I really like about Lean.
moomin
Thing is, it comes after both. Maybe it is just being a jack of all trades, but something made it a success when the others remain fairly niche.
capitalhilbilly
I think it's pretty clear that being too early has been as bad as being too late for most technologies. There are a few that have gradually gained a community after decades, but it is easier to make a poor copy of one of them and have better momentum and less skepticism.
portly
When I woke up this morning I could not have predicted someone calling a proof assistant a "Jack of all trades"
floxy
Thoughts on Idris?
wk_end
Idris feels mostly dead to me at this point. Which breaks my heart, because for a split second it had real momentum around it.
Not OP, but as Haskell-derived dependently-typed languages Idris and Agda are quite similar, so I suspect if they like one they’d like the other.
c0balt
Isabelle/HOL as a language is nice, but the tooling has deep flaws even outside the pure desktop-first app approach.
The language is different (not necessarily better) in comparison to Lean, but I do agree with some of the points on dependent types. It seems both languages mostly just made different tradeoffs, which imo, were fair and have shaped them into quite efficient tools for their domains. The domain of "proofs" is large and different paradigms just have different strengths/weaknesses, Lean just specialized for a different part of this space.
Sledgehammer is nice, but it's probably just a question of time until an equivalent can be ported/created for Lean. It might also be nice to use for explorative phases but is a resource hog; it also makes proofs concise but I would usually rather see the full chain of steps directly for a proof in the published code instead of a semi-magic "by sledgehammer".
Working on Isabelle itself, however, is painful (especially communicating with developers) in comparison to Lean. Things like "we don't have bugs, just unexpected behaviour" on the mailing list just seem childish/unprofessional. The callout to RAM consumption of Lean and related systems is also a bit of a joke when looking at Isabelle's gluttony for RAM.
zozbot234
> but I would usually rather see the full chain of steps directly for a proof in the published code instead of a semi-magic "by sledgehammer".
One issue with this is that coming up with a quickly-checkable certificate for UNSAT is not exactly a trivial problem. It's effectively the same as writing a formal proof.
thaumasiotes
> Sledgehammer is nice but probably just a question of time until an equivalent can be ported/created for Lean.
I have no knowledge of what sledgehammer is. However...
> it also makes proofs concise but I would usually rather see the full chain of steps directly for a proof in the published code instead of a semi-magic "by sledgehammer"
This description makes sledgehammer sound identical to Mathlib's `grind`. https://leanprover-community.github.io/mathlib4_docs/Init/Gr...
c0balt
The goal (ATP) is similar but the idea is a bit different, sledgehammer is not directly learning/applying rules but instead effectively a driver for invoking a bunch of ATPs + SMT solvers at once on a goal in Isabelle/HOL.
You can read more about it here: https://isabelle.in.tum.de/dist/doc/sledgehammer.pdf
nextaccountic
The authors of Lean also wrote an SMT solver (Z3). Other proof languages like F* use Z3 to prove things.
Why isn't there a tactic in Lean to prove things by SMT using Z3? This could be integrated into grind.
vintermann
Last I checked, Isabelle/HOL used a custom Emacs mode as their interface. (I could be mixing it up with one of the other HOLs).
c0balt
The current GUI interface is Isabelle/jedit. Afaik, no Emacs interface is officially provided atm.
smj-edison
I think what's interesting about Lean is that Lean is a language, and what most people are talking about is a library called Mathlib. From what I've read about Mathlib, the creators are very pragmatic, which is why they encode classical logic in Lean types, with only a bit of intuitionistic logic[1].
[1] for those unfamiliar with math lingo, classical logic has a lot of powerful features. One of those is the law of the excluded middle, which says something can't be true and false at the same time. That means not not true is true, which you can't say in intuitionistic logic. Another feature is proof by contradiction, where you can prove something by showing that the alternative is unsound. There's quite a few results that depend on these techniques, so trying to do everything in intuitionistic logic has run into a lot of roadblocks.
hackandthink
>the law of the excluded middle, which says something can't be true and false at the same time
This is the not excluded middle, it is the "Law of noncontradiction"
Excluded middle means: either p is true or the negation of p is true
zozbot234
> I think what's interesting about Lean is that Lean is a language, and what most people are talking about is a library called Mathlib. From what I've read about Mathlib, the creators are very pragmatic, which is why they encode classical logic in Lean types, with only a bit of intuitionistic logic
The computer science folks are now working on their own CSLib. https://www.cslib.io https://www.github.com/leanprover/cslib Given that intuitionistic logic is really only relevant to computational content (the whole point of it is to be able to turn a mathematical argument into a construction that could in some sense be computed with), it will be interesting to see how they deal with the issue. Note that if you write algorithms in Lean, you are already limited to some kind of construction, and perhaps that's all the logic you need for that purpose.
cubefox
> One of those is the law of the excluded middle, which says something can't be true and false at the same time.
That would be the law of non-contradiction (LNC). The law of the excluded middle (LEM) says that for every proposition it is true or its negation is true.
LEM: For all p, p or not p.
LNC: For all p, not (p and not p).
Classical logic satisfies both, intuitionistic logic only satisfies LNC.
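Both laws can be written out in Lean (a small sketch; the theorem names are mine). LNC has a direct constructive proof, while LEM comes from Lean's classical axioms:

```lean
-- Law of noncontradiction: provable with no axioms at all.
theorem lnc (p : Prop) : ¬(p ∧ ¬p) :=
  fun ⟨hp, hnp⟩ => hnp hp

-- Law of excluded middle: in Lean this is `Classical.em`,
-- which depends on the axiom of choice.
theorem lem (p : Prop) : p ∨ ¬p :=
  Classical.em p
```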
leonidasrup
Five stages of accepting constructive mathematics:
Denial
Anger
Bargaining
Depression
Acceptance
A talk about constructive mathematics by Andrej Bauer at the Institute for Advanced Study.
thaumasiotes
> Another feature is proof by contradiction, where you can prove something by showing that the alternative is unsound.
As far as lean is concerned, this isn't an example of classical logic. It's just the definition of "not" - to say that some proposition implies a contradiction, and to say that that proposition is untrue, are the same statement.
Most "something"s that you'd want to prove this way will require a step from classical logic, but not all of them. (¬p ⟶ F) ⟶ p is classical; (p ⟶ F) ⟶ ¬p is constructive.
zozbot234
More generally, any negative statements can be proven classically, even in intuitionistic logic. Intuitionistic logic does not have the De Morgan duality found in classical logic, you have to go to linear logic if you want to recover that while keeping constructivity. (The "negative" in linear logic actually models requesting some object, which is dual to constructing it. The connection with the usual meaning of "negative" in logic involves a similar duality between "proposing" a proof and "challenging" it.)
smj-edison
So proof by contradiction proves ¬p, but it requires the law of excluded middle to prove p (in the case of ¬p -> F)? I didn't realize that was constructive in the first case.
thaumasiotes
Well, at some point you have to define what you mean by "proof by contradiction". I was responding to your statement, "prove something by showing that the alternative is unsound". You can prove that something is false that way without needing classical logic.
Mathlib defines `by_contradiction` as a theorem proving `(¬p → False) → p` for any proposition p. ( https://leanprover-community.github.io/mathlib4_docs/Mathlib... ) This does require classical logic.
For what's happening with `¬p -> F`, recall that this is by definition the statement `¬¬p`; classical logic will let you conclude `p` from `¬¬p`, or it will let you apply the law of the excluded middle to conclude that either `p` or `¬p` must be the case, and then show that since it isn't `¬p`, it must be `p`. (Again, not really different approaches, but perhaps different in someone's mental model.)
On the other hand, if you have `p -> F`, that is by definition the statement `¬p`, and if you've established `¬p`, you've already finished proving that p is false.
Something that I find particularly absurd about the hypothetical distinction between intuitionistic and classical logic is that intuitionistic logic is sufficient to prove `¬p` from `¬¬¬p`. (This is quite similar to how 'proof by contradiction' is constructive if you're proving a negative but not if you're proving a positive; it might be the same result.) So for any proposition that can be restated in a "negative" way, the law of the excluded middle remains true in intuitionistic logic. The difference lies only in "fundamentally positive" propositions. (You can do that proof yourself at https://incredible.pm/ ; it's in section 4, `((A→⊥)→⊥)→⊥` -> `A→⊥`.)
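That ¬¬¬p → ¬p fact is a one-line constructive proof in Lean (sketch; the name is mine):

```lean
-- Triple negation collapses to single negation without any classical axiom.
theorem triple_neg (p : Prop) : ¬¬¬p → ¬p :=
  fun h hp => h (fun hnp => hnp hp)
```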
There's a fun article on this very blog telling a similar story: https://lawrencecpaulson.github.io/2021/11/24/Intuitionism.h...
> Martin-Löf designed his type theory with the aim that AC should be provable and in his landmark Constructive mathematics and computer programming presented a detailed derivation of it as his only example. Briefly, if (∀x : A)(∃y :B) C(x,y) then (∃f : A → B)(∀x : A) C(x, f(x)).
> Spoiling the party was Diaconescu’s proof in 1975 that in a certain category-theoretic setting, the axiom of choice implied LEM and therefore classical logic. His proof is reproducible in the setting of intuitionistic set theory and seems to have driven today’s intuitionists to oppose AC.
> It’s striking that AC was seen not merely as acceptable but clear by the likes of Bishop, Bridges and Dummett. Now it is being rejected and the various arguments against it have the look of post-hoc rationalisations. Of course, the alternative would be to reject intuitionism altogether. This is certainly what mathematicians have done: in my experience, the overwhelming majority of constructive mathematicians are not mathematicians at all. They are computer scientists.
Chinjut
You mean intuitionistic logic, not "intuitive logic".
smj-edison
Oops, just edited. I'm still fairly new to this area, so I keep mixing up my terms :)
vscode-rest
When/why would one prefer to use intuitive logic, given the limitations/roadblocks?
ux266478
Classical logic has plenty of limitations/roadblocks, all logics do. Logic isn't a unified domain, but an infinite beach of structural shards, each providing a unique lens of study.
Classical logic was rejected in computer science because the non-constructive nature made it inappropriate for an ostensibly constructive domain. Theoretical mathematics has plenty of uses to prove existences and then do nothing with the relevant object. A computer, generally, is more interested in performing operations over objects, which requires more than proving the object exists. Additionally, while you can implement evaluation of classical logic with a machine, it's extremely unwieldy and inefficient, and allows for a level of non-rigor that proves to be a massive footgun.
layer8
Classical logic isn’t rejected in computer science. Computer science papers don’t generally care if their proofs are non-constructive, just like in mathematics.
zozbot234
But proving the object exists is still useful, of course: it effectively means you can assume an oracle that constructs this object without hitting any contradiction. Talking about oracles is useful in turn since it's a very general way of talking about side-conditions that might make something easier to construct.
Twey
Intuitionistic logic is a refinement of classical logic, not a limitation: for every proposition you can prove in classical logic there is at least one equivalent proposition in intuitionistic logic. But when your use of LEM is tracked by the logic (in intuitionistic logic a proof by LEM can only prove ¬¬A, not A, which are not equivalent) it's a constant temptation to try to produce a constructive proof that lets you erase the sin marker.
In compsci that's actually sometimes relevant, because the programs you can extract from a ¬¬A are not the same programs you can extract from an A.
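The ¬¬A point can be made concrete in Lean (sketch; the naming is mine): even without classical axioms, the double negation of excluded middle is provable, so it is only stripping the outer ¬¬ that requires LEM.

```lean
-- ¬¬(p ∨ ¬p) holds intuitionistically.
theorem dn_em (p : Prop) : ¬¬(p ∨ ¬p) :=
  fun h => h (Or.inr (fun hp => h (Or.inl hp)))
```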
ogogmad
There are non-computational interpretations of intuitionistic logic too, like every single thing mentioned on this page: https://ncatlab.org/nlab/show/synthetic+mathematics
I think stuff like "synthetic topology", "synthetic differential geometry", "synthetic computability theory", "synthetic algebraic geometry" are the most promising applications at the moment.
It can also find commonalities between different abstract areas of maths, since there are a lot of exotic interpretations of intuitionistic logic, and doing mathematics within intuitionistic logic allows one to prove results which are true in all these interpretations simultaneously.
I'm not sure if intuitionism has a "killer app" yet, but you could say the same about every piece of theory ever, at least over its initial period of development. I think the broad lesson is that the rules of logic are a "coordinate system" for doing mathematics, and changing the rules of logic is like changing to a different coordinate system, which might make certain things easier. In some areas of maths, like modern Algebraic Geometry, the standard rules of logic might be why the area is borderline impenetrable.
zozbot234
These are more like computational-ish interpretations of sheaves, topological spaces, synthetic geometry etc. The link of intuitionistic logic to computation is close enough that these things don't really make it "non-computational". One can definitely argue though that many mathematicians are finding out that things like "expressing X in a topos" are effectively roundabout ways of discussing constructive logic and constructivity concerns.
ngruhn
You're walking down a corridor. After hours and hours you ask, "is it possible to figure out how far it is to the nearest exit?". Your classical logic friend answers: "Yes, either there is no exit, then the answer is infinity. Or there is an exit, then we just have to keep walking until we find it. QED"
This kind of wElL AcTUaLly argument is not allowed in constructive logic.
smj-edison
As far as I understand it, classical mathematics is non-constructive. This means there are quite a few proofs that show that some value exists, but not what that value is. And in mathematics, a proof often depends on the existence of some value (you can't do an operation on nothing).
The thing is, it can be quite useful to always know what a value is, and there are some cool things you can do when you know how to get a value (such as creating an algorithm to get said value). I'm still learning this stuff myself, but intuitionistic logic gets you a lot of interesting properties.
zozbot234
> As far as I understand it, classical mathematics is non-constructive.
It's not as simple as that. Classical mathematics can talk about whether some property is computationally decidable (possibly with further tweaks, e.g. modulo some oracle, or with complexity constraints) or whether some object is computable (see above), express decision/construction procedures etc.; it's just incredibly clunky to do so, and it may be worthwhile to introduce foundations that make it natural to talk about these things.
seanhunter
It’s not intuitive, it’s intuitionist. I’m not saying that to nitpick; it’s just important to make the distinction in this case, because it really isn’t intuitive at all in the usual sense.
The reason you would use it is that it’s an alternative axiomatic framework, so you get different results. The analogy in geometry is that if you exclude the parallel postulate but use all of the other axioms from Euclid, you get hyperbolic geometry. It’s a different geometry and a worthy subject of study. One isn’t right and the other wrong, although people get very het up about intuitionism and other alternative axiomatic frameworks in mathematics like constructivism and finitism.
BigTTYGothGF
> if you exclude the parallel postulate but use all of the other axioms from Euclid you get hyperbolic geometry
No, you don't.
(You need to replace the parallel postulate with a different one)
smj-edison
I think they called it intuitive, because I called it intuitive in my original post, so that's on me :)
seanhunter
I thought of a concrete example of why you might use intuitionist logic. Take for example the “Liar’s paradox”, which centres around a proposition such as
A: this statement (A) is false
In classical logic, statements are either true or false. So suppose A is true. If A is true, then it must therefore be false. But suppose A is false. Well, if it is false, then when it says it is false it is correct, and therefore it must be true.
Now there are various ways in classical logic [1] to resolve this paradox but in general there is a category of things for which the law of the excluded middle seems unsatisfactory. So intuitionist logic would allow you to say that A is neither true nor false, and working in that framework would allow you to derive different results from what you would get in classical logic.
It’s important to realise that when you use a different axiomatic framework the results you derive may only be valid in the alternative axiomatic system though, and not in general. Lean (to bring this back to the topic of TFA) allows you to check what axioms you are using for a given proof by doing `#print axioms`. https://lean-lang.org/doc/reference/latest/ValidatingProofs/...
[1] eg you can say that all statements include an implicit assertion of their own truth. So if I say “2 + 2 = 4” I am really saying “it is true that 2+2=4”. So the statement A resolves to “This statement is true and this statement is false”, which is therefore just false in classical logic and not any kind of paradox.
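Concretely, the axiom check mentioned above looks like this in Lean (a sketch; the theorem name is arbitrary):

```lean
-- A deliberately classical proof: excluded middle via Classical.em.
theorem em_example (p : Prop) : p ∨ ¬p := Classical.em p

-- Reports a dependency on Classical.choice (along with propext and
-- Quot.sound), flagging the proof as non-constructive.
#print axioms em_example
```

A proof built without classical reasoning would report fewer (or no) axioms, which is how you can audit whether a development is actually constructive.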
zodiac
We still care about computation and algorithms even when proving theorems in a classical setting!
For example, imagine I'm trying to prove the theorem "x divides 6 => x != 5". Of course, one way would be to develop some general lemma about non-divisibility, but a different hacky way might be to say "if x divides 6 then x ∈ {1, 2, 3, 6}, split into 4 cases, check that x != 5 holds in all cases". That first step requires an algorithm to go from a given number to its list of divisors, not just an existence proof that such a finite list exists.
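A hedged Lean 4 sketch of that theorem (tactic names may vary by version; here the divisibility hypothesis is unpacked into its witness rather than enumerating divisors):

```lean
-- From x ∣ 6, obtain a witness c with 6 = x * c, then let linear
-- arithmetic rule out the impossible equation 6 = 5 * c over Nat.
example (x : Nat) (h : x ∣ 6) : x ≠ 5 := by
  intro hx
  subst hx
  have ⟨c, hc⟩ := h  -- hc : 6 = 5 * c
  omega
```

The point stands either way: whether you enumerate {1, 2, 3, 6} or unpack the witness, the proof manipulates concrete computational content, not just an existence claim.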
amavect
In constructive logic, a proof of "A or B" consists of a pair (T,P). If T equals 0, then P proves A. If T equals 1, then P proves B. This directly corresponds to tagged union data types in programming. A "Float or Int" consists of a pair (Tag, Union). If Tag equals 0, then Union stores a Float. If Tag equals 1, then Union stores an Int.
In classical logic, a proof of "A or not A" requires nothing, a proof out of thin air.
Obviously, we want to stick with useful data structures, so we use constructive logic for programming.
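In Lean the correspondence is direct: `Or` at the `Prop` level and `Sum` at the data level have the same tagged-union shape. A minimal sketch (`MyOr` and `numeric` are illustrative names):

```lean
-- The Prop-level tagged union: a proof of A ∨ B is either
-- inl (a proof of A) or inr (a proof of B); the constructor is the tag.
inductive MyOr (A B : Prop) : Prop where
  | inl : A → MyOr A B
  | inr : B → MyOr A B

-- The data-level analogue: Sum, i.e. the "Float or Int" pair of tag + payload.
def numeric : Sum Float Int := Sum.inr 42
```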
pron
> Obviously, we want to stick with useful data structures, so we use constructive logic for programming.
I don't know who "we" are, but most proofs of algorithm correctness use classical logic.
Also, there's nothing "obvious" about what you said unless you want proof objects, and why you'd want that is far from obvious in itself.
layer8
You aren’t giving any justification why proofs should necessarily map to data structures.
MarkusQ
We need more of this.
For every "well of course, just...X, that's what everybody does" group-think argument there's a cogent case to be made for at least considering the alternatives. Even if you ultimately reject the alternatives and go with the crowd, you will be better off knowing the landscape.
lmm
Completely disagree; IMO we build far too many frameworks and alternatives (probably because it's fun) instead of a) enhancing the things that already exists to have the thing we want or b) just getting on with the actual work. The whole field would be much better off if we had half as many languages, half as many libraries, half as many build tools...
sdenton4
It depends!
Every time you go off the beaten path, you're locking yourself into less documentation, more bugs (since there's less exploration of the dark corners), and fewer people you can go to for help. If you've got 20+ choices to make, picking the standard option is the right choice on average, so you can just do it and move on. You have finite attention, so doing a research report on every dependency means you're never actually working on the core problem.
The exceptions to this are when a) it becomes apparent that the standard tool doesn't actually fit your use case, or b) the standard tool significantly overlaps the core problem you're trying to solve.
MarkusQ
> You have finite attention, so doing a research report on every dependency means you're never actually working on the core problem.
Reading that took five minutes and gave a good intro to the counter-argument to Curry-Howard-all-the-things monomania. If, having invested those five minutes, Lean still seems like the way to go (for whatever reason), fine. You are making a (closer to) informed choice, and likely better off than if you'd spent those five minutes doubling down on the conventional solution.
Most deviations from the group consensus are mistakes, but all progress comes from seeing past the group consensus. So making frequent small bets on peeking around your blinders is a good strategy.
c-linkage
Which shows the lie of the common engineering trope "use the right tool for the job."
It really should be "use the same tool that everyone else is using so you don't have to decide which tool is the right one -- the herd made that decision for you!"
pstoll
Feels like all the write-ups that point out the shortcomings of, e.g., Python for scientific computing.
Sure, except that once you have a community at critical mass around a reasonably good tool, that trumps most other things. Momentum builds. People write tutorials, explainers, better documentation, etc. It hits escape velocity.
Feels like Lean, with Terence Tao as a strong proponent / cheerleader, is in that space.
Everyone who argues “but language X is better” … may not be wrong, but they are not making the argument that matters. Is it better than the thing everyone else knows and can use and has more people hours going into it to improve it?
Feels like a “worse is better” situation; or maybe “good and popular is good enough”.
pfdietz
I think the point that LLMs should enable effective translation between different formalisms is a good one. So I don't see the choice as being a big issue. This is especially the case here because to a large extent the translations can be checked automatically.
ModernMech
> once you have a community at critical mass around a reasonably good tool, that trumps most other things
This matters a lot less in the age of AI. AI doesn't need a massive number of community-built libraries, it can just write its own. It doesn't need a million tutorials floating out there on the interwebs because unlike most programmers, it will actually read the spec and documentation (tutorials are just projections of the docs/spec anyway). AI doesn't have to avoid languages with no job market because it just needs to do the job at hand, not build a career. This makes it easier for small languages and DSLs to gain usage where they never would have before.
I think AI might spell the end of the language monoculture (top 20 are mostly slight variations on languages circling the same design) that has persisted in programming.
tehjoker
It's the opposite, and this has been recognized for years: AI depends on training data, and this nearly freezes in place the languages that were popular at the time of a model's inception. Hopefully that is not true.
AI needs community libraries if there is to be interoperability and baseline quality between systems. At least at this level of quality and development.
ModernMech
> Hopefully that is not true.
I'm here saying this "PL ossification theory" is probably wrong, that it's not going to be the case at all. Yes, AI depends on training data, but that doesn't imply that AI can only use those programming languages or only reason in languages that existed at the time of their training. In fact the AI is able to reason about new languages the same way humans can -- by drawing inferences to the next closest language that it knows, pattern matching on things that are different from other languages, and also figuring out the semantics by reasoning through execution itself where it doesn't have training examples.
> AI needs community libraries if there is to be interoperability and baseline quality between systems.
Not everyone is looking to do the kind of work you're doing, and that's my point. Up until now, programming languages have been written by and for people who want to do business-y/math-y/science-y things with computers. But there's a huge world out there of other stuff to do with programming languages that has never been considered, because the proposition of making languages for those domains is daunting and outside the wheelhouse of normal people.
Now, DSLs are sprouting up where they have no business existing because of AI, just proliferating all over the place. Some of them are going to find communities (of people, AI, or both) and they will flourish completely apart from the systems we are building now in the tech world. It's not going to be the case that AI writes in Python for the rest of time because it writes in and was trained on Python today.
ianhorn
I remember trying to play around with Coq/Rocq and a few others about 15 years ago, and I couldn’t make heads or tails of them. Not the concepts, but the software. What’s weird about proof assistants/interactive provers is that the “interactive” part makes it a combo of IDE and language and they seem to get pretty tightly coupled in practice. You can’t talk about the language without talking about the environment you use it in.
I’m not the biggest VS Code fan, but a battle-honed extensible IDE used by zillions and maintained by $$$ has proved itself miles ahead of the environments for alternatives. As far as I can tell, the excellent onboarding path that is the Natural Numbers Game is possible because of VS Code’s hackability and ecosystem.
My main concern as I’m learning Lean is that the syntax extensibility seems to be a double-edged sword. Once I’ve learned a language, I want to be able to read code written in it. If everything is in a project’s own DSL, that can get out of hand, but that comes down to community/ecosystem, so I’m crossing my fingers.
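For a taste of that extensibility, a one-liner is enough to introduce project-local notation (a sketch; the `⊕` symbol and precedence are arbitrary choices):

```lean
-- A custom infix notation: readable if you know the project,
-- opaque if you don't.
notation:65 a " ⊕ " b => Nat.add a b

#eval 2 ⊕ 3  -- 5
```

Multiply this by an entire project-specific tactic and syntax layer and you get both the power and the readability risk.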
loglog
"I believe that almost anything that has been formalised today in any system could have been formalised in AUTOMATH. Its main drawbacks were its notation, which really was horrible, and its complete lack of automation. Proofs were long and unreadable." That's like saying that anything that could be programmed today in your modern language of choice could have been programmed 50 years ago in assembly. Technically yes, economically no.
c7b
Lean isn't the most loved proof assistant by mathematicians, it's not the most suited to formal verification of software, it just sort of works for both. Yet it's got the thing that's arguably the hardest to achieve, momentum. Compounded by the fact that in the AI age, the amount of publicly available human-written code directly affects how well agents can produce new code.
rzerowan
One of those names that forces a double take when seen disconnected from context:
'Lean or purple drank is a polysubstance drink used as a recreational drug. It is prepared by mixing prescription-grade cough or cold syrup containing an opioid drug '
proving that one of the hardest problems in CS, 'naming things', still keeps on keeping on.
bradleyy
The two hardest problems in CS:
* naming
* cache invalidation
* off by one errors
InkCanon
Wait till you hear what Rocq was originally called!
still_grokking
Can someone ELI5 that for me?
Is this the mathematician's variant of "my language is better than your language", or what does this post actually discuss? Something fundamental in the philosophy underpinning things, or just the way to express them?
BlanketLogic
Paulson is a lead developer of Isabelle, a proof assistant that is not based on dependent types.
> Is this the mathematician's variant of "my language is better than your language",
Almost. A closer analogy is comparing paradigms, say OOP vs functional programming.
Isabelle is different from the big three: Rocq, Lean, and Agda. The latter three have propositions as types: the type of your function is the theorem statement, and the function body is the proof. Isabelle works differently. The author makes a convoluted argument that (a) one doesn't have to stick to the currently popular paradigm and (b) in conjunction with AI, Isabelle offers distinct benefits.
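That "type is the theorem, body is the proof" idea looks like this in Lean (a minimal sketch; the theorem name is arbitrary):

```lean
-- The type `p ∧ q → q ∧ p` is the theorem statement;
-- the lambda term in the body is the proof.
theorem and_swap {p q : Prop} : p ∧ q → q ∧ p :=
  fun h => ⟨h.2, h.1⟩
```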
For the HN crowd who are generally programmers but not necessarily mathematicians, it’s more relevant to consider the programming side of things. There is a very good book (one I haven’t finished unfortunately) that covers Lean from a functional programming perspective rather than proving mathematics perspective: https://leanprover.github.io/functional_programming_in_lean/
I am learning Lean myself so forgive me as I have an overly rosy picture of it as a beginner. My current goal is to write and prove the kind of code normal programmers would write, such as real-world compression/decompression algorithms as in the recent lean-zip example: https://github.com/kiranandcode/lean-zip/blob/master/Zip/Nat...