Hacker News | foooobar's comments

What simplifications do you have in mind?

I think one of the reasons is that Lean's approximation of defeq still works well enough in practice. As mentioned by others, you never really run into the kind of counterexamples that break theoretical properties, but instead into examples where checking defeq is unacceptably slow, which remains an issue even when defeq is decidable.

From my understanding, CiC is used because it allows using a unified language for proofs, specifications, programs and types, which greatly simplifies the system. For example, in Lean 4, more or less the entire system is built on the same core term language: proofs, propositions / specifications, programs, tactics (i.e. metaprograms), the output of tactics and even Lean 4 itself are all written in that same language, even if they don't always use the same method of computation. In my eyes, for the purposes of writing and checking proofs, CiC is quite complex; but if you want to capture all of the above and have them all checked by the same kernel, CiC seems like a fairly minimal metatheory to do so, even with undecidable type checking.
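As a small illustration of that unified term language, here is a sketch in Lean 4 syntax: a program and a specification about it live in the same language, and the proof is checked by the kernel via definitional equality.

```lean
-- A program, written in the same core term language as proofs.
def double (n : Nat) : Nat := n + n

-- A specification about `double`, proved by `rfl`: the kernel accepts it
-- because `double n` reduces definitionally to `n + n`.
theorem double_spec (n : Nat) : double n = n + n := rfl
```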


For example, if you are willing to accept undecidable typechecking, you can define inductive types in a much simpler theory (inductive types and dependent pattern matching are by far the most complex parts of CiC): https://homepage.cs.uiowa.edu/~astump/papers/from-realizabil... .

There are a few other examples, but this is the one that immediately sprang to mind. The complexity of CiC is very much about decidable typechecking; it's not fundamental to writing a sound dependently typed proof assistant. And anyone who has actually worked on a kernel for CiC knows that while it is "minimal" compared to what people want to do with it, no efficient kernel is nearly as minimal as something like λP2 with dependent intersections.


Thanks! But does that not already answer your question as to why Lean would use CIC and not a simpler metatheory?

That particular paper is from 2016, and the type theory presented there still lacks a number of features from CIC, with those particular extensions listed as further research. Work on Lean started in 2013, and the kernel in particular has not been touched in a long time, for good reason. Around that time, CIC was already battle-proven, with popular and well-tested reference implementations to learn from. Since then, all the fancy work has been done on the system surrounding the kernel.

I believe one of the goals at the start was that the kernel should be sufficiently simple and small that it is easy to ensure it is correct. Despite the apparently redundant complexity you point out, the implementation still seems simple enough to meet this goal, as no proof of false resulting from a kernel bug has been reported yet. CIC being well-tested surely also contributed to that.

Another reason is that, as I understand it, it was not always obvious that Lean would break decidability of defeq. Afaik proof irrelevance was only baked in with Lean 3, while it was optional before that. By that point, I don't see much reason to rewrite an entire kernel that has worked fine so far.
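For reference, proof irrelevance is definitional in current Lean, so a sketch like the following (Lean 4 syntax) is accepted by `rfl` alone:

```lean
-- Any two proofs of the same proposition are definitionally equal;
-- the kernel accepts this by proof irrelevance, without any computation.
example (p : Prop) (h₁ h₂ : p) : h₁ = h₂ := rfl
```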

I also vaguely remember that there were arguments related to performance in favor of the current system (e.g. versus W-types), but I don't really understand these things well enough to make that point with confidence :)


How are the ergonomics of a proof assistant based on λP2? Lean does a lot of elaboration of metavariables and typeclass instances and such before handing things off for checking by the kernel -- I'm curious about whether λP2's undecidability is decidable enough, so to speak.


That's currently an open research question, but the answer most likely is that the ergonomics could be made just as good, considering that elaboration of metavariables, typeclass instances, etc. already uses possibly-undecidable algorithms distinct from those used in the kernel in just about every proof assistant I'm familiar with (Lean included).

In terms of stuff that does go more or less directly through the kernel, some follow-up papers to the one I linked discuss how to translate inductive definitions etc. automatically into a form accepted by the simpler kernel, so I am even more confident that this would not present a problem.


The fact remains though that Lean 3 worked well enough to do Scholze level mathematics. I never quite know what to make of people saying "if the designers had done it in some other way, it would have been better". Of course if someone comes along doing it another way, and then they go on to also achieve Scholze level mathematics, then I will be all ears.


That's a very strange line of argument. Many great programs have been written in C. That doesn't mean that there is no way to improve on C. Perhaps to level the playing field, this hypothetical competitor should be given the resources of Microsoft Research as well :)


I think Kevin is asserting that to make strong comparative claims about the quality of either system, it's good to have a large and functionally similar body of work to compare the systems with, not that there is no way to improve on Lean's core :)

MSR funds a single developer. The Lean community consists of at least 30 or so active contributors right now, with many more having contributed in the past. Most folks work in academia. Much of Lean's success is absolutely due to the work of that single developer, but I don't think that the playing field is uneven due to MSR's funding.


If you're interested in interactive theorem proving with Lean (and not condensed mathematics), the Lean community landing page is a good place to start. https://leanprover-community.github.io/

Especially the "Natural Number Game" under "Learning resources" has been successful in teaching folks the very basics for writing proofs. Once finished, a textbook like "Theorem Proving in Lean" can teach the full basics. Feel free to join the Lean Zulip at any point and ask questions at https://leanprover.zulipchat.com/ in the #new members stream.

Mathlib has plenty of contributions from interested undergrads :)


Some constructivists may also take issue with proof irrelevance (and the resulting loss of normalization [1] or its incompatibility with HoTT), which you can only really avoid by avoiding Prop.

[1] https://arxiv.org/abs/1911.08174


There is also the problem that HoTT and equality reflection are incompatible [1].

[1] https://www2.mathematik.tu-darmstadt.de/~streicher/barc_corr...


I don't think anyone is trying to sell Lean as a constructive system. The current developers certainly don't think of it that way, further evidenced by the fact that the typical way of doing computation in Lean does not involve definitional reduction, but using `#eval` with additional low level code for performance. Proof irrelevance (and quotients) were adopted with that in mind.
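To illustrate the distinction, here is a sketch assuming current Lean 4: `#eval` runs compiled code, while `#reduce` asks the kernel for definitional reduction of the same term, which is far slower.

```lean
-- Naively recursive Fibonacci, defined once in the shared term language.
def fib : Nat → Nat
  | 0 => 0
  | 1 => 1
  | n + 2 => fib n + fib (n + 1)

#eval fib 30      -- compiled evaluation: fast, prints 832040
-- #reduce fib 30 -- kernel reduction of the same term: vastly slower
```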


As far as I can tell, this is not quite true. Tactic proofs aside, you can also write functional term-mode proofs and declarative "structured" proofs in the sense of Isar. Theorem Proving in Lean introduces that style, so most people who use Lean are familiar with it; see e.g. https://leanprover.github.io/theorem_proving_in_lean/proposi...

Additionally, even in tactic proofs you can use tactics like `have`, `suffices`, etc. to manipulate the structure of the proof and make subgoals explicit like you would usually do in the structured style. In practice, people in Lean still prefer imperative tactic proofs with the option to write in a structured/declarative style where reasonable. The full "structured" mode does not see much use, since it is quite verbose. As a result, Lean 4 will not support this style out of the box anymore, but you could still add it yourself using the macro system.
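A sketch (Lean 4 syntax) of the two styles mentioned: a term-mode proof, and a tactic proof that uses `have` to expose its structure explicitly.

```lean
-- Term-mode proof: the proof is just an expression.
example (p q : Prop) (hp : p) (hq : q) : p ∧ q := ⟨hp, hq⟩

-- Tactic proof, structured with `have` to name an intermediate fact.
example (p q : Prop) (hp : p) (hq : q) : p ∧ q := by
  have h : p := hp
  exact ⟨h, hq⟩
```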


Which tutorial did you use? As someone coming from computer science, I found https://leanprover.github.io/theorem_proving_in_lean/ very approachable as an introduction to ITP in general.


That's the one I meant, I think. It is nice, but as I said, the impression it left me with was "you could have told me how the approach generally works sooner".


That book wants to build the foundation (dependent types) before making the big claim in chapter 3.

It's a bit weird because, as Haskell shows, you don't need dependent types for basic theorem proving (though dependent types do give a lot of useful power).


Well if you just have Haskell 2010 types, you're talking about really really basic theorem proving since all you have is propositional logic. The most interesting thing I can think of to prove with Haskell 2010 types is (the constructive version of) de Morgan's laws. Almost all other interesting mathematical statements are out of reach.
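For reference, the constructively valid de Morgan law mentioned here fits in a few lines; a sketch in Lean 4 syntax rather than Haskell, for consistency with the rest of the thread:

```lean
-- The constructively valid de Morgan law: ¬(p ∨ q) ↔ (¬p ∧ ¬q).
-- (The other direction, ¬(p ∧ q) → ¬p ∨ ¬q, needs classical logic.)
example (p q : Prop) : ¬(p ∨ q) ↔ ¬p ∧ ¬q :=
  ⟨fun h => ⟨fun hp => h (Or.inl hp), fun hq => h (Or.inr hq)⟩,
   fun ⟨hnp, hnq⟩ h => h.elim hnp hnq⟩
```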


Because the book is a great introduction to theorem proving in general. All those concepts learned translate nicely to other theorem provers, and especially Lean 4.

Lean 4 has already gone public: https://github.com/leanprover/lean4/

Also, afaik it won't differ from Lean 3 to the point where learning Lean 3 is useless.


Buzzard asked Wiles himself, who stated "that he would not want to sit an exam on the proof of Langlands–Tunnell".

