
I get that Homotopy Type Theory is an alternative to set theory that is based on topology. What's less clear from the marketing is whether it's just yet another formulation of something equivalent (set theory / standard type theory / category theory), or whether, like any really interesting theory, it leads to different predictions or has properties that make it superior in some ways. Until that's addressed I won't spend brain energy on learning it.

As an example, theorem provers are a really interesting topic. I heard that Homotopy Type Theory has been used in some theorem provers, but I couldn't find whether it led to any significant improvements. As an AGI researcher, should I learn HoTT?



Theorem provers using constructive logic have the nice property that proofs are programs: if you have a proof that A implies B, and you have an A, you can produce a B. Classical mathematics is usually based on set theory plus the axiom of choice (which implies the law of excluded middle), allowing non-constructive proofs (which don't compute). If the computer knows "not not X" implies "X" only via the law of excluded middle, it can't necessarily produce an "X" from a "not not X".
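In Coq-style syntax, the constructive case is literally a program, while the classical principle has to be postulated. A minimal sketch, not tied to any particular development:

    (* Constructively, modus ponens is just function application:
       a proof of A -> B applied to a proof of A computes a proof of B. *)
    Definition modus_ponens (A B : Prop) (pf : A -> B) (a : A) : B := pf a.

    (* Double-negation elimination is not provable constructively;
       postulating it yields a term that is stuck under evaluation. *)
    Axiom dne : forall A : Prop, ~ ~ A -> A.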

The most popular theorem provers (at least among computer scientists) are based on some form of Martin-Löf type theory. In these theories, some very helpful proof techniques often used in classical mathematics, like function extensionality and quotients, don't compute. While they can be added as axioms, doing so breaks the computational property of the proof. What HoTT brings is a type theory with an improved notion of equality in which such things do compute, based on the notion of transport along equivalences.
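For instance, in plain Coq function extensionality has to be assumed. A minimal sketch:

    (* Function extensionality: pointwise-equal functions are equal.
       Provable in HoTT-style theories, but only an axiom in plain
       MLTT/Coq, where any proof using it contains a stuck term
       that no longer reduces to a canonical value. *)
    Axiom funext : forall (A B : Type) (f g : A -> B),
        (forall x : A, f x = g x) -> f = g.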

If I have two types

    Inductive nat : Set :=
      | O : nat
      | S : nat -> nat.

    Inductive natural_number : Set :=
      | Zero : natural_number
      | Successor : natural_number -> natural_number.
and I have proved something about nat, HoTT allows me to automatically convert that proof into a proof about natural_number (after showing that nat is isomorphic to natural_number).
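The isomorphism itself is easy to exhibit. A sketch, assuming the two inductive definitions above:

    Fixpoint to_nn (n : nat) : natural_number :=
      match n with
      | O => Zero
      | S m => Successor (to_nn m)
      end.

    Fixpoint from_nn (n : natural_number) : nat :=
      match n with
      | Zero => O
      | Successor m => S (from_nn m)
      end.

    (* One of the two round-trip laws; the other is symmetric. *)
    Lemma from_to : forall n : nat, from_nn (to_nn n) = n.
    Proof. induction n; simpl; congruence. Qed.

With univalence, such an equivalence becomes an equality of types, and any proof about nat transports across it.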


> The most popular theorem provers (at least among computer scientists) are based on some form of Martin-Löf type theory.

Are they? Judging by posts on HN, I 100% agree. But judging by the number of users and the amount of verified mathematics and code, I'm not so sure.

Maybe things have changed recently, but I believe that systems based on simple type theory (HOL and Isabelle) and systems based on set theory (Mizar, Metamath) have much larger repositories of verified mathematics and verified programs than those based on dependent type theories.


I don't have hard numbers to prove it. But I think it's significant that https://deepspec.org/main, the largest recent formal verification project (at least in the US), chose to use Coq, which is based on type theory. There's also Coq-verified code in Chrome (http://adam.chlipala.net/papers/FiatCryptoSP19/FiatCryptoSP1...), a claim that can't be made for any other theorem prover (as far as I'm aware).


Yep, and you could even prove "nat" is isomorphic to a type of binary numbers, say "Bin := List (0 | 1)" with leading zeros ruled out (or Coq's canonical "N"), and transport proofs there too.


How do you prove isomorphism between those two types?


I wouldn't expect homotopy type theory to lead us any closer to artificial general intelligence, but who knows. Maybe this will lead to some opportunities for AI to interact more closely with mathematics in the end.

HoTT, and type theories in general, do have some apparent advantages when it comes to making it practical for humans to express proofs in a way which computers can understand. But the whole practice of doing that is not yet something which could be called extremely well-developed, so everything is to some degree a big experiment.

As a synthetic approach to homotopy theory, it's a beautiful piece of work, the latest approach in a line of approaches to that. You could say that HoTT is to traditional homotopy theory what axiomatic geometry (where lines are objects unto themselves) is to Cartesian geometry (where lines are collections of points). Homotopy theorists have built various abstractions for capturing that, and HoTT gives a very satisfying one.

As an informal foundation for thinking about mathematics, it's intriguing, and it certainly has some new ideas about how to approach the definitions of various familiar mathematical objects. That's especially true for the sorts of things normally defined using quotients, which come out looking quite a bit different. Rather than forming equivalence classes in the usual way, we can construct "higher inductive types", which guarantee certain "paths" or "proofs of equality" between particular points/terms. These come equipped with an "induction principle" for defining maps out of the type (or, what is the same activity, proving properties about it), and that principle insists we say where those paths are sent in the destination, basically guaranteeing that certain equalities-made-tangible are conserved.
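As a concrete example, the circle is the classic higher inductive type. This is hypothetical syntax, not legal in plain Coq (cubical systems support something like it natively):

    (* Hypothetical HIT syntax: one point constructor and one *path*
       constructor. The induction principle for Circle requires saying
       both where base goes and where loop goes in the destination. *)
    Inductive Circle : Type :=
      | base : Circle
      | loop : base = base.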

Type theories generalise the propositions of logic and the sets of set theory into what they call "types", and those types give each mathematical statement an interpretation as a sort of collection of its own proofs, which was formerly usually thought of as a set. HoTT adds a kind of geometry to this collection, putting paths between proofs that are in some sense "equal", and that construction can be iterated to give paths-between-paths (surfaces), and so on to higher dimensions.

HoTT shows us that we can divide the types of type theory into levels of which the first few are propositions, sets, and (categorical) groupoids.
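The first couple of these levels can be written down directly. A minimal sketch:

    (* A (mere) proposition has at most one proof. *)
    Definition IsProp (A : Type) : Type := forall x y : A, x = y.

    (* A set has at most one path between any two of its elements. *)
    Definition IsSet (A : Type) : Type :=
      forall (x y : A) (p q : x = y), p = q.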

I would say most of HoTT grew out of the consequences of noticing this interpretation of what was already a very beautiful and "natural" type theory (Martin-Löf type theory, MLTT), whose original intent was merely to serve as a foundation for mathematics. HoTT gives that theory a more general model than it previously had. Things which, from a purely logical perspective, might have been seen as quirky idiosyncrasies to be put up with in exchange for the system's other very nice theoretical properties become geometrically obvious once you think in terms of homotopy types of spaces.

Unfortunately, plain Martin-Löf type theory isn't quite adequate on its own. There is a fairly natural axiom, the univalence axiom, that falls out as an expectation from this interpretation of types as homotopy types. It gives us the nice property that types which are in some sense equivalent (or isomorphic) have these "paths" between them, making them "equal" in a containing universe type. This immediately gives us the ability to transport theorems about one construction to another, which is formally not something that can be done in first-order logic and set theory. Formally, you would have to prove the theorem again for each isomorphic thing, because first-order logic has no way of knowing that mathematicians only allow properties that respect isomorphism in the first place.
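A rough sketch of the statement, using a simplified record notion of equivalence (the real definition is more careful, and the real statement says the canonical map from equalities to equivalences is itself an equivalence, so this is illustrative only):

    Record Equiv (A B : Type) : Type := {
      to      : A -> B;
      from    : B -> A;
      to_from : forall b : B, to (from b) = b;
      from_to : forall a : A, from (to a) = a
    }.

    (* Univalence, postulated: equalities between types correspond to
       equivalences between them. As an axiom it does not compute. *)
    Axiom univalence : forall A B : Type, Equiv (A = B) (Equiv A B).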

The univalence axiom is also a problem, however. Axioms spoil the computational properties of a type system, as they are stuck terms under evaluation. But univalence is only just barely not a theorem. So there are now a bunch of new type theories (cubical type theories) which aim to restructure the rules of the type theory so that univalence becomes a result obtained directly from the logical rules, not an axiom. I'm not sure a clear winner has fallen out of that work yet, but there's a lot of promising work, and fairly practical systems, at least from the perspective of a type theorist or computer scientist, if not the average mathematician.

For the average mathematician, there is some reasonable hope that research along these lines might provide a formal system that can actually be used, rather than merely thought about. Most proofs in modern mathematics are not formalised in the sense that a machine could check them; they're written with the intent of convincing other human mathematicians that a proof is formalisable in principle. Doing otherwise while working under something like ZFC, or what is perhaps the real culprit, first-order logic, would be not only 99.9% boring but also incredibly time-consuming. At the same time, there's a lot of frustration with mathematics branching out into smaller and smaller areas: the set of possible referees for some papers is shrinking to the point that it can be hard to know whether things have been sufficiently checked.

There's some hope, though, that not having to repeat proofs over and over for isomorphic things in HoTT, together with type theories in general being far more usable to begin with, might mean that an actually-ergonomic formal system for the average mathematician is within reach.

Will any of that affect how we think about artificial intelligence? Well, if the project to turn mathematicians on to computerised proofs is successful, it might help turn a larger fraction of mathematics into something that is more amenable to applying AI-based search to, and provide such AI systems with lots of worked examples of human mathematics.


I think a good place to get a sense of the motivation is https://existentialtype.wordpress.com/2011/03/27/the-holy-tr... / https://ncatlab.org/nlab/show/computational+trinitarianism

Let me break it down even more basically:

1. There's a sense in many corners of academia that there's too much new but ephemeral work, and not enough librarian work codifying and polishing existing ideas for better presentation. (See https://hapgood.us/2015/10/17/the-garden-and-the-stream-a-te... for a good digression on that.)

2. Category theory, and related fields, emerged as a conceptual framework to organize many disparate parts of mathematics into a tighter whole, addressing such concerns (though it predates the modern form of those concerns).

Category theory however still came about as "regular" math with pen, paper, thinking, and informal natural language proofs.

3. A bunch of computer scientists studying programming languages become interested in category theory and type theory. They pretty quickly become the main people pursuing both of those fields. They also decide the two fit together quite well.

(If you are a Platonist, this is not surprising, as both are very general. If you are a Conwayist, this is not surprising because when the same people are interested in two things, either those things will become increasingly intertwined, or, if they really are incompatible, the group of people will disaggregate.)

4. Programming language theorists are not content with just a common mental framework for all math, they also want computer-codified proofs and computer-aided workflows. (See https://en.wikipedia.org/wiki/Automath for the first major attempt.) Type theory, being far more mechanistic/syntactic than category theory, is their obvious go-to.

------

5. Enter Vladimir Voevodsky. Voevodsky is a Fields Medalist, i.e. a bona fide exceptional mathematician by mainstream standards who can finally lend some legitimacy to the efforts of these mathematician/computer scientists, who often raise eyebrows. His specialty lies within topology, which has always had a friendlier relationship with category theory, so he picks all of this up easily; an experience with a bad proof of his that no one else caught leads him to think formal methods are needed.

6. HoTT was sparked by some insight from Voevodsky. The intent is actually two-fold: both to fix some annoyances with automated theorem provers, and to make some progress on advanced topology. It just so happens that the same tricks looked promising for both problems.

So HoTT in particular is useful for cutting-edge topology research (that's few of us) or better human–computer interaction (that's far more of us). For the latter, just getting on board with category theory and type theory before the HoTT special sauce is most of the mental work. HoTT is not a big additional step if others are doing the heavy lifting of figuring out what it is in the first place.

7. During all of this, the US military decided that cybersecurity is a problem, and that programming language theorists had a compelling story on how to do defense more aggressively than traditional security types. That has injected a bunch of money into this stuff whereas before there was hardly any (by computer scientist standards).



