<div dir="ltr"><div dir="ltr">Dear all, <div><br></div><div>A long time ago I wrote a mathematician's perspective on why dependent type theory is useful for formalizing mathematics:</div><div><br></div><div> <a href="http://bulletin.eatcs.org/index.php/beatcs/article/view/81">http://bulletin.eatcs.org/index.php/beatcs/article/view/81</a></div><div><br></div><div>When I wrote it, I was a newbie to DTT, on sabbatical in France working on the formalization of the Feit Thompson theorem. (It was like a school report, "what I did on my sabbatical.")</div><div><br></div><div>More recently I have given some survey talks along these lines:<br></div><div><br></div><div> <a href="http://www.andrew.cmu.edu/user/avigad/Talks/fields_type_theory.pdf">http://www.andrew.cmu.edu/user/avigad/Talks/fields_type_theory.pdf</a></div><div> <a href="http://www.andrew.cmu.edu/user/avigad/Talks/san_diego.pdf">http://www.andrew.cmu.edu/user/avigad/Talks/san_diego.pdf</a></div><div><br></div><div>The first was meant for a general audience of mathematicians and logicians, the second for a general audience of mathematicians. There is also a discussion of DTT starting on slide 40 here, with a nice quotation from a core mathematician, Sébastien Gouëzel:</div><div><br></div><div> <a href="http://www.andrew.cmu.edu/user/avigad/Talks/london.pdf">http://www.andrew.cmu.edu/user/avigad/Talks/london.pdf</a></div><div><br></div><div>You can still find the quotation on his web page. There is an excellent paper by Kevin Buzzard, Johan Commelin, and Patrick Massot, on formalizing perfectoid spaces:</div><div><br></div><div> <a href="https://arxiv.org/pdf/1910.12320.pdf">https://arxiv.org/pdf/1910.12320.pdf</a></div><div><br></div><div>The last paragraph of the first page addresses the importance of DTT.</div><div><br></div><div>The short story is that some sort of dependent type theory is useful, and possibly indispensable, for formalizing mathematics. But it also brings a lot of headaches, so it is best to use it judiciously. Mathematics don't care about inductively defined structures beyond the natural numbers, and don't really care much for elaborate forms of induction and recursion. But dealing with all kinds of structures and morphisms between them is absolutely essential, and often these structures depend on parameters. Any reasonable system for formalizing mathematics has to provide mechanisms to support reasoning about them.</div><div><br></div><div>Best wishes,</div><div><br></div><div>Jeremy</div><div><br></div><div><br></div><div><br></div><div><br></div><div> </div><div><br></div><div> </div></div></div><br><div class="gmail_quote"><div dir="ltr" class="gmail_attr">On Wed, Mar 4, 2020 at 6:29 AM Dominique Unruh <<a href="mailto:unruh@ut.ee">unruh@ut.ee</a>> wrote:<br></div><blockquote class="gmail_quote" style="margin:0px 0px 0px 0.8ex;border-left:1px solid rgb(204,204,204);padding-left:1ex">
<div>
<p>Hi,</p>
<p>following up on Thorsten's comment about the term "dependent
      type theory", I would like to add a few observations from this
      discussion:<br>
    </p>
<ul>
<li>I think the word "type theory" itself is unclear in this
context. At least several of the emails seem to use different
but related ideas of what that means:</li>
<ul>
<li>It could mean "something where everything has a type" (i.e.,
not the usual ZF). Then HOL would be type theory. (Thorsten's
email quoted below makes sense in that setting because HOL
avoids the problem described there.)<br>
</li>
<li>It could mean the above plus dependent types (but not
            necessarily the Curry-Howard interpretation from the next item);
            see the small sketch after this list.<br>
          </li>
<li>It could mean "a system where we use the same language for
propositions and proofs via Curry-Howard isomorphism" (I
believe this will then also need dependent types since
otherwise the proof terms are not powerful enough)<br>
</li>
<li>It could mean a system with strong normalization (so that
            everything terminates); at least some of the answers seem to
            treat this as part of the meaning of "type theory".</li>
</ul>
</ul>
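    <p>To make the second point concrete: a dependent type is a type that can
      mention a value. A minimal illustrative sketch (here in Lean 4, with
      made-up names; nothing hinges on the details):</p>
    <pre>
-- `Vec α n` is a family of types indexed by a value n : Nat,
-- i.e. the length of a vector is part of its type.
inductive Vec (α : Type) : Nat → Type where
  | nil  : Vec α 0
  | cons : {n : Nat} → α → Vec α n → Vec α (n + 1)

-- The type of `v` records that it has exactly two elements.
def v : Vec Bool 2 := Vec.cons true (Vec.cons false Vec.nil)
</pre>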
<p>Of course, there are many interactions between the different
      concepts, but if we talk about the costs or benefits of adopting
      type theory, it becomes quite important which aspect of it we are
      adopting. (E.g., if we have, say, a good reason why we need the
      second point, but a big cost for getting the fourth point, then it
      is important not to just say "type theory has the following good
      reason and the following cost".)<br>
    </p>
<p>Maybe when discussing *why* type theory, we should preface our
      answers with what we mean by type theory (e.g., one of the above).
      Otherwise it will be very confusing (at least to me).</p>
<p>Another question is the context in which we want to use it:</p>
<ul>
<li>Programming (with all the associated things like verification,
type checking, etc.)</li>
<li>Math<br>
</li>
</ul>
<p>These have different requirements, so making explicit which
domain we are thinking of in our answer might make things clearer
as well.</p>
<p>Just my personal thoughts, but I hope they may help to add some
clarity to the discussion.<br>
</p>
<p>Best wishes,<br>
Dominique.</p>
<p><br>
</p>
<p><br>
</p>
<div>On 3/4/20 11:42 AM, Thorsten Altenkirch
wrote:<br>
</div>
<blockquote type="cite">
<div>
<p class="MsoNormal"><span>First
of all I don’t like the word “dependent type theory”.
Dependent types are one important feature of modern Type
Theory but hardly the only one.<u></u><u></u></span></p>
<p class="MsoNormal"><span><u></u> <u></u></span></p>
<p class="MsoNormal">To me the most important feature of Type
Theory is the support of abstraction in Mathematics and
computer science. Using types instead of sets means that you
can hide implementation choices which is essential if you want
to build towers of abstraction. Set theory fails here badly.
Just as a very simple example: in set theory you have the
notion of union, so for example
<u></u><u></u></p>
<p class="MsoNormal"><u></u> <u></u></p>
<p class="MsoNormal" style="text-indent:36pt">{0,1} \cup
{0,1,2,3} = {0,1,2,3}<u></u><u></u></p>
<p class="MsoNormal" style="text-indent:36pt"><u></u> <u></u></p>
<p class="MsoNormal">However, if we change the representation of
the first set and use lets say {true,false} we get a different
result:<u></u><u></u></p>
<p class="MsoNormal"><u></u> <u></u></p>
<p class="MsoNormal" style="text-indent:36pt">{true , false}
\cup {0,1,2,3} = {true,false,0,1,2,3}<u></u><u></u></p>
<p class="MsoNormal"><span><u></u> <u></u></span></p>
<p class="MsoNormal"><span>This
means that \cup exposes implementation details because the
results are not equivalent upto renaming. In Type Theory we
have the notion of sum, sometimes called disjoint union,
which is well behaved<u></u><u></u></span></p>
<p class="MsoNormal"><span><u></u> <u></u></span></p>
<p class="MsoNormal" style="text-indent:36pt"><span lang="DE">{0,1}
+ {0,1,2,3} = {in1 0,in1 1,in2 0,in2 1,in2 2,in2 3}<u></u><u></u></span></p>
<p class="MsoNormal" style="text-indent:36pt"><span lang="DE"><u></u> <u></u></span></p>
<p class="MsoNormal" style="text-indent:36pt"><span lang="DE">{true
, false} + {0,1,2,3} = {in1 true,in1 false ,in2 0,in2 1,in2
2,in2 3}<u></u><u></u></span></p>
<p class="MsoNormal"><span lang="DE"><u></u> <u></u></span></p>
<p class="MsoNormal">Unlike \cup, + doesn’t reveal any
implementation details it is a purely structural operation.
Having only structural operations means that everything you do
is stable under equivalence, that is you can replace one
object with another one that behaves the same. This is the
essence of Voevodsky’s univalence principle.<u></u><u></u></p>
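      <p class="MsoNormal">For instance, a minimal illustrative sketch in Lean 4 (the two-element type Two is made up and plays the role of {0,1}):</p>
      <pre>
-- Sum tags every element with the summand it came from, so the result does
-- not depend on how the summands happen to be encoded.
inductive Two where
  | zero
  | one

example : Sum Two (Fin 4)  := Sum.inl Two.zero   -- corresponds to "in1 0"
example : Sum Bool (Fin 4) := Sum.inl true       -- corresponds to "in1 true"
example : Sum Bool (Fin 4) := Sum.inr 2          -- corresponds to "in2 2"
</pre>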
<p class="MsoNormal"><u></u> <u></u></p>
<p class="MsoNormal">There are other nice aspects of Type
Theory. From a constructive point of view (which should come
naturally to a computer scientists) the proporsitions as types
explanation provides a very natural way to obtain “logic for
free” and paedagogically helpful since it reduces logical
reasoning to programming.<u></u><u></u></p>
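      <p class="MsoNormal">As a tiny illustration (a Lean 4 sketch; any system along these lines would do): the proposition is a type, and its proof is an ordinary functional program inhabiting that type.</p>
      <pre>
-- Propositions as types: proving "A ∧ B → B ∧ A" is just writing the
-- program that swaps the two components of a pair.
theorem and_swap (A B : Prop) : A ∧ B → B ∧ A :=
  fun ⟨a, b⟩ => ⟨b, a⟩
</pre>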
<p class="MsoNormal"><u></u> <u></u></p>
<p class="MsoNormal">There are performance issues with
implementations of Type Theory, however, in my experience
(mainly agda) the execution of functions at compile time isn’t
one of them. In my experience the main problem is to deal with
a loss of sharing when handling equational constraints which
can blow up the time needed for type checking. I think this is
an engineering problem and there are some suggestions how to
fix this.<u></u><u></u></p>
<p class="MsoNormal"><u></u> <u></u></p>
<p class="MsoNormal">Thorsten<u></u><u></u></p>
<p class="MsoNormal" style="text-indent:36pt"><u></u> <u></u></p>
<p class="MsoNormal"><span><u></u> <u></u></span></p>
<p class="MsoNormal"><span><u></u> <u></u></span></p>
<div style="border-right:none;border-bottom:none;border-left:none;border-top:1pt solid rgb(181,196,223);padding:3pt 0cm 0cm">
<p class="MsoNormal" style="margin-left:36pt"><b><span style="font-size:12pt;color:black">From:
</span></b><span style="font-size:12pt;color:black"><a href="mailto:coq-club-request@inria.fr" target="_blank">"coq-club-request@inria.fr"</a>
<a href="mailto:coq-club-request@inria.fr" target="_blank"><coq-club-request@inria.fr></a> on behalf of Jason Gross
<a href="mailto:jasongross9@gmail.com" target="_blank"><jasongross9@gmail.com></a><br>
<b>Reply to: </b><a href="mailto:coq-club@inria.fr" target="_blank">"coq-club@inria.fr"</a>
<a href="mailto:coq-club@inria.fr" target="_blank"><coq-club@inria.fr></a><br>
<b>Date: </b>Tuesday, 3 March 2020 at 19:44<br>
<b>To: </b>coq-club <a href="mailto:coq-club@inria.fr" target="_blank"><coq-club@inria.fr></a>, agda-list
<a href="mailto:agda@lists.chalmers.se" target="_blank"><agda@lists.chalmers.se></a>,
<a href="mailto:coq+miscellaneous@discoursemail.com" target="_blank">"coq+miscellaneous@discoursemail.com"</a>
<a href="mailto:coq+miscellaneous@discoursemail.com" target="_blank"><coq+miscellaneous@discoursemail.com></a>, lean-user
<a href="mailto:lean-user@googlegroups.com" target="_blank"><lean-user@googlegroups.com></a><br>
<b>Subject: </b>[Coq-Club] Why dependent type theory?<u></u><u></u></span></p>
</div>
<div>
<p class="MsoNormal" style="margin-left:36pt"><u></u> <u></u></p>
</div>
<div>
<p class="MsoNormal" style="margin-left:36pt">I'm in the
process of writing my thesis on proof assistant performance
bottlenecks (with a focus on Coq), and there's a large class
of performance bottlenecks that come from (mis)using the
power of dependent types. So in writing the introduction, I
want to provide some justification for the design decision
of using dependent types, rather than, say, set theory or
classical logic (as in, e.g., Isabelle/HOL). And the only
reasons I can come up with are "it's fun" and "lots of
people do it" <u></u><u></u></p>
<div>
<p class="MsoNormal" style="margin-left:36pt"><u></u> <u></u></p>
</div>
<div>
<p class="MsoNormal" style="margin-left:36pt">So I'm
asking these mailing lists: why do we base proof
assistants on dependent type theory? What are the
trade-offs involved?<u></u><u></u></p>
</div>
<div>
<p class="MsoNormal" style="margin-left:36pt">I'm
interested both in explanations and arguments given on
list, as well as in references to papers that discuss
these sorts of choices.<u></u><u></u></p>
</div>
<div>
<p class="MsoNormal" style="margin-left:36pt"><u></u> <u></u></p>
</div>
<div>
<p class="MsoNormal" style="margin-left:36pt">Thanks,<u></u><u></u></p>
</div>
<div>
<p class="MsoNormal" style="margin-left:36pt">Jason<u></u><u></u></p>
</div>
</div>
</div>
</blockquote>
</div>
</blockquote></div>