[Agda] How does level polymorphism fit into a valid type theory? (from "More universe polymorphism")

Matthieu Sozeau mattam at mattam.org
Tue Jul 2 12:07:55 CEST 2013


Le 1 juil. 2013 à 23:43, Guillaume Brunerie <guillaume.brunerie at gmail.com> a écrit :

> On 2013/7/1 Martin Escardo <m.escardo at cs.bham.ac.uk> wrote:
>> 
>> On 01/07/13 20:16, Guillaume Brunerie wrote:
>>> 
>>> Well, if you want to define a polymorphic function (polymorphic on all
>>> small types, not restricted to some universe), you have to quantify
>>> over all levels.
>>> Or you can use typical ambiguity,
>> 
>> 
>> I don't think this should be an "either or" situation. One should have an
>> explicit understanding of what universes and levels are, and typical
>> ambiguity should be a shorthand for making types and terms readable (very
>> much like implicit-argument notation) by erasing levels that can be
>> automatically inferred.
> 
> Is there a way to mix explicit universe polymorphism with typical
> ambiguity? I only know Agda where there is no typical ambiguity at all
> and Coq where everything is implicit.
> If there is a way to have the benefits of both I would like to see it.

That's what I'm aiming for in my design for Coq. You start with a theory, 
à la Agda, where everything is explicit. In my version, universes are 
another kind of variables, as in Vladimir's proposed system, and judgments 
are relative to a set of constraints as well. 

Then typical ambiguity is just seen as an elaboration (Harper & Pollack style) 
that transforms terms with anonymous levels into terms with explicit 
levels. Of course you can also explicitly name universes directly 
in the source, e.g. to write down universe-polymorphic types. 
Implicit polymorphism relies on ML-style abstraction: it is implicitly
a meta-level prenex quantification, not higher-rank, so you cannot have
nested (forall i : Level, …) quantifications. Additionally, to handle 
cumulativity you need to abstract over constraints as well. There is a 
straightforward interpretation from a term of non-polymorphic type in 
this explicit system to a term in the non-polymorphic theory (with 
just Type_n's), which just expands polymorphic definitions.
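To make the explicit side concrete, here is a small sketch in syntax close to what such a design could look like in Coq source; the directives and the @{i} notation are illustrative, not a commitment to a concrete surface syntax.

```coq
(* Illustrative sketch: explicit universe polymorphism in the source.
   The level i is an ordinary (prenex) variable of the definition. *)
Set Universe Polymorphism.

Definition id (A : Type) (a : A) : A := a.
(* Elaboration turns the anonymous Type into a named level:
     id@{i} : forall A : Type@{i}, A -> A
   One can also name the universe directly in the source, e.g.
     Definition id@{i} (A : Type@{i}) (a : A) : A := a.
   so universe-polymorphic types can be written down explicitly. *)
```

The same definition can thus be written either with typical ambiguity (anonymous Type) or with explicit, named levels, with elaboration bridging the two.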

The main issue in this system is not exactly one of principality, 
as constraints give you enough flexibility to get most general typings. 
However, the inferred polymorphism is often more general than necessary,
so you might not always want the most general types to be inferred. 
Typically, if you write a polymorphic term [t := id bool true] where 
[id_i : Π A : Type i, A -> A], elaboration will give you a 
judgment for [t_i' := id_i' bool true : A -> A] along with a 
constraint [0 <= i'] (since bool : Type 0). The quantification over [i'] here 
is certainly unnecessary, and you would rather have it instantiated with [0].
The proliferation of such quantifications is also an issue for 
readability and performance. Whether or not to reduce these 
constraints is a type-inference choice, basically deciding where to 
allow subtyping and where to require strict equality. If you are 
unhappy with the choice you can always say so explicitly, as
user-introduced variables are not tampered with during simplification.
As Gabriel mentioned, this issue has already been encountered in 
e.g. Fsub, and partial but practical solutions were used there as well.
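The worked example above can be spelled out in the same illustrative Coq-like notation (again, the @{i} syntax is a sketch of the elaborated form, not fixed surface syntax):

```coq
(* The [t := id bool true] example: elaboration of a use of
   polymorphic id at a concrete type. *)
Definition t := id bool true.
(* Elaboration yields
     t@{i'} := id@{i'} bool true : bool -> ... (* here simply bool *)
   together with the side constraint 0 <= i' (since bool : Type 0).
   The level i' is spuriously polymorphic: instantiating it with 0
   loses no generality, which is exactly the simplification choice
   discussed above. *)
```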
-- Matthieu

