[Agda] Re: [Coq-Club] Re: Universe Polymorphism

Dan Doel dan.doel at gmail.com
Fri Sep 17 09:28:58 CEST 2010


On Friday 17 September 2010 1:52:43 am Jean-Philippe Bernardy wrote:
> > Including extending the erasure/irrelevance stuff to inductive types,
> > erasability analysis (which may or may not be what you asked for later in
> > your mail), and thoughts on proof irrelevance (although, I'm personally
> > not convinced that this sort of erasability captures that notion
> > precisely).
> 
> Can I ask you to elaborate on why you think that erasability does not
> mean irrelevance? I happen to have the opposite belief:
> 
> proof irrelevance = structure of the proof irrelevant in the
> computational interpretation

Because that is not the type of irrelevance modeled by erasure pure type 
systems. Rather, they model parametricity, or a divide between static and 
dynamic portions of a program. For this particular form of erasure, it is not 
enough that there be only one canonical value for the dynamic portions of the 
program to look at. Rather, the dynamic portion must not look at the value at 
all.

For instance, the standard intensional equality type:

  data _==_ {A : Set} (x : A) : A -> Set where
    refl : x == x

the type of substitution given in the thesis (where (x : T) => U is erasable 
quantification) is:

  subst : (A : Set) => (P : A -> Set) => (x y : A) => x == y -> P x -> P y

Notably, the eliminator is not parametric in the proof of equality. And the 
thesis goes into how exactly it is unsafe to make the proof argument erasable 
(in the above sense) in general. It breaks part of the normalization proof, 
and in the case of type equality, it's possible to introduce loops by taking 
something like

  A == (A -> B)

as an irrelevant hypothesis.
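
To make that concrete, here is a small self-contained sketch in plain Agda 
(no => quantifier; the names EqSet, cast, symS, self and loop are mine, just 
for this illustration). As written, the casts are stuck on the postulate, so 
'loop' merely fails to reduce; but if cast's proof argument were erased, so 
that cast behaved as the identity regardless of the proof, 'loop' would 
reduce forever, which is exactly the failure of normalization mentioned 
above:

  -- Equality of types has to live one universe up.
  data EqSet (X : Set) : Set → Set₁ where
    refl : EqSet X X

  cast : {X Y : Set} → EqSet X Y → X → Y
  cast refl x = x

  symS : {X Y : Set} → EqSet X Y → EqSet Y X
  symS refl = refl

  postulate
    A B : Set
    eq  : EqSet A (A → B)   -- the problematic hypothesis

  -- Self-application through the cast.
  self : A → B
  self a = cast eq a a

  -- With the proof erased, this would behave like (\x → x x) (\x → x x).
  loop : B
  loop = self (cast (symS eq) self)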

And as a consequence, there exist algorithms whose dynamic structure is not 
parametric in equality proofs. For instance, it's not unheard of to write a 
function on sized vectors and end up with Vec A m instead of the Vec A n you 
want, where m and n are related by some arithmetic identity. So you write:

  foo = subst Nat (Vec A) m n m==n bar

and while it's acceptable to erase the proof m==n (and thus the substitution) 
from a proof irrelevance standpoint, since there's only one canonical proof, 
it isn't okay from a parametricity standpoint, because the proof isn't 
isolated in the static portion of the code.
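
For a concrete (hypothetical) instance of that situation, here is a sketch 
in plain Agda, reusing the _==_ above but with an ordinary, non-erased 
subst; the names here (snoc, +1-suc, and so on) are my own, not from any 
library. Appending a singleton produces a Vec A (n + 1) where a Vec A (suc n) 
is wanted, and the cast bridging the two sits right in the middle of the 
dynamic work:

  subst : {A : Set} (P : A → Set) {x y : A} → x == y → P x → P y
  subst P refl p = p

  cong : {A B : Set} (f : A → B) {x y : A} → x == y → f x == f y
  cong f refl = refl

  data Nat : Set where
    zero : Nat
    suc  : Nat → Nat

  _+_ : Nat → Nat → Nat
  zero  + n = n
  suc m + n = suc (m + n)

  data Vec (A : Set) : Nat → Set where
    []   : Vec A zero
    _::_ : ∀ {n} → A → Vec A n → Vec A (suc n)

  _++_ : ∀ {A m n} → Vec A m → Vec A n → Vec A (m + n)
  []        ++ ys = ys
  (x :: xs) ++ ys = x :: (xs ++ ys)

  -- n + 1 is only propositionally equal to suc n.
  +1-suc : ∀ n → (n + suc zero) == suc n
  +1-suc zero    = refl
  +1-suc (suc n) = cong suc (+1-suc n)

  -- The cast is part of the dynamic content: subst pattern matches on the
  -- proof, so the proof cannot be marked erasable, even though there is
  -- only one canonical proof it could be.
  snoc : ∀ {A n} → Vec A n → A → Vec A (suc n)
  snoc {A} {n} xs x = subst (Vec A) (+1-suc n) (xs ++ (x :: []))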

For another datapoint, take type erasure (which is the sort of erasure EPTSes 
are trying to model). Set doesn't satisfy the conditions for being a 
proposition, because it has many different elements. However, we are still 
allowed to erase the first argument of a function with type:

  (A : Set) -> A -> A

because

  1) values cannot contain sets
  2) there is no case analysis on Set

which means that the function has to essentially throw the type argument away, 
at least from a dynamic perspective. It is this "I'm not even going to look at 
that argument" justification of erasure that EPTSes model (and that is the 
intuitive notion of "parametric polymorphism"), rather than the "I may look 
at it, but there's at most one possible value to look at" of proof 
irrelevance.
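
(As a tiny illustration of those two conditions, and nothing more: the 
classic inhabitant of that type never touches its Set argument, so an EPTS 
can mark the quantifier erasable without changing the body. The => line 
below is EPTS notation, not valid plain Agda.)

  id : (A : Set) → A → A
  id A x = x   -- x is returned untouched; A is never scrutinised

  -- In an EPTS, the same definition would also check at
  --   id : (A : Set) => A → A
  -- because the dynamic part of the body makes no use of A at all.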

There may well be some overlap. I believe, for instance, that you can use the 
EPTS notion for keeping track of proofs *about* your algorithms, as in:

  sort : (xs : List ℕ) → Σ (List ℕ) λ ys → Sorted ys × (xs ~ ys)

That could be manually erased down to:

  sort : List ℕ → List ℕ

without (presumably) changing any of the List terms. The bundled proofs are 
just statically checked coherence conditions alongside the real work. But my 
'foo' above is not of the same character; it requires a cast that canonically 
must be an identity function, but is actually part of the dynamic content of 
the program.
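
To spell out what that manual erasure would amount to from the outside (a 
sketch only: Sorted, _~_ and sort are postulated placeholders here, and the 
imports assume the standard library): the proof-free version is just the 
first projection, and the idea is that with the proof components marked 
erasable, the pair and the projection would vanish as well, leaving the same 
List-level code as the rich version.

  open import Data.Nat     using (ℕ)
  open import Data.List    using (List)
  open import Data.Product using (Σ; _×_; proj₁)

  postulate
    Sorted : List ℕ → Set
    _~_    : List ℕ → List ℕ → Set
    sort   : (xs : List ℕ) → Σ (List ℕ) λ ys → Sorted ys × (xs ~ ys)

  -- The list component alone; the Sorted and permutation proofs ride
  -- alongside it and are never inspected by any List-producing code.
  sort′ : List ℕ → List ℕ
  sort′ xs = proj₁ (sort xs)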

(Sorry about the length there.)

> To make sure, even in Coq, Props are NOT intrinsically relevant. Consider
> 
> f : Prop -> Prop
> 
> the argument of 'f' is not relevant in its result. The irrelevance is
> only wrt. Set.

Yeah, and I'm not totally up on just how that works, or how it would fit into 
something like EPTSes. I believe that EPTSes are a special case of (a certain 
variety of) modal type theories. One modality might capture the type erasure 
and parametricity being modeled in an EPTS; possibly a somewhat different 
modality would capture proof irrelevance.

But, the point is that for something to be proof irrelevant with respect to 
Set in Coq, that something must have type Prop. The point of an EPTS is to be 
able to treat the same type as either relevant or irrelevant in different 
places, depending on how you quantify over it.

> > So, I thought it would be similarly nice if one could make universe
> > polymorphism always parametric while not making any types intrinsically
> > irrelevant.
>
> Why would that be? The intention for Levels is that they are /always/
> irrelevant with respect to programs.

Because it could conceivably be useful to write normal functions involving 
Levels, while ensuring that their use in universe polymorphism is parametric. 
For instance, consider:

  module M (a b c : Level) (T : Set (a ⊔ b ⊔ c)) where
    up : Level → Level
    up d = a ⊔ b ⊔ c ⊔ d

    U : ∀{i} → Set i → Set (up i)
    U A = A × T

We can avoid writing the big lub expression several times by writing a 
function targeting Level, but we may not be able to write such a function if 
we're adamant about restricting functions in and out of Level. I suppose there 
is a PTS scheme for allowing Level -> Level but not Nat -> Level, but why not 
also allow Nat -> Level if we have tools for ensuring proper parametricity 
even while doing so?

Or, if you want to get even more radical (not that I'm suggesting this), Level 
isn't necessary at all if you have parametric quantification. It is, after 
all, just a natural numbers type that has been split off so that it can be 
given special, more restrictive rules. It is a perfect example of the kind of 
duplication that EPTSes are trying to avoid by not tying erasure to particular 
types. You could make Set indexed by Nat, and allow:

  foo : (n : Nat) => ... T : Set n ...
  foo n = ... -- we cannot do case analysis on n here

and not allow:

  -- Should fail, because n has not been bound with parametric quantification
  foo : (n : Nat) -> ... T : Set n ...
  foo zero    = ...
  foo (suc n) = ...

This, of course, doesn't handle the special reductions the compiler does to 
make x ⊔ x definitionally equal to x, and I could understand not wanting that 
to happen for all naturals. Aside from that, though, this is 
essentially the motivation for this brand of modal types.

Cheers,

-- Dan

