Hi Dan,

you are right: if we think of coinductive types as terminal coalgebras,
then it is clear that the destructor (i.e. the coalgebra) is
fundamental and the constructor is derived. However, it is not clear
whether they should be presented this way in dependently typed
programming.

The alternative we have been exploring is to say that infinite
objects are always introduced by allowing a delayed expression as a
value. This is even true for functions, which are infinite and which
are introduced by lambda abstraction. Similarly we can introduce an
infinite data object by allowing suspended expressions as values, using
♯ (and correspondingly ∞ on the level of types).

This way we disentangle the question whether data should be presented
as sums or products from the question whether it should be finite or
infinite.

I think this is beneficial for design and refactoring when you want to
move between finite and infinite representations easily. For example,
at some point you may have just used lists but then decide that you
need to allow for infinite lists as well, moving from lists to colists.
This move should not force you to rewrite your whole program from a
constructor-based representation to a destructor-based one. Indeed,
ideally you would like List to be a subtype of Colist. Related to this
is the possibility of having mixed inductive/coinductive definitions
without having to nest data inside codata. E.g. I find the mixed
definition of stream processors and the associated programs quite easy
to understand, while the orthodox nesting of constructor-based data
inside destructor-based codata causes me headaches, and I need half a
day to write a program which I don't really understand. But maybe this
is only me.

You are correct to say that there are issues with this approach. One
of them is that our naive extension of the termination checker doesn't
work very well for nested definitions (see
http://www.cs.nott.ac.uk/~txa/publ/InvertedQuantifiers.pdf). And, more
foundationally, it seems that we have to go via the partial meaning of
programs to explain total programs (and I don't even know exactly how
to do this).

In this sense the view of coinductive types as terminal coalgebras is
easier to understand: we only need to talk about total things and
there is a clear categorical semantics. We have been thinking about
how to connect the two, i.e. the suspension view and the coalgebra
view.
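To make the suspension view concrete, the kind of definitions I have in
mind look roughly as follows. This is only a sketch: I am assuming the
Coinduction module of the standard library for ∞, ♯ and ♭, and the
names (List, Colist, fromList, repeat, Stream, SP, eat) are just for
illustration.

  open import Coinduction                     -- ∞, ♯_ and ♭

  data List (A : Set) : Set where
    []  : List A
    _∷_ : A → List A → List A

  -- Colist has the same constructors as List; only the tail is now a
  -- suspended computation, marked by ∞ in the type and introduced by ♯.
  data Colist (A : Set) : Set where
    []  : Colist A
    _∷_ : A → ∞ (Colist A) → Colist A

  -- Moving from lists to colists only means inserting suspensions; the
  -- constructor-based shape of the program is unchanged.
  fromList : ∀ {A} → List A → Colist A
  fromList []       = []
  fromList (x ∷ xs) = x ∷ ♯ fromList xs

  -- ... and genuinely infinite objects become definable.
  repeat : ∀ {A} → A → Colist A
  repeat x = x ∷ ♯ repeat x

  -- Mixed induction/coinduction: stream processors.  get is the
  -- inductive part, put suspends the rest of the processor.
  data Stream (A : Set) : Set where
    _∷_ : A → ∞ (Stream A) → Stream A

  data SP (A B : Set) : Set where
    get : (A → SP A B) → SP A B
    put : B → ∞ (SP A B) → SP A B

  eat : ∀ {A B} → SP A B → Stream A → Stream B
  eat (get f)    (a ∷ as) = eat (f a) (♭ as)
  eat (put b sp) as       = b ∷ ♯ eat (♭ sp) as

Note that eat is structurally recursive in its first argument in the
get case and productive (guarded by ∷ and ♯) in the put case; this is
the mixed reading I find easy to follow.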
But I still wonder whether the suspension view is somehow an
alternative, and maybe intuitively preferable, approach to explaining
infinite data.

Cheers,
Thorsten


On 5 Oct 2010, at 12:27, Dan Doel wrote:

> On Tuesday 05 October 2010 5:41:58 am Thorsten Altenkirch wrote:
>> I tried to guess what "it" is. :-)
>
> Sorry. :)
>
>> Reading on, it seems that you are saying that the termination checker
>> should be aware of destructors in the sense that a destructor should
>> make things structurally smaller but that this would lead to
>> non-termination (for open terms).
>>
>> Is this a sensible summary? I don't have a good answer, actually I
>> haven't thought of this.
>
> My ultimate point is that I don't think whether a type is presented as
> a sum of constructors or a product of destructors can really be so
> easily divorced from whether that type is inductive or coinductive.
> This is because an inductive type *is* a type defined by a one-step
> constructor and an associated recursive elimination, and a coinductive
> type is defined by a one-step destructor, and a recursive
> introduction. These aren't just presentational differences.
>
> You can use the recursive eliminator of an inductive type to define
> destructors, and you can figure out what the constructors of an
> inductive type would look like based on the destructors you want to
> define (and dually...). But the basis for programming with, and
> recursion over those inductive types will still be the constructors,
> and not the destructors. So you have to be sure that programming with
> the latter behaves like some translation into the former. And not
> simply doing the translation can lead to problems, like
> non-termination for inductives, and lack of subject reduction for
> coinductives* (maybe others).
>
> Perhaps another indicator is that in the Agda approach that has
> emerged, with all constructors, rarely does one see functions whose
> types contain ∞ T. We always write functions over T, because it is
> what interacts appropriately with dependent pattern matching on
> constructors. But this is analogous to using FνF everywhere instead of
> νF. The two are isomorphic, but.... (And if we were to get rid of the
> magic ∞ style of coinduction, ∞ T is all we'd have ready access to for
> a directly defined coinductive type, not T.)
>
> Hopefully that's a little clearer?
>
> -- Dan
>
> [*] If we have
>
>   codata Conat : Set where
>     constructor
>       cozero : Conat
>       cosuc  : Conat -> Conat
>
> we know that for n : Conat, being able to match on n and obtaining
> n = cozero or n = cosuc m definitionally causes problems. But an
> appropriate extensional equality should allow us to prove that
>
>   ∀ n. (n = cozero) ⊎ (∃ m. n = cosuc m)
>
> that is, if Conat is really a final coalgebra (instead of just weakly
> final), either n and cozero are the same element of Conat, or n and
> cosuc m are the same element, because a final coalgebra should contain
> one unique value for each possible sequence of observations (I think
> that's appropriate intuition). So the problem doesn't seem to come
> from us treating unequal things as equal, but from what we think
> cosuc, cozero and equality mean (unless extensional type theories with
> coinductives also have problems; I'm not sure if there are any to
> sample).
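PS. For concreteness, the Conat of your footnote and the covering
property read roughly as follows in the ∞/♯ notation. This is only a
sketch (Coℕ and covering are ad-hoc names); note that the existential
witness now has type ∞ Coℕ rather than Coℕ, and that with our
intensional equality the statement is proved simply by matching on n,
which is exactly the definitional step whose status is at issue.

  open import Coinduction
  open import Data.Sum     using (_⊎_; inj₁; inj₂)
  open import Data.Product using (∃; _,_)
  open import Relation.Binary.PropositionalEquality using (_≡_; refl)

  -- Conat in the constructor-based, suspension-style presentation.
  data Coℕ : Set where
    cozero : Coℕ
    cosuc  : ∞ Coℕ → Coℕ

  -- The covering property from the footnote.  The witness m has type
  -- ∞ Coℕ, and the proof proceeds by matching on n.
  covering : (n : Coℕ) → (n ≡ cozero) ⊎ (∃ λ m → n ≡ cosuc m)
  covering cozero    = inj₁ refl
  covering (cosuc m) = inj₂ (m , refl)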