I'd like to mention the language project I'm currently working on: <br><a href="https://github.com/hopper-lang/hopper">https://github.com/hopper-lang/hopper</a><div><br></div><div>A lot of the work that my colleagues and I are currently doing on hopper goes towards a performant, efficient shared substrate for evaluating and normalizing dependently typed programs. </div><div><br></div><div>Once it's a little more mature, the hopper type theory and runtime should be a viable substrate for normalizing open terms from Agda, because we intend to support many of the ideas that Agda has done a lovely job of iterating on. </div><div><br></div><div>The point being: efficient term normalization for type checking turns into "what's a good shared runtime for evaluation and normalization?" very quickly, and that's still a design space that hasn't been explored entirely satisfactorily. </div><div><br></div><div>Coq's solution is to patch the OCaml bytecode interpreter to support open terms; some iteration of this landed in Coq 8.5. </div><div><br></div><div>I'm not quite sure how Lean treats its runtime system / heap representation, but there are possibly interesting lessons there. </div><div><br></div><div>That said, it does look like the new "treeless" representation in Agda master / 2.5 paves the way to thinking about this more. </div><div><br></div><div><br></div><div>Perhaps flipped around: what about adding tracing / profiling facilities to tease apart bottlenecks in code? My admittedly still fuzzy understanding is that fast type checking crucially depends on understanding the nature of the reductions performed during any normalization (by evaluation) strategy. 
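</div><div><br></div><div>To illustrate what "supporting open terms" asks of an evaluator, here is a toy normalization-by-evaluation sketch for the untyped lambda calculus (Python, purely illustrative; the names and representation here are my own invention and have nothing to do with Agda's, Coq's, or hopper's actual internals). The key point is that free variables evaluate to "neutral" values that accumulate stuck applications, so the evaluator's value domain has to represent stuck terms directly rather than assuming every application can reduce:</div><div><br></div>

```python
# Toy normalization by evaluation (NbE) for the untyped lambda calculus,
# supporting open terms: free variables become "neutral" values that
# record the arguments of applications that cannot reduce.
from dataclasses import dataclass

# --- syntax ---
@dataclass
class Var:
    name: str

@dataclass
class Lam:
    param: str
    body: object

@dataclass
class App:
    fn: object
    arg: object

# --- semantic values ---
@dataclass
class Closure:          # a lambda paired with its defining environment
    param: str
    body: object
    env: dict

@dataclass
class Neutral:          # a stuck term: a free variable applied to values
    head: str
    args: list

def eval_(term, env):
    """Evaluate syntax into the semantic domain; unknowns become neutrals."""
    if isinstance(term, Var):
        # a variable not bound in the environment is free: it is a value
        return env.get(term.name, Neutral(term.name, []))
    if isinstance(term, Lam):
        return Closure(term.param, term.body, env)
    if isinstance(term, App):
        fn, arg = eval_(term.fn, env), eval_(term.arg, env)
        if isinstance(fn, Closure):
            return eval_(fn.body, {**fn.env, fn.param: arg})
        # the function part is stuck, so the application is stuck too
        return Neutral(fn.head, fn.args + [arg])

def reify(value, fresh=0):
    """Read a semantic value back into syntax (the "quote" direction)."""
    if isinstance(value, Closure):
        # go under the binder by applying the closure to a fresh neutral
        x = f"x{fresh}"
        body = eval_(value.body, {**value.env, value.param: Neutral(x, [])})
        return Lam(x, reify(body, fresh + 1))
    term = Var(value.head)
    for a in value.args:
        term = App(term, reify(a, fresh))
    return term

def normalize(term):
    return reify(eval_(term, {}))
```

<div><br></div><div>The interesting case is going under binders: reify applies a closure to a fresh neutral variable and keeps evaluating, which is exactly where a closed-terms-only runtime (like an unpatched bytecode interpreter) falls over. Supporting this efficiently at scale is, as I understand it, the substance of what the Coq 8.5 work had to do.</div><div>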
<span></span></div><div><br></div><div>On Wednesday, March 16, 2016, Nils Anders Danielsson <<a href="mailto:nad@cse.gu.se">nad@cse.gu.se</a>> wrote:<br><blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex">On 2016-03-12 00:33, Nils Anders Danielsson wrote:<br>
<blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex">
I found another problem. I've created a ticket on the bug tracker:<br>
<a href="https://github.com/agda/agda/issues/1901" target="_blank">https://github.com/agda/agda/issues/1901</a>.<br>
</blockquote>
<br>
This problem has been fixed, and I can now type-check Andreas' old code<br>
on my machine. However, Agda is still slow.<br>
<br>
I tried replacing all lambdas in coreAdjunction with pattern-matching<br>
lambdas, and then Agda became roughly ten times faster and used roughly<br>
nine times less memory. This suggests to me that (part of) the problem<br>
is that Agda always reduces lambda applications (as opposed to<br>
applications of pattern-matching lambdas and defined functions), as<br>
pointed out by Andrea.<br>
<br>
What can we do about this? I don't want to force users to use<br>
workarounds like copattern matching. Could we for instance add support<br>
for beta-redexes to the internal syntax?<br>
<br>
-- <br>
/NAD<br>
_______________________________________________<br>
Agda mailing list<br>
<a>Agda@lists.chalmers.se</a><br>
<a href="https://lists.chalmers.se/mailman/listinfo/agda" target="_blank">https://lists.chalmers.se/mailman/listinfo/agda</a><br>
</blockquote></div>