<div dir="ltr"><blockquote class="gmail_quote" style="margin:0px 0px 0px 0.8ex;border-left:1px solid rgb(204,204,204);padding-left:1ex">Could you elaborate on this "sharing loss from beta-redexes"? </blockquote><div><br></div><div>We may have a huge Peano numeral which is represented by a small term, because we can use iteration/recursion to give short descriptions of large numerals. Then, we can beta-reduce from a small term to a large numeral, but we cannot go from a large numeral to a small term. In particular, Peano numerals are incompressible by hash consing. Hash consing only does "delta-contraction", i.e. contracting terms by maximally introducing let-definitions, but it is intractable to invent small terms which beta reduce to a given term (related: Kolmogorov complexity).</div></div><br><div class="gmail_quote"><div dir="ltr" class="gmail_attr">Stefan Monnier <<a href="mailto:monnier@iro.umontreal.ca">monnier@iro.umontreal.ca</a>> ezt írta (időpont: 2020. máj. 12., K, 16:30):<br></div><blockquote class="gmail_quote" style="margin:0px 0px 0px 0.8ex;border-left:1px solid rgb(204,204,204);padding-left:1ex">> You may be interested in my github repos: smalltt<br>
> <<a href="https://github.com/AndrasKovacs/smalltt" rel="noreferrer" target="_blank">https://github.com/AndrasKovacs/smalltt</a>>, normalization-bench<br>
> <<a href="https://github.com/AndrasKovacs/normalization-bench" rel="noreferrer" target="_blank">https://github.com/AndrasKovacs/normalization-bench</a>>. I haven't yet<br>
> updated the smalltt repo, but there's a simplified implementation<br>
> <<a href="https://gist.github.com/AndrasKovacs/a0e0938113b193d6b9c1c0620d853784" rel="noreferrer" target="_blank">https://gist.github.com/AndrasKovacs/a0e0938113b193d6b9c1c0620d853784</a>> of<br>
> its evaluator, which seems to have roughly the same performance but which<br>
> is much simpler to implement.<br>
<br>
Very interesting, thank you.<br>
<br>
> It is also extremely important to stick to the spirit of Coquand's semantic<br>
> checking algorithm as much as possible. In summary: core syntax should<br>
> support *no* expensive computation: no substitution, shifting, renaming, or<br>
> other ad-hoc term massaging. Core syntax should be viewed as immutable<br>
> machine code, which supports evaluation into various semantic domains, from<br>
> which sometimes we can read syntax back; this also leaves it open to swap<br>
> out the representation of core syntax to efficient alternatives such as<br>
> bytecode or machine code.<br>
<br>
[ Hmm... I kind of vaguely understand what that entails, but can't quite<br>
see how that would work in practice. I guess I'll have to dig deeper<br>
into your above Git repositories. ]<br>
<br>
> Only after we get the above two basic points right, can we start to think<br>
> about more specific and esoteric optimizations. I am skeptical of proposed<br>
> solutions which do not include these. Hash consing has been brought up many<br>
> times, but it is very unsatisfying compared to non-deterministic NbE,<br>
> because of its large constant costs, implementation complexity, and the<br>
> failure to handle sharing loss from beta-redexes in any meaningful way<br>
> (which is the most important source of sharing loss!).<br>
<br>
Could you elaborate on this "sharing loss from beta-redexes"?<br>
<br>
<br>
Stefan<br>
<br>
</blockquote></div>
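
As a side note on the quoted paragraph about Coquand-style semantic checking: the "evaluate into a semantic domain, then read syntax back" structure can be sketched in a few lines. This is only a generic closure-based NbE sketch, assuming de Bruijn indices in the core syntax and de Bruijn levels in values; it is not smalltt's actual code, and the names Tm, Val, eval, quote and nf are made up here:

    -- Immutable core syntax with de Bruijn indices: no substitution,
    -- shifting or renaming is ever performed on it.
    data Tm = Var Int | App Tm Tm | Lam Tm

    -- Semantic domain: closures capture an environment instead of
    -- substituting into the body. VVar carries a de Bruijn level.
    data Val = VVar Int | VApp Val Val | VLam [Val] Tm

    eval :: [Val] -> Tm -> Val
    eval env (Var i)   = env !! i
    eval env (App t u) = apply (eval env t) (eval env u)
    eval env (Lam t)   = VLam env t

    apply :: Val -> Val -> Val
    apply (VLam env t) u = eval (u : env) t
    apply v            u = VApp v u

    -- Read a value back into core syntax; l counts the binders passed.
    quote :: Int -> Val -> Tm
    quote l (VVar x)     = Var (l - x - 1)             -- level to index
    quote l (VApp v u)   = App (quote l v) (quote l u)
    quote l (VLam env t) = Lam (quote (l + 1) (eval (VVar l : env) t))

    -- Normalization is evaluation followed by readback.
    nf :: Tm -> Tm
    nf t = quote 0 (eval [] t)

The point is that the core syntax is only ever pattern-matched and read back, never substituted into; all computation happens in the semantic domain.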