[Agda] [Coq-Club] Prior work on proof assistant performance / optimization

Stefan Monnier monnier at iro.umontreal.ca
Tue May 12 16:30:33 CEST 2020


> You may be interested in my github repos: smalltt
> <https://github.com/AndrasKovacs/smalltt>, normalization-bench
> <https://github.com/AndrasKovacs/normalization-bench>. I haven't yet
> updated the smalltt repo, but there's a simplified implementation
> <https://gist.github.com/AndrasKovacs/a0e0938113b193d6b9c1c0620d853784> of
> its evaluator, which seems to have roughly the same performance but which
> is much simpler to implement.

Very interesting, thank you.

> It is also extremely important to stick to the spirit of Coquand's semantic
> checking algorithm as much as possible. In summary: core syntax should
> support *no* expensive computation: no substitution, shifting, renaming, or
> other ad-hoc term massaging. Core syntax should be viewed as immutable
> machine code, which supports evaluation into various semantic domains, from
> which sometimes we can read syntax back; this also leaves it open to swap
> out the representation of core syntax to efficient alternatives such as
> bytecode or machine code.

[ Hmm... I kind of vaguely understand what that entails, but can't quite
  see how it would work in practice.  I guess I'll have to dig deeper
  into the Git repositories you link above.  ]
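
To check that I understand: below is a rough sketch, in Haskell, of the
kind of evaluator I imagine this describes.  The names are my own, not
smalltt's, and it is only my guess at the architecture: core terms are
never substituted into or shifted; the only operations are evaluation
into a domain of values/closures, and readback from values to core
syntax.

    {-# LANGUAGE LambdaCase #-}

    -- Core syntax: immutable, de Bruijn indices, never rewritten.
    data Tm = Var Int | App Tm Tm | Lam Tm
      deriving Show

    -- Semantic domain: closures capture an environment instead of
    -- substituting into the body.
    data Val
      = VVar Int        -- de Bruijn level, used for readback
      | VApp Val Val
      | VLam [Val] Tm   -- closure: captured environment + body

    -- Evaluation: the only place where beta-reduction happens.
    eval :: [Val] -> Tm -> Val
    eval env = \case
      Var i   -> env !! i
      Lam t   -> VLam env t
      App t u -> case eval env t of
        VLam env' body -> eval (eval env u : env') body
        v              -> VApp v (eval env u)

    -- Readback ("quote"): recover core syntax from a value on demand.
    quote :: Int -> Val -> Tm
    quote lvl = \case
      VVar l        -> Var (lvl - l - 1)   -- level to index
      VApp v u      -> App (quote lvl v) (quote lvl u)
      VLam env body -> Lam (quote (lvl + 1) (eval (VVar lvl : env) body))

    -- Normalization = evaluate, then read back.
    nf :: Tm -> Tm
    nf = quote 0 . eval []

For example, nf (App (Lam (Var 0)) (Lam (Var 0))) yields Lam (Var 0),
with the redex contracted in the semantic domain rather than by
rewriting the syntax tree.  Is that roughly the right picture?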

> Only after we get the above two basic points right can we start to think
> about more specific and esoteric optimizations. I am skeptical of proposed
> solutions which do not include these. Hash consing has been brought up many
> times, but it is very unsatisfying compared to non-deterministic NbE,
> because of its large constant costs, implementation complexity, and the
> failure to handle sharing loss from beta-redexes in any meaningful way
> (which is the most important source of sharing loss!).

Could you elaborate on this "sharing loss from beta-redexes"?
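
To make the question more concrete, my naive guess is that you mean the
duplication that happens when a redex is contracted by substitution,
e.g.

    (\x. f x x) t   ~>   f t t

where t now occurs twice, so whatever work is later done on one copy is
not shared with the other.  Is that the loss of sharing you mean, and
the thing hash consing fails to recover?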


        Stefan


