[Agda] Data.Rational
Jacques Carette
carette at mcmaster.ca
Wed Mar 12 14:48:22 CET 2014
On 2014-03-12 9:04 AM, Sergei Meshveliani wrote:
> On Wed, 2014-03-12 at 16:37 +0400, Sergei Meshveliani wrote:
>> On Tue, 2014-03-11 at 20:58 +0100, Andreas Abel wrote:
>>> I have no personal experience, but I listened to a talk by Anders
>>> M"ortberg who presented his joint work with Cyril Cohen. He said that
>>> if you want to be efficient you should *not* try to cancel the common
>>> primfactors in fractions each time. The cost of making things
>>> coprime repeatedly seems to be higher than computing with larger
>>> integers.
>>
>> My experience is very different.
>>
>> [..]
> If one computes m/n + m'/n' for large random integers m, n, m', n',
> then it is better to delay cancelling by gcd,
> because the gcd of two large random integers tends to be small.
>
> Due to this consideration, I once (around 1990) had the illusion
> that the (Delay) approach is good.
>
> But in most practical computation methods, the current m and n are
> not random; they accumulate from some root value over a long loop.
>
> This is why (C) turns out more practical than (Delay).
> I cannot prove this; I only provided practical examples and
> considerations in my previous letter.
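The (C)-versus-(Delay) trade-off being discussed can be made concrete with a small sketch (Python's built-in big integers stand in for arbitrary-precision integers; the function names are made up for illustration, not taken from any library in the thread):

```python
from math import gcd

# (C): cancel by gcd after every single operation.
def add_eager(a, b):
    """a, b are (numerator, denominator) pairs kept in lowest terms."""
    n = a[0] * b[1] + b[0] * a[1]
    d = a[1] * b[1]
    g = gcd(n, d)
    return (n // g, d // g)

# (Delay): just cross-multiply; cancel once at the very end.
def add_delayed(a, b):
    return (a[0] * b[1] + b[0] * a[1], a[1] * b[1])

def normalize(a):
    g = gcd(a[0], a[1])
    return (a[0] // g, a[1] // g)

# Sum 1/1 + 1/2 + ... + 1/50 both ways.
eager = (0, 1)
delayed = (0, 1)
for k in range(1, 51):
    eager = add_eager(eager, (1, k))
    delayed = add_delayed(delayed, (1, k))

# Same value, but the un-cancelled denominator has grown much larger
# (it is 50!, versus the reduced denominator of the harmonic sum).
assert normalize(delayed) == eager
assert delayed[1] > eager[1]
```

Both strategies agree on the final value; the question in this thread is which one spends less total time, and that depends on how the intermediate numbers grow along the loop.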
Things have changed since. The numbers one could deal with on 1990s
computers were still not too big (10,000 to 100,000 digits; beyond
that, things simply stopped working). If you go even bigger, things
change.
The other thing to remember is that "industrial grade" gcd algorithms
are really multi-algorithms: the case where the gcd is small is highly
optimized, and is thus very cheap to compute. Similarly, if the gcd is
quite large, that is fast too. It is only when the gcd is 'mid-sized'
that the computation is expensive!
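One classical building block that such multi-algorithms combine is the binary (Stein) gcd, which replaces division by shifts and subtraction; real libraries dispatch among several routines like this by operand size, but as a hedged, self-contained illustration of one component:

```python
import math
import random

def binary_gcd(a, b):
    """Binary (Stein) gcd for non-negative integers: shifts and
    subtraction only, no division. One of the classical components
    an 'industrial grade' gcd can dispatch to."""
    if a == 0:
        return b
    if b == 0:
        return a
    # Count the common factors of two shared by a and b.
    shift = ((a | b) & -(a | b)).bit_length() - 1
    # Strip remaining factors of two from a.
    a >>= (a & -a).bit_length() - 1
    while b:
        b >>= (b & -b).bit_length() - 1  # make b odd
        if a > b:
            a, b = b, a
        b -= a  # difference of two odd numbers is even
    return a << shift

# Sanity check against the library gcd.
random.seed(0)
for _ in range(100):
    x, y = random.randrange(10**12), random.randrange(10**12)
    assert binary_gcd(x, y) == math.gcd(x, y)
```

This is only the single-algorithm core; the dispatch logic (small-gcd fast paths, subquadratic methods for huge operands) is where the engineering mentioned above lives.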
In particular, for things like Gaussian Elimination for exact matrices
(integers, rationals, polynomials, etc), it can start to really pay to
Delay, IF you have a fast zero-testing strategy. See for example
/Wenqin Zhou, J. Carette, D.J. Jeffrey and M.B. Monagan/ *Hierarchical
representations with signatures for large expression management*
<http://www.cas.mcmaster.ca/%7Ecarette/publications/LUwithStrategiesAISC.pdf>,
Proceedings of Artificial Intelligence and Symbolic Computation (2006),
for one method of using Delay to get algorithms with the 'right' bit
complexity [unlike normal GE which only has the 'right' complexity if
you only count the number of arithmetic computations rather than the bit
complexity of those arithmetic computations]. Of course, the above
'cheats' in a sense, in that we use probabilistic methods to obtain fast
non-equality checks (which is all that matters).
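The flavour of such probabilistic non-equality checks can be sketched generically (this is a plain Schwartz-Zippel-style evaluation test, an illustrative assumption rather than the exact signature scheme of the cited paper): evaluate both expressions at random points modulo a prime; a single differing value proves inequality, while repeated agreement makes equality overwhelmingly likely.

```python
import random

P = (1 << 61) - 1  # a Mersenne prime used as the evaluation modulus

def probably_equal(f, g, trials=20):
    """Probabilistic equality test for polynomial-like expressions.

    One differing evaluation certifies f != g; agreement on all
    trials makes f == g hold with high probability."""
    for _ in range(trials):
        x = random.randrange(P)
        if f(x) % P != g(x) % P:
            return False  # certainly unequal
    return True  # equal with high probability

# (x + 1)**2 really is x**2 + 2x + 1 ...
assert probably_equal(lambda x: (x + 1) ** 2,
                      lambda x: x * x + 2 * x + 1)
# ... and certainly differs from x**2 + 2x + 2.
assert not probably_equal(lambda x: (x + 1) ** 2,
                          lambda x: x * x + 2 * x + 2)
```

In Gaussian elimination, replacing exact zero tests on huge intermediate entries with cheap modular checks of this kind is what keeps the bit complexity under control.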
From the theorem-proving perspective, linked with Computer Algebra,
@article {
RIO10,
title = "{Invariants for the FoCaL language}",
author = "R. Rioboo",
journal = "Annals of Mathematics and Artificial Intelligence",
year = 2010,
volume = 56,
number = 3,
pages = "273-296",
}
is well worth reading.
[http://link.springer.com/article/10.1007/s10472-009-9156-3 is the full
article, but behind a paywall]. It deals with quotient structures in a
nice way, by NOT quotienting too early.
Jacques