<html>
<head>
<meta content="text/html; charset=UTF-8" http-equiv="Content-Type">
</head>
<body bgcolor="#FFFFFF" text="#000000">
<div class="moz-cite-prefix">On 2014-03-12 9:04 AM, Sergei
Meshveliani wrote:<br>
</div>
<blockquote
cite="mid:1394629497.3449.141.camel@one.mechvel.pereslavl.ru"
type="cite">
<pre wrap="">On Wed, 2014-03-12 at 16:37 +0400, Sergei Meshveliani wrote:
</pre>
<blockquote type="cite">
<pre wrap="">On Tue, 2014-03-11 at 20:58 +0100, Andreas Abel wrote:
</pre>
<blockquote type="cite">
<pre wrap="">I have no personal experience, but I listened to a talk by Anders
Mörtberg, who presented his joint work with Cyril Cohen. He said that
if you want to be efficient you should *not* try to cancel the common
prime factors in fractions each time. The cost of making things
coprime repeatedly seems to be higher than computing with larger
integers.
</pre>
</blockquote>
<pre wrap="">
My experience is very different.
[..]
</pre>
</blockquote>
<pre wrap="">
If one computes m/n + m'/n' for large random integers m, n, m', n',
then it is better to delay cancelling by gcd,
because the average gcd of two large random integers turns out to be small.
Due to this consideration, I once (around 1990) had the illusion
that the (Delay) approach is good.
But in most practical computation methods, the current m and n are
not random; they are accumulated from some root over a long loop.
This is why (C) turns out to be more practical than (Delay).
I cannot prove this; I can only offer the practical examples and
considerations in my previous letter.
</pre>
</blockquote>
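The two strategies under discussion can be sketched concretely. The following toy rendering is my own (the names come from Sergei's (C) and (Delay)): summing fractions either by cancelling with gcd after every addition, or only once at the very end. Both produce the same reduced answer; they differ only in where the gcd work is spent.<br>

```python
# Toy sketch of the two cancellation strategies (illustrative only):
# (C)     -- reduce to lowest terms after every addition
# (Delay) -- accumulate raw numerator/denominator, cancel once at the end
from math import gcd

def add_eager(fracs):
    """Strategy (C): cancel by gcd after each addition."""
    num, den = 0, 1
    for m, n in fracs:
        num, den = num * n + m * den, den * n
        g = gcd(num, den)
        num, den = num // g, den // g
    return num, den

def add_delayed(fracs):
    """Strategy (Delay): accumulate raw, cancel a single gcd at the end."""
    num, den = 0, 1
    for m, n in fracs:
        num, den = num * n + m * den, den * n
    g = gcd(num, den)
    return num // g, den // g

fracs = [(3, 4), (5, 6), (7, 10)]
print(add_eager(fracs))    # (137, 60)
print(add_delayed(fracs))  # (137, 60)
```

The intermediate num/den pair in (Delay) grows with every addition, which is exactly the trade-off the thread is debating: repeated gcds versus arithmetic on larger integers.<br>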
<br>
Things have changed since then. The numbers one could handle on
1990-era computers were still not too big (10,000 to 100,000 digits;
beyond that, things simply stopped working). If you go
even bigger, the trade-offs change.<br>
<br>
The other thing to remember is that "industrial grade" gcd
algorithms are really multi-algorithms: the case where the gcd is
small is highly optimized, and is thus very cheap to compute.
Similarly, if the gcd is quite large, that is fast too. It is only
when the gcd is 'mid-sized' that the computation is expensive!<br>
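One reason the small-gcd case deserves that optimization: two random integers are coprime with density 6/&pi;&sup2; &asymp; 0.61, so on random inputs the small-gcd fast path wins most of the time. A quick experiment (my own sketch; the sample size and bit length are arbitrary):<br>

```python
# Empirical check that the gcd of two random big integers is usually 1,
# which is why gcd multi-algorithms optimize the small-gcd case so heavily.
import random
from math import gcd, pi

random.seed(0)              # deterministic run
N = 10_000
coprime = sum(
    1
    for _ in range(N)
    if gcd(random.getrandbits(256), random.getrandbits(256)) == 1
)
print(coprime / N)          # close to the coprimality density
print(6 / pi**2)            # = 0.6079... (the theoretical density)
```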
<br>
In particular, for things like Gaussian Elimination for exact
matrices (integers, rationals, polynomials, etc), it can start to
really pay to Delay, IF you have a fast zero-testing strategy. See
for example<br>
<i>Wenqin Zhou, J. Carette, D.J. Jeffrey and M.B. Monagan</i> <a
href="http://www.cas.mcmaster.ca/%7Ecarette/publications/LUwithStrategiesAISC.pdf"><b>Hierarchical
representations with signatures for large expression management</b></a>,
Proceedings of Artificial Intelligence and Symbolic Computation
(2006).<br>
for one method of using Delay to get algorithms with the 'right' bit
complexity [unlike standard GE, which has the 'right' complexity only
if one counts the number of arithmetic operations rather than
their bit complexity]. Of course,
the above 'cheats' in a sense, in that we use probabilistic methods
to obtain fast non-equality checks (which is all that matters).<br>
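To give the flavour of such a probabilistic check (this is my own illustrative sketch, not the algorithm of the paper above): rather than deciding zero-ness of a delayed exact quantity with costly exact arithmetic, one can reduce modulo a few primes and work over GF(p). Here, singularity of an integer matrix is tested by running Gaussian elimination mod several fixed primes; the prime list and trial count are arbitrary choices of mine.<br>

```python
# Sketch of probabilistic zero-testing: decide whether det(A) = 0 for an
# integer matrix A by eliminating over GF(p) for a few primes p, instead
# of doing exact (and expensive) rational elimination.
def det_mod_p(matrix, p):
    """Determinant of an integer matrix modulo a prime p,
    via Gaussian elimination over GF(p)."""
    a = [[x % p for x in row] for row in matrix]
    n = len(a)
    det = 1
    for col in range(n):
        # find a nonzero pivot in this column
        pivot = next((r for r in range(col, n) if a[r][col] != 0), None)
        if pivot is None:
            return 0                      # column is zero: det vanishes
        if pivot != col:
            a[col], a[pivot] = a[pivot], a[col]
            det = (-det) % p              # row swap flips the sign
        det = (det * a[col][col]) % p
        inv = pow(a[col][col], p - 2, p)  # inverse via Fermat's little theorem
        for r in range(col + 1, n):
            factor = (a[r][col] * inv) % p
            for c in range(col, n):
                a[r][c] = (a[r][c] - factor * a[col][c]) % p
    return det

def probably_singular(matrix, trials=5):
    """True if det(matrix) vanishes modulo several primes -- then the
    determinant is zero with high probability (one-sided error)."""
    primes = [32749, 65521, 104729, 1299709, 15485863]
    return all(det_mod_p(matrix, p) == 0 for p in primes[:trials])

singular = [[1, 2, 3], [4, 5, 6], [7, 8, 9]]   # rank 2, so det = 0
regular  = [[2, 0, 1], [1, 3, 0], [0, 1, 4]]   # det = 25
print(probably_singular(singular))  # True
print(probably_singular(regular))   # False
```

The error is one-sided: a nonzero result mod any prime certifies non-zero-ness exactly, which is why fast non-equality checks are "all that matters" here.<br>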
<br>
From the theorem proving perspective, linked with Computer Algebra,
<br>
@article {<br>
RIO10,<br>
title = "{Invariants for the FoCaL language}",<br>
author = "R. Rioboo",<br>
journal = "Annals of Mathematics and Artificial
Intelligence",<br>
year = 2010,<br>
volume = 56,<br>
number = 3,<br>
pages = "273-296",<br>
} <br>
is well worth reading.<br>
[<a class="moz-txt-link-freetext" href="http://link.springer.com/article/10.1007/s10472-009-9156-3">http://link.springer.com/article/10.1007/s10472-009-9156-3</a> is the
full article, but behind a paywall]. It deals with quotient
structures in a nice way, by NOT quotienting too early.<br>
<br>
Jacques<br>
<br>
</body>
</html>