Velocity Reviews > Large number multiplication

Large number multiplication

Billy Mays
Guest
Posts: n/a

 07-06-2011
I was looking through the Python source and noticed that long
multiplication is done using the Karatsuba method (O(n^1.585)) rather
than using FFTs (O(~n log n)). I was wondering if there was a reason the
Karatsuba method was chosen over the FFT convolution method?
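For reference, the divide-and-conquer step behind Karatsuba can be sketched in a few lines of Python. This is a toy illustration, not CPython's C implementation, and the base-case cutoff is arbitrary:

```python
def karatsuba(x, y):
    """Toy Karatsuba multiply for non-negative ints (not CPython's code)."""
    if x < (1 << 64) or y < (1 << 64):
        return x * y                         # base case: operands are small
    m = max(x.bit_length(), y.bit_length()) // 2
    xh, xl = x >> m, x & ((1 << m) - 1)      # split x = xh * 2**m + xl
    yh, yl = y >> m, y & ((1 << m) - 1)      # split y = yh * 2**m + yl
    a = karatsuba(xh, yh)                    # one multiply for the high parts
    b = karatsuba(xl, yl)                    # one for the low parts
    c = karatsuba(xh + xl, yh + yl) - a - b  # cross terms from a third
    return (a << (2 * m)) + (c << m) + b     # three multiplies instead of four
```

Three half-size multiplies instead of four is what gives the O(n^log2(3)) ~ O(n^1.585) bound.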

--
Bill

Ian Kelly
Guest
Posts: n/a

 07-06-2011
On Wed, Jul 6, 2011 at 1:30 PM, Billy Mays <(E-Mail Removed)> wrote:
> I was looking through the Python source and noticed that long multiplication
> is done using the Karatsuba method (O(n^1.585)) rather than using FFTs (O(~n
> log n)). I was wondering if there was a reason the Karatsuba method was
> chosen over the FFT convolution method?

According to Wikipedia:

"""
In practice the Schönhage–Strassen algorithm starts to outperform
older methods such as Karatsuba and Toom–Cook multiplication for
numbers beyond 2**2**15 to 2**2**17 (10,000 to 40,000 decimal digits).
"""

I think most Python users are probably not working with numbers that
large, and if they are, they are probably using specialized numerical
libraries anyway, so there would be little benefit in implementing it
in core.

Billy Mays
Guest
Posts: n/a

 07-06-2011
On 07/06/2011 04:05 PM, Christian Heimes wrote:
> On 06.07.2011 21:30, Billy Mays wrote:
>> I was looking through the Python source and noticed that long
>> multiplication is done using the Karatsuba method (O(n^1.585)) rather
>> than using FFTs (O(~n log n)). I was wondering if there was a reason the
>> Karatsuba method was chosen over the FFT convolution method?

>
> The Karatsuba algorithm uses just addition, subtraction and
> multiplication, so you don't need to resort to floats and have no
> rounding errors. On the other hand, FFTs are based on e, complex numbers
> or trigonometric functions (= floats), which means you'll get rounding errors.
>
> We don't want rounding errors for large long multiplication.
>
> Christian
>

I believe it is possible to do FFTs without significant rounding error.
I know that GIMPS's Prime95 does very large multiplications using
FFTs (I don't know whether it uses the integer-based or double-based
version). I also know they have guards to prevent rounding errors, so I
don't think it would be impossible to implement.
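For what it's worth, the guard idea can be sketched in plain Python: do the convolution with a complex FFT, then verify every coefficient is close to an integer before rounding. This is a toy, nothing like Prime95's tuned code; the 0.25 tolerance and base-2**8 digits are arbitrary choices:

```python
import cmath

def _fft(a, invert=False):
    # recursive radix-2 FFT; len(a) must be a power of two
    n = len(a)
    if n == 1:
        return a[:]
    even = _fft(a[0::2], invert)
    odd = _fft(a[1::2], invert)
    sign = 1 if invert else -1
    out = [0j] * n
    for k in range(n // 2):
        w = cmath.exp(sign * 2j * cmath.pi * k / n) * odd[k]
        out[k] = even[k] + w
        out[k + n // 2] = even[k] - w
    return out

def fft_mul(x, y, base_bits=8):
    # split into base-2**base_bits digits, convolve, then carry
    mask = (1 << base_bits) - 1
    xs, ys = [], []
    while x:
        xs.append(x & mask)
        x >>= base_bits
    while y:
        ys.append(y & mask)
        y >>= base_bits
    n = 1
    while n < len(xs) + len(ys):
        n <<= 1
    fa = _fft([complex(d) for d in xs] + [0j] * (n - len(xs)))
    fb = _fft([complex(d) for d in ys] + [0j] * (n - len(ys)))
    prod = _fft([a * b for a, b in zip(fa, fb)], invert=True)
    out = 0
    for i, c in enumerate(prod):
        v = c.real / n                       # inverse-FFT scaling
        r = round(v)
        assert abs(v - r) < 0.25, "rounding guard tripped"
        out += r << (i * base_bits)          # carry propagation via shifts
    return out
```

The assert is the "guard": if accumulated floating-point error ever pushed a coefficient away from an integer, the multiply would fail loudly instead of returning a wrong product.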

--
Bill

Billy Mays
Guest
Posts: n/a

 07-06-2011
On 07/06/2011 04:02 PM, Ian Kelly wrote:
> On Wed, Jul 6, 2011 at 1:30 PM, Billy Mays<(E-Mail Removed)> wrote:
>> I was looking through the Python source and noticed that long multiplication
>> is done using the Karatsuba method (O(n^1.585)) rather than using FFTs (O(~n
>> log n)). I was wondering if there was a reason the Karatsuba method was
>> chosen over the FFT convolution method?

>
> According to Wikipedia:
>
> """
> In practice the Schönhage–Strassen algorithm starts to outperform
> older methods such as Karatsuba and Toom–Cook multiplication for
> numbers beyond 2**2**15 to 2**2**17 (10,000 to 40,000 decimal digits).
> """
>
> I think most Python users are probably not working with numbers that
> large, and if they are, they are probably using specialized numerical
> libraries anyway, so there would be little benefit in implementing it
> in core.

You are right that not many people would see a significant benefit from it.
The reason I ask is because convolution has a better (best?) complexity
class than the current multiplication algorithm. I do like the idea of
minimizing reliance on external libraries, but only if the changes would
be useful to all the regular users of Python.

I was more interested in finding previous discussion (if any) on why
Karatsuba was chosen, not so much trying to alter the current
multiplication implementation.

Side note: Are Numpy/Scipy the libraries you are referring to?

--
Bill

Ian Kelly
Guest
Posts: n/a

 07-06-2011
On Wed, Jul 6, 2011 at 2:21 PM, Billy Mays <(E-Mail Removed)> wrote:
> Side note: Are Numpy/Scipy the libraries you are referring to?

I was thinking more of gmpy or mpmath, but I'm not personally well
acquainted with any of them.

Nobody
Guest
Posts: n/a

 07-07-2011
On Wed, 06 Jul 2011 22:05:52 +0200, Christian Heimes wrote:

> On the other hand, FFTs are based on e, complex numbers or
> trigonometric functions (= floats), which means you'll get rounding errors.

It's possible to perform a DFT over structures other than the complex
numbers. Schoenhage-Strassen uses a DFT over the ring of integers modulo
2^n + 1; it doesn't use floats.
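The point can be demonstrated directly: replacing the complex root of unity with a root of unity modulo a prime gives a number-theoretic transform (NTT) that convolves exactly, in pure integer arithmetic. A sketch; the modulus 998244353 = 119 * 2**23 + 1 with primitive root 3 is a standard textbook choice, not what Schoenhage-Strassen itself uses:

```python
MOD = 998244353          # prime; MOD - 1 = 119 * 2**23, so 2**23 divides MOD - 1
ROOT = 3                 # primitive root modulo MOD

def ntt(a, invert=False):
    # recursive radix-2 transform; len(a) must be a power of two <= 2**23
    n = len(a)
    if n == 1:
        return a[:]
    even = ntt(a[0::2], invert)
    odd = ntt(a[1::2], invert)
    w_n = pow(ROOT, (MOD - 1) // n, MOD)     # principal n-th root of unity
    if invert:
        w_n = pow(w_n, MOD - 2, MOD)         # its modular inverse
    out = [0] * n
    w = 1
    for k in range(n // 2):
        t = w * odd[k] % MOD
        out[k] = (even[k] + t) % MOD
        out[k + n // 2] = (even[k] - t) % MOD
        w = w * w_n % MOD
    return out

def convolve(xs, ys):
    """Exact convolution, valid while true coefficients stay below MOD."""
    n = 1
    while n < len(xs) + len(ys):
        n <<= 1
    fa = ntt(xs + [0] * (n - len(xs)))
    fb = ntt(ys + [0] * (n - len(ys)))
    res = ntt([a * b % MOD for a, b in zip(fa, fb)], invert=True)
    inv_n = pow(n, MOD - 2, MOD)
    return [c * inv_n % MOD for c in res]
```

For example, convolve([1, 2, 3], [4, 5]) produces the polynomial product coefficients 4, 13, 22, 15 (zero-padded to the transform length), with no floats and therefore no rounding anywhere.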

Ulrich Eckhardt
Guest
Posts: n/a

 07-07-2011
Billy Mays wrote:
> On 07/06/2011 04:02 PM, Ian Kelly wrote:
>> According to Wikipedia:
>>
>> """
>> In practice the Schönhage–Strassen algorithm starts to outperform
>> older methods such as Karatsuba and Toom–Cook multiplication for
>> numbers beyond 2**2**15 to 2**2**17 (10,000 to 40,000 decimal digits).
>> """
>>
>> I think most Python users are probably not working with numbers that
>> large, and if they are, they are probably using specialized numerical
>> libraries anyway, so there would be little benefit in implementing it
>> in core.

>
> You are right that not many people would gain significant use of it.

Even worse, most people would actually pay for its use, because they don't
use numbers large enough to merit the Schönhage–Strassen algorithm.

> The reason I ask is because convolution has a better (best ?) complexity
> class than the current multiplication algorithm.

The "asymptotic complexity" of algorithms (I guess that's what you mean)
describes behavior as n grows toward infinity. Such a claim says nothing
about inputs below some threshold n_0, where the relative performance can be
different, even though that caveat is usually left unstated. In other words,
lower complexity does not mean that something runs faster, only that for
large enough n it runs faster. If you never realistically reach that limit,
you can't reap those benefits.
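A back-of-the-envelope comparison makes this concrete. The cost constants below are invented purely for illustration (real ones depend entirely on the implementation), but the shape is right: the asymptotically better curve only drops below the worse one past a sizable crossover.

```python
import math

def kara_cost(n):
    return n ** 1.585             # ~ Karatsuba's n**log2(3), unit constant

def fft_cost(n):
    return 40 * n * math.log2(n)  # FFT-style cost with a large constant

# with these made-up constants, the FFT-style cost only wins for very large n
for n in (2**6, 2**10, 2**14, 2**18):
    faster = "karatsuba" if kara_cost(n) < fft_cost(n) else "fft"
    print(n, faster)
```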

That said, I'm sure that the developers would accept a patch that switches
to a different algorithm once the numbers get large enough. I believe it
already falls back from Karatsuba to schoolbook multiplication for small
numbers.

> I was more interested in finding previous discussion (if any) on why
> Karatsuba was chosen, not so much as trying to alter the current
> multiplication implementation.

I would hope that such design decisions are documented in the code or at
least referenced from there. Otherwise the code is impossible to understand
and maintain.

Cheers!

Uli

--
Domino Laser GmbH
Geschäftsführer: Thorsten Föcking, Amtsgericht Hamburg HR B62 932

Ian Kelly
Guest
Posts: n/a

 07-07-2011
On Thu, Jul 7, 2011 at 2:30 AM, Ulrich Eckhardt
<(E-Mail Removed)> wrote:
> Even worse, most people would actually pay for its use, because they don't
> use numbers large enough to merit the Schönhage–Strassen algorithm.

As it stands, Karatsuba is only used for numbers greater than a certain
threshold, and Schönhage–Strassen would presumably only be used above yet
another threshold, below which Karatsuba would still be used. So at
worst it would just add another comparison or two for numbers within
the Karatsuba range.
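In code, that dispatch would amount to a size check in front of the existing path; the cutoff value and the fancy_fft_mul name below are hypothetical, used only to show the shape:

```python
FFT_CUTOFF_BITS = 300_000        # hypothetical crossover, not a real CPython value

def fancy_fft_mul(x, y):
    # stand-in for an exact FFT/NTT-based multiply
    return x * y

def multiply(x, y):
    if max(x.bit_length(), y.bit_length()) < FFT_CUTOFF_BITS:
        return x * y             # existing schoolbook/Karatsuba path
    return fancy_fft_mul(x, y)   # only huge operands take the new path
```

Small operands pay only the one extra comparison; nothing else about the existing path changes.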

Ian Kelly
Guest
Posts: n/a

 07-07-2011
On Thu, Jul 7, 2011 at 9:49 AM, Ian Kelly <(E-Mail Removed)> wrote:
> On Thu, Jul 7, 2011 at 2:30 AM, Ulrich Eckhardt
> <(E-Mail Removed)> wrote:
>> Even worse, most people would actually pay for its use, because they don't
>> use numbers large enough to merit the Schönhage–Strassen algorithm.

>
> As it stands, Karatsuba is only used for numbers greater than a certain
> threshold, and Schönhage–Strassen would presumably only be used above yet
> another threshold, below which Karatsuba would still be used. So at
> worst it would just add another comparison or two for numbers within
> the Karatsuba range.

And I just realized that you said as much yourself further down in
your message. My apologies for the needless lecturing.

Parerga
Guest
Posts: n/a

 07-07-2011

Hi,

Billy Mays wrote:
>
>> On 07/06/2011 04:02 PM, Ian Kelly wrote:
>> > On Wed, Jul 6, 2011 at 1:30 PM, Billy Mays<(E-Mail Removed)> wrote:
>> >> I was looking through the Python source and noticed that long
>> >> multiplication is done using the Karatsuba method (O(n^1.585)) rather
>> >> than using FFTs (O(~n log n)). I was wondering if there was a reason
>> >> the Karatsuba method was chosen over the FFT convolution method?
>
>> The reason I ask is because convolution has a better (best?) complexity
>> class than the current multiplication algorithm.

Better complexity, yes. This means "smaller execution time for LARGE ENOUGH
operands".

Billy Mays wrote:
>
>> I was more interested in finding previous discussion (if any) on why
>> Karatsuba was chosen, not so much trying to alter the current
>> multiplication implementation.

I'm not a Python developer, but I worked on multiplication algorithms for
GMP [ http://gmplib.org/ ], and I can guess the answer:
- Karatsuba is (by far) simpler to implement,
- FFT-based multiplication is (by far) slower than Karatsuba for numbers
that are not huge.
I've attached a small graph (karaVSfft.pdf:
http://old.nabble.com/file/p32014454/karaVSfft.pdf ) with timings for
multiplications of n-bit operands (with GMP, on my very old laptop) using
Toom(2,2) (it's Karatsuba!) and the FFT-based computation. The former is
faster than the latter up to 10000 bits (GMP actually uses some Toom variant
for that size, to get the result even faster).

Regards,
Marco

--
http://bodrato.it/software/toom.html