<(E-Mail Removed)> wrote in message news:(E-Mail Removed) oups.com...

> Suppose unsigned int (32 bits) is the largest number the computer can
> represent with a single variable.
>
> Now, I have a big integer (less than 64 bits, but greater than 32 bits).
> I represent it this way:
>
> unsigned int dividend[2];
>
> dividend[0] stores the low 32 bits of the big integer, dividend[1]
> stores the high 32 bits.
>
> My problem is how to do a division, like this:
>
> quotient = big integer / divisor, remainder = big integer mod divisor
> (divisor is a 32-bit unsigned integer);
>
> How can I get the quotient and remainder?
Richard Heathfield suggested bit-shifting, and for your problem this will
work just fine. However, it is [very!] suboptimal, even in the case you
proposed.

The key issue -- and it is the same situation for addition, subtraction, and
multiplication -- is that the processor inherently has integer addition,
subtraction, multiplication and division instructions; but they handle
operands smaller than you are interested in.

The key question is whether it is possible to use these "small" instructions
multiple times to deal with larger operands.

In the case of addition and subtraction, the answer is clearly YES. Even if
one is programming in 'C' and doesn't have direct access to the CARRY bit of
the processor, if the result of an addition is smaller than either of the
input arguments (which must be unsigned), then a carry occurred. With a
little thought, one can code multi-precision integer addition that doesn't
perform badly (although it won't be as efficient as assembly language).

For example:

unsigned int input1[2];   /* LSI first for all of these */
unsigned int input2[2];
unsigned int output[3];   /* Result has one more int to hold carry out. */
unsigned int carryout[1];

output[0] = input1[0] + input2[0];
if ((output[0] < input1[0]) || (output[0] < input2[0]))
    carryout[0] = 1;
else
    carryout[0] = 0;

output[1] = input1[1] + input2[1];
if ((output[1] < input1[1]) || (output[1] < input2[1]))
    output[2] = 1;
else
    output[2] = 0;

/* Now, process the carry out of the LSI */
if (carryout[0])
{
    output[1]++;
    if (!output[1])   /* increment wrapped: carry ripples onward */
        output[2]++;
}

I don't claim that I didn't make some kind of mistake in the above (I'm
doing this from scratch). The bottom line is that one can do addition and
subtraction of large integers in 'C' fairly effectively.
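To show the subtraction half of that claim, here is a minimal sketch in the same two-limb layout (the name sub64 is my own, for illustration): for unsigned operands, the borrow test is just the mirror image of the carry test above.

```c
#include <stdint.h>

/* 64-bit subtraction from two 32-bit limbs (LSI first, as above).
   For unsigned values, a[0] - b[0] wraps around exactly when
   b[0] > a[0], so that comparison IS the borrow bit. */
void sub64(const uint32_t a[2], const uint32_t b[2], uint32_t out[2])
{
    uint32_t borrow = (a[0] < b[0]);   /* 1 if the low limb borrows */
    out[0] = a[0] - b[0];
    out[1] = a[1] - b[1] - borrow;
}
```

As with the addition code, this assumes the caller either knows a >= b or is happy with modulo-2^64 wraparound.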

Note that the solution above uses the inherent ability of the processor to
add using native machine instructions.

The solution suggested by Richard Heathfield is just as crude as doing
addition one bit at a time (compared to the technique above).

I won't get into multiplication ... but there is a way to do that using the
processor's multiplication ability, too. You'll figure it out quickly if
you think about it.
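For what it's worth, the trick hinted at above is the schoolbook scheme: split each 32-bit operand into two 16-bit "digits", form the four partial products with ordinary 32-bit multiplies, and combine them with the same kind of carry handling shown earlier. A sketch (mul32x32 is my own name for it):

```c
#include <stdint.h>

/* Multiply two 32-bit values into a 64-bit result (hi:lo) using only
   32-bit multiplies of 16-bit "digits" -- four partial products,
   placed at their digit positions and summed with carry propagation. */
void mul32x32(uint32_t a, uint32_t b, uint32_t *hi, uint32_t *lo)
{
    uint32_t a0 = a & 0xFFFFu, a1 = a >> 16;
    uint32_t b0 = b & 0xFFFFu, b1 = b >> 16;

    uint32_t p00 = a0 * b0;   /* contributes at bit 0  */
    uint32_t p01 = a0 * b1;   /* contributes at bit 16 */
    uint32_t p10 = a1 * b0;   /* contributes at bit 16 */
    uint32_t p11 = a1 * b1;   /* contributes at bit 32 */

    /* Middle column: high half of p00 plus low halves of the
       cross terms; at most ~18 bits, so it can't overflow. */
    uint32_t mid = (p00 >> 16) + (p01 & 0xFFFFu) + (p10 & 0xFFFFu);

    *lo = (p00 & 0xFFFFu) | (mid << 16);
    *hi = p11 + (p01 >> 16) + (p10 >> 16) + (mid >> 16);
}
```

The same pattern extends to 64x64 -> 128 with 32-bit digits, which is exactly what you'd want for multi-precision multiplication.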

And finally, division ... which is the toughest case.

If you try to do the algebra and figure out whether you can use "small"
machine division instructions to accomplish a larger division, you'll
rapidly come to the conclusion that you can't. (Fortunately, Donald Knuth
and his peers are just a bit more experienced than you or I. It turns out
there is a way to do it.)

The process is very similar to the way people do longhand division. In
longhand division, people estimate one digit at a time, then multiply and
subtract, then go on to the next digit. Occasionally, one guesses wrong and
has to increase or decrease the quotient digit and re-do that digit.

In the algorithm, a "digit" is 16-32 bits; and the result of estimation may
be off by as much as 2 counts in one direction only (there is a simple and
cheap correction procedure).

The classic algorithm assumes that the machine has a division instruction
that takes a dividend of bitsize 2w, divides it by a divisor of bitsize w,
and produces a quotient of bitsize w and a remainder of bitsize w, with the
possibility of overflow (which is deliberately avoided by the algorithm).

For a typical desktop processor, w is 32 bits, so that in one machine
instruction you can do a 64/32 division. However, typically compilers will
only allow you to do a 32/32 division, so you have to use w=16 and apply the
standard algorithm.

The algorithm essentially will produce 16 bits of the result at a time
(versus 1 bit at a time from the bit-shifting approach).

If you have access to the assembly language of the machine, normally you can
get [at least] 32 bits at a time.

The standard integer division algorithm is a bit awkward to code in 'C', but
when the data sizes are known in advance (as they are in your case), it gets
less awkward.

I won't include the code here (too much thought would be required).
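That said, the easy special case -- a divisor that fits in a single 16-bit "digit" -- needs no estimation or correction step at all, and it shows the shape of the digit-at-a-time loop with w=16 as described above. A sketch (div64by16 and its argument layout are my own invention for illustration):

```c
#include <stdint.h>

/* Divide a 64-bit value (given as two 32-bit halves) by a divisor that
   fits in 16 bits, producing the quotient halves and the remainder.
   This is the single-digit-divisor case of the classic algorithm with
   w = 16: each step divides a 32-bit partial dividend (previous
   remainder shifted up 16, plus the next 16-bit digit) by the divisor
   using the compiler's ordinary 32/32 division. */
void div64by16(uint32_t hi, uint32_t lo, uint32_t divisor,
               uint32_t *qhi, uint32_t *qlo, uint32_t *rem)
{
    uint16_t digits[4] = {          /* most significant digit first */
        (uint16_t)(hi >> 16), (uint16_t)hi,
        (uint16_t)(lo >> 16), (uint16_t)lo
    };
    uint16_t q[4];
    uint32_t r = 0;
    int i;

    for (i = 0; i < 4; i++) {
        /* r < divisor < 2^16, so this partial dividend fits in 32 bits
           and its quotient digit fits in 16 bits -- no overflow. */
        uint32_t part = (r << 16) | digits[i];
        q[i] = (uint16_t)(part / divisor);
        r = part % divisor;
    }
    *qhi = ((uint32_t)q[0] << 16) | q[1];
    *qlo = ((uint32_t)q[2] << 16) | q[3];
    *rem = r;
}
```

For a full 32-bit divisor you need the real thing -- normalization, two-digit trial division, and the off-by-up-to-2 correction described above -- which is what Knuth and GMP cover.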

Here are the resources you should look up.

a) Knuth covers this in Volume 2 of his classic work:
http://www.amazon.com/Art-Computer-P...e=UTF8&s=books

b) GMP has some division code that is compiled in the event
assembly language isn't available for the specific processor. This will use
the "digit estimation" technique I described.

c) Also, this URL should be helpful. It also cites Knuth:
http://en.wikipedia.org/wiki/Arbitra...ion_arithmetic
Heathfield's suggestion will work just fine. However, in the general case,
you don't want to do that.

Dave.