On Fri, 23 Apr 2004, John wrote:

> Hi,
>
> I encountered a strange problem while debugging C code for a
> Windows-based application in LabWindows CVI V5.5, which led me to
> write the test code below. I tried this code with a different compiler
> and got the same erroneous result on two different PCs (with OS Win98
> & Win98SE), so it appears to be a problem with ANSI C. I thought that
> negative double variables could be compared as easily and *reliably*
> as integers, but apparently not?
ANSI C is fine. Your assumption about comparing double variables is wrong.

Think about it: there are infinitely many real numbers between 0 and 1,
so there is no way a computer can represent all of them, let alone all
real numbers in a larger range. This means that some numbers cannot be
represented. If I have:

double a = 2;
a += 0.01;
a -= 0.01;

then 2.01 may be a number your machine cannot represent exactly, so it
rounds to the closest representable value. When you subtract 0.01 again,
the result may also be unrepresentable and gets rounded once more. The
two roundings do not cancel, so a can end up as something like
1.9999999999999998 instead of exactly 2. This is called representation
error.

When you print it out with %f, printf shows only six digits after the
decimal point. So a looks like it is still 2.000000 -- the display has
simply trimmed off the representation error. If you did something like:

printf("%.20f\n", a);

you'd see the real value of a. In <float.h> there are macros to help with
this problem. Rather than comparing a to b directly, look at the
difference between a and b. If the difference is smaller than a suitable
epsilon (DBL_EPSILON, scaled by the magnitude of the values involved),
then you should assume they are close enough to be called equal.

> #include <ansi_c.h>
>
> void main (void)
> {
>     double a = -2.0, b = -2.0;
>
>     if (a > b)
>         printf("a is greater than b because a is %f and b is %f\n", a, b);
>     else
>         printf("a is not greater than b because a is %f and b is %f\n", a, b);
>
>     a -= 0.01; // decrease value of a by 0.01
>     a += 0.01; // restore original value of a by increasing it by 0.01
>
>     if (a > b)
>         printf("a is greater than b because a is %f and b is %f\n", a, b);
>     else
>         printf("a is not greater than b because a is %f and b is %f\n", a, b);
> }
>
> The output as copied from the emulated DOS window is:
>
> a is not greater than b because a is -2.000000 and b is -2.000000
> a is greater than b because a is -2.000000 and b is -2.000000
>
> If I decrement and then increment a by 0.001, everything is fine, so
> it doesn't look like there is a problem with the small magnitude of
> the fractions.
>
> I would be grateful for any solutions or suggestions to this problem
> so that I can process *all* fractions correctly.
>
> Thanks in advance,
> John.
>
--
Send e-mail to: darrell at cs dot toronto dot edu