Computation slower with float than with double

 
 
Michele Guidolin
06-06-2005
Hello to everybody.

I'm running some benchmarks of a red-black Gauss-Seidel algorithm on a
2-dimensional grid of different sizes and types, and I get some strange
results when I change the computation from double to float.

Here are the test times for different grid SIZEs and types:

SIZE     128     256     512
float    2.20s   2.76s   7.86s
double   2.30s   2.47s   2.59s

As you can see, when the grid size reaches 256 nodes the time for the
float version increases drastically.

What could be the problem? Could it be the cache? Shouldn't float
computation always be faster than double?

Hope to receive an answer as soon as possible,
Thanks

Michele Guidolin.
P.S.

Here is some more information about the test:

The code that I'm testing is below; the double version is exactly the
same, except that the constants are 0.25 rather than 0.25f.

------------- CODE -------------

float u[SIZE][SIZE];
float rhs[SIZE][SIZE];

inline void gs_relax(int i, int j)
{
    /* new value = rhs plus the average of the four neighbours
       (the 0.0f term gives the old centre value a weight of zero) */
    u[i][j] = ( rhs[i][j]         +
                0.0f  * u[i][j]   +
                0.25f * u[i+1][j] +
                0.25f * u[i-1][j] +
                0.25f * u[i][j+1] +
                0.25f * u[i][j-1] );
}

void gs_step_fusion()
{
    int i, j;

    /* update the red points: */
    for (j = 1; j < SIZE-1; j = j+2)
    {
        gs_relax(1, j);
    }
    for (i = 2; i < SIZE-1; i++)
    {
        for (j = 1 + (i+1)%2; j < SIZE-1; j = j+2)
        {
            gs_relax(i, j);
            gs_relax(i-1, j);
        }
    }
    for (j = 1; j < SIZE-1; j = j+2)
    {
        gs_relax(SIZE-2, j);
    }
}

---------------CODE--------------

I'm testing this code on this machine:

processor : 0
vendor_id : GenuineIntel
cpu family : 15
model : 4
model name : Intel(R) Pentium(R) 4 CPU 3.20GHz
stepping : 1
cpu MHz : 3192.311
cache size : 1024 KB
physical id : 0
siblings : 2
fdiv_bug : no
hlt_bug : no
f00f_bug : no
coma_bug : no
fpu : yes
fpu_exception : yes
cpuid level : 3
wp : yes
flags : fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge
mca cmov pat pse36 clflush dts acpi mmx fxsr sse sse2 ss ht tm pbe pni
monitor ds_cpl cid
bogomips : 6324.22

with Hyper-Threading enabled, on Linux 2.6.8.

The compiler is gcc 3.4.4 and the flags are:
CFLAGS = -g -O2 -funroll-loops -msse2 -march=pentium4 -Wall
 
 
 
 
 
those who know me have no need of my name
06-06-2005
in comp.lang.c i read:

> I'm running some benchmarks of a red-black Gauss-Seidel algorithm on a
> 2-dimensional grid


> As you can see, when the grid size reaches 256 nodes the time for the
> float version increases drastically.
>
> What could be the problem? Could it be the cache? Shouldn't float
> computation always be faster than double?


most likely your system does all floating point computations using a
precision greater than float, then reduces the result when the value must
be stored, which happens more often as you increase the size of the table.
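
Whether a given implementation evaluates float expressions in a wider
format can be checked with the C99 FLT_EVAL_METHOD macro from <float.h>;
a minimal sketch of such a check:

------------- CODE -------------

#include <float.h>
#include <stdio.h>

int main(void)
{
    /* FLT_EVAL_METHOD (C99 <float.h>):
        0  float expressions are evaluated in float precision
        1  float and double are both evaluated as double
        2  everything is evaluated as long double (classic x87 behaviour)
       -1  indeterminable; other negative values are implementation-defined */
    printf("FLT_EVAL_METHOD = %d\n", (int)FLT_EVAL_METHOD);
    return 0;
}

---------------CODE--------------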

--
a signature
 
 
 
 
 
Eric Sosman
06-06-2005


Michele Guidolin wrote:
> Hello to everybody.
>
> I'm running some benchmarks of a red-black Gauss-Seidel algorithm on a
> 2-dimensional grid of different sizes and types, and I get some strange
> results when I change the computation from double to float.
>
> Here are the test times for different grid SIZEs and types:
>
> SIZE     128     256     512
> float    2.20s   2.76s   7.86s
> double   2.30s   2.47s   2.59s
>
> As you can see, when the grid size reaches 256 nodes the time for the
> float version increases drastically.


I see a modest increase at 256 and a huge increase at 512.
Have there been any transcription errors?

I also see that the code you didn't show probably accounts
for the lion's share of the running time, which casts suspicion
on drawing too many conclusions from a couple of experiments.
The running time of the posted code should increase (roughly)
as the square of SIZE, so changing SIZE from 128 to 512 should
inflate its running time by a factor of (about) sixteen. Yet
this supposed sixteen-fold increase added only 0.29 seconds to
the running time for "double;" a straightforward calculation
(based on data of unknown accuracy, to be sure) suggests that
the rest of the program accounts for 89% or more of the time
in that case, and even more in the other two.

... and if such a large portion of the total time resides
"elsewhere," it would be unwise to draw too many conclusions
until the contributions of "elsewhere" are better characterized,
or better controlled for (e.g., by repeated experiment and
statistical analysis).
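
For concreteness, that back-of-the-envelope calculation can be written
out (a sketch only, using the posted double timings and assuming the
posted loops scale as the square of SIZE):

------------- CODE -------------

#include <stdio.h>

int main(void)
{
    /* double timings from the table above */
    double t128 = 2.30, t512 = 2.59;

    /* If the posted loops scale as SIZE^2, their cost at SIZE=512 is
       about sixteen times their cost at SIZE=128, so the 0.29 s
       difference is essentially the whole cost of those loops at 512. */
    double relax512 = t512 - t128;

    printf("posted loops at SIZE=512: %.2f s (%.0f%% of total)\n",
           relax512, 100.0 * relax512 / t512);
    printf("everything else:          %.2f s (%.0f%% of total)\n",
           t512 - relax512, 100.0 * (t512 - relax512) / t512);
    return 0;
}

---------------CODE--------------

This prints roughly 11% for the posted loops and 89% for everything else.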

> What could be the problem? Could it be the cache? Shouldn't float
> computation always be faster than double?


Cache might be a problem. So might alignment, or other
competing processes on the machine. If you're reading the
initial data from a file, perhaps one test paid the penalty of
actually reading from the disk while the others benefitted from
the file system's cache. Or maybe the disk is just beginning
to go sour, and the O/S relocated an entire track of data in
the middle of one test. Or maybe the phase of the moon wasn't
propitious.

Should float always be faster than double? No, the C language
Standard is silent on matters of speed (which makes the entire
discussion off-topic here, or at least slightly so). You've shown
some puzzling data, but you need more data and more analysis to
draw good conclusions, and the results you eventually get will
most likely be relevant only to the system you got them on, and
not to the C language. I'd suggest further experimentation, and
a change to a newsgroup devoted to your system, where the experts
on your system's quirks hang out.

--
(E-Mail Removed)

 
 
CBFalconer
06-06-2005
Michele Guidolin wrote:
>

.... snip ...
>
> Here are the test times for different grid SIZEs and types:
>
> SIZE     128     256     512
> float    2.20s   2.76s   7.86s
> double   2.30s   2.47s   2.59s
>
> As you can see, when the grid size reaches 256 nodes the time for the
> float version increases drastically.
>
> What could be the problem? Could it be the cache? Shouldn't float
> computation always be faster than double?


C real computations are always done as doubles by default. When
you specify floats you are primarily constricting the storage, and
are causing float->double->float conversions to be done. These are
eating up the time.

--
"If you want to post a followup via groups.google.com, don't use
the broken "Reply" link at the bottom of the article. Click on
"show options" at the top of the article, then click on the
"Reply" at the bottom of the article headers." - Keith Thompson


 
 
Eric Sosman
06-06-2005


CBFalconer wrote:
> Michele Guidolin wrote:
>
> ... snip ...
>
>> Here are the test times for different grid SIZEs and types:
>>
>> SIZE     128     256     512
>> float    2.20s   2.76s   7.86s
>> double   2.30s   2.47s   2.59s
>>
>> As you can see, when the grid size reaches 256 nodes the time for the
>> float version increases drastically.
>>
>> What could be the problem? Could it be the cache? Shouldn't float
>> computation always be faster than double?

>
>
> C real computations are always done as doubles by default. When
> you specify floats you are primarily constricting the storage, and
> are causing float->double->float conversions to be done. These are
> eating up the time.


That was true in pre-Standard days, but ever since
C89 the implementation has been allowed to use `float'
arithmetic when only `float' operands are involved. Not
all implementations do so (and I don't know whether the
O.P.'s does), but it's no longer a certainty that the
conversions are occurring. C99 6.3.1.8 or C89 3.2.1.5;
I don't have the section number for C90.

--
(E-Mail Removed)

 
 
Lawrence Kirby
06-06-2005
On Mon, 06 Jun 2005 20:54:56 +0000, CBFalconer wrote:

> Michele Guidolin wrote:
>>

> ... snip ...
>>
>> Here are the test times for different grid SIZEs and types:
>>
>> SIZE     128     256     512
>> float    2.20s   2.76s   7.86s
>> double   2.30s   2.47s   2.59s
>>
>> As you can see, when the grid size reaches 256 nodes the time for the
>> float version increases drastically.
>>
>> What could be the problem? Could it be the cache? Shouldn't float
>> computation always be faster than double?

>
> C real computations are always done as doubles by default.


That was true in K&R C, but not in standard C. An implementation CAN
perform calculations in greater precision than the representation of the
type but it is not required to.

> When
> you specify floats you are primarily constricting the storage, and
> are causing float->double->float conversions to be done. These are
> eating up the time.


Perhaps. But on common architectures it is typically the case that float
operations are performed using float precision, or else loading/storing a
float-sized object in memory to/from a wider register is no more expensive
than for a double-sized object in memory.

Lawrence







 
 
Christian Bau
06-06-2005
In article <newscache$0viphi$xhe$(E-Mail Removed)>,
Michele Guidolin <"michele dot guidolin at ucd dot ie"> wrote:

> Hello to everybody.
>
> I'm running some benchmarks of a red-black Gauss-Seidel algorithm on a
> 2-dimensional grid of different sizes and types, and I get some strange
> results when I change the computation from double to float.
>
> Here are the test times for different grid SIZEs and types:
>
> SIZE     128     256     512
> float    2.20s   2.76s   7.86s
> double   2.30s   2.47s   2.59s


As a rule of thumb: Accessing array elements at a distance that is a
large power of two is asking for trouble (performance wise).

Any reason why you choose powers of two? Why not SIZE = 50, 100, 200,
500?
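
The arithmetic behind that rule of thumb can be sketched as follows,
assuming for illustration a 16 KB, 8-way L1 data cache with 64-byte lines
(i.e. 32 sets); the real geometry depends on the CPU:

------------- CODE -------------

#include <stdio.h>

#define LINE_BYTES 64   /* assumed cache line size              */
#define SETS       32   /* assumed: 16 KB / (64 B * 8 ways)     */

int main(void)
{
    unsigned long size;

    /* u[i-1][j], u[i][j] and u[i+1][j] are one row apart, i.e.
       SIZE * sizeof(float) bytes.  If that stride is a multiple of
       LINE_BYTES * SETS, vertically adjacent elements map to the same
       cache set and keep evicting each other.                        */
    for (size = 128; size <= 512; size *= 2) {
        unsigned long stride = size * sizeof(float);
        printf("SIZE=%3lu: row stride %4lu bytes -> set distance %lu\n",
               size, stride, (stride / LINE_BYTES) % SETS);
    }
    return 0;
}

---------------CODE--------------

A set distance of 0 (as for SIZE=512 with float under these assumptions)
means vertically neighbouring elements compete for the same cache set;
non-power-of-two sizes such as 50, 100, 200 or 500 avoid that pattern.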
 
 
Michele Guidolin
06-07-2005
Christian Bau wrote:
>> SIZE     128     256     512
>> float    2.20s   2.76s   7.86s
>> double   2.30s   2.47s   2.59s
>
> As a rule of thumb: Accessing array elements at a distance that is a
> large power of two is asking for trouble (performance wise).
>
> Any reason why you choose powers of two? Why not SIZE = 50, 100, 200,
> 500?



OK! I tried some more tests with different grid SIZEs. In the previous
message I forgot to say that the number of loop iterations is adjusted
according to the grid SIZE (see the code below), so the times for two
different SIZEs shouldn't really be compared as if they were proportional.

-------code ----

/* ITERATIONS * SIZE^2 point updates stay roughly constant (about 2^28) */
ITERATIONS = ((int)(pow(2.0,28.0))/(pow((double)SIZE,2.0)));

gettimeofday(&submit_time, 0);

for(iter=0; iter<ITERATIONS; iter++)
    gs_step_fusion();

gettimeofday(&complete_time, 0);

-------code -----
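
The elapsed seconds are then presumably computed from the two timevals
roughly as in this sketch (submit_time and complete_time being the
struct timeval variables above):

------------- CODE -------------

#include <sys/time.h>

/* seconds elapsed between two gettimeofday() samples */
double elapsed_seconds(struct timeval start, struct timeval end)
{
    return (double)(end.tv_sec  - start.tv_sec)
         + (double)(end.tv_usec - start.tv_usec) / 1e6;
}

/* e.g.  printf("%.2fs\n", elapsed_seconds(submit_time, complete_time)); */

---------------CODE--------------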

Moreover, the time covers only the loop itself and not other things,
like data initialization and printing of results.

The new test times are:

SIZE     100     200     300     400     500     513
Float    2.17s   2.44s   3.35s   5.82s   8.37s   7.98s
Double   2.32s   2.34s   2.57s   2.63s   2.63s   2.65s

When I use a profiler it shows me that 95% of the time is spent in
these two calls:

    for(j=1+(i+1)%2; j<SIZE-1; j=j+2)
    {
        gs_relax(i,j);   // 45%
        gs_relax(i-1,j); // 45%
    }

So I still don't understand why the float version is so slow.
Any help?

Thanks for any answer.

Michele Guidolin
 
 
Michele Guidolin
06-07-2005
Lawrence Kirby wrote:
>> Moreover, the time covers only the loop itself and not other things,
>> like data initialization and printing of results.
>>
>> The new test times are:
>>
>> SIZE     100     200     300     400     500     513
>> Float    2.17s   2.44s   3.35s   5.82s   8.37s   7.98s
>> Double   2.32s   2.34s   2.57s   2.63s   2.63s   2.65s
>>
>> When I use a profiler it shows me that 95% of the time is spent in
>> these two calls:
>>
>>     for(j=1+(i+1)%2; j<SIZE-1; j=j+2)
>
> What is i? Is this an inner loop?
>
>>     {
>>         gs_relax(i,j);   // 45%
>>         gs_relax(i-1,j); // 45%
>
> This suggests that you need to look in gs_relax to see what is happening.
>
>>     }
>> }
>>
>> So I still don't understand why the float version is so slow.
>> Any help?
>
> You have yet to show any code that accesses float or double data.
>
> Lawrence


gs_relax simply does one red-black Gauss-Seidel relaxation step.
I already posted the code in the first message, but I post it again.
The double version is exactly the same (with the constant 0.25 instead
of 0.25f).

I really don't understand why the float version gets so slow with a
SIZE > 300. Maybe a gcc bug?
If someone has an idea it will be very much appreciated.
Thanks,
Michele.

------------- CODE -------------

float u[SIZE][SIZE];
float rhs[SIZE][SIZE];

inline void gs_relax(int i, int j)
{
    /* new value = rhs plus the average of the four neighbours
       (the 0.0f term gives the old centre value a weight of zero) */
    u[i][j] = ( rhs[i][j]         +
                0.0f  * u[i][j]   +
                0.25f * u[i+1][j] +
                0.25f * u[i-1][j] +
                0.25f * u[i][j+1] +
                0.25f * u[i][j-1] );
}

void gs_step_fusion()
{
    int i, j;

    /* update the red points: */
    for (j = 1; j < SIZE-1; j = j+2)
    {
        gs_relax(1, j);
    }
    for (i = 2; i < SIZE-1; i++)
    {
        for (j = 1 + (i+1)%2; j < SIZE-1; j = j+2)
        {
            gs_relax(i, j);
            gs_relax(i-1, j);
        }
    }
    for (j = 1; j < SIZE-1; j = j+2)
    {
        gs_relax(SIZE-2, j);
    }
}

---------------CODE--------------
 
 
Tim Prince
06-07-2005

"Michele Guidolin" <"michele dot guidolin at ucd dot ie"> wrote in message
news:newscache$fsqqhi$o85$(E-Mail Removed)...
> Christian Bau wrote:
>>> SIZE     128     256     512
>>> float    2.20s   2.76s   7.86s
>>> double   2.30s   2.47s   2.59s
>>
>> As a rule of thumb: Accessing array elements at a distance that is a
>> large power of two is asking for trouble (performance wise).
>>
>> Any reason why you choose powers of two? Why not SIZE = 50, 100, 200,
>> 500?
>
> OK! I tried some more tests with different grid SIZEs. In the previous
> message I forgot to say that the number of loop iterations is adjusted
> according to the grid SIZE (see the code below), so the times for two
> different SIZEs shouldn't really be compared as if they were
> proportional.
>
> -------code ----
>
> /* ITERATIONS * SIZE^2 point updates stay roughly constant (about 2^28) */
> ITERATIONS = ((int)(pow(2.0,28.0))/(pow((double)SIZE,2.0)));
>
> gettimeofday(&submit_time, 0);
>
> for(iter=0; iter<ITERATIONS; iter++)
>     gs_step_fusion();
>
> gettimeofday(&complete_time, 0);
>
> -------code -----
>
> Moreover, the time covers only the loop itself and not other things,
> like data initialization and printing of results.
>
> The new test times are:
>
> SIZE     100     200     300     400     500     513
> Float    2.17s   2.44s   3.35s   5.82s   8.37s   7.98s
> Double   2.32s   2.34s   2.57s   2.63s   2.63s   2.65s
>
> When I use a profiler it shows me that 95% of the time is spent in
> these two calls:
>
>     for(j=1+(i+1)%2; j<SIZE-1; j=j+2)
>     {
>         gs_relax(i,j);   // 45%
>         gs_relax(i-1,j); // 45%
>     }
>
> So I still don't understand why the float version is so slow.
> Any help?

I was reluctant to attempt an answer, as I wasn't certain whether your
options invoke SSE code generation. Several other answers seemed to imply
that people thought so, but weren't certain. Maybe attacking the problem
more directly makes it off topic for c.l.c, but I've already seen plenty
of answers which don't look like pure Standard C information.

When you divide your grid more finely, are you running into gradual
underflow? If so, what happens when you invoke abrupt underflow, as

gcc -O2 -funroll-loops -march=pentium4 -mfpmath=sse -ffast-math

might do? Most compilers have gradual underflow on by default, since it
is required by the IEEE 754 standard, and turn it off either with a
specific option or as part of some "fast" package.

Gradual underflow is quite slow on early P4 steppings, in case you didn't
believe this question could go far OFF TOPIC.
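
If gradual underflow is the culprit, the grid values must be decaying into
the subnormal range as the iteration converges. One way to test that
hypothesis (a sketch only; it borrows SIZE and u[][] from the posted code
and assumes the float math runs through SSE, e.g. with -mfpmath=sse):

------------- CODE -------------

#include <float.h>      /* FLT_MIN: smallest normalised float             */
#include <math.h>       /* fabsf                                          */
#include <stdio.h>
#include <xmmintrin.h>  /* _MM_SET_FLUSH_ZERO_MODE, needs SSE             */

#ifndef SIZE
#define SIZE 512        /* stand-in; in the benchmark SIZE is set outside */
#endif
float u[SIZE][SIZE];    /* stand-in for the grid in the posted code       */

/* Count grid values that have decayed below FLT_MIN: non-zero values
   smaller than that are subnormal ("gradual underflow"), which the
   hardware may handle via a slow microcode assist.                      */
static long count_subnormals(void)
{
    long n = 0;
    int i, j;
    for (i = 0; i < SIZE; i++)
        for (j = 0; j < SIZE; j++)
            if (u[i][j] != 0.0f && fabsf(u[i][j]) < FLT_MIN)
                n++;
    return n;
}

int main(void)
{
    /* Abrupt underflow: make SSE arithmetic flush subnormal results to
       zero.  This affects only SSE math (-mfpmath=sse), not x87; if the
       float slowdown disappears with it (or with -ffast-math), gradual
       underflow was the problem.                                        */
    _MM_SET_FLUSH_ZERO_MODE(_MM_FLUSH_ZERO_ON);

    /* ... run the benchmark loop here, then inspect the grid: */
    printf("subnormal grid values: %ld of %d\n",
           count_subnormals(), SIZE * SIZE);
    return 0;
}

---------------CODE--------------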


 
 
 
 