method calls faster than a loop?

 
 
tom fredriksen
03-14-2006
Hi

I did a test to compare a task making lots of method calls to the same
operation in a loop, just to get a sense of the speed cost of method
calls. What I found was a bit strange. One would think the loop would be
the fastest solution, but in my example it's actually the method call
that is faster. Does anybody have any idea why that is, or whether there
is something wrong with the code?

Are there any factors, other than the speed of the jump and the time it
takes to put the parameters on the stack, that are relevant to the test?

/tom


import java.util.*;

public class BranchTest
{
    private Random rand = new Random();

    public int add(int sum)
    {
        return (sum + rand.nextInt(10));
    }

    public static void main(String[] args)
    {
        // Test 1: accumulate via an instance method call.
        {
            BranchTest b = new BranchTest();

            int count = Integer.parseInt(args[0]);
            int total = 0;
            long startTime = System.currentTimeMillis();

            for (int c = 0; c < count; c++) {
                total = b.add(total);
            }
            long endTime = System.currentTimeMillis();

            System.out.println("Elapsed time: " + (endTime - startTime));
        }

        // Test 2: the same accumulation done inline in the loop.
        {
            Random rand = new Random();
            int count = Integer.parseInt(args[0]);
            int total = 0;
            long startTime = System.currentTimeMillis();

            for (int c = 0; c < count; c++) {
                total += rand.nextInt(10);
            }
            long endTime = System.currentTimeMillis();

            System.out.println("Elapsed time: " + (endTime - startTime));
        }
    }
}
 
Chris Uppal
03-15-2006
tom fredriksen wrote:

> I did a test to compare a task making lots of method calls to the same
> operation in a loop, just to get a sense of the speed cost of method
> calls. What I found was a bit strange.


There's no point at all in doing this kind of micro-benchmarking unless you
also take account of the behaviour of the JITer (and also the optimiser which
can just remove whole chunks of code).

Say you want to compare code snippet A with snippet B. Pull both snippets out
into methods runA() and runB(). In each method include enough loops that the
time taken is significant in relation to the measuring "noise" (5 or 10
seconds should be enough). Then put /another/ loop around the calls to runA()
and runB(), which calls each alternately, timing both, and which loops at least
half a dozen times, printing out the time taken for each call.

You will almost certainly see that the first few calls are slower than the
later ones. You will also get an informal impression of how much jitter there
is in the results. If you want to be formal you can use statistical techniques
to analyse the results, but even without that, a number is basically meaningless
unless you have a sense of how much error there is likely to be in it.

BTW, the use of explicit methods which you call (rather than just using a
doubly-nested loop) is /not/ optional if you want your results to be
indicative. It's also always worth trying with the -server option to get a
better view of how much difference the JITer is making, and of how JITable the
code is.
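
Roughly the shape I mean, as an untested sketch (the class name, the iteration
count, and the bodies of runA()/runB() are just placeholders for your two
snippets):

import java.util.Random;

public class BenchHarness
{
    // Placeholder: tune so that each run takes roughly 5-10 seconds.
    private static final int ITERATIONS = 100000000;

    private final Random rand = new Random();

    // Snippet A: the work done through a method call.
    private int add(int sum)
    {
        return sum + rand.nextInt(10);
    }

    private int runA()
    {
        int total = 0;
        for (int i = 0; i < ITERATIONS; i++) {
            total = add(total);
        }
        return total;   // return the result so the optimiser can't discard the loop
    }

    // Snippet B: the same work done inline.
    private int runB()
    {
        int total = 0;
        for (int i = 0; i < ITERATIONS; i++) {
            total += rand.nextInt(10);
        }
        return total;
    }

    public static void main(String[] args)
    {
        BenchHarness b = new BenchHarness();
        long sink = 0;

        // Outer loop: call each snippet alternately, several times, timing
        // every call. Expect the first round or two to be slower while the
        // JIT warms up.
        for (int round = 0; round < 6; round++) {
            long t0 = System.currentTimeMillis();
            sink += b.runA();
            long t1 = System.currentTimeMillis();
            sink += b.runB();
            long t2 = System.currentTimeMillis();

            System.out.println("round " + round
                + ": runA " + (t1 - t0) + " ms, runB " + (t2 - t1) + " ms");
        }

        // Use the accumulated result so neither loop can be removed as dead code.
        System.out.println("(checksum " + sink + ")");
    }
}

Run it once as-is and once as "java -server BenchHarness" to see how much
difference the server JIT makes to the same code.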

-- chris


 
tom fredriksen
03-15-2006
Chris Uppal wrote:
> tom fredriksen wrote:
>
>> I did a test to compare a task making lots of method calls to the same
>> operation in a loop, just to get a sense of the speed cost of method
>> calls. What I found was a bit strange.

>
> There's no point at all in doing this kind of micro-benchmarking unless you
> also take account of the behaviour of the JITer (and also the optimiser which
> can just remove whole chunks of code).


How am I not doing that here?
If by that you mean that the second test should also be an instance
method, then OK, but will it matter that much? I have my doubts.

> Say you want to compare code snippet A with snippet B. Pull both snippets out
> into methods runA() and runB(). In each method include enough loops that the
> time taken is significant in relation to the measuring "noise" (5 or 10
> seconds should be enough).


That would be what I am saying in the above comment then...

> Then put /another/ loop around the calls to runA()
> and runB(), which calls each alternately, timing both, and which loops at least
> half a dozen times, printing out the time taken for each call.
>
> You will almost certainly see that the first few calls are slower than the
> later ones. You will also get an informal impression of how much jitter there
> is in the results. If you want to be formal you can use statistical techniques
> to analyse the results, but even without that, a number is basically meaningless
> unless you have a sense of how much error there is likely to be in it.


I don't see how a doubly-nested loop is going to show anything, please
explain.

> BTW, the use of explicit methods which you call (rather than just using a
> doubly-nested loop) is /not/ optional if you want your results to be
> indicative.


??? same as above.

> It's also always worth trying with the -server option to get a
> better view of how much difference the JITer is making, and of how JITable the
> code is.


That's fair enough, since there is quite a difference in their speed and
functionality, as Sun mostly focuses on server-side development.


/tom
 
Timo Stamm
03-15-2006
tom fredriksen wrote:
> Hi
>
> I did a test to compare a task making lots of method calls to the same
> operation in a loop, just to get a sense of the speed cost of method
> calls.



What's the use?

If you find that method calls are slow, will you stop using methods?

It's silly to optimize without any need for optimization. It only hurts
readability and probably makes your code run slower in the next VM.


Timo
 
tom fredriksen
03-15-2006
Timo Stamm wrote:
> What's the use?
>
> If you find that method calls are slow, will you stop using methods?
>
> It's silly to optimize without any need for optimization. It only hurts
> readability and probably make you code run slower in the next VM.


It is, as I said, just to get a sense of its speed compared to
e.g. for loops. My main reason for trying this is a typical claim:

"C++ is slower than C because of the OO fashion of method calls all
over the place, even for tiny operations." (paraphrased)

Jumps take time, even on branch-predicting machines such as x86.
So I wanted to make a little test to see if there was any significant
difference, and the test showed the complete opposite of my assumption.
So this was a small test of the speed cost of one consequence of the OO
paradigm. Since I am interested in understanding programming languages
and their consequences, this is of some interest to me. But if you have
a better suggestion of how to test the claim, please share.

/tom
 
Timo Stamm
03-15-2006
tom fredriksen wrote:
> Timo Stamm wrote:
>> What's the use?
>>
>> If you find that method calls are slow, will you stop using methods?
>>
>> It's silly to optimize without any need for optimization. It only
>> hurts readability and probably makes your code run slower in the next VM.

>
> It is, as I said, just to get a sense of its speed compared to
> e.g. for loops. My main reason for trying this is a typical claim:
>
> "C++ is slower than C because of the OO fashion of method calls all
> over the place, even for tiny operations." (paraphrased)


I understand.

I think it would be very easy to "prove" the superiority of either
technology/language/whatever with properly adjusted benchmarks. The
industry does it all the time.

These microbenchmarks never tell you anything about real-world performance
(except when you really are comparing apples to oranges, say Ruby and
assembler). They are only used to hype or slate something.

The best argument against claims like the above shouldn't be about
performance. It should rather be: The OO language allows me to create
more secure, easier to maintain, more reusable software. It is usually
much cheaper to stick more RAM into a machine or even buy a faster one,
than spending countless hours on hand-optimizing code and maintaining it.


> Jumps take time, even on branch-predicting machines such as x86.
> So I wanted to make a little test to see if there was any significant
> difference, and the test showed the complete opposite of my assumption.


There are so many variables. Without profound knowledge about compilers
in general, and about the JVM (including the JIT) in particular, you can
only guess.

Anyway, the concept of virtual machines is the way to go. Today,
compilers produce much more efficient code for SMP than a human could
write in assembler. The same will be true for statically versus
just-in-time compiled code (in some cases, it already is).

See: http://portal.acm.org/citation.cfm?id=345105.352548


Timo
 
tom fredriksen
03-15-2006
Timo Stamm wrote:
>
> The best argument against claims like the above shouldn't be about
> performance. It should rather be: The OO language allows me to create
> more secure, easier to maintain, more reusable software. It is usually
> much cheaper to stick more RAM into a machine or even buy a faster one,
> than spending countless hours on hand-optimizing code and maintaining it.


In one respect you are entirely correct, in another you are wrong;
performance does matter. I am of the opinion that if you can make a
program use fewer resources and work faster, then your employer or client
will save money and time. And if you add that up for an entire company,
region or country, you save a lot of money that could be spent
otherwise. (But that's just me talking, coming from 5 years in C exile,
but also frustrated by the meaningless waste of resources and the
corresponding attitude.) One thing that really annoys me is how
resource-hungry Java is, and I would like to do my bit to reduce the requirements.

But in any case, the issue here was more about comparing things for
background info.

/tom
 
Oliver Wong
03-15-2006
"tom fredriksen" <(E-Mail Removed)> wrote in message
news:(E-Mail Removed)...
>
> Jumps take time, even on branch-predicting machines such as x86.
> So I wanted to make a little test to see if there was any significant
> difference, and the test showed the complete opposite of my assumption.


You seem to be implying that there is a "jump" used in method calls, but
there is no "jump" used in for-loops.
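
For what it's worth, here is a rough illustrative sketch (not code from your
post; the class name is made up). Both loops compile to a compare-and-branch
on every iteration, and the nextInt() call is itself an invokevirtual, so
neither version is free of jumps; the second merely adds one extra call per
iteration:

import java.util.Random;

public class JumpIllustration
{
    private final Random rand = new Random();

    private int add(int sum)
    {
        return sum + rand.nextInt(10);
    }

    public static void main(String[] args)
    {
        JumpIllustration b = new JumpIllustration();
        int count = Integer.parseInt(args[0]);

        // "Plain loop": every iteration still ends with a conditional branch
        // back to the top of the loop, and rand.nextInt(10) is an
        // invokevirtual, i.e. a jump into the method's code, as well.
        int total1 = 0;
        for (int c = 0; c < count; c++) {
            total1 += b.rand.nextInt(10);
        }

        // "Method call" version: the same loop branches, plus one extra
        // invokevirtual for add() on each iteration.
        int total2 = 0;
        for (int c = 0; c < count; c++) {
            total2 = b.add(total2);
        }

        // Print the totals so the loops aren't dead code.
        System.out.println(total1 + " " + total2);
    }
}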

- Oliver

 
Timo Stamm
03-15-2006
tom fredriksen wrote:
> Timo Stamm wrote:
>>
>> The best argument against claims like the above shouldn't be about
>> performance. It should rather be: The OO language allows me to create
>> more secure, easier to maintain, more reusable software. It is usually
>> much cheaper to stick more RAM into a machine or even buy a faster
>> one, than spending countless hours on hand-optimizing code and
>> maintaining it.

>
> In one respect you are entirely correct, in another you are wrong;
> performance does matter.


I didn't say that. Of course performance matters.

You can easily find bottlenecks using a profiler. Sometimes it is
possible to make the program run 1000 times faster at this point with a
minor change in design that only takes a few hours.

To optimize the program to the very last processor cycle probably takes
weeks.


"There is no doubt that the grail of efficiency leads to abuse.
Programmers waste enormous amounts of time thinking about, or worrying
about, the speed of noncritical parts of their programs, and these
attempts at efficiency actually have a strong negative impact when
debugging and maintenance are considered. We should forget about small
efficiencies, say about 97% of the time: premature optimization is the
root of all evil."

Donald E. Knuth, "Structured Programming With Go To Statements,"
ACM Computing Surveys, Vol. 6, December 1974, p. 268.


> I am of the opinion that if you can make a
> program use fewer resources and work faster, then your employer or client
> will save money and time. And if you add that up for an entire company,
> region or country, you save a lot of money that could be spent
> otherwise. (But that's just me talking, coming from 5 years in C exile,
> but also frustrated by the meaningless waste of resources and the
> corresponding attitude.) One thing that really annoys me is how
> resource-hungry Java is, and I would like to do my bit to reduce the requirements.
>
> But in any case, the issue here was more about comparing things for
> background info.



"A good programmer will [...] be wise to look carefully at the critical
code; but only after that code has been identified. It is often a
mistake to make a priori judgments about what parts of a program are
really critical, since the universal experience of programmers who have
been using measurement tools has been that their intuitive guesses fail."

Same source as above.


Timo
 
tom fredriksen
03-15-2006
Timo Stamm wrote:
> tom fredriksen wrote:
>> Timo Stamm wrote:
>>>
>>> The best argument against claims like the above shouldn't be about
>>> performance. It should rather be: The OO language allows me to create
>>> more secure, easier to maintain, more reusable software. It is
>>> usually much cheaper to stick more RAM into a machine or even buy a
>>> faster one, than spending countless hours on hand-optimizing code and
>>> maintaining it.

>>
>> In one respect you are entirely correct, in another you are wrong;
>> performance does matter.

>
> I didn't say that. Of course performance matters.
>
> You can easily find bottlenecks using a profiler. Sometimes it is
> possible to make the program run 1000 times faster at this point with a
> minor change in design that only takes a few hours.
>
> To optimize the program to the very last processor cycle probably takes
> weeks.
>
>
> "There is no doubt that the grail of efficiency leads to abuse.
> Programmers waste enormous amounts of time thinking about, or worrying
> about, the speed of noncritical parts of their programs, and these
> attempts at efficiency actually have a strong negative impact when
> debugging and maintenance are considered. We should forget about small
> efficiencies, say about 97% of the time: premature optimization is the
> root of all evil."
>
> "A good programmer will [...] be wise to look carefully at the critical
> code; but only after that code has been identified. It is often a
> mistake to make a priori judgments about what parts of a program are
> really critical, since the universal experience of programmers who have
> been using measurement tools has been that their intuitive guesses fail."
>
> Donald E. Knuth, "Structured Programming With Go To Statements,"
> ACM Computing Surveys, Vol. 6, December 1974, p. 268.


I am fully aware of the consequences of early optimisation, and that the
algorithms are one of the most important issues when programming. But if one
only considers that, then one could claim that all languages are equally
efficient. And that is certainly not true; that is why operating systems are
programmed in C. If all code can be automatically shifted (through the
compiler) to a different compilation approach which improves machine
efficiency, then all the better.

In any case, those statements were made in 1974, when the community used
languages such as C, Fortran, Algol 68, Cobol etc. The first languages used,
other than machine code, were run in an interpreter only because the hardware
did not have support for floating-point operations, so it had to be simulated.
As soon as Fortran appeared, the interpreted-language idea almost disappeared.
So the statements must be somewhat modified to be entirely true today, not
forgetting that the languages used then did not have all the language
constructs we have today, or all the processor instructions we have today.

/tom
 