Velocity Reviews - Computer Hardware Reviews

Velocity Reviews > Newsgroups > Programming > Java > Odd performance difference between -client and -server

Odd performance difference between -client and -server

 
 
Remon van Vliet
Guest
 
      06-26-2005
Hello,

I've run into an odd performance difference between the client and the
server VM. I made a few classes for 3D math and such, and here are two
versions of a scale method:

public static final RTVector scale(RTVector v, double s) {

    /* Scale vector and return result */
    return new RTVector(v.x * s, v.y * s, v.z * s);
}

public static final RTVector scale(RTVector r, RTVector v, double s) {

    /* Scale vector */
    r.set(v.x * s, v.y * s, v.z * s);

    /* Return result vector */
    return r;
}

As you can see, one creates a new vector and returns the result; the other
sets the result in a third vector that's passed to the method. The latter
version should be faster since it doesn't create a new object (note that I
made sure the test isn't creating a new object each iteration either). Now,
for the server VM (-server) all works as expected, for 10000000 runs:

option1 : 0.188s
option2 : 0.032s

The client VM however :

option1 : 0.579s
option2 : 9.547s

As you can see, the server VM is way faster for this, which is expected
behavior. What is odd to me is that the option where no new objects are
created is actually a factor of 20 slower on the client VM. Does anyone have
an explanation for this? Note that the only difference between these tests is
the VM command line argument -client/-server.
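For what it's worth, a stripped-down sketch of the kind of test I'm running (not my exact code; this simplified RTVector is just for illustration):

```java
/* Simplified stand-in for the vector class under test */
final class RTVector {
    double x, y, z;
    RTVector(double x, double y, double z) { this.x = x; this.y = y; this.z = z; }
    void set(double x, double y, double z) { this.x = x; this.y = y; this.z = z; }
}

public class ScaleBench {
    /* Option 1: allocate a new result vector */
    static RTVector scale(RTVector v, double s) {
        return new RTVector(v.x * s, v.y * s, v.z * s);
    }

    /* Option 2: write the result into a caller-supplied vector */
    static RTVector scale(RTVector r, RTVector v, double s) {
        r.set(v.x * s, v.y * s, v.z * s);
        return r;
    }

    public static void main(String[] args) {
        RTVector v = new RTVector(1, 2, 3);
        RTVector r = new RTVector(0, 0, 0);
        int runs = 10000000;

        long t0 = System.nanoTime();
        for (int i = 0; i < runs; i++) scale(v, 1.0001);
        long t1 = System.nanoTime();
        for (int i = 0; i < runs; i++) scale(r, v, 1.0001);
        long t2 = System.nanoTime();

        System.out.println("option1 : " + (t1 - t0) / 1e9 + "s");
        System.out.println("option2 : " + (t2 - t1) / 1e9 + "s");
    }
}
```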

Hope someone can shed some light on this,

Remon van Vliet


 
Skip
Guest
 
      06-26-2005
"Remon van Vliet" <(E-Mail Removed)> wrote in message
news:42be96fb$0$38464$(E-Mail Removed)4all.nl...
> [original post snipped]


Welcome to the world of micro-benchmarking.

The JVM is smarter than you think. It might notice that the new object in
option 1 is never used and eliminate that part of the code entirely,
leaving the loop doing 'nothing', which is, well, extremely fast. In
option 2 the JVM has no such opportunity to optimize away (say: completely
remove) the code, because the result is actually used, unlike in option 1.

So basically:
option 1 does nothing
option 2 does what it is supposed to do

Further, the server VM is really, really smart, and might notice that your
micro-benchmark produces no usable result in either case, and thus do
nothing at all.
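One common trick to keep the JIT from throwing the work away is to fold every result into a value the program actually prints. Something like this sketch (the names and the checksum idea are mine, not from your code):

```java
public class ConsumeResults {
    /* Simplified stand-in for the vector class under test */
    static final class RTVector {
        double x, y, z;
        RTVector(double x, double y, double z) { this.x = x; this.y = y; this.z = z; }
    }

    static RTVector scale(RTVector v, double s) {
        return new RTVector(v.x * s, v.y * s, v.z * s);
    }

    public static void main(String[] args) {
        RTVector v = new RTVector(1, 2, 3);
        double checksum = 0;
        long t0 = System.nanoTime();
        for (int i = 0; i < 10000000; i++) {
            /* Consuming each result means the allocation and the
               multiply cannot be optimized away as dead code */
            checksum += scale(v, 1.0001).x;
        }
        long t1 = System.nanoTime();
        /* Printing the checksum makes the loop's work observable */
        System.out.println("checksum = " + checksum
                + ", took " + (t1 - t0) / 1e9 + "s");
    }
}
```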


Could you please show the micro-benchmark code?


 