Velocity Reviews


bob smith 07-05-2012 03:01 PM

Java processors
 
What ever happened to those processors that were supposed to run Java natively?

Did Sun or anyone else ever make those?

Eric Sosman 07-05-2012 03:28 PM

Re: Java processors
 
On 7/5/2012 11:01 AM, bob smith wrote:
> What ever happened to those processors that were supposed to run Java natively?
>
> Did Sun or anyone else ever make those?


http://en.wikipedia.org/wiki/Java_processor

(If you need help clicking links, just ask.)


--
Eric Sosman
esosman@ieee-dot-org.invalid



BGB 07-05-2012 06:00 PM

Re: Java processors
 
On 7/5/2012 10:28 AM, Eric Sosman wrote:
> On 7/5/2012 11:01 AM, bob smith wrote:
>> What ever happened to those processors that were supposed to run Java
>> natively?
>>
>> Did Sun or anyone else ever make those?

>
> http://en.wikipedia.org/wiki/Java_processor
>
> (If you need help clicking links, just ask.)
>


and, of those, AFAIK, ARM's Jazelle was the only one to gain much
widespread adoption, and even then it is largely being phased out in
favor of ThumbEE, where the idea is that a lightweight JIT or similar
is used instead of direct execution.

part of the issue I think is that there isn't really all that much
practical incentive to run Java bytecode directly on a CPU, since if
similar (or better) results can be gained by using a JIT to another ISA,
why not use that instead?


this is a merit of having a bytecode which is sufficiently abstracted
from the underlying hardware such that it can be efficiently targeted to
a variety of physical processors.

this is in contrast to a "real" CPU ISA, which tends to expose enough
internal workings that efficient implementation on different CPU
architectures is problematic (say: differences in endianness, support
for unaligned reads, different ways of handling arithmetic status
conditions, ...). in such a case, conversion from one ISA to another
may come with a potentially significant performance hit.

whereas if this issue does not really apply, or the output of the JIT
will potentially even execute faster than it would via direct execution
of the ISA by hardware (say, because the JIT can do much more advanced
optimizations or map the code onto a different and more efficient
execution model, such as transforming the stack-oriented code into
register-based machine code), then there is much less merit to the use
of direct execution.
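
as a small illustration of that last point (a sketch only: the class
name is made up, the bytecode comment is what "javap -c" shows for such
a method, and the assembly comment is merely indicative of what a JIT
for a register ISA might emit, not exact HotSpot output):

public class AddExample {
    static int add(int a, int b) {
        return a + b;
    }

    // Stack-oriented bytecode, as shown by "javap -c AddExample":
    //   iload_0    // push a
    //   iload_1    // push b
    //   iadd       // pop both, push the sum
    //   ireturn    // pop the sum and return it
    //
    // A JIT targeting a register ISA can map the same method onto
    // registers with no operand-stack traffic at all, roughly:
    //   mov eax, <register holding a>
    //   add eax, <register holding b>
    //   ret

    public static void main(String[] args) {
        System.out.println(add(2, 3));   // prints 5
    }
}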


Eric Sosman 07-05-2012 06:31 PM

Re: Java processors
 
On 7/5/2012 2:00 PM, BGB wrote:
> On 7/5/2012 10:28 AM, Eric Sosman wrote:
>> On 7/5/2012 11:01 AM, bob smith wrote:
>>> What ever happened to those processors that were supposed to run Java
>>> natively?
>>>
>>> Did Sun or anyone else ever make those?

>>
>> http://en.wikipedia.org/wiki/Java_processor
>>
>> (If you need help clicking links, just ask.)
>>

>
> and, of those, AFAIK, ARM's Jazelle was the only one to gain much
> widespread adoption, and even then it is largely being phased out in
> favor of ThumbEE, where the idea is that a lightweight JIT or similar
> is used instead of direct execution.
>
> part of the issue I think is that there isn't really all that much
> practical incentive to run Java bytecode directly on a CPU, since if
> similar (or better) results can be gained by using a JIT to another ISA,
> why not use that instead?
>
>
> this is a merit of having a bytecode which is sufficiently abstracted
> from the underlying hardware such that it can be efficiently targeted to
> a variety of physical processors.
>
> this is in contrast to a "real" CPU ISA, which tends to expose enough
> internal workings that efficient implementation on different CPU
> architectures is problematic (say: differences in endianness, support
> for unaligned reads, different ways of handling arithmetic status
> conditions, ...). in such a case, conversion from one ISA to another
> may come with a potentially significant performance hit.
>
> whereas if this issue does not really apply, or the output of the JIT
> will potentially even execute faster than it would via direct execution
> of the ISA by hardware (say, because the JIT can do much more advanced
> optimizations or map the code onto a different and more efficient
> execution model, such as transforming the stack-oriented code into
> register-based machine code), then there is much less merit to the use
> of direct execution.


In principle, a JIT could do better optimization than a
traditional compiler because it has more information available.
For example, a JIT can know what classes are actually loaded in
the JVM and take shortcuts like replacing getters and setters with
direct access to the underlying members. A JIT can gather profile
information from a few interpreted executions and use the data to
guide the eventual realization in machine code. Basically, a JIT
can know what the environment actually *is*, while a pre-execution
compiler must produce code for every possible environment.
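
A small sketch of the getter case (the class and method names are made
up for illustration; the comments describe what a HotSpot-style JIT is
allowed to do once it sees which classes are loaded, not what javac
emits):

public class Point {
    private int x;

    public Point(int x) { this.x = x; }

    // In the source (and in the bytecode javac emits) this is a
    // virtual method call.
    public int getX() {
        return x;
    }

    static long sumX(Point[] points) {
        long sum = 0;
        for (Point p : points) {
            // A pre-execution compiler must keep this as a real call,
            // since some other environment might load a Point subclass
            // that overrides getX(). A JIT, knowing the classes actually
            // loaded in this JVM, can inline it to roughly: sum += p.x;
            sum += p.getX();
        }
        return sum;
    }

    public static void main(String[] args) {
        Point[] points = { new Point(1), new Point(2), new Point(3) };
        System.out.println(sumX(points));   // prints 6
    }
}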

On the other hand, a former colleague of mine once observed
that "Just-In-Time" is in fact a misnomer: it's a "Just-Too-Late"
compiler because it doesn't even start work until after you need
its output! Even if the JIT generates code better optimized for
the current circumstances than a pre-execution compiler could,
the JIT's code starts later. Does Achilles catch the tortoise?

--
Eric Sosman
esosman@ieee-dot-org.invalid



Jim Janney 07-05-2012 07:02 PM

Re: Java processors
 
BGB <cr88192@hotmail.com> writes:

> On 7/5/2012 10:28 AM, Eric Sosman wrote:
>> On 7/5/2012 11:01 AM, bob smith wrote:
>>> What ever happened to those processors that were supposed to run Java
>>> natively?
>>>
>>> Did Sun or anyone else ever make those?

>>
>> http://en.wikipedia.org/wiki/Java_processor
>>
>> (If you need help clicking links, just ask.)
>>

>
> and, of those, AFAIK, ARM's Jazelle was the only one to gain much
> widespread adoption, and even then it is largely being phased out in
> favor of ThumbEE, where the idea is that a lightweight JIT or similar
> is used instead of direct execution.
>
> part of the issue I think is that there isn't really all that much
> practical incentive to run Java bytecode directly on a CPU, since if
> similar (or better) results can be gained by using a JIT to another
> ISA, why not use that instead?


The cost of entry into CPU manufacturing is far from cheap, and once
you're in it's anything but a level playing field. Intel has an
enormous advantage due to the amount of money it can plow into improving
its manufacturing processes. And the demand for a system that can only
run JVM-based software is relatively limited.

Back in the day Niklaus Wirth had a system that was optimised for
running Modula-2, with its own processor and operating system written in
Modula-2. I don't remember now what it was called.

--
Jim Janney

BGB 07-05-2012 09:09 PM

Re: Java processors
 
On 7/5/2012 2:02 PM, Jim Janney wrote:
> BGB <cr88192@hotmail.com> writes:
>
>> On 7/5/2012 10:28 AM, Eric Sosman wrote:
>>> On 7/5/2012 11:01 AM, bob smith wrote:
>>>> What ever happened to those processors that were supposed to run Java
>>>> natively?
>>>>
>>>> Did Sun or anyone else ever make those?
>>>
>>> http://en.wikipedia.org/wiki/Java_processor
>>>
>>> (If you need help clicking links, just ask.)
>>>

>>
>> and, of those, AFAIK, ARM's Jazelle was the only one to gain much
>> widespread adoption, and even then it is largely being phased out in
>> favor of ThumbEE, where the idea is that a lightweight JIT or similar
>> is used instead of direct execution.
>>
>> part of the issue I think is that there isn't really all that much
>> practical incentive to run Java bytecode directly on a CPU, since if
>> similar (or better) results can be gained by using a JIT to another
>> ISA, why not use that instead?

>
> The cost of entry into CPU manufacturing is far from cheap, and once
> you're in it's anything but a level playing field. Intel has an
> enormous advantage due to the amount of money it can plow into improving
> its manufacturing processes. And the demand for a system that can only
> run JVM-based software is relatively limited.
>
> Back in the day Niklaus Wirth had a system that was optimised for
> running Modula-2, with its own processor and operating system written in
> Modula-2. I don't remember now what it was called.
>


yes, but ARM already had direct JBC (Java bytecode) execution in the
form of Jazelle, which it is now phasing out in favor of ThumbEE, which
is a JIT-based strategy.

I suspect this is telling: even when one *can* directly execute the
bytecode on raw hardware, does it actually buy enough to make it
worthwhile?

these occurrences imply a few things: Java is a fairly big thing on ARM,
and even so it was likely either not sufficiently performant or not
sufficiently cost-effective to keep direct execution, leading to a
fallback strategy of adding extensions to ease JIT compiler output.


yes, on x86 targets, it is a much harder sell.


BGB 07-05-2012 09:42 PM

Re: Java processors
 
On 7/5/2012 1:31 PM, Eric Sosman wrote:
> On 7/5/2012 2:00 PM, BGB wrote:
>> On 7/5/2012 10:28 AM, Eric Sosman wrote:
>>> On 7/5/2012 11:01 AM, bob smith wrote:
>>>> What ever happened to those processors that were supposed to run Java
>>>> natively?
>>>>
>>>> Did Sun or anyone else ever make those?
>>>
>>> http://en.wikipedia.org/wiki/Java_processor
>>>
>>> (If you need help clicking links, just ask.)
>>>

>>
>> and, of those, AFAIK, ARM's Jazelle was the only one to gain much
>> widespread adoption, and even then it is largely being phased out in
>> favor of ThumbEE, where the idea is that a lightweight JIT or similar
>> is used instead of direct execution.
>>
>> part of the issue I think is that there isn't really all that much
>> practical incentive to run Java bytecode directly on a CPU, since if
>> similar (or better) results can be gained by using a JIT to another ISA,
>> why not use that instead?
>>
>>
>> this is a merit of having a bytecode which is sufficiently abstracted
>> from the underlying hardware such that it can be efficiently targeted to
>> a variety of physical processors.
>>
>> this is in contrast to a "real" CPU ISA, which tends to expose enough
>> internal workings that efficient implementation on different CPU
>> architectures is problematic (say: differences in endianness, support
>> for unaligned reads, different ways of handling arithmetic status
>> conditions, ...). in such a case, conversion from one ISA to another
>> may come with a potentially significant performance hit.
>>
>> whereas if this issue does not really apply, or the output of the JIT
>> will potentially even execute faster than it would via direct execution
>> of the ISA by hardware (say, because the JIT can do much more advanced
>> optimizations or map the code onto a different and more efficient
>> execution model, such as transforming the stack-oriented code into
>> register-based machine code), then there is much less merit to the use
>> of direct execution.

>
> In principle, a JIT could do better optimization than a
> traditional compiler because it has more information available.
> For example, a JIT can know what classes are actually loaded in
> the JVM and take shortcuts like replacing getters and setters with
> direct access to the underlying members. A JIT can gather profile
> information from a few interpreted executions and use the data to
> guide the eventual realization in machine code. Basically, a JIT
> can know what the environment actually *is*, while a pre-execution
> compiler must produce code for every possible environment.
>


well, yes, but it isn't clear how this is directly related (the
comparison was JIT vs raw hardware support, rather than JIT vs
compilation in advance).

a limiting factor for JITs and their optimizations is that they often
have a much smaller time window, and so are limited mostly to
optimizations which can themselves be performed fairly quickly.


FWIW though, there is also AOT compilation, which can likewise optimize
for a specific piece of hardware, but avoids much of the initial delay
of a JIT by compiling in advance (or on first execution, so the first
run of the app takes longer to start up, but later runs start much
faster).

yes, there are a lot of tradeoffs; for example, AOT will generally not
be able to make decisions informed by profiler output, since it does
not have that information available.


> On the other hand, a former colleague of mine once observed
> that "Just-In-Time" is in fact a misnomer: it's a "Just-Too-Late"
> compiler because it doesn't even start work until after you need
> its output! Even if the JIT generates code better optimized for
> the current circumstances than a pre-execution compiler could,
> the JIT's code starts later. Does Achilles catch the tortoise?
>


yeah.

even then, there may be other levels of tradeoffs, such as, whether to
do full compilation, or merely spit out some threaded code and run that.

the full compilation then is much more complicated (more complex JIT),
and also slower (since now the JIT needs to worry about things like
type-analysis, register allocation, ...), whereas a simpler strategy,
like spitting out a bunch of function calls and maybe a few basic
machine-code fragments, is much faster and simpler (the translation can
be triggered by trying to call a function, without too many adverse
effects on execution time, and will tend to only translate parts of the
program or library which are actually executed).
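
a toy sketch of the "bunch of function calls" idea (the opcodes and the
stack layout here are made up purely for illustration, and are not taken
from any real VM):

import java.util.ArrayList;
import java.util.List;

public class ThreadedCodeSketch {
    // Each bytecode is translated once into a small op object; running
    // the program is then just walking the list and calling each op.
    interface Op { void run(int[] stack, int[] sp); }

    // Hypothetical opcodes: 0 = push next operand, 1 = add, 2 = print.
    static List<Op> translate(int[] bytecode) {
        List<Op> ops = new ArrayList<>();
        for (int i = 0; i < bytecode.length; i++) {
            switch (bytecode[i]) {
                case 0: {
                    final int value = bytecode[++i];
                    ops.add((stack, sp) -> stack[sp[0]++] = value);
                    break;
                }
                case 1:
                    ops.add((stack, sp) -> { sp[0]--; stack[sp[0] - 1] += stack[sp[0]]; });
                    break;
                case 2:
                    ops.add((stack, sp) -> System.out.println(stack[--sp[0]]));
                    break;
                default:
                    throw new IllegalArgumentException("bad opcode");
            }
        }
        return ops;
    }

    public static void main(String[] args) {
        int[] bytecode = { 0, 2, 0, 3, 1, 2 };   // push 2, push 3, add, print
        List<Op> ops = translate(bytecode);      // translation happens once
        int[] stack = new int[16];
        int[] sp = { 0 };
        for (Op op : ops) op.run(stack, sp);     // execution is just calls
    }
}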


some of this can influence VM design as well (going technically OT here):
in my VM it led to the use of explicit type-tagging (via prefixes),
partly because the bytecode isn't directly executed anyway (it is merely
translated to threaded code by this point). the "original plan" of using
type-inference and flow-analysis in the JIT backend was just too much
effort to bother with for the more "trivial" threaded-code backend, so I
instead ended up migrating a lot of this logic to the VM frontend and
using prefixes to indicate types.

I still call the threaded-code execution "interpretation" though, partly
because it is a gray area: from what I can gather, such a thing is still
called an interpreter even when it no longer bases its execution on
direct interpretation of bytecode or similar.

but, the threaded-code is at least sufficiently fast to lessen the
immediate need for the effort of implementing a more proper JIT compiler.


or such...

Jan Burse 07-05-2012 11:29 PM

Re: Java processors
 
Jim Janney schrieb:
> Back in the day Niklaus Wirth had a system that was optimised for
> running Modula-2, with its own processor and operating system written in
> Modula-2. I don't remember now what it was called.


Do you mean?
http://en.wikipedia.org/wiki/Lilith_%28computer%29



Arne Vajhøj 07-06-2012 12:30 AM

Re: Java processors
 
On 7/5/2012 2:31 PM, Eric Sosman wrote:
> On the other hand, a former colleague of mine once observed
> that "Just-In-Time" is in fact a misnomer: it's a "Just-Too-Late"
> compiler because it doesn't even start work until after you need
> its output! Even if the JIT generates code better optimized for
> the current circumstances than a pre-execution compiler could,
> the JIT's code starts later. Does Achilles catch the tortoise?


It is my impression that modern JVMs, especially with -server
(or the equivalent), are rather aggressive about JIT compiling.

.NET's CLR always compiles on first use, I believe.
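
A quick way to see this for oneself (the flags are standard HotSpot
options; the class is just a toy hot loop made up for illustration):

// Run with:  java -server -XX:+PrintCompilation HotLoop
// HotSpot then logs each method it JIT-compiles; square() should show
// up once the loop has executed it enough times to be considered hot.
public class HotLoop {
    static int square(int x) {
        return x * x;
    }

    public static void main(String[] args) {
        long sum = 0;
        for (int i = 0; i < 1000000; i++) {
            sum += square(i);
        }
        System.out.println(sum);
    }
}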

Arne





Martin Gregorie 07-06-2012 12:42 AM

Re: Java processors
 
On Fri, 06 Jul 2012 01:29:51 +0200, Jan Burse wrote:

> Jim Janney schrieb:
>> Back in the day Niklaus Wirth had a system that was optimised for
>> running Modula-2, with its own processor and operating system written
>> in Modula-2. I don't remember now what it was called.

>
> Do you mean? http://en.wikipedia.org/wiki/Lilith_%28computer%29


Well spotted.

IIRC that was roughly contemporary with the Burroughs x700 systems, which
had an interesting take on virtualisation: their MCP OS ran each user
program in a VM that supported the conceptual model used by its
programming language. So a FORTRAN program ran in a word-addressed VM
with a register set, COBOL ran in a byte-addressed VM (also with
registers), while Algol/Pascal/C (had C existed at the time) ran in a
stack-based VM, all using instruction sets that suited that programming
model. Unfortunately I never got to play with that kit, but I wish I had
known more about it because it was well ahead of its time.

The nearest I got to that, somewhat later, was a 2966 running 1900
programs (24-bit word-addressed, register-based, 6-bit ISO characters)
under George 3 simultaneously with native programs (byte-addressed,
stack-based, 8-bit EBCDIC characters) under VME/B.

IMHO the 2966 trick of hosting a VM per OS with appropriate microcode was
neat, but was blown away by the Burroughs trick of spinning up the
appropriate VM for each application program and controlling the lot from
the same OS. IBM's OS/400 could do this to run S/36 software on an AS/400
but I don't know of anything else that comes close.


--
martin@ | Martin Gregorie
gregorie. | Essex, UK
org |

