Question for the techos out there.

Discussion in 'NZ Computing' started by news.xtra.co.nz, Nov 23, 2005.

  1. It seems the largest performance increases in computing come from
    increasing the number of bits a processor can address.

    e.g. moving from 8-bit -> 16-bit -> 32-bit -> 64-bit computers.

    So why don't they just go straight to, say, 512-bit computers? Why bother
    with just a doubling?

    Obviously there are some technical limitations or they would have done this;
    just curious.
     
    news.xtra.co.nz, Nov 23, 2005
    #1
  2. news.xtra.co.nz

    thingy Guest

    news.xtra.co.nz wrote:
    > It seems, the largest performance increases in computing come from
    > increasing the number of bits a processor can address.
    >
    > e.g, moving from 8bit -> 16bit-> 32 -> 64bit computers.
    >
    > So, why don't they just go straight to say, 512bit computers? Why bother
    > with just a doubling?
    >
    > Obviously there are some technical limitations or they would have done this,
    > just curious.


    There used to be 8088 and 8086 CPUs: the 8086 was 16-bit both internally
    and externally, while the 8088 was 16-bit internally but only 8-bit
    externally, which made it a lot cheaper...Biggest difference I can see is
    memory address space, 32-bit has a 4 GB limit (though there are ways
    around it) I think, where 64-bit is way larger....

    Otherwise this is a really techy Q, see google, AMD had some info as well...

    I think it is extremely complex to try for 512-bit; maybe a massively
    shrunk die size is needed, with very wide data paths, and the number of
    transistors needed to do it would be correspondingly huge, I assume.....
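    [A quick sketch of where that 4 GB figure comes from - Python used purely
    for the arithmetic:]

    ```python
    # A 32-bit address can name 2**32 distinct bytes -- hence the
    # famous "4 gig" limit of 32-bit machines.
    print(2**32)           # 4294967296 bytes
    print(2**32 // 2**30)  # 4 GiB
    ```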

    regards

    Thing
     
    thingy, Nov 23, 2005
    #2

  3. news.xtra.co.nz

    Jerry Guest

    news.xtra.co.nz wrote:
    > It seems, the largest performance increases in computing come from
    > increasing the number of bits a processor can address.
    >
    > e.g, moving from 8bit -> 16bit-> 32 -> 64bit computers.
    >
    > So, why don't they just go straight to say, 512bit computers? Why bother
    > with just a doubling?
    >
    > Obviously there are some technical limitations or they would have done this,
    > just curious.


    On a 64-bit computer a program might want to use 1 byte (8 bits) of
    data, but the hardware has to fetch 8 bytes. Memory operations are also
    aligned on the appropriate boundary: a 64-bit computer fetches addresses
    whose first byte has a low-order hex digit of 0 or 8.
    So if you want to fetch 2 bytes (16 bits) at, say, addresses 1007 and
    1008, the memory actually has to fetch 16 bytes (128 bits), addresses
    1000 - 100f. Once you go beyond 64 bits, the inefficiencies make it not
    really worth doing.
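    [Jerry's alignment example, sketched in code - a hypothetical machine
    with an 8-byte-wide bus, Python just for illustration:]

    ```python
    WORD = 8  # bus width in bytes (hypothetical 64-bit machine)

    def words_fetched(start: int, length: int) -> list[int]:
        """Aligned base addresses the hardware must read to supply
        the bytes in [start, start + length)."""
        first = (start // WORD) * WORD                 # round down to boundary
        last = ((start + length - 1) // WORD) * WORD   # word holding last byte
        return list(range(first, last + WORD, WORD))

    # 2 bytes at 0x1007-0x1008 straddle a boundary, so two full
    # words (16 bytes, 0x1000 and 0x1008) come over the bus.
    print([hex(a) for a in words_fetched(0x1007, 2)])  # ['0x1000', '0x1008']
    ```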
     
    Jerry, Nov 23, 2005
    #3
  4. news.xtra.co.nz

    Shane Guest

    On Wed, 23 Nov 2005 20:50:30 +1300, news.xtra.co.nz wrote:

    > It seems, the largest performance increases in computing come from
    > increasing the number of bits a processor can address.
    >
    > e.g, moving from 8bit -> 16bit-> 32 -> 64bit computers.
    >
    > So, why don't they just go straight to say, 512bit computers? Why bother
    > with just a doubling?
    >
    > Obviously there are some technical limitations or they would have done this,
    > just curious.


    I'm taking a stab at this... and it's a crude one at best :)
    A 64-bit CPU is much more than double a 32-bit one; 2^64 is (2^32) squared,
    if that makes sense. With a 32-bit processor, doubling its address space
    just requires the addition of one bit: a 33-bit system.
    The easiest way to show that is:
    00
    01
    10
    11
    four patterns;
    to double that:
    000
    001
    010
    011
    100
    101
    110
    111

    and so on.
    So 64-bit is a _huge_ jump, and instruction sets for OSes / apps / register
    addresses / buses / etc have to catch up.
    According to Wikipedia, 64-bit CPUs have been around since 1991:
    http://en.wikipedia.org/wiki/64-bit
    In fact they have an easy-to-follow explanation of the goings-on.
    HTH (and has shown the upper limits of my meagre knowledge :\
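    [Shane's tables in two lines of Python, illustrative only:]

    ```python
    # Each extra bit doubles the number of patterns, exactly as the
    # tables above show:
    for bits in (2, 3, 4):
        print(bits, "bits ->", 2**bits, "patterns")

    # So 64-bit isn't double 32-bit; 2**64 is (2**32) squared:
    print(2**64 == (2**32) ** 2)  # True
    ```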


    --
    Hardware, n.: The parts of a computer system that can be kicked

    The best way to get the right answer on usenet is to post the wrong one.
     
    Shane, Nov 23, 2005
    #4
  5. news.xtra.co.nz

    Ron McNulty Guest

    There is a trade-off between the bus width and the number of pins on the
    chip.

    A CPU with a 64-bit address bus and a 64-bit data bus needs 128 pins just
    to access memory directly. Sure, you can multiplex signals onto a smaller
    physical bus, but this involves more external circuitry and generally
    slows things down. And PCB layout gets more difficult as the number of
    pins increases.

    Like it or not, most of a CPU's manipulation is on strings - ASCII or
    Unicode data (8 or 16 bit). Performance on this sort of data does not
    improve greatly as the data bus width increases.

    But there may be specific applications where the native size of the
    machine is significant. For example, doing a 64-bit by 32-bit divide on
    an 8-bit CPU is a hugely complex and slow operation. So high-end video
    operations (e.g. games) may benefit immensely from a wider bus.
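    [Ron's point about wide arithmetic on narrow CPUs, sketched with Python
    standing in for an 8-bit machine - even a plain 64-bit *addition* takes
    eight add-with-carry steps, and division takes far more:]

    ```python
    def add64_via_8bit(a: int, b: int) -> int:
        """Add two 64-bit numbers one byte at a time, the way an
        8-bit CPU must: eight adds, propagating the carry."""
        result, carry = 0, 0
        for byte in range(8):  # low byte first
            s = ((a >> 8*byte) & 0xFF) + ((b >> 8*byte) & 0xFF) + carry
            carry = s >> 8                      # carry into the next byte
            result |= (s & 0xFF) << (8 * byte)
        return result

    # Carry ripples through four bytes here:
    print(hex(add64_via_8bit(0xFFFFFFFF, 1)))  # 0x100000000
    ```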

    Regards

    Ron

    "news.xtra.co.nz" <> wrote in message
    news:7bVgf.3142$...
    > It seems, the largest performance increases in computing come from
    > increasing the number of bits a processor can address.
    >
    > e.g, moving from 8bit -> 16bit-> 32 -> 64bit computers.
    >
    > So, why don't they just go straight to say, 512bit computers? Why bother
    > with just a doubling?
    >
    > Obviously there are some technical limitations or they would have done
    > this, just curious.
     
    Ron McNulty, Nov 23, 2005
    #5
  6. news.xtra.co.nz

    steve Guest

    news.xtra.co.nz wrote:

    > Obviously there are some technical limitations or they would have done
    > this,


    There's your answer!

    They aren't able to do it yet..... :)
     
    steve, Nov 23, 2005
    #6
  7. news.xtra.co.nz

    AD. Guest

    On Wed, 23 Nov 2005 20:50:30 +1300, news.xtra.co.nz wrote:

    > It seems, the largest performance increases in computing come from
    > increasing the number of bits a processor can address.


    Nope. Unless other architectural improvements are made (eg the extra
    registers in AMD64), it generally has little or no effect, and the
    extra overhead can sometimes even hinder performance.

    Performance improvements mostly come from faster clock speeds and faster
    I/O buses. Although increasing the bit width of those buses helps with
    moving larger numbers around, most computing doesn't rely on large
    numbers; eg 64-bit maths is plenty accurate enough for all but the most
    demanding calculations.

    > e.g, moving from 8bit -> 16bit-> 32 -> 64bit computers.
    >
    > So, why don't they just go straight to say, 512bit computers? Why bother
    > with just a doubling?


    Well, 64-bit can address billions of GB. I think it will be enough for
    quite a while yet.

    Here's another way of opening your mind to the power (groan) of
    exponential numbers... If you could somehow manage to store a byte of
    data on a single atom, and then used all the atoms in the universe to
    store data that way, you could probably still address all that memory
    with roughly 256-bit addressing.

    Basically all a 512-bit processor would give you is a lot of overhead,
    and it would end up slower than current processors.
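    [The back-of-envelope arithmetic behind both claims, in Python and
    assuming the common ~10^80 estimate for atoms in the observable
    universe:]

    ```python
    import math

    # "Billions of GB": the full 64-bit address space, in GiB.
    print(2**64 // 2**30)                # 17179869184

    # "Atoms in the universe": ~10**80.  One byte per atom ->
    # how many address bits would you need?
    print(math.ceil(math.log2(10**80)))  # 266 -- so "roughly 256-bit"
                                         # is indeed the right ballpark
    ```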

    --
    Cheers
    Anton
     
    AD., Nov 23, 2005
    #7
  8. "AD." <> wrote in message
    news:p...
    > On Wed, 23 Nov 2005 20:50:30 +1300, news.xtra.co.nz wrote:
    >
    >> It seems, the largest performance increases in computing come from
    >> increasing the number of bits a processor can address.

    >
    > Nope. In most cases unless other architectural improvements are made (eg
    > extra registers in AMD64), generally it has little or no effect and the
    > extra overhead can sometimes even hinder performance.
    >
    > Performance improvements mostly come from faster clock speeds, faster I/O
    > busses. Although increasing the bit width of these busses help being able
    > to move larger numbers around, most computing doesn't rely on large
    > numberss. eg 64bit math is plenty accurate enough for all but the most
    > demanding calculations.
    >
    >> e.g, moving from 8bit -> 16bit-> 32 -> 64bit computers.
    >>
    >> So, why don't they just go straight to say, 512bit computers? Why bother
    >> with just a doubling?

    >
    > Well 64bit can address billions of GB. I think it will be enough for quite
    > a while yet.
    >
    > Here's another way of opening you mind to the power (groan) of
    > exponential numbers... If you could manage to somehow store a byte of data
    > on a single atom and then used all the atoms in the universe to store
    > data that way, then you could probably still address all that memory with
    > roughly 256bit addressing.
    >
    > Basically all a 512bit processor would give you is a lot of overhead, and
    > it would end up slower than current processors.
    >
    > --
    > Cheers
    > Anton


    So, given that we will probably never need to address more than billions
    of GB, is there any point going higher than 64 bits? Assuming the
    overheads are greater?
     
    news.xtra.co.nz, Nov 24, 2005
    #8
  9. news.xtra.co.nz

    AD. Guest

    On Thu, 24 Nov 2005 16:30:30 +1300, news.xtra.co.nz wrote:

    > So, given that we will probably never need to address more than billions
    > of GB, is there any point going higher than 64bits? Assuming the
    > overheads are greater?


    Maybe not with address space any time soon, but the 'bitness' of a
    processor isn't just its addressability. There are also bus widths and
    register sizes etc. Different chips over the years have mixed up these
    factors within the same chip.

    And today's 64-bit x86 chips can't actually address 64 bits of address
    space anyway. From (somewhat hazy) memory, the chips themselves can
    address 48 bits of memory while the chipsets limit it to 40 bits (or
    something).

    128-bit registers might one day be desirable for certain computational
    tasks. Hell, they might already be in use somewhere - I'm hardly up to
    date on this stuff.

    Just like any engineering effort there are compromises everywhere, and the
    skill of the engineer is in optimising the balance of all those
    compromises to get the best result.
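    [For scale, the widths AD half-remembers - exact figures vary by CPU
    generation, so these are just the powers of two:]

    ```python
    # How much memory each address width can name, in TiB:
    for bits in (40, 48, 64):
        print(f"{bits}-bit -> {2**bits // 2**40:>8} TiB")
    ```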

    --
    Cheers
    Anton
     
    AD., Nov 24, 2005
    #9
  11. news.xtra.co.nz

    Gordon Guest

    On Thu, 24 Nov 2005 08:38:13 +1300, steve wrote:

    > news.xtra.co.nz wrote:
    >
    >> Obviously there are some technical limitations or they would have done
    >> this,

    >
    > There's your answer!
    >
    > They aren't able to do it yet..... :)


    Forget the smiley, it makes your succinct post bloated.
     
    Gordon, Nov 24, 2005
    #11
  12. news.xtra.co.nz

    Enkidu Guest

    news.xtra.co.nz wrote:
    >
    > It seems, the largest performance increases in computing come from
    > increasing the number of bits a processor can address.
    >

    No, the largest performance increase in computing comes from writing
    more efficient programs.

    Cheers,

    Cliff
     
    Enkidu, Nov 24, 2005
    #12
  13. news.xtra.co.nz

    steve Guest

    Jerry wrote:

    > The days of efficient programing are only memories by some
    > no longer very young people.


    .....because it takes so long to write anything in machine code.

    Assembler was a HUGE improvement over that......

    ....and so on.
     
    steve, Nov 24, 2005
    #13
  14. news.xtra.co.nz

    Jerry Guest

    Enkidu wrote:
    > news.xtra.co.nz wrote:
    > >

    >
    >> It seems, the largest performance increases in computing come from
    >> increasing the number of bits a processor can address.
    >>

    > No, the largest performance increase in computing comes from writing
    > more efficient programs.


    That doesn't happen, though; with hardware improvements comes more
    inefficient programming. Remember when an IBM 360/30 with 32KB of
    memory, four 29MB disk drives, eight tape units, a printer and a card
    reader/punch kept a full staff busy and did all the computing for a
    pretty good-sized company? All the applications would have been written
    in assembler. The days of efficient programming are only memories held
    by some no-longer-very-young people.
     
    Jerry, Nov 24, 2005
    #14
  15. news.xtra.co.nz

    Enkidu Guest

    Jerry wrote:
    > Enkidu wrote:
    >
    >> news.xtra.co.nz wrote:
    >>
    >>> It seems, the largest performance increases in computing come from
    >>> increasing the number of bits a processor can address.
    >>>

    >> No, the largest performance increase in computing comes from writing
    >> more efficient programs.

    >
    > That doesn't happen though, with hardware improvements come more
    > inefficient programming. Remember when an IBM 360/30 with 32KB of
    > memory, 4 29MB Disk drives, 8 tape units a printer and a card read punch
    > kept a full staff busy and did all the computing for a pretty good
    > sized company. All the applications would have been written in
    > assembler. The days of efficient programing are only memories by some
    > no longer very young people.
    >

    No, we wrote programs mainly in Cobol and no attempt was usually made to
    be efficient. I wrote many assembler programs, but not as part of any
    application suite. Usually as 'exits' for systems programs.

    I don't think that what you say is true, and I was a Mainframe Systems
    Programmer for a looong time.

    A 360/30 was pretty small even for those days!

    Cheers,

    Cliff
     
    Enkidu, Nov 24, 2005
    #15
  16. news.xtra.co.nz

    Jerry Guest

    Enkidu wrote:
    > Jerry wrote:
    >
    >> Enkidu wrote:
    >>
    >>> news.xtra.co.nz wrote:
    >>>
    >>>> It seems, the largest performance increases in computing come from
    >>>> increasing the number of bits a processor can address.
    >>>>
    >>> No, the largest performance increase in computing comes from writing
    >>> more efficient programs.

    >>
    >>
    >> That doesn't happen though, with hardware improvements come more
    >> inefficient programming. Remember when an IBM 360/30 with 32KB of
    >> memory, 4 29MB Disk drives, 8 tape units a printer and a card read
    >> punch kept a full staff busy and did all the computing for a pretty
    >> good sized company. All the applications would have been written in
    >> assembler. The days of efficient programing are only memories by some
    >> no longer very young people.


    > No, we wrote programs mainly in Cobol and no attempt was usually made to
    > be efficient. I wrote many assembler programs, but not as part of any
    > application suite. Usually as 'exits' for systems programs.
    >
    > I don't think that what you say is true, and I was a Mainframe Systems
    > Programmer for a looong time.
    >
    > A 360/30 was pretty small even for those days!


    There were a lot of 360s in use until the early 80s. Granted, the small
    iron like the 30 tended to go away first, but it was the 4300 series
    that finally pushed the last of the 360s out the door. I was in the US
    until 1981, and there were some fairly good-sized companies doing a lot
    of work on 360/30s, which had a max of 64KB of memory.

    There was no dynamic address translation on the 360s (except the model
    67), so especially on the small ones programs needed to be pretty
    efficient. Think of memory at nearly $1 US per byte, when you could buy
    a new car for $2000.

    My point is that as resources get cheaper, programs get less efficient.
    Have you seen a trend for programs to be more efficient as speeds
    get faster and memory more plentiful?

    Jerry
     
    Jerry, Nov 25, 2005
    #16
  17. news.xtra.co.nz

    Enkidu Guest

    Jerry wrote:
    >>
    >> No, we wrote programs mainly in Cobol and no attempt was usually made
    >> to be efficient. I wrote many assembler programs, but not as part of
    >> any application suite. Usually as 'exits' for systems programs.
    >>
    >> I don't think that what you say is true, and I was a Mainframe Systems
    >> Programmer for a looong time.
    >>
    >> A 360/30 was pretty small even for those days!

    >
    > There were a lot of 360s in use until the early 80s. Granted the samll
    > iron like the 30 tended to go away first, but it was the 4300 series
    > that finally pushed the last of the 360s out the door. I was in the US
    > until 1981, but there were some fairly good sized companies doing a lot
    > of work on 360/30s, which had a max of 64KB of memory.
    >

    Mmm, most of the 360s went when the 370s came in. 4300s were always the
    babies of the bunch.
    >
    > There was no dynamic address translation on the 360s (except the model
    > 67) so especially using the small ones programs need to be pretty
    > efficient. Think of memory at nearly $1 US per byte when you could buy
    > a new car for $2000.
    >
    > My point is that as resources get cheaper, programs get less efficient.
    > have you seen a trend for programmes to be more efficient as speeds get
    > faster and memory more plentiful?
    >

    No, I don't agree. I'd say it was about the same. The difference is
    that resources are now so cheap that it has been easier to throw more
    hardware at a problem than to reprogram it. That trend is reversing,
    leading to more emphasis on efficiency. I think we may be on a cusp,
    and we will see courses on making programs more efficient.

    However, even years ago, when I went on an IBM tuning course, we spent a
    week gaining percentage points of performance by tuning the system. Then
    we made a minor change to the program (as I recall, opening the file
    once instead of before each read(!)) and performance went up by 70%!

    That's what I was getting at.
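    [A toy reconstruction of that fix - Python, purely illustrative, since
    the original was a mainframe program: reopening a file for every record
    versus opening it once.]

    ```python
    import os, tempfile, time

    # Build a small test file of 2000 records.
    fd, path = tempfile.mkstemp()
    os.close(fd)
    with open(path, "w") as f:
        f.writelines(f"record {i}\n" for i in range(2000))

    def open_per_read(n):          # the "before": open for every record
        out = []
        for i in range(n):
            with open(path) as f:
                out.append(f.readlines()[i])
        return out

    def open_once(n):              # the "after": open once, read through
        with open(path) as f:
            return f.readlines()[:n]

    t0 = time.perf_counter(); a = open_per_read(500)
    t1 = time.perf_counter(); b = open_once(500)
    t2 = time.perf_counter()
    assert a == b                  # same records either way
    print(f"open-per-read: {t1-t0:.4f}s  open-once: {t2-t1:.4f}s")
    os.remove(path)
    ```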

    Cheers,

    Cliff
     
    Enkidu, Nov 25, 2005
    #17
  18. news.xtra.co.nz

    Jerry Guest

    Enkidu wrote:
    > Jerry wrote:
    >
    >>>
    >>> No, we wrote programs mainly in Cobol and no attempt was usually made
    >>> to be efficient. I wrote many assembler programs, but not as part of
    >>> any application suite. Usually as 'exits' for systems programs.
    >>>
    >>> I don't think that what you say is true, and I was a Mainframe
    >>> Systems Programmer for a looong time.
    >>>
    >>> A 360/30 was pretty small even for those days!

    >>
    >>
    >> There were a lot of 360s in use until the early 80s. Granted the
    >> samll iron like the 30 tended to go away first, but it was the 4300
    >> series that finally pushed the last of the 360s out the door. I was
    >> in the US until 1981, but there were some fairly good sized companies
    >> doing a lot of work on 360/30s, which had a max of 64KB of memory.
    >>

    > Mmm, most of the 360s went when the 370s came in. 4300s were always the
    > babies of the bunch.

    The only 360s that the 370 killed were on lease from IBM. I was working
    in the states through the 70s, for a broker and then a 3rd party
    maintenance company. There was a healthy market for used 360s until the
    4331 and 4341 came out. I never did an uninstall-for-scrap to replace a
    360 with a 370. The 4300s were the small to mid range computers, but a
    4341 had more grunt than a 360/65, and cost less to run. It was the
    4300 that killed both the 360 and the smaller 370s. You will remember
    that a 4341 killed the 370/145 at NZDB.

    >
    >> There was no dynamic address translation on the 360s (except the model
    >> 67) so especially using the small ones programs need to be pretty
    >> efficient. Think of memory at nearly $1 US per byte when you could
    >> buy a new car for $2000.
    >>
    >> My point is that as resources get cheaper, programs get less
    >> efficient. have you seen a trend for programmes to be more efficient
    >> as speeds get faster and memory more plentiful?
    >>

    > No, I don't agree. I'd say that it was about the same. The difference is
    > that the resources are so cheap that it has been easier to throw more
    > hardware at it than reprogram it. That trend is reversing, leading to
    > more emphasis on efficiency. I think that we may be on a cusp, and we
    > will see courses on making programs more efficient.
    >
    > However, even years ago, when I went on an IBM tuning course, we spent a
    > week gaining %age points in performance by tuning the system. We made a
    > minor change to the program (as I recall, only opening the file once,
    > instead of before each read(!)) and performance went up by 70%!
    >
    > That's what I was getting at.


    OK, point taken.

    Jerry
     
    Jerry, Nov 25, 2005
    #18
