Newbie question: accessing global variable on multiprocessor

 
 
Flash Gordon
 
      01-16-2010
BGB / cr88192 wrote:
> "Flash Gordon" <(E-Mail Removed)> wrote in message
> news:(E-Mail Removed)-gordon.me.uk...
>> BGB / cr88192 wrote:
>>> "Flash Gordon" <(E-Mail Removed)> wrote in message
>>> news:(E-Mail Removed)-gordon.me.uk...
>>>> Nobody wrote:
>>>>> On Sun, 10 Jan 2010 11:06:45 +0000, Flash Gordon wrote:
>>>> <snip>
>>>>
>>>>>>> And even if the compiler provides the "assumed" semantics for
>>>>>>> "volatile"
>>>>>>> (i.e. it emits object code in which read/write of volatile variables
>>>>>>> occurs in the "expected" order), that doesn't guarantee that the
>>>>>>> processor
>>>>>>> itself won't re-order the accesses.
>>>>>> However, it does have to document what it means by accessing a
>>>>>> volatile, and it should be possible to identify from this whether it
>>>>>> prevents the processor from reordering further down, whether it
>>>>>> bypasses the cache etc.
>>>>> Easier said than done. The object code produced by a compiler may
>>>>> subsequently be run on a wide range of CPUs, including those not
>>>>> invented
>>>>> yet. The latest x86 chips will still run code which was generated for a
>>>>> 386.
>>>> If the compiler does not claim to support processors not yet invented
>>>> then that is not a problem. You can't blame a compiler (or program) if
>>>> it fails for processors which are not supported even if the processor is
>>>> theoretically backwards compatible.
>>> if a processor claims to be "backwards compatible" yet old code often
>>> breaks on it, who takes the blame?...
>>> that is right, it is the manufacturer...
>>>
>>> it is worth noting the rather large numbers of hoops Intel, MS, ... have
>>> gone through over the decades to make all this stuff work, and keep
>>> working...

>> <snip>
>>
>> Not successfully. I used programs that worked on a 286 PC but failed on a
>> 386 unless you switched "Turbo mode" off. This was nothing to do with the
>> OS.

>
> on DOS, yes, it is the HW in this case...


Yes, which is relevant to the original point. If a compiler conforms to
the standard on the platform it was written for, but does not conform to
the standard when run on hardware/software more recent than the
compiler, then that is *not* a problem with the compiler.
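
To pin down the volatile point from the quoted sub-thread: on a
multiprocessor, volatile constrains only the compiler's ordering; the
processor needs a barrier as well. A minimal sketch, assuming a compiler
with C11 <stdatomic.h> (the names here are invented for illustration):

#include <stdatomic.h>
#include <stdio.h>

int payload;          /* ordinary shared data */
volatile int vflag;   /* ordered by the compiler only */
atomic_int aflag;     /* ordered by compiler AND processor */

/* writer thread */
void publish(void)
{
    payload = 42;

    /* volatile stops the compiler reordering or eliding this store,
       but a weakly ordered CPU may still let it become visible
       before the payload store */
    vflag = 1;

    /* a release store also orders the payload store ahead of the
       flag at the hardware level */
    atomic_store_explicit(&aflag, 1, memory_order_release);
}

/* reader thread: pairs with the release store above */
void consume(void)
{
    while (atomic_load_explicit(&aflag, memory_order_acquire) == 0)
        ;                     /* spin until published */
    printf("%d\n", payload);  /* guaranteed to print 42 */
}

With only the volatile flag, the reader may observe vflag == 1 before
payload == 42; the release/acquire pair rules that out.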

If either software or hardware vendor changes things in a way that
breaks other software for good reason (and there are lots of good
reasons) then I'm afraid that's part of life.

>>> it doesn't help that even lots of 32-bit SW has broken on newer Windows,
>>> due I suspect to MS no longer really caring so much anymore about legacy
>>> support...

>> It ain't all Microsoft's fault. Also, there are good technical reasons for
>> dropping support of ancient interfaces.

>
> yeah, AMD prompted it with a few of their changes...
>
> but, MS could have avoided the problem by essentially migrating both NTVDM
> and DOS support into an interpreter (which would itself provide segmentation
> and v86).


That doesn't stop it breaking things due to running too fast. It is also
another piece of software which has to be maintained on an ongoing
basis: security audits, testing, regression testing whenever Windows
is patched, redoing the security audit if it needs to be patched due to
a patch in core Windows... it isn't cheap to do right.

> a lot of the rest of what was needed (to glue the interpreter to Win64) was
> likely already implemented in getting WoW64 working, ...
>
> this way, we wouldn't have been stuck needing DOSBox for software from
> decades-past...
>
> if DOSBox can do it, MS doesn't have "that" much excuse, apart from maybe
> that they can no longer "sell" all this old software, so for them there is
> not as much market incentive to keep it working...


It's not simply that there is no money in it. It's also that there are
costs which increase over time. There were costs involved in being able
to run DOS under Win3.1 in protected mode. Getting it working in Win95
required more work and so more costs. Making everything work under
Windows NT would have cost even more (so they did not try and make
everything work). At some point it would require a complete processor
emulator, which can be written, but is even more work and more complex
and would need revalidating every time Windows is patched (the DOSBox
people can just wait until someone reports it is broken, rather than
having to revalidate it themselves for every patch).

There is also the simple old rule that the more lines of code, the more
bugs there will be and the harder maintenance and future development
become, and keeping backwards compatibility and obsolete features
increases the line count.
--
Flash Gordon
 
 
 
 
 
BGB / cr88192
 
      01-16-2010

"Flash Gordon" <(E-Mail Removed)> wrote in message
news:(E-Mail Removed)-gordon.me.uk...
> BGB / cr88192 wrote:
>> "Flash Gordon" <(E-Mail Removed)> wrote in message


<snip>

>>> <snip>
>>>
>>> Not successfully. I used programs that worked on a 286 PC but failed on
>>> a 386 unless you switched "Turbo mode" off. This was nothing to do with
>>> the OS.

>>
>> on DOS, yes, it is the HW in this case...

>
> Yes, which is relevant to the original point. If a compiler conforms to
> the standard on the platform it was written for, but does not conform to
> the standard when run on hardware/software more recent than the compiler,
> then that is *not* a problem with the compiler.
>


yes, this is the fault of the HW vendor(s) for having changed their spec in
a non-backwards-compatible way...

for example, AMD is to blame for a few things:
REX not working outside long mode;
v86 and segments not working in long mode;
...

but, they did an overall decent job considering...
(much better than Itanium; we can see where that went...).


> If either software or hardware vendor changes things in a way that breaks
> other software for good reason (and there are lots of good reasons) then
> I'm afraid that's part of life.
>


but, then one has to determine what counts as a good reason.

in the DOS/Win16 case, I am not convinced there was one.


>>>> it doesn't help that even lots of 32-bit SW has broken on newer
>>>> Windows, due I suspect to MS no longer really caring so much anymore
>>>> about legacy support...
>>> It ain't all Microsoft's fault. Also, there are good technical reasons
>>> for dropping support of ancient interfaces.

>>
>> yeah, AMD prompted it with a few of their changes...
>>
>> but, MS could have avoided the problem by essentially migrating both
>> NTVDM and DOS support into an interpreter (which would itself provide
>> segmentation and v86).

>
> That doesn't stop it breaking things due to running too fast. It is also
> another piece of software which has to be maintained on an ongoing basis:
> security audits, testing, regression testing whenever Windows is
> patched, redoing the security audit if it needs to be patched due to a
> patch in core Windows... it isn't cheap to do right.
>


running too fast doesn't break most apps, and for the rare few that it does
break, emulators like DOSBox include the ability to turn down the virtual
clock-rate (turning it down really low can make Doom lag, ...).

presumably, something like this would be done almost purely in userspace,
and hence it would not be nearly so sensitive to breaking.

basically, in this case the whole 16-bit substructure (including GUI,
...) would likely be moved into the emulator, with the 16-bit apps then
drawing into the real OS via Direct2D or whatever...


>> a lot of the rest of what was needed (to glue the interpreter to Win64)
>> was likely already implemented in getting WoW64 working, ...
>>
>> this way, we wouldn't have been stuck needing DOSBox for software from
>> decades-past...
>>
>> if DOSBox can do it, MS doesn't have "that" much excuse, apart from maybe
>> that they can no longer "sell" all this old software, so for them there
>> is not as much market incentive to keep it working...

>
> It's not simply that there is no money in it. It's also that there are
> costs which increase over time. There were costs involved in being able to
> run DOS under Win3.1 in protected mode. Getting it working in Win95
> required more work and so more costs. Making everything work under Windows
> NT would have cost even more (so they did not try and make everything
> work). At some point it would require a complete processor emulator, which
> can be written, but is even more work and more complex and would need
> revalidating every time Windows is patched (the DOSBox people can just
> wait until someone reports it is broken, rather than having to revalidate
> it themselves for every patch).
>
> There is also the simple old rule that the more lines of code, the more
> bugs there will be and the harder maintenance and future development
> become, and keeping backwards compatibility and obsolete features
> increases the line count.



a CPU emulator is not that complicated, really...
one can write one in maybe around 50 kloc or so.
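
the shape of such an emulator is just a fetch-decode-execute loop; here is
a toy sketch in C (the two-opcode machine is invented for illustration; a
real x86 core is this same loop plus hundreds of opcodes, mode handling,
and device models):

#include <stdint.h>
#include <stdio.h>

/* invented toy machine: 64 KB of RAM, one accumulator, three opcodes */
enum { OP_HALT = 0x00, OP_LOAD_IMM = 0x01, OP_ADD_IMM = 0x02 };

static uint8_t ram[65536];

void run(uint16_t pc)
{
    uint8_t acc = 0;

    for (;;) {
        uint8_t op = ram[pc++];              /* fetch */
        switch (op) {                        /* decode */
        case OP_LOAD_IMM:                    /* execute */
            acc = ram[pc++];
            break;
        case OP_ADD_IMM:
            acc += ram[pc++];
            break;
        case OP_HALT:
            printf("acc = %u\n", (unsigned)acc);
            return;
        default:
            printf("bad opcode %02X\n", (unsigned)op);
            return;
        }
    }
}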

more so, MS already had these sorts of emulators, as they used them for
things like WinNT on Alpha, ... there is little reason a similar emulator
wouldn't work on x64.

the rest would be to dump some old DLLs on top (hell, maybe the ones from
Win 3.11, FWIW, or a subset of 95 or 98...), and maybe do a little plumbing
to get graphics to the host, get mouse input back, allow interfacing with
the native filesystem, ...

I could almost do all this myself, apart from not wanting to bother (since
DOSBox+Win3.11 works, or I could install 95, 98, or XP in QEMU, ...). but as
I see it, this one should have been MS's responsibility (rather than
burdening end-users with something which is presumably their responsibility).

(DOSBox gives direct filesystem access, but doesn't do very well at keeping
it synced, resulting in extra hassles; 32-bit XP + QEMU, OTOH, would allow
mounting a network share.)


hell, MS could have probably even just included DOSBox, FWIW.

well, ok, it is worth noting that Windows 7 Professional & Enterprise do
come with an emulator (not one I have seen personally), which I guess just
runs 32-bit XP (and, so yes, 16-bit SW does maybe return on Win-7, in an
emulator...). (well, with these, one can also get an MS-adapted version of
GCC and BASH, hmm...).

I have Win-7 on my laptop, but it is "Home Ultimate", and hence also
requires the DOSBox or QEMU trick...


anyways, big code is not really that big of a problem IME:
I am working on codebases in the Mloc range, by myself, and in general have
not had too many problems of this sort.

MS has lots more developers, so probably they have easily 10s or 100s of
Mloc to worry about, rather than just the few 100s of kloc needed to make
something like this work, and maybe even be really nice-looking and well
behaved...



 
 
 
 
 
Nobody
 
      01-17-2010
On Sat, 16 Jan 2010 00:30:33 -0700, BGB / cr88192 wrote:

>>> it is worth noting the rather large numbers of hoops Intel, MS, ... have
>>> gone through over the decades to make all this stuff work, and keep
>>> working...

>>
>> It's also worth noting that they know when to give up.
>>
>> If maintaining compatibility just requires effort (on the part of MS or
>> Intel), then usually they make the effort. If it would require a
>> substantial performance sacrifice (i.e. complete software emulation),
>> then tough luck.

>
> for the DOS or Win 3.x apps, few will notice the slowdown, as these apps
> still run much faster in the emulator than on the original HW...


That depends upon how much you emulate.

For code which interacts with hardware, you may need to emulate the
hardware as well. This is less of a problem for the PC, due to the
widespread presence of clones (i.e. non-IBM systems). On platforms with
little or no hardware variation (e.g. the Amiga), programs would often
rely upon a particular section of code completing execution before a
certain hardware event occurred (or vice versa).

You may also need to emulate the timings for other reasons. A game which
doesn't scale to frame rate may need to be slowed down to maintain
playability. OTOH, a game which does scale to frame rate may need to be
slowed down so that the frame timings fit within the expected range.

A concrete example of the latter: the original Ultima Underworld game
basically still runs under Win98 on a P4. However: it scales to frame
rate, i.e. the distance anything moves each frame is proportional to the
time between frames. Normally this would be a good thing, except that
everything uses integer coordinates. On a modern system, the time between
frames is so low that the distance moved per frame often comes out at less
than one "unit", so positive values get rounded to zero and negative
values to minus one, resulting in some entities (including the player)
being unable to move North or East.
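
A hypothetical reconstruction of that failure mode in C, assuming the game
used something like 8.8 fixed-point positions with an arithmetic right
shift (signed >> is implementation-defined, but on most compilers it
floors toward negative infinity, hence the 0 / -1 asymmetry):

#include <stdio.h>

/* movement per frame = speed * frame time, in 8.8 fixed point */
int step(int speed, int dt_ms)
{
    return (speed * dt_ms) >> 8;   /* floors on arithmetic-shift CPUs */
}

int main(void)
{
    /* slow frames: both directions move */
    printf("%d %d\n", step(100, 20), step(-100, 20));  /* 7 -8 */
    /* fast frames: positive floors to 0, negative to -1,
       so the entity can drift one way but never the other */
    printf("%d %d\n", step(100,  2), step(-100,  2));  /* 0 -1 */
    return 0;
}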

tl;dr version: some code is so tied to specific hardware that the only way
you can run it on anything else involves VHDL/Verilog simulation.

 
 
BGB / cr88192
 
      01-17-2010

"Nobody" <(E-Mail Removed)> wrote in message
news:(E-Mail Removed)...
> On Sat, 16 Jan 2010 00:30:33 -0700, BGB / cr88192 wrote:
>
>>>> it is worth noting the rather large numbers of hoops Intel, MS, ...
>>>> have
>>>> gone through over the decades to make all this stuff work, and keep
>>>> working...
>>>
>>> It's also worth noting that they know when to give up.
>>>
>>> If maintaining compatibility just requires effort (on the part of MS or
>>> Intel), then usually they make the effort. If it would require a
>>> substantial performance sacrifice (i.e. complete software emulation),
>>> then tough luck.

>>
>> for the DOS or Win 3.x apps, few will notice the slowdown, as these apps
>> still run much faster in the emulator than on the original HW...

>
> That depends upon how much you emulate.
>
> For code which interacts with hardware, you may need to emulate the
> hardware as well. This is less of a problem for the PC, due to the
> widespread presence of clones (i.e. non-IBM systems). On platforms with
> little or no hardware variation (e.g. the Amiga), programs would often
> rely upon a particular section of code completing execution before a
> certain hardware event occurred (or vice versa).
>


yeah. DOS emulators typically run the apps on fake HW.
for example, DOSBox uses a fake SoundBlaster and S3 Virge (from what I
remember), ...

for Win16, this should not be necessary, since Win16 still did have
protection, and generally isolated the software from the HW. even if it did
allow direct HW access, apps relying on that would be unlikely to have run
on NT-based systems (unless NTVDM was actually faking a bunch of HW as well,
but I doubt this).


> You may also need to emulate the timings for other reasons. A game which
> doesn't scale to frame rate may need to be slowed down to maintain
> playability. OTOH, a game which does scale to frame rate may need to be
> slowed down so that the frame timings fit within the expected range.
>
> A concrete example of the latter: the original Ultima Underworld game
> basically still runs under Win98 on a P4. However: it scales to frame
> rate, i.e. the distance anything moves each frame is proportional to the
> time between frames. Normally this would be a good thing, except that
> everything uses integer coordinates. On a modern system, the time between
> frames is so low that the distance moved per frame comes often out at less
> than one "unit", so positive values get rounded to zero and negative
> values to minus one, resulting in some entities (including the player)
> being unable to move North or East.
>


granted, poor code is allowed to break, presumably...
I think in general the Ultima games were known for being horridly
unreliable/broken even on the HW they were designed for...


anyways, the point would be to make old software work, not to make buggy
software work.
what is asked is that the vast majority of old SW works, not that every old
program with obscure bugs does.


> tl;dr version: some code is so tied to specific hardware that the only way
> you can run it on anything else involves VHDL/Verilog simulation.
>


errm, I doubt this...

most full-system emulators fake things at the level of the IO ports, ... and
this in general works plenty well (both OSes and apps generally work). other
things, such as the DMA and IRQ controllers, ... can similarly be faked in
SW, and don't require full HW simulation.

on many newer systems, the bus controller itself contains a processor and
some code, which emulates some legacy devices in much the same way: watching
IO ports, responding, ...

granted, not everything works exactly, but it stays within reasonable
bounds: QEMU or Bochs will probably not give HW-accelerated graphics, for
example, though most other things work.

bit-twiddling != need for VHDL...
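
faking things at the IO-port level amounts to a dispatch on the port number
whenever the guest executes IN or OUT; a minimal sketch in C (the port
number and the "device" are invented for illustration, not a real register
map):

#include <stdint.h>

#define FAKE_TIMER_PORT 0x40   /* invented for this sketch */

static uint8_t timer_latch;

/* the CPU core calls this when the guest executes OUT */
void io_write(uint16_t port, uint8_t val)
{
    switch (port) {
    case FAKE_TIMER_PORT:
        timer_latch = val;     /* guest programs the fake timer */
        break;
    default:
        break;                 /* unclaimed ports are ignored */
    }
}

/* and this when the guest executes IN */
uint8_t io_read(uint16_t port)
{
    switch (port) {
    case FAKE_TIMER_PORT:
        return timer_latch;    /* guest reads the count back */
    default:
        return 0xFF;           /* open bus reads as all ones */
    }
}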


 
 
Nobody
 
      01-18-2010
On Sun, 17 Jan 2010 10:53:40 -0700, BGB / cr88192 wrote:

>> tl;dr version: some code is so tied to specific hardware that the only way
>> you can run it on anything else involves VHDL/Verilog simulation.

>
> errm, I doubt this...
>
> most full-system emulators fake things at the level of the IO ports, ... and
> this in general works plenty well (both OSes and apps generally work). other
> things, such as the DMA and IRQ controllers, ... can similarly be faked in
> SW, and don't require full HW simulation.


But they only emulate the hardware to the extent sufficient for "typical"
use.

That's not a problem if the system you're trying to emulate includes a
"real" OS. You only need to emulate to the level at which the OS uses the
hardware, and at which it permits other applications to use it.

On platforms where it was common for applications to just kick the OS
out of the way and access the hardware directly (i.e. most of the 8- and
16-bit micros, and PCs before Win3.1 took over from DOS), anything could
happen (and often did).
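
A classic instance of that kick-the-OS-out-of-the-way style: DOS-era code
writing pixels straight into VGA memory, no OS call in sight. A sketch
assuming a 16-bit DOS compiler such as Turbo C (far pointers and MK_FP are
Borland idioms); this is exactly the kind of access an emulator has to
catch at the hardware level:

#include <dos.h>

int main(void)
{
    /* the mode 13h VGA framebuffer lives at segment 0xA000 */
    unsigned char far *vga = (unsigned char far *)MK_FP(0xA000, 0);
    union REGS r;

    r.x.ax = 0x0013;           /* BIOS int 10h: set 320x200x256 mode */
    int86(0x10, &r, &r);

    vga[100 * 320 + 160] = 15; /* poke a white pixel, no OS involved */

    return 0;
}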

 