Who can explain this bug?

 
 
Tim Rentsch
      05-07-2013
David Brown <(E-Mail Removed)> writes:

> [snipping a lot]
>
> But it strikes me that "strictly conforming" is so tight that it
> is close to useless, if you can't write real-world programs that
> are strictly conforming. [...] On the other hand, "conforming"
> is so loose that it is close to useless.


The point of these terms (and their definitions) is to aid
discussion of implementations, not of programs. Sometimes
people forget that the main reason for having a C standard
is to define requirements for implementors, not to help
developers.
 
Tim Rentsch
      05-07-2013
David Brown <(E-Mail Removed)> writes:

> [..snip..]
>
> Some embedded compilers I use have 4-byte doubles (identical to their
> single-precision floats), which is not unreasonable for 8-bit
> microcontrollers. Can such a compiler be "conforming", or must
> the doubles be bigger? I'm getting the impression that a
> conforming compiler would need 8-byte doubles, but could throw
> away the extra bits and do the calculations as floats.


You'll get better answers to questions like this, and a better
understanding of what is being said, if you first read what
the Standard says and try to discover the answer yourself:

http://www.open-std.org/jtc1/sc22/wg...docs/n1256.pdf
http://www.open-std.org/jtc1/sc22/wg...docs/n1570.pdf

For this question you should look at what's written in section
5.2.4.2.2 'Characteristics of floating types'.
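
As a concrete starting point, here is a minimal sketch (added for
illustration; not part of the original post) that prints the <float.h>
values 5.2.4.2.2 constrains, with the standard's required bounds in
the comments:

    /* Print the <float.h> characteristics of double that 5.2.4.2.2
       constrains; the parenthesized bounds are the standard's minima. */
    #include <float.h>
    #include <stdio.h>

    int main(void)
    {
        printf("DBL_DIG        = %d   (must be >= 10)\n", DBL_DIG);
        printf("DBL_MIN_10_EXP = %d   (must be <= -37)\n", DBL_MIN_10_EXP);
        printf("DBL_MAX_10_EXP = %d   (must be >= +37)\n", DBL_MAX_10_EXP);
        printf("sizeof(double) = %zu bytes\n", sizeof(double));
        return 0;
    }

On the 8-bit toolchains David describes, the first line alone settles
the question, since (as discussed below) no 32-bit format can reach
DBL_DIG >= 10.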
 
Tim Rentsch
      05-07-2013
Marcin Grzegorczyk <(E-Mail Removed)> writes:

> David Brown wrote:
> [...]
>> Some embedded compilers I use have 4-byte doubles (identical to their
>> single-precision floats), which is not unreasonable for 8-bit
>> microcontrollers. Can such a compiler be "conforming", or must the
>> doubles be bigger? I'm getting the impression that a conforming
>> compiler would need 8-byte doubles, but could throw away the extra bits
>> and do the calculations as floats.

>
> Indeed, the minimum requirements given in 5.2.4.2.2 for DBL_DIG,
> DBL_MIN_10_EXP and DBL_MAX_10_EXP cannot be met by an implementation
> that stores double in 32 bits. However, 8 bytes are not necessary; 6
> are enough. (If both the exponent and the significand are stored in
> binary, then (unless I made a mistake in the calculations) the
> exponent requires min. 8 bits and the significand requires min. 34
> bits, of which the most significant one need not be stored. Together
> with the sign bit, that gives 42 bits total.)


The formula given for DBL_DIG in 5.2.4.2.2 p11 implies, for b == 2,
a lower bound of 35 significand bits, giving 43 bits altogether (ie,
the rest of the calculation is right).

However, if b == 10 is used, and the exponent and significand are
encoded into a single (binary) numeric quantity, this can be
reduced to just 41 bits. This representation also gives a
noticeably larger exponent range (though entailing, of course, a
loss of almost two bits of precision, plus an unmentionably
large cost converting to/from the stored representation to do
floating point arithmetic).
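
To see where the 35 comes from, a small sketch (mine, not Tim's) that
applies the 5.2.4.2.2 p11 formula DBL_DIG = floor((p - 1) * log10 b)
for b == 2 and searches for the smallest conforming precision:

    /* Smallest binary significand precision p giving DBL_DIG >= 10,
       per the formula in 5.2.4.2.2 p11. */
    #include <math.h>
    #include <stdio.h>

    int main(void)
    {
        int p = 1;
        while ((int)floor((p - 1) * log10(2.0)) < 10)
            p++;
        printf("p = %d\n", p);  /* prints 35: 34 * log10(2) ~= 10.235 */
        return 0;
    }

With the leading significand bit implicit, that gives 35 - 1 = 34
stored significand bits, plus 8 exponent bits and a sign bit: the 43
bits above.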

> (Side note: IIRC, at least one C implementation for Atari ST (which
> did not have an FPU) did use 6-byte doubles. Unlike the IEEE formats,
> the exponent was stored after the significand, so that they could be
> separated for further processing with only AND instructions, no
> shifts.)
>
> Still, as James Kuyper wrote in this thread,


Either I missed this posting, or I understood its contents as
saying something different from the rest of the sentence.

> an implementation could pretend (via the <float.h> constants) to
> provide more precision than it really did, and probably still
> claim conformance. It would be a rather perverse thing to do,
> though.


Are you saying that, for example, an implementation could define
DBL_DIG to be 10, even when the stored representation would make
it impossible to convert any 10 decimal digit floating point
number to the stored representation and back again (so as to get
the original 10 decimal digits)? AFAICS such an implementation
could no more be conforming than one that defined INT_MAX as 15.
Have I misunderstood what you're saying? If not, can you explain
how you arrive at this conclusion, or what interpretation might
allow such a result?
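
For concreteness, the round trip being described looks like this
sketch (my own example, not from the thread; one passing sample is of
course evidence, not a proof of conformance):

    /* Round-trip test for DBL_DIG: a decimal string with DBL_DIG
       significant digits should convert to double and back unchanged. */
    #include <float.h>
    #include <stdio.h>
    #include <stdlib.h>
    #include <string.h>

    int main(void)
    {
        char in[64], out[64];
        int i;

        for (i = 0; i < DBL_DIG; i++)  /* build a DBL_DIG-digit string */
            in[i] = '1' + i % 9;
        in[i] = '\0';

        sprintf(out, "%.*g", DBL_DIG, strtod(in, NULL));
        printf("%s -> %s (%s)\n", in, out,
               strcmp(in, out) == 0 ? "round trip ok" : "MISMATCH");
        return 0;
    }

An implementation that defined DBL_DIG as 10 on top of a 32-bit format
(which round-trips only 6 or so decimal digits) would fail this for
many inputs, which is the point of the question.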
 
Keith Thompson
      05-07-2013
Tim Rentsch <(E-Mail Removed)> writes:
> David Brown <(E-Mail Removed)> writes:
>
>> [snipping a lot]
>>
>> But it strikes me that "strictly conforming" is so tight that it
>> is close to useless, if you can't write real-world programs that
>> are strictly conforming. [...] On the other hand, "conforming"
>> is so loose that it is close to useless.

>
> The point of these terms (and their definitions) is to aid
> discussion of implementations, not of programs. Sometimes
> people forget that the main reason for having a C standard
> is to define requirements for implementors, not to help
> developers.


I'm not sure I agree with that last statement. I see a language
standard as a contract between implementers and developers, applying
equally to both.

--
Keith Thompson (The_Other_Keith) (E-Mail Removed) <http://www.ghoti.net/~kst>
Working, but not speaking, for JetHead Development, Inc.
"We must do something. This is something. Therefore, we must do this."
-- Antony Jay and Jonathan Lynn, "Yes Minister"
 
James Kuyper
      05-07-2013
On 05/04/2013 06:55 PM, Marcin Grzegorczyk wrote:
[...]
> Still, as James Kuyper wrote in this thread, an implementation could
> pretend (via the <float.h> constants) to provide more precision than it
> really did, and probably still claim conformance. It would be a rather
> perverse thing to do, though.


I didn't really intend to imply that, though there's a certain amount of
truth to it. The standard does fail to impose any accuracy requirements
whatsoever on floating point arithmetic or on the functions in
<math.h> and <complex.h> (5.2.4.2.2p6). That fact makes it difficult to
write any code involving floating point operations that is guaranteed by
the standard to do anything useful. As a special case of that general
statement, it makes it difficult to prove that a defective <float.h> is
in fact defective.

However, it was not my point that <float.h> could be defective. In fact,
what I basically demonstrated is that there's a limit on how much
imprecision can be covered up by this means. The inaccuracy allowed by
the standard for conversion of floating point constants to floating
point values is sufficient to make it difficult to prove non-conformance
if the correct value for DBL_EPSILON were as large as twice the
standard's maximum value. However, anything more than twice that maximum
would be provably non-conforming, despite what it says in 5.2.4.2.2p6.
That's not enough to fake conformance with the standard's requirements
while using 32-bit floating point for double, which was the issue I was
addressing.
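
To put numbers on that limit (my arithmetic, for illustration):
5.2.4.2.2 caps DBL_EPSILON at 1E-9, while a 32-bit IEEE format has an
epsilon of 2^-23, about 1.19E-7 -- roughly a hundred times over the
ceiling, far beyond the factor-of-two slack described above:

    /* Compare the implementation's epsilons against the 1E-9 ceiling
       that 5.2.4.2.2 places on DBL_EPSILON. */
    #include <float.h>
    #include <stdio.h>

    int main(void)
    {
        printf("DBL_EPSILON = %g  (must be <= 1e-9)\n", DBL_EPSILON);
        printf("FLT_EPSILON = %g  (what a 32-bit double would have)\n",
               (double)FLT_EPSILON);
        return 0;
    }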
 
Marcin Grzegorczyk
      05-07-2013
Tim Rentsch wrote:
> Marcin Grzegorczyk <(E-Mail Removed)> writes:

[...]
>> (If both the exponent and the significand are stored in
>> binary, then (unless I made a mistake in the calculations) the
>> exponent requires min. 8 bits and the significand requires min. 34
>> bits, of which the most significant one need not be stored. Together
>> with the sign bit, that gives 42 bits total.)

>
> The formula given for DBL_DIG in 5.2.4.2.2 p11 implies, for b == 2,
> a lower bound of 35 significand bits, giving 43 bits altogether (ie,
> the rest of the calculation is right).


I checked that again; you're right.
I did make a mistake in the calculations, then.
--
Marcin Grzegorczyk
 
Tim Rentsch
      06-11-2013
Keith Thompson <(E-Mail Removed)> writes:

> Tim Rentsch <(E-Mail Removed)> writes:
>> David Brown <(E-Mail Removed)> writes:
>>
>>> [snipping a lot]
>>>
>>> But it strikes me that "strictly conforming" is so tight that it
>>> is close to useless, if you can't write real-world programs that
>>> are strictly conforming. [...] On the other hand, "conforming"
>>> is so loose that it is close to useless.

>>
>> The point of these terms (and their definitions) is to aid
>> discussion of implementations, not of programs. Sometimes
>> people forget that the main reason for having a C standard
>> is to define requirements for implementors, not to help
>> developers.

>
> I'm not sure I agree with that last statement. I see a
> language standard as a contract between implementers and
> developers, applying equally to both.


I don't disagree on your fundamental point, but there's a
subtle distinction between what the two statements are
considering. The main reason for having C be standardized is
just as you say, to form a contract between implementors and
developers, and this is of interest to both. However, once the
terms of the contract are decided, the main reason for having a
defining document is to serve as a reference for implementors.
To be sure, some developers will read the defining document,
and even benefit from reading it, but it isn't necessary to be
a successful C developer, whereas reading and understanding the
C Standard _is_ pretty much necessary to write a conforming
implementation. So I don't think it's wrong to say the main
reason for having a C standard -- in the sense of the actual
formal document -- is to define requirements for implementors.
C developers do very much benefit from having the language
be standardized, but for the most part they do not benefit
(directly) from the actual formal document.
 
glen herrmannsfeldt
      06-11-2013
Tim Rentsch <(E-Mail Removed)> wrote:
> Keith Thompson <(E-Mail Removed)> writes:


(snip)
>> I'm not sure I agree with that last statement. I see a
>> language standard as a contract between implementers and
>> developers, applying equally to both.


> I don't disagree on your fundamental point, but there's a
> subtle distinction between what the two statements are
> considering. The main reason for having C be standardized is
> just as you say, to form a contract between implementors and
> developers, and this is of interest to both. However, once the
> terms of the contract are decided, the main reason for having a
> defining document is to serve as a reference for implementors.
> To be sure, some developers will read the defining document,
> and even benefit from reading it, but it isn't necessary to be
> a successful C developer, whereas reading and understanding the
> C Standard _is_ pretty much necessary to write a conforming
> implementation.


Well, to me a good language can be understood by developers
using a simpler set of rules, rarely having to look up the
fine details in the standard.

PL/I includes features for scientific, commercial, and systems
programming, and people from one group rarely need the features
specific to the others. (Many features are used by more than
one group.)

As for precision and range of data types, often the limitations
of an implementation are passed along to users of the program.
That might be more true of floating point, but it is still true
of fixed point.

One consequence of that is that undetected overflow may surprise
the user of a program.
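
A two-line fixed-point illustration of that kind of surprise (a
hypothetical example of my own, assuming the common wraparound
behavior on the narrowing conversion):

    /* Two plausible positive amounts whose 16-bit sum wraps; nothing
       is detected, and the user just sees a negative total. */
    #include <stdint.h>
    #include <stdio.h>

    int main(void)
    {
        int16_t balance = 30000, deposit = 10000;
        int16_t total = (int16_t)(balance + deposit);  /* 40000 wraps */
        printf("total = %d\n", total);  /* typically prints -25536 */
        return 0;
    }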

> So I don't think it's wrong to say the main
> reason for having a C standard -- in the sense of the actual
> formal document -- is to define requirements for implementors.
> C developers do very much benefit from having the language
> be standardized, but for the most part they do not benefit
> (directly) from the actual formal document.


I agree.

-- glen
 