Velocity Reviews > Re: Casting an array to integer type

# Re: Casting an array to integer type

Joe Wright

 09-02-2005
Eric Sosman wrote:
>
> tedu wrote:
>
>>Eric Sosman wrote:
>>
>>
>>> struct { int flag : 1; } s;
>>>
>>>... the value of `s.flag' will always be zero, no matter what
>>>you try to assign to it. (Explanation: if `int' is signed
>>>when used as a bit-field base, the lone bit of `s.flag' will
>>>be its sign bit and there will be no value bits at all.)

>>
>>
>>I may be misunderstanding how bitfields work, but why couldn't you read
>>and write the sign bit in such a case?

>
>
> The Standard allows three representations for signed
> integers: signed magnitude, ones' complement, and two's
> complement. In all three, if the sign bit is clear the
> value is what you'd expect by considering the value bits
> as ordinary binary notation. Since s.flag has no value
> bits, its value when the sign bit is clear must be zero.
>
> Now for the negatives. If the sign bit is set
>
> - for signed magnitude, the value is the negative of
> the number obtained from the value bits. The value
> bits contribute zero, which when negated is still
> zero.[*]
>
> - for ones' complement, the value is the negative of
> the number obtained by inverting all the value bits.
> There are no value bits to invert, so the zero value
> they contribute is still zero after negation.[*]
>
> - for two's complement, the value is the number obtained
> from the value bits, minus 2**k where k is the count
> of value bits (zero in this case). The value bits
> produce a zero, and subtracting 2**0 == 1 yields -1.
> Aha! I see that I mis-remembered and mis-stated
> the case.
>
> So: in two of the allowable representations the value
> is uniformly zero.[*] In two's complement the value is
> either zero or minus one, but never plus one. When I said
> `if (s.flag)' could not succeed I was wrong (for the common
> case of two's complement); what I should have said had I
> remembered the case better and/or been thinking more clearly
> is that `if (s.flag > 0)' or `if (s.flag == 1)' could never
> succeed, not in any of the three representations. And indeed,
> this is what the compiler warned about: the range of s.flag
> includes no positive values, so the compiler emitted the same
> warning it would for `if (sizeof(int) < 0)'. I apologize if
> I've misled anyone.
>
>[*] The Standard permits minus zero to be a "trap value,"
> meaning that you don't even get a value at all: you get a
> signal of some kind instead. In all three representations
> s.flag = 1 is dubious, because 1 is outside the representable
> range: you'll get an implementation-defined value or an
> implementation-defined signal.
>

Try this..

#include <stdio.h>

struct { int flag : 1; } s;

int main(void) {
    s.flag = 1;
    printf("%d\n", s.flag);
    s.flag = 0;
    printf("%d\n", s.flag);
    s.flag = -1;
    printf("%d\n", s.flag);
    s.flag = -2;
    printf("%d\n", s.flag);
    return 0;
}
Here at home gcc 3.1 prints..

-1
0
-1
0

...which seems at odds with your explanation. It looks like the low-order
bit gets assigned as a value bit and that in the conversion :1 to int
the value expands into the sign.

Of course gcc could get it wrong. (?)

--
Joe Wright
"Everything should be made as simple as possible, but not simpler."
--- Albert Einstein ---

Eric Sosman

 09-03-2005
Joe Wright wrote:

> Eric Sosman wrote:
>> [...]
>> - for two's complement, the value is the number obtained
>> from the value bits, minus 2**k where k is the count
>> of value bits (zero in this case). The value bits
>> produce a zero, and subtracting 2**0 == 1 yields -1.

>
> Try this..
>
> [program snipped]
>
> Here at home gcc 3.1 prints..
>
> -1
> 0
> -1
> 0
>
> ..which seems at odds with your explanation. It looks like the low-order
> bit gets assigned as a value bit and that in the conversion :1 to int
> the value expands into the sign.

It seems to confirm what I wrote (once I got my head
unscrewed): in two's complement, a one-bit signed integer
is either 0 or -1. Your first and fourth assignments invoke
implementation-defined behavior, because neither 1 nor -2
is one of the possible values. You got an implementation-
defined value as a result; the Standard also permits the
raising of an implementation-defined signal.

(Actually, there's implementation-defined behavior in
the third assignment, too, because the implementation defines
the representation of signed integers: if it chooses signed
magnitude or ones' complement the only representable value
is zero, and -1 is out of the representable range.)

--
Eric Sosman

Tim Rentsch

 09-03-2005
Eric Sosman <(E-Mail Removed)> writes:

> [...]
> So: in two of the allowable representations the value
> is uniformly zero.[*] In two's complement the value is
> either zero or minus one, but never plus one. When I said
> `if (s.flag)' could not succeed I was wrong (for the common
> case of two's complement); what I should have said had I
> remembered the case better and/or been thinking more clearly
> is that `if (s.flag > 0)' or `if (s.flag == 1)' could never
> succeed, not in any of the three representations. And indeed,
> this is what the compiler warned about: the range of s.flag
> includes no positive values, so the compiler emitted the same
> warning it would for `if (sizeof(int) < 0)'. I apologize if
> I've misled anyone.
>
>[*] The Standard permits minus zero to be a "trap value,"
> meaning that you don't even get a value at all: you get a
> signal of some kind instead. In all three representations
> s.flag = 1 is dubious, because 1 is outside the representable
> range: you'll get an implementation-defined value or an
> implementation-defined signal.

Actually, 'if (s.flag > 0)' or 'if (s.flag == 1)' could
succeed, in any of the three representation schemes,
depending on the implementation.

The rule about conversions that's indirectly referenced in
the *'ed paragraph is slightly misquoted. The wording in
6.3.1.3 p3 says the result is implementation-defined, not
that the conversion yields an implementation-defined value:

6.3.1.3

3 Otherwise, the new type is signed and the value
cannot be represented in it; either the result is
implementation-defined or an implementation-defined
signal is raised.

Because the result is implementation-defined, it can be a
trap representation. A sign bit of 1, with no value bits,
can be (implementation-)defined to be a trap representation
in any of the three representation schemes (6.2.6.2 p2). If
the result of the conversion is a trap representation, the
expressions accessing s.flag are undefined behavior; so,
they might yield 1 as a result.

The question of what conversions happen when assigning to
the bitfield s.flag is a little murky, because exactly what
the type is of a bitfield is a little murky. However, one
of three cases holds: (1) a conversion to the narrow type
happens because of the assignment -- this conversion can
yield a trap representation and so produces undefined
behavior; (2) a conversion to the narrow type happens when
the value of the right hand side is stored, which again can
yield a trap representation and produce undefined behavior;
or, (3) the right hand side is converted to 'int' type, and
there is no conversion to the narrow type, but storing the
value into the too-narrow bitfield results in an exceptional
condition, again producing undefined behavior. In each case
the result of the undefined behavior could cause the
expressions referencing s.flag to have the value 1.

The s.flag expressions could also yield the value 1 without
the presence of undefined behavior. The reason is that it's
implementation-defined whether 'int' on a bitfield is the
same as 'signed int' or 'unsigned int' (6.7.2 p5).

Peter Nilsson

 09-05-2005
Eric Sosman wrote:
> Peter Nilsson wrote:
> > Keith Thompson wrote:
> > > The only allowed types for bit-fields are plain int, unsigned
> > > int, signed int, and (C99 only) _Bool.

> >
> > Thanks for that.
> >
> > I note though that C99 also includes "or some other
> > implementation-defined type." Did any of the final C89/90/94/95
> > include this too? [I notice that it's missing from the C89
> > draft.]

>
> A given implementation is permitted to accept `wchar_t',
> say, as a bit-field base type, and is not required to issue
> a diagnostic if you use it.

This seems a very roundabout way to answer my question. You seem
to be implying 'yes, implementation-defined bitfield types also

<snip>

--
Peter