Velocity Reviews (http://www.velocityreviews.com/forums/index.php)
-   C Programming (http://www.velocityreviews.com/forums/f42-c-programming.html)
-   -   signed and unsigned types (http://www.velocityreviews.com/forums/t441329-signed-and-unsigned-types.html)

 Bilgehan.Balban@gmail.com 02-14-2006 04:21 PM

signed and unsigned types

Hi,

I have a basic question on signed and unsigned integers. Consider the
following code:

// Some context
{
unsigned int *x = (unsigned int *)(SOME_ADDR);
*x = ( 1 << 10 );
}

Here, how would ( 1 << 10 ) be interpreted in terms of sign? My
compiler does not give any warnings for a case like (1 << 10) assigned
to an unsigned int, however, it does say, "result of operation out of
range" for an assignment like (1 << 31). My interpretation of it was
that, in (1 << 31), "1" is signed by default, and shifting it 31 bits
overflows the type because [31] is the sign bit, and this is the cause
of the warning. But why does it not warn for the former case? Is the
sign determined by the lvalue?

Finally, a bit off-topic, but does a cast between signed and unsigned
values generate (perhaps a handful of) instructions for converting
between two's complement signed and unsigned representations?

Thanks,

 Eric Sosman 02-14-2006 05:56 PM

Re: signed and unsigned types

Bilgehan.Balban@gmail.com wrote on 02/14/06 11:21:
> Hi,
>
> I have a basic question on signed and unsigned integers. Consider the
> following code:
>
>
> // Some context
> {
> unsigned int *x = (unsigned int *)(SOME_ADDR);
> *x = ( 1 << 10 );
> }
>
> Here, how would ( 1 << 10 ) be interpreted in terms of sign?

Exactly as it would in any other context: it is the
positive value 1024, with type `int' (aka `signed int').
The business with `x' (including the dubious initialization)
is irrelevant to the evaluation of `1 << 10'.

> My
> compiler does not give any warnings for a case like (1 << 10) assigned
> to an unsigned int, however, it does say, "result of operation out of
> range" for an assignment like (1 << 31). My interpretation of it was
> that, in (1 << 31), "1" is signed by default, and shifting it 31 bits
> overflows the type because [31] is the sign bit, and this is the cause
> of the warning. But why does it not warn for the former case? Is the
> sign determined by the lvalue?

First, the compiler is being helpful in emitting the
warning; it is not required to do so. Left-shifts that
attempt to promote a one-bit into the sign position
yield what is known as "undefined behavior," meaning that
the C Standard washes its hands of your program and refuses
to say anything more about what might happen. The compiler
has noticed that `1 << 31' strays into this dangerous
territory, and warns you that you may find dragons there.

Second, there's nothing at all wrong with `1 << 10':
it yields 1024, always, and is perfectly well-defined.
There's no reason for the compiler to grouse about it.
Of course, a compiler is permitted to issue any warnings
it wants -- it could complain about the way you indent, if
its writers so chose -- but it is not required to issue
diagnostics for valid code, and the writers presumably felt
that complaining about a perfectly ordinary expression
would be unwelcome.

> Finally, a bit off-topic, but does a cast between signed and unsigned
> values generate (perhaps a handful of) instructions for converting
> between two's complement signed and unsigned representations?

It might, it might not. Everything depends on the
characteristics of the underlying hardware: the compiler
must emit instructions to produce the defined effect, but
what those instructions are (if there are any) differs
from one system to another.

--
Eric.Sosman@sun.com

 Alex Fraser 02-14-2006 08:30 PM

Re: signed and unsigned types

<Bilgehan.Balban@gmail.com> wrote in message
[snip]
> Finally, a bit off-topic, but does a cast between signed and unsigned
> values generate (perhaps a handful of) instructions for converting
> between two's complement signed and unsigned representations?

N869 (the last public draft of the C99 standard) says this:

6.3.1.3 Signed and unsigned integers

[#1] When a value with integer type is converted to another
integer type other than _Bool, if the value can be
represented by the new type, it is unchanged.

[#2] Otherwise, if the new type is unsigned, the value is
converted by repeatedly adding or subtracting one more than
the maximum value that can be represented in the new type
until the value is in the range of the new type.

[#3] Otherwise, the new type is signed and the value cannot
be represented in it; the result is implementation-defined.

Knowing this, the sizes of types used by a compiler, and the instruction set
of the target processor, you should have some idea of what code is generated
for conversions covered by the first two paragraphs - typically (depending
on the types) either none at all, zero extension, or sign extension.

For obvious reasons, you would do well to avoid relying on the result of
conversions covered by the third paragraph, but if two's complement
representation is used for signed integers the result is typically like
converting to the corresponding unsigned type, then reinterpreting the bits
as if they represented a signed value.

Alex
