
# Behavior of the code

somenath
Guest
Posts: n/a

 12-25-2007
Hi All,

I was going through one of the exercises in a C tutorial.
It gives a small piece of code and asks about the output.

#include <stdio.h>
int main(void)
{
    int x = 0xFFFFFFF0;
    signed char y;
    y = x;
    printf("%x\n", y);
    return 0;
}
Output of the program is

fffffff0

The explanation they give is as below:

Since %x is being used for printing, the printf statement will promote
y to an integer and hence fffffff0 will be printed.

I have doubts about the explanation.

My understanding is:
1) If we assign a signed variable a value larger than its maximum, it
overflows, so the behavior is undefined.
2) The explanation says %x promotes y to an integer, but I think %x
just converts its argument to hexadecimal.
Is my understanding wrong?

Regards,
Somenath

Thomas Lumley
Guest
Posts: n/a

 12-25-2007
On Dec 24, 8:57 pm, somenath <(E-Mail Removed)> wrote:
> I was going through one of the exercise of one C tutorial .
> In that they have given one small code and asked about the output.
>
> #include <stdio.h>
> int main(void)
> {
> int x = 0xFFFFFFF0;
> signed char y;
> y = x;
> printf("%x\n", y);
> return 0;}
>
> Output of the program is
>
> fffffff0
>
> The explanation they have given as bellow
>
> Since %x is being used for printing, the printf statement will promote
> y to an integer and hence fffffff0 will be printed.
>
> I have doubt about the explanation.

Good. The explanation is wrong.

> My understanding is
> 1) if we assign a signed variable more than its max size it will
> overflow so the behavior is undefined .

It's not quite as bad as that. If 0xfffffff0 is interpreted as an int
and ints have a 32-bit two's complement representation then x is -16,
which is within the range of signed char. For other implementation
choices, such as 64-bit ints, there may be undefined behaviour for
exactly the reason you suggest.

There is some implementation-dependence here: obviously the size of
int, but also (if I am interpreting the standard correctly) whether
0xFFFFFFF0 is interpreted as an int or unsigned int constant. If it is
interpreted as unsigned int then I think you do have undefined
behaviour.

> 2) And in the explanation it says %x promote y to an integer but I
> think it converts its argument to hexadecimal.

Yes. You are right, the explanation is wrong. We can look at a
simplified version of the program that doesn't have the confusing
conversion from int.

#include <stdio.h>
int main(void)
{
    signed char y = -16;
    printf("%x\n", y);
    return 0;
}

The actual output is implementation-specific, but on my computer this
also prints

fffffff0

In this program, since y is part of the variable-length argument
list of printf() it will be promoted[1] by the default argument
promotions to int. printf() receives an int value of -16, and the
%x format asks for this in hexadecimal. The %x format has nothing to
do with the promotion of y to int.

-thomas

[1] I'm not sure whether it is possible for signed char to have values
that don't fit in an int. I'm sure that someone on the list is.

santosh
Guest
Posts: n/a

 12-25-2007
Thomas Lumley wrote:

> On Dec 24, 8:57 pm, somenath <(E-Mail Removed)> wrote:
>> I was going through one of the exercise of one C tutorial .
>> In that they have given one small code and asked about the output.
>>
>> #include <stdio.h>
>> int main(void)
>> {
>> int x = 0xFFFFFFF0;
>> signed char y;
>> y = x;
>> printf("%x\n", y);
>> return 0;}
>>
>> Output of the program is
>>
>> fffffff0
>>
>> The explanation they have given as bellow
>>
>> Since %x is being used for printing, the printf statement will
>> promote y to an integer and hence fffffff0 will be printed.
>>
>> I have doubt about the explanation.

>
> Good. The explanation is wrong.
>
>
>> My understanding is
>> 1) if we assign a signed variable more than its max size it
>> will overflow so the behavior is undefined .

>
> It's not quite as bad as that. If 0xfffffff0 is interpreted as an int
> and ints have a 32-bit two's complement representation [ ... ]

The C standard imposes no such requirements: int need not be 32 bits,
nor need it use two's complement representation. It need only hold
values in the range -32767 to 32767. The representation could be two's
complement, ones' complement, sign-and-magnitude, or something else. It
is not stated explicitly, but we may presume at least 16 bits for an
int, since C requires a binary representation.

Old Wolf
Guest
Posts: n/a

 12-25-2007
On Dec 25, 5:57 pm, somenath <(E-Mail Removed)> wrote:
>
> #include <stdio.h>
> int main(void)
> {
> int x = 0xFFFFFFF0;
> signed char y;
> y = x;
> printf("%x\n", y);
> return 0;}
>
> Output of the program is
>
> fffffff0
>
> The explanation they have given as bellow
>
> Since %x is being used for printing, the printf statement will promote
> y to an integer and hence fffffff0 will be printed.

This is rubbish, your tutorial is lame

> My understanding is
> 1) if we assign a signed variable more than its max size it will
> overflow so the behavior is undefined .

Assigning an out-of-range value to a signed type is
implementation-defined. Your compiler documentation should say
somewhere what it does in this case.

It looks like on your implementation, the result is that
-16 gets assigned to y.

> 2) And in the explanation it says %x promote y to an integer but I
> think it converts its argument to hexadecimal.
> Is my understanding wrong?

The behaviour is undefined because %x expects an unsigned int
argument, and the compiler doesn't insert any conversion for you
beyond the default argument promotions.

somenath
Guest
Posts: n/a

 12-25-2007
On Dec 25, 10:57 am, santosh <(E-Mail Removed)> wrote:
> Thomas Lumley wrote:
> > On Dec 24, 8:57 pm, somenath <(E-Mail Removed)> wrote:

<snip>

> The C standard explicitly excuses such restrictions. int neither needs
> to be 32 bits nor be represented in twos-complement form. It needs to
> hold values in the range -32767 to 32767. It could be twos-complement,
> ones-complement or sign-and-magnitude or something else. It is not
> explicitly stated but we may presume at least 16 value bits for an int,
To get a little bit clearer:
so if I assign 128 to a signed char, is it not guaranteed that it will
be converted to -128?
For example:
signed char x = 0x80;
x will not always contain -128?
Is -128 guaranteed only in the case of a 2's complement representation?

Malcolm McLean
Guest
Posts: n/a

 12-25-2007

"somenath" <(E-Mail Removed)> wrote in message

>get little bit clearer .
>So if i assigned 128 to signed char it is not guranted that it will
>be converted to -128
>For example
>signed char x = 0x80;
>x will not always contains -128 ?
>Is it only possible in case of 2's complement representation ?
>

Yes. In practice it is most unlikely that you will ever encounter a
non-two's complement machine, but C allows it.
However it is not so unlikely that a char will be more than 8 bits. Not
all machines do 8-bit addressing at the instruction level, in which
case it is reasonable to make char wider.

--
Free games and programming goodies.
http://www.personal.leeds.ac.uk/~bgy1mm

somenath
Guest
Posts: n/a

 12-25-2007
On Dec 25, 10:45 am, Thomas Lumley <(E-Mail Removed)> wrote:
> On Dec 24, 8:57 pm, somenath <(E-Mail Removed)> wrote:

<snip>

> We can look at a
> simplified version of the program that doesn't have the confusing
> conversion from int.
>
> #include <stdio.h>
> int main(void)
> {
>         signed char y = -16;
>         printf("%x\n", y);
>         return 0;
> }
>
> In this program, since y is part of the variable-length argument
> list of printf() it will be promoted[1] by the default argument
> promotions to int. printf() receives an int value of -16, and the
> %x format asks for this in hexadecimal. The %x format has nothing to
> do with the promotion of y to int.
I am unable to follow the conversation.
Let me explain my understanding.

int x = 0xFFFFFFF0;
signed char y;
y = x;

In these three statements y is guaranteed to contain the last byte of
the integer. As my machine is little-endian, y will be equal to 0xF0,
i.e. 240 in decimal.
Now when I print it using %x it should print the hexadecimal
representation of 240, i.e. f0. Why is it printing fffffff0?

santosh
Guest
Posts: n/a

 12-25-2007
somenath wrote:

> On Dec 25, 10:57 am, santosh <(E-Mail Removed)> wrote:
>> Thomas Lumley wrote:

<snip>

> To get little bit clearer .
> So if i assigned 128 to signed char it is not guranted that it will
> be converted to -128
> For example
> signed char x = 0x80;
> x will not always contains -128 ?
> Is it only possible in case of 2's complement representation ?

This is not guaranteed for plain signed char. You could test for the
presence of int8_t and use that if available, but it's likely not to be
available on a non-twos-complement machine, in which case you'll have
to use a wider type.

But except for some weird architectures, you needn't worry about the
non-availability of twos-complement.

santosh
Guest
Posts: n/a

 12-25-2007
somenath wrote:

<snip>

> I am unable to understand the conversation
> Let me explain my understanding .
>
> int x = 0xFFFFFFF0;
> signed char y;
> y = x;
>
> In these three statements y is guaranteed to contain the last byte of
> the integer.

No it's not. How can it be, when 'x' isn't guaranteed to hold
0xFFFFFFF0 in the first place? Use a smaller value like 0x7fff.

> As my machine is Little endian y wil be equal to F0 i.e
> 240 in decimal.
> Now when I print using %x it should print the hexadecimal
> representation of 240 i.e F0 .Why it is printing fffffff0 ?

Because of a format specifier mismatch. As a variadic argument, y is
promoted to int by the default argument promotions, so printf receives
-16 represented at the full width of an int, and %x prints that bit
pattern. Use the 'hhx' format specifier and try again. Also use a cast
for the assignment to 'y'.

Harald van Dĳk
Guest
Posts: n/a

 12-25-2007
On Tue, 25 Dec 2007 16:34:53 +0530, santosh wrote:
> This is not guaranteed for plain signed char. You could test for the
> presence of int8_t and use that if available, but it's likely not to be
> available on a non-twos-complement machine, in which case you'll have to
> use a wider type.

If CHAR_BIT == 8, then char is an integer type with a width of 8 bits,
and when such an integer type existed, the (u)int8_t typedefs would have
to be provided. This made the optionality obviously useless, and an
attempt has already been made to correct it. Where 7.18.1.1 originally
read

"These types are optional. However, if an implementation provides
integer types with widths of 8, 16, 32, or 64 bits, it shall define
the corresponding typedef names."

it now reads

"These types are optional. However, if an implementation provides
integer types with widths of 8, 16, 32, or 64 bits, no padding bits,
and (for the signed types) that have a two's complement
representation, it shall define the corresponding typedef names."

-- but, if CHAR_BIT == 8, unsigned char is still an unsigned integer type
with a width of 8 bits and no padding, meaning uint8_t is required to be
provided. And when uint8_t is provided, int8_t is required as well.

In other words, while you're quite possibly correct about the intent,
what the standard actually says is significantly different.