C99 integer types

 
 
Ronald Landheer-Cieslak
      07-30-2012
Barry Schwarz <(E-Mail Removed)> wrote:
> On Saturday, July 28, 2012 8:12:23 PM UTC-5, justinx wrote:
>> Hi all,
>>
>> I have a question that I have been unable to answer. It is bugging me.
>> It is specifically related to the C99 integer types.
>>

> <snip>
>>
>> To be very specific. I am working with an STM32F103RCT6 Cortex-M3. The C
>> compiler I am using is Code Sourcery G++ Lite 2011.03-42 (4.5.2).
>>
>> In <stdint.h> the following types are defined:
>>
>> typedef unsigned int uint_fast32_t;
>> typedef uint32_t uint_least32_t;
>> typedef unsigned long uint32_t;
>>
>> -. On this platform sizeof(int) == sizeof(long). Why are the new types
>> not all unsigned ints or all unsigned longs?
>>
>> -. What I find even more interesting is that the underlying type of
>> uint_least32_t (unsigned long) is larger than (or at least equal to)
>> uint_fast32_t (unsigned int). This does not seem logical to me. Surely
>> the least-width integer type would use the smallest basic type possible.
>>
>> Any insight into the selection process for establishing the underlying
>> data types for the fixed, minimum and fastest width types would be great.
>>
>> This leads to one other question. If sizeof(int) == sizeof(long), is
>> there ANY difference (performance or otherwise) in using one over the other?

>
> While the contributors to this group may not be able to infer why the
> designers chose what they did, that does not imply the absence of a rationale.
> For example, while they are the same size, unsigned int and unsigned long
> have different conversion ranks. This may make a difference in the
> generated code and may have driven the compiler writers to choose one over the other.


Excuse my ignorance, but what's a "conversion rank"?

Thx

rlc


--
Software analyst & developer -- http://rlc.vlinder.ca
 
Barry Schwarz
      07-30-2012
On Monday, July 30, 2012 8:53:28 AM UTC-5, Ronald Landheer-Cieslak wrote:
>
> Excuse my ignorance, but what's a "conversion rank"?


n1256 is a freely available, reasonably current draft of the C99 standard. Check section 6.3.1.1.
 
Eric Sosman
      07-30-2012
On 7/30/2012 9:53 AM, Ronald Landheer-Cieslak wrote:
> Barry Schwarz <(E-Mail Removed)> wrote:
>>[...]
>> While the contributors to this group may not be able to infer why the
>> designers chose what they did, that does not imply the absence of a rationale.
>> For example, while they are the same size, unsigned int and unsigned long
>> have different conversion ranks. This may make a difference in the
>> generated code and may have driven the compiler writers to choose one over the other.

>
> Excuse my ignorance, but what's a "conversion rank"?


Shorthand for "integer conversion rank," of course.

C sometimes needs to convert values from one type to another
before working with them. For example, you cannot compare an
`unsigned short' and a `signed int' as they stand; you must first
convert them to a common type and then compare the converted values.
But what type should be chosen? Plausible arguments could be made
for any of `unsigned short' or `signed int' or `unsigned int' or
even `unsigned long', depending on the relative "sizes" of these
types on the machine at hand.

"Integer conversion rank" formalizes this notion of "size."
In the old days there were only a few integer types and it was
easy to enumerate the possible combinations. Things got more
complicated when C99 not only introduced new integer types, but
made the set open-ended: An implementation might support types
like `int24_t' or `uint_least36_t', and we need to know where
these fit with respect to each other and to generic types like
`long'. For example, when you divide a `uint_least36_t' by a
`long', what conversions occur? Inquiring masochists want to know.

To this end, each integer type has an "integer conversion rank"
that establishes a pecking order. Roughly speaking, "narrow" types
have low ranks and "wide" types have higher ranks. It's all in
section 6.3.1.1 of the Standard, which takes quite a bit of prose
to express this "narrow versus wide" idea precisely -- but that's
really all it's doing: narrow versus wide, and how to handle ties.

Eventually, when C needs to perform "integer promotions" or
"usual arithmetic conversions," its choice of target type for
integers is driven by the integer conversion rank(s) of the original
type(s) involved.
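
To make it concrete, here's a minimal sketch. (It assumes a typical
implementation where `short' is 16 bits and `int' is 32 bits; on such
a machine `int' can represent every `unsigned short' value, which
decides the first promotion.)

#include <stdio.h>

int main(void)
{
    unsigned short us = 1;
    signed int si = -1;
    unsigned int ui = 1;

    /* Integer promotions: `us' becomes `int' (value preserved,
     * since int can hold all unsigned short values here), so
     * this compares -1 < 1 and prints 1. */
    printf("%d\n", si < us);

    /* Same rank, one operand unsigned: `si' converts to
     * `unsigned int', i.e. -1 becomes UINT_MAX, so this
     * compares UINT_MAX < 1 and prints 0. */
    printf("%d\n", si < ui);
    return 0;
}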

--
Eric Sosman
(E-Mail Removed)d
 
Ronald Landheer-Cieslak
      07-30-2012
Eric Sosman <(E-Mail Removed)> wrote:
> On 7/30/2012 9:53 AM, Ronald Landheer-Cieslak wrote:
>> Barry Schwarz <(E-Mail Removed)> wrote:
>>> [...]
>>> While the contributors to this group may not be able to infer why the
>>> designers chose what they did, that does not imply the absence of a rationale.
>>> For example, while they are the same size, unsigned int and unsigned long
>>> have different conversion ranks. This may make a difference in the
>>> generated code and may have driven the compiler writers to choose one over the other.

>>
>> Excuse my ignorance, but what's a "conversion rank"?

>
> Shorthand for "integer conversion rank," of course.
>
> C sometimes needs to convert values from one type to another
> before working with them. For example, you cannot compare an
> `unsigned short' and a `signed int' as they stand; you must first
> convert them to a common type and then compare the converted values.
> But what type should be chosen? Plausible arguments could be made
> for any of `unsigned short' or `signed int' or `unsigned int' or
> even `unsigned long', depending on the relative "sizes" of these
> types on the machine at hand.
>
> "Integer conversion rank" formalizes this notion of "size."
> In the old days there were only a few integer types and it was
> easy to enumerate the possible combinations. Things got more
> complicated when C99 not only introduced new integer types, but
> made the set open-ended: An implementation might support types
> like `int24_t' or `uint_least36_t', and we need to know where
> these fit with respect to each other and to generic types like
> `long'. For example, when you divide a `uint_least36_t' by a
> `long', what conversions occur? Inquiring masochists want to know.
>
> To this end, each integer type has an "integer conversion rank"
> that establishes a pecking order. Roughly speaking, "narrow" types
> have low ranks and "wide" types have higher ranks. It's all in
> section 6.3.1.1 of the Standard, which takes quite a bit of prose
> to express this "narrow versus wide" idea precisely -- but that's
> really all it's doing: narrow versus wide, and how to handle ties.
>
> Eventually, when C needs to perform "integer promotions" or
> "usual arithmetic conversions," its choice of target type for
> integers is driven by the integer conversion rank(s) of the original
> type(s) involved.


OK, so it basically formalizes the conversions that the integer types go
through to end up with either something useful or something
implementation-defined, or both.

Reading the draft Barry pointed to, it doesn't seem to actually change any
of the rules as they were before - just formalizes them, is that right? (and
comparing a negative signed short to an unsigned long still yields
implementation-defined results).

Thanks,

rlc

--
Software analyst & developer -- http://rlc.vlinder.ca
 
Ronald Landheer-Cieslak
      07-30-2012
Barry Schwarz <(E-Mail Removed)> wrote:
> On Monday, July 30, 2012 8:53:28 AM UTC-5, Ronald Landheer-Cieslak wrote:
>>
>> Excuse my ignorance, but what's a "conversion rank"?

>
> n1256 is a freely available, reasonably current draft of the C99 standard.
> Check section 6.3.1.1.

thx

--
Software analyst & developer -- http://rlc.vlinder.ca
 
James Kuyper
      07-30-2012
On 07/30/2012 02:26 PM, Ronald Landheer-Cieslak wrote:
> Eric Sosman <(E-Mail Removed)> wrote:

....
>> Eventually, when C needs to perform "integer promotions" or
>> "usual arithmetic conversions," its choice of target type for
>> integers is driven by the integer conversion rank(s) of the original
>> type(s) involved.

>
> OK, so it basically formalizes the conversions that the integer types go
> through to end up with either something useful or something
> implementation-defined, or both.
>
> Reading the draft Barry pointed to, it doesn't seem to actually change any
> of the rules as they were before - just formalizes them, is that right?


I believe that integer conversion rank was put into the very first
version of the C standard, nearly a quarter century ago. Before that
time, different compilers implemented different rules. So while it would
be accurate to say that the rules were formalized, it would be
inaccurate to say that there was no change: some compilers implemented
rules that differed from the formalized version of the rules. I'm not
sure whether any of them implemented exactly the rules that were
formalized, though I think it's likely that some did.

> (and
> comparing a negative signed short to an unsigned long still yields
> implementation-defined results).


In such a comparison, the negative signed short value is first promoted
to an 'int', without change in value. Then that value is converted to
unsigned long. That conversion is well-defined: it is performed by
adding ULONG_MAX+1 to the negative value, with a result that is
necessarily representable as unsigned long. That result is then compared
with the other unsigned long value.
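
For instance (a small sketch; the concrete number printed depends on
the implementation-defined value of ULONG_MAX, but the arithmetic
itself is fully specified):

#include <limits.h>
#include <stdio.h>

int main(void)
{
    short s = -5;

    /* Promotion to `int' preserves the value -5; conversion to
     * `unsigned long' then adds ULONG_MAX + 1, so the result is
     * ULONG_MAX - 4 on every conforming implementation. */
    unsigned long ul = (unsigned long)s;

    printf("%lu\n", ul);                   /* ULONG_MAX - 4 */
    printf("%d\n", ul == ULONG_MAX - 4);   /* always prints 1 */
    return 0;
}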

Since the value of ULONG_MAX is implementation-defined, the result could
be described as implementation-defined, but once the value for ULONG_MAX
has been defined by the implementation, the standard gives the
implementation no additional freedom when performing that comparison.

Does that correspond with what you meant?
 
Eric Sosman
      07-30-2012
On 7/30/2012 2:26 PM, Ronald Landheer-Cieslak wrote:
> Eric Sosman <(E-Mail Removed)> wrote:
>> On 7/30/2012 9:53 AM, Ronald Landheer-Cieslak wrote:
>>> Barry Schwarz <(E-Mail Removed)> wrote:
>>>> [...]
>>>> While the contributors to this group may not be able to infer why the
>>>> designers chose what they did, that does not imply the absence of a rationale.
>>>> For example, while they are the same size, unsigned int and unsigned long
>>>> have different conversion ranks. This may make a difference in the
>>>> generated code and may have driven the compiler writers to choose one over the other.
>>>
>>> Excuse my ignorance, but what's a "conversion rank"?

>>
>> Shorthand for "integer conversion rank," of course.
>>
>> C sometimes needs to convert values from one type to another
>> before working with them. For example, you cannot compare an
>> `unsigned short' and a `signed int' as they stand; you must first
>> convert them to a common type and then compare the converted values.
>> But what type should be chosen? Plausible arguments could be made
>> for any of `unsigned short' or `signed int' or `unsigned int' or
>> even `unsigned long', depending on the relative "sizes" of these
>> types on the machine at hand.
>>
>> "Integer conversion rank" formalizes this notion of "size."
>> In the old days there were only a few integer types and it was
>> easy to enumerate the possible combinations. Things got more
>> complicated when C99 not only introduced new integer types, but
>> made the set open-ended: An implementation might support types
>> like `int24_t' or `uint_least36_t', and we need to know where
>> these fit with respect to each other and to generic types like
>> `long'. For example, when you divide a `uint_least36_t' by a
>> `long', what conversions occur? Inquiring masochists want to know.
>>
>> To this end, each integer type has an "integer conversion rank"
>> that establishes a pecking order. Roughly speaking, "narrow" types
>> have low ranks and "wide" types have higher ranks. It's all in
>> section 6.3.1.1 of the Standard, which takes quite a bit of prose
>> to express this "narrow versus wide" idea precisely -- but that's
>> really all it's doing: narrow versus wide, and how to handle ties.
>>
>> Eventually, when C needs to perform "integer promotions" or
>> "usual arithmetic conversions," its choice of target type for
>> integers is driven by the integer conversion rank(s) of the original
>> type(s) involved.

>
>> OK, so it basically formalizes the conversions that the integer types go
> through to end up with either something useful or something
> implementation-defined, or both.
>
> Reading the draft Barry pointed to, it doesn't seem to actually change any
>> of the rules as they were before - just formalizes them, is that right?


Pretty much, yes: It's formalized to make it work with
implementation-defined integer types the Standard doesn't know
about, or doesn't require, or doesn't fully specify.

> (and
> comparing a negative signed short to an unsigned long still yields
> implementation-defined results).


Within limits, yes. Let's work through it:

- First, we consult 6.5.8 for the relational operators, and
learn that the "usual arithmetic conversions" apply to
both operands.

- Over to 6.3.1.8 for the UAC's, where we learn that the
"integer promotions" are performed on each operand,
independently.

- 6.3.1.1 describes the IP's. We find that `unsigned long'
is unaffected. It takes a little more research, but we
eventually find that `signed short' converts to `int'.

- 6.3.1.3 tells us that this conversion preserves the
original value, so we now have the `int' whose value
is the same as that of the original `signed short'.

- Back to 6.3.1.8 again to continue with the UAC's, now with
an `unsigned long' and an `int' and working through the
second level of "otherwise." There we find that we've got
one signed and one unsigned operand, *and* the unsigned
operand has the higher rank (consult 6.3.1.1 again). This
tells us we must convert the `int' again, this time to
`unsigned long'.

- Over to 6.3.1.3 again for the details of the conversion,
and if the `int' is negative we must use "the maximum
value that can be represented in the new type" to finish
converting. ULONG_MAX is an implementation-defined value,
so this is where implementation-definedness creeps in.

- ... and we're back to 6.5.8, with two `unsigned long' values,
which we know how to compare.
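
In code, the whole chain looks like this (a sketch; only the specific
value of ULONG_MAX is implementation-defined here):

#include <stdio.h>

int main(void)
{
    short ss = -1;         /* the negative signed short */
    unsigned long ul = 1;  /* the unsigned long operand */

    /* 6.3.1.1: `ss' is promoted to `int', value still -1.
     * 6.3.1.8: `int' vs. `unsigned long' -- the unsigned operand
     *          has the higher rank, so the `int' converts.
     * 6.3.1.3: the conversion adds ULONG_MAX + 1, giving ULONG_MAX.
     * 6.5.8:   ULONG_MAX < 1 is false, so this prints 0. */
    printf("%d\n", ss < ul);
    return 0;
}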

Seems like quite a lot of running around for a "simple" matter,
but consider: Before the first ANSI Standard nailed things down,
different C implementations disagreed on how some comparisons should
be done! Both the "unsigned preserving" and "value preserving" camps
(see the Rationale) would have agreed on the particular example we've
just worked through, but would have produced different results for
some other comparisons. The Standard's complicated formalisms --
including "integer conversion rank" -- are part of an attempt to
eliminate or at least minimize such disagreements.

--
Eric Sosman
(E-Mail Removed)d
 
Ronald Landheer-Cieslak
      07-31-2012
James Kuyper <(E-Mail Removed)> wrote:
> On 07/30/2012 02:26 PM, Ronald Landheer-Cieslak wrote:
>> Eric Sosman <(E-Mail Removed)> wrote:

> ...
>>> Eventually, when C needs to perform "integer promotions" or
>>> "usual arithmetic conversions," its choice of target type for
>>> integers is driven by the integer conversion rank(s) of the original
>>> type(s) involved.

>>
>> OK, so it basically formalizes the conversions that the integer types go
>> through to end up with either something useful or something
>> implementation-defined, or both.
>>
>> Reading the draft Barry pointed to, it doesn't seem to actually change any
>> of the rules as they were before - just formalizes them, is that right?

>
> I believe that integer conversion rank was put into the very first
> version of the C standard, nearly a quarter century ago. Before that
> time, different compilers implemented different rules. So while it would
> be accurate to say that the rules were formalized, it would be
> inaccurate to say that there was no change: some compilers implemented
> rules that differed from the formalized version of the rules. I'm not
> sure whether any of them implemented exactly the rules that were
> formalized, though I think it's likely that some did.
>
> (and
>> comparing a negative signed short to an unsigned long still yields
>> implementation-defined results).

>
> In such a comparison, the negative signed short value is first promoted
> to an 'int', without change in value. Then that value is converted to
> unsigned long. That conversion is well-defined: it is performed by
> adding ULONG_MAX+1 to the negative value, with a result that is
> necessarily representable as unsigned long. That result is then compared
> with the other unsigned long value.
>
> Since the value of ULONG_MAX is implementation-defined, the result could
> be described as implementation-defined, but once the value for ULONG_MAX
> has been defined by the implementation, the standard gives the
> implementation no additional freedom when performing that comparison.
>
> Does that correspond with what you meant?

Yes. That and the fact that due to the addition in 6.3.1.3p2 the result
differs on systems depending on how signed integers are implemented (i.e. it
works as expected only for two's complement signed integers).

Thanks,

rlc

--
Software analyst & developer -- http://rlc.vlinder.ca
 
Ronald Landheer-Cieslak
      07-31-2012
Eric Sosman <(E-Mail Removed)> wrote:
> On 7/30/2012 2:26 PM, Ronald Landheer-Cieslak wrote:
>> Eric Sosman <(E-Mail Removed)> wrote:
>>> On 7/30/2012 9:53 AM, Ronald Landheer-Cieslak wrote:
>>>> Barry Schwarz <(E-Mail Removed)> wrote:
>>>>> [...]
>>>>> While the contributors to this group may not be able to infer why the
>>>>> designers chose what they did, that does not imply the absence of a rationale.
>>>>> For example, while they are the same size, unsigned int and unsigned long
>>>>> have different conversion ranks. This may make a difference in the
>>>>> generated code and may have driven the compiler writers to choose one over the other.
>>>>
>>>> Excuse my ignorance, but what's a "conversion rank"?
>>>
>>> Shorthand for "integer conversion rank," of course.
>>>
>>> C sometimes needs to convert values from one type to another
>>> before working with them. For example, you cannot compare an
>>> `unsigned short' and a `signed int' as they stand; you must first
>>> convert them to a common type and then compare the converted values.
>>> But what type should be chosen? Plausible arguments could be made
>>> for any of `unsigned short' or `signed int' or `unsigned int' or
>>> even `unsigned long', depending on the relative "sizes" of these
>>> types on the machine at hand.
>>>
>>> "Integer conversion rank" formalizes this notion of "size."
>>> In the old days there were only a few integer types and it was
>>> easy to enumerate the possible combinations. Things got more
>>> complicated when C99 not only introduced new integer types, but
>>> made the set open-ended: An implementation might support types
>>> like `int24_t' or `uint_least36_t', and we need to know where
>>> these fit with respect to each other and to generic types like
>>> `long'. For example, when you divide a `uint_least36_t' by a
>>> `long', what conversions occur? Inquiring masochists want to know.
>>>
>>> To this end, each integer type has an "integer conversion rank"
>>> that establishes a pecking order. Roughly speaking, "narrow" types
>>> have low ranks and "wide" types have higher ranks. It's all in
>>> section 6.3.1.1 of the Standard, which takes quite a bit of prose
>>> to express this "narrow versus wide" idea precisely -- but that's
>>> really all it's doing: narrow versus wide, and how to handle ties.
>>>
>>> Eventually, when C needs to perform "integer promotions" or
>>> "usual arithmetic conversions," its choice of target type for
>>> integers is driven by the integer conversion rank(s) of the original
>>> type(s) involved.

>>
>> OK, so it basically formalizes the conversions that the integer types go
>> through to end up with either something useful or something
>> implementation-defined, or both.
>>
>> Reading the draft Barry pointed to, it doesn't seem to actually change any
>> of the rules as they were before - just formalizes them, is that right?

>
> Pretty much, yes: It's formalized to make it work with
> implementation-defined integer types the Standard doesn't know
> about, or doesn't require, or doesn't fully specify.
>
>> (and
>> comparing a negative signed short to an unsigned long still yields
>> implementation-defined results).

>
> Within limits, yes. Let's work through it:
>
> - First, we consult 6.5.8 for the relational operators, and
> learn that the "usual arithmetic conversions" apply to
> both operands.
>
> - Over to 6.3.1.8 for the UAC's, where we learn that the
> "integer promotions" are performed on each operand,
> independently.
>
> - 6.3.1.1 describes the IP's. We find that `unsigned long'
> is unaffected. It takes a little more research, but we
> eventually find that `signed short' converts to `int'.
>
> - 6.3.1.3 tells us that this conversion preserves the
> original value, so we now have the `int' whose value
> is the same as that of the original `signed short'.
>
> - Back to 6.3.1.8 again to continue with the UAC's, now with
> an `unsigned long' and an `int' and working through the
> second level of "otherwise." There we find that we've got
> one signed and one unsigned operand, *and* the unsigned
> operand has the higher rank (consult 6.3.1.1 again). This
> tells us we must convert the `int' again, this time to
> `unsigned long'.
>
> - Over to 6.3.1.3 again for the details of the conversion,
> and if the `int' is negative we must use "the maximum
> value that can be represented in the new type" to finish
> converting. ULONG_MAX is an implementation-defined value,
> so this is where implementation-definedness creeps in.
>
> - ... and we're back to 6.5.8, with two `unsigned long' values,
> which we know how to compare.

A very thorough walk-through of the conversions indeed, thanks.

> Seems like quite a lot of running around for a "simple" matter,
> but consider: Before the first ANSI Standard nailed things down,
> different C implementations disagreed on how some comparisons should
> be done! Both the "unsigned preserving" and "value preserving" camps
> (see the Rationale) would have agreed on the particular example we've
> just worked through, but would have produced different results for
> some other comparisons. The Standard's complicated formalisms --
> including "integer conversion rank" -- are part of an attempt to
> eliminate or at least minimize such disagreements.

I didn't want to imply that I had any problem with the added complexity
(and I don't think I did): I understand very well that there's a real need to
specify in detail how these sorts of conversions need to work.

However, I think it only works as expected if the signed integer type is a
two's complement type. 6.2.6.2p2 allows for three representations, two of
which won't work as expected when ULONG_MAX + 1 is "repeatedly added" as in
6.3.1.3p2.

I've never worked with hardware that had anything other than two's
complement integers, but that is what I meant by the
"implementation-defined" bit.

Thanks,

rlc

--
Software analyst & developer -- http://rlc.vlinder.ca
 
Eric Sosman
      07-31-2012
On 7/30/2012 8:15 PM, Ronald Landheer-Cieslak wrote:
> James Kuyper <(E-Mail Removed)> wrote:
>>[...]
>> Since the value of ULONG_MAX is implementation-defined, the result could
>> be described as implementation-defined, but once the value for ULONG_MAX
>> has been defined by the implementation, the standard gives the
>> implementation no additional freedom when performing that comparison.
>>
>> Does that correspond with what you meant?

> Yes. That and the fact that due to the addition in 6.3.1.3p2 the result
> differs on systems depending on how signed integers are implemented (i.e. it
> works as expected only for two's complement signed integers).


The conversion rules are independent of representation, and
deal only with values. If you expect something different from
different ways of representing negative integers, you expect
incorrectly.
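
A sketch to underline the point -- this assertion must hold on any
conforming implementation, whether it uses two's complement, ones'
complement, or sign-and-magnitude, because the conversion is defined
in terms of the *value* -1, not its bit pattern:

#include <assert.h>
#include <limits.h>

int main(void)
{
    /* -1 + (ULONG_MAX + 1) == ULONG_MAX by the conversion rule
     * in 6.3.1.3p2; representation never enters into it. */
    assert((unsigned long)-1 == ULONG_MAX);
    return 0;
}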

--
Eric Sosman
(E-Mail Removed)d
 