Velocity Reviews - Computer Hardware Reviews

Why in stdint.h have both least and fast integer types?

 
 
GS
 
      11-27-2004
The stdint.h header definition mentions five integer categories,

1) exact width, eg., int32_t
2) at least as wide as, eg., int_least32_t
3) as fast as possible but at least as wide as, eg., int_fast32_t
4) integer capable of holding a pointer, intptr_t
5) widest integer in the implementation, intmax_t

Is there a valid motivation for having both int_least and int_fast?

--
TIA
 
 
 
 
 
Christian Bau
 
      11-27-2004
In article <(E-Mail Removed) >,
(E-Mail Removed) (GS) wrote:

> The stdint.h header definition mentions five integer categories,
>
> 1) exact width, eg., int32_t
> 2) at least as wide as, eg., int_least32_t
> 3) as fast as possible but at least as wide as, eg., int_fast32_t
> 4) integer capable of holding a pointer, intptr_t
> 5) widest integer in the implementation, intmax_t
>
> Is there a valid motivation for having both int_least and int_fast?


Of course. If 16 bit integers are slow in your hardware, and 32 bit
integers are fast, then you would want int_least16_t to be 16 bit, and
int_fast16_t to be 32 bit. That covers about every computer that you can
buy in a shop.
 
 
 
 
 
James Harris
 
      11-27-2004

"Christian Bau" <(E-Mail Removed)> wrote in message
news:(E-Mail Removed)...
> In article <(E-Mail Removed) >,
> (E-Mail Removed) (GS) wrote:
>
>> The stdint.h header definition mentions five integer categories,
>> <snip>
>> Is there a valid motivation for having both int_least and int_fast?

>
> Of course. If 16 bit integers are slow in your hardware, and 32 bit
> integers are fast, then you would want int_least16_t to be 16 bit, and
> int_fast16_t to be 32 bit. That covers about every computer that you can
> buy in a shop.


Interesting example, but what advantage does int_least16_t really give? If
we are talking about a few scalars, wouldn't it be OK to let the compiler
represent them as int32s, since they are faster? If, on the other hand,
these were stored in arrays,
    int_least16_t fred[10000];
why not let the compiler choose whether to store them as int16 or int32,
depending on its optimization constraints?

--
James


 
 
Gordon Burditt
 
      11-27-2004
>>> The stdint.h header definition mentions five integer categories,
>>> <snip>
>>> Is there a valid motivation for having both int_least and int_fast?

>>
>> Of course. If 16 bit integers are slow in your hardware, and 32 bit
>> integers are fast, then you would want int_least16_t to be 16 bit, and
>> int_fast16_t to be 32 bit. That covers about every computer that you can
>> buy in a shop.

>
>Interesting example but what advantage does int_least16_t really give? If


Space savings.

>we are talking about a few scalars wouldn't it be OK to let the compiler
>represent them as int32s since they are faster?


The programmer asked for memory savings over speed savings by
using int_least16_t over int_fast16_t. Speed doesn't do much good
if the program won't fit in (virtual) memory.

The few scalars might be deliberately made the same type as that
of a big array (or disk file) used in another compilation unit.
One example of this is storing data in dbm files using a third-party
library. When you retrieve data from dbm files, you get back a
pointer to the data, but it seems like it's usually pessimally
aligned, and in any case the dbm functions do not guarantee alignment,
so the way to use it is to memcpy() to a variable/structure of the
same type, and access it there. This fails if different compilations
have different sizes for int_least16_t.

>If, on the other hand,
>these were stored in arrays
> int_least16_t fred [10000];
>why not let the compiler choose whether to store as int16 or int32,
>depending on it's optimization constraints?


sizeof(int_least16_t) must be the same in all compilation units
that get linked together to make a program. (of course, array
subscripting, allocating a variable or array of int_least16_t, and
pointer incrementing all implicitly use that size) The optimizer
doesn't get much info on what size to make int_least16_t when the
only reference to it is:

void *vp;
size_t record_count;

qsort(vp, record_count, sizeof(int_least16_t), compar);

Yet even with only that much information, the compiler *MUST* choose the
size now - perhaps before the part that actually allocates the array vp
points at has even been written.

Gordon L. Burditt
 
 
Charlie Gordon
 
      11-29-2004
"James Harris" <no.email.please> wrote in message
news:41a8a7f2$0$1068$(E-Mail Removed)...
>
> "Christian Bau" <(E-Mail Removed)> wrote in message
> news:(E-Mail Removed)...
> > In article <(E-Mail Removed) >,
> > (E-Mail Removed) (GS) wrote:

<snip>
> > <snip>

>
> Interesting example but what advantage does int_least16_t really give? If
> we are talking about a few scalars wouldn't it be OK to let the compiler
> represent them as int32s since they are faster? If, on the other hand,
> these were stored in arrays
> int_least16_t fred [10000];
> why not let the compiler choose whether to store as int16 or int32,
> depending on it's optimization constraints?


That would create incompatibilities between modules compiled with different
optimisation settings: a horrible side effect that would cause unlimited
headaches!

My understanding is that int16_t must be exactly 16 bits. int_least16_t
should be the practical choice on machines where 16-bit ints have to be
emulated, for instance, but otherwise would still be implemented as 16-bit
ints, whereas int_fast16_t would only be 16 bits if that's the fastest
option.

There really is more than just the speed/size tradeoff: practical/precise is
another dimension to take into account.

--
Chqrlie.


 
 
Kevin Bracey
 
      11-29-2004
In message <(E-Mail Removed) >
(E-Mail Removed) (GS) wrote:

> The stdint.h header definition mentions five integer categories,
>
> 1) exact width, eg., int32_t
> 2) at least as wide as, eg., int_least32_t
> 3) as fast as possible but at least as wide as, eg., int_fast32_t
> 4) integer capable of holding a pointer, intptr_t
> 5) widest integer in the implementation, intmax_t
>
> Is there a valid motivation for having both int_least and int_fast?


The point you missed is that the _least types are supposed to be the
*smallest* types at least as wide, as opposed to the *fastest*, which are
designated by _fast.

A typical example might be the ARM, which (until ARMv4) had no 16-bit memory
access instructions, and still has only 32-bit registers and arithmetic
instructions. There int_least16_t would be 16-bit, but int_fast16_t might be
32-bit.

How you decide what's "fastest" is the tricky bit.

In a function, code like:

uint16_t a, b, c;

a = b + c;

would be slow on the ARM, because it would have to perform a 32-bit
addition, and then manually trim the excess high bits off. Using
uint_fast16_t would have avoided that.[*]
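
The contrast can be sketched like this (on the ARM-like machine described above, the first version needs an extra narrowing step after the add; the second can stay at register width):

```c
#include <stdint.h>

/* Exact width: the result must be trimmed back to 16 bits,
   which costs extra instructions on a 32-bit-only ALU. */
uint16_t add_exact(uint16_t b, uint16_t c)
{
    return (uint16_t)(b + c);   /* wraps modulo 2^16 */
}

/* Fast type: may well be 32 bits wide, so no trimming is needed. */
uint_fast16_t add_fast(uint_fast16_t b, uint_fast16_t c)
{
    return b + c;
}
```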

On the other hand, if you had an array of 2000 such 32-bit int_fast16_t
values you were working on, having them as 16-bit might actually be faster
because they fit in the cache better, regardless of the extra core CPU
cycles to manipulate them.

That observation is likely to be true for pretty much any cached processor
where int_fastXX != int_leastXX, so as a programmer it's probably going to
be a good idea to always use int_leastXX for arrays of any significant
size.

[*] Footnote - some good ARM compilers have "significant bit tracking" that
can actually figure out when such narrowing is mathematically
unnecessary.

--
Kevin Bracey, Principal Software Engineer
Tematic Ltd Tel: +44 (0) 1223 503464
182-190 Newmarket Road Fax: +44 (0) 1728 727430
Cambridge, CB5 8HE, United Kingdom WWW: http://www.tematic.com/
 
 
James Harris
 
      12-01-2004

"Gordon Burditt" <(E-Mail Removed)> wrote in message
news:coaq1f$(E-Mail Removed)...
>>>> The stdint.h header definition mentions five integer categories,
>>>> <snip>
>>>> Is there a valid motivation for having both int_least and int_fast?
>>>
>>> <snip>

>>
>>Interesting example but what advantage does int_least16_t really give? If

>
> Space savings.


For scalars?

>>we are talking about a few scalars wouldn't it be OK to let the compiler
>>represent them as int32s since they are faster?

>
> The programmer asked for memory savings over speed savings by
> using int_least16_t over int_fast16_t. Speed doesn't do much good
> if the program won't fit in (virtual) memory.


You expect to run out of memory? If that is really a problem why not use
int16_t?

More to the point, memory constraints are more likely to be a feature of
PICs or similar. In that case I would want to be able to tell the compiler
to fit the code in X words but to still optimize to be as fast as possible.

> The few scalars might be deliberately made the same type as that
> of a big array (or disk file) used in another compilation unit.
> One example of this is storing data in dbm files using a third-party
> library. When you retrieve data from dbm files, you get back a
> pointer to the data, but it seems like it's usually pessimally
> aligned, and in any case the dbm functions do not guarantee alignment,
> so the way to use it is to memcpy() to a variable/structure of the
> same type, and access it there. This fails if different compilations
> have different sizes for int_least16_t.


Agreed, but surely it is better to define the interface using int16_t. I
expect that int_least16_t would differ between implementations, making
them incompatible with each other. This is an argument against the
presence of int_least16_t.

>>If, on the other hand,
>>these were stored in arrays
>> int_least16_t fred [10000];
>>why not let the compiler choose whether to store as int16 or int32,
>>depending on it's optimization constraints?

>
> sizeof(int_least16_t) must be the same in all compilation units
> that get linked together to make a program. (of course, array
> subscripting, allocating a variable or array of int_least16_t, and
> pointer incrementing all implicitly use that size) The optimizer
> doesn't get much info on what size to make int_least16_t when the
> only reference to it is:
>
> void *vp;
> size_t record_count;
>
> qsort(vp, record_count, sizeof(int_least16_t), compar);
>
> However, using that information, the compiler *MUST* choose now.
> Perhaps before the part that actually allocates the array vp points
> at is even written.


Again, perhaps this is better written as int16_t, though I am beginning to
see there could be benefits to separating int_fast16_t.

--
Cheers,
James




 
 
 
James Harris
 
      12-01-2004

"Charlie Gordon" <(E-Mail Removed)> wrote in message
news:cof17g$omq$(E-Mail Removed)...
<snip>
>> <snip>

>
> That would create incompatibilities between modules compiled with
> different optimisation settings: a horrible side effect that would
> cause unlimited headaches!


But isn't that exactly what int_least16_t does? It requires compilation
under the same rules for all modules which are to be linked together (and
that share data). Otherwise chaos will ensue. Given that the compilation
rules must match why have the three types of 16-bit integer? I can see the
need for two,

1) an integer that is at least N bits wide but upon which operations are as
fast as possible,
2) an integer that behaves as if it is exactly N bits wide - for shifts
etc.,

but I'm not sure about having a third option. This seems a bit baroque and
not in keeping with the lean nature that is the essence of C. It also seems
to me to confuse the performance vs. space issue with program logic. Is
this a set of data types designed by committee? I wonder what Ken Thompson
and Dennis Ritchie make of it.

> My understanding is that int16_t must be exactly 16 bits.
> int_least16_t should be the practical choice on machines where 16 bit
> ints have to be emulated for instance, but otherwise would still be
> implemented as 16 bit ints, whereas int_fast16_t would only be 16 bits
> if that's the fastest option.
>
> There really is more than just the speed/size tradeoff:
> practical/precise is another dimension to take into account.


Agreed.


 
 
James Harris
 
      12-01-2004

"Kevin Bracey" <(E-Mail Removed)> wrote in message
news:(E-Mail Removed)...
> In message <(E-Mail Removed) >
> (E-Mail Removed) (GS) wrote:
>
>> The stdint.h header definition mentions five integer categories,
>> <snip>
>> Is there a valid motivation for having both int_least and int_fast?

>
> The point you missed is that the _least types are supposed to be the
> *smallest* types at least as wide, as opposed to the *fastest*, which are
> designated by _fast.


The *smallest* type at least as wide as 16 bits is of width 16, no? If it
is impossible to support an integer of width 16 (an 18-bit word, for
instance) how does the implementation deal with this standard's int16_t?

> A typical example might be the ARM, which (until ARMv4) had no 16-bit
> memory access instructions, and still has only 32-bit registers and
> arithmetic instructions. There int_least16_t would be 16-bit, but
> int_fast16_t might be 32-bit.
>
> How you decide what's "fastest" is the tricky bit.


Absolutely! There is no point making a data type "fast" if it is to be
repeatedly compared with values which are not the same width. Of course,
operations are fast or slow, not data values. Is the standard confusing two
orthogonal issues?

> In a function, code like:
>
> uint16_t a, b, c;
>
> a = b + c;
>
> would be slow on the ARM, because it would have to perform a 32-bit
> addition, and then manually trim the excess high bits off. Using
> uint_fast16_t would have avoided that.[*]


Yes, I think I'm coming round to having one that behaves as if it is
exactly 16 bits and another that behaves as if it has at least 16 bits.

> On the other hand, if you had an array of 2000 such 32-bit int_fast16_t
> values you were working on, having them as 16-bit might actually be
> faster because they fit in the cache better, regardless of the extra
> core CPU cycles to manipulate them.
>
> That observation is likely to be true for pretty much any cached
> processor where int_fastXX != int_leastXX, so as a programmer it's
> probably going to be a good idea to always use int_leastXX for arrays
> of any significant size.


I can see your point here. It's a subtlety. I still wonder, though, if I
wouldn't prefer to specify that array as int16_t. Specifying int_least16_t
makes me a hostage to the compiler. If I am taking into account the
architecture of the underlying machine (in this case, primary cache size)
wouldn't I be better off writing more precise requirements than int_leastX_t?


 