JKop wrote:

> I've just realized something:
>
> char >= 8 bits
> short int >= 16 bits
> int >= 16 bits
> long int >= 32 bits

Yes.

> And:
>
> short int >= int >= long int

Uhm, no, it's the other way around: short int <= int <= long int. But I
guess it's just a typo.

> On WinXP, it's like so:
>
> char : 8 bits
> short : 16 bits
> int : 32 bits
> long : 32 bits
>
> Anyway,
>
> Since there's a minimum, why haven't they just been given definite
> values, like:
>
> char : 8 bits
> short : 16 bits
> int : 32 bits
> long : 64 bits

Because there are other platforms for which other sizes may fit better.
There are even systems that only support data types whose size is a
multiple of 24 bits. C++ can still be implemented on those, because the
size requirements in the standard don't have fixed values. Also, int is
supposed (though not required) to be the machine's native type, i.e. the
fastest one. On 64-bit platforms, it often isn't, though.

> or maybe even names like:
>
> int8
> int16
> int32
> int64

C99 has something like this in the header <stdint.h>. It further defines
the smallest and fastest integer types with a specific minimum size, like:

int_fast16_t
int_least32_t

This is a good thing, because an exact size is only rarely needed. Most
often, you don't care about the exact size as long as you get the fastest
or smallest type that provides at least a certain range.

> And so then if you want a greater amount of distinct possible values,
> there'll be standard library classes. For instance, if you want a
> 128-bit integer, then you're looking for a data type that can store
> approx. 3e+38 distinct values. Well... if a 64-bit integer can store
> approx. 1e+19 values, then put two together and voilà, you've got a
> 128-bit number:
>
> class int128
> {
> private:
>     int64 a;
>     int64 b;
>     //and so on
> };

> Or while I'm thinking about that, why not be able to specify whatever
> size you want, as in:
>
> int8 number_of_sisters;
> int16 population_my_town;
> int32 population_of_asia;
> int64 population_of_earth;
>
> or maybe even:
>
> int40 population_of_earth;

> Some people may find that this is a bit ludicrous, but you can do it
> already yourself with classes: if you want a 16,384-bit number, then
> all you need to do is:
>
> class int16384
> {
> private:
>     unsigned char data[2048];
>
>     //and so on
> };

> Or maybe even be able to specify how many distinct possible
> combinations you need. So for unicode:
>
> unsigned int<65536> username[15];
>
> This all seems so simple in my head - why can't it just be as so!

It isn't as simple as you might think. If it were, you could just start
writing a proof-of-concept implementation.
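For what it's worth, the easy part of such a proof of concept, addition
of two 64-bit halves with carry propagation, could be sketched like this
(the name uint128 is my own; a real class would also need subtraction,
multiplication, division, comparisons, shifts, and so on, which is where
the actual work lies):

```cpp
#include <stdint.h>

// Hypothetical 128-bit unsigned integer built from two 64-bit halves.
struct uint128
{
    uint64_t hi; // most significant 64 bits
    uint64_t lo; // least significant 64 bits
};

uint128 add(uint128 a, uint128 b)
{
    uint128 r;
    r.lo = a.lo + b.lo;             // unsigned addition wraps modulo 2^64
    uint64_t carry = (r.lo < a.lo); // the low half overflowed iff it shrank
    r.hi = a.hi + b.hi + carry;
    return r;
}
```

Adding 1 to a value whose low half is all ones correctly carries into the
high half. Once you try multiplication or division on such a type, the
"it's so simple" impression tends to evaporate.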