Velocity Reviews - Computer Hardware Reviews

Velocity Reviews > Newsgroups > Programming > C++ > Intrinsic Minimums


Intrinsic Minimums

 
 
JKop
 
      07-23-2004
Mark A. Gibbs posted:

> of course, you should have no problem extending the
> iostreams, strings, etc. for the new character type ^_^. enjoy.


You're absolutely correct:

basic_string<unsigned long> stringie;


-JKop
 
 
 
 
 
JKop
 
      07-23-2004
Old Wolf posted:

> from a practical point of
> view you can assume 2's complement, ie. -32768 to 32767).
> FWIW the 3 supported systems are (for x > 0):
> 2's complement: -x == ~x + 1
> 1's complement: -x == ~x
> sign-magnitude: -x == x | (the sign bit)
>


Wouldn't that be -32,767 -> 32,768?

I assume that 1's complement is the one that has both positive and negative
0.


As for the sign-magnitude thingie, that's interesting!

unsigned short blah = 65535;

signed short slah = blah;

slah == -32767 ??


-JKop
 
 
 
 
 
John Harrison
 
      07-23-2004

"JKop" <(E-Mail Removed)> wrote in message
news:ma6Mc.5340$(E-Mail Removed)...
> Mark A. Gibbs posted:
>
> > of course, you should have no problem extending the
> > iostreams, strings, etc. for the new character type ^_^. enjoy.
>
> You're absolutely correct
>
> basic_string<unsigned long> stringie;
>

I think you'll also need a char_traits class.

basic_string<unsigned long, ul_char_traits> stringie;

john


 
 
Rolf Magnus
 
      07-23-2004
JKop wrote:

> John Harrison posted:
>
>> C requires short >= 16 bits, int >= 16 bits, long >= 32 bits. These
>> minimums are implied by the constraints given on INT_MIN, INT_MAX etc.
>> in <limits.h>. Presumably C++ inherits this from C.
>>
>> john
>
> I'm writing a prog that'll use Unicode. To represent a
> Unicode character, I need a data type that can be set to
> 65,536 distinct possible values;


No, you need more for full unicode support.

> which in today's world of computing equates to 16 bits. wchar_t is
> the natural choice, but is there any guarantee in the standard that
> it'll be 16 bits?


It doesn't need to be exactly 16 bit. It can be more. In g++, it's 32
bits.

> If not, then is unsigned short the way to go?
>
> This might sound a bit odd, but... if an unsigned short
> must be at least 16 bits, then does that *necessarily* mean
> that it:
>
> A) Must be able to hold 65,536 distinct values.
> B) And be able to store integers in the range 0 -> 65,535 ?


It's actually rather the other way round. It must explicitly be able to
hold at least the range from 0 to 65535, which implies a minimum of 16
bits.

> Furthermore, does a signed short int have to be able to
> hold a value between:
>
> A) -32,767 -> 32,768
>
> B) -32,768 -> 32,767


Neither.

> I've also heard that some systems are stupid enough (oops!
> I mean poorly enough designed) to have two values for zero,
> resulting in:
>
> -32,767 -> 32,767


That's the minimum range that a signed short int must support.

 
 
Rolf Magnus
 
      07-23-2004
JKop wrote:

>
> I've just realized something:
>
> char >= 8 bits
> short int >= 16 bits
> int >= 16 bits
> long int >= 32 bits


Yes.

> And:
>
> short int >= int >= long int


Uhm, no, but I guess it's just a typo; you mean short int <= int <= long int.

> On WinXP, it's like so:
>
> char : 8 bits
> short : 16 bits
> int : 32 bits
> long : 32 bits
>
>
> Anyway,
>
> Since there's a minimum, why haven't they just been given definite
> values, like:
>
> char : 8 bits
> short : 16 bits
> int : 32 bits
> long : 64 bits


Because there are other platforms for which other sizes may fit better.
There are even systems that only support data types whose size is a
multiple of 24 bits. C++ can still be implemented on those, because the
size requirements in the standard don't have fixed values. Also, int is
supposed (though not required) to be the machine's native type, the
fastest one. On 64-bit platforms, it often isn't, though.

> or maybe even names like:
>
> int8
> int16
> int32
> int64


C99 has something like this in the header <stdint.h>. It further defines
smallest and fastest integers with a specific minimum size, like:

int_fast16_t
int_least32_t

This is a good thing, because an exact size is only rarely needed. Most
often, you don't care about the exact size as long as you get the fastest
(or smallest) type that provides at least a certain range.

> And so then if you want a greater amount of distinct possible values,
> there'll be standard library classes. For instance, if you want a
> 128-Bit integer, then you're looking for a data type that can store
> 3e+38 approx. distinct values. Well... if a 64-Bit integer can store
> 1e+19 approx values, then put two together and voilà, you've got a
> 128-Bit number:
>
> class int128
> {
> private:
>
> int64 a;
> int64 b;
> //and so on
> };
>
> Or while I'm thinking about that, why not be able to specify whatever
> size you want, as in:
>
> int8 number_of_sisters;
>
> int16 population_my_town;
>
> int32 population_of_asia;
>
> int64 population_of_earth;
>
> or maybe even:
>
> int40 population_of_earth;
>
>
> Some people may find that this is a bit ludicrous, but you can do it
> already yourself with classes: if you want a 16,384 bit number, then
> all you need to do is:
>
> class int16384
> {
> private:
> unsigned char data[2048];
>
> //and so on
>
> };
>
> Or maybe even be able to specify how many distinct possible
> combinations you need. So for unicode:
>
> unsigned int<65536> username[15];
>
>
> This all seems so simple in my head - why can't it just be as so!


It isn't as simple as you might think. If it were, you could just start
writing a proof-of-concept implementation.
 
 
Nemanja Trifunovic
 
      07-23-2004
Old Wolf wrote:
> On MS windows, some compilers (eg. gcc) have 32-bit wchar_t and
> some (eg. Borland, Microsoft) have 16-bit. On all other systems
> that I've encountered, it is 32-bit.
>
> This is quite annoying (for people whose language falls in the
> over-65535 region especially). One can only hope that MS will
> eventually come to their senses, or perhaps that someone will
> standardise a system of locales.


It is hardly annoying for those people, because they died long before
computers were invented. The non-BMP region contains mostly symbols for
dead languages.
 
 
 
 