Velocity Reviews - Computer Hardware Reviews

Use of Long and Long Long

 
 
Walter Roberson
 
      01-09-2008
In article <fm3j0b$gso$(E-Mail Removed)>,
Walter Roberson <(E-Mail Removed)-cnrc.gc.ca> wrote:
>In article <(E-Mail Removed)>,
>user923005 <(E-Mail Removed)> wrote:


>>Your method is not accurate. It is possible to have trap bits or


>Wouldn't that be "trap values" rather than "trap bits" ?


>On the other hand, I was scanning through the trap value / padding
>bits in C99 the other day, and noticed that the state of padding
>bits is not allowed to affect whether a particular value is a trap
>value or not, so if there did happen to be a bit which when set (or
>clear) triggered a trap, it would officially have to be one
>of the "value bits", leading to a rather large number of
>"trap values"!


I misremembered. N794 6.1.2.8.2 Integer types

{footnote} [39] Some combinations of padding bits might
generate trap representations, for example, if one padding
bit is a parity bit. Regardless, no arithmetic operation
on valid values can generate a trap representation other
than as part of an exception such as overflow, and this
cannot occur with unsigned types. All other combinations
of padding bits are alternative object representations of
the value specified by the value bits.
--
"History is a pile of debris" -- Laurie Anderson
 
 
 
 
 
CBFalconer
 
      01-10-2008
Walter Roberson wrote:
>

.... snip ...
>
> I misremembered. N794 6.1.2.8.2 Integer types


I suggest you update your standard copy to at least N869, which was
the last draft before issuing C99. It is also the last to have a
text version. You can get a copy, diddled for easy searching and
quotation, as N869_txt.bz2, which is bzip2 compressed, at:

<http://cbfalconer.home.att.net/download/>

--
Chuck F (cbfalconer at maineline dot net)
<http://cbfalconer.home.att.net>
Try the download section.


--
Posted via a free Usenet account from http://www.teranews.com

 
 
 
 
 
Bart C
 
      01-10-2008
Flash Gordon wrote:
> Bart C wrote, On 09/01/08 20:33:


>> Integer widths that obey the rule short < int < long int < long long
>> int (instead of short<=int<=long int or whatever) would be far more
>> intuitive and much more useful (as it is now, changing int x to long
>> int x is not guaranteed to change anything, so is pointless)

>
> Now look at a 32 bit DSP which cannot deal with anything below 32
> bits. Given your definition it would have to do at least
> sizeof(char) == 1 - 32 bits
> sizeof(short) == 2 - 64 bits
> sizeof(int) == 3 - 96 bits
> sizeof(long) == 4 - 128 bits
> sizeof(long long) == 5 - 160 bits


Yes, in this example it sort of makes sense, if the compiler does not try too
hard to impose anything else.

However it's not impossible either for the compiler to impose 8, 16, 32,
64-bit word sizes (don't know how capable DSPs are or whether this is even
desirable). So a 128K array of 8-bit data for example could take 128KB
instead of 512KB.

> If those are your minimum requirements then you use
> char - at least 8 bits
> short - at least 16 bits
> int - at least 16 bits (but likely to be larger)
> long - at least 32 bits
> long long - at least 64 bits


At first this seems the way to go: char, short, long, long long giving at
least 8, 16, 32, 64-bits respectively.
Except my test showed that long int was 32 bits on one compiler and 64 bits
in another -- for the same processor. This just seems plain wrong.

It means my program that needs 32-bit data runs happily using long int under
dev-cpp, but takes maybe twice as long under gcc because long int now takes
twice the memory and the processor unnecessarily emulates 64-bit arithmetic.

> Alternatively, use the types defined in stdint.h (or inttypes.h) which
> have been added in C99. These headers provide you exact width types


OK looks like this is what I will have to do.

(E-Mail Removed) wrote:
>> printf("LD = %3d bits\n",sizeof(ld)*CHAR_BIT);

> The proper format specifier for size_t is '%zu'


I would never have guessed! Although the results (on one rerun anyway) were
the same.

Thanks for all the replies.

Bart


 
 
Bart C
 
      01-10-2008
CBFalconer wrote:
> Bart C wrote:


>> Given that I know my target hardware has an 8-bit byte size and
>> natural word size of 32-bits, what int prefixes do I use to span
>> the range 16, 32 and 64-bits? And perhaps stay the same when
>> compiled for 64-bit target?

>
> All you need to know is the minimum guaranteed sizes for the
> standard types. They are:
>
> char 8 bits
> short 16 bits
> int 16 bits
> long 32 bits
> long long 64 bits


This is an example of a problem I anticipate:

long ftell( FILE *stream );

The above is from documentation for a function in a dynamic library.
Probably the 'long' means a 32-bit value (this is on a MS platform). But for
my compiler 'long' might be implemented as 64-bits. What happens?
(Especially if long is applied to a parameter rather than the return value.)

Bart






 
 
Jack Klein
 
      01-10-2008
On Thu, 10 Jan 2008 01:27:48 GMT, "Bart C" <(E-Mail Removed)> wrote in
comp.lang.c:

> CBFalconer wrote:
> > Bart C wrote:

>
> >> Given that I know my target hardware has an 8-bit byte size and
> >> natural word size of 32-bits, what int prefixes do I use to span
> >> the range 16, 32 and 64-bits? And perhaps stay the same when
> >> compiled for 64-bit target?

> >
> > All you need to know is the minimum guaranteed sizes for the
> > standard types. They are:
> >
> > char 8 bits
> > short 16 bits
> > int 16 bits
> > long 32 bits
> > long long 64 bits

>
> This is an example of a problem I anticipate:
>
> long ftell( FILE *stream );
>
> The above is from documentation for a function in a dynamic library.
> Probably the 'long' means a 32-bit value (this is on a MS platform). But for
> my compiler 'long' might be implemented as 64-bits. What happens?
> (Especially if long applied to a parameter rather than the return value.)


The above prototype is for a standard C library function. Whatever
"dynamic" means is not a language issue.

If the library is supplied with your platform, either your compiler
claims to be compatible with the platform or it does not. If it does,
and the size of its long type is different from that of the platform's
library, then your implementation must provide its own alternative to
the library function, or perhaps provide a wrapper around the
platform's functions as needed, converting between its data types and
those of the underlying system.

A compiler with a 64-bit long is not going to claim Win32
compatibility.

Simple.

I think you are looking for problems that you are not likely to run
into.

When you develop a program for a platform, you will use a compiler for
that platform. If you want to add a third party library, you will use
one that is built for that compiler and platform combination.

--
Jack Klein
Home: http://JK-Technology.Com
FAQs for
comp.lang.c http://c-faq.com/
comp.lang.c++ http://www.parashift.com/c++-faq-lite/
alt.comp.lang.learn.c-c++
http://www.club.cc.cmu.edu/~ajo/docs/FAQ-acllc.html
 
 
Flash Gordon
 
      01-10-2008
Bart C wrote, On 10/01/08 01:16:
> Flash Gordon wrote:
>> Bart C wrote, On 09/01/08 20:33:

>
>>> Integer widths that obey the rule short < int < long int < long long
>>> int (instead of short<=int<=long int or whatever) would be far more
>>> intuitive and much more useful (as it is now, changing int x to long
>>> int x is not guaranteed to change anything, so is pointless)

>> Now look at a 32 bit DSP which cannot deal with anything below 32
>> bits. Given your definition it would have to do at least
>> sizeof(char) == 1 - 32 bits
>> sizeof(short) == 2 - 64 bits
>> sizeof(int) == 3 - 96 bits
>> sizeof(long) == 4 - 128 bits
>> sizeof(long long) == 5 - 160 bits

>
> Yes in this example it sort of makes sense, if the compiler does not try too
> hard to impose anything else.
>
> However it's not impossible either for the compiler to impose 8, 16, 32,
> 64-bit word sizes (don't know how capable DSPs are or whether this is even
> desirable). So a 128K array of 8-bit data for example could take 128KB
> instead of 512KB.


It is not impossible, but on a significant number of mainstream DSP
processors, using anything less than a size the processor understands
(16 bits, 24 bits, 32 bits and 48 bits being common) would make the
smaller types very inefficient, since the compiler would have to keep
masking etc.
So, to write an 8 bit char to memory, one of these processors would have to:
shift accumulator left 16 bits (1 clock)
load memory location shifting data left 8 bits (1 clock)
save shifting right 8 bits (1 clock)
So you have turned a common 1 clock cycle operation into 4 clock
cycles which also change the contents of the accumulator. Do you really
think a 400% increase in the time to save a char is worth the cost?

A lot of other operations would also be slowed down.

On such processors it really does not make sense to use smaller data
types than the processor understands so why introduce the additional
complexity to the compiler just to make it extremely inefficient if some
code does use a short or a char?

>> If those are your minimum requirements then you use
>> char - at least 8 bits
>> short - at least 16 bits
>> int - at least 16 bits (but likely to be larger)
>> long - at least 32 bits
>> long long - at least 64 bits

>
> At first this seems the way to go: char, short, long, long long giving at
> least 8, 16, 32, 64-bits respectively.
> Except my test showed the long int was 32-bits on one compiler and 64-bits
> in another -- for the same processor. This just seems plain wrong.


No, it is NOT wrong. At least, not necessarily. If one implementation
supports things which require a 64 bit integer type to perform certain
operations (e.g. supporting fseek on large files) then selecting a 64
bit long makes perfect sense. Another implementation might have decided
that supporting legacy broken code that assumes int and long are the
same is more important. These are both valid trade-offs.

> It means my program that needs 32-bit data runs happily using long int under
> dev-cpp, but takes maybe twice as long under gcc because long int now takes
> twice the memory and the processor unnecessarily emulates 64-bit arithmetic.


Hmm. Did you realise that you are saying that gcc produces code that
uses twice the memory of gcc? dev-cpp uses gcc (either the MinGW port
or the Cygwin port) as the compiler.

>> Alternatively, use the types defined in stdint.h (or inttypes.h) which
>> have been added in C99. These headers provide you exact width types

>
> OK looks like this is what I will have to do.


Using the fast types would make more sense than the fixed width ones
unless you *really* need fixed-width types (most people don't, although
I do) or to save memory (in which case the least types maximise
portability).

> (E-Mail Removed) wrote:
>>> printf("LD = %3d bits\n",sizeof(ld)*CHAR_BIT);

>> The proper format specifier for size_t is '%zu'

>
> I would never have guessed! Although the results (on one rerun anyway) were
> the same.


Undefined behaviour is like that. It can hide itself by doing exactly
what you expect, or it can cause your computer to come alive and eat all
the food in your fridge.
--
Flash Gordon
 
 
James Kuyper
 
      01-10-2008
Flash Gordon wrote:
> Bart C wrote, On 10/01/08 01:16:
>> Flash Gordon wrote:
>>> Bart C wrote, On 09/01/08 20:33:

>>
>>>> Integer widths that obey the rule short < int < long int < long long
>>>> int (instead of short<=int<=long int or whatever) would be far more
>>>> intuitive and much more useful (as it is now, changing int x to long
>>>> int x is not guaranteed to change anything, so is pointless)
>>> Now look at a 32 bit DSP which cannot deal with anything below 32
>>> bits. Given your definition it would have to do at least
>>> sizeof(char) == 1 - 32 bits
>>> sizeof(short) == 2 - 64 bits
>>> sizeof(int) == 3 - 96 bits
>>> sizeof(long) == 4 - 128 bits
>>> sizeof(long long) == 5 - 160 bits

>>
>> Yes in this example it sort of makes sense, if the compiler does not
>> try too hard to impose anything else.
>>
>> However it's not impossible either for the compiler to impose 8, 16,
>> 32, 64-bit word sizes (don't know how capable DSPs are or whether this
>> is even desirable). So a 128K array of 8-bit data for example could
>> take 128KB instead of 512KB.

>
> It is not impossible, but on a significant number of mainstream DSP
> processors, using anything less than a size the processor understands
> (16 bits, 24 bits, 32 bits and 48 bits being common) would make the
> smaller types very inefficient, since the compiler would have to keep masking etc.


On any given processor, types that are too small are inefficient because
they must be emulated by bit-masking operations, while types that are
too big are inefficient because they must be emulated using multiple
instances of the largest efficient type. If signed char, short, int,
long and long long were all required to have different sizes, a
processor with hardware support for fewer than 5 different sizes would
have to choose one form of inefficiency or the other; efficient
implementation of all 5 types would not be an option.

In practice, what would be provided would depend upon what's needed, and
not just what's efficient. There's a lot more need for 8 or 16 bit types
than there is for 128 or 160 bit types (though I'm not saying there's no
need for those larger types - I gather that the cryptographic community
loves them).
 
 
Bart C
 
      01-10-2008
Flash Gordon wrote:
> Bart C wrote, On 10/01/08 01:16:


>> It means my program that needs 32-bit data runs happily using long
>> int under dev-cpp, but takes maybe twice as long under gcc because
>> long int now takes twice the memory and the processor unnecessarily
>> emulates 64-bit arithmetic.

>
> Hmm. Did you realise that you are saying that gcc produces code that
> uses twice the memory of gcc? dev-cpp uses gcc (either the MinGW port
> or the Cygwin port) as the compiler.


Well, dev-cpp under winxp reported 32 bits for long int; gcc under linux
reported 64 bits for long int, both on same pentium machine. So, yes, I
guess I'm saying a huge array of long ints would take double the memory
in gcc in this case. Maybe these are all configurable options but I'm using
them 'out-of-the-box'.

Other points well explained and understood.

--
Bart



 
 
Bart C
 
      01-10-2008
Jack Klein wrote:
> On Thu, 10 Jan 2008 01:27:48 GMT, "Bart C" <(E-Mail Removed)> wrote in
> comp.lang.c:


>>> Bart C wrote:


>> This is an example of a problem I anticipate:
>>
>> long ftell( FILE *stream );
>>
>> The above is from documentation for a function in a dynamic library.
>> Probably the 'long' means a 32-bit value (this is on a MS platform).
>> But for my compiler 'long' might be implemented as 64-bits. What
>> happens? (Especially if long applied to a parameter rather than the
>> return value.)

>
> The above prototype is for a standard C library function. Whatever
> "dynamic" means is not a language issue.


It means the function could have been compiled with a different C compiler
(maybe even a different language) with a specification published using the
local semantics (in this case the exact meaning of 'long').

I had in mind dynamic linking but I think the problem can occur with static
linking too, if using object files compiled elsewhere. (You will say some of
these terms are not in the C standard but this is a practical problem.)

> If the library is supplied with your platform, either your compiler
> claims to be compatible with the platform or it does not. If it does,
> and the size of its long type is different from that of the platform's
> library, then you implementation must provide its own alternative to
> the library function, or perhaps provide a wrapper around the
> platform's functions as needed, converting between its data types and
> those of the underlying system.


> When you develop a program for a platform, you will use a compiler for
> that platform. If you want to add a third party library, you will use
> one that is built for that compiler and platform combination.


Provided a binary library is for the correct platform (object format,
processor and so on), surely I can use any C compiler to compile the header
files? Otherwise it seems overly restrictive.

The one overriding theme in this newsgroup seems to be that of portability,
yet if I distribute a library I need to specify a particular compiler or
supply a version for every possible compiler?

Wouldn't it be better, in published header files, to use more specific type
designators than simply 'long' (since, once compiled, the binary will not
change)? Then a compiler can use common sense in dealing with it. (Since
the processor is the same there will surely be a local word size that will
match.)

Bart

[oops, originally sent to your email instead of group]


 
 
Flash Gordon
 
      01-10-2008
Bart C wrote, On 10/01/08 12:20:
> Flash Gordon wrote:
>> Bart C wrote, On 10/01/08 01:16:

>
>>> It means my program that needs 32-bit data runs happily using long
>>> int under dev-cpp, but takes maybe twice as long under gcc because
>>> long int now takes twice the memory and the processor unnecessarily
>>> emulates 64-bit arithmetic.

>> Hmm. Did you realise that you are saying that gcc produces code that
>> uses twice the memory of gcc? dev-cpp uses gcc (either the MinGW port
>> or the Cygwin port) as the compiler.

>
> Well, dev-cpp under winxp reported 32 bits for long int; gcc under linux
> reported 64 bits for long int, both on same pentium machine. So, yes, I
> guess I'm saying a huge array of long ints would take double the memory
> in gcc in this case.


You completely missed the point. You are comparing gcc against gcc.
dev-cpp has NOTHING to do with it. You are either comparing the MinGW
implementation of gcc (which has to match the MS C library) with one of
the Linux implementations of gcc (which has to match the glibc version)
or you are comparing the Cygwin version of gcc against one of the Linux
implementations.

I also strongly suspect that you are comparing an implementation
designed for a 32 bit processor against one designed for a 64 bit
processor (i.e. the 64 bit version of Linux). Did you expect that WinXP
would magically become a 64 bit OS just because you ran it on a 64 bit
processor?

> Maybe these are all configurable options but I'm using
> them 'out-of-the-box'.


They have to match the library, and gcc does not come with a library; it
just uses the one provided by the system. I suspect that the
'out-of-the-box' default for dev-cpp is to use MinGW rather than
Cygwin, but I don't think that would make any difference in this case.
--
Flash Gordon
 