Optimizing C

 
 
Keith Thompson
03-13-2006
Eric Sosman <(E-Mail Removed)> writes:
[...]
> If one-bits are expected to be fairly sparse, you'll probably
> want to search through largish "chunks" to find one that isn't
> all zeroes, then search within that chunk for the proper bit.
> You could do the search an `int' at a time with an ordinary loop,
> or you might use memchr() to search byte-by-byte. The memchr()
> method might (or might not) take longer for the search, but note
> that it leaves a smaller space for the bit-by-bit search to explore.
> You'd need to implement both and measure.

[...]

I don't think memchr() would be useful. It scans for a byte equal to
a specified value; you want to scan for bytes *unequal* to 0.

If you're searching for a 1 bit in a chunk of memory, checking first
whether each word (where "word" probably means something like an
unsigned int or unsigned long) is equal to 0 seems like an obvious
optimization. But if you care about absolute portability (i.e., code
that will work with any possible conforming C implementation), it can
get you into trouble.

It's guaranteed that, for any integer type, all-bits-zero is a
representation of the value 0. (Neither the C90 nor the C99 standard
says this, but n1124 does; this change was made in TC2 in response to
DR #263. I've never heard of an implementation that doesn't have this
property anyway, and I'd be comfortable depending on it.) For
one's-complement and sign-magnitude representations, there are two
distinct representations of zero (+0 and -0), but you can avoid that
by using an unsigned type. But unsigned types *can* have padding
bits, so even if buf[i]==0, you might still have missed a 1 bit.

unsigned char can't have padding bits, and every object can be viewed
as an array of unsigned char, so you can safely compare each byte to 0
to find the first 1 bit. But this is likely to be less efficient than
scanning by words.

You can scan by words *if* the unsigned type you're using has no
padding bits. You can test this automatically: for example, count
the value bits implied by ULONG_MAX and compare the count against
CHAR_BIT*sizeof(unsigned long); if the range takes up all the bits,
there can be no padding bits. If this test fails, you have to fall
back to some other algorithm -- which means you have two different
algorithms to test.
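A run-time version of the test might look like this (untested sketch):

#include <limits.h>

/* Count the value bits of unsigned long by shifting ULONG_MAX down
   to zero, then compare the count with the object's total bit count.
   If they match, unsigned long has no padding bits. */
int ulong_has_no_padding(void)
{
    int value_bits = 0;
    unsigned long max = ULONG_MAX;
    while (max != 0) {
        max >>= 1;
        value_bits++;
    }
    return value_bits == (int)(CHAR_BIT * sizeof(unsigned long));
}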

You also have to account for alignment issues (unless you use unsigned
char).

If you don't care about absolute portability, you can get away with
making whatever assumptions you can get away with. I suggest
documenting any assumptions you make that aren't guaranteed by the
standard. If you can test your assumptions, either during compilation
or at run time, you can fall back to a "#error" directive or use
assert(), so code will fail early and cleanly if your assumptions are
violated.
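For example (a sketch only; this assumes the code wants a 32-bit,
padding-free unsigned int):

#include <limits.h>

/* The preprocessor can check the value range directly: */
#if UINT_MAX < 0xFFFFFFFF
#error "unsigned int has fewer than 32 value bits"
#endif

/* It cannot evaluate sizeof, though, so the padding-bit check uses a
   typedef whose array size becomes -1 (a constraint violation) if
   unsigned int occupies more bits than its 32 value bits: */
typedef char uint_is_32_bits_no_padding[
    (sizeof(unsigned int) * CHAR_BIT == 32) ? 1 : -1];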

To summarize: portable source-level micro-optimization is hard.

--
Keith Thompson (The_Other_Keith) (E-Mail Removed) <http://www.ghoti.net/~kst>
San Diego Supercomputer Center <*> <http://users.sdsc.edu/~kst>
We must do something. This is something. Therefore, we must do this.
 
Richard G. Riley
03-13-2006
On 2006-03-13, Keith Thompson <(E-Mail Removed)> wrote:
> Eric Sosman <(E-Mail Removed)> writes:
> [...]
>> If one-bits are expected to be fairly sparse, you'll probably
>> want to search through largish "chunks" to find one that isn't
>> all zeroes, then search within that chunk for the proper bit.
>> You could do the search an `int' at a time with an ordinary loop,
>> or you might use memchr() to search byte-by-byte. The memchr()
>> method might (or might not) take longer for the search, but note
>> that it leaves a smaller space for the bit-by-bit search to explore.
>> You'd need to implement both and measure.

> [...]
>
> I don't think memchr() would be useful. It scans for a byte equal to
> a specified value; you want to scan for bytes *unequal* to 0.


I don't either: see other reply. But to summarise: it uses bytes,
which is plain silly, and it requires a per-character comparison,
which is not required when checking for "non-zero".

>
> If you're searching for a 1 bit in a chunk of memory, checking first
> whether each word (where "word" probably means something like an
> unsigned int or unsigned long) is equal to 0 seems like an obvious
> optimization. But if you care about absolute portability (i.e., code
> that will work with any possible conforming C implementation), it can
> get you into trouble.
>
> It's guaranteed that, for any integer type, all-bits-zero is a
> representation of the value 0. (Neither the C90 nor the C99 standard
> says this, but n1124 does; this change was made in TC2 in response to
> DR #263. I've never heard of an implementation that doesn't have this
> property anyway, and I'd be comfortable depending on it.) For
> one's-complement and sign-magnitude representations, there are two
> distinct representations of zero (+0 and -0), but you can avoid that
> by using an unsigned type. But unsigned types *can* have padding
> bits, so even if buf[i]==0, you might still have missed a 1 bit.


Could you please explain this? If I have calculated (or even know)
that the underlying word size is 32 bits and I use an unsigned int to
represent this in C, then how do padding bits make any difference?
Isn't the variable guaranteed to go from 0 to 2^32-1?

Keeping in mind that it is 32-bit "image data" put into 32-bit memory
storage (array of unsigned ints).

I think the term "padding bits" is confusing me here. It's not another
one of these horrible things that means for truly portable code we
can't assume that a 4-byte word actually takes 4 bytes in memory,
is it? And since I'm accessing that data via an "unsigned long" array,
would it make any difference to the code?

(I think the example code in the other reply was something like...)
...
unsigned int *refImageData = &arrImageDataScanLine[0];
...

while (columnsToCheck-- && !(imageWord = *refImageData++));


This is surely portable? Slap me down if it's not: that I can take.
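(Spelled out a bit more, reusing the names from my snippet above --
untested:)

/* Scan a scan line word by word; all the work happens in the loop
   condition, so the body is empty.  imageWord must start at 0 in
   case columnsToCheck is 0. */
unsigned int imageWord = 0;
unsigned int *refImageData = &arrImageDataScanLine[0];

while (columnsToCheck-- && !(imageWord = *refImageData++))
    ;

if (imageWord != 0) {
    /* refImageData[-1] is the word holding the first 1 bit;
       do the bit-by-bit search within it */
}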

>
> unsigned char can't have padding bits, and every object can be viewed
> as an array of unsigned char, so you can safely compare each byte to 0
> to find the first 1 bit. But this is likely to be less efficient than
> scanning by words.
>
> You can scan by words *if* the unsigned type you're using has no
> padding bits. You can test this automatically: for example, count
> the value bits implied by ULONG_MAX and compare the count against
> CHAR_BIT*sizeof(unsigned long); if the range takes up all the bits,
> there can be no padding bits. If this test fails, you have to fall
> back to some other algorithm -- which means you have two different
> algorithms to test.
>
> You also have to account for alignment issues (unless you use unsigned
> char).


Assume it's guaranteed to align to natural word alignment. Otherwise
it would kill the compiler-produced optimisations triggered by using
the natural word size data type. This type of thing was covered well,
IMO, in the two initial links I gave in the OP.

>
> If you don't care about absolute portability, you can get away with
> making whatever assumptions you can get away with. I suggest
> documenting any assumptions you make that aren't guaranteed by the
> standard. If you can test your assumptions, either during compilation
> or at run time, you can fall back to a "#error" directive or use
> assert(), so code will fail early and cleanly if your assumptions are
> violated.
>
> To summarize: portable source-level micro-optimization is hard.
>


I don't doubt it: but I'm happy for it to be optimized for x86 and
"work" for other platforms :-;

 
Keith Thompson
03-13-2006
"Richard G. Riley" <(E-Mail Removed)> writes:
> On 2006-03-13, Keith Thompson <(E-Mail Removed)> wrote:

[...]
>> It's guaranteed that, for any integer type, all-bits-zero is a
>> representation of the value 0. (Neither the C90 nor the C99 standard
>> says this, but n1124 does; this change was made in TC2 in response to
>> DR #263. I've never heard of an implementation that doesn't have this
>> property anyway, and I'd be comfortable depending on it.) For
>> one's-complement and sign-magnitude representations, there are two
>> distinct representations of zero (+0 and -0), but you can avoid that
>> by using an unsigned type. But unsigned types *can* have padding
>> bits, so even if buf[i]==0, you might still have missed a 1 bit.

>
> Could you please explain this? If I have calculated (or even know)
> that the underlying word size is 32 bits and I use an unsigned int to
> represent this in C, then how do padding bits make any difference?
> Isn't the variable guaranteed to go from 0 to 2^32-1?


No, it isn't. The guaranteed range of unsigned int is only 0 to 65535.
But, for example, an implementation could legally have:
    CHAR_BIT == 8
    sizeof(unsigned int) == 4
    INT_MAX == 32767
An unsigned int then has 32 bits: 16 value bits and 16 padding bits.

For details, see section 6.2.6.2 of the C99 standard.

[...]

>> To summarize: portable source-level micro-optimization is hard.
>>

>
> I don't doubt it: but I'm happy for it to be optimized for x86 and
> "work" for other platforms :-;


If you make too many assumptions (for example, that unsigned int has
no padding bits), your code might be optimized for x86 and *not*
"work" for some other platforms.

--
Keith Thompson (The_Other_Keith) (E-Mail Removed) <http://www.ghoti.net/~kst>
San Diego Supercomputer Center <*> <http://users.sdsc.edu/~kst>
We must do something. This is something. Therefore, we must do this.
 
Richard G. Riley
03-13-2006
On 2006-03-13, Keith Thompson <(E-Mail Removed)> wrote:
> "Richard G. Riley" <(E-Mail Removed)> writes:
>> On 2006-03-13, Keith Thompson <(E-Mail Removed)> wrote:

> [...]
>>> It's guaranteed that, for any integer type, all-bits-zero is a
>>> representation of the value 0. (Neither the C90 nor the C99 standard
>>> says this, but n1124 does; this change was made in TC2 in response to
>>> DR #263. I've never heard of an implementation that doesn't have this
>>> property anyway, and I'd be comfortable depending on it.) For
>>> one's-complement and sign-magnitude representations, there are two
>>> distinct representations of zero (+0 and -0), but you can avoid that
>>> by using an unsigned type. But unsigned types *can* have padding
>>> bits, so even if buf[i]==0, you might still have missed a 1 bit.

>>
>> Could you please explain this? If I have calculated (or even know)
>> that the underlying word size is 32 bits and I use an unsigned int to
>> represent this in C, then how do padding bits make any difference?
>> Isn't the variable guaranteed to go from 0 to 2^32-1?

>
> No, it isn't. The guaranteed range of unsigned int is only 0 to 65535.
> But, for example, an implementation could legally have:
> CHAR_BIT == 8
> sizeof(unsigned int) == 4
> INT_MAX == 32767
> An unsigned int then has 32 bits: 16 value bits and 16 padding bits.
>
> For details, see section 6.2.6.2 of the C99 standard.


Wow. That would break a lot of code. Assuming some situation like
that, is there a constant "MAX BITS FOR ULONG" or "MAX WORD SIZE IN
BITS"? E.g. to get a platform-independent assurance of the number of
bits one can use in a single HW word? Having said that, in my sample
code in the initial reply to Eric, I would only need to recalculate my
"start mask" from 0x80000000 to "whatever" when I calculate "usable
bits per HW word" in program init. Since the overhead of doing that
calc is almost less than zero compared to the other computation, I
could do that.
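(Something like this at init, I suppose -- untested, names are mine:)

#include <limits.h>

static unsigned long start_mask;   /* highest usable value bit */
static int usable_bits;            /* value bits per HW word */

/* Run once at program init: count the value bits of unsigned long
   and derive the top-bit mask from the count, instead of hard-coding
   0x80000000. */
void init_scan_constants(void)
{
    unsigned long max = ULONG_MAX;
    usable_bits = 0;
    while (max != 0) {
        max >>= 1;
        usable_bits++;
    }
    start_mask = 1UL << (usable_bits - 1);
}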

>
> [...]
>
>>> To summarize: portable source-level micro-optimization is hard.
>>>

>>
>> I don't doubt it: but I'm happy for it to be optimized for x86 and
>> "work" for other platforms :-;

>
> If you make too many assumptions (for example, that unsigned int has
> no padding bits), your code might be optimized for x86 and *not* "work"
> for some other platforms.
>

 
Jordan Abel
03-13-2006
On 2006-03-13, Keith Thompson <(E-Mail Removed)> wrote:
>>> To summarize: portable source-level micro-optimization is hard.
>>>

>>
>> I don't doubt it: but I'm happy for it to be optimized for x86 and
>> "work" for other platforms :-;

>
> If you make too many assumptions (for example, that unsigned int has
> no padding bits), your code might be optimized for x86 and *not* "work"
> for some other platforms.


That's not to say that portable source-level optimization isn't done.

There is a macro in stdio.h on FreeBSD that is optimized for pcc on the VAX.

Source-level optimization only requires non-universal assumptions about
what's fast, not about what works.
 
Keith Thompson
03-13-2006
Jordan Abel <(E-Mail Removed)> writes:
> On 2006-03-13, Keith Thompson <(E-Mail Removed)> wrote:
>>>> To summarize: portable source-level micro-optimization is hard.
>>>
>>> I don't doubt it: but I'm happy for it to be optimized for x86 and
>>> "work" for other platforms :-;

>>
>> If you make too many assumptions (for example, that unsigned int has
>> no padding bits), your code might be optimized for x86 and *not* "work"
>> for some other platforms.

>
> That's not to say that portable source-level optimization isn't done.
>
> There is a macro in stdio.h on freebsd that is optimized for pcc on VAX.
>
> Source-level optimization only requires non-universal assumptions about
> what's fast, not about what works.


I think you missed half of my point.

*Correctly portable* source-level optimization only requires
non-universal assumptions about what's fast, but it's entirely
possible (and too common) to do source-level optimization that makes
the code faster on some platforms, and incorrect on others.

I discussed a specific instance of this upthread: the assumption that
a particular type has no padding bits.

It's also possible, with a little extra work, to write code that
*tests* this assumption, and falls back to a more portable algorithm
if the test fails; this is useful only if you get everything right.
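Schematically (a sketch only; the scanner functions stand in for
whatever the real code does, and ulong_has_no_padding() is the
run-time test I sketched upthread):

#include <stddef.h>

size_t scan_by_words(const void *buf, size_t nbytes);  /* fast path */
size_t scan_by_bytes(const void *buf, size_t nbytes);  /* safe path */
int ulong_has_no_padding(void);

/* Pick the scanner once at startup; every later call goes through
   the pointer, so the hot loop pays nothing for the flexibility. */
static size_t (*scan)(const void *, size_t);

void choose_scanner(void)
{
    scan = ulong_has_no_padding() ? scan_by_words : scan_by_bytes;
}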

--
Keith Thompson (The_Other_Keith) (E-Mail Removed) <http://www.ghoti.net/~kst>
San Diego Supercomputer Center <*> <http://users.sdsc.edu/~kst>
We must do something. This is something. Therefore, we must do this.
 
A. Sinan Unur
03-14-2006
"Richard G. Riley" <(E-Mail Removed)> wrote in news:isjge3-9u8.ln1
@cern.frontroom:

> On 2006-03-13, Eric Sosman <(E-Mail Removed)> wrote:
>>
>>
>> D.E. Knuth (elaborating on a remark by Hoare): "We should
>> forget about small efficiencies, say about 97% of the time:
>> premature optimization is the root of all evil."
>>
>> W.A. Wulf: "More computing sins are committed in the name
>> of efficiency (without necessarily achieving it) than for any
>> other single reason, including blind stupidity."
>>
>> Jackson's Laws of Program Optimization:
>> FIRST LAW: Don't do it.
>> SECOND LAW (for experts only): Don't do it yet.
>>
>> See also <http://www.flounder.com/optimization.htm>.
>>
>> ... and that's all I'll offer on the matter. Until you
>> have made measurements or done calculations that demonstrate
>> a performance problem, "approaching optimum performance as
>> early as possible" is folly. If you wish to behave foolishly,
>> go ahead -- but do it on your own; I'm not going to assist
>> you to your ruin.
>>

>
> Thanks for those references : all very valid if designing a large
> system.


All very valid for any type of system.

> I am designing/implementing a very low-level system with fairly
> specific requirements:


We do not know what those specific requirements, constraints, and
operating parameters are (they would also be off-topic here, I am
assuming).

> and apriori


I think you are missing the point that what will work best in that
narrow environment cannot be known *a priori* (see
http://wordnet.princeton.edu/perl/webwn?s=a%20priori), only
*a posteriori*:

    a priori (adv): derived by logic, without observed facts
    antonym: a posteriori (adv): derived from observed facts

Sinan
--
A. Sinan Unur <(E-Mail Removed)>
(remove .invalid and reverse each component for email address)

comp.lang.perl.misc guidelines on the WWW:
http://mail.augustmail.com/~tadmc/cl...uidelines.html

 
Richard G. Riley
03-14-2006
On 2006-03-14, A. Sinan Unur <(E-Mail Removed)> wrote:
> "Richard G. Riley" <(E-Mail Removed)> wrote in news:isjge3-9u8.ln1
> @cern.frontroom:
>
>> On 2006-03-13, Eric Sosman <(E-Mail Removed)> wrote:
>>>
>>>
>>> D.E. Knuth (elaborating on a remark by Hoare): "We should
>>> forget about small efficiencies, say about 97% of the time:
>>> premature optimization is the root of all evil."
>>>
>>> W.A. Wulf: "More computing sins are committed in the name
>>> of efficiency (without necessarily achieving it) than for any
>>> other single reason, including blind stupidity."
>>>
>>> Jackson's Laws of Program Optimization:
>>> FIRST LAW: Don't do it.
>>> SECOND LAW (for experts only): Don't do it yet.
>>>
>>> See also <http://www.flounder.com/optimization.htm>.
>>>
>>> ... and that's all I'll offer on the matter. Until you
>>> have made measurements or done calculations that demonstrate
>>> a performance problem, "approaching optimum performance as
>>> early as possible" is folly. If you wish to behave foolishly,
>>> go ahead -- but do it on your own; I'm not going to assist
>>> you to your ruin.
>>>

>>
>> Thanks for those references : all very valid if designing a large
>> system.

>
> All very valid for any type of system.


Possibly. I know, however, that if I have a known
bottleneck/limitation at a certain part of a system, then I'd better
take that into account at an early stage or it's going to be hell
later. This could be an API to a graphics HW processor, for instance.
We can all throw big-shot Laws at things: but often we choose C as a
solution because it does allow us to get down and dirty. At the end of
the day I know the format of my data: what goes around it or provides
it is immaterial for the topic in hand, i.e. methods for fast
bitstream scanning and manipulation in C. The other posts in the
thread, bar one, have been constructive.

>
>> I am designing/implementing a very low-level system with fairly
>> specific requirements:

>
> We do not know what those specific requirements, constraints, and
> operating parameters are (they would also be off-topic here, I am
> assuming).


What don't you know? I want fast ways in C to handle bit data. I have
some answers. C is on topic here. I specifically said I want to keep
it all in C.

>
>> and apriori

>
> I think you are missing the point that what will work best in that
> narrow environment cannot be known *a priori* (see


Of course it can: and precisely because of that environment. A
smaller environment makes it easier to make such a statement:

"relating to or derived by reasoning from self-evident propositions"

I will concur that "self-evident" to me might not match that which is
self-evident to you. I will also agree that I might have got the use
of that word arseways, but since no one else has corrected it yet I'll
assume I have some breathing space :-;

Remember we're not inventing a square circle here: there's no big
mission. I simply want anyone with any experience of doing hardcore
bit-field manipulation to give me some pointers. And they have.

*snip
 
Michael Mair
03-14-2006
Richard G. Riley wrote:
> On 2006-03-13, Keith Thompson <(E-Mail Removed)> wrote:
>
>>"Richard G. Riley" <(E-Mail Removed)> writes:
>>
>>>On 2006-03-13, Keith Thompson <(E-Mail Removed)> wrote:

>>
>>[...]
>>
>>>>It's guaranteed that, for any integer type, all-bits-zero is a
>>>>representation of the value 0. (Neither the C90 nor the C99 standard
>>>>says this, but n1124 does; this change was made in TC2 in response to
>>>>DR #263. I've never heard of an implementation that doesn't have this
>>>>property anyway, and I'd be comfortable depending on it.) For
>>>>one's-complement and sign-magnitude representations, there are two
>>>>distinct representations of zero (+0 and -0), but you can avoid that
>>>>by using an unsigned type. But unsigned types *can* have padding
>>>>bits, so even if buf[i]==0, you might still have missed a 1 bit.
>>>
>>>Could you please explain this? If I have calculated (or even know)
>>>that the underlying word size is 32 bits and I use an unsigned int to
>>>represent this in C, then how do padding bits make any difference?
>>>Isn't the variable guaranteed to go from 0 to 2^32-1?

>>
>>No, it isn't. The guaranteed range of unsigned int is only 0 to 65535.
>>But, for example, an implementation could legally have:
>> CHAR_BIT == 8
>> sizeof(unsigned int) == 4
>> INT_MAX == 32767
>>An unsigned int then has 32 bits: 16 value bits and 16 padding bits.
>>
>>For details, see section 6.2.6.2 of the C99 standard.

>
>> Wow. That would break a lot of code. Assuming some situation like
>> that, is there a constant "MAX BITS FOR ULONG" or "MAX WORD SIZE IN
>> BITS"? E.g. to get a platform-independent assurance of the number of
>> bits one can use in a single HW word? Having said that, in my sample
>> code in the initial reply to Eric, I would only need to recalculate my
>> "start mask" from 0x80000000 to "whatever" when I calculate "usable
>> bits per HW word" in program init. Since the overhead of doing that
>> calc is almost less than zero compared to the other computation, I
>> could do that.


If you intend "fast and portable", then consider doing the following:
Build a "guaranteedly portable" version of your intended-to-be-fast
stuff. Then start documenting your assumptions for your "fast header":
add a source file which is not linked with the rest of the code but
fails compilation (during preprocessing or actual compiling) if one of
these assumptions is violated.

Say you have determined that sizeof(int)*CHAR_BIT is 32 bits; then
0x7FFFFFFF - INT_MAX should be 0. In other words, neither

    char inttestl[0x7FFFFFFF - (long)INT_MAX + 1L];

nor

    char inttestu[(long)INT_MAX - 0x7FFFFFFF + 1L];

should fail to compile. Actually, we have typedefs not unlike the
C99 intN_t in our code, with appropriate maximum and minimum values
#define'd or otherwise provided, so we test only the provided values
and fail with

    #if (!defined XX_INT32_MAX) || (XX_INT32_MAX != 2147483647)
    # error Bleargh -- no Int32 type
    #endif

or something similar.

There was a thread about preprocessor and compile time checks (or it
evolved to that) some time ago, starting at
<43838308$0$20847$(E-Mail Removed)-online.net> if my
repository of potentially useful stuff remembers correctly.


Cheers
Michael
--
E-Mail: Mine is an /at/ gmx /dot/ de address.
 
Richard G. Riley
03-14-2006
On 2006-03-14, Michael Mair <(E-Mail Removed)> wrote:
> Richard G. Riley wrote:
>> On 2006-03-13, Keith Thompson <(E-Mail Removed)> wrote:
>>
>>>"Richard G. Riley" <(E-Mail Removed)> writes:
>>>
>>>>On 2006-03-13, Keith Thompson <(E-Mail Removed)> wrote:
>>>
>>>[...]
>>>
>>>>>It's guaranteed that, for any integer type, all-bits-zero is a
>>>>>representation of the value 0. (Neither the C90 nor the C99 standard
>>>>>says this, but n1124 does; this change was made in TC2 in response to
>>>>>DR #263. I've never heard of an implementation that doesn't have this
>>>>>property anyway, and I'd be comfortable depending on it.) For
>>>>>one's-complement and sign-magnitude representations, there are two
>>>>>distinct representations of zero (+0 and -0), but you can avoid that
>>>>>by using an unsigned type. But unsigned types *can* have padding
>>>>>bits, so even if buf[i]==0, you might still have missed a 1 bit.
>>>>
>>>>Could you please explain this? If I have calculated (or even know)
>>>>that the underlying word size is 32 bits and I use an unsigned int to
>>>>represent this in C, then how do padding bits make any difference?
>>>>Isn't the variable guaranteed to go from 0 to 2^32-1?
>>>
>>>No, it isn't. The guaranteed range of unsigned int is only 0 to 65535.
>>>But, for example, an implementation could legally have:
>>> CHAR_BIT == 8
>>> sizeof(unsigned int) == 4
>>> INT_MAX == 32767
>>>An unsigned int then has 32 bits: 16 value bits and 16 padding bits.
>>>
>>>For details, see section 6.2.6.2 of the C99 standard.

>>
>> Wow. That would break a lot of code. Assuming some situation like
>> that, is there a constant "MAX BITS FOR ULONG" or "MAX WORD SIZE IN
>> BITS"? E.g. to get a platform-independent assurance of the number of
>> bits one can use in a single HW word? Having said that, in my sample
>> code in the initial reply to Eric, I would only need to recalculate my
>> "start mask" from 0x80000000 to "whatever" when I calculate "usable
>> bits per HW word" in program init. Since the overhead of doing that
>> calc is almost less than zero compared to the other computation, I
>> could do that.

>
> If you intend "fast and portable", then consider doing the
> following:

That is the best. My initial requirements are slightly smaller: fast
on x86, as I originally specified, and "works elsewhere when it might
be needed the day hell freezes over" .-; Having said that, I have
taken Thompson's comments about word padding to heart: I see no real
overhead (when keeping everything in C) to ensuring platform
compatibility at the C level. It will be, I admit, the first code I
ever wrote where a machine word can potentially hold fewer values than
its size indicates: unless of course I do come across an unforeseen
performance hit, in which case bye-bye good practice.

> Build a "guaranteedly portable" version of your intended to be fast
> stuff.
> Then start documenting your assumptions for your "fast header": Add a
> source file which is not linked with the rest of the code but fails
> compilation (during preprocessing or actual compiling) if one of
> these assumptions is violated.
> Say you have determined that sizeof(int)*CHAR_BIT is 32 bits; then
> 0x7FFFFFFF - INT_MAX should be 0. In other words, neither
> char inttestl[0x7FFFFFFF - (long)INT_MAX + 1L];


Can't assume that: CHAR_BIT will be 8 or so and sizeof(int) will be 4,
but KT pointed out that the maximum unsigned int might still be 65535
due to padding bits. I asked for an easy way to determine "usable bit
size" but got no reply, so I guess it's just some code to do at init.
Not too far up on the priority list, I must admit.

 