ImpalerCore <(EMail Removed)> writes:
> On Sep 23, 12:29 pm, Ben Bacarisse <(EMail Removed)> wrote:
>> ImpalerCore <(EMail Removed)> writes:
>>
>
> [snip]
>>
>> To be really picky, it's problematic even for integer types. You
>> probably don't want to generate trap representations with a probability
>> determined by the number of padding bits. In fact, you may want to avoid
>> them altogether. Also, for integer systems that permit two (non-trap)
>> zero representations, the result won't be "value" uniform since 0 will
>> be (ever so slightly) more common. Of course, having both zeros
>> represented might be seen as an advantage when testing.
>
> I just discovered that my 'c_rand_int( 1000000 + 1 )' was looping
> infinitely because my 'limit' was negative. It appears that it needs
> some adjustment for when INT_MAX > RAND_MAX; in my case, RAND_MAX is
> 32767. Is there an alternative to filling bytes that doesn't tread
> into risky territory?
If the risky territory is the possibility of padding bits and so on,
then I think you need to take a value-based approach. For example, one
could start with a uniform double in [0, 1) and use that to get a uniform
int in [0, INT_MAX]. There's risky territory here also, because INT_MAX
might exceed the range of exact integers that double can represent.
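A minimal sketch of that value-based approach (the helper names are my
own, not from your code). It assumes IEEE doubles, so it only maps
values exactly while limit stays below 2^53; and note that since rand()
yields at most RAND_MAX+1 distinct values, a limit larger than RAND_MAX
leaves gaps in the output range:

```c
#include <stdlib.h>

/* Uniform double in [0, 1).  RAND_MAX + 1.0 avoids integer overflow
   when RAND_MAX == INT_MAX. */
static double rand_unit(void)
{
    return rand() / ((double)RAND_MAX + 1.0);
}

/* Uniform int in [0, limit], limit >= 0.  Scaling by limit + 1 keeps
   the result inside the closed range because rand_unit() < 1. */
static int rand_int_scaled(int limit)
{
    return (int)(rand_unit() * ((double)limit + 1.0));
}
```

The division form sidesteps the modulo-bias and the negative-limit
overflow you hit, but it inherits whatever structure rand() has in its
low-order bits.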

Most good random number generators produce random bits (but if
RAND_MAX+1 is not a power of 2 this will probably not be true), so you
can often just take bytes from rand() and put them where you need them.
This is equivalent to what you were doing with % (UCHAR_MAX + 1). The
point about the bits being random is simply that you don't need to worry
about bias when reducing the range to some smaller number of bits.
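For instance, a sketch of the byte-assembly idea (assuming RAND_MAX+1
is a power of 2 no smaller than UCHAR_MAX+1, so each extracted byte is
itself uniform):

```c
#include <limits.h>
#include <stdlib.h>

/* Build an unsigned int from successive rand() bytes.  Each call to
   rand() contributes its low-order byte; with a power-of-2 RAND_MAX+1
   those bytes are uniform, so the assembled value is too. */
static unsigned int rand_uint(void)
{
    unsigned int r = 0;
    size_t i;
    for (i = 0; i < sizeof r; i++)
        r = (r << CHAR_BIT) | (unsigned int)(rand() % (UCHAR_MAX + 1));
    return r;
}
```

Reducing the result to a smaller power-of-2 range is then just a mask,
e.g. `rand_uint() & 0xFFu`, with no bias to worry about.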

Ben.
