Leigh Johnston wrote:

> Wrong; it is not a sign of bad design at all. If a variable is used as
> an unsigned value more often than as a signed value and it logically
> makes sense for it to be unsigned (e.g. it is never negative) then
> making the variable unsigned makes perfect sense.
--8x--
> You might be in the "use int everywhere" camp but I am not. Perhaps
> Java is a more suitable language for you rather than C++.

For what it's worth, here's my take on this. It is a piece of text I wrote earlier, which explains its article-like form. I am in the 'use int everywhere' camp.

Ideas can't and shouldn't be forced, but they can be shared; the following text gives the reasoning behind my position, not a claim that the other camps are wrong.

In this article I present two problems with using unsigned integers in a program: the _zero boundary problem_ and the _extended positive range problem_.

Zero boundary problem
---------------------

Most of the time we assume that, away from the boundaries caused by the finite representation, an integer object works like an element of the integer ring ZZ. In addition, it seems plausible that most integer calculations are concentrated in a neighborhood of zero.

The _zero boundary problem_ (of unsigned integers) is that zero lies on the boundary beyond which the assumption of working with ordinary integers falls apart. Thus the probability of introducing errors in computations increases greatly. Furthermore, those errors cannot be caught, since every resulting value is legal.
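
For instance, a subtraction that dips below zero does not fail; it silently wraps. Here is a minimal sketch (the variable names are mine, and the printed value assumes a 32-bit `unsigned int`):

#include <iostream>

int main()
{
    unsigned int a = 3;
    unsigned int b = 5;
    // Mathematically a - b == -2, but unsigned arithmetic is modular,
    // so the result wraps around to a huge value. Since that value is
    // perfectly legal, no check can flag the error after the fact.
    unsigned int c = a - b;
    std::cout << c << "\n"; // prints 4294967294
    return 0;
}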

### Looping with unsigned integers

Looping backwards from n down to zero demonstrates the zero boundary problem with unsigned integers:

for (unsigned int i = n; i >= 0; --i)
{
    // Do something.
}

Since there are no negative values to fail the loop test, this loop never finishes: when i reaches zero, the next decrement wraps it around to the largest unsigned value, and the test i >= 0 remains true. In contrast, with signed integers the problem does not exist, since the boundaries are located far away from zero:

for (int i = n; i >= 0; --i)
{
    // Do something.
}
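
For completeness: there is a well-known idiom that makes a backwards unsigned loop terminate, by testing before decrementing. A minimal sketch (note that it needs care of its own, which rather supports the point):

for (unsigned int i = n + 1; i-- > 0; )
{
    // The body sees i = n, n - 1, ..., 0 and the loop then stops,
    // because the test reads the old value before the decrement.
    // Caveat: the initializer n + 1 itself wraps if n is the maximum
    // unsigned value, so even the workaround has a boundary case.
}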

Extended positive range problem
-------------------------------

Conversion between an unsigned integer and a signed integer is an information-destroying process in either direction: a negative value has no unsigned counterpart, and a value above the signed maximum has no signed counterpart, so in both directions values can silently change meaning. The only way to avoid this is to use one or the other consistently.
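
A minimal sketch of both directions (the printed values assume a 32-bit `int`):

#include <iostream>

int main()
{
    int negative = -1;
    unsigned int u = negative; // -1 has no unsigned counterpart;
    std::cout << u << "\n";    // wraps to 4294967295

    unsigned int big = 4000000000u; // above INT_MAX: no signed counterpart
    int s = static_cast<int>(big);  // implementation-defined before C++20
    std::cout << s << "\n";         // typically prints -294967296
    return 0;
}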

If normal arithmetic is the intention, then a signed integer represents a more general concept than an unsigned integer: the former covers both the negative and the positive integers, whereas the latter covers only the non-negative integers. In programming terms, it is possible to create a program using signed integers alone; the same can't be said of unsigned integers. Therefore, if only one type is to be used consistently, the choice should be the signed integer. However, let us do some more analysis.

Since any non-trivial program must use signed integers, the use of unsigned integers eventually leads to an unsigned-signed conversion. In particular, because `std::size_t` in the Standard Library is an unsigned integer type, there are few programs that can escape the unsigned-signed conversions.
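
As an example of how routinely this happens, consider a plain indexing loop over a standard container; this sketch assumes nothing beyond `std::vector`:

#include <vector>

int sum(const std::vector<int>& v)
{
    int total = 0;
    // The comparison i < v.size() pits a signed int against an unsigned
    // std::size_t; i is implicitly converted to unsigned, so every
    // iteration performs exactly the conversion discussed above.
    // Most compilers warn about it (e.g. -Wsign-compare).
    for (int i = 0; i < v.size(); ++i)
    {
        total += v[i];
    }
    return total;
}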

Despite these conversions, programs still seem to work. The reason, I suspect, is that unsigned integers are normally not taken into the extended positive range they allow.

The _extended positive range problem_ is that if unsigned integers are taken into their extended positive range, then the unsigned-signed conversions stop being benign and actually destroy information. Ensuring correctness under such a threat is hard, if not practically impossible.
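
To make this concrete, here is a minimal sketch (assuming a 32-bit `int`) of a value from the extended positive range turning negative as soon as it crosses into signed territory:

#include <iostream>

int main()
{
    // Three billion fits in a 32-bit unsigned int but not in a 32-bit int.
    unsigned int count = 3000000000u;
    int n = static_cast<int>(count); // implementation-defined before C++20
    std::cout << n << "\n";          // typically prints -1294967296
    return 0;
}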

Conclusion
----------

Using unsigned integers to model integers decreases the probability that a program is correct.

--
http://kaba.hilvi.org