In article <060220051856274220%(E-Mail Removed)>, ben <(E-Mail Removed)> wrote:

>i'm following an algorithm book and am stuck on an early excersise in
>it, not because of the c programming side of it or even the algorithm
>side of it, i don't think, but because of maths. i don't really
>understand what is expected, or really what the question means. could
>anyone explain what the question's after please?
>
>Prove an upper bound on the number of machine instructions required to
>process M connections on N objects using the below programme. You may
>assume, for example, that any C assignment statement always requires
>less than c instructions, for some fixed constant c.
The question is about algorithm time complexity. For instance,
sorting operations are typically O(N log N) or O(N * N), where N
is the number of items being sorted. Roughly speaking, that tells
you how long it takes to sort 10 items, vs 1500, vs a million
(or indeed any other number N).

Suppose the algorithm is "order n-squared", and it takes about a
millisecond to sort 10 items. Then:

10*10     =     100  =>     1.00 millisecond
1500*1500 = 2250000  => 22500.00 milliseconds (or 22.5 seconds)
1mil*1mil =    1e12  =>     1e10 milliseconds
                     =      1e7 seconds
                     =     ~116 days
                     =    ~0.32 years

Hence, sorting a million items with this code is probably not a
good idea.

On the other hand, suppose you find an algorithm that is "order n
log n" (log2, in this case, although it does not matter in big-O
notation because log10 X / log2 X is a constant). Suppose it is
about three times slower for ten items, taking just over 3 milliseconds
(I am doing this to make the math easier). Then:

10 * log2 10     =       33.2  =>       3.32 milliseconds
1500 * log2 1500 =    15826.1  =>    1582.61 milliseconds (~1.6 seconds)
1mil * log2 1mil = 19931568.6  => 1993156.86 milliseconds
                               =      ~1993 seconds
                               =      ~33.2 minutes

As you can see, an "N log N" algorithm does not bog down nearly as
fast as an "N squared" one -- even if it starts out a little slower,
it is faster once you have a significant number of items to process.

Big-O or "order" notation is useful for deciding how much data you
can stand to process. It completely ignores constant factors, however:
if routine A is O(n log n) and routine B is O(n log n), they have
the same order, but A can still run a million times faster than B
in all cases. So it is not the entire picture, just a very important
part of it.

General algorithm questions (including "is this O(whatever)") mostly
belong in comp.programming.

--
In-Real-Life: Chris Torek, Wind River Systems
Salt Lake City, UT, USA (40°39.22'N, 111°50.29'W) +1 801 277 2603
email: forget about it
http://web.torek.net/torek/index.html
Reading email is like searching for food in the garbage, thanks to spammers.