In article <(E-Mail Removed)>,
Steven Jenkins <(E-Mail Removed)> wrote:

>Brian Schroeder wrote:
>> That is interesting enough. Anybody here would like to explain to me, why
>> log_2 is harder than log_10 or ln. I just assumed that anything binary
>> would be nice for computers.
>
>It's neither harder nor easier, it's just not that useful. Base 2
>logarithms aren't really needed in most math and engineering. Even in
>computer science and information theory, where the base 2 log is an
>important analytical concept, it's rarely employed in precision
>calculations. Those are not the kind of problems you attack with
>numerical methods.
>
>It takes a lot of care and skill to develop a math library; much more
>than naively coding power series expansions of transcendental functions.
>People tend to put that work where it's really needed.
>
>The good news is that for most purposes in computer science,
>
>log2(x) = ln(x) / ln(2)
>
>is plenty precise. Note that ln(2) is a constant.
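The quoted change-of-base identity translates directly into Ruby; a minimal sketch (the method name log2 here is just for illustration, and newer Rubies provide Math.log2 built in):

```ruby
# Base 2 logarithm via the change-of-base identity:
#   log2(x) = ln(x) / ln(2)
LN2 = Math.log(2)  # ln(2) is a constant, so compute it once

def log2(x)
  Math.log(x) / LN2
end

log2(8)     # close to 3.0 (floating point)
log2(1024)  # close to 10.0
```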
I do occasionally need log base 2 operations. Mostly I need to figure
out how many bits are needed to represent some range (for example, given a
range of 0 to 7, I need a 3-bit counter) in a hardware representation. I
also ran into a need for log2 when I was doing some Karnaugh map
manipulations for a class project I did last quarter.

You're right though, I don't need any sort of precision for these
applications, integers only:

log2(2) => 1
log2(3) => 2 (it's actually something like 1.5849, apply ceiling op)
log2(4) => 2
log2(5) => 3
log2(6) => 3
log2(7) => 3
log2(8) => 3
log2(9) => 4
.....

As I recall, since I only needed up to log2(32) (5 bits), and since it needed
to be as fast as possible, I just defined my own method that uses a case
statement to 'lookup' the desired value.
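A sketch of that kind of case-statement lookup (bits_for is a hypothetical name, and the 2..32 input range is assumed from the description above):

```ruby
# Ceiling of log2 via a case 'lookup' -- fast for a small, fixed range.
# Each branch covers the values whose ceil(log2(n)) is the same.
def bits_for(n)
  case n
  when 2      then 1
  when 3..4   then 2
  when 5..8   then 3
  when 9..16  then 4
  when 17..32 then 5
  else raise ArgumentError, "expected 2..32, got #{n}"
  end
end

bits_for(7)   # => 3
bits_for(32)  # => 5
```

Ranges in a Ruby case statement are matched with ===, so each when clause is a simple inclusion test rather than a chain of comparisons.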

Phil