
# Accessing array elements via floating point formats.

George Neuner
Guest
Posts: n/a

 12-14-2010
On Mon, 13 Dec 2010 01:09:21 -0800 (PST), Malcolm McLean
<(E-Mail Removed)> wrote:

>>On Dec 13, 6:55 am, George Neuner <(E-Mail Removed)> wrote:
>>
>> But the point of table lookup in any algorithm is to have part of the
>> answer precomputed. If the entire calculation can be done
>> sufficiently quickly, precomputing becomes unnecessary.
>>

>Suppose we have a table of 100 values representing sine wave. We have
>theta in a floating point.
>
>By taking theta/2PI * 100 we can create an index into the sine table.
>However this calculation isn't inherently integral.
>If we linearly interpolate between floor(theta/2PI * 100) and
>ceil(theta/2PI * 100) we can get slightly more accurate results. If the
>hardware does it automatically for us, we can get the results very quickly.

Yes, but you can also compute sine directly without using a table and
my point was that, in general, using a table is unnecessary if the
direct computation is fast enough. Trigonometric functions happen to
be one area where the direct computation is, in general, too slow.

Hardware interpolation within an interval could be very useful, but
there are just too many ways to perform interpolation, with good
reasons for each. I seriously doubt that putting any one method in
hardware would convince people using other methods to switch.

And would the hardware method handle extrapolation as well? If so,
how?

George

Skybuck Flying
Guest
Posts: n/a

 12-14-2010

"Andy "Krazy" Glew" <(E-Mail Removed)-arch.net> wrote in message
news:(E-Mail Removed) ...
> On 12/13/2010 6:27 PM, Skybuck Flying wrote:
>> "Andy "Krazy" Glew"<(E-Mail Removed)-arch.net> wrote in message
>> news:(E-Mail Removed) ...
>>> On 12/13/2010 7:50 AM, Skybuck Flying wrote:
>>>>> Apparently somebody on Google completely misunderstood how the
>>>>> fractional part would be used to access individual bits. He's not
>>>>> in my Outlook Express folder so he's probably a troll.
>>>>
>>>> Concerning the potential troll... I cannot find his posting anymore...
>>>> but it doesn't matter... at least I clarified a bit how I saw it.
>>>>
>>>> The nice thing is it doesn't matter how the floating point format
>>>> works...
>>>> because we human beings can design the language to fit whatever we
>>>> want...
>>>> so we don't have to use the fractional part for anything... and can
>>>> give
>>>> the
>>>> source code notation a different meaning.
>>>>
>>>> Bye,
>>>> Skybuck.
>>>>
>>>>
>>>
>>> But, what about decimal versus binary floating point?

>>
>> What about it ?
>>
>> Screw decimals... computers work with binary !

>
> (1) IEEE Decimal Floating Point, 754-2008
>
> (2) You said "7.1". You realize that ".1" is not an exactly
> representable binary fraction?
>
> You probably meant "7.125".

No, I meant 7.1.

The .1 means something totally different once it's between the brackets [].

It's treated as a modifier.

It's just a constant decimal notation... which is either stuffed into an
integer... or perhaps stuffed into the floating point itself somewhere, if
possible and if that doesn't upset the hardware.

Bye,
Skybuck.

Malcolm McLean
Guest
Posts: n/a

 12-14-2010
On Dec 14, 8:27 am, George Neuner <(E-Mail Removed)> wrote:
>
> And would the hardware method handle extrapolation as well. If so,
> how?
>

No, extrapolation is, in general, invalid, whilst interpolation is
generally valid.

(e.g. if I have a list of the number of aircraft in the US Air Force
between December 1941 and August 1945, one entry per month, then I can
make a reasonable guess at the mid-month numbers by taking the mean of
the two neighbouring figures. However I can't extrapolate beyond August
1945; something happened then to the US Air Force and my predictions
will be wildly wrong.)
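The mid-month estimate described above is just linear interpolation between two neighbouring table entries; a minimal sketch (function name mine):

```c
/* Linear interpolation: estimate a value at fractional position t
   (0 <= t <= 1) between two known samples y0 and y1. */
double lerp(double y0, double y1, double t)
{
    return y0 + (y1 - y0) * t;
}
```

A mid-month guess between two monthly counts would then be lerp(count_jan, count_feb, 0.5); extrapolation is the same formula with t outside [0, 1], which is exactly where its validity breaks down.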

Skybuck Flying
Guest
Posts: n/a

 12-14-2010

"Andy "Krazy" Glew" <(E-Mail Removed)-arch.net> wrote in message
news:(E-Mail Removed)...
> On 12/13/2010 11:37 PM, Skybuck Flying wrote:
>> "Andy "Krazy" Glew"<(E-Mail Removed)-arch.net> wrote in message
>> news:(E-Mail Removed) ...
>>> On 12/13/2010 6:27 PM, Skybuck Flying wrote:
>>>> "Andy "Krazy" Glew"<(E-Mail Removed)-arch.net> wrote in message
>>>> news:(E-Mail Removed) ...
>>>>> On 12/13/2010 7:50 AM, Skybuck Flying wrote:
>>>>>>> Apparently somebody on Google completely misunderstood how the
>>>>>>> fractional part would be used to access individual bits. He's not
>>>>>>> in my Outlook Express folder so he's probably a troll.
>>>>>>
>>>>>> Concerning the potential troll... I cannot find his posting anymore...
>>>>>> but it doesn't matter... at least I clarified a bit how I saw it.
>>>>>>
>>>>>> The nice thing is it doesn't matter how the floating point format
>>>>>> works...
>>>>>> because we human beings can design the language to fit whatever we
>>>>>> want...
>>>>>> so we don't have to use the fractional part for anything... and can
>>>>>> give
>>>>>> the
>>>>>> source code notation a different meaning.
>>>>>>
>>>>>> Bye,
>>>>>> Skybuck.
>>>>>>
>>>>>>
>>>>>
>>>>> But, what about decimal versus binary floating point?
>>>>
>>>> What about it ?
>>>>
>>>> Screw decimals... computers work with binary !
>>>
>>> (1) IEEE Decimal Floating Point, 754-2008
>>>
>>> (2) You said "7.1". You realize that ".1" is not an exactly
>>> representable binary fraction?
>>>
>>> You probably meant "7.125".

>>
>> No, I meant 7.1.
>>
>> The .1 means something totally else once it's between the brackets[]
>>
>> It's treated as a modifier.
>>
>> Is just a constant decimal notation... which is either stuffed into an
>> integer... or perhaps stuffed into the floating point itself somewhere if
>> possible and if that doesn't upset the hardware.

>
> Then you are not using an IEEE binary floating point representation for
> your addresses. You are using something that looks like floating point
> when typed, but is actually something else.
> I.e. just a language syntax and notation.

No, the floating point value is fed to the CPU, which will take care of
it... as will the optional integer or whatever it may be.

Bye,
Skybuck.

Skybuck Flying
Guest
Posts: n/a

 12-14-2010
vArray[
HowManyCharactersDoYouNeedToTypeBeforeItGetsVeryAnnoyingAndCostlyToConsistentlyHaveToCastToIntHaveYouEverWrittenALotOfArrayCodeLikeIHaveInDelphiNoYouProbablyHaveNotBecauseCDoesNotHaveTheGreatArraySupportThatDelphiHasSoInOtherWordsYouHaveNoIdeaWhatSoEverWhatsItLikeToConstantlyHaveToRoundIndexes
] = YouGetItNow ?

As for your "combining bits of 1.5": that idea is just stupid. Why would
you even want to do that? Never... and it's confusing as well.

Since vArray[ 5.4 / 3.4 ] will normally not compile anyway, this
notation is not valid and therefore it's not a problem.

In my original idea I said to ignore the fraction... I think that's best
because newbies don't understand fractions that well... and the fractions
are probably not that useful anyway... You could try to write weird code
like:

vArray[ (1.0 + 1/2 + 1/4 + 1/8 + ...) ] = vSomething;

But wouldn't you much rather write something shorter like:

vArray[ 1.7 ] =

At least .7 is easy to remember, while 0.5 + 0.25 + 0.125 and so forth
is not.

Also there is nothing preventing the compiler from interpreting the above
code as:

vArray[ Float.Integer ] =

Two separate variables... one for array indexing, one for bit indexing.
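One possible decoding of the "two separate variables" reading — purely a sketch of the idea, with names of my own, and assuming ".7" is read as the decimal digit 7 rather than a binary fraction — would split the float with modf:

```c
#include <math.h>

/* Hypothetical decoding of an index such as 1.7: the integer part
   selects the array element, the first fractional digit selects a bit.
   The digit-based reading is an assumption, not an established API. */
void split_index(double idx, int *elem, int *bit)
{
    double ipart;
    double frac = modf(idx, &ipart);    /* 1.7 -> ipart 1.0, frac ~0.7 */
    *elem = (int)ipart;
    *bit  = (int)(frac * 10.0 + 0.5);   /* round, so ".7" becomes digit 7 */
}
```

The rounding step matters because .7 is not exactly representable in binary floating point; without it, truncation would turn 1.7 into bit 6.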

If the bit indexing is a bad idea... fine, then drop it.

But at least the float idea is nice, which was my original idea!

Bye,
Skybuck =D

George Neuner
Guest
Posts: n/a

 12-14-2010
On Tue, 14 Dec 2010 00:09:20 -0800 (PST), Malcolm McLean
<(E-Mail Removed)> wrote:

>On Dec 14, 8:27 am, George Neuner <(E-Mail Removed)> wrote:
>>
>> And would the hardware method handle extrapolation as well. If so,
>> how?
>>

>No, extrapolation is, in general, invalid, whilst interpolation is
>generally valid.
>
>(eg if I have a list of the number of aircraft in the US airforce
>between December 1941 and August 1945, one entry per month, then I can
>make a reasonable guess of the numbers mid-month by taking the mean of
>the other two figures. However I can't extrapolate beyond August 1945,
>something happened then to the US airforce and my predictions will be
>wildly wrong).

But a robot needs to extrapolate future positions based on its
current course and speed. It must do this for a variety of reasons,
but chiefly to determine whether a probable future position will
intersect with an obstacle.

George

Keith Thompson
Guest
Posts: n/a

 12-14-2010
Malcolm McLean <(E-Mail Removed)> writes:
> On Dec 14, 8:27 am, George Neuner <(E-Mail Removed)> wrote:
>>
>> And would the hardware method handle extrapolation as well. If so,
>> how?
>>

> No, extrapolation is, in general, invalid, whilst interpolation is
> generally valid.
>
> (eg if I have a list of the number of aircraft in the US airforce
> between December 1941 and August 1945, one entry per month, then I can
> make a reasonable guess of the numbers mid-month by taking the mean of
> the other two figures. However I can't extrapolate beyond August 1945,
> something happened then to the US airforce and my predictions will be
> wildly wrong).

Sure, if you carefully choose the end points of the range to make
extrapolation invalid, you'll find that extrapolation doesn't work
very well.

Interpolation does tend to be more reliable than extrapolation,
simply because you've got (at least) two data points to start with,
but I don't think the difference is as great as you imply.

If you had data from, say, May 1957 to March 1983 (months randomly
chosen from the 20th century), extrapolation would probably be
reasonably accurate.

--
Keith Thompson (The_Other_Keith) http://www.velocityreviews.com/forums/(E-Mail Removed) <http://www.ghoti.net/~kst>
Nokia
"We must do something. This is something. Therefore, we must do this."
-- Antony Jay and Jonathan Lynn, "Yes Minister"

nmm1@cam.ac.uk
Guest
Posts: n/a

 12-14-2010
In article <(E-Mail Removed)>,
Keith Thompson <(E-Mail Removed)> wrote:
>
>Interpolation does tend to be more reliable than extrapolation,
>simply because you've got (at least) two data points to start with,
>but I don't think the difference is as great as you imply.
>
>If you had data from, say, May 1957 to March 1983 (months randomly
>chosen from the 20th century), extrapolation would probably be
>reasonably accurate.

Not if it was anything to do with computers!

There are mathematical reasons that interpolation is more reliable
than extrapolation, in addition to extrapolation being vulnerable
to changing conditions. When it comes to hopeless inaccuracy, yes,
the difference is immense; there is much less difference for mere
minor inaccuracies.

Regards,
Nick Maclaren.

George Neuner
Guest
Posts: n/a

 12-15-2010
On Tue, 14 Dec 2010 04:29:28 -0600, (E-Mail Removed) (Gordon
Burditt) wrote:

>>But having to call such routines all the time seems a bit

>
>Have you looked at what code is generated? Especially for the plain
>cast to int?

Have you? I don't claim that VC2008 is a great compiler, but I
happen to have it handy. I compiled the following for x64 optimized
for speed:

#include <stdio.h>
#include <tchar.h>

typedef long long bignum;

int _tmain(int argc, _TCHAR* argv[])
{
    double f;
    bignum i;

    f = 1.23456789e29;
    i = (bignum) f;

    printf( "%20f -> %lld\n", f, i );
    return 0;
}

The code for the cast is:

cvttsd2si r8,xmm1

Simple enough. But make the following little changes:

typedef unsigned long long bignum;
and
printf( "%20f -> %llu\n", f, i );

and suddenly the "simple" cast becomes:

movsd xmm2,mmword ptr [__real@43e0000000000000 (13FF021C0h)]
xor eax,eax
comisd xmm1,xmm2
movapd xmm0,xmm1
jbe wmain+37h (13FF01037h)
subsd xmm0,xmm2
comisd xmm0,xmm2
jae wmain+37h (13FF01037h)
mov rcx,8000000000000000h
mov rax,rcx
wmain+37h:
cvttsd2si r8,xmm0
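In C terms, that branchy sequence computes roughly the following. This is a sketch of the standard workaround (function name mine), needed because x86-64's cvttsd2si only produces signed results:

```c
#include <stdint.h>

/* double -> unsigned 64-bit conversion using only a signed conversion,
   as the compiler must on x86-64: values at or above 2^63 are shifted
   down into the signed range, converted, then shifted back up. */
uint64_t dtoull(double f)
{
    const double two63 = 9223372036854775808.0;   /* 2^63 */
    if (f < two63)
        return (uint64_t)(int64_t)f;              /* fits the signed range */
    return (uint64_t)(int64_t)(f - two63) + 0x8000000000000000ULL;
}
```

Inputs outside [0, 2^64) are out of range for the conversion either way; the generated code's 0x8000000000000000 constant is the indefinite-integer value the hardware produces on overflow.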

>Please explain why these features are "necessary". You seem to think
>that casting to int is a difficult-to-accomplish task.

Seems to me like it is.

George

George Neuner
Guest
Posts: n/a

 12-16-2010
On Thu, 16 Dec 2010 15:01:34 +0100, Terje Mathisen <"terje.mathisen at
tmsw.no"> wrote:

>George Neuner wrote:
>> On Tue, 14 Dec 2010 04:29:28 -0600, (E-Mail Removed) (Gordon
>> Burditt) wrote:
>>
>>>> But having to call such routines all the time seems a bit
>>>
>>> Have you looked at what code is generated? Especially for the plain
>>> cast to int?

>>
>> Have you? I don't claim that VC2008 is a great compiler, but I
>> happen to have it handy. I compiled the following for x64 optimized
>> for speed:
>>
>> typedef long long bignum;
>>
>> int _tmain(int argc, _TCHAR* argv[])
>> {
>> double f;
>> bignum i;
>>
>> f = 1.23456789e29;
>> i = (bignum) f;
>>
>> printf( "%20f -> %lld\n", f, i );
>> return 0;
>> }
>>
>> The code for the cast is:
>>
>> cvttsd2si r8,xmm1
>>
>>
>> Simple enough. But make the following little changes:
>>
>> typedef unsigned long long bignum;
>> and
>> printf( "%20f -> %llu\n", f, i );
>>
>> and suddenly the "simple" cast becomes:
>>
>> movsd xmm2,mmword ptr [__real@43e0000000000000 (13FF021C0h)]

>
>Loading some magic value, presumably +2^63, i.e. the first fp value that
>won't fit in a signed 64-bit int.
>
>> xor eax,eax

>
>OK, avoiding the REX.W prefix because a 32-bit operation will always
>zero the top 32 bits?
>
>> comisd xmm1,xmm2

>
>This just did a signed fp compare but sets the flags as if it was an
>unsigned int CMP!
>
>> movapd xmm0,xmm1
>> jbe wmain+37h (13FF01037h)

>Skip the correction if the top bit was clear
>
>> subsd xmm0,xmm2

>
>Subtract 2^63
>> comisd xmm0,xmm2

>
>Check again, are we in range?
>> jae wmain+37h (13FF01037h)

>
>If not, the conversion will and should overflow
>
>> mov rcx,8000000000000000h
>> mov rax,rcx
>> wmain+37h:
>> cvttsd2si r8,xmm0

>
>That is actually quite nice code, except for the spurious MOVAPD copy
>from xmm1 to xmm0 it is probably as fast as you can make it while still
>handling all inputs, including out of range, correctly.
>
>Terje

I agree that it is about as good as can be ... but it is a lot more
complex than the single instruction in the signed integer case.

George