
# sscanf question

Keith Thompson
Guest
Posts: n/a

 12-13-2012
"BartC" <(E-Mail Removed)> writes:
> "Fred K" <(E-Mail Removed)> wrote in message
> news:(E-Mail Removed)...
>> On Thursday, December 13, 2012 2:31:41 PM UTC-8, Fred K wrote:
>>> On Thursday, December 13, 2012 1:40:58 PM UTC-8, Prathamesh Kulkarni
>>> wrote:
>>> Please could you suggest some better ways than float/double to
>>> represent monetary amounts? Thanks
>>>
>>> --
>>> -Ed Falk, http://www.velocityreviews.com/forums/(E-Mail Removed)
>>> http://thespamdiaries.blogspot.com/

>>
>> Integer times 100 - i.e., in integer units of the smallest unit in the
>> monetary system.
>>
>> So $1.23 is stored as 123
>>
>> The reason is to avoid roundoff and truncation errors.

>
> You still get problems when doing calculations (there are remainders to take
> care of).
>
> Floating point is workable if you know that an amount might be meant to
> represent a whole number of cents for example. Or you can use floating
> point to represent cents rather than dollars ...

Floating-point can't exactly represent all integer values within its
range -- and operations that yield inexact results generally don't give
any warning about a loss of precision. If you can guarantee that all
the numbers you'll be dealing with are within the contiguous range of
integers that can be represented exactly, then you'll probably be ok.
But then why use floating-point rather than a large integer type in the
first place?

For *serious* financial calculations, there are typically standards that
specify exactly how those calculations are to be carried out. In such
contexts, you need to find those standards and follow them.

--
Keith Thompson (The_Other_Keith) (E-Mail Removed) <http://www.ghoti.net/~kst>
Will write code for food.
"We must do something. This is something. Therefore, we must do this."
-- Antony Jay and Jonathan Lynn, "Yes Minister"

glen herrmannsfeldt

 12-14-2012
Keith Thompson <(E-Mail Removed)> wrote:

(snip, someone wrote)
>> Floating point is workable if you know that an amount might be meant to
>> represent a whole number of cents for example. Or you can use floating
>> point to represent cents rather than dollars ...

> Floating-point can't exactly represent all integer values within its
> range -- and operations that yield inexact results generally don't give
> any warning about a loss of precision. If you can guarantee that all
> the numbers you'll be dealing with are within the contiguous range of
> integers that can be represented exactly, then you'll probably be ok.

Also, only as long as you don't do division. Division will work for a
while, but it can fail long before add, subtract, and multiply do, when
the result is rounded differently than you expect.

Now, with IEEE you know that the result is rounded.

IBM S/360 (and HFP on newer processors) truncates the quotient
(except on the 360/91 and related processors). The truncated
quotient might be a better choice if you are using it for
integer arithmetic.

> But then why use floating-point rather a large integer type
> in the first place?

Yes, good question. In the days of 16 bit integers and 64 bit
floating point, I know some used floating point for larger
integer values, but now with 64 bit integers on most machines
there isn't much of an excuse anymore.

> For *serious* financial calculations, there are typically standards that
> specify exactly how those calculations are to be carried out. In such
> contexts, you need to find those standards and follow them.

Yes.

And the same goes, as Knuth observed, for typesetting: TeX does all its
internal arithmetic in fixed point.

-- glen

BartC

 12-14-2012

"Keith Thompson" <(E-Mail Removed)> wrote in message
news:(E-Mail Removed)...
> "BartC" <(E-Mail Removed)> writes:
>> "Fred K" <(E-Mail Removed)> wrote in message

>>> So $1.23 is stored as 123
>>>
>>> The reason is to avoid roundoff and truncation errors.

>>
>> You still get problems when doing calculations (there are remainders to
>> take
>> care of).
>>
>> Floating point is workable if you know that an amount might be meant to
>> represent a whole number of cents for example. Or you can use floating
>> point to represent cents rather than dollars ...

> Floating-point can't exactly represent all integer values within its
> range

64-bit floats can still represent some $45 trillion, even if you work in
cents. For anything more than that, perhaps you should make use of some
'serious' methods. But we're talking about the price of coffee.

> -- and operations that yield inexact results generally don't give
> any warning about a loss of precision.

Integers aren't much better, when division is introduced. And division is
necessary to do things like percentages. Unless the proposal is not only to
use integers, but to use rationals so that you never get to do the division
(but I'd be interested in how you'd print out the results!).

> But then why use floating-point rather a large integer type in the first
> place?

Because integers don't do fractions very well. Not without using special
methods, at which point it becomes easier to just use floating point.

If you *know* that intermediate results need to be rounded to the nearest
cent, for example, then you just round to the nearest cent (ie. to the
nearest whole number). As far as add, subtract and multiply are concerned,
calculating using floating point whole numbers is little different to using
integers. But if you then have to calculate 15.9% tax on $53.49, it's a lot
easier with floating point.

(I've done this stuff where the total needed to be calculated in two
separate currencies. The conversion rate between them was specified to four
decimal places. Or you're buying by weight or length, and need to use either
metric or Imperial measure. These things just aren't integer-friendly!)

--
bartc

glen herrmannsfeldt

 12-14-2012
BartC <(E-Mail Removed)> wrote:

> "Keith Thompson" <(E-Mail Removed)> wrote in message

(snip)
>> Floating-point can't exactly represent all integer values within its
>> range

> 64-bit floats can still represent some \$45 trillion dollars, even if you
> work in cents. Anything more than that, then perhaps you should make use of
> some 'serious' methods. But we're talking about the price of coffee.

>> -- and operations that yield inexact results generally don't give
>> any warning about a loss of precision.

> Integers aren't much better, when division is introduced. And division is
> necessary to do things like percentages. Unless the proposal is not only to
> use integers, but to use rationals so that you never get to do the division
> (but I'd be interested in how you'd print out the results!).

Integers are still a lot better.

>> But then why use floating-point rather a large integer type in the first
>> place?

> Because integers don't do fractions very well. Not without using special
> methods, at which point it becomes easier to just use floating point.

> If you *know* that intermediate results need to be rounded to the nearest
> cent, for example, then you just round to the nearest cent (ie. to the
> nearest whole number). As far as add, subtract and multiply are concerned,
> calculating using floating point whole numbers is little different to using
> integers. But if you then have to calculate 15.9% tax on $53.49, it's a lot
> easier with floating point.

Take 5349 (cents) multiply by 159 and then divide by 1000 to get the
truncated tax in cents. But with tax, you have to watch the rounding.
If you want the rounded value, add 500 before dividing by 1000.

Some years ago I lived in Maryland, where, as far as I could tell, the
tax was rounded up. You can add from 0 to 999 before dividing depending
on how you want it rounded. You do have to be sure that the intermediate
result doesn't overflow, but with 64 bit integers that shouldn't be hard.

With floating point, the intermediate values might be rounded. You then
have to be careful to avoid double rounding, which can give an
unexpected wrong result.

With integer division, you control the rounding.

> (I've done this stuff where the total needed to be calculated in two
> separate currencies. The conversion rate between them was specified to four
> decimal places. Or you're buying by weight or length, and need to use either
> metric or Imperial measure. These things just aren't integer-friendly!)

Multiply by the value with four decimal places, after moving the decimal
point to the right four places, then divide by 10000. Not hard at all.
Again, add the appropriate amount before dividing to round as needed.

-- glen

Brian O'Brien

 12-14-2012
Yep.... I hear you when it comes to the floating point numbers...
Particularly .1 and .2. These numbers don't express themselves in binary... at least not exactly. So yes, you can get lots of rounding errors.

I think scanf may not be the best way... perhaps scanning from right to left until you find something that isn't a digit or '.' would help extract the number... then anything from the left up to the location found would be the item, less any denomination symbol.

Thank you all for your input.
B.

BartC

 12-14-2012
"glen herrmannsfeldt" <(E-Mail Removed)> wrote in message
news:kae67t\$isr\$(E-Mail Removed)...
> BartC <(E-Mail Removed)> wrote:

>> Because integers don't do fractions very well. Not without using special
>> methods, at which point it becomes easier to just use floating point.

>> But if you then have to calculate 15.9% tax on $53.49, it's a lot
>> easier with floating point.

>
> Take 5349 (cents) multiply by 159 and then divide by 1000 to get the
> truncated tax in cents. But with tax, you have to watch the rounding.
> If you want the rounded value, add 500 before dividing by 1000.

OK, by using 'special methods' as I mentioned. You've now also introduced
the arbitrary figures 10 (10*15.9), 500 and 1000, and presumably need to
impose some arbitrary convention as to how percentage values are going to be
represented in the program (I guess you can't just have 15.9 or 0.159).

At this level of program, just using floating point is a lot easier! You
just need to round to a cent at every stage, taking care when values are
negative; one little function.

> With floating point, the intermediate values might be rounded.

Do you have a calculation where that rounding (which will be to the
low-order bits of the representation) will actually give a different result
to doing the whole thing with integers? (And with ordinary amounts that are
likely to be encountered.)

> Multiply by the value with four decimal places, after moving the decimal
> point to the right four places, then divide by 10000. Not hard at all.

No. Waste of time inventing floating point, really!

--
Bartc

glen herrmannsfeldt

 12-14-2012
BartC <(E-Mail Removed)> wrote:

(snip, someone wrote)
>>> But if you then have to calculate 15.9% tax on $53.49, it's a lot
>>> easier with floating point.

(then I wrote)
>> Take 5349 (cents) multiply by 159 and then divide by 1000 to get the
>> truncated tax in cents. But with tax, you have to watch the rounding.
>> If you want the rounded value, add 500 before dividing by 1000.

> OK, by using 'special methods' as I mentioned. You've now also introduced
> the arbitrary figures 10 (10*15.9), 500 and 1000, and presumably need to
> impose some arbitrary convention as to how percentage values are going to be
> represented in the program (I guess you can't just have 15.9 or 0.159).

Well, you can do it in fixed point decimal just fine. If you write
it in PL/I, instead of C, 15.9 is FIXED DECIMAL(3,1), that is, three
digits with one after the decimal point. If you multiply, then the
compiler knows what you mean and does it all for you, in fixed point.

They added complex numbers to C, yet not many people use them. Maybe
next we can have fixed decimal to make these calculations easier.

> At this level of program, just using floating point is a lot easier! You
> just need to round to a cent at every stage, taking care when values are
> negative; one little function.

>> With floating point, the intermediate values might be rounded.

> Do you have a calculation where that rounding (which will be to the
> low-order bits of the representation) will actually give a different result
> to doing the whole thing with integers? (And with ordinary amounts that are
> likely to be encountered.)

With interest rates, it might be unusual, but, as previously mentioned,
exchange rates often go to four decimal places. It won't take such big
values to get rounding errors in that case. How long will it take you
to show that they don't occur?

You might look at: http://dl.acm.org/citation.cfm?id=221334

>> Multiply by the value with four decimal places, after moving the decimal
>> point to the right four places, then divide by 10000. Not hard at all.

> No. Waste of time inventing floating point, really!

Floating point was invented for scientific problems that range over
many orders of magnitude, and have inherent relative uncertainty.
That is, the uncertainty scales with the value.

In finance and typesetting, the uncertainty (cents or pixels) is fixed,
independent of the size of the values. In the case of absolute
uncertainty, fixed point arithmetic is a much better choice.

Calculations in exadollars or femtodollars are pretty rare,
but femtometers and exameters not so rare. (Nuclear physics
and astrophysics, respectively.)

It will take longer to prove that the rounding is right
in floating point than to do it right in fixed point.

-- glen

Ben Bacarisse

 12-14-2012
"Brian O'Brien" <(E-Mail Removed)> writes:

> Yep.... I hear you when it comes to the floating point numbers...
> Particularly .1 and .2 These number don't express themselves in
> binary ... at least not exactly..

It's a little odd to single out .1 and .2. The same is true of .3, .4,
.6, .7, .8 and .9.

<snip>
--
Ben.

BartC

 12-14-2012
"glen herrmannsfeldt" <(E-Mail Removed)> wrote in message
news:kaf5ge\$ppe\$(E-Mail Removed)...
> BartC <(E-Mail Removed)> wrote:

>>> With floating point, the intermediate values might be rounded.

>
>> Do you have a calculation where that rounding (which will be to the
>> low-order bits of the representation) will actually give a different
>> result
>> to doing the whole thing with integers? (And with ordinary amounts that
>> are
>> likely to be encountered.)

> With interest rates, it might be unusual, but, as previously mentioned,
> exchange rates often go to four decimal places. It won't take such big
> values to get rounding errors in that case. How long will it take you
> to show that they don't occur?

I'm asking you for *one* example where there might be a problem, but you're
asking me to verify zillions of possible calculations? That's not fair!

Actually I've now found my own examples where there might be differences.

For example, 1% of $1234.50 (0.01*123450). Whenever a result has an odd 0.5
cents that needs to be rounded up or down (with different rounding methods,
the problem just shifts elsewhere).

Although neither integer nor floating point will give the 'right' answer,
with integers it's possible to consistently round the same way.

So you're right, although the intermediate rounding, while affecting which
way it might go, has less to do with it than the errors in representing
values such as 0.01.

Of course you can just do whole-number calculations as you suggested, but
still using floating point; which shows it's at least versatile!

--
Bartc

glen herrmannsfeldt

 12-14-2012
BartC <(E-Mail Removed)> wrote:

(snip, I wrote)
>>>> With floating point, the intermediate values might be rounded.

>>> Do you have a calculation where that rounding (which will be to the
>>> low-order bits of the representation) will actually give a different
>>> result
>>> to doing the whole thing with integers? (And with ordinary amounts that
>>> are
>>> likely to be encountered.)

>> With interest rates, it might be unusual, but, as previously mentioned,
>> exchange rates often go to four decimal places. It won't take such big
>> values to get rounding errors in that case. How long will it take you
>> to show that they don't occur?

> I'm asking you for *one* example where there might be a problem,
> but you're asking me to verify zillions of possible calculations?
> That's not fair!

When you put your money in a bank, you want to know that they will
do the interest computation right. You won't ask if they have an
example where they did it wrong, but that they do it right under
all conditions.

Fortunately, you don't have to ask that because regulators will

-- glen