Wednesday 11 April 2007

The Joys Of Floating Point Numbers

Sometimes, floating point numbers are the stuff that futile hours of programming are made of. Whether they come in 32, 64 or any other precision you want, they can truly give you unnecessary and unexpected work if you do not use them correctly. The problem I stumbled over yesterday was rounding errors. Consider the following C snippet
#include <math.h>   /* for sqrt() and fabs() */
#include <stdio.h>  /* for printf() */

double pi = 3.1415926535897932384626433832795;
double val = sqrt(pi) * sqrt(pi);

printf("pi  = %f\n", pi);
printf("val = %f\n", val);
which yields
pi  = 3.141593
val = 3.141593
This is what we expected! val is the same as pi (squaring is, as we all know, the inverse of taking the square root). However, the following C snippet shows the problem we run into:
if (val == pi) {
 printf ("pi == val\n");
} else {
 printf ("pi != val\n");
}
which yields
pi != val
So two apparently equal values are not equal after all! Where is the problem? It is in the output precision of printf(), of course: a plain %f prints only six decimal places. We modify the first snippet so that the printf() calls print the floating point values to more significant places
printf("pi  = %.16f\n", pi);
printf("val = %.16f\n", val);
and now we get
pi  = 3.1415926535897931
val = 3.1415926535897927
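Before fixing the comparison, it is worth looking at how far apart the two values actually are. A minimal sketch, reusing pi and val from the snippet above (the %a conversion is C99 and prints the exact hexadecimal representation of a double):
/* How large is the discrepancy really? (reuses pi and val from above) */
printf("difference = %g\n", fabs(val - pi)); /* on the order of 1e-16 */
printf("pi  = %a\n", pi);  /* exact hexadecimal representation (C99) */
printf("val = %a\n", val);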
So how do we resolve this? Instead of comparing floating point values directly, we calculate their difference and make sure its absolute value stays below a small threshold, for example 1e-9:
if (fabs(val - pi) < 1e-9) {  /* fabs() from math.h; abs() works on ints only */
 printf ("pi == val\n");
} else {
 printf ("pi != val\n");
}
which now gives us
pi == val
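A small caveat: an absolute tolerance such as 1e-9 only works well when the compared values are of moderate size; for very large or very small numbers a tolerance relative to the magnitude of the operands behaves better. Here is a minimal sketch of such a helper (the name almost_equal and the factor 1e-9 are my own illustrative choices, not part of the original snippet):
#include <math.h>

/* Returns 1 if a and b differ by less than a small fraction of their magnitude. */
int almost_equal(double a, double b)
{
    double scale = fmax(fabs(a), fabs(b));
    return fabs(a - b) <= 1e-9 * scale;
}
For the values above, almost_equal(val, pi) is true as well, but unlike a fixed threshold the tolerance scales with the magnitude of the inputs.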
I guess most programmers are aware of the problems with floating point numbers, and I have also known about the problem for a long time. What made me stumble, however, was that I trusted the printed value - which was inaccurate. As far as I know, there is no generic solution to the problem of floating point inaccuracies. In most cases you will get away with checking whether the difference between two floating point values is smaller than a given tolerance. In Ruby, you could hack the built-in Float class with the following:
class Float
  def ==(value)
    # fuzzy equality: true if the values differ by less than 0.01
    (self - value).abs < 0.01
  end
end
This allows us to compare floating point values “fuzzily”:
>> Math.sqrt(2) ** 2 == 2.0
=> false
>> require 'floathack'
=> true
>> Math.sqrt(2) ** 2 == 2.0
=> true
However, I do not think that this is a good idea! Overriding the == operator of Float as above breaks its transitivity: with the tolerance of 0.01 we get, for example, 1.99 == 1.995 and 1.995 == 2.0, but 1.99 != 2.0.

Another idea would be to compare only the first n digits after the decimal point, e.g. to consider 3.1415926535897931 to be 3.14 for n = 2. However, this does not help us if a floating point error gives us two values just next to a “boundary”: a result of 1.999 would not be considered correct if 2.001 was expected and we only cared about the result being “sufficiently close” to the expected value.

So be careful when using floating point values and checking two of them for equality. Trouble will strike and most probably find some way to trick you into wasting time. An extensive, academic - and arguably dry - discussion of the implications of floating point arithmetic can be found in What Every Computer Scientist Should Know About Floating-Point Arithmetic by David Goldberg.
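As a footnote to the transitivity issue above: the effect is not specific to the Ruby hack; any tolerance-based comparison behaves this way. A minimal C sketch (the helper fuzzy_eq and the test values are my own illustration):
#include <math.h>
#include <stdio.h>

/* Fuzzy equality with a fixed tolerance, analogous to the Ruby hack above. */
static int fuzzy_eq(double a, double b)
{
    return fabs(a - b) < 0.01;
}

int main(void)
{
    double a = 1.99, b = 1.995, c = 2.0;

    printf("a == b: %d\n", fuzzy_eq(a, b)); /* 1 */
    printf("b == c: %d\n", fuzzy_eq(b, c)); /* 1 */
    printf("a == c: %d\n", fuzzy_eq(a, c)); /* 0 - equality is no longer transitive */
    return 0;
}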

6 comments:

cshesse said...

The python decimal module lets you mess around with arbitrary precision numbers. There is even a (super slow) implementation of the standard math functions for decimal numbers called dmath.

Manuel said...

cshesse, thank you for your comment.

Yes, arbitrary-precision arithmetic can solve floating point related problems in some cases (e.g. correct financial calculations).

However, you then do not benefit from the native - and fast - floating point operations of modern processors.

Additionally, some problems are inherent to representing something with an infinite expansion - like sqrt(2) - in a computer system that can only store finitely long objects:

>>> from decimal import Decimal
>>> Decimal(2).sqrt()
Decimal("1.414213562373095048801688724")
>>> a = Decimal(2).sqrt()
>>> a * a
Decimal("1.999999999999999999999999999")

I guess that for any number of significant places you can find an example where arbitrary precision numbers lead to inaccurate results.

Anonymous said...

This is usually not such a big deal in practice (i.e. scientific computing).

We use floating point numbers in numerical algorithms, and usually the algorithmic error (i.e. the error obtained when doing computation over the real numbers) is much larger than floating point error.

E.g., use RK4/5 to solve a differential equation with dt=0.01. RK4/5 is O(dt^6) = 1e-12; floating point error tends to be of order 1e-17, i.e. 5 orders of magnitude smaller.

In hard problems (modelling global climate, parts of plasma physics or the stock market), the situation is even worse since we don't even know the right equation to solve.

So except for trivial problems, I'd say not to worry about floating point errors.

Anonymous said...

"This is usually not such a big deal in practice (i.e. scientific computing)."

In computational geometry and solid modeling, floating point problems can be a very big deal.



 
