As I have written about before, floating point calculations accumulate rounding error, and that error can be distributed unevenly across a function's input range depending on the intermediate results of each calculation.
Checking this error at run-time would involve a lot of overhead, but having a way to analyse a function before the program ships is useful for detecting edge cases and for getting an idea of the minimum and maximum error, which helps you choose a correct epsilon for handling errors. (Ask me about epsilons in physics libraries to get my full rant on this...)
Luckily, this kind of function checking is very simple. We just need to create a replacement 'float' type which mirrors every floating point calculation at a higher precision and stores the resulting error inside the type. The type can then combine its own error with that of the other operands to build up a picture of the cumulative offset.
A class which replaces a standard type like this needs the correct constructors and operator overloads to cover all the use cases, so that we don't get compile errors, type mismatches or - even worse - silently construct a new instance and lose our cumulative error count.
The main elements of this class are the 'float' value it wraps and a high precision error counter. We can't use a float for the error counter, as it would be vulnerable to the same rounding error as the float it is wrapping.
(Optionally: I added a string holding the float's name, which makes it easier to debug and to check that we are counting the error correctly.)
With that in mind, you should end up with a class that looks something like this:
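The original listing isn't reproduced here, so below is a minimal sketch of what such a wrapper might look like; the names FloatError, value, error and name are my assumptions, and a complete version would also need the full set of operator overloads:

```cpp
#include <string>

// Sketch of the wrapper type described above (names are illustrative).
class FloatError
{
public:
    FloatError() = default;
    FloatError(float v) : value(v) {}
    FloatError(float v, const std::string& n) : value(v), name(n) {}

    float value = 0.0f;  // the wrapped single-precision value
    double error = 0.0;  // cumulative rounding-error estimate, kept at
                         // higher precision so it does not suffer the
                         // same rounding as the value it measures
    std::string name;    // optional label for debugging
};
```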
The operator+ override is the important part here. Each operation is repeated with the operands cast to the higher precision type, and the difference between the two results gives us a good approximation of the rounding error incurred by that one operation. This is then combined with the errors already carried by the two numbers being summed to give the total error in the final result.
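The operator described above might look something like the following sketch; a condensed copy of the wrapper type is included so the snippet stands alone, and all names are assumptions:

```cpp
#include <cmath>

// Condensed copy of the wrapper so this snippet is self-contained.
struct FloatError
{
    FloatError() = default;
    FloatError(float v) : value(v) {}
    float value = 0.0f;
    double error = 0.0;
};

// Repeat the addition at double precision; the difference between the
// two results approximates the rounding error of this one operation,
// which is then combined with the errors both operands already carry.
FloatError operator+(const FloatError& a, const FloatError& b)
{
    FloatError result(a.value + b.value);
    const double precise = static_cast<double>(a.value) +
                           static_cast<double>(b.value);
    const double stepError =
        std::fabs(precise - static_cast<double>(result.value));
    result.error = a.error + b.error + stepError;
    return result;
}
```

For example, `FloatError(0.1f) + FloatError(0.2f)` records a non-zero error, because the float sum rounds away from the exact double sum, while `FloatError(1.0f) + FloatError(2.0f)` records none.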
It would be tempting in this implementation not to take the absolute value of the error, so that positive and negative errors cancel each other out and we store the actual offset from the correct answer. However, that would only be correct when values are added or subtracted. To handle every operation we would also have to model how the offset carried into a multiply or divide is scaled by the number it is being applied to - which is a more complex problem than we are addressing here. In this instance we are only looking at analysing the total rounding error - for now.
With this all in place we want to be able to use this class on a function taking floats. The example we will use is a function which takes a fixed-size array of floating point numbers, adds them all together and returns the sum.
In the code example you will see that we have wrapped the function in a define which overrides the float type and replaces it with our FloatError type. We can then call this function from our test function, which generates arrays of random values in a fixed range and runs the Sum function on those values to try and find the maximum and minimum error in that range. (Note: for any array of random values in this example there should exist values that produce zero error.)
Once it has tested the function multiple times it will output the min and max error, while also printing the error for each sample as it runs. For the range (0, 1000.f) the output ends up looking like this:
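As the original listing isn't reproduced here, the sketch below bundles a condensed FloatError (with only operator+) together with the define trick and a small test harness; FloatError, Sum and TestSum are assumed names, and a real version would seed the random generator and overload every operator the tested function uses:

```cpp
#include <algorithm>
#include <cmath>
#include <cstdlib>
#include <iostream>
#include <vector>

// Condensed wrapper so this snippet is self-contained.
struct FloatError
{
    FloatError() = default;
    FloatError(float v) : value(v) {}
    float value = 0.0f;
    double error = 0.0;
};

FloatError operator+(const FloatError& a, const FloatError& b)
{
    FloatError r(a.value + b.value);
    const double precise = static_cast<double>(a.value) +
                           static_cast<double>(b.value);
    r.error = a.error + b.error +
              std::fabs(precise - static_cast<double>(r.value));
    return r;
}

// The function under test: a plain float sum. The define swaps the
// float type for FloatError without touching the function body.
#define float FloatError
float Sum(const float values[], int count)
{
    float total;
    for (int i = 0; i < count; ++i)
        total = total + values[i];
    return total;
}
#undef float

// Runs Sum on arrays of random values in [0, range), printing the
// error per sample and returning the maximum error seen.
double TestSum(int samples, int count, float range)
{
    double minError = 1e300;
    double maxError = 0.0;
    for (int s = 0; s < samples; ++s)
    {
        std::vector<FloatError> values(count);
        for (int i = 0; i < count; ++i)
            values[i] = FloatError(range * std::rand() /
                                   static_cast<float>(RAND_MAX));
        const FloatError result = Sum(values.data(), count);
        minError = std::min(minError, result.error);
        maxError = std::max(maxError, result.error);
        std::cout << "sample " << s << " error: " << result.error << "\n";
    }
    std::cout << "min error: " << minError
              << " max error: " << maxError << "\n";
    return maxError;
}
```

Redefining the keyword `float` with the preprocessor is technically outside the standard, but it works on the major compilers and keeps the function under test completely unmodified, which is the point of the exercise.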
So we can see that the total error that is possible in even this small sample is not insignificant - and that is only with additions of float values which already sit on floating point steps.
I hope this was useful to you and that you can apply it in your own code to help visualise the error that is hidden in even your simplest algorithms!