OpenCL Minimum Setup

If you look at the sample code available for using OpenCL 1.2/2.0 from the primary vendors you will notice that it is all very complicated, and some of it is out of date (i.e. using deprecated functions) or difficult to get running.

The demos they provide do some clever things and definitely show you some good tricks, but they are far from a good entry point. Ideally a newcomer to GPGPU wants to be able to open a single-file, compilable demo. It should be simple and cover the basics only. Nothing more.

The user should be able to walk from the top of the file to the bottom without having to jump around and see the working pipeline in-order and clearly. Once they are comfortable with seeing the minimum and can have a play around, then you can show them more than just the top layer.

If those of us at the high-performance computing end of software development haven't learned anything (and sometimes I think we haven't) from JavaScript, GUI development tools and the rapid pace of the app development world, we should at least have learned that getting someone onto your platform and working quickly is the best way to keep them. A single hour to a basic understanding of how to use something is better than a whole day of stress to gain only slightly more.

But enough ranting. I have written a minimal code demo for OpenCL in this style. It lacks all the options, robustness, safety and control of the Intel samples - but it's basically ~100 lines of code instead of many thousands, and is enough to show the basic concepts and usage patterns.

//Include the OpenCL headers.
// On Intel the default directory is: C:\Intel\OpenCL\sdk\include\CL
// During installation the Intel SDK usually places $(INTELOCLSDKROOT) into your
// system environment, so the project can use the default include directory of: $(INTELOCLSDKROOT)\include
// On AMD and NVIDIA this is different. When you grab the SDK from their site it will usually tell you,
// but it is mostly variations on this. The easiest way to find it if the site is confusing is to open a sample
// in Visual Studio and check the include directories listed in the project properties.
#include <CL/cl.h>

// Standard library to make some things easier.
#include <vector>
#include <string>
#include <fstream>
#include <streambuf>

#define BUFFER_ENTRY_COUNT 256

int main()
{
    //The important objects for initialising and using OpenCL.
    cl_platform_id		platform	= 0;
    cl_device_id		device		= 0;
    cl_context			context		= 0;
    cl_command_queue	queue		= 0;

    //First get our platform-------------------
    {
        //Find which platforms are available
        cl_uint numPlatforms = 0;
        cl_int err = clGetPlatformIDs(0, 0, &numPlatforms);
        if (err != CL_SUCCESS || numPlatforms == 0) return 0;

        //Fetch the IDs and take our first one.
        std::vector<cl_platform_id> availablePlatforms(numPlatforms);
        err = clGetPlatformIDs(numPlatforms, &availablePlatforms[0], 0);
        if (err != CL_SUCCESS) return 0;
        platform = availablePlatforms[0];
    }

    //Now we need our device------------------
    {
        //You can specify if you want CPU/GPU or Accelerator here, but for simple getting going
        //we will just take any.
        cl_device_type  deviceType = CL_DEVICE_TYPE_ALL;

        // Same as above, get the number of devices before we fetch the information.
        cl_uint num_of_devices	= 0;
        cl_int err				= clGetDeviceIDs(platform, deviceType, 0, 0, &num_of_devices	);
        if (err != CL_SUCCESS || num_of_devices==0) return 0;

        //Fetch the ids and select the first one.
        std::vector<cl_device_id> deviceVector(num_of_devices);
        err	= clGetDeviceIDs(platform, deviceType, num_of_devices,	&deviceVector[0], 0);
        if (err != CL_SUCCESS) return 0;
        device = deviceVector[0];
    }

    //Create the context minimal code.
    {
        cl_context_properties contextProps[3];
        contextProps[0] = CL_CONTEXT_PLATFORM;
        contextProps[1] = cl_context_properties(platform);
        contextProps[2] = 0;

        cl_int err = 0;
        context = clCreateContext(&contextProps[0], 1, &device, 0, 0, &err);
        if (err != CL_SUCCESS) return 0;
    }

    //Create a Queue
    {
        cl_int err = 0;
        cl_queue_properties props[] = { 0 }; //Zero-terminated (empty) properties list.
        queue = clCreateCommandQueueWithProperties(context, device, props, &err);
        if (err != CL_SUCCESS) return 0;
    }

    std::string CLProgramFilename	= "./simpleprogram.cl";
    std::string CLProgramKernelName = "EmptyKernel";
    std::string CLProgramSource = "";
    cl_program  CLProgram = 0;
    cl_kernel   CLProgramKernel = 0;
    //Read program source code from file
    {
        std::ifstream file(CLProgramFilename);
        std::string temp;
        while (std::getline(file, temp)) 
        {
            CLProgramSource.append(temp);
            CLProgramSource.append("\n"); //Keep line breaks so // comments in the kernel don't swallow the rest of the source.
        }
    }

    //Create Program from source
    {
        //Take the source and get the program
        cl_int err;
        const char* text = CLProgramSource.c_str();
        CLProgram = clCreateProgramWithSource(context, 1, &text, 0, &err);
        if (err != CL_SUCCESS) return 0;

        //Build it for your specified device.
        err = clBuildProgram(CLProgram, (cl_uint)1, &device, "", 0, 0);
        if (err != CL_SUCCESS) return 0;

        //Pull out the kernel (function) we want to use from the program.
        //Programs can have many kernels.
        CLProgramKernel = clCreateKernel(CLProgram, CLProgramKernelName.c_str(), &err);
        if (err != CL_SUCCESS) return 0;
    }

    cl_mem outputBuffer = 0;
    cl_uint buffSize = BUFFER_ENTRY_COUNT * sizeof(cl_int);
    //Create an output Buffer
    {
        //We are creating a buffer here. The important flags are the CL_MEM_... ones.
        // In this example we say we want one that the kernel can only write and the CPU
        // can only request to read.
        // There are many options here and combining them in different ways has interesting performance effects.
        cl_int	err		= 0;
        outputBuffer	= clCreateBuffer(context, CL_MEM_WRITE_ONLY | CL_MEM_HOST_READ_ONLY, buffSize, NULL, &err);
        if (err != CL_SUCCESS) return 0;
    }

    //Run Kernel
    {
        //Set the buffer we write to. This maps to the index of the argument in the kernel function.
        cl_int err = clSetKernelArg(CLProgramKernel, 0, sizeof(cl_mem), (void *)&outputBuffer);
        if (err != CL_SUCCESS) return 0;

        //Global size is the total number of work-items we want to run.
        //Local size is the size of the work-groups we break that into. If the global size is not
        //divisible by the local size it will throw an error.
        //Note: these must be size_t, which is what clEnqueueNDRangeKernel expects.
        size_t globalSize	= BUFFER_ENTRY_COUNT;
        size_t localSize	= 16;
        err = clEnqueueNDRangeKernel(queue, CLProgramKernel, 1, NULL, &globalSize, &localSize, 0, NULL, NULL);
        if (err != CL_SUCCESS) return 0;

        //Ensuring all the work is done before we copy out our buffer to check the kernel ran correctly.
        err = clFinish(queue);
        if (err != CL_SUCCESS) return 0;

        //Validate the output from our buffer
        cl_int ourOutput[BUFFER_ENTRY_COUNT];
        err = clEnqueueReadBuffer(queue, outputBuffer, CL_TRUE, 0, buffSize, ourOutput, 0, NULL, NULL);
        if (err != CL_SUCCESS) return 0;

        //Check the array has the magic number in it
        if (ourOutput[6] != 42)	return 0;
    }

    //Everything went well.
    return 0;
}
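
The host code above expects a file called "simpleprogram.cl" containing a kernel named "EmptyKernel" that leaves the magic number 42 in the output buffer. The kernel source isn't shown here, but a minimal sketch that would satisfy the host-side check could look like this:

//simpleprogram.cl - a minimal kernel sketch (assumed, not the original file).
//Each work-item writes the magic number 42 into its own slot of the output buffer.
__kernel void EmptyKernel(__global int* output)
{
    output[get_global_id(0)] = 42;
}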

I hope this is useful to anyone trying to get started with OpenCL. I will do the same for Vulkan Compute and DirectX11/12 if I see this gets any traction.

 

When Approximations Are More Accurate And Better Performing. Part 2

So if you read the last part of this work you might be wondering: how was the error lower in the approximated solution than in the proper implementation?

This comes down to simply exploiting the rules and expectations of the user. The user has selected 'float' as the base accuracy for the circle. This is usually done for performance reasons: sin/cos in float is cheaper than sin/cos in double.

From this we have some base rules on which to build our case. We know the total error only has to be less than that of the floating-point implementation, and we know our approximation has to reduce the cost.

So we need a way to measure the error, and a way to measure the run-time performance so we can compare.
Run-time performance is easy, we will simply run the algorithm and see how long it takes.
Error is a little more tricky. In this example we defined the error as the sum of the distances of the resulting vertices from where the most accurate implementation would place them. Ideally though, to build an approximation we want to understand where the error is coming from so we know what to change.
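
To make that error measure concrete, here is a small sketch (my own, not the original test harness) which sums the distance of each generated vertex from the matching vertex produced by the most accurate implementation:

#include <array>
#include <cmath>
#include <cstddef>
#include <utility>

//Sums the distance of each vertex in 'test' from the matching vertex in 'reference',
//where 'reference' is the output of the highest-precision implementation.
template <std::size_t numVerts>
double TotalVertexError(const std::array<std::pair<float, float>, numVerts>& test,
                        const std::array<std::pair<float, float>, numVerts>& reference)
{
	double total = 0.0;
	for (std::size_t i = 0; i < numVerts; i++)
	{
		double dx = (double)test[i].first  - (double)reference[i].first;
		double dy = (double)test[i].second - (double)reference[i].second;
		total += std::sqrt(dx * dx + dy * dy);
	}
	return total;
}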

So let's look at the implementation again:

std::array<std::pair<float, float>, numVerts> points;

T theta = 0.f;
T thetaStep = (T)(PI) / (T)numVerts;
for (int i = 0; i < numVerts; i++)
{
	T x = (_radius * cos(theta)) + _offset.first;
	T y = (_radius * sin(theta)) + _offset.second;
	points[i] = std::make_pair(x, y);
	theta += thetaStep;
}

Where could the error be introduced here? I think it is clearer if we split it into individual lines, one per operation.
For just calculating a single 'x' position:

    theta += thetaStep;
    T cosRes= cos(theta);
    T radiusScaled = cosRes * _radius;
    T x = radiusScaled + _offset.first;

We have four operations taking place, one on each line. Let's see how error is introduced at each step.

  1. This step introduces error because thetaStep is stored at a fixed precision and is the result of dividing PI by the number of steps needed to trace out the circle. As the number of steps gets arbitrarily large, thetaStep gets arbitrarily small. This gives two problems: firstly, each subsequent step will be less accurate than the last as the position on the circle is traced out, and secondly, if thetaStep is small enough to fall within the rounding bounds of a floating-point step, the addition to 'theta' could add zero, leaving us unable to trace the circle, or it could add more than it should, leading to further inaccuracy.
  2. Depending on the type of theta this changes which version of the 'cos' function is called. By the cmath standard these functions are guaranteed accurate to the unit in last place (ULP) for the type they work on. In this case we induce error into cosRes because no finite-precision real number type can represent the infinite number of unique values between -1 and 1. Luckily, because of the standard, we know this will be the best possible result and the error should be relatively small. As 'cosRes' and theta share the same type, there should be no rounding or error incurred by the value copy.
  3. In this step we are multiplying two real numbers together. This introduces error as the resulting value is stepped down from the higher-precision floating-point hardware back to the representation we are using, so we should expect +/- one floating-point step at the scale of the result.
  4. This is the same as step 3.

Steps 2-4 are then repeated for the y component and both are cast to float to store in the vertex list. Quite a lot of potential error - depending on the type!

Represented in my pigeon math

How does this differ from our approximation, where we simply replace the calls to sin/cos?

approxerror.png

Not very much at all when we are considering the same type! And with the additional error from multiplication and addition which aren't guaranteed to be minimal error for the type like the sin/cos from the cmath library are.

So, when is it that we can use the approximation?
When it follows these two rules:

rules.png

When these two rules hold true then we can replace the function without worrying.

So, for our example of better performance and accuracy, we ensure that the 'double' type in the approximation function and the 'float' type in the accurate implementation match both rules, and we are good to go.

So that gives us this table of results for the built-in standard real number types:

table.png

So there really is only a limited area where we meet both of these rules, but it could be a beneficial one.

When Approximations Are More Accurate And Better Performing. Part 1

So we have been talking a lot recently about approximation approaches and what we can do with them, measuring error, and some horrible template programming to support it all seamlessly in C++.

This is the post where we show why.
This post will show you how to generate the vertex positions on a circle, sphere, or some other curved path with greater accuracy and a lower performance cost than the standard algorithm - without breaking any specifications or altering the behavior of the compiler.

To begin with, we want to keep the problem simple as a proof of concept, so we will be generating a semi-circle. We want to be able to compare the different hardware-supported precisions available to the programmer, so we will use templates to reduce any human error. We want to see how the error scales with the radius and position of the circle, so we will make those parameters too. We also want to control the number of points being generated to raise or lower the precision.
That gives us this accurate implementation:

template <typename T, int numVerts>
std::array<std::pair<float, float>, numVerts> GenerateCircleVertices_A(T _radius, std::pair<T, T> _offset)
{
	std::array<std::pair<float, float>, numVerts> points;

	T theta = 0.f;
	T thetaStep = (T)(PI) / (T)numVerts;
	for (int i = 0; i < numVerts; i++)
	{
		T x = (_radius * cos(theta)) + _offset.first;
		T y = (_radius * sin(theta)) + _offset.second;
		points[i] = std::make_pair(x, y);
		theta += thetaStep;
	}

	return points;
}

Nothing too flashy there: calculating the position of each point and offsetting it.

Next we want to write the approximated version. This will take all the same parameters but also take two approximations of the 'sin' and 'cos' functions.

template <typename T, int numVerts, T(*approxSin)(T), T(*approxCos)(T)>
std::array<std::pair<float, float>, numVerts> GenerateCircleVertices_A(T _radius, std::pair<T, T> _offset)
{
	std::array<std::pair<float, float>, numVerts> points;

	T theta = 0.f;
	T thetaStep = (T)(PI) / (T)numVerts;
	for (int i = 0; i < numVerts; i++)
	{
		T x = (_radius * approxCos(theta)) + _offset.first;
		T y = (_radius * approxSin(theta)) + _offset.second;

		points[i] = std::make_pair(x, y);

		theta += thetaStep;
	}
	return points;
}

We have templated the types of the functions that can be passed in so that our approximation functions can work at a higher precision. However, you will notice that in our approximation we are still using the same type for the output positions; if we did not do this then we would be buying precision at the cost of memory, which is not what we want to demo here, and it would change the shape of the function the user is expecting to call.
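
As a usage sketch (assuming a 'sin_approx_10' counterpart to the 'cos_approx_10' shown further down - neither call site is from the original post), the two versions sit side by side like this:

//Hypothetical call site: same output type, but the approximated version runs the
//polynomial approximations instead of the cmath sin/cos.
constexpr int vertCount = 64;
auto accurate = GenerateCircleVertices_A<double, vertCount>(1.0, std::make_pair(0.0, 0.0));
auto approx   = GenerateCircleVertices_A<double, vertCount,
                                         sin_approx_10<double>, cos_approx_10<double>>(1.0, std::make_pair(0.0, 0.0));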

So what functions are we going to submit to replace the Sin/Cos in the algorithm?
In this example we have implemented some simple curve-fit equations with Chebyshev economisation to minimise error over the total range of data we care about. In this instance that is the range of inputs 0 to PI.

Here is an example of the templated output for 'cos()' in the range 0-PI. Pretty hideous, but it works.

template<typename T>
static constexpr T cos_approx_10(const T _x) noexcept
{
	return (( 1.00000000922939569214520361128961667     ) + 
		(-0.00000040038351959167077379411758820*(_x)) + 
		(-0.49999581902513079434413612034404650*(_x* _x)) + 
		(-0.00001868808381795402666814172321086*(_x* _x * _x)) + 
		( 0.04171125229782790544419412981369533*(_x* _x * _x * _x)) + 
		(-0.00006331374243904154216523727516375*(_x* _x * _x * _x * _x)) +
		(-0.00133230596002621454881920115553839*(_x* _x * _x * _x * _x * _x)) + 
		(-0.00003250491185282628451994405005543*(_x* _x * _x * _x * _x * _x * _x)) + 
		( 0.00003666795841889910768365487547804*(_x* _x * _x * _x * _x * _x * _x * _x)) + 
		(-0.00000258872188337465851184506469840*(_x* _x * _x * _x * _x * _x * _x * _x * _x)) + 
		(-0.00000000060839243653413992793179150*(_x* _x * _x * _x * _x * _x * _x * _x * _x * _x)));
}

To verify our approximation functions we output them at different levels of accuracy and test the total accuracy and performance. For the functions generated for this test, here are the results from our back-end testing, where we test our automatically generated tables as well as the function replacements.

                          Name         Mean Err.       Median Err.        Total Err.          Runtime
                 Source Function       2.66282e-08        2.2331e-08       2.72673e-05               171
          CompileTimeTable_Float       3.17287e-07       3.08806e-07       0.000324902               106
     CompileTimeTable_LongDouble       3.15743e-07       3.11381e-07       0.000323321                80
              RunTimeTable_Float       6.48356e-08       3.79084e-08       6.63917e-05               101
         RunTimeTable_LongDouble       3.95529e-08                 0       4.05022e-05                79

      functionPoly_float_Order10       9.00522e-08       5.42778e-08       9.22134e-05                11
     functionPoly_double_Order10       1.78518e-09       1.87474e-09       1.82802e-06                11
 functionPoly_longdouble_Order10       1.78518e-09       1.87474e-09       1.82802e-06                11
       functionPoly_float_Order9       7.69145e-08       4.88219e-08       7.87604e-05                10
      functionPoly_double_Order9       1.78519e-09       1.87499e-09       1.82804e-06                10
  functionPoly_longdouble_Order9       1.78519e-09       1.87499e-09       1.82804e-06                10
       functionPoly_float_Order8       3.26524e-07       3.18897e-07        0.00033436                11
      functionPoly_double_Order8       3.14699e-07       3.29983e-07       0.000322252                11
  functionPoly_longdouble_Order8       3.14699e-07       3.29983e-07       0.000322252                10
       functionPoly_float_Order7        3.2993e-07       3.21493e-07       0.000337849                 9
      functionPoly_double_Order7       3.14701e-07       3.29942e-07       0.000322254                 9
  functionPoly_longdouble_Order7       3.14701e-07       3.29942e-07       0.000322254                 9
       functionPoly_float_Order6       3.61088e-05        3.7886e-05         0.0369754                 6
      functionPoly_double_Order6       3.61126e-05       3.78765e-05         0.0369793                 7
  functionPoly_longdouble_Order6       3.61126e-05       3.78765e-05         0.0369793                 6
       functionPoly_float_Order5       3.61177e-05        3.7886e-05         0.0369845                 6
      functionPoly_double_Order5       3.61127e-05       3.78748e-05         0.0369794                 5
  functionPoly_longdouble_Order5       3.61127e-05       3.78748e-05         0.0369794                 5
       functionPoly_float_Order4        0.00239204        0.00250672           2.44945                 5
      functionPoly_double_Order4        0.00239204        0.00250679           2.44945                 5
  functionPoly_longdouble_Order4        0.00239204        0.00250679           2.44945                 5
       functionPoly_float_Order3        0.00239204        0.00250686           2.44945                 4
      functionPoly_double_Order3        0.00239204        0.00250686           2.44945                 5
  functionPoly_longdouble_Order3        0.00239204        0.00250686           2.44945                 4

The first item in this list is the floating-point precision implementation of 'cos', and the next four lines are our different table-generated results (see this post on how that works). Everything after the line break is our generated function replacements, which are what we care about in this example.

You will see the total error across the range in the "Total Err." column. This is the summed difference between each sampled position and the highest-precision implementation of the function we are replacing. So we can see that, compared to the "long double" implementation of 'cos', the floating-point implementation incurs a total error of '2.72673e-05' over this range, for a mean error of '2.66282e-08' at each sample point.

Where this becomes interesting is that our implementation at floating-point precision has quite similar error - but our double and long double implementations have less total error. A higher precision having less error is probably not a surprise - what is a surprise is that our higher-precision implementation takes about 1/16th of the time of the floating-point function we want to replace.
To put it simply - we have a function which has less error and less computational cost, although it does come with the additional, near-insignificant cost of a cast from double back to float.

In our initial implementation of this we expected the lower computational cost, but the lower error was a surprise. It shouldn't have been though. As we have shown in the past, error can be introduced by the type and by the function. Our example increases the error from the function but lowers the error from the type. So for any trivial function which accumulates a large amount of error from the type (such as anything which performs summed operations on floating-point types) we should, in theory, be able to beat it on error if the function is simply mappable at any precision and we can do so in fewer operations.

So what happens when we apply this to our circle plotting problem?

We see that for relatively small offsets from zero we don't get much difference in the error of the two approaches, but as the radius of the circle grows to values even a short distance from the origin we see that our approximation handles it much better - leading to a much better overall accuracy.

Better Accuracy. Better Performance. In a real-world problem.

We are currently further exploring this approach with DSP processing, video media processing and computer graphics.

Source code for a lot of this stuff is available on my Github.

UNDERSTANDING ERROR AND APPROXIMATION: 7. Relaxation and Regularisation

So far in our approximation approaches we have been considering generating functions which match the input and output parameters of the function we want to optimise. In this post we will be considering two techniques which do not follow this approach quite so directly.

Regularisation
Regularisation is generally used in mathematical modelling when we are looking to prevent overfitting, or when the function we are modelling is not a well-posed problem. In this context an ill-posed function is one that either:

  • Has no solution
  • The solution is not unique
  • or, the solution's behaviour does not depend continuously on the initial conditions.

When we are looking for an approximate function we are actually hoping that the function is ill-posed, as we would ideally want the solution not to be unique so that we can find a better one!

In other articles we discussed the range of the function we are interested in, as well as the accuracy needed at each point in that range. In this section we are asking if there are discrete points in the range that we care about more than others. For example, you might have some function in your application which takes some real number and returns it expensively mathematically altered in some fixed way - but you only ever call it with the values 0 or 1. In this instance we probably don't want to pay for the costly calculation when the answer is always either f(0) or f(1). It might be cheaper to have a function which maps to those two results directly.

In this case, caring only about those two results, we can generate a "regularised" function that matches those requirements exactly.

These regularised functions can often be cheaper than the actual function as they only have to care about a specific set of points. 
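
As a toy sketch of that idea (the function and the precomputed values here are made up for illustration):

#include <cmath>

//The original, fully general (and expensive) function.
double Expensive(double x)
{
	return std::exp(std::sin(x)) * std::sqrt(x + 1.0);
}

//A "regularised" replacement that only matches Expensive() at x = 0 and x = 1,
//the only inputs this application ever passes in.
double ExpensiveRegularised(int x)
{
	static const double results[2] = { 1.0, 3.28 }; //Precomputed Expensive(0.0) and (approximately) Expensive(1.0).
	return results[x];
}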

Relaxation
The next topic in the same vein is relaxation. Relaxation in the mathematical sense - more formally the "linear programming relaxation" of an integer program - is a technique for taking a problem which allows only integer solutions, allowing those solutions to be expressed as real numbers to simplify the problem, and then rounding back to integers. It is known for its ability to take problems from NP complexity down to P complexity (if that is of interest).

When we look at how this relaxation changes a problem we can see that it has the potential to expand the total problem domain (by a range dependent on your rounding choice) and may not actually give the correct solution due to some constraints. Full details of the mathematics and proofs for this can be found on Wikipedia.

For our use, we are looking at its ability to take a function giving discrete results and model it as a continuous function. This allows us to approach some integer problems which are technically "infeasible" in an approximate manner - though never optimally.

We can see approximate algorithms solving problems through this method if you look up the "Weighted Vertex Cover Problem" or similar shortest path or optimal distribution problems.

The book "Algorithms to Live By" contains some very good examples and explanations in the chapter covering this type of relaxation and Lagrangian relaxation.

UNDERSTANDING ERROR AND APPROXIMATION: 6. Continuous Approximation

In the last few posts we have covered ways to measure the error and bounds of different functions and how that affects how we view them when coming to approximate them. A lot of what we have been discussing has been in the area of "continuous functions". A continuous function is one where the results for neighbouring inputs flow into one another smoothly (or at least in a predictable fashion). For example, we know for the function "x = y" that the values between 'x = 1' and 'x = 2' will be smoothly interpolated between 1 and 2.

If this wasn't the case - if the function was discrete and only gave results for integer values - then when we sample the function at 'x = 1.5' there may not be a valid result, and any result would be an error. Or the function could have discontinuous regions around this area where the results are totally unrelated to the surrounding results.

This identification of continuous and discrete results makes it an important factor in understanding the function we want to replace and its behaviour.

If a function is continuous then a common numerical approximation method would be to generate a 2D or 3D polynomial Taylor expansion to represent the curve (see examples of how this is done here). This gives us a polynomial which matches the function across certain ranges under certain conditions.

graph2.png

Shown above is the continuous function sin(x) with different orders of Taylor series approximating it.
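
As a rough sketch of what those approximations look like in code (my own example, expanding sin(x) about x = 0), higher orders track the curve further from the expansion point before diverging:

#include <cmath>
#include <cstdio>

//Evaluates the Taylor series of sin(x) about x = 0 up to the given odd order:
//sin(x) ~ x - x^3/3! + x^5/5! - ...
double SinTaylor(double x, int order)
{
	double term = x;
	double sum  = x;
	for (int n = 3; n <= order; n += 2)
	{
		term *= -(x * x) / ((n - 1) * n); //Step from the x^(n-2) term to the x^n term.
		sum  += term;
	}
	return sum;
}

int main()
{
	for (int i = 0; i <= 4; i++)
	{
		double x = 0.8 * i;
		std::printf("x=%.1f  sin=%+.5f  order3=%+.5f  order7=%+.5f\n",
		            x, std::sin(x), SinTaylor(x, 3), SinTaylor(x, 7));
	}
	return 0;
}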

Here is the graph of 'tan(x)'. In this example we cannot approximate the whole range of 0 to 2PI as there are discontinuities every 'PI' distance in x. To correctly approximate this curve we would need to split it into discrete sections of range PI and approximate each section separately - essentially splitting a discontinuous function into n continuous chunks. In the case of tan(x) each chunk is a repeat of the last, so it is simply a re-framing that needs to be done, but for more complex functions this can vary.

You may notice in the Taylor series example that our approximations at the lower orders quickly diverge. This happens as values get further away from the central reference point we used to build the series. For some complex functions you may want to chop the function into chunks to get better precision across certain ranges. This is a good thing to do when we only care about the values being directly output, but we have to be aware of the effect it has at the boundaries between curves.

If we take a look at the derivative of the curve, the discontinuities where we switch from one curve to another - which were previously near invisible - become clearly obvious. Analysing the gradient changes at these points is important, as some uses of the function's results may rely on them, and in that case the resultant behaviour may be drastically different from what we were replacing.

This is where we need to recognise that even though the numerical error is low in the direct results, the actual use-case can have large error. At the end of the day, the error we really care about is how it affects the end result!

 

UNDERSTANDING ERROR AND APPROXIMATION: 5. Radius of Convergence

One of the main factors when you are looking to approximate a function is understanding what part of the function you are approximating. Approximation is inherently a trade-off, so when we are approximating a function we may want to only approximate a certain part of it, or may want to approximate one section of the input to a higher accuracy than another. 

But, before we can make any decisions on any of this we have to understand the full features of the function we want to approximate and a major feature of a lot of functions falls into the category of the "Radius of Convergence".

The simplest way to understand the "Radius of Convergence" is to consider the sum y(x) = ∑ 0.5^n for n = 1 up to x, as x goes to infinity. As x gets larger, the term being added to the sum gets smaller. This causes the function to converge towards y = 1.

So if we were going to approximate this function by sampling it for various values of x, it would be quite wasteful for us to sample past x = 10, as the function has essentially converged by then - the partial sum has already reached y ≈ 0.9990. (This is quite fun to play with on WolframAlpha.)
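
A quick numerical check of that (a throwaway sketch, not from the original post) shows how little the partial sums move after the first ten or so terms:

#include <cmath>
#include <cstdio>

int main()
{
	double sum = 0.0;
	for (int x = 1; x <= 20; x++)
	{
		sum += std::pow(0.5, x);
		if (x == 5 || x == 10 || x == 20)
			std::printf("x = %2d, y = %.6f\n", x, sum); //Partial sum of 0.5^n up to n = x.
	}
	return 0;
}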

For a function which has a convergence point (or points) it is important that we understand it, so that we get the most value out of each point we sample and use in our own function generation. This gives us bounds and simplifies the work we have to do. Similar to how we can simplify intractable algorithms with priors, we can use this information to form our own "priors" in our generated functions.

UNDERSTANDING ERROR AND APPROXIMATION: 4. Math Max Error (Unit in Last Place)

This post is focused on the representation of real numbers in our applications. Here we are looking at how they are represented in a finite number of bits, and what considerations we must make in our applications to correctly predict their behaviour and use them effectively.

Real Number Representation
As you probably already know if you are reading this, types such as float and double are only finite approximations of real numbers. MSDN gives a very concise description of how they are represented on its page about the float type:

Floating-point numbers use the IEEE (Institute of Electrical and Electronics Engineers) format. Single-precision values with float type have 4 bytes, consisting of a sign bit, an 8-bit excess-127 binary exponent, and a 23-bit mantissa. The mantissa represents a number between 1.0 and 2.0. Since the high-order bit of the mantissa is always 1, it is not stored in the number. This representation gives a range of approximately 3.4E–38 to 3.4E+38 for type float.

The IEEE format they are talking about is IEEE 754 which covers in much greater detail the rules and behavior of float types but is a pretty heavy read (trust me, it's OK to just read the plot synopsis on this book).

The MSDN page omits the equation for calculating a number from the representation it describes. That equation is:

floatdesc.png
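
For reference - this is the standard IEEE 754 single-precision decoding for normalised numbers, stated from memory rather than taken from the image above - a stored sign bit s, biased 8-bit exponent e and 23-bit mantissa m reconstruct the value as:

value = (-1)^s * (1 + m / 2^23) * 2^(e - 127)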

With this representation we can see how numbers are encoded.

With this binary view in mind it is very visible that there are only a finite number of bits to switch, and due to the nature of the exponent some of those switches will produce varying degrees of change depending on how large the number currently is.

This leads us to the main point of this article...

Unit in Last Place
So when we are thinking about representing numbers, and the errors that creep into calculations, we have to consider the size of the gap between the current number being represented and the next representable number. Single-precision floating point (shown above) is capable of being accurate to very, very small fractions for numbers in the range 0-1, but it becomes increasingly less accurate as the size of the number grows, due to the lack of sufficient bits to represent it. This means that when working in the millions or tens of millions we can lose the fractional part of the number entirely!
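
To see that gap grow, here is a tiny sketch (my own example) using std::nextafterf to print the distance from a value to the next representable float:

#include <cmath>
#include <cstdio>

int main()
{
	const float values[] = { 1.0f, 1000.0f, 1000000.0f, 10000000.0f };
	for (float v : values)
	{
		float gap = std::nextafterf(v, INFINITY) - v; //Distance to the next representable float.
		std::printf("at %10.1f the next float is %g away\n", v, gap);
	}
	return 0;
}

At 10,000,000 that gap is a whole 1.0, so any fractional part is already gone.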

This small error can be used with the equations shown in the last post on error propagation to show the cumulative error and accuracy loss as a function involving lots of floating-point operations progresses. This is an important factor to consider when dealing with long numerical functions.

Usage
So when we come to use floating-point numbers in large calculations we can estimate how much error we expect to accumulate through values being truncated to fit in the number of bits provided and rounded to the nearest representable floating-point number. This error must be at an acceptable level for the function we are trying to write; otherwise we may need to turn to alternative algorithms or more precise data types to represent the data.

Understanding Error and Approximation: 3. Error Propagation

When dealing with error we not only need to know how to measure it, but also how to determine the error of a value produced by performing an operation on two values with differing levels of error.

For example, if we have a variable A which has an error of +/- 5 and a variable B which has an error of +/- 3, how would we calculate the accuracy of the result of A+B?

It turns out this can be solved by the 'Rules for Error Propagation'. These rules define how the combination of values of different uncertainty interact to produce a new value of uncertainty.

The main 'Rules of Error Propagation'.

The equation we are given to calculate the solution to our problem is quite simple. We simply place our values of δA (+/- 5) and δB (+/- 3) into the first equation (as we are doing an addition, 'A + B').

errorpropogation.png
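
Spelling that out (the quadrature rule for addition is the standard one; the numbers are the ones from this example):

δ(A+B) = sqrt(δA² + δB²) = sqrt(5² + 3²) = sqrt(34) ≈ 5.83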

This results in our new value having a potential error of +/- 5.83, which, as we expected, is greater than either of the inputs, as error in this situation can only grow due to missing information.

So we can see that with this method of approaching error it is relatively simple to encode this into your software to analyse values as they are passed through functions.
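
As a small sketch of what that encoding could look like (my own example, not an existing library), a value type can carry its absolute uncertainty and combine it under addition using the rule above:

#include <cmath>
#include <cstdio>

//A value together with its absolute uncertainty (+/-).
struct Uncertain
{
	double value;
	double error;
};

//Addition combines the uncertainties in quadrature: sqrt(dA^2 + dB^2).
Uncertain operator+(const Uncertain& a, const Uncertain& b)
{
	return { a.value + b.value, std::sqrt(a.error * a.error + b.error * b.error) };
}

int main()
{
	Uncertain A{ 10.0, 5.0 }; //+/- 5
	Uncertain B{ 20.0, 3.0 }; //+/- 3
	Uncertain C = A + B;
	std::printf("%.2f +/- %.2f\n", C.value, C.error); //Prints 30.00 +/- 5.83
	return 0;
}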

Understanding Error and Approximation: 2. Absolute and Relative Error

The first entry in our guide to error and approximation in computing starts with absolute and relative error.

Error in a measurement or calculation can easily be thought of as a simple thing, but when we are trying to be precise in our description of error and how it affects the validity of our results, we need to consider the context of the error we are talking about. To do this we need to define the error.

Starting with Absolute Error. Absolute error is the total error. Most tools we use to measure things in the real world are marked with +/- 1mm or similar to tell the user the absolute amount of error in the measurement. In computing terms, we could say the absolute error in a floating-point value that we have just assigned a real number is the maximum amount of rounding possible for a real number in that range. Quite simple and easy to understand.

Expanding on that we also have the Mean Absolute Error, which, as it may sound, is a quite straightforward extension of the absolute error. It is simply the average (mean) of the errors between related results in a series: the average value of |f(x) - g(x)| for all x in a series, where f is the correct function and g is some approximation. This is used when evaluating error on continuous data as opposed to single measurement results.

Finally we have Relative Error, which is calculated from the Absolute Error and the total value you are working with. It gives you the error in your measurement relative to the total outcome. For example, if we are measuring weight and our scales are accurate to +/- 1 kg and the thing we are weighing is 20 kg, then we have a Relative Error of 5%. But if we were weighing something that was 2000 kg then the error is 0.05%, which is much better. This is an important consideration when measuring error in software, as it directly links the precision of the data type you are using to the size of the data you are trying to store.
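
The same calculation in code form (a throwaway sketch using the weights from this example):

#include <cstdio>

int main()
{
	const double absoluteError = 1.0; //Scales accurate to +/- 1 kg.
	const double measurements[] = { 20.0, 2000.0 };
	for (double m : measurements)
		std::printf("%.0f kg -> relative error %.2f%%\n", m, 100.0 * absoluteError / m);
	return 0;
}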

That's about it on Absolute and Relative error. All quite simple but important to know the difference when reading or writing a paper so that you understand the context the error is being described in.

 

Understanding Error and Approximation: 1. Intro

Error and approximation can take many forms. It can be mathematical, numerical, algorithmic... pretty much any part of what we consider computer science has some level of error or approximation arising from the software, hardware or simply mistakes in our code.

When we are planning to take advantage of acceptable error, as I am, then we must have a decent understanding of what we consider error and the forms it can take in our software.

TOPICS

  1. Introduction (This page)
  2. Absolute and Relative Error
  3. Error Propagation
  4. Math Max Error
  5. Radius of Convergence
  6. Continuous Approximations
  7. Regularisation and Relaxation