In the last few posts we have covered ways to measure the error and bounds of different functions, and how that affects how we view them when we come to approximate them. A lot of what we have been discussing applies to "continuous functions". A continuous function is one where the outputs for neighbouring inputs flow into one another smoothly (or at least in a predictable fashion). For example, for the function 'y=x' we know that the outputs between 'x=1' and 'x=2' are smoothly interpolated between 1 and 2.
If this wasn't the case, if the function was discrete and only gave results at integer values, then when we sampled the function at 'x=1.5' there might be no valid result, and any result returned would be an error. Alternatively, the function could be discontinuous around this area, with results totally unrelated to the surrounding ones.
Identifying whether results are continuous or discrete is therefore an important factor in understanding the function we want to replace and its behaviour.
If a function is continuous then a common numerical approximation method is to generate a 2nd- or 3rd-order polynomial Taylor expansion to represent the curve. (See examples of how this is done here). This gives us a polynomial which matches the function across certain ranges under certain conditions.
Shown above is the continuous function sin(x) with different orders of Taylor series approximating it.
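As a concrete sketch of the idea, here is a minimal truncated Taylor series for sin(x) about x=0 (the function name and order choice are illustrative, not from the original post):

```python
import math

def sin_taylor(x, order):
    """Taylor series of sin(x) about x=0, truncated at the given order."""
    total = 0.0
    for n in range(order + 1):
        k = 2 * n + 1          # sin only has odd-power terms
        if k > order:
            break
        total += (-1) ** n * x ** k / math.factorial(k)
    return total

# The approximation is excellent near the expansion point x=0,
# but degrades as we move away from it.
for x in (0.5, 1.5, 3.0):
    print(f"x={x}: taylor={sin_taylor(x, 5):.6f}, sin={math.sin(x):.6f}")
```

Running this shows exactly the behaviour in the plot: close to x=0 the order-5 series is nearly exact, while further out the error grows rapidly.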
Here is the graph of 'tan(x)'. In this example we cannot approximate the whole range of 0 to 2PI with a single expansion, as there are discontinuities every 'PI' along x. To approximate this curve correctly we would need to split it into discrete sections of width PI and approximate each section separately. Essentially, we split a discontinuous function into n continuous chunks. In the case of tan(x) each chunk is a repeat of the last, so it is simply re-framing that needs to be done; for more complex functions each chunk can differ.
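One way to sketch this chunking for tan(x): because tan repeats every PI, we can reduce any input into the single continuous chunk (-PI/2, PI/2) and approximate only there. The reduction helper and the truncated series below are assumptions for illustration, not a production range-reduction scheme:

```python
import math

def reduce_to_chunk(x):
    """Map x into the continuous chunk (-pi/2, pi/2); tan repeats every pi."""
    return x - math.pi * round(x / math.pi)

def tan_approx(x):
    """Piecewise approximation sketch: Taylor series of tan about 0
    (terms up to x^7), applied after range reduction."""
    r = reduce_to_chunk(x)
    return r + r**3 / 3 + 2 * r**5 / 15 + 17 * r**7 / 315

print(tan_approx(0.2), math.tan(0.2))
# Inputs a whole period apart land in the same chunk and agree.
print(tan_approx(0.2 + math.pi), math.tan(0.2 + math.pi))
```

Note this only stays accurate while the reduced input is well inside the chunk; near the chunk edges (the asymptotes) a truncated series like this breaks down and a different approximation would be needed.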
You may notice in the Taylor series example that the lower-order approximations quickly diverge. This happens as values get further away from the central reference point we used to build the series. For some complex functions you may want to chop the function into chunks to get better precision across certain ranges. This works well when we only care about the values being directly output, but we have to be aware of the effect it has at the boundaries between curves.
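The chunking idea can be sketched like this: expand about the centre of whichever chunk the input falls in, rather than about one global point. The chunk width and centring scheme here are assumed for illustration:

```python
import math

def sin_taylor_about(x, a):
    """Order-3 Taylor series of sin about the point a, evaluated at x."""
    h = x - a
    return (math.sin(a) + math.cos(a) * h
            - math.sin(a) * h**2 / 2 - math.cos(a) * h**3 / 6)

def sin_piecewise(x, chunk=math.pi / 2):
    """Expand about the centre of whichever chunk x falls in."""
    centre = (math.floor(x / chunk) + 0.5) * chunk
    return sin_taylor_about(x, centre)

x = 5.0
print(abs(sin_taylor_about(x, 0.0) - math.sin(x)))  # one global expansion: large error
print(abs(sin_piecewise(x) - math.sin(x)))          # local chunk expansion: small error
```

The same low-order series gives far better precision once each chunk has its own nearby reference point, which is exactly the trade being described: better values inside each chunk, at the cost of seams between chunks.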
If we take a look at the derivative of the curve, the discontinuities where we switch from one curve to another, which were previously near invisible, become clearly obvious. Analysing the gradient changes at these points is important: some uses of the function's results may rely on the gradient, and in that case the resultant behaviour may be drastically different from the function we were replacing.
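This effect is easy to demonstrate numerically. Below is a hypothetical piecewise approximation of sin(x) built from two local Taylor expansions that meet at x = 1.0 (the joint location and expansion centres are assumptions for the demo); the values nearly agree at the joint, but a finite-difference slope exposes the kink:

```python
import math

def piece(x):
    """Hypothetical piecewise approximation of sin(x): an order-3 Taylor
    series about a different centre on each side of the joint at x = 1.0."""
    a = 0.5 if x < 1.0 else 1.5
    h = x - a
    return (math.sin(a) + math.cos(a) * h
            - math.sin(a) * h**2 / 2 - math.cos(a) * h**3 / 6)

def slope(f, x, eps=1e-5):
    """Central finite-difference estimate of f'(x)."""
    return (f(x + eps) - f(x - eps)) / (2 * eps)

# The values almost agree at the joint...
print(piece(0.999999), piece(1.000001))
# ...but the gradient jumps visibly across it.
print(slope(piece, 0.9999), slope(piece, 1.0001))
```

If a consumer of this approximation only reads values, the seam is almost invisible; if it differentiates the result (physics, lighting normals, optimisation), the jump in gradient is what it actually sees.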
This is where we need to recognise that even though the numerical error in the direct results is low, the actual use-case may have large error. At the end of the day, the error we really care about is how it affects the end result!