Approximation error

Graph of f(x) = e^x (blue) with its linear approximation P_1(x) = 1 + x (red) at a = 0. The approximation error is the gap between the curves, and it increases for x values further from 0.

Approximation error is the difference between an exact value and an approximation of it. The discrepancy can be quantified in two ways: as absolute error, the magnitude of the numerical difference, and as relative error, the absolute error expressed in proportion to the true value. Together, the two measures indicate how accurate an estimate or measurement is.

Approximation errors arise from various sources, often limitations in measurement tools or computing precision. For example, if a piece of paper is exactly 4.53 cm long but the ruler marks only the nearest 0.1 cm, the length is recorded as 4.5 cm. Similarly, limited machine precision in computing means numbers must sometimes be rounded, introducing small discrepancies. Such errors, while often minor, can affect the accuracy of calculations, especially when they accumulate over many steps.
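These two measures can be computed directly. The following is a minimal Python sketch using only the standard library; the function names absolute_error and relative_error are illustrative, not from any established package:

```python
def absolute_error(v: float, v_approx: float) -> float:
    """Magnitude of the numerical difference |v - v_approx|."""
    return abs(v - v_approx)

def relative_error(v: float, v_approx: float) -> float:
    """Absolute error in proportion to the true value; undefined for v == 0."""
    if v == 0:
        raise ValueError("relative error is undefined when the true value is zero")
    return abs(v - v_approx) / abs(v)

# The ruler example above: a true length of 4.53 cm recorded as 4.5 cm.
print(absolute_error(4.53, 4.5))   # ~0.03 cm (up to floating-point rounding)
print(relative_error(4.53, 4.5))   # ~0.0066, i.e. about 0.66%
```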

In the mathematical field of numerical analysis, the numerical stability of an algorithm indicates the extent to which errors in the input of the algorithm will lead to large errors in the output; a numerically stable algorithm does not yield a significant error in the output when the input is slightly perturbed, and vice versa.[1]

Formal definition


Given some value v, we say that v_approx approximates v with absolute error ε > 0 if [2][3]

$$|v - v_\text{approx}| \le \varepsilon,$$

where the vertical bars denote the absolute value.

We say that v_approx approximates v with relative error η > 0 if

$$|v - v_\text{approx}| \le \eta \cdot |v|.$$

If v ≠ 0, then

$$\eta = \frac{\varepsilon}{|v|} = \left| \frac{v - v_\text{approx}}{v} \right| = \left| 1 - \frac{v_\text{approx}}{v} \right|.$$

The percent error (an expression of the relative error) is [3]

$$\delta = 100\% \times \eta = 100\% \times \frac{|v - v_\text{approx}|}{|v|}.$$

An error bound is an upper limit on the relative or absolute size of an approximation error.[4]
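These definitions can be checked mechanically. Below is a small illustrative Python sketch; the predicate names are hypothetical:

```python
def within_absolute_error(v, v_approx, eps):
    """|v - v_approx| <= eps"""
    return abs(v - v_approx) <= eps

def within_relative_error(v, v_approx, eta):
    """|v - v_approx| <= eta * |v|"""
    return abs(v - v_approx) <= eta * abs(v)

# 49.9 approximates 50 with absolute error 0.1 and relative error 0.002,
# so 0.1 and 0.002 are error bounds for this approximation.
assert within_absolute_error(50, 49.9, 0.1)
assert within_relative_error(50, 49.9, 0.002)
```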

Examples

Best rational approximants for π (green circle), e (blue diamond), ϕ (pink oblong), (√3)/2 (grey hexagon), 1/√2 (red octagon) and 1/√3 (orange triangle) calculated from their continued fraction expansions, plotted as slopes y/x with errors from their true values (black dashes)  

For example, if the exact value is 50 and we approximate it as 49.9, the absolute error is 0.1. The relative error, calculated as 0.1/50, equals 0.002, or 0.2%.

In a practical scenario, consider measuring liquid in a 6 mL beaker. If the reading shows 5 mL but the correct value is 6 mL, the percent error is 1/6 ≈ 16.7%. This shows how even a small discrepancy can translate into a significant percent error, especially for small measurements.

The relative error is often used to compare approximations of numbers of widely differing size; for example, approximating the number 1,000 with an absolute error of 3 is, in most applications, much worse than approximating the number 1,000,000 with an absolute error of 3; in the first case the relative error is 0.003 while in the second it is only 0.000003.
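The worked examples above can be reproduced in a few lines; a sketch, with values taken from the text:

```python
# Exact value 50 approximated as 49.9.
abs_err = abs(50 - 49.9)           # 0.1
rel_err = abs_err / abs(50)        # 0.002, i.e. 0.2%

# Beaker: reading of 5 mL against a correct value of 6 mL.
percent_error = 100 * abs(6 - 5) / 6   # ~16.7%

# The same absolute error of 3 at widely differing scales.
print(3 / 1_000)          # relative error 0.003
print(3 / 1_000_000)      # relative error 0.000003
```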

There are two important aspects of relative error to remember. First, relative error becomes undefined when the exact value is zero, as this would place zero in the denominator. Second, relative error is meaningful only when the values are measured on a ratio scale—one that has a true zero. This is because relative error is sensitive to the measurement units. For instance, a temperature measurement with an absolute error of 1°C and a true value of 2°C has a relative error of 0.5. However, on the Kelvin scale, the same 1 K error with a true value of 275.15 K (equivalent to 2°C) results in a much smaller relative error of 0.00363.
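The unit sensitivity can be seen numerically; a brief sketch of the temperature example:

```python
# The same physical error of 1 degree, expressed on two scales.
true_celsius, error = 2.0, 1.0
print(error / abs(true_celsius))      # 0.5 on the Celsius scale

true_kelvin = true_celsius + 273.15   # the same temperature, 275.15 K
print(error / abs(true_kelvin))       # ~0.00363 on the Kelvin scale
```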

Comparison


Statements about relative errors are sensitive to addition of constants, but not to multiplication by constants. For absolute errors, the opposite is true: they are sensitive to multiplication by constants, but not to addition of constants.[5]: 34 
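A short numerical illustration of these sensitivities (values chosen arbitrarily; the stated equalities hold up to floating-point rounding):

```python
v, v_approx, c = 50.0, 49.9, 10.0

# Absolute error: unchanged by adding a constant to both values,
# but scaled when both values are multiplied by a constant.
print(abs(v - v_approx), abs((v + c) - (v_approx + c)))   # equal (~0.1)
print(abs(c * v - c * v_approx))                          # ~1.0, ten times larger

# Relative error: unchanged by multiplying both values by a constant,
# but altered when a constant is added to both.
print(abs(v - v_approx) / v, abs(c * v - c * v_approx) / (c * v))  # equal (~0.002)
print(abs((v + c) - (v_approx + c)) / (v + c))                     # ~0.00167, different
```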

Polynomial-time approximation of real numbers


We say that a real value v is polynomially computable with absolute error from an input if, for every rational number ε>0, it is possible to compute a rational number vapprox that approximates v with absolute error ε, in time polynomial in the size of the input and the encoding size of ε (which is O(log(1/ε))). Analogously, v is polynomially computable with relative error if, for every rational number η>0, it is possible to compute a rational number vapprox that approximates v with relative error η, in time polynomial in the size of the input and the encoding size of η.

If v is polynomially computable with relative error (by some algorithm called REL), then it is also polynomially computable with absolute error. Proof. Let ε>0 be the desired absolute error. First, run REL with relative error η=1/2 to find a rational number r1 such that |v-r1| ≤ |v|/2, and hence |v| ≤ 2|r1|. If r1=0, then v=0 and we are done. Since REL runs in polynomial time, the encoding length of r1 is polynomial in the size of the input. Now, run REL again with relative error η=ε/(2|r1|). This yields a rational number r2 that satisfies |v-r2| ≤ ε|v|/(2|r1|) ≤ ε, so it approximates v with absolute error ε, as desired.[5]: 34 
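The proof's two-call reduction can be sketched directly. In the following Python sketch, rel stands for the hypothetical oracle REL, modeled as a callable; exact rational arithmetic avoids rounding issues:

```python
from fractions import Fraction
from typing import Callable

def abs_from_rel(rel: Callable[[Fraction], Fraction], eps: Fraction) -> Fraction:
    """Approximate v with absolute error eps, given rel(eta) returning
    some r with |v - r| <= eta * |v| (the algorithm REL in the proof)."""
    r1 = rel(Fraction(1, 2))       # first call: |v - r1| <= |v|/2, so |v| <= 2|r1|
    if r1 == 0:
        return Fraction(0)         # then v = 0 exactly
    eta = eps / (2 * abs(r1))      # second call with relative error eps/(2|r1|)
    return rel(eta)                # |v - r2| <= eta*|v| <= eps

# Toy oracle for v = 3/7, built from the true value purely for illustration.
v = Fraction(3, 7)
toy_rel = lambda eta: v + eta * abs(v) / 2   # stays well within the allowed error
r = abs_from_rel(toy_rel, Fraction(1, 1000))
assert abs(v - r) <= Fraction(1, 1000)
```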

The reverse implication is usually not true. But, if we assume that some positive lower bound on |v| can be computed in polynomial time, e.g. |v| > b > 0, and v is polynomially computable with absolute error (by some algorithm called ABS), then it is also polynomially computable with relative error, since we can simply call ABS with absolute error ε = η b.
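Under the stated assumption of a computable lower bound b, the converse reduction is a one-liner; a sketch continuing the conventions above:

```python
def rel_from_abs(abs_oracle, eta, b):
    """Approximate v with relative error eta, given abs_oracle(eps) returning
    some r with |v - r| <= eps, and a known bound 0 < b <= |v| (ABS in the text)."""
    return abs_oracle(eta * b)   # |v - r| <= eta*b <= eta*|v|
```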

An algorithm that, for every rational number η>0, computes a rational number vapprox that approximates v with relative error η, in time polynomial in the size of the input and 1/η (rather than log(1/η)), is called an FPTAS.

Instruments


In most indicating instruments, the accuracy is guaranteed to a certain percentage of full-scale reading. The limits of these deviations from the specified values are known as limiting errors or guarantee errors.[6]
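As an illustration, consider a hypothetical voltmeter whose accuracy is guaranteed to ±2% of full-scale reading; the limiting error is fixed in absolute terms, so the relative error grows for smaller readings:

```python
full_scale, accuracy = 100.0, 0.02         # 100 V range, +/-2% of full scale
limiting_error = full_scale * accuracy     # +/-2 V anywhere on the scale

for reading in (100.0, 50.0, 10.0):
    print(reading, limiting_error / reading)   # relative error: 0.02, 0.04, 0.2
```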

Generalizations


The definitions can be extended to the case when v and vapprox are n-dimensional vectors, by replacing the absolute value with an n-norm.[7]
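For vectors, the same formulas apply with a norm in place of the absolute value. A sketch using NumPy and the Euclidean norm (any n-norm works the same way):

```python
import numpy as np

v = np.array([1.0, 2.0, 3.0])
v_approx = np.array([1.1, 1.9, 3.0])

abs_err = np.linalg.norm(v - v_approx)    # ||v - v_approx||
rel_err = abs_err / np.linalg.norm(v)     # ||v - v_approx|| / ||v||
print(abs_err, rel_err)                   # ~0.141, ~0.038
```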

References

  1. ^ Weisstein, Eric W. "Numerical Stability". mathworld.wolfram.com. Retrieved 2023-06-11.
  2. ^ Weisstein, Eric W. "Absolute Error". mathworld.wolfram.com. Retrieved 2023-06-11.
  3. ^ a b "Absolute and Relative Error | Calculus II". courses.lumenlearning.com. Retrieved 2023-06-11.
  4. ^ "Approximation and Error Bounds". www.math.wpi.edu. Retrieved 2023-06-11.
  5. ^ a b Grötschel, Martin; Lovász, László; Schrijver, Alexander (1993), Geometric algorithms and combinatorial optimization, Algorithms and Combinatorics, vol. 2 (2nd ed.), Springer-Verlag, Berlin, doi:10.1007/978-3-642-78240-4, ISBN 978-3-642-78242-8, MR 1261419
  6. ^ Helfrick, Albert D. (2005). Modern Electronic Instrumentation and Measurement Techniques. p. 16. ISBN 81-297-0731-4.
  7. ^ Golub, Gene; Charles F. Van Loan (1996). Matrix Computations – Third Edition. Baltimore: The Johns Hopkins University Press. p. 53. ISBN 0-8018-5413-X.