Numerical Precision

Definition

Numerical Precision is a measure of the detail with which a numerical quantity (in particular a real number) is expressed digitally, that is, in modern computer applications. It is usually measured in bits or decimal digits. It is related to the notion of precision in mathematics, which describes the number of digits used to express a value.
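
As a quick illustration, the Python sketch below (assuming the interpreter's built-in float is an IEEE 754 double, as it is on virtually all current platforms) reports the precision of that type in both units:

    import sys

    # Precision of Python's float (an IEEE 754 double) in two units:
    print(sys.float_info.mant_dig)  # 53 -> significand precision in bits
    print(sys.float_info.dig)       # 15 -> decimal digits that convert to float and back unchanged
    print(sys.float_info.epsilon)   # ~2.22e-16 -> gap between 1.0 and the next float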

In Quantitative Risk Management applications, numerical precision is important in the following contexts:

  • Ensuring that final numerical results are accurate to the required number of digits
  • Ensuring that intermediate results have sufficient accuracy so that final outcomes are correct (e.g. in the context of Simulation Models; see the sketch after this list)
  • Ensuring a measure of reproducibility in the context of Model Validation
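
The second point can be made concrete with a small sketch. The example below is illustrative only: the choice of ten million repetitions of the term 0.1 is arbitrary, and kahan_sum is a hypothetical helper written for this sketch. It accumulates a long sum, as might happen inside a simulation, first naively and then with compensated summation:

    import math

    n = 10_000_000
    term = 0.1  # not exactly representable in binary floating point

    # Naive accumulation: rounding error compounds over ten million additions
    naive = 0.0
    for _ in range(n):
        naive += term

    def kahan_sum(values):
        # Compensated (Kahan) summation: carry the lost low-order bits forward
        total = 0.0
        comp = 0.0
        for v in values:
            y = v - comp
            t = total + y
            comp = (t - total) - y
            total = t
        return total

    print(naive)                              # drifts visibly from 1000000.0
    print(kahan_sum(term for _ in range(n)))  # stays within an ulp of 1000000.0
    print(math.fsum(term for _ in range(n)))  # correctly rounded: 1000000.0

The naive loop loses a little accuracy at every addition, which is exactly the kind of intermediate-result degradation that can silently distort a simulated aggregate.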

Standards

The IEEE Standard for Floating-Point Arithmetic (IEEE 754) is a technical standard for floating-point arithmetic established in 1985 by the Institute of Electrical and Electronics Engineers (IEEE). The standard addressed the many problems found in the diverse floating-point implementations of the time, which made them difficult to use reliably and portably. Many hardware floating-point units implement the IEEE 754 standard.
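
A minimal sketch of what the standard pins down, assuming the platform float is an IEEE 754 binary64 (true on essentially all modern hardware); binary64_fields is a hypothetical helper written for this illustration:

    import struct

    def binary64_fields(x):
        # Pack the double into 8 bytes and reinterpret them as an integer
        (bits,) = struct.unpack(">Q", struct.pack(">d", x))
        sign = bits >> 63                  # 1 sign bit
        exponent = (bits >> 52) & 0x7FF    # 11-bit biased exponent
        fraction = bits & ((1 << 52) - 1)  # 52-bit fraction field
        return sign, exponent, fraction

    print(binary64_fields(1.0))  # (0, 1023, 0): the exponent bias is 1023
    print(binary64_fields(0.1))  # nonzero fraction: 0.1 is inexact in binary
    print(0.1 + 0.2 == 0.3)      # False, a direct consequence of rounding

Because the standard fixes the bit layout and the rounding behaviour, the same computation produces the same bit pattern across conforming platforms, which is what makes floating-point results portable and reproducible.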