
Mixed-precision approaches can provide substantial speed-ups for both compute-bound and memory-bound codes with little effort. Yet most scientific codes overengineer their numerical precision, so models consume more resources than they need, without knowing where high precision is required and where it is not. Consequently, it is possible to improve computational performance by making a more appropriate choice of precision.

The only input needed is a method to determine which real variables can be represented with fewer bits without affecting the accuracy of the results. This paper presents a novel method that enables modern and legacy codes to benefit from reducing the precision of certain variables without sacrificing accuracy. It rests on a simple idea: we reduce the precision of a group of variables, measure how the outputs are affected, and from this evaluate the level of precision that those variables truly need.
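To make the idea concrete, the following Python fragment is a minimal sketch of this kind of procedure, not the tool used in this paper: it emulates reduced precision by zeroing trailing mantissa bits of double-precision values (only on the inputs, whereas a full emulator would also truncate after every assignment inside the model), and the names (`truncate_mantissa`, `minimal_bits`, `model`) and the tolerance in `acceptable` are assumptions made for illustration.

```python
import numpy as np

def truncate_mantissa(x, bits):
    """Emulate reduced precision by zeroing the trailing (52 - bits)
    mantissa bits of each IEEE-754 double-precision value."""
    if bits >= 52:
        return np.asarray(x, dtype=np.float64)
    raw = np.asarray(x, dtype=np.float64).view(np.uint64)
    keep = ~((1 << (52 - bits)) - 1) & 0xFFFFFFFFFFFFFFFF
    return (raw & np.uint64(keep)).view(np.float64)

def acceptable(outputs, reference, rtol=1e-4):
    """Accuracy criterion supplied by the user; here, a plain relative
    comparison against a double-precision reference run."""
    return np.allclose(outputs, reference, rtol=rtol)

def minimal_bits(model, state, groups, reference):
    """For each group of variables, search for the narrowest mantissa
    width (full, float32-like, float16-like) that still passes the
    accuracy test, perturbing one group at a time."""
    needed = {}
    for name in groups:
        needed[name] = 52                 # default: full double precision
        for bits in (23, 10):             # float32- and float16-like widths
            trial = dict(state)
            trial[name] = truncate_mantissa(state[name], bits)
            if acceptable(model(trial), reference):
                needed[name] = bits       # this width is still accurate
            else:
                break                     # narrower widths will fail too
    return needed

# Toy usage: 'field' needs more mantissa bits than 'scale' does.
model = lambda s: s["field"] * s["scale"]
state = {"field": np.linspace(0.0, 1.0, 1000), "scale": np.float64(2.5)}
print(minimal_bits(model, state, ["field", "scale"], model(state)))
```

In this toy run, `scale` tolerates a float16-like mantissa while `field` needs a float32-like one, illustrating how the search separates variables by the precision they truly need.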
