
Help explaining unrounding concepts

mathnewb99

New member
Aug 10, 2019
2
We have some people trying to report four places to the right of the decimal with inputs that rounded to 2 and 3 decimal places. Can you please articulate why it is impossible to guarantee four decimal precision? I have been unsuccessful in my attempts and am looking for help from someone with a formal math background. Many thanks in advance.
 

Klaas van Aarsen

MHB Seeker
Staff member
Mar 5, 2012
8,713
mathnewb99 said:
We have some people trying to report four places to the right of the decimal with inputs that rounded to 2 and 3 decimal places. Can you please articulate why it is impossible to guarantee four decimal precision? I have been unsuccessful in my attempts and am looking for help from someone with a formal math background. Many thanks in advance.
Hi mathnewb99, welcome to MHB!

There are different schools of thought on how many decimals to report.

Mathematically it is correct to report as many decimals as we want, since we assume the inputs are exact.

In practice we have to take into account that the inputs are not exact but carry a measurement and/or rounding error.
It is then common to report as many decimals as is representative of the precision of the final result.
However, we only do that for the final result. Any intermediate result should be reported with a couple of extra digits so that we do not introduce undesired rounding errors into the final result.

So mathematically we can say that 12.3 + 0.456 = 12.756.
In practice it is conventional to assume that the input 12.3 has an error of up to 0.05.
In this example the result then also has an error of up to 0.05, so it is common to report the result as 12.8, rounded to the same number of decimals as the 'worst' input.
That is, unless it is an intermediate result, in which case we would report it as 12.756.
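The reasoning above can be made concrete with a small Python sketch (not from the thread): treat each rounded input as the interval of values that would round to it, and propagate the worst case through the addition. The `interval` helper is a hypothetical name introduced here for illustration.

```python
def interval(x_str):
    """Interpret a rounded decimal string as the interval of values that
    would round to it: half a unit in the last place on either side."""
    x = float(x_str)
    half_ulp = 0.5 * 10 ** -len(x_str.split(".")[1])
    return (x - half_ulp, x + half_ulp)

lo1, hi1 = interval("12.3")    # (12.25, 12.35)
lo2, hi2 = interval("0.456")   # (0.4555, 0.4565)

# Worst-case bounds on the sum: add the lower bounds and the upper bounds.
lo, hi = lo1 + lo2, hi1 + hi2
print(f"true sum lies somewhere in [{lo:.4f}, {hi:.4f}]")
```

The interval has a width of about 0.1, so only the first decimal of the sum is trustworthy; reporting 12.7560 would claim a precision the inputs cannot support.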
 

mathnewb99

New member
Aug 10, 2019
2
Thanks for the reply. That makes a lot of sense.

In the example I have, the team is dividing a dollar value with 2 decimal places by a decimal with 3 decimal places. In this case, applying "round to the worst precision" would mean the final result should be reported with 2 decimal places, so that any rounding errors are properly absorbed by that final rounding operation. Does my interpretation sound correct to you?
 

Klaas van Aarsen

MHB Seeker
Staff member
Mar 5, 2012
8,713
mathnewb99 said:
Thanks for the reply. That makes a lot of sense.

In the example I have, the team is dividing a dollar value with 2 decimal places by a decimal with 3 decimal places. In this case, applying "round to the worst precision" would mean the final result should be reported with 2 decimal places, so that any rounding errors are properly absorbed by that final rounding operation. Does my interpretation sound correct to you?
For multiplication and division it works slightly differently than for addition and subtraction.
We have to look at the number of significant digits instead of the number of decimal digits.
Consider 56 / 2.34 = 23.9316.
The input 56 has 2 significant digits and 2.34 has 3 significant digits.
The worst input has 2 significant digits so the final result should also have 2 significant digits.
So if this is the final result, we would report 24, which shows that we have a precision of about +/- 0.5.
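The same worst-case reasoning can be checked with a short Python sketch (not from the thread) that propagates the rounding error of each input through the division 56 / 2.34:

```python
# Values that round to 56 lie in [55.5, 56.5); values that round to
# 2.34 lie in [2.335, 2.345).
num_lo, num_hi = 55.5, 56.5
den_lo, den_hi = 2.335, 2.345

# For positive intervals, the quotient is smallest when we divide the
# smallest numerator by the largest denominator, and largest the other
# way around.
q_lo = num_lo / den_hi
q_hi = num_hi / den_lo

print(f"true quotient lies somewhere in [{q_lo:.3f}, {q_hi:.3f}]")
```

The quotient can range from roughly 23.67 to 24.20, so only the first two significant digits are reliable, which is exactly why 24 is the appropriate final report.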