#### mathnewb99

##### New member

- Aug 10, 2019
- Thread starter
- #1

- Admin
- #2
- Mar 5, 2012

Hi mathnewb99, welcome to MHB!

There are different schools of thought on how many decimals to report.

Mathematically it is correct to report as many decimals as we want, since we assume the inputs are exact.

In practice we have to take into account that the inputs are not exact but have a measurement and/or rounding error.

It is then common to report as many decimals as are representative of the precision of the final result.

However, we only do that for the final result. Any intermediate result should be carried with a couple of extra digits, so that we do not introduce undesired rounding errors into the final result.
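To see why intermediate results keep extra digits, here is a quick sketch in Python (the specific values are made up for illustration, not from the thread): rounding an intermediate sum too early can flip the final rounded digit.

```python
# Rounding an intermediate result too early can change the final answer.
a, b, c = 12.3, 0.456, 0.49

early = round(round(a + b, 1) + c, 1)   # round the intermediate sum first
late = round(a + b + c, 1)              # keep full precision until the end

print(early)  # 13.3  (intermediate 12.756 was rounded up to 12.8)
print(late)   # 13.2  (exact sum 13.246 rounds down)
```

The two results disagree in the last digit, which is exactly the error that carrying guard digits avoids.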

So mathematically we can say that 12.3 + 0.456 = 12.756.

In practice it is conventional to assume that the input 12.3 has an error up to 0.05.

In this example the result will also have an error up to 0.05, so it is common to report the result as 12.8, which is rounded to the same number of decimals as the 'worst' input.

That is, unless it is an intermediate result, in which case we would report it as 12.756.
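The "round to the worst input" convention for addition can be sketched in Python as follows (the helper names here are my own, not part of any standard library):

```python
def decimal_places(s: str) -> int:
    """Number of digits after the decimal point in a numeric string."""
    return len(s.partition(".")[2])

def add_reported(a: str, b: str) -> str:
    """Add two measured values; round to the fewest decimal places."""
    places = min(decimal_places(a), decimal_places(b))
    return f"{float(a) + float(b):.{places}f}"

print(add_reported("12.3", "0.456"))  # 12.8
```

The inputs are kept as strings because the written form carries the precision information: 12.3 has one reported decimal, so the sum is rounded to one decimal as well.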

- Thread starter
- #3

In the example I have, the team is dividing a dollar value with 2 decimal places by a decimal that has 3 decimal places. In this case, applying "round to the worst precision" would mean the final result should be reported with 2 decimal places, to ensure any rounding errors are properly consumed by that final rounding operation. Does my interpretation sound correct to you?

- Admin
- #4

For multiplication and division it works slightly differently than for addition and subtraction.

> In the example I have, the team is dividing a dollar value with 2 decimal places by a decimal that has 3 decimal places. In this case, applying "round to the worst precision" would mean the final result should be reported with 2 decimal places, to ensure any rounding errors are properly consumed by that final rounding operation. Does my interpretation sound correct to you?

We have to look at the number of significant digits instead of the number of decimal digits.

Consider 56 / 2.34 = 23.9316.

The input 56 has 2 significant digits and 2.34 has 3 significant digits.

The worst input has 2 significant digits so the final result should also have 2 significant digits.

So if this is the final result, we would report 24, which shows that we have a precision of about +/- 0.5.
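The significant-digit rule can be sketched in Python like this (the helpers are my own illustrative names; `sig_figs` assumes plain decimal strings with no exponents and does not handle ambiguous trailing zeros such as "5600"):

```python
from math import floor, log10

def sig_figs(s: str) -> int:
    """Count significant digits in a plain decimal string."""
    return len(s.replace(".", "").replace("-", "").lstrip("0"))

def round_sig(x: float, n: int) -> float:
    """Round x to n significant digits."""
    return round(x, n - 1 - floor(log10(abs(x))))

a, b = "56", "2.34"
result = float(a) / float(b)        # 23.9316...
n = min(sig_figs(a), sig_figs(b))   # worst input has 2 significant digits
print(round_sig(result, n))         # 24.0
```

The key difference from the addition case is that we count significant digits rather than decimal places, so the position of the rounding digit depends on the magnitude of the result.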