Precision vs. Accuracy

How Good Are Your Measurements?

In everyday conversation, the words "precision" and "accuracy" are often used interchangeably. In the context of measurements or scientific data, however, they have two distinct meanings.

Precision

"Precision" refers to how consistent or close together a set of data points are to one another (assuming they are supposed to be similar values). It is not uncommon to repeated a measurement multiple times to account for random sources of error (with the intent of taking an average later). A set of measurements with "good" precision will be close together and have very little variation. Having good precision or low variance in your data gives you confidence that your experiment is repeatable, but it does not necessarily tell you that your values are correct.

Accuracy

"Accuracy" refers to how close a data point or an average of a data set is to a "correct" answer or reference value. Usually the reference value is an established value from literature (textbook, scientific journal, or another reference article). The closer the data or average is to the literature value, the better the accuracy.

Precision and Accuracy are Independent

Precision and accuracy are largely independent of each other. In other words, it is possible to have good precision but poor accuracy. For example, a miscalibrated balance may give repeated readings that are very close together (good precision) but all shifted away from the true mass (poor accuracy). Similarly, it is possible to have poor precision but good accuracy (see Sources of Error for more information about how this can happen).

Quantifying Precision and Accuracy

Precision

Two common ways of quantifying precision in early chemistry classes are relative range and standard deviation. Relative range expresses the difference between your largest and smallest measured values as a percentage of the mean (average). The better the precision of a data set, the closer together the data points will be, resulting in a smaller relative range. To calculate the relative range, take the highest value in your data set, subtract the lowest value, divide the result by the mean, and multiply by 100 % (multiply by 100, then add units of %). It is important to note that the unit for relative range is % (percent), regardless of the units of the original data.

Standard deviation is a more advanced way of quantifying precision. The basis and derivation of the standard deviation go beyond the scope of this website, but if you are interested I would recommend looking at the Wikipedia article, or better yet, a statistics book if you have one. In short, the standard deviation effectively measures the average difference between each data point and the mean (it is not exactly this, but very close to it). As with relative range, the smaller the value, the better the precision, although the standard deviation has the same units as the original data points.

Relative range = (highest value − lowest value) / mean × 100 %
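As a quick illustration of both calculations, here is a minimal Python sketch. The data values are made up for the example (hypothetical repeated mass measurements), not taken from any real experiment.

import statistics

# Hypothetical repeated measurements of the same mass (in grams)
data = [2.45, 2.51, 2.48, 2.47, 2.50]

mean = statistics.mean(data)

# Relative range: (largest - smallest) / mean, expressed as a percent
relative_range = (max(data) - min(data)) / mean * 100

# Sample standard deviation: keeps the same units as the data (grams here)
std_dev = statistics.stdev(data)

print(f"mean = {mean:.3f} g")
print(f"relative range = {relative_range:.1f} %")
print(f"standard deviation = {std_dev:.3f} g")

Note that the relative range comes out in %, while the standard deviation comes out in grams, matching the point made above about units.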

Accuracy

The most common way of quantifying accuracy is with percent error. Percent error expresses the error of an experiment as the difference between the experimentally observed value and the literature/reference value, taken as a percentage of the reference value. Calculating percent error is similar to calculating relative range, but with some slight yet important differences. To calculate the percent error, take the value obtained from your experimental measurements and subtract the literature/reference value. Take the absolute value of that result, divide by the literature/reference value, and multiply by 100 %. The smaller the percent error of a data set, the more accurate the data are and the closer they are to the "true" value.

Percent error = |experimental value − reference value| / reference value × 100 %
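Continuing the same style of sketch, here is the percent error calculation in Python. The measured density below is a made-up example value compared against a reference density of 1.00 g/mL for water.

# Hypothetical example: measured density of water vs. a literature value
experimental = 0.982   # g/mL, measured (made-up value for illustration)
reference = 1.00       # g/mL, literature/reference value

percent_error = abs(experimental - reference) / reference * 100

print(f"percent error = {percent_error:.1f} %")

In this example the percent error is about 1.8 %, meaning the measured value lies within about 2 % of the reference value.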