Standard Deviation, Variance, Variability
Why is the standard deviation (error), rather than the variance,
often a more useful measure of variability?
While the variance (which is the square of the standard deviation) is
mathematically the "more natural" measure of deviation, many people have a
better "gut" feel for the standard deviation because it has the same
dimensional units as the measurements being analyzed. A similar difference
occurs with another statistical variable, the correlation coefficient,
usually denoted "R", which measures the correlation between two variables.
The more correct measure of such correlation (not causation) is R^2, which
is never greater than |R| because, by the nature of the formula, R always
lies between -1 and +1. Even a correlation near +1 or -1 doesn't mean that
"A" caused "B". It means only that "A" and "B" go up and down together. The
"cause" may be some entirely different variable(s) that make "A" and "B"
behave that way.
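The relationship between R and R^2 can be checked directly. The following is a minimal sketch that computes the Pearson correlation coefficient from its definition; the data lists `a` and `b` are hypothetical values chosen only to illustrate two variables that rise and fall together.

```python
import math

# Hypothetical data: two variables that happen to rise and fall
# together (illustration only -- correlation, not causation).
a = [1.0, 2.0, 3.0, 4.0, 5.0]
b = [2.1, 3.9, 6.2, 8.1, 9.8]

def pearson_r(x, y):
    """Pearson correlation coefficient R, computed from its definition."""
    n = len(x)
    mx = sum(x) / n
    my = sum(y) / n
    cov = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
    sx = math.sqrt(sum((xi - mx) ** 2 for xi in x))
    sy = math.sqrt(sum((yi - my) ** 2 for yi in y))
    return cov / (sx * sy)

r = pearson_r(a, b)
# R always lies in [-1, +1], so R^2 can never exceed |R|.
print(f"R   = {r:.4f}")
print(f"R^2 = {r ** 2:.4f}")
```

Because |R| <= 1, squaring it can only shrink it (or leave it unchanged at exactly ±1), which is the point made above.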
Standard deviation is in the units of the data. Consider a distribution of
times, all measured in "seconds". The standard deviation also has "seconds"
as its unit. The variance is the square of the standard deviation. The
unit for variance would be "seconds-squared". A quantity in the units of
the data is much easier to actually relate to the data.
Dr. Ken Mellendorf
Illinois Central College
Update: June 2012