The standard deviation of a set of values is a measure of the variability of those values. Specifically, it is a nonnegative number representing the typical distance that the values in a dataset fall from their arithmetic mean.
The standard deviation is calculated by taking the square root of the variance, which itself is calculated in different ways depending, again, on whether the dataset is finite or is represented by a density curve.
For a finite dataset, the variance is calculated by finding the distance of each value from the mean, squaring these distances, summing them, and dividing the result by one less than the number of values. That is, for a dataset \(x_1, x_2, \dots, x_n\) of size \(n\) with mean \(\bar{x}\), the variance \(s^2\) is given by
\[
s^2 = \frac{1}{n-1}\sum_{i=1}^{n}\left(x_i - \bar{x}\right)^2.
\]
The standard deviation \(s\) is then given by
\[
s = \sqrt{s^2} = \sqrt{\frac{1}{n-1}\sum_{i=1}^{n}\left(x_i - \bar{x}\right)^2}.
\]
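The finite-dataset calculation can be sketched in a few lines of Python. This is a minimal illustration, not a library implementation; the function name `sample_std` is chosen here for clarity, and the result matches the standard library's `statistics.stdev`, which uses the same \(n-1\) divisor.

```python
import math

def sample_std(values):
    """Sample standard deviation: square root of the variance,
    where the variance divides the sum of squared deviations by n - 1."""
    n = len(values)
    if n < 2:
        raise ValueError("need at least two values")
    mean = sum(values) / n                                      # arithmetic mean
    variance = sum((x - mean) ** 2 for x in values) / (n - 1)   # s^2
    return math.sqrt(variance)                                  # s

data = [2.0, 4.0, 4.0, 4.0, 5.0, 5.0, 7.0, 9.0]
print(sample_std(data))  # about 2.138, typical distance from the mean of 5
```

Dividing by \(n-1\) rather than \(n\) (Bessel's correction) compensates for the fact that deviations are measured from the sample mean rather than the true mean.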