I think whuber's comment is spot-on. That formula is surely not referring to a "physical distance"; it is simply a general measure of the distance (as "dissimilarity") between two multidimensional (general) *features*.
A classical example is statistical classification; for example, when applying the nearest-neighbour rule, we must compute the distance from our observed feature (say ${\mathbf x} = [x_1,\, x_2]$) to some reference values (say ${\mathbf x}^A = [x_1^A,\, x_2^A]$).
Here the components $x_1, x_2$ can be any measurements, often with different dimensions (e.g., the weight and length of some object).
A possible way to measure the (squared) distance is to compute $d^2 = (x_1 - x_1^A)^2+(x_2 - x_2^A)^2$, but this would be dimensionally inconsistent, and hence sensitive to the scale of each component. A more reasonable recipe is to normalize each component by dividing by some "characteristic" value (e.g., its standard deviation), so that all components become dimensionless and contribute approximately evenly:
$$ d^2 = \frac{(x_1 - x_1^A)^2}{\sigma_1^2}+\frac{(x_2 - x_2^A)^2}{\sigma_2^2}$$
Of course, the distance obtained after this feature normalization is dimensionless; it is only meaningful when compared with other distances computed the same way.
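To make this concrete, here is a minimal sketch of the normalized distance in NumPy. The feature values and standard deviations below are made-up numbers, just for illustration (weight in kg, length in m):

```python
import numpy as np

# Hypothetical observed feature and reference feature (illustrative values)
x  = np.array([72.0, 1.80])   # [weight (kg), length (m)] of observed object
xA = np.array([65.0, 1.70])   # reference values for class A

# "Characteristic" scales, e.g. standard deviations estimated from data
sigma = np.array([10.0, 0.15])

# Dimensionless squared distance: normalize each component, then sum squares
d2 = np.sum(((x - xA) / sigma) ** 2)
print(d2)
```

Because each difference is divided by a quantity with the same units, the ratios (and hence `d2`) are pure numbers, so mixing kilograms and metres is no longer a problem.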