## Statistics

### September 27, 2011

These are all straightforward:

```scheme
(define (mean xs) (/ (sum xs) (length xs)))

(define (std-dev xs)
  (let* ((n (length xs)) (x-bar (/ (sum xs) n)))
    (define (diff x) (- x x-bar))
    (sqrt (/ (sum (map square (map diff xs))) (- n 1)))))
```
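As a quick cross-check (my addition, not part of the original solution), Python's `statistics` module computes the same sample statistics; `statistics.stdev` divides by n − 1 just as `std-dev` does:

```python
import statistics

# The data from the example below.
xs = [1.47, 1.50, 1.52, 1.55, 1.57, 1.60, 1.63, 1.65,
      1.68, 1.70, 1.73, 1.75, 1.78, 1.80, 1.83]

# statistics.stdev divides by n - 1, matching the Scheme std-dev.
print(statistics.mean(xs))   # ≈ 1.65066666...
print(statistics.stdev(xs))  # ≈ 0.11423451233985206
```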

```scheme
(define (linear-regression xs ys)
  (let* ((n (length xs))
         (x (sum xs)) (y (sum ys))
         (xx (sum (map square xs)))
         (xy (sum (map * xs ys)))
         (yy (sum (map square ys)))
         (slope (/ (- (* n xy) (* x y)) (- (* n xx) (* x x))))
         (intercept (- (/ y n) (* slope (/ n) x))))
    (values slope intercept)))
```
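To sanity-check the closed-form slope and intercept (a Python sketch of mine, not part of the original solution), the same sums can be written directly:

```python
def linear_regression(xs, ys):
    """Least-squares slope and intercept from the same raw sums
    used in the Scheme version above."""
    n = len(xs)
    sx, sy = sum(xs), sum(ys)
    sxx = sum(x * x for x in xs)
    sxy = sum(x * y for x, y in zip(xs, ys))
    slope = (n * sxy - sx * sy) / (n * sxx - sx * sx)
    intercept = sy / n - slope * sx / n
    return slope, intercept

xs = [1.47, 1.50, 1.52, 1.55, 1.57, 1.60, 1.63, 1.65,
      1.68, 1.70, 1.73, 1.75, 1.78, 1.80, 1.83]
ys = [52.21, 53.12, 54.48, 55.84, 57.20, 58.57, 59.93, 61.29,
      63.11, 64.47, 66.28, 68.10, 69.92, 72.19, 74.46]
slope, intercept = linear_regression(xs, ys)
print(slope, intercept)  # ≈ 61.272 and ≈ -39.062, matching the transcript below
```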

```scheme
(define (correlation xs ys)
  (let* ((n (length xs)) (x-bar (mean xs)) (y-bar (mean ys)))
    (define (x-diff x) (- x x-bar))
    (define (y-diff y) (- y y-bar))
    (/ (sum (map * (map x-diff xs) (map y-diff ys)))
       (- n 1) (std-dev xs) (std-dev ys))))
```
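The same Pearson sample correlation, transcribed to Python as a cross-check (my addition): the sum of centered cross-products divided by (n − 1) times both sample standard deviations.

```python
import math

def correlation(xs, ys):
    """Pearson sample correlation, mirroring the Scheme version."""
    n = len(xs)
    xb, yb = sum(xs) / n, sum(ys) / n
    cov = sum((x - xb) * (y - yb) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - xb) ** 2 for x in xs) / (n - 1))
    sy = math.sqrt(sum((y - yb) ** 2 for y in ys) / (n - 1))
    return cov / ((n - 1) * sx * sy)

xs = [1.47, 1.50, 1.52, 1.55, 1.57, 1.60, 1.63, 1.65,
      1.68, 1.70, 1.73, 1.75, 1.78, 1.80, 1.83]
ys = [52.21, 53.12, 54.48, 55.84, 57.20, 58.57, 59.93, 61.29,
      63.11, 64.47, 66.28, 68.10, 69.92, 72.19, 74.46]
r = correlation(xs, ys)
print(r)  # ≈ 0.99458, matching the transcript below
```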

We used `sum` and `square` from the Standard Prelude. Here is a simple example:

```scheme
> (define xs '(1.47 1.50 1.52 1.55 1.57 1.60 1.63 1.65 1.68 1.70 1.73 1.75 1.78 1.80 1.83))
> (define ys '(52.21 53.12 54.48 55.84 57.20 58.57 59.93 61.29 63.11 64.47 66.28 68.10 69.92 72.19 74.46))
> (mean xs)
1.6506666666666665
> (mean ys)
62.078
> (std-dev xs)
0.11423451233985206
> (std-dev ys)
7.037514983490772
> (linear-regression xs ys)
61.272186542107434
-39.06195591883866
> (correlation xs ys)
0.9945837935768895
```

In 1973 statistician F. J. Anscombe constructed four datasets that have remarkably different shapes, shown at right, but that share a common mean *x* = 9 and *y* = 7.5, standard deviations *x* = 3.32 and *y* = 2.03, slope 0.5, intercept 3.0, and correlation 0.816:

```scheme
> (define xs-1 '(10 8 13 9 11 14 6 4 12 7 5))
> (define ys-1 '(8.04 6.95 7.58 8.81 8.33 9.96 7.24 4.26 10.84 4.82 5.68))
> (define xs-2 '(10 8 13 9 11 14 6 4 12 7 5))
> (define ys-2 '(9.14 8.14 8.74 8.77 9.26 8.10 6.13 3.10 9.13 7.26 4.74))
> (define xs-3 '(10 8 13 9 11 14 6 4 12 7 5))
> (define ys-3 '(7.46 6.77 12.74 7.11 7.81 8.84 6.08 5.39 8.15 6.42 5.73))
> (define xs-4 '(8 8 8 8 8 8 8 19 8 8 8))
> (define ys-4 '(6.58 5.76 7.71 8.84 8.47 7.04 5.25 12.50 5.56 7.91 6.89))
```
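The shared summary statistics are easy to verify; this Python sketch (my addition, not part of the original exercise) runs all four datasets through one helper and prints identical rows:

```python
import math
import statistics

def fit(xs, ys):
    """Slope, intercept, and Pearson r from centered sums."""
    n = len(xs)
    xb, yb = sum(xs) / n, sum(ys) / n
    cov = sum((x - xb) * (y - yb) for x, y in zip(xs, ys))
    sxx = sum((x - xb) ** 2 for x in xs)
    syy = sum((y - yb) ** 2 for y in ys)
    m = cov / sxx
    return m, yb - m * xb, cov / math.sqrt(sxx * syy)

xs_123 = [10, 8, 13, 9, 11, 14, 6, 4, 12, 7, 5]
datasets = [
    (xs_123, [8.04, 6.95, 7.58, 8.81, 8.33, 9.96, 7.24, 4.26, 10.84, 4.82, 5.68]),
    (xs_123, [9.14, 8.14, 8.74, 8.77, 9.26, 8.10, 6.13, 3.10, 9.13, 7.26, 4.74]),
    (xs_123, [7.46, 6.77, 12.74, 7.11, 7.81, 8.84, 6.08, 5.39, 8.15, 6.42, 5.73]),
    ([8, 8, 8, 8, 8, 8, 8, 19, 8, 8, 8],
     [6.58, 5.76, 7.71, 8.84, 8.47, 7.04, 5.25, 12.50, 5.56, 7.91, 6.89]),
]

results = [(statistics.mean(x), statistics.mean(y), *fit(x, y))
           for x, y in datasets]
for mx, my, m, b, r in results:
    # Every row prints 9 7.50 0.50 3.00 0.82 despite the very different shapes.
    print(f"{mx:.0f} {my:.2f} {m:.2f} {b:.2f} {r:.2f}")
```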

You can run the program at http://programmingpraxis.codepad.org/8WJpBVc9.

The implementation of your standard deviation (and thus correlation) is wrong, given the definitions on page 1. Your definition says to divide by N, but you divide by N − 1…

My implementation in Go:

I think I was taught to divide by n − 1 when the deviation from the (unknown) population mean is wanted but the (known) sample mean is used instead in the formula. The sample values are said to lose one “degree of freedom” because they cannot all deviate freely from their own mean.

See: http://en.wikipedia.org/wiki/Standard_deviation

If you divide by n, the variance estimate is biased. Dividing by n − 1 (Bessel’s correction) gives an unbiased variance, and its square root is the conventional sample standard deviation.
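To illustrate the n versus n − 1 point (my example, not the commenter's), Python's `statistics` module exposes both conventions: `pstdev` divides by n and `stdev` divides by n − 1, and the two differ by exactly the factor sqrt(n / (n − 1)):

```python
import math
import statistics

xs = [1.47, 1.50, 1.52, 1.55, 1.57, 1.60, 1.63, 1.65,
      1.68, 1.70, 1.73, 1.75, 1.78, 1.80, 1.83]
n = len(xs)

pop = statistics.pstdev(xs)  # divides by n   (population formula)
smp = statistics.stdev(xs)   # divides by n-1 (sample formula, as in std-dev above)

print(pop, smp)    # the population figure is always the smaller of the two
print(smp / pop)   # equals sqrt(n / (n - 1))
```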

Python solution: http://pastebin.com/vrV9J4vN

By way of conversation, here is an approach I find much fun. I lift constants to be vecs (indexed sequences) so that everything is uniform, and then I map binary or unary operations on these vecs. Like the language of R, but more rigid and in Scheme. The goal is a special language that allows one to explore descriptions like “the mean square deviation from the mean” in the code itself. Someone should write The Structure and Interpretation of Statistical, er, Something.

Ok, I get carried away. A variation on the theme anyway. I’ve included one of Anscombe’s cases.

@Jussi Piitulainen, Paul Hofstra:

Yes, but that’s not how he defined standard deviation on page 1. Thus the confusion.

@DGel: Yes. It may be better to deviate from the definition on page 1, especially when even the model implementation does so.

How ugly :)