Is there a relation between the max of a Gaussian random walk of 10 steps and the max of 10 one-step Gaussian random walks? Specifics (in Mathematica notation):
    (* one Gaussian random walk with standard deviation 1 per step: *)
    (* Accumulate gives the partial sums of 10 independent N(0,1) increments *)
    walk := Accumulate[RandomReal[NormalDistribution[0, 1], 10]]

    (* the max of the walk over its 10 steps *)
    b := Max[walk]

    (* calculate the max many times to get a good sample set *)
    (* SetDelayed (:=) insures we're not using the same random #s each time *)
    c = Table[b, {i, 1, 10000}];

    (* the distribution isn't necessarily normal, but we can still compute mu + SD *)
    Mean[c]
    StandardDeviation[c]
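As a sanity check, here's a quick sketch of the same experiment in Python (standard library only), tracking the running maximum along a single walk's path — i.e. the max is taken over the correlated partial sums of one walk, not over independently resampled walks:

```python
import random
import statistics

def walk_max(steps=10):
    """Max of the running partial sums of one Gaussian random walk."""
    s, m = 0.0, float("-inf")
    for _ in range(steps):
        s += random.gauss(0.0, 1.0)  # one N(0,1) increment
        m = max(m, s)                # track the running maximum
    return m

random.seed(1)  # reproducible sample
sample = [walk_max() for _ in range(10000)]
print(statistics.mean(sample), statistics.stdev(sample))
```

The `walk_max` helper is just for illustration; any way of taking the max over the cumulative sums of a single sequence of increments gives the same distribution.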
Now consider 10 people each taking a single step of a Gaussian random walk, and take the max of these 10 values.
    (* max of 10 standard-normally distributed numbers *)
    d := Max[Table[RandomReal[NormalDistribution[0, 1]], {i, 1, 10}]]

    (* get a good sample set *)
    f = Table[d, {i, 10000}];

    (* and now the mean and SD *)
    Mean[f] (* 1.54843 *)
    StandardDeviation[f] (* 0.580593 *)
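The same one-step experiment sketched in Python for comparison:

```python
import random
import statistics

random.seed(1)  # reproducible sample
# max of 10 independent N(0,1) draws, repeated 10000 times
sample = [max(random.gauss(0.0, 1.0) for _ in range(10))
          for _ in range(10000)]
print(statistics.mean(sample), statistics.stdev(sample))
```

For this second case the distribution is known exactly: the CDF of the max of n independent draws is the n-th power of the single-draw CDF, so here P(max ≤ x) = Φ(x)^10 with Φ the standard normal CDF.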
The two means/SDs are obviously different, but I sense they're related, perhaps by Sqrt[10]: the sum (not the max) of 10 standard normals is normal with SD Sqrt[10], and I have a feeling the cumulative sums of the first 9 steps somehow cancel out.
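The Sqrt[10] part of that hunch is easy to check numerically — the sum of 10 independent N(0,1) draws is N(0, Sqrt[10]), so its sample SD should come out near 3.16:

```python
import math
import random
import statistics

random.seed(1)  # reproducible sample
# endpoint of a 10-step walk = sum of 10 N(0,1) increments
sums = [sum(random.gauss(0.0, 1.0) for _ in range(10))
        for _ in range(10000)]
print(statistics.stdev(sums), math.sqrt(10))  # both approximately 3.16
```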
Are these known distributions?