
Original post on MathOverflow here.

Given a list of independent and identically distributed Lévy stable random variables, $(X_0, X_1, \dots, X_{n-1})$, what is the probability that the maximum exceeds the sum of the rest? i.e.:

$$ M = \max(X_0, X_1, \dots, X_{n-1}) $$ $$ \Pr\Big( M > \sum_{j=0}^{n-1} X_j - M \Big) $$

Here, in Nolan's notation, $X_j \in S(\alpha, \beta=1, \gamma, \delta=0 ; 0)$, where $\alpha$ is the critical exponent, $\beta$ is the skew, $\gamma$ is the scale parameter and $\delta$ is the shift. For simplicity, I have taken the skew parameter, $\beta$, to be 1 (maximally skewed to the right) and $\delta=0$, so everything has its mode centered in an interval near 0.

From numerical simulations, it appears that in the region $0 < \alpha < 1$ the probability converges to a constant, regardless of $n$ or $\gamma$. Below is a plot of this region for $n=500$, $0< \alpha < 1$, where each point represents the result of 10,000 random draws. The graph looks exactly the same for $n=100, 200, 300$ and $400$.

[Figure: simulated probability vs. $\alpha$ for $n=500$, $0 < \alpha < 1$]
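For reference, here is a minimal Monte Carlo sketch of the kind of experiment behind these plots. It assumes scipy's `levy_stable` samples in Nolan's S1 parameterization (its documented default; worth checking in your version), and shifts the draws to the $S(\alpha, \beta, \gamma, 0; 0)$ law used above:

```python
import numpy as np
from scipy.stats import levy_stable

def prob_max_exceeds_rest(alpha, n, gamma=1.0, beta=1.0, trials=2_000, seed=0):
    """Monte Carlo estimate of Pr(M > sum of the rest) for iid S(alpha,beta,gamma,0;0)."""
    rng = np.random.default_rng(seed)
    # scipy draws in the S1 parameterization (documented default);
    # subtracting gamma*beta*tan(pi*alpha/2) converts to Nolan's S0 law
    # with delta = 0 (valid for alpha != 1).
    x = levy_stable.rvs(alpha, beta, loc=0.0, scale=gamma,
                        size=(trials, n), random_state=rng)
    x -= gamma * beta * np.tan(np.pi * alpha / 2)
    m = x.max(axis=1)
    s = x.sum(axis=1)
    return (m > s - m).mean()

# e.g. alpha = 0.5, n = 500 should land near the plateau in the plot above
# (the plots above used 10,000 trials per point)
print(prob_max_exceeds_rest(0.5, 500))
```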

For $1 < \alpha < 2$ it appears to decay as $O(1/n^{\alpha - 1})$ (maybe?), regardless of $\gamma$. Below is a plot of the probability for $\alpha \in \{1.125, 1.3125\}$ as a function of $n$. Note that it is a log-log plot, and I have provided the graphs $1/x^{0.125}$ and $1/x^{0.3125}$ for reference. It's hard to tell from the graph unless you line them up, but the fit for each is a bit off, and the (log-log) slope of the actual data appears steeper than my guess in each case. Each point represents 10,000 iterations.

[Figure: log-log plot of the probability vs. $n$ for $\alpha \in \{1.125, 1.3125\}$, with reference curves $1/x^{0.125}$ and $1/x^{0.3125}$]

For $\alpha=1$ it's not clear (to me) what's going on, but the probability appears to be a decreasing function of both $n$ and $\gamma$.

I have tried making a heuristic argument of the form:

$$\text{Pr}( M > \sum_{j=0}^{n-1} X_j - M) \le n \text{Pr}( X_0 - \sum_{j=1}^{n-1} X_j > 0 )$$

Then, using formulas provided by Nolan (pg. 27) for the parameters of the implicit r.v. $ U = X_0 - \sum_{j=1}^{n-1} X_j$, combined with the tail approximation:

$$ \text{Pr}( X > x ) \sim \gamma^{\alpha} c_{\alpha} ( 1 + \beta ) x^{-\alpha} $$ $$ c_{\alpha} = \sin( \pi \alpha / 2) \Gamma(\alpha) / \pi $$

but this leaves me nervous and a bit unsatisfied.
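To make the tail approximation concrete, here is a small sketch comparing it with scipy's numerical survival function. (The S0 and S1 parameterizations differ only by a constant shift, which is negligible far in the tail, so the comparison is fair; `tail_approx` is just an illustrative name.)

```python
import numpy as np
from scipy.special import gamma as Gamma
from scipy.stats import levy_stable

def tail_approx(x, alpha, beta=1.0, scale=1.0):
    """Pr(X > x) ~ gamma^alpha * c_alpha * (1 + beta) * x^(-alpha)."""
    c_alpha = np.sin(np.pi * alpha / 2) * Gamma(alpha) / np.pi
    return scale**alpha * c_alpha * (1 + beta) * x**(-alpha)

alpha, beta = 1.5, 1.0
for x in (10.0, 50.0, 200.0):
    # the two columns should agree better and better as x grows
    print(x, tail_approx(x, alpha, beta), levy_stable.sf(x, alpha, beta))
```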

Just for comparison, if the $X_j$ were taken to be uniform r.v.'s on the unit interval, this probability would decrease exponentially quickly. I imagine similar results hold if the $X_j$ were Gaussian, though any clarification on that point would be appreciated.

Getting closed-form solutions for this is probably out of the question, as there isn't even a closed-form expression for the pdf of Lévy stable random variables, but getting bounds on the probability would be helpful. I would appreciate any help with how to analyze these types of questions in general, such as general methods or references to other work in this area.

If this problem is elementary, I would greatly appreciate any reference to a textbook, tutorial or paper that would help me solve problems of this sort.

UPDATE: George Lowther and Shai Covo have answered this question below. I just wanted to give a few more pictures that compare their answers to some of the numerical experiments that I did.

Below is the probability of the maximum element being larger than the sum of the rest, for a list size of $n=100$, as a function of $\alpha$, $\alpha \in (0,1)$. Each point represents 10,000 simulations.

[Figure: simulated probability vs. $\alpha$ for $n=100$, $\alpha \in (0,1)$]

Below are two graphs, for $\alpha = 1.53125$ and $\alpha = 1.875$. Both have the function $ (2/\pi) \sin(\pi \alpha / 2) \Gamma(\alpha)\, n\, ( \tan(\pi \alpha/2) (n^{1/\alpha} - n))^{-\alpha} $ superimposed for reference, with different prefactors in front of them to get them to line up ($1/4$ and $1/37$, respectively).

[Figures: probability vs. $n$ for $\alpha = 1.53125$ and $\alpha = 1.875$, each with the scaled reference curve superimposed]

As George Lowther correctly pointed out, for the relatively small $n$ being considered here, the effect of the extra $n^{1/\alpha}$ term (when $1 < \alpha < 2$) is non-negligible and this is why my original reference plots did not line up with the results of the simulations. Once the full approximation is put in, the fit is much better.
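For anyone wanting to reproduce the reference curves, here is a sketch of the full approximation, assuming $\gamma = \beta = 1$, $\delta = 0$ and $1 < \alpha < 2$:

```python
import numpy as np
from scipy.special import gamma as Gamma

def ref_curve(alpha, n):
    """(2/pi) sin(pi a/2) Gamma(a) n (tan(pi a/2)(n^{1/a} - n))^{-a}."""
    # For 1 < alpha < 2 both factors inside the parentheses are negative,
    # so their product is positive and the power is well defined.
    arg = np.tan(np.pi * alpha / 2) * (n**(1.0 / alpha) - n)
    return (2.0 / np.pi) * np.sin(np.pi * alpha / 2) * Gamma(alpha) * n * arg**(-alpha)

for n in (100, 300, 1000):
    print(n, ref_curve(1.53125, n), ref_curve(1.875, n))
```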

When I get around to it, I will try and post some more pictures for the case when $\alpha=1$ as a function of $n$ and $\gamma$.

  • 0
    This looks like quite a nice question, and I think you can get the $n\to\infty$ asymptotic behaviour by looking at the joint distribution of the maximum jump and terminal value of a stable Levy process.2010-11-27
  • 0
    And, compare with the Gaussian case, where the associated Levy process is Brownian motion. This is continuous, so its maximum jump size is zero and, as the final distribution is Gaussian, you expect fast convergence to the probability that the BM ends at a negative value (which will depend on the mean used).2010-11-27
  • 0
    And, by the Central Limit Theorem, I expect the result for any square integrable distribution (e.g., uniform) to have the same asymptotics as the Gaussian case.2010-11-27
  • 0
    @Shai: I read it as taking the maximum of n Levy stable processes independent of the sum then asking which one is larger. @George: Do I have that right? Would you like to expand a bit more on how to do this? Perhaps consider the Frechet distribution vs. the resulting stable distribution?2010-11-28
  • 0
    @user4143: I was thinking of a single stable Levy process Z(t), and you want the joint distribution of Z(1) and $\max_{t\le1}(Z(t)-Z(t-))$.2010-11-28
  • 0
    However, one thing confuses me, looking at your question again. You chose $\beta=1$, so that the random variables are non-negative. That simplifies things a lot, whereas $\beta < 1$ sounds a bit more interesting. However, in the case $\alpha\ge1$ the rvs always have support equal to the entire set of real numbers (and $\beta$ can't be chosen to be 1). Am I misunderstanding something?2010-11-28
  • 0
    Oh, sorry, I see you're using "Nolan's notation", whatever that is (haven't read through the linked file yet). In any case, the RVs won't be nonnegative for $\alpha\ge1$, so the problem is rather different in that case.2010-11-28
  • 0
Having looked at your link, I see that there are *two* parameterizations. The first, which you are using, is denoted by $S(\cdot;0)$ and, for $\alpha < 1,\beta=1$ is supported on the nonnegative numbers when $\delta > \gamma\tan\frac{\pi\alpha}{2}$, which is not the case here. The second parameterization, $S(\cdot;1)$, is supported on the nonnegative numbers when $\delta\ge0$, which is what I initially assumed you were doing.2010-11-28
  • 0
    @user4143: Can you confirm that you are not, in fact, using nonnegative random variables in any of the scenarios. That seems to have caused some confusion.2010-11-28
  • 0
    @user4143: Never mind. It is clear what you are using looking at your results and the notation in the linked text. @Shai: The normal case will depend on what mean you are using. For mean zero, it should converge to 1/2. The approach in my answer should work for the Gaussian case, where the Levy process is a continuous Brownian motion.2010-11-29
  • 0
    @user4143: Since you mentioned you are a programmer, I'm curious to know in what context this question arose.2010-12-01
  • 0
    @shai, I'm actually a grad student in computer science. I was looking into phase transition of the Number Partition Problem for random instance elements drawn from (slightly modified) Levy-Stable distributions. For $0 < \alpha < 1$ I noticed the prob. of a solution existing was not 1 in the "easy" region. On further investigation, this mostly turned out to be because of the maximum element in the list (i.e. maximum jump). A more detailed analysis will probably have to include the second largest jump, etc. etc. Also, really, I was just curious.2010-12-01
  • 0
    Thanks. The references I mentioned before may be useful with regard to the second largest, etc.2010-12-01

4 Answers

5

What you are asking for is a special case of the joint distribution of the maximum and sum of a sequence of random variables. The fact that the variables are stable simplifies this and allows us to compute the joint distribution in the limit as $n$ goes to infinity. This limit can be described in terms of the joint distribution of the value of a stable Lévy process at time $t=1$ and its maximum jump size over $t\le1$. Lévy processes are right continuous with left limits (càdlàg) and have stationary independent increments. Stable Lévy processes are those whose increments have stable distributions.

If $X_i$ each have the $\alpha$-stable distribution denoted by $S(\alpha,\beta,\gamma,\delta)=S(\alpha,\beta,\gamma,\delta;0)$ in the text you link, then the sum $S=\sum_{i=0}^{n-1}X_i$ has the $S(\alpha,\beta,n^{1/\alpha}\gamma,\delta_n)$ distribution where $$ \delta_n=\begin{cases} n\delta + \gamma\beta\tan\frac{\pi\alpha}{2}(n^{1/\alpha}-n),&\textrm{if }\alpha\not=1,\\ n\delta +\gamma\beta\frac{2}{\pi}n\log n,&\textrm{if }\alpha=1. \end{cases}\qquad\qquad{\rm(1)} $$ This is equation (1.9) on page 20 of the linked notes. It will be helpful to rescale the sum to remove the dependence on n. The random variable $Y=n^{-1/\alpha}(S-\delta_n)$ has the $S(\alpha,\beta,\gamma,0)$ distribution (by Proposition 1.16).
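A small sketch of this rescaling, taken straight from equation (1) (S0 parameterization assumed; the helper name is just for illustration):

```python
import numpy as np

def delta_n(n, alpha, beta, gamma, delta):
    """Shift parameter of the n-fold sum, per equation (1) (S0 laws)."""
    if alpha != 1:
        return n * delta + gamma * beta * np.tan(np.pi * alpha / 2) * (n**(1.0 / alpha) - n)
    return n * delta + gamma * beta * (2.0 / np.pi) * n * np.log(n)

# S = X_0 + ... + X_{n-1} is S(alpha, beta, n^{1/alpha}*gamma, delta_n; 0),
# so Y = n**(-1.0/alpha) * (S - delta_n(n, ...)) is S(alpha, beta, gamma, 0; 0).
```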

Now, to relate this to Lévy processes, let $Z_t$ be the Lévy process with $Z_1\sim Y$. We can take $X_i=n^{1/\alpha}(Z_{(i+1)/n}-Z_{i/n})+\delta_n/n$, which will be independent with the required $S(\alpha,\beta,\gamma,\delta)$ distribution. In fact, in this case we have $Z_1=Y$ and $S=n^{1/\alpha}Z_1+\delta_n$. Writing $M=\max_iX_i$ and $\Delta Z^*_t$ for the maximum of $\Delta Z_s=Z(s)-Z(s-)$ over $s\le t$ gives $n^{-1/\alpha}(M-\delta_n/n)\to\Delta Z^*_1$ as $n$ goes to infinity. We can then compute the probability $P$ that you are asking for to leading order as $n$ becomes large, $$ \begin{align} P&\equiv\mathbb{P}\left(M > S-M\right)\\ &=\mathbb{P}\left(2n^{-1/\alpha}(M-\delta_n/n) > n^{-1/\alpha}(S-2\delta_n/n)\right)\\ &\sim\mathbb{P}\left(2\Delta Z^*_1 > Z_1+n^{-1/\alpha}\delta_n(1-2/n)\right) \end{align} $$ So, $$ P\sim\mathbb{P}\left(2\Delta Z^*_1 - Z_1 > n^{-1/\alpha}\delta_n\right) $$ and the leading term in the asymptotics for $P$ is determined by the distribution of $2\Delta Z^*_1-Z_1$.

For $\alpha < 1$ it can be seen from (1) that $n^{-1/\alpha}\delta_n\to\gamma\beta\tan\frac{\pi\alpha}{2}$ as n goes to infinity, so we get a finite and positive limit $$ P\to\mathbb{P}\left(2\Delta Z^*_1-Z_1 > \gamma\beta\tan\frac{\pi\alpha}{2}\right) $$ as $n\to\infty$. The tangent has only appeared in the expression here because of the particular parameterization that we are using. It is a bit simpler if we rewrite this in terms of $\tilde Z_t=Z_t+\gamma\beta t\tan\frac{\pi\alpha}{2}$. Then, for the $\beta=1$ case you are using, $\tilde Z_t$ has support $[0,\infty)$ (Lemma 1.10 in the linked text). This means that $\tilde Z$ is a stable subordinator and the required probability is $$ P\to\mathbb{P}\left(2\Delta\tilde Z^*_1 > \tilde Z_1\right)=\frac{\sin(\pi\alpha)}{\pi\alpha}. $$ [I've been a bit sneaky here, and substituted in the expression that Shai Covo gave in his answer for $\mathbb{P}(2\Delta\tilde Z^*_1 > \tilde Z_1)$ and simplified a bit using Euler's reflection formula.]
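A quick numerical check of that reflection-formula simplification:

```python
import numpy as np
from scipy.special import gamma as Gamma

# Euler's reflection formula: 1/(Gamma(1-a)*Gamma(1+a)) == sin(pi*a)/(pi*a).
for a in (0.25, 0.5, 0.75):
    print(a, 1.0 / (Gamma(1 - a) * Gamma(1 + a)), np.sin(np.pi * a) / (np.pi * a))
# a = 0.5 gives 2/pi ~ 0.6366 either way, matching the first plot at alpha = 1/2.
```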

On the other hand, if $\alpha > 1$ then (1) gives $n^{-1/\alpha}\delta_n\sim n^{1-1/\alpha}(\delta-\gamma\beta\tan\frac{\pi\alpha}{2})$ and, if $\alpha=1$, then $n^{-1/\alpha}\delta_n\sim\gamma\beta\frac{2}{\pi}\log n$. In this case, P tends to zero at a rate dependent on the tail of the distribution function of $2\Delta Z^*_1-Z_1$.

It is also possible to use the Lévy–Khintchine representation to compute the tail of the distribution function of $2\Delta Z^*_1-Z_1$. The jumps of a Lévy process are described by a Lévy measure $\nu$ on the real numbers, so that jumps of size belonging to a set $A$ occur according to a Poisson process of rate $\nu(A)$. An $\alpha$-stable process has Lévy measure $d\nu(x)=c\vert x\vert^{-\alpha-1}\,dx$, where $c$ only depends on the sign of $x$. We can back out the value of $c$ by calculating the characteristic function of $Z_1$ using the Lévy–Khintchine formula (essentially the Fourier transform of $\nu$; see Wikipedia, which uses $W$ for the Lévy measure) and comparing with expression (1.4) from your text. Going through this computation gives $$ d\nu(x)=\pi^{-1}\alpha\gamma^\alpha\sin\frac{\pi\alpha}{2}\Gamma(\alpha)(1+\beta\,{\rm sign}(x))\vert x\vert^{-\alpha-1}\,dx. $$ The number of jumps of size greater than $x$ is Poisson distributed with rate $\nu((x,\infty))$, giving $$ \begin{align}\mathbb{P}(\Delta Z^*_1> x) &= 1-e^{-\nu((x,\infty))}\\ &\sim\nu((x,\infty))\\ &=\pi^{-1}\gamma^\alpha\sin\frac{\pi\alpha}{2}\Gamma(\alpha)(1+\beta)x^{-\alpha}. \end{align}\qquad{\rm(2)} $$

Comparing with your linked text (Theorem 1.12), this is exactly the same as the tail of $Z_1$! That is, $\mathbb{P}(\Delta Z^*_1 > x)\sim\mathbb{P}(Z_1 > x)$. This means that the tail of the distribution is entirely due to the occasional big jump of the Lévy process. This does actually correspond to experience: I have plotted Cauchy process sample paths before, and you do get some paths with a single big jump drowning out all other behaviour. We can also see why this should be the case. The density of the jump distribution is sub-exponential, proportional to $\vert x\vert^{-\alpha-1}$, so one big jump of size $x$ has a much bigger probability than $n$ small jumps of size $x/n$. It also suggests that, in your simulations, the samples contributing to the probability come mainly from the case where all of the $X_i$ are reasonably close to the mean except for one extreme (positive or negative) $X_i$.
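A short sketch of the max-jump law just used: the number of jumps above $x$ on $[0,1]$ is Poisson with mean $\nu((x,\infty))$, so $\mathbb{P}(\Delta Z^*_1 \le x) = e^{-\nu((x,\infty))}$ (helper names illustrative):

```python
import numpy as np
from scipy.special import gamma as Gamma

def nu_tail(x, alpha, beta=1.0, scale=1.0):
    """nu((x, inf)): expected number of jumps of size > x per unit time."""
    return (scale**alpha / np.pi) * np.sin(np.pi * alpha / 2) * Gamma(alpha) \
        * (1 + beta) * x**(-alpha)

def max_jump_sf(x, alpha, beta=1.0, scale=1.0):
    """P(Delta Z*_1 > x) = 1 - exp(-nu((x, inf))) ~ nu((x, inf)) for large x."""
    return 1.0 - np.exp(-nu_tail(x, alpha, beta, scale))

for x in (10.0, 100.0):
    print(x, max_jump_sf(x, 1.5), nu_tail(x, 1.5))  # nearly equal in the tail
```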

So, from the argument above, the leading asymptotic of the probability $P$ comes from the case where $\Delta Z_1^*$ is very large compared to $Z_1-\Delta Z_1^*$, or where the largest negative jump $\Delta(-Z)^*_1$ is large compared to $Z_1+\Delta(-Z)^*_1$. This gives $$ \begin{align} \mathbb{P}(2\Delta Z^*_1-Z_1 > x) &\sim\mathbb{P}(\Delta Z^*_1 > x) +\mathbb{P}(\Delta (-Z)^*_1 > x)\\ &\sim 2\pi^{-1}\gamma^{\alpha}\sin\frac{\pi\alpha}{2}\Gamma(\alpha)x^{-\alpha}, \end{align} $$ where I substituted in (2). Plugging in the asymptotic form for $x=n^{-1/\alpha}\delta_n$ calculated above gives $$ P\sim2\pi^{-1}\gamma^{\alpha}\sin\frac{\pi\alpha}{2}\Gamma(\alpha)\left(\delta-\gamma\beta\tan\frac{\pi\alpha}{2}\right)^{-\alpha}n^{1-\alpha} $$ for $\alpha > 1$. This confirms your $O(1/n^{\alpha-1})$ guess from the numerical simulation! Also, using $x=n^{-1/\alpha}\delta_n\sim\gamma\beta\frac{2}{\pi}\log n$ for $\alpha=1$ gives $$ P\sim\frac{1}{\beta\log n}. $$ Hopefully this is all correct, but I can't rule out making an arithmetic slip above.

Edit: The limit above for $\alpha > 1$ does not look very good. It appears to be slightly above the plot in the original post. However, even for $n=1000$, it should not be very close to the asymptotic limit described. Replacing the asymptotic limit for $\delta_n$ by its exact form effectively replaces the $n^{1-\alpha}$ term for $P$ by $(n^{1-1/\alpha}-1)^{-\alpha}$. For $n=1000$ this approximately doubles the calculated number, so the more accurate value for $\delta_n$ moves you further away from the plot. Something has gone wrong somewhere...
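To quantify the point in the edit: the exact-$\delta_n$ form multiplies the $n^{1-\alpha}$ leading term by $(1 - n^{1/\alpha - 1})^{-\alpha}$, which follows from the ratio of $(n^{1-1/\alpha}-1)^{-\alpha}$ to $n^{1-\alpha}$. A quick evaluation (assuming $\gamma=\beta=1$, $\delta=0$):

```python
import numpy as np

# Finite-n correction factor (1 - n^(1/alpha - 1))^(-alpha) at n = 1000.
n = 1000
for alpha in (1.125, 1.3125, 1.53125, 1.875):
    print(alpha, (1.0 - n**(1.0 / alpha - 1.0))**(-alpha))
# alpha = 1.125 gives ~2.0: the "approximately doubles" mentioned above.
```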

  • 0
    If I understand your answer, you use the largest jump of a stable process up to time $t=1$ (and the process' value at $t=1$) only to approximate the probability $P$ in the title as $n \to \infty$. You partition $[0,1]$ into very small subintervals so the increments correspond to the jump sizes. Am I right?2010-11-29
  • 0
The distribution function of $\Delta Z^*_1$ can be computed as ${\rm P} (\Delta Z^*_1 \leq x) = \exp(-\nu((x,\infty)))$, where $\nu$ is the Lévy measure (this corresponds to having $0$ jumps of size greater than $x$ up to time $t=1$).2010-11-29
  • 0
    @Shai: Indeed. And for $\alpha\ge1$, $\beta < 1$ the asymptotic behaviour will be dominated by the terminal value $Z_1$. For $\beta=1$ it will be dominated by the jump term $\Delta Z^*_1$. This is easily computed.2010-11-29
  • 0
    But I have run out of time, and will have to return to this point later.2010-11-29
  • 0
Rather than considering ${\rm P}(2 \Delta Z^*_1 - Z_1 > 0)$, we can consider the distribution of $\Delta Z^*_1 / Z_1$. This has been studied in the literature with regard to subordinators (increasing Lévy processes) with infinite jump measure. In particular, there are explicit formulas for the cases of strictly stable subordinators and gamma processes. While these formulas are very complicated in general, they are very simple for computing ${\rm P}(\Delta Z^*_1 / Z_1 > 1/2)$. This is just what we need for $\alpha$-stable subordinators, where it equals $1/[\Gamma(1-\alpha)\Gamma(\alpha+1)]$.2010-11-29
  • 0
Looking at the difference is convenient here because subtracting out the biggest jump of a Lévy process and conditioning on the jump size just gives another Lévy process with the jump measure truncated. In the limit as the jump size goes to infinity, $\Delta Z^*_1$ and $Z_1-\Delta Z_1^*$ should become independent, which simplifies things a lot.2010-11-29
  • 0
    I'm talking about the $\alpha\ge1$ case in my previous comment, where it is the tail of the distribution that matters.2010-11-29
  • 0
Many related formulas for the strictly stable subordinator case $(0 < \alpha < 1)$ can be found in the paper "On the extreme flights of one-sided Lévy processes".2010-11-29
  • 0
    @George, @Shai, I find all this very useful but I also find myself out of my depth. I'm in the process of looking at Eliazar and Klafter's paper. Do either of you have any more references that I could read?2010-11-29
  • 0
    @user4143: Below, I only consider the case of an increasing $\alpha$-stable process, $0 < \alpha < 1$ (strictly stable subordinator, a particular case of a subordinator). Then, according to George's explanations, you need ${\rm P}(\Delta Z^*_1 / Z_1 > 1/2)$ in order to approximate the probability you actually want as $n \to \infty$. So, two useful references are: 1) "The two-parameter Poisson–Dirichlet distribution derived from a stable subordinator". You are only interested in the ${\rm PD}(\alpha,0)$ case. 2) "Order statistics for jumps of normalized subordinators" (might not be available).2010-11-29
  • 0
    @user4143: However, if you are only interested in the probability ${\rm P}(\Delta Z^*_1 / Z_1 > 1/2)$ (equivalently, ${\rm P}(2 \Delta Z^*_1 - Z_1 > 0)$), for the case of a strictly stable subordinator ($0 < \alpha < 1$), then, as I already indicated, this probability equals $p = 1/[\Gamma(1-\alpha)\Gamma(\alpha+1)]$, where $\Gamma$ is the gamma function. For example, letting $\alpha = 1/2$ gives $p = 0.6366...$ which agrees very well with your graph. I checked various values of $\alpha$ and everything agrees very well with your graph. So, this seems to confirm George's approximation.2010-11-29
  • 0
    @Shai: actually, my previous comment was a bit misleading. The tail distribution is determined by the case of a single huge jump (positive or negative), so you can read it off the levy measure. I'll add this when I have time to update my answer.2010-11-29
  • 0
    I deleted my two previous comments concerning the (centered) normal case, due to some trivial mistake of mine.2010-11-29
  • 0
@Shai, @George: So it looks like you've converged on the probability being $1/( \Gamma(1-\alpha) \Gamma(1+\alpha) )$ for the case of $0 < \alpha < 1$, but there is still some question as to what happens when $\alpha=1$ and when $1 < \alpha < 2$?2010-11-30
  • 0
    @George, I notice that for $1 < \alpha < 2$, this still has a dependence on $\gamma$ though it appears from numerical simulations that this is not so...can you elaborate on this a bit?2010-11-30
  • 0
    For what it's worth, I did not come up with the $O(n^{1-\alpha})$ estimate just from numerical simulations, it falls out of using the heuristic derivation that I mentioned. I was suspicious of it since the data did not match my prediction. Also, thank you both for your time and I apologize I can't add more to the discussion but I still need some time to digest the answers and references given.2010-11-30
  • 0
    @user4143: I just want to remind that $1/[ \Gamma(1-\alpha) \Gamma(1+\alpha) ]$ can be simplified to $\sin(\pi \alpha)/(\pi \alpha)$.2010-11-30
  • 0
    @user4143: No, there's no dependence on gamma, it drops out of my formula when delta is zero. And my argument looks complete. The rate of convergence to the leading asymptotic could well be an issue, which is why I mentioned using the exact form for $\delta_n$ in the last edit.2010-11-30
  • 0
    @user4143: and $n^{-1/\alpha}\delta_n$ is still quite small even for n=1000. So there is the issue of whether the tail approximation is valid.2010-11-30
  • 0
    @George, @Shai: Thank you. I really appreciate the time and effort that you put into answering this. I'm a programmer, so if I can ever repay the favor in terms of numerical experiments or anything else, let me know.2010-12-01
4

My previous answer was useless. The new answer below completes George's answer for the $0 < \alpha < 1$ case, illustrated in the first graph above.

For this case, George provided the asymptotic approximation $P_n \sim {\rm P}(2 \Delta Z^*_1 - Z_1 > 0)$ as $n \to \infty$, where $P_n$ is the probability the OP asked for, $Z = \lbrace Z_t : t \geq 0 \rbrace$ is a strictly stable Lévy process of index $\alpha$, and $\Delta Z^*_1$ is the largest jump of $Z$ in the time interval $[0,1]$. It is not easy to obtain ${\rm P}(2 \Delta Z^*_1 - Z_1 > 0)$ in closed form, but indeed possible, and in fact the following more general result is known: $$ {\rm P}\bigg(\frac{{\Delta Z_1^* }}{{Z_1 }} > \frac{1}{{1 + y}}\bigg) = \frac{{y^\alpha }}{{\Gamma (1 - \alpha )\Gamma (1 + \alpha )}}, \;\; y \in [0,1], $$ where $\Gamma$ is the gamma function. Letting $y = 1$ thus gives $P_n \sim 1/[\Gamma(1 - \alpha) \Gamma(1 + \alpha)]$. This asymptotic expression agrees very well with the graph above, as I confirmed by checking various values of $\alpha \in (0,1)$. (For example, $\alpha = 1/2$ gives $P_n \sim 2/\pi \approx 0.6366$.)
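A quick evaluation of this closed form (the helper name is just illustrative):

```python
from scipy.special import gamma as Gamma

def p_limit(alpha, y=1.0):
    """P(Delta Z*_1 / Z_1 > 1/(1+y)) = y^alpha / (Gamma(1-alpha)*Gamma(1+alpha))."""
    return y**alpha / (Gamma(1 - alpha) * Gamma(1 + alpha))

print(p_limit(0.5))        # 2/pi ~ 0.63662: the n -> infinity limit at alpha = 1/2
print(p_limit(0.5, 0.5))   # the generalized probability at y = 1/2
```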

A generalization. Instead of just considering the probability ${\rm P}(M > \sum\nolimits_{i = 0}^{n - 1} {X_i } - M)$, we can consider the probability ${\rm P}(M > y^{-1}[\sum\nolimits_{i = 0}^{n - 1} {X_i } - M])$, $y \in (0,1]$. In view of George's answer, this should correspond to the formula given above for ${\rm P}(\Delta Z_1^* / Z_1 > 1/(1+y))$. This can be further generalized, in various directions, based on known results from the literature (three useful references in this context are indicated somewhere in the comments above/below).

EDIT:

As George observed, the term $\Gamma(1 - \alpha) \Gamma(1 + \alpha)$ can be simplified to $(\pi\alpha) / \sin(\pi\alpha)$. The former expression is the $k=1$ case of the terms $[\Gamma(1 - \alpha)]^k \Gamma(1 + k \alpha)$, which are incorporated in the explicit formula for ${\rm P}(\Delta Z_1^* / Z_1 > 1/(1+y))$, $y > 0$, of the form $\sum\nolimits_{k = 1}^{\left\lceil y \right\rceil } {a_k \varphi _k (y)} $. The function $\varphi _k (y)$ is some $(k-1)$-dimensional integral, hence that formula is computationally very expensive already for moderate values of $y$.

0

So if $M=\max(X_0,\ldots,X_{n-1})$, then you know that:

$$\mathbb{P}(M > x)=1-F(x)^n$$

with $F(x)$ the Lévy stable cumulative distribution function. Now, you also have

$$\mathbb{P}\Big(M\geq \sum X_i\Big) = \int \mathbb{P}\Big(M\geq \sum X_i \,\Big|\, \sum X_i = x\Big)\, p(x)\, dx$$

where $p(x)$ is the density function of the sum, which is again Lévy stable (since, by definition, Lévy stable distributions are stable under summation of independent Lévy stable variables).

So it seems to me you have in principle all the ingredients to try some asymptotics, even if you don't have an explicit formula for all the distributions. Maybe you can try with some for which the formula exists, like Cauchy and normal distributions.
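For instance, a quick Monte Carlo along these lines for the Cauchy and standard normal cases (a sketch, not tied to the OP's parameterization):

```python
import numpy as np
from scipy.stats import cauchy, norm

rng = np.random.default_rng(0)
for name, dist in (("cauchy", cauchy), ("normal", norm)):
    # estimate Pr(max > sum of the rest) for n = 100 iid draws
    x = dist.rvs(size=(10_000, 100), random_state=rng)
    m = x.max(axis=1)
    print(name, (m > x.sum(axis=1) - m).mean())
```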

Sorry for the sloppiness, just trying to sketch an approach for now.

0

Some rough idea.

I consider only the special case where the $X_i$ are positive rv's such that, by self-similarity, $X_1 + \cdots + X_{n-1}$ is equal in distribution to $(n-1)^{1/\alpha}X_1$ where $\alpha$ is the associated index. Then, it seems useful to start (similarly to what you did above) with $$ P = n{\rm P}(X_0 > X_1 + \cdots + X_{n-1}), $$ where $P$ is the probability in the title. Then, recalling the self-similarity argument, a straightforward application of the law of total probability (conditioning on $X_0$) yields $$ P = n\int F (s/(n - 1)^{1/\alpha } )f(s){\rm d}s, $$ where $F$ and $f$ are the common distribution and density functions of the $X_i$, respectively. See if you can continue from here.
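For example, for $\alpha = 1/2$ the $X_i$ follow the Lévy distribution, for which scipy provides the closed-form $F$ and $f$; a sketch of evaluating the last integral numerically (some care is needed for large $n$, since the integrand peaks at very large $s$):

```python
import numpy as np
from scipy import integrate
from scipy.stats import levy  # the alpha = 1/2, beta = 1 stable subordinator

def P(n, alpha=0.5):
    scale = (n - 1) ** (1.0 / alpha)
    # n * integral of F(s / (n-1)^{1/alpha}) f(s) ds over (0, inf)
    val, _ = integrate.quad(lambda s: levy.cdf(s / scale) * levy.pdf(s),
                            0.0, np.inf, limit=200)
    return n * val

for n in (10, 100, 1000):
    print(n, P(n))  # should approach sin(pi/2)/(pi/2) = 2/pi ~ 0.6366
```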

  • 0
    This only works when the random variables are nonnegative. And, besides, in that case you could just look at $-X_0+X_1+\cdots+X_{n-1}$, which is also stable.2010-11-28
  • 0
    I stated that I only consider positive rv's (in this revised answer). As for your second comment, it might be more useful to use my approach if, for example, the $X_i$ correspond to the strictly stable subordinator of index $\alpha = 1/2$, where closed form expressions exist (at least for the density).2010-11-28