3

$F_Y(w) \ge F_X(w)$ for all $w$. Prove that $P\{Y \lt X\} \ge 0.5$.

I had this question on the exam and boy did it stump me. It was simple to understand why it was true, but I tried several different paths and got nowhere.

Any ideas?

  • 0
    What distribution are we talking about here? (2010-10-03)
  • 0
    Are $X$ and $Y$ independent? (2010-10-03)
  • 0
    Any distribution, and I believe they were independent, if it matters. (2010-10-03)

3 Answers

3

Here's a rough sketch of one approach (assuming $X$ and $Y$ are independent and have continuous distributions):

\begin{align} \mathbb{P}[Y\lt X] &= \int_{-\infty}^{\infty} \mathbb{P}[Y\lt X|X=x] \cdot f_X(x) \,dx \\ &= \int_{-\infty}^{\infty} F_Y(x) \cdot f_X(x) \,dx \\ &\geq \int_{-\infty}^{\infty} F_X(x) \cdot f_X(x) \,dx \\ &= \left[ \frac{1}{2} \left(F_X(x)\right)^2 \right]_{-\infty}^{\infty} \\ &= \frac{1}{2}. \end{align}
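As a sanity check (my addition, not part of the original answer), one can simulate a concrete pair for which the hypothesis holds: with $X \sim \mathrm{Exp}(\text{rate }1)$ and $Y \sim \mathrm{Exp}(\text{rate }2)$ independent, $F_Y(w) \ge F_X(w)$ for every $w$, and the exact value of $P(Y \lt X)$ is $2/3 \ge 1/2$. A minimal sketch:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 1_000_000
x = rng.exponential(scale=1.0, size=n)   # X ~ Exp(rate 1): F_X(w) = 1 - exp(-w)
y = rng.exponential(scale=0.5, size=n)   # Y ~ Exp(rate 2): F_Y(w) = 1 - exp(-2w) >= F_X(w)

# Estimate P(Y < X); the exact value for these rates is 2/(1+2) = 2/3.
print((y < x).mean())
```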

1

For definiteness, let us assume that $X$ and $Y$ are positive random variables taking values in $[0, \infty)$, and $F_Y(w)\le F_X(w)$, where $F_X, F_Y$ are the CDFs and equality holds only at zero and in the limit $w \rightarrow \infty$. For simplicity, let us assume further that both CDFs are strictly increasing. The latter condition implies that we can invert the x-CDF and substitute into the y-CDF to obtain the function $F_Y(F_X)$. In this function, both the independent variable ($F_X$) and the dependent variable ($F_Y$) range over $[0, 1]$, and the value of $F_Y$ is always less than $F_X$ (except at 0 and 1). (This is easy to see if one plots the x-CDF and y-CDF on the same axes: at every value of the x-CDF, the y-CDF lies to the left of the x-CDF.)

Now, $P(Y\lt X)$ is equal to the area under the graph of $F_Y(F_X)$, which is less than the area of a right isosceles triangle with unit legs, namely 0.5.

Edit: Here is the proof (as asked by Eruditass) that the required probability can be computed from the area under the graph of $F_Y$ as a function of $F_X$.

Using the convolution formula (valid here since $X$ and $Y$ are independent) to compute the probability density function of $Y-X$:

$f_{Y-X}(w) = \int_{0}^{\infty}f_X(x) f_Y(w+x) dx$

In terms of this density, the required probability is given by:

$P(Y < X) = P(Y -X<0) = \int_{-\infty}^{0} f_{Y-X}(w) dw$

Substituting the convolution formula:

$P(Y < X) = \int_{-\infty}^{0} dw \int_{0}^{\infty} dx f_X(x) f_Y(w+x) $

Changing the order of integration (justified since the integrand is nonnegative) and performing the change of variables $u = w + x$:

$P(Y < X) = \int_{0}^{\infty} dx f_X(x) \int_{-\infty}^{x} du f_Y(u) $

Using the definition of the cumulative distribution function, the inner integral is just the Y-CDF:

$P(Y < X) = \int_{0}^{\infty} dx f_X(x) F_Y(x) $

Again using the definition of the CDF:

$f_X(x) dx = dF_X(x) $

Changing the integration variable from $x$ to $F_X$, we obtain the final result:

$P(Y < X) = \int_{0}^{1} dF_X F_Y$
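As a quick numerical check of this identity (my own addition, not from the answer), one can evaluate $\int_{0}^{1} F_Y \, dF_X$ for an assumed concrete pair, say independent $X \sim \mathrm{Exp}(\text{rate }1)$ and $Y \sim \mathrm{Exp}(\text{rate }2)$, where the exact probability $P(Y \lt X)$ is $2/3$:

```python
import numpy as np

n = 1_000_000
t = (np.arange(n) + 0.5) / n     # midpoint grid on (0, 1), i.e. values of F_X
x = -np.log1p(-t)                # x = F_X^{-1}(t) for X ~ Exp(rate 1)
F_Y = 1.0 - np.exp(-2.0 * x)     # F_Y(x) for Y ~ Exp(rate 2)

# Midpoint rule for the integral of F_Y dF_X over [0, 1]; exact value is 2/3.
print(F_Y.mean())
```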

  • 0
    Your conclusion is the opposite of the desired one. I think you mean that the value of $F_Y$ is always *greater* than $F_X$. Then the graph of $F_Y$ as a function of $F_X$ will lie *above* the line $F_Y=F_X$, and the area (which is $P(Y\lt X)$) will be at least $\frac12$. (2010-10-03)
  • 0
    No, for a given $w$, $F_Y$ is greater than $F_X$ as given in the question, but, for a given $F_X$, $F_Y$ is smaller than $F_X$; this can be seen easily when you plot both CDFs on the same axes. (2010-10-03)
  • 0
    It's for all $w$, and yes, we wanted $P(Y \lt X) \ge 0.5$ but $F_Y(w) \ge F_X(w)$, so it should logically work itself out. Why is $P(Y \lt X)$ equal to the area under $F_Y(F_X)$? Why not $F_Y(f_X)$? And what do the functions mean without something feeding in, like $(w)$? (2010-10-03)
1

The assertion is false, even for independent random variables $X$ and $Y$. Fix $u$ in $(0,\frac12)$ and consider i.i.d. Bernoulli random variables $X$ and $Y$ with $$P(X=0)=P(Y=0)=1-u,\quad P(X=1)=P(Y=1)=u. $$ Then $[Y\lt X]=[Y=0,\,X=1]$, hence $P(Y\lt X)=u(1-u)\lt\frac12$, even though $F_Y(w)\ge F_X(w)$ for every $w$.
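A quick numeric check of this counterexample (added here, not part of the original answer): simulating i.i.d. Bernoulli($u$) variables gives $P(Y \lt X) \approx u(1-u)$, well below $\frac12$.

```python
import numpy as np

rng = np.random.default_rng(1)
u, n = 0.3, 1_000_000
x = rng.random(n) < u                # X ~ Bernoulli(u)
y = rng.random(n) < u                # Y ~ Bernoulli(u), independent of X

# P(Y < X) is the chance that Y = 0 and X = 1, i.e. u(1 - u).
print((y < x).mean(), u * (1 - u))   # both ~0.21, well below 0.5
```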

Let us now consider the probability of the event $[Y\le X]$. Here, too, the independence hypothesis is crucial. To see this, consider $u$ in $(0,1)$, $X$ uniformly distributed on the interval $(u,1+u)$ and $Y$ uniformly distributed on the interval $(0,1)$. Then $F_Y(x)\ge F_X(x)$ for every $x$. One can realize $(X,Y)$ in at least three ways.

(1) If $Y=X-u$, then $P(Y\le X)=1$ hence $P(Y\le X)\ge\frac12$.

(2) If $(X,Y)$ are independent, $P(Y\le X)=\frac12+u-\frac12u^2$ hence $P(Y\le X)\ge\frac12$.

(3) If $Y=\varphi(X)$ with $\varphi(x)=x+u$ if $x\le1-u$ and $\varphi(x)=x+u-1$ otherwise, one can check that $Y$ is uniformly distributed on $(0,1)$ and that $[Y\le X]=[X>1-u]$ hence $P(Y\le X)=2u$ and $P(Y\le X)$ is as close to $0$ as desired.

This shows that the result $P(Y\le X)\ge\frac12$ cannot hold in full generality.
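As a sketch (my addition, not part of the original answer), the three couplings above can be simulated directly; all use the same assumed marginals $X \sim U(u, 1+u)$ and $Y \sim U(0,1)$, yet give different values of $P(Y \le X)$:

```python
import numpy as np

rng = np.random.default_rng(2)
u, n = 0.1, 1_000_000
x = rng.uniform(u, 1 + u, size=n)                  # X ~ U(u, 1+u)

y1 = x - u                                         # coupling (1): Y = X - u
y2 = rng.uniform(0.0, 1.0, size=n)                 # coupling (2): Y independent of X
y3 = np.where(x <= 1 - u, x + u, x + u - 1)        # coupling (3): Y = phi(X)

# Each Y is U(0, 1), but P(Y <= X) differs: 1, 1/2 + u - u^2/2, and 2u.
for y, exact in [(y1, 1.0), (y2, 0.5 + u - 0.5 * u**2), (y3, 2 * u)]:
    print((y <= x).mean(), exact)
```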

Finally, for independent $X$ and $Y$, indeed $P(Y\le X)\ge\frac12$.

A proof which does not assume the existence of densities is as follows. Note that $$P(Y\le X)=E(P(Y\le X|X))=E(F_Y(X))\ge E(F_X(X))=P(X'\le X), $$ where $X'$ and $X$ are i.i.d. Now, by symmetry $[X'\le X]$ and $[X\le X']$ have the same probability and $P(X'\le X)+P(X\le X')=1+P(X=X')\ge1$ hence $P(X'\le X)\ge\frac12$ and we are done.
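A small numerical illustration of the final symmetry step (my addition): for $X, X'$ i.i.d. Bernoulli($u$), which have atoms, $P(X' \lt X) = u(1-u) \lt \frac12$ while $P(X' \le X) = 1 - u(1-u) \ge \frac12$, consistent with $P(X'\le X)+P(X\le X')=1+P(X=X')$.

```python
import numpy as np

rng = np.random.default_rng(3)
u, n = 0.3, 1_000_000
x  = rng.random(n) < u                    # X  ~ Bernoulli(u)
xp = rng.random(n) < u                    # X' ~ Bernoulli(u), independent copy

# Strict vs. non-strict comparison: u(1-u) vs. 1 - u(1-u).
print((xp < x).mean(), (xp <= x).mean())  # ~0.21 and ~0.79
```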