
Let $X_1, X_2$ be independent random variables with $P[X_i = 0] = P[X_i = 1] = \frac{1}{2}$ for $i = 1, 2$. What does the set $A_1 = \{X_1 = X_2\}$ mean? I came across this notation in a book but am not sure what it means.

3 Answers


Although this question may look very simple, fully understanding it is not. For the benefit of a general audience, I give the following detailed answer.

In general, if $X_1$ and $X_2$ are any two random variables on a common probability space $(\Omega,\mathcal{F},{\rm P})$, then $\{X_1 = X_2\}$ stands as shorthand for the event $\{\omega \in \Omega : X_1(\omega) = X_2(\omega)\}$.

For your specific example, we can define $(\Omega,\mathcal{F},{\rm P})$ as follows: $\Omega = \{(0,0),(0,1),(1,0),(1,1)\}$, $\mathcal{F}=2^\Omega$ (the power set of $\Omega$), and, for any $\omega = (i,j) \in \Omega$, ${\rm P}(\{\omega\}) = 1/4$. Note that, by additivity, this determines the probability measure ${\rm P}$ on all of $\mathcal{F}$; for example, $$ {\rm P}(\{(0,0),(1,0),(1,1)\}) = {\rm P}(\{(0,0)\}) + {\rm P}(\{(1,0)\}) + {\rm P}(\{(1,1)\}) = 3/4. $$

Now we can define the random variables $X_1$ and $X_2$: for any $\omega = (i,j) \in \Omega$, let $X_1(\omega) = i$ and $X_2(\omega) = j$. Thus $$ {\rm P}(X_1 = 0) := {\rm P}(\{\omega \in \Omega : X_1(\omega) = 0\}) = {\rm P}(\{(0,0),(0,1)\}) = 1/2 $$ and $$ {\rm P}(X_1 = 1) := {\rm P}(\{\omega \in \Omega : X_1(\omega) = 1\}) = {\rm P}(\{(1,0),(1,1)\}) = 1/2, $$ and analogously for $X_2$.

As for the event in question, $$ \{X_1 = X_2\} := \{\omega \in \Omega : X_1(\omega) = X_2(\omega)\} = \{(0,0),(1,1)\}, $$ and so $$ {\rm P}(X_1 = X_2) := {\rm P}(\{\omega \in \Omega : X_1(\omega) = X_2(\omega)\}) = {\rm P}(\{(0,0),(1,1)\}) = 1/2. $$

Finally, let us check that $X_1$ and $X_2$ are indeed independent (this should be clear from our construction). This amounts to showing that, for any $i,j \in \{0,1\}$, $$ {\rm P}(X_1 = i, X_2 = j) = {\rm P}(X_1 = i)\,{\rm P}(X_2 = j), $$ that is, $$ {\rm P}(\{\omega \in \Omega : X_1(\omega) = i,\ X_2(\omega) = j\}) = {\rm P}(\{\omega \in \Omega : X_1(\omega) = i\})\,{\rm P}(\{\omega \in \Omega : X_2(\omega) = j\}). $$ Indeed, both sides are equal to $1/4$.
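To see this construction concretely, here is a minimal sketch in Python (the variable and function names are my own, not from any book) that enumerates the four-point sample space and reproduces the computations above:

```python
from itertools import product

# Sample space: all pairs (i, j) with i, j in {0, 1}, each with probability 1/4.
omega = list(product([0, 1], repeat=2))
prob = {w: 1 / 4 for w in omega}

def X1(w):
    return w[0]  # X1((i, j)) = i

def X2(w):
    return w[1]  # X2((i, j)) = j

def P(event):
    """Probability of an event, given as a set of outcomes."""
    return sum(prob[w] for w in event)

# The event {X1 = X2} = {w in Omega : X1(w) == X2(w)}.
A1 = {w for w in omega if X1(w) == X2(w)}
print(sorted(A1))  # [(0, 0), (1, 1)]
print(P(A1))       # 0.5

# Independence: P(X1 = i, X2 = j) == P(X1 = i) * P(X2 = j) for all i, j.
for i, j in product([0, 1], repeat=2):
    joint = P({w for w in omega if X1(w) == i and X2(w) == j})
    margs = P({w for w in omega if X1(w) == i}) * P({w for w in omega if X2(w) == j})
    assert joint == margs == 1 / 4
```

Representing an event literally as a set of outcomes mirrors the definitions above, so each `P(...)` call is exactly the additivity computation from the answer.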

  • It is also interesting to see that if we introduce a third independent variable $X_3$ with the same distribution, together with the additional events $A_2 = \{X_2 = X_3\}$ and $A_3 = \{X_3 = X_1\}$, then $A_1$, $A_2$, and $A_3$ are pairwise independent but not mutually independent (see the sketch below). – 2010-12-09
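To make the comment's claim checkable, here is a small sketch (assuming, as the comment implicitly does, that $X_3$ is a third independent fair 0/1 variable) verifying pairwise but not mutual independence:

```python
from itertools import combinations, product

# Sample space for three independent fair 0/1 variables: 8 equally likely triples.
omega = list(product([0, 1], repeat=3))

def P(event):
    return sum(1 for w in omega if w in event) / len(omega)

A1 = {w for w in omega if w[0] == w[1]}  # {X1 = X2}
A2 = {w for w in omega if w[1] == w[2]}  # {X2 = X3}
A3 = {w for w in omega if w[2] == w[0]}  # {X3 = X1}

# Pairwise independence: P(Ai ∩ Aj) == P(Ai) * P(Aj) == 1/4 for each pair.
for A, B in combinations([A1, A2, A3], 2):
    assert P(A & B) == P(A) * P(B) == 1 / 4

# But not mutually independent: any two of the events force the third,
# so P(A1 ∩ A2 ∩ A3) = 1/4, whereas P(A1) * P(A2) * P(A3) = 1/8.
print(P(A1 & A2 & A3))        # 0.25
print(P(A1) * P(A2) * P(A3))  # 0.125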
```

To add to Shai Covo's answer, it's worth noting that the convention of writing something like $\{X_1 = X_2\}$ for $\{\omega \in \Omega : X_1(\omega) = X_2(\omega)\}$, essentially "suppressing the $\omega$s", is very common in probability, and you will see it a lot as you continue your studies. In general, any "statement" written inside braces should be interpreted as the set of all $\omega$ for which the statement (usually something in terms of random variables, which are really functions of $\omega$) is true. Another common example is $\{X_n \to X\}$, the set of all $\omega$ for which $X_n(\omega)$ converges to $X(\omega)$. At first you may find it helpful to "fill in the $\omega$s" when reading.
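As a worked instance of filling in the $\omega$s, the convergence event unpacks into countable set operations on simpler events; this standard expansion is also how one checks that $\{X_n \to X\}$ really is an event (i.e., measurable): $$ \{X_n \to X\} = \{\omega \in \Omega : X_n(\omega) \to X(\omega)\} = \bigcap_{k \geq 1} \bigcup_{N \geq 1} \bigcap_{n \geq N} \left\{ |X_n - X| < \tfrac{1}{k} \right\}. $$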

However, for intuition, such expressions are often easy to understand directly. If you are doing an experiment where you flip two coins, with heads and tails labeled 0 and 1, $\{X_1 = X_2\}$ is just the event that the two coins come up the same.
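If a numerical check helps, here is a minimal Monte Carlo sketch (in Python; the seed and trial count are arbitrary choices of mine) estimating $P(X_1 = X_2)$ by flipping two fair coins repeatedly:

```python
import random

random.seed(0)      # arbitrary seed, for reproducibility
trials = 100_000    # arbitrary sample size
# Count the trials in which two independent fair 0/1 flips agree.
same = sum(random.randint(0, 1) == random.randint(0, 1) for _ in range(trials))
print(same / trials)  # should be close to 0.5
```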


It's the event that $X_1 = X_2$, i.e., that either $X_1 = 0, X_2 = 0$ or $X_1 = 1, X_2 = 1$.