4
$\begingroup$

I am not good with vector spaces, so I would be grateful for any help.
As I've been told, I need to take $v \in \mathrm{Im}(T)$, $v\neq 0$, and show that if $T(v) = \mathbf{0}$ then $T^2 = 0$; but if $T(v) \neq \mathbf{0}$, I need to show that if $\{e_1,\ldots,e_{n-1}\}$ is a basis of $\mathrm{Ker}(T)$, then $\{e_1,\dots,e_{n-1},v\}$ is a basis of $V$, and then deduce that $T$ is diagonalisable. I don't really know how to do that, though...

4 Answers

3

Suppose that $0\neq v\in \mathrm{Im}(T)$; then $T(v)\in \mathrm{Im}(T)$. Since $\dim(\mathrm{Im}(T))=1$, we have $\mathrm{Im}(T)=\operatorname{span}(v)=F\cdot v$, so there is a $\lambda$ such that $T(v)=\lambda v$.

If $T(v)=0$, then $T^2(V) = T(\mathrm{Im}(T)) = T(F\cdot v)=0$, so $T^2 = 0$. Assume now that $T(v)\neq 0$. Then $\mathrm{Im}(T) \cap \ker(T) = \{0\}$, so a basis for the kernel (which has dimension $n-1$) is a set of eigenvectors with eigenvalue $0$, and $v$ is the $n$-th eigenvector, with eigenvalue $\lambda\neq 0$.
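This dichotomy is easy to check numerically. Here is a minimal sketch in plain Python (the operator and the vectors are hypothetical examples of my own choosing): any rank-one $T$ can be written as $T = uv^{\mathsf{T}}$, so $T(u) = (v\cdot u)\,u$, and $\lambda = v\cdot u$ decides which case occurs.

```python
# Any rank-one operator can be written T = u v^T, i.e. T(x) = (v.x) u,
# so Im(T) = span(u) and T(u) = (v.u) u: lam = v.u decides the dichotomy.

def outer(u, v):
    return [[ui * vj for vj in v] for ui in u]

def matmul(A, B):
    n = len(A)
    return [[sum(A[i][k] * B[k][j] for k in range(n)) for j in range(n)] for i in range(n)]

def apply(A, x):
    return [sum(A[i][k] * x[k] for k in range(len(x))) for i in range(len(A))]

u = [1.0, 2.0, 0.0]

# Case 1: v.u = 0, so T(u) = 0 and T^2 = 0.
T1 = outer(u, [0.0, 0.0, 3.0])
assert matmul(T1, T1) == [[0.0] * 3 for _ in range(3)]

# Case 2: v.u = 3 != 0: u is an eigenvector with eigenvalue 3,
# and the 2-dimensional kernel {x : v.x = 0} supplies the eigenvalue-0 eigenvectors.
T2 = outer(u, [1.0, 1.0, 0.0])
assert apply(T2, u) == [3.0, 6.0, 0.0]                 # T(u) = 3u
assert apply(T2, [1.0, -1.0, 0.0]) == [0.0, 0.0, 0.0]  # kernel vector
assert apply(T2, [0.0, 0.0, 1.0]) == [0.0, 0.0, 0.0]   # kernel vector
```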

  • 0
    "$v$ is the $n$-th eigenvector of eigenvalue $\lambda \neq 0$": this is false. Nilpotent operators have only zero eigenvalues. If there were an eigenvector with non-zero eigenvalue, it would surely stabilize under the action of $T$ (as in the first paragraph), and $T^n$ wouldn't be zero for any $n$. (2010-11-12)
  • 0
    In this case it is true, since the operator isn't nilpotent: the assumption is that $T(v)\neq 0$, so $\lambda \neq 0$ and then $T^n(v) = \lambda^n v \neq 0$. (2010-11-12)
  • 0
    Ah, I got confused because you talk about the non-nilpotent case in the first paragraph, then about the nilpotent case at the beginning of the second paragraph, and then immediately you switch to the non-nilpotent case again. A strange order indeed. (2010-11-12)
  • 0
    Actually the first paragraph is just the general case for a 1-dimensional image: $\lambda$ there can be any number. (2010-11-12)
  • 0
    @Prometheus: Why is it that if $\{e_1,\ldots,e_{n-1},v\}$ is a basis of $V$ then $T$ is diagonalisable? (2010-11-12)
  • 0
    @Math student: because another definition of $T$ being diagonalizable is that $V$ has a basis of eigenvectors of $T$: the $e_i$ have eigenvalue $0$ (because they are in the kernel) and $v$ has eigenvalue $\lambda\neq 0$. (2010-11-12)
  • 0
    @Prometheus: I see. Thank you! (2010-11-12)
2

Prometheus' answer is essentially correct. I'd just like to point something out to you so that you can gain a little intuition.

We know that the image of $T$ is one-dimensional, which means that there exists an $(n-1)$-dimensional subspace $W \subset V$ such that $T(W) = 0$ (imagine this in three dimensions as a plane being sent to zero). Now pick any $u \in V$ not lying in $W$. We have $T(u) = v \neq 0$ (because otherwise the rank of $T$ would be zero). So what can we say about $v$? Either it lies in $W$, and then any $u' \in V$ decomposes as $u' = \alpha u + w'$ for some scalar $\alpha$ and some $w' \in W$, so $T^2(u') = \alpha T^2(u) + T^2(w') = \alpha T(v) + 0 = 0$ (because both $v$ and $w'$ are in $W$); or it doesn't.

So assume that $v \notin W$. Then $v = T(u) \in \mathrm{Im}(T)$, and since the image is one-dimensional it is spanned by $v$; in particular $T(v) = \lambda v$ for some $\lambda$. Therefore we can write $T = \lambda E_v \oplus 0_W$, which is the desired diagonalisation.

Try thinking about it in two dimensions and matrices and convince yourself that this problem is all about this: $$\pmatrix{0&1\cr0&0} \quad \pmatrix{\lambda & 0\cr 0 &0}$$
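A quick sanity check of these two model matrices, in plain Python (the helper function and the sample value $\lambda = 5$ are my own choices):

```python
# Multiply two 2x2 matrices given as nested lists.
def matmul2(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(2)) for j in range(2)] for i in range(2)]

# The nilpotent model: rank 1 and T^2 = 0.
N = [[0.0, 1.0], [0.0, 0.0]]
assert matmul2(N, N) == [[0.0, 0.0], [0.0, 0.0]]

# The diagonalizable model: rank 1, eigenvalues lam and 0.
lam = 5.0
D = [[lam, 0.0], [0.0, 0.0]]
assert matmul2(D, D) == [[lam * lam, 0.0], [0.0, 0.0]]  # D^2 = lam * D, not zero
```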

  • 0
    Thank you for the explanation! (2010-11-12)
1

First, if the rank of $T$ is $1$, then the dimension of its kernel is $n-1$. That means $0$ is an eigenvalue with geometric multiplicity $n-1$.

Now, as Prometheus said, you just have to find out whether you will find that last eigenvalue or not. To do so, take a vector $v \notin \mathrm{Ker}(T)$ and look at $T(v)$. One possibility is that you again fall on $0$. What does that mean? You have taken a vector from the only dimension not belonging to the kernel of $T$, which means it belongs to $\mathrm{Im}(T)$. Any other vector there can be obtained by multiplying $v$ by some coefficient $t \in \mathbb{R}$ (or $\mathbb{C}$, ...). Since $T$ is linear, the same goes for the image of $v$.

So if $T(v) = 0$, all the vectors from that dimension fall on $0$ when going through $T$: for any vector $u \in \mathrm{Im}(T)$, $T(u) = 0$. That means that for any vector $v \in V$, $T(T(v)) = 0$, that is, $T^2(v) = 0$, so $T^2 = 0$.

Now if $T(v)$ isn't zero, there is some vector $u \in V$ (different from $0$) such that $T(v) = u$. Thus, for any $t \in \mathbb{R}$, $T(t \cdot v) = t \cdot T(v) = t \cdot u$. And $u$ is in $\mathrm{Im}(T)$, thus collinear to $v$. So $v$ is an eigenvector of $T$. Taking a basis of $\mathrm{Ker}(T)$ and adding $v$ to it gives you a basis of $V$ in which $T$ is diagonal.
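The last step, that $T$ is diagonal in the basis "kernel basis plus $v$", can be sketched concretely. In the plain-Python check below, the $3\times 3$ matrix is a hypothetical example of my own, with kernel $\{x : x_1 + x_2 = 0\}$; diagonal-in-this-basis just means $T$ scales each basis vector.

```python
# Apply a matrix (nested lists) to a vector.
def apply(A, x):
    return [sum(A[i][k] * x[k] for k in range(len(x))) for i in range(len(A))]

# Hypothetical rank-1 operator on R^3; Ker(T) = {x : x1 + x2 = 0}.
T = [[1.0, 1.0, 0.0],
     [2.0, 2.0, 0.0],
     [0.0, 0.0, 0.0]]

# Basis: two kernel vectors plus the eigenvector v = T(e1) = (1, 2, 0).
basis = [[1.0, -1.0, 0.0], [0.0, 0.0, 1.0], [1.0, 2.0, 0.0]]
scales = [0.0, 0.0, 3.0]  # so the matrix of T in this basis is diag(0, 0, 3)
for b, s in zip(basis, scales):
    assert apply(T, b) == [s * bi for bi in b]
```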

  • 0
    Just FYI: vector spaces don't have to be over $\mathbb{R}$ or $\mathbb{C}$. Any field will do just fine. (2010-11-12)
  • 0
    I know, but given the level of the question, I decided to stick to them in my answer. (2010-11-12)
  • 0
    This is helpful, thank you. (2010-11-12)
  • 1
    $v\notin \ker(T)$ doesn't mean that $v \in \mathrm{Im}(T)$, though you can find a vector satisfying this if $T^2 \neq 0$. (2010-11-12)
0

EDITED after Prometheus's comment to use the characterization using sum of dimension of eigenspaces instead of counting eigenvalues. This argument should be correct.

Obviously $\dim V = n \geq 1$. Consider $T$'s restriction to its image. This is a linear map on a 1-dimensional space, so we can write $T\vert_{T(V)}(v) = cv$ for some $c \in k$. There are now two possibilities. Either $c = 0$ or $c \neq 0$.

  • $c=0$: Then $T^2 = 0$ (why?) and all of $T$'s eigenvalues are $0$ since we must have $0 = T(T(v)) =T(\lambda v) = \lambda T(v) = \lambda^2 v$ for any eigenvalue $\lambda$ and any eigenvector $v$. We conclude that $T$ can't be diagonalizable since all of its eigenvalues are $0$, but the eigenspace corresponding to $0$ is $\ker T$ which has dimension $n-1$.
  • $c \neq 0$: First note that clearly $T^2 \neq 0$. We show that the sum of the dimensions of $T$'s eigenspaces is $n$, so $T$ must be diagonalizable [Wikipedia]. And this is very easy: $T(V)$ is a 1-dimensional eigenspace with eigenvalue $c$, and $\ker T$ is an $(n-1)$-dimensional eigenspace with eigenvalue $0$. Thus the sum of the dimensions of the eigenspaces is at least $n$; and it can't be greater than $n$, because eigenvectors belonging to distinct eigenvalues are linearly independent, so it is exactly $n$.
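The dimension count above can be checked on a small example. The sketch below (plain Python with exact arithmetic via `fractions`; the $3\times 3$ matrix and the helper `rank` are my own hypothetical choices) computes $\dim\ker T = n - \operatorname{rank} T$ and $\dim\ker(T - cI) = n - \operatorname{rank}(T - cI)$ and verifies that the eigenspace dimensions sum to $n$:

```python
from fractions import Fraction

# Rank via Gauss-Jordan elimination over the rationals.
def rank(M):
    A = [[Fraction(x) for x in row] for row in M]
    r = 0
    for col in range(len(A[0])):
        piv = next((i for i in range(r, len(A)) if A[i][col] != 0), None)
        if piv is None:
            continue
        A[r], A[piv] = A[piv], A[r]
        for i in range(len(A)):
            if i != r and A[i][col] != 0:
                f = A[i][col] / A[r][col]
                A[i] = [a - f * b for a, b in zip(A[i], A[r])]
        r += 1
    return r

n = 3
T = [[1, 1, 0], [2, 2, 0], [0, 0, 0]]  # rank 1; for a rank-1 matrix, c = trace = 3
c = 3

assert rank(T) == 1                    # so dim ker(T) = n - 1 = 2
TcI = [[T[i][j] - (c if i == j else 0) for j in range(n)] for i in range(n)]
assert rank(TcI) == n - 1              # so the eigenspace of c has dimension 1
# (n - rank(T)) + (n - rank(T - cI)) = 2 + 1 = n, so T is diagonalizable.
```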
  • 0
    Diagonalizable doesn't mean $n$ distinct eigenvalues. It means (from the link you gave) that the sum of the dimensions of its eigenspaces is equal to $n$. (2010-11-12)
  • 0
    D'oh. You're absolutely right... Man, my linear algebra is rusty :( (2010-11-12)