So first of all, this WAS homework. I submitted it about 3 hours ago.

Let $A$ be an $m \times m$ matrix. Show that if $\operatorname{rank} A = m$, then $Ax = 0$ has a unique solution.

My roommate said this is wrong, but here is how I solved it. Very simply, I stated that if the rank of $A$ is $m$, that means the number of leading ones is the same as the number of columns. Hence there exists a determinant. According to theorems, if a matrix has a determinant then it has an inverse.

So keeping that in mind, we do this:

$A \cdot A^{-1}x=0\cdot A^{-1}$ <--- multiplying both sides by the inverse.

$Ix = 0$, where $I$ is the identity matrix, hence showing that if $\operatorname{rank} A = m$ then $Ax = 0$ has a unique solution.
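
As a quick numerical sanity check (a sketch in Python with numpy; the matrix below is just a hypothetical full-rank example, not from the homework):

```python
import numpy as np

# Hypothetical 3x3 full-rank example (m = 3).
A = np.array([[2.0, 1.0, 0.0],
              [0.0, 1.0, 3.0],
              [1.0, 0.0, 1.0]])
m = A.shape[0]

print(np.linalg.matrix_rank(A))  # 3, i.e. rank A = m
print(np.linalg.det(A))          # nonzero (here 5), so A is invertible

# Solving Ax = 0 should give only the zero vector.
print(np.linalg.solve(A, np.zeros(m)))  # [0. 0. 0.]
```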

So is this completely wrong? Is there an alternative?

  • 1
    Every square matrix has a determinant; «having a determinant» does not imply invertibility. Apart from this, yes, you have proved that the only solution to the equation $Ax=0$ when $A$ is an $m\times m$ matrix of rank $m$ is the zero vector. (2010-09-21)

3 Answers

1

Yes, the rank is the number of pivots in Gaussian elimination. But you must explicitly state how you use this to conclude that the determinant is nonzero: viz., by pigeonhole, all diagonal elements of the reduced form must be $1$, not $0$. For further insight, see the Wikipedia page on invertible matrices for at least 17 properties that are equivalent to $\rm A$ being invertible.
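
To make the pivot count concrete, here is a small sketch (my own illustration using sympy's `rref`; the matrix is a hypothetical example):

```python
import sympy as sp

# Hypothetical 3x3 full-rank example: every column gets a pivot.
A = sp.Matrix([[2, 1, 0],
               [0, 1, 3],
               [1, 0, 1]])

rref_form, pivot_cols = A.rref()  # reduced row-echelon form + pivot columns
print(pivot_cols)   # (0, 1, 2): one pivot per column, so rank A = 3
print(rref_form)    # the 3x3 identity: every leading 1 sits on the diagonal
print(A.det())      # 5, nonzero, so A is invertible
```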

Here's one alternative solution: since the matrix has full column rank, the columns are linearly independent. Therefore every vector in their span, in particular $\mathbf 0$, has a unique representation as a linear combination of the columns, so the solution is necessarily unique. Said slightly more generally: $A$ has full column rank $\iff$ $A$ is injective (1-1) as a linear map $\iff$ $\ker A = 0$.
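
Here is a minimal numerical illustration of the kernel criterion (my own sketch; sympy's `nullspace` returns a basis of $\ker A$):

```python
import sympy as sp

A = sp.Matrix([[2, 1, 0],
               [0, 1, 3],
               [1, 0, 1]])     # hypothetical full-rank example
print(A.rank())                # 3: full column rank
print(A.nullspace())           # []: ker A = {0}, so A is injective

B = sp.Matrix([[1, 2],
               [2, 4]])        # rank 1: columns are dependent
print(B.nullspace())           # [Matrix([[-2], [1]])]: nonzero kernel, so Bx = 0 has many solutions
```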

5

Your reasoning is not quite right. Every square matrix has a determinant, regardless of whether it has an inverse or not.

The correct statement is: A square matrix has an inverse iff its determinant is non-zero.

I think the easiest way to solve your problem is this:

Let $A=[a_1|\ldots|a_m]$, i.e. $a_i$ is the $i^{\text{th}}$ column of $A$.

[Recall: $\text{rank}(A) = \text{dim}(\text{Span}(a_1,\ldots,a_m))$]

$\text{rank}(A)=m$ iff $\text{Span}(a_1,\ldots,a_m)=\mathbb{R}^m$. This comes from the definition of rank.

We have $m$ vectors $a_1,\ldots,a_m$ that span the $m$-dimensional space $\mathbb{R}^m$, and thus $a_1,\ldots,a_m$ are linearly independent.

$0=Ax=x_1a_1+\ldots + x_ma_m$

and by the definition of linear independence, $x_i=0$ for $1\leq i\leq m$. Hence the zero vector is the unique solution.
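
Numerically, this column picture is just the identity $Ax = x_1a_1+\ldots+x_ma_m$; a quick check (my own sketch with numpy, using a hypothetical matrix):

```python
import numpy as np

A = np.array([[2.0, 1.0, 0.0],
              [0.0, 1.0, 3.0],
              [1.0, 0.0, 1.0]])   # hypothetical; columns are a_1, a_2, a_3
x = np.array([1.0, -2.0, 4.0])

# A @ x is exactly the linear combination x_1*a_1 + x_2*a_2 + x_3*a_3
combo = sum(x[i] * A[:, i] for i in range(3))
print(np.allclose(A @ x, combo))  # True
```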

3

There are several misstatements and one error in your solution, though the idea is essentially correct.

First, it is false that if the matrix has rank $m$ then "the number of leading ones is the same as the number of columns." The matrix $A$ may have no "leading ones" whatsoever. What you meant, presumably, is that the *row-echelon form* (or the reduced row-echelon form) of $A$ will have as many leading $1$s as there are columns.

Second: determinants exist for all square matrices, regardless of what their row-echelon form looks like; the determinant of a square matrix is nonzero if and only if the matrix is invertible, if and only if the number of leading $1$s in the row-echelon form of $A$ equals the number of columns. This has to be proven in some way. For example: the elementary row operations will either multiply the determinant by $-1$ (in the case of row exchanges), multiply it by a nonzero constant $\alpha$ (in the case of multiplying a row by $\alpha\neq 0$), or leave the determinant unchanged (in the case of adding a multiple of one row to another row). Thus, performing elementary row operations does not change whether the determinant is equal to $0$, in the sense that if you obtain $B$ from $A$ by a finite sequence of elementary row operations, then $\det(A)=0$ if and only if $\det(B)=0$.

Since the determinant of an upper triangular matrix is just the product of the diagonal entries, the determinant of a square matrix in row-echelon form will be $1$ if the leading $1$s are all on the diagonal, and $0$ otherwise. So in your case, the row-echelon form of $A$ has nonzero determinant, and therefore (by what we saw before) the determinant of $A$ is nonzero, so $A$ has an inverse.
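
The three determinant facts are easy to check numerically (my own sketch with numpy; the matrix is a hypothetical example):

```python
import numpy as np

A = np.array([[2.0, 1.0, 0.0],
              [0.0, 1.0, 3.0],
              [1.0, 0.0, 1.0]])   # hypothetical example with det(A) = 5
d = np.linalg.det(A)

swapped = A[[1, 0, 2], :]            # exchange rows 0 and 1
print(np.linalg.det(swapped) / d)    # -1: a row exchange flips the sign

scaled = A.copy()
scaled[0] *= 7.0                     # multiply a row by alpha = 7
print(np.linalg.det(scaled) / d)     # 7: det is multiplied by alpha

added = A.copy()
added[2] += 3.0 * added[0]           # add a multiple of one row to another
print(np.linalg.det(added) / d)      # 1: det is unchanged
```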

That said, your next line is incorrect. You go from \[ A\mathbf{x}=\mathbf{0}\] to \[ AA^{-1}\mathbf{x} = \mathbf{0}A^{-1}.\] This not only does not follow, it does not even make sense! First, because if $A$ is an $m\times m$ matrix, then $\mathbf{x}$ and $\mathbf{0}$ are $m\times 1$ matrices; but for $\mathbf{0}A^{-1}$ to make sense, the number of columns of $\mathbf{0}$ (namely, $1$) has to equal the number of rows of $A^{-1}$ (namely, $m$), and this is not true unless you are dealing with $1\times 1$ matrices.

Second, even if you could multiply both sides by $A^{-1}$ on the right, you would not get $AA^{-1}\mathbf{x}$ on the left-hand side of the equation; you would get $A\mathbf{x}A^{-1}$. And then you need to remember that matrix multiplication is not commutative, so in general you cannot go from $\mathbf{x}A^{-1}$ to $A^{-1}\mathbf{x}$ (as noted above, it may not even make sense to write one of them).

So, you multiplied by $A^{-1}$ on the wrong side, and you did it incorrectly. Rather, you want to multiply by $A^{-1}$ on the left of each side of the equality, not on the right. That is, you want to go from \[ A\mathbf{x}=\mathbf{0}\] to \[ A^{-1}(A\mathbf{x}) = A^{-1}\mathbf{0}.\] And from here you can proceed as you did.
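
The size bookkeeping, and the fix, can both be verified numerically (my own sketch with numpy; the matrix is a hypothetical example):

```python
import numpy as np

m = 3
A = np.array([[2.0, 1.0, 0.0],
              [0.0, 1.0, 3.0],
              [1.0, 0.0, 1.0]])   # hypothetical m x m invertible matrix
x = np.zeros((m, 1))              # m x 1, a solution of Ax = 0

try:
    (A @ x) @ np.linalg.inv(A)    # (m x 1) times (m x m): sizes do not match
except ValueError as err:
    print("right multiplication fails:", err)

# Multiplying on the left is fine: A^{-1}(Ax) = Ix = x.
print(np.linalg.inv(A) @ (A @ x))  # the zero (m x 1) vector
```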

So... partial, but definitely not full credit.

  • 0
    Hey, so regarding your comment on multiplying by $A^{-1}$: if I multiplied it on the right-hand side, then I can't multiply $0 \cdot A^{-1}$ due to column issues. Similarly, I can't compute the left side of the equation because of the commutative property of matrices. However, if I multiply $A^{-1}$ to the LEFT of the equality, then both my problems are solved, right? (2010-09-22)
  • 1
    In fact, you cannot multiply it on the right side either, due to size considerations. $A$ is $m\times m$, $A^{-1}$ is $m\times m$, and $\mathbf{x}$ is $m\times 1$. So you cannot multiply $A\mathbf{x}$ (which is $m\times 1$) by $A^{-1}$ on the right; you can only multiply it by $A^{-1}$ on the left. Commutativity doesn't even come into it (but it would stop you even if you did not realize you had size problems for the multiplication). But if you multiply both sides on the left (having fixed all the other problems), then the last part of your argument *does* go through. (2010-09-23)