
Question: Let $v = (1,1,0,1) \in \mathbb{R}^{4}$. Let $$V:= \{f: \mathbb{R}^{4} \rightarrow \mathbb{R}^{2} \mbox{ linear } | f(v) = 0\}.$$

(a) Show that $V$ forms a vector space.

(b) Determine dim$(V)$. (Tip: Describe the elements in $V$ by extending $v$ to a basis of $\mathbb{R}^{4}$).

My attempt: For (a): I know that a vector space is defined through those 8 axioms, and since it says "linear" I know that $f(v+u) = f(v)+f(u)$ and $\lambda f(v) = f(\lambda v)$; however, I don't see how to proceed. What more should I be inferring from the given information? For (b): if I understand correctly, I can extend $v=(1,1,0,1)$ to a basis of $\mathbb{R}^{4}$ with $u=(0,1,0,0)$, $w=(0,0,1,0)$, $x=(0,0,0,1)$, correct? Of course, I wouldn't know what to do with that. In the problems I've encountered so far it seems that dim$(V)$ has always been the number of elements in a basis; however, I doubt that it's as simple as dim$(V)=4$ just because dim$(\mathbb{R}^{4})=4$. I notice that $f(v)=0$; perhaps I could do something with the sum of the kernel and the image? As always, any help is greatly appreciated!

  • For (a), you have to show that the space $V$ of all possible $f$'s is a vector space. That is, now you're not dealing simply with vectors in $\mathbb R^4$, but with functions from $\mathbb R^4$ to $\mathbb R^2$, and you have to show that addition and scalar multiplication on these functions make sense. (2010-11-23)
  • A hint: the set of linear maps from $\mathbb{R}^4\to\mathbb{R}^2$ is precisely the set of $4\times 2$ (or $2\times 4$, depending on convention) matrices. So the dimension of that set is clearly 8. Now, what is the general form of a matrix $A$ such that $Av = 0$? (It becomes easier if you write the matrix $A$ relative to a basis of $\mathbb{R}^4$ found by completing $v$.) (2010-11-24)
  • @Rahul: Thanks for the comment; it definitely helps to clarify my conception of the given information! (2010-11-24)
  • @Willie: Thanks for the hint. I think I understand why the set of linear maps is the set of $4 \times 2$ matrices; however, I must be missing something when I try to think of the general form of the matrix you describe, since I would think matrix $A$ could be full of zeros. (2010-11-24)

1 Answer



It seems, from your comments, that you aren't even very comfortable with the idea of the set $V$ as a vector space.

As it happens, it is better to start from a bit further back, and consider the set of all linear transformations from $\mathbb{R}^4\to\mathbb{R}^2$, which I'm going to call $\mathbf{V}$; that is, $$\mathbf{V}=\left\{ T\colon\mathbb{R}^4\to\mathbb{R}^2\mid \text{ $T$ is linear}\right\}.$$

We want to make $\mathbf{V}$ into a vector space. So the elements of $\mathbf{V}$ are going to be our "vectors"; the fact that these vectors are actually functions when they are at home is immaterial: when they are at their "job" in $\mathbf{V}$, they are "vectors". In order for this to be a vector space, we need to have a way of adding these vectors, and of multiplying them by scalars so that the operations satisfy the axioms of a vector space.

How are we going to add these vectors? Remember from Calculus that if we have two functions $f\colon \mathbb{R}\to\mathbb{R}$ and $g\colon\mathbb{R}\to\mathbb{R}$, we can make a new function called the "sum of $f$ and $g$". This function is called "$f+g$". In order to tell you what function "$f+g$" is, I need to tell you how to evaluate it at any real number $r$. So here's the rule: To find the value of $f+g$ at $r$, first find the value of $f$ at $r$, then find the value of $g$ at $r$, and then add the two results. That is, $$(f+g)(r) = f(r)+g(r).$$ Now, there are two + symbols in that equation, but they mean different things: the first + sign is the symbol that represents the sum of two functions; the second + is the usual sum sign that represents the sum of two real numbers (notice that both $f(r)$ and $g(r)$ are real numbers). So, for example, if $f(x) = 2x-1$ and $g(x)=x^3$, then to find the value of $(f+g)(7)$, we first find $f(7)$ (which is $13$), then we find the value of $g(7)$ (which is $343$), and then we add them ($13+343=356$, so $(f+g)(7) = 356$).
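The calculus example above can be sketched in a few lines of Python (purely illustrative; the function names are mine, not part of the problem):

```python
# Pointwise sum of two real functions, as in the calculus example above.
def f(x):
    return 2 * x - 1

def g(x):
    return x ** 3

def f_plus_g(x):
    # (f+g)(r) = f(r) + g(r): evaluate each function at r, then add the results
    return f(x) + g(x)

print(f_plus_g(7))  # f(7) = 13, g(7) = 343, so (f+g)(7) = 356
```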

We are going to do the same thing with the elements of $\mathbf{V}$. The elements of $\mathbf{V}$ are functions from $\mathbb{R}^4$ to $\mathbb{R}^2$. As it happens, we know how to "add" two elements of $\mathbb{R}^2$ (because this is a vector space). That means that we can play the same game with functions $T\colon\mathbb{R}^4\to\mathbb{R}^2$ and $U\colon \mathbb{R}^4\to\mathbb{R}^2$ that we played in calculus: we are going to define a new function, which I'm going to call $T\oplus U$ (I'm using a different symbol so we are clear that this is a new kind of addition which I am defining), which is going to be a function from $\mathbb{R}^4$ to $\mathbb{R}^2$. In order to tell you what this function is, I need to tell you how you evaluate it at any element of $\mathbb{R}^4$. Okay, here goes: given any $\mathbf{v}\in\mathbb{R}^4$, first find out what $T(\mathbf{v})$ is; then find out what $U(\mathbf{v})$ is. Both of these are elements of $\mathbb{R}^2$, so we can add them. The result of adding them is what $(T\oplus U)(\mathbf{v})$ is. That is, if + represents the sum of vectors in $\mathbb{R}^2$, then $$ (T\oplus U)(\mathbf{v}) = T(\mathbf{v}) + U(\mathbf{v}).$$ (I put the parentheses around $T\oplus U$ so that we are clear that it is a single entity with a long name which we are evaluating at $\mathbf{v}$; that way we don't think that we are doing $U(\mathbf{v})$ first, and then $\oplus$-ing something called $T$ to that).

Now, this makes sense for any two functions from $\mathbb{R}^4$ to $\mathbb{R}^2$, whether they are linear or not. For instance, if $T\colon\mathbb{R}^4\to\mathbb{R}^2$ is the function $T(a,b,c,d) = (a+b,cd)$, and $U\colon\mathbb{R}^4\to\mathbb{R}^2$ is the function $U(a,b,c,d) = (a,d)$, then the value of $T\oplus U$ at $(1,2,3,4)$ is the result of adding $T(1,2,3,4)=(3,12)$ and $U(1,2,3,4)=(1,4)$; so $(T\oplus U)(1,2,3,4)=(3,12)+(1,4) = (4,16)$. If $T$ and $U$ are any functions from $\mathbb{R}^4$ to $\mathbb{R}^2$, then $T\oplus U$ is also a function from $\mathbb{R}^4$ to $\mathbb{R}^2$. (As in Calculus, though, if $T$ and $U$ happen to have "nice" properties, then there is a good chance that these properties will be inherited by $T\oplus U$).
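The $T\oplus U$ example can likewise be spot-checked in Python (a sketch; `oplus` is my name for the $\oplus$ operation):

```python
# Pointwise addition of two functions R^4 -> R^2, represented as Python
# functions returning pairs. Note that this T is NOT linear (its second
# coordinate is c*d); the definition of the sum works anyway.
def T(a, b, c, d):
    return (a + b, c * d)

def U(a, b, c, d):
    return (a, d)

def oplus(F, G):
    # (F (+) G)(v) = F(v) + G(v), the sum taken in R^2
    def summed(a, b, c, d):
        f1, f2 = F(a, b, c, d)
        g1, g2 = G(a, b, c, d)
        return (f1 + g1, f2 + g2)
    return summed

print(oplus(T, U)(1, 2, 3, 4))  # (3,12) + (1,4) = (4, 16)
```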

This kind of operation is called pointwise addition: whenever you have functions from the same domain to the same codomain, and you know how to "add" elements of the codomain, you can define a "sum of functions" $f+g$ by saying "first figure out the value of $f$, then figure out the value of $g$, and then add the results together." It's "pointwise" because the way we compute the sum is by first figuring out the values at each point, and then adding those values. Notice that the domain $\mathbb{R}^4$ is not playing any role here; you could replace it with any set and still be able to define the sum of functions, because the sum is taking place in the image, not in the domain.

We can play the same game with scalar multiplication. If $T\colon\mathbb{R}^4\to\mathbb{R}^2$ is any function, and $\alpha$ is any real number, then we can remember that we know how to "multiply" any element of $\mathbb{R}^2$ by $\alpha$; so we can define a "pointwise multiplication by $\alpha$" as well. I'm going to define a new function, called $\alpha\odot T\colon\mathbb{R}^4\to\mathbb{R}^2$. The rule for evaluating $\alpha\odot T$ is: to find the value at $\mathbf{v}$, first evaluate $T$ at $\mathbf{v}$, and then multiply the result by $\alpha$. That is, $$(\alpha\odot T)(\mathbf{v}) = \alpha\cdot T(\mathbf{v}).$$ Again, I put the parentheses around $\alpha\odot T$ to remind us that it is a single entity with a long name. The $\cdot$ on the right hand side is the usual scalar multiplication of $\mathbb{R}^2$. For instance, going back to my $T\colon\mathbb{R}^4\to\mathbb{R}^2$ from the previous paragraph, $T(a,b,c,d)=(a+b,cd)$, if $\alpha=\frac{3}{2}$ then $\frac{3}{2}\odot T$ at $(1,2,3,4)$ is computed by first doing $T(1,2,3,4)=(3,12)$, and then doing $\frac{3}{2}(3,12) = (\frac{9}{2},18)$.
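And the $\odot$ example, with the same caveats (`odot` is my illustrative name for the operation):

```python
# Pointwise scalar multiplication of a function R^4 -> R^2.
def T(a, b, c, d):
    return (a + b, c * d)

def odot(alpha, F):
    # (alpha (.) F)(v) = alpha * F(v): evaluate F first, then scale in R^2
    def scaled(a, b, c, d):
        x, y = F(a, b, c, d)
        return (alpha * x, alpha * y)
    return scaled

print(odot(3 / 2, T)(1, 2, 3, 4))  # (3/2) * (3, 12) = (4.5, 18.0)
```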

Again, this definition makes sense for any function into $\mathbb{R}^2$, whether linear or not, whether from $\mathbb{R}^4$ or not. This one is called pointwise (scalar) multiplication, because again the "multiplication" is happening after we figure out the value "at each point".

So, now let's go back and look at the set $\mathbf{V}$, $$\mathbf{V}= \bigl\{ T\colon\mathbb{R}^4\to\mathbb{R}^2\ \bigm|\ \text{$T$ is linear}\bigr\}.$$ This is a set of functions, and if we take two functions $T$ and $U$ from $\mathbf{V}$ then we can "add" them using $\oplus$ to get a function $T\oplus U\colon\mathbb{R}^4\to\mathbb{R}^2$, and if we take $\alpha\in\mathbb{R}$ and $T\in\mathbf{V}$, we can "scalar multiply them" and get a function $\alpha\odot T\colon\mathbb{R}^4\to\mathbb{R}^2$.

I claim that if $T$ and $U$ are in $\mathbf{V}$ (that is, if in addition to just being functions they are also linear functions), then $T\oplus U$ is also going to be in $\mathbf{V}$ (it's already a function from $\mathbb{R}^4$ to $\mathbb{R}^2$, so we will just need to check if it is linear). And that if $T\in\mathbf{V}$ and $\alpha\in\mathbb{R}$, then $\alpha\odot T$ is also going to be in $\mathbf{V}$. That is, $\oplus$ will give me a way of adding elements of $\mathbf{V}$ and get elements of $\mathbf{V}$, and $\odot$ will give me a way to multiply elements of $\mathbf{V}$ by scalars to get elements of $\mathbf{V}$. (This is the first requirement for a set to be a vector space: it must have some kind of "vector addition" and some kind of "scalar multiplication").

So, suppose that you start by taking $T$ and $U$ in $\mathbf{V}$. In order for $T\oplus U$ to also belong to the club $\mathbf{V}$, $T\oplus U$ must satisfy the membership requirements: it must be a function, it must map $\mathbb{R}^4$ to $\mathbb{R}^2$, and it must be linear. We already know it is a function that maps $\mathbb{R}^4$ to $\mathbb{R}^2$, so we just need to check that it is linear. Remember that each of $T$ and $U$ is itself linear.

How does one check that a function $\mathfrak{f}\colon\mathbb{R}^4\to\mathbb{R}^2$ is linear? We check that if $u,w\in\mathbb{R}^4$ are any vectors, then $\mathfrak{f}(u+w) = \mathfrak{f}(u)+\mathfrak{f}(w)$ (the first + is the sum in $\mathbb{R}^4$; the second is the sum in $\mathbb{R}^2$; the equality will take place in $\mathbb{R}^2$); and we check that for any $u\in\mathbb{R}^4$ and any real number $\alpha$, $\mathfrak{f}(\alpha u) = \alpha\mathfrak{f}(u)$ (first scalar multiplication is in $\mathbb{R}^4$; the second is in $\mathbb{R}^2$; and the equality is taking place in $\mathbb{R}^2$). So, we do this for our function $T\oplus U$: let $u,w\in\mathbb{R}^4$. Then: \begin{align*} (T\oplus U)(u+w) &= T(u+w) + U(u+w) &\text{(by definition of $(T\oplus U)$)}\\ &= \bigl(T(u) + T(w)\bigr) + \bigl(U(u) + U(w)\bigr) &\text{(because $T$ and $U$ are each linear)}\\ &= \bigl(T(u) + U(u)\bigr) + \bigl(T(w) + U(w)\bigr) &\text{(because addition in $\mathbb{R}^2$}\\ &&\text{ is associative and commutative)}\\ &= (T\oplus U)(u) + (T\oplus U)(w). &\text{(by definition of $(T\oplus U)$)} \end{align*} And if $u\in\mathbb{R}^4$ and $\alpha\in\mathbb{R}$, then \begin{align*} (T\oplus U)(\alpha u) &= T(\alpha u) + U(\alpha u) &\text{(by definition of $(T\oplus U)$)}\\ &= \alpha T(u) + \alpha U(u) &\text{(because $T$ and $U$ are each linear)}\\ &= \alpha(T(u) + U(u)) &\text{(because scalar multiplication distributes over sum in $\mathbb{R}^2$)}\\ &=\alpha(T\oplus U)(u).&\text{(by definition of $T\oplus U$)} \end{align*} So, $T\oplus U$ is indeed a linear function from $\mathbb{R}^4$ to $\mathbb{R}^2$, and so by rights belongs to $\mathbf{V}$.
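If you want a numerical sanity check of this computation (not a proof!), you can represent two linear maps by $2\times 4$ matrices of my own choosing and verify additivity and homogeneity of the pointwise sum at sample vectors:

```python
import numpy as np

# Two linear maps R^4 -> R^2, given by (arbitrarily chosen) 2x4 matrices.
A = np.array([[1, -2, 0, 3],
              [4,  1, 5, 0]])
B = np.array([[0,  2, 1, -1],
              [3,  0, 2,  2]])

def T(x): return A @ x
def U(x): return B @ x
def T_oplus_U(x): return T(x) + U(x)  # pointwise sum

u = np.array([1, 2, 3, 4])
w = np.array([-1, 0, 5, 2])
alpha = 7

# (T (+) U)(u + w) = (T (+) U)(u) + (T (+) U)(w), and similarly for scalars
assert np.array_equal(T_oplus_U(u + w), T_oplus_U(u) + T_oplus_U(w))
assert np.array_equal(T_oplus_U(alpha * u), alpha * T_oplus_U(u))
```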

Likewise, to check that if $T\in\mathbf{V}$ and $\gamma\in\mathbb{R}$ then $\gamma \odot T$ is linear (I'm using $\gamma$ so we don't confuse it with the $\alpha$ that we need to check linearity) let $u,w\in\mathbb{R}^4$; then: \begin{align*} (\gamma\odot T)(u+w) &= \gamma T(u+w) &\text{(by definition of $\gamma\odot T$)}\\ &= \gamma\bigl(T(u) + T(w)\bigr) &\text{(because $T$ is linear)}\\ &= \gamma T(u) + \gamma T(w) &\text{(because scalar multiplication distributes in $\mathbb{R}^2$)}\\ &= (\gamma\odot T)(u) + (\gamma\odot T)(w). &\text{(by definition of $\gamma\odot T$)} \end{align*} So $(\gamma\odot T)(u+w) = (\gamma\odot T)(u) + (\gamma\odot T)(w)$. And, if $u\in \mathbb{R}^4$ and $\alpha\in\mathbb{R}$, then: \begin{align*} (\gamma\odot T)(\alpha u) &= \gamma T(\alpha u) &\text{(by definition of $\gamma\odot T$)}\\ &= \gamma\left(\alpha T(u)\right) &\text{(because $T$ is linear)}\\ &= (\gamma \alpha)T(u) &\text{(scalar multiplication is associative in $\mathbb{R}^2$)}\\ &= (\alpha\gamma)T(u) &\text{(real multiplication is commutative)}\\ &= \alpha\left(\gamma T(u)\right) &\text{(scalar multiplication is associative in $\mathbb{R}^2$)}\\ & = \alpha (\gamma\odot T)(u). &\text{(by definition of $\gamma\odot T$)} \end{align*} So $(\gamma\odot T)(\alpha u) = \alpha(\gamma\odot T)(u)$. Together with the additivity, this tells us that $\gamma\odot T$ is linear, and therefore is, by all rights, a member of $\mathbf{V}$.

So... now we have a set $\mathbf{V}$, and operations $\oplus$ and $\odot$ on $\mathbf{V}$. To check that this makes it into a vector space, we need to check the axioms. That is, we need to check that:

  1. If $T$ and $U$ are in $\mathbf{V}$, then $T\oplus U = U\oplus T$.

    • But what does it mean for $T\oplus U$ to equal $U\oplus T$? Well, they are both functions, so we need to check that they have the same domain and the exact same value at every point of the domain; that is, we need to check that for every $u\in\mathbb{R}^4$, $(T\oplus U)(u) = (U\oplus T)(u)$.
  2. If $S$, $T$, and $U$ are in $\mathbf{V}$, then $(S\oplus T)\oplus U = S\oplus(T\oplus U)$.

    • That is, the two functions, $(S\oplus T)\oplus U$ and $S\oplus(T\oplus U)$ have the same domain, and for every $u$ in the domain they have the same value at $u$.
  3. There is a function $O$ in $\mathbf{V}$ such that for all functions $T\in \mathbf{V}$, $O\oplus T = T$.

  4. For each $T\in\mathbf{V}$ there is a function $U\in\mathbf{V}$ such that $T\oplus U = O$.

  5. For each $T\in\mathbf{V}$, $1\odot T = T$.

  6. For each pair of real numbers $\alpha$ and $\beta$ and every $T\in\mathbf{V}$, $\alpha\odot(\beta \odot T) = (\alpha\beta)\odot T$.

  7. For each $\alpha\in\mathbb{R}$ and every $T$ and $U$ in $\mathbf{V}$, $\alpha\odot(T\oplus U) = (\alpha\odot T) \oplus (\alpha\odot U)$.

  8. For every $\alpha$ and $\beta$ in $\mathbb{R}$ and every $T\in\mathbf{V}$, $(\alpha+\beta)\odot T = (\alpha\odot T)\oplus (\beta\odot T)$.

    • Again, both $(\alpha+\beta)\odot T$ and $(\alpha\odot T)\oplus(\beta\odot T)$ are functions, so to check if they are equal you need to check that they have the same domain and the same value at each point of the domain.

If all of these conditions are satisfied, then by all rights we must acknowledge that $\mathbf{V}$, with the operations $\oplus$ for "vector addition" and $\odot$ as "scalar multiplication" is a vector space. The "vectors" of this vector space are linear transformations when they are relaxing at home, but at their job in $\mathbf{V}$ they are "vectors" (because the elements of a vector space are called "vectors").
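A few of these axioms can be spot-checked at sample points in Python (illustrative only: checking finitely many points does not prove two functions are equal, which is what the axioms actually require; the particular $T$ and $U$ below are my own examples):

```python
# Functions R^4 -> R^2, taken here as Python functions of a 4-tuple.
def oplus(F, G):
    # pointwise sum: evaluate each function, then add in R^2
    def summed(v):
        f1, f2 = F(v)
        g1, g2 = G(v)
        return (f1 + g1, f2 + g2)
    return summed

def O(v):
    # the zero function: sends every vector of R^4 to (0, 0)
    return (0, 0)

def T(v):
    a, b, c, d = v
    return (a - c, 2 * b + d)

def U(v):
    a, b, c, d = v
    return (b, a + d)

for v in [(1, 1, 0, 1), (2, -1, 3, 0)]:
    assert oplus(T, U)(v) == oplus(U, T)(v)  # axiom 1 at the point v
    assert oplus(O, T)(v) == T(v)            # axiom 3 at the point v
```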

Note that you have several different meanings of "vector" at play: there's the "vectors" that are elements of $\mathbf{V}$ (who, in reality, are linear maps from $\mathbb{R}^4$ to $\mathbb{R}^2$). You also have the "vectors" that are elements of $\mathbb{R}^4$ (at home, they are $4$-tuples of real numbers); and you have the elements of $\mathbb{R}^2$ (who are "really" ordered pairs of real numbers). You need to keep them straight, but it is not hard to do that if you just keep your wits about you and simply refuse to get confused.


So, hopefully now we know that $\mathbf{V}$ actually is a vector space.

Our set $V$ is a subset of $\mathbf{V}$. In order to show that $V$ is itself a vector space (with the same operations that we have for $\mathbf{V}$), we are in the situation of checking if something is a subspace: because $\oplus$ is associative and commutative for all functions in $\mathbf{V}$, it is also so in $V$; similarly with axioms 5 through 8 for $\odot$ and $\oplus$. Really, the only sticking points for $V$ to be a vector space are four things: what I call the "hidden axioms" of a vector space, and axioms 3 and 4. Explicitly:

  1. We know that if we take $T$ and $U$ in $\mathbf{V}$, then $T\oplus U$ is also in $\mathbf{V}$. But for $V$ to be a vector space, we need to make sure that if we take $T$ and $U$ in $V$, then $T\oplus U$ is not just going to be in $\mathbf{V}$, but actually in $V$ (we know the sum is in the same "state", but we actually need to make sure it lives in the same city). This is one of the "hidden axioms" of vector spaces, because it is not listed in the usual list; it is implicit when we talk about addition of vectors being an "operation". So we need to check that if $T$ and $U$ are in $V$ (and not merely in $\mathbf{V}$), then $T\oplus U$ is also in $V$.

  2. Likewise, we know that if $T$ is in $\mathbf{V}$ and $\alpha\in\mathbb{R}$, then $\alpha\odot T$ is going to be in $\mathbf{V}$. So if we start with $T\in V$, then certainly $\alpha\odot T$ is going to be in $\mathbf{V}$, but we actually need it to be in the more exclusive neighborhood of $V$. So we need to check that: if $T\in V$ and $\alpha\in\mathbb{R}$, we need to make sure that $\alpha\odot T$ is also in $V$ (and not merely in $\mathbf{V}$, which we know is the case).

  3. Axiom 3 is not immediate either: we know that there is a vector $O$ in $\mathbf{V}$ such that for every $T\in\mathbf{V}$, $O\oplus T = T$. In particular, if we take a $T\in V$, then we will also have $O\oplus T=T$. But the problem is that for $V$ to be a vector space, we need this special vector $O$ to actually be in $V$, and not somewhere outside of $V$. So we need to check that this happens.

  4. Axiom 4 is also not immediate: if $T\in V$, then since $T$ is also in $\mathbf{V}$ we know there is a $U$ in $\mathbf{V}$ such that $T\oplus U = O$. But for $V$ to be a vector space, we need to make sure that $U$ is also in $V$, and not just in $\mathbf{V}$.

But if we can check these four new items, then we will have that $V$ is also a vector space. (In fact, there are shortcuts, which you may or may not have seen if you have already studied "subspaces"; you are really trying to show that $V$ is a subspace of $\mathbf{V}$, so if you know the shortcuts, you can use them here, keeping in mind that "vectors" means "elements of $V$" (or $\mathbf{V}$), etc.).

For example, how do we check that if $T$ and $U$ are in $V$, then $T\oplus U$ is in $V$? Well, we know $T\oplus U$ is linear, and maps $\mathbb{R}^4$ to $\mathbb{R}^2$ (because we know that $T\oplus U$ is in $\mathbf{V}$). The only extra membership requirement to be in $V$ is that you must have that $(T\oplus U)(v) = 0$ (that is, that $(T\oplus U)(1,1,0,1) = (0,0)$). Since we are assuming that $T$ and $U$ are each in $V$, that means that $T(v)=0$ and that $U(v)=0$. But that means that \begin{align*} (T\oplus U)(v) &= T(v) + U(v) &\text{(by definition of $T\oplus U$)}\\ &= 0 + 0 &\text{(since $T$ and $U$ are each in $V$)}\\ &= 0. &\text{(well... duh)} \end{align*} So... guess what? $T\oplus U$ satisfies all membership requirements for belonging to $V$, so it is in $V$! Great!
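In matrix form, this membership check looks like the following sketch (the two matrices are hypothetical examples I chose so that each kills $v$):

```python
import numpy as np

v = np.array([1, 1, 0, 1])

# Two linear maps R^4 -> R^2, as 2x4 matrices, each satisfying A @ v = 0.
A = np.array([[1, -1, 5,  0],
              [2,  0, 7, -2]])
B = np.array([[0,  1, 3, -1],
              [1,  2, 4, -3]])

assert np.array_equal(A @ v, np.zeros(2))        # T is in V
assert np.array_equal(B @ v, np.zeros(2))        # U is in V
assert np.array_equal((A + B) @ v, np.zeros(2))  # so T (+) U is in V too
```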

Then we need to check the rest of the conditions similarly.


What about (b)? There are a couple of ways; following the suggestion, remember that the elements of $V$ are really linear transformations. To "know" what a linear transformation is, it is enough to know what it does to a basis, any basis. So, let's start by taking the vector $v$, and extending $\{v\}$ to a basis for $\mathbb{R}^4$; say $\beta=[v, v_2, v_3, v_4]$. In order for me to give you an element $f$ of $\mathbf{V}$, it is enough for me to tell you what $f(v)$, $f(v_2)$, $f(v_3)$, and $f(v_4)$ are. From that information, and knowing that it is a linear map, you can figure out who $f$ is. And, what is more, any values I give you will give you an element of $\mathbf{V}$.

So, what do I need to give you in order to describe an element of $V$? I still need to tell you what $f(v)$, $f(v_2)$, $f(v_3)$ and $f(v_4)$ are, but as it happens, in order for $f$ to be in $V$ we need $f(v)=0$. So I just need to tell you what happens to $v_2$, $v_3$, and $v_4$.

You might be tempted to think this means the dimension of $V$ is $3$, because you have three degrees of freedom, but in fact this is not the case: each choice of $f(v_2)$, $f(v_3)$ and $f(v_4)$ is a vector in $\mathbb{R}^2$; so you actually have to specify two things for each of them.

This should suggest what the answer should be; you still have to actually prove it, though.
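If you want to check your eventual count numerically (a sanity check, not the proof the problem asks for), you can identify each linear map $\mathbb{R}^4\to\mathbb{R}^2$ with a $2\times 4$ matrix $A$, so that $\mathbf{V}$ has dimension $8$, and compute the dimension of the kernel of the linear map $A\mapsto Av$:

```python
import numpy as np

v = np.array([1, 1, 0, 1])

# Flatten a 2x4 matrix A row-major into a vector in R^8; then A -> A @ v
# is a linear map R^8 -> R^2 whose matrix M is built from copies of v,
# since (A @ v)[i] = sum_j A[i, j] * v[j].
M = np.zeros((2, 8))
M[0, 0:4] = v
M[1, 4:8] = v

# Rank-nullity: dim V = dim ker = 8 - rank(M).
dim_V = 8 - np.linalg.matrix_rank(M)
print(dim_V)  # 6: three free basis images, two coordinates each
```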

  • Arturo: I added a missing `$` sign that was breaking your math formatting. (2010-11-24)
  • @Rahul Narain: Thanks. (2010-11-24)
  • @Arturo: So I think it's progress that I am seeing this as the set of linear maps from $\mathbb{R}^{4}$ to $\mathbb{R}^{2}$ for which $f(v)=0$. It wouldn't be that difficult for me to accept the properties of a vector space, but I find it very difficult to begin proving that they are true. For instance, to 'prove' that $T + U$ is linear whenever $T$ and $U$ are linear, I feel like I would simply be restating using symbols if I were to write something like: (continued) (2010-11-24)
  • Let $T,U \in \mathbf{V}$, $T(a,b,c,d) + U(a,b,c,d) = (T+U)(a+a, b+b, c+c, d+d) \Rightarrow (T+U) \in \mathbf{V} \Rightarrow (T+U)$ is a linear map?? Just to be clear, must the input for both $T$ and $U$ be the same? (2010-11-24)
  • @user3711: I'm on my way out, but when I get back I'll add to the answer to discuss how to approach this part of the problem. (2010-11-24)
  • @Arturo: That'd be great; there's absolutely no rush, but thank you very much. (2010-11-24)
  • @user3711: Phew. Done. Let me know what is still unclear, or where I did not explain myself clearly. (2010-11-24)
  • +1 for the *astounding* amount of effort you put into this! (2010-11-24)
  • @Rahul Narain: but not for the *quality* of the output? (-; (2010-11-24)
  • @Arturo: I'm afraid not; if I gave you another +1 for the quality it wouldn't work! :) (2010-11-24)
  • @Arturo: Wow, thank you! This is extremely helpful; I'll be digesting it for a little while... (2010-11-25)
  • @user3711: Be sure to ask if something is unclear. For the purpose you want, there are other ways of doing this (e.g., by thinking in terms of matrices as suggested earlier); but I think it is best to put it all in the context in which it belongs, which is that of 'pointwise addition' and 'pointwise scalar multiplication', which is really what is going on "behind the scenes". The sooner you manage to make the leap to realize that all sorts of things that we don't usually think of as "vectors" can form 'vector spaces', the better! (2010-11-25)
  • @Arturo: Ok, so it is unbelievably helpful for me to see a method for proving some of this stuff. I know it's just a minor detail, and I hope I am not being more clueless than usual, but I am having trouble seeing how $T: \mathbb{R}^{4} \rightarrow \mathbb{R}^{4}$, $T(a,b,c,d)=(a+b,cd)$ would fulfill the $\alpha T(u)=T(\alpha u)$ requirement. Is it correct to write: let $\alpha = 2$, so $T(\alpha\cdot 1,\alpha\cdot 2,\alpha\cdot 3,\alpha\cdot 4) = T(2,4,6,8) = (6,48) \neq (6, 24) = \alpha (3, 12) = \alpha T(1,2,3,4)$? (2010-11-26)
  • @user3711: It would not (it is a map from $\mathbb{R}^4\to\mathbb{R}^2$), because that $T$ is *not* linear. But remember that I mentioned that you can define pointwise addition and/or pointwise multiplication with *any* functions, whether or not they were linear. That example was one in which the function was *not* linear (so it does not in general satisfy $T(u+v)=T(u)+T(v)$ or $T(\alpha u) = \alpha T(u)$); but you can *still* define pointwise addition with it. The point is to emphasize that the addition of functions here is more general than just addition of *linear* functions. (2010-11-26)
  • @Arturo: And in general, how do I prove something like "there is a $0$ such that $T + 0 = T$"? Is it only clear in this case since it was given that $f(v) = 0$? That is, for two vectors to be members, they would have to meet that criterion, and if I were to add them they would still have to meet that criterion? Once again, a huge "thank you!" to you, Arturo! (2010-11-26)
  • @user3711: Yes, you can show that this particular $T$ is not linear that way; in fact, giving an explicit example like that is the best way to show it is not linear. Your second comment is a bit confused. Before you get to the case of linear maps with $f(v)=0$, you want to see if you can find a linear map, which we call $\mathbf{O}$, such that $T\oplus\mathbf{O}=T$ for all $T$. For that to happen, you need $T(v)=(T\oplus\mathbf{O})(v) = T(v)+\mathbf{O}(v)$ for all $v\in\mathbb{R}^4$. What should $\mathbf{O}(v)$ be for that to work? (cont) (2010-11-26)
  • @user3711: Next, when you want to talk about $V$, think of $\mathbf{V}$ as a club: you can only get in if you meet the membership requirements (in this case, being a linear transformation from $\mathbb{R}^4$ to $\mathbb{R}^2$). The set $V$ is like the VIP room *inside* this club: you can only get in if you can get into the club, *and* you are a VIP. Here, to be a VIP you need to be a linear transformation from $\mathbb{R}^4$ to $\mathbb{R}^2$ (be in the club) *and* also satisfy $f(v)=0$ ("be a VIP"). Then you want to show that if you take two VIPs and you add them, you will also get a VIP. (2010-11-26)
  • @Arturo: Ok, I like the club analogy. As for the $\mathbf{O}$: if I need to find a linear map, could I say that by definition a linear map sends the zero vector to zero, and that $\mathbf{O}(v) = 0$? (2010-11-26)
  • @user3711: The map $\mathbf{O}$ that will be the "zero vector" in the vector space $\mathbf{V}$ has to send *all* vectors to $0$; I'm afraid I may have been confusing by using $v$ above, given that $v$ has a special meaning here. It was meant to be "an arbitrary vector". So $\mathbf{O}$ must not just send "the zero vector to zero": the map that satisfies $T\oplus \mathbf{O}=T$ for *all* $T\in\mathbf{V}$ must send **every** vector in $\mathbb{R}^4$ to zero. It *is* linear (so it belongs to $\mathbf{V}$), and is also a VIP (it sends $v$ to $0$), so it will also be in $V$. (2010-11-26)
  • This is, perhaps, one of the most beautiful things I have ever read here. I wept for joy. (2012-06-16)