The collection of all functions that are differentiable at least $n$ times (let's call it $\mathcal{F}_n$ just for the purposes of this post) forms a vector space; and the operator $D$ defined by $D(f) = f'$ is linear ($D(f+g)=D(f)+D(g)$, and $D(\alpha f) = \alpha D(f)$ for all $f,g$ and scalars $\alpha$). So $D$ is a linear transformation from the vector space of functions that can be differentiated at least $n$ times to the vector space of functions that can be differentiated at least $n-1$ times.
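If you want to see the two linearity properties in action, here is a quick sketch using SymPy (the choice of sample functions $\sin x$ and $e^x$ is just for illustration):

```python
# Illustrative check that D(f) = f' is linear, using two sample functions.
import sympy as sp

x, a = sp.symbols('x a')
f = sp.sin(x)   # sample elements of F_n
g = sp.exp(x)

D = lambda h: sp.diff(h, x)   # the differentiation operator

# Additivity: D(f + g) = D(f) + D(g)
assert sp.simplify(D(f + g) - (D(f) + D(g))) == 0

# Homogeneity: D(a*f) = a*D(f) for a scalar a
assert sp.simplify(D(a*f) - a*D(f)) == 0
```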
Now, suppose you have a homogeneous linear differential equation:
$$ f_n(x) y^{(n)}(x) + \cdots + f_1(x)y'(x) + f_0(x)y(x) = 0.$$
Then the collection of all functions $y(x)$ that are solutions to this equation forms a subspace of $\mathcal{F}_n$: if $y_1$ and $y_2$ are solutions, then so is $y_1+\alpha y_2$ for any scalar $\alpha$, and the zero function is certainly a solution. It is, in fact, the nullspace of a certain linear transformation, namely the linear transformation $L$ given by
$$L(y) = f_n(x)y^{(n)}(x)+\cdots + f_1(x)y'(x) + f_0(x)y(x).$$
So one can bring some linear algebra to bear on this problem; e.g., determining the dimension of the solution space.
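For a concrete instance of "the solution space is a nullspace with a dimension," take $L(y) = y'' + y$; its nullspace turns out to be $2$-dimensional, spanned by $\cos x$ and $\sin x$. A sketch (the particular equation is just an example):

```python
# The nullspace of L(y) = y'' + y: solved symbolically, it has two free
# constants C1, C2, i.e. the solution space is 2-dimensional.
import sympy as sp

x = sp.symbols('x')
y = sp.Function('y')

sol = sp.dsolve(sp.Eq(y(x).diff(x, 2) + y(x), 0), y(x))
print(sol)   # y(x) = C1*sin(x) + C2*cos(x)

# Each basis function really is in the nullspace of L
assert sp.simplify(sp.cos(x).diff(x, 2) + sp.cos(x)) == 0
assert sp.simplify(sp.sin(x).diff(x, 2) + sp.sin(x)) == 0
```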
Also, in analogy to the case of systems of linear equations, consider a *non*homogeneous linear differential equation:
$$f_n(x) y^{(n)}(x) + \cdots + f_1(x)y'(x) + f_0(x)y(x) = g(x).$$
Suppose you could find a particular solution $y_p$ to this equation. Then, if $y_h$ is any solution to the corresponding homogeneous equation, $y_p+y_h$ is a solution to the nonhomogeneous equation as well; and if $z_p$ is another solution to the nonhomogeneous equation, then $y_p-z_p$ is a solution to the homogeneous equation. So every solution to the nonhomogeneous equation is of the form $y_p + y_h$, where $y_p$ is the particular solution you found, and $y_h$ is a solution to the associated homogeneous equation. This is exactly the same thing that happens with systems of linear equations (where the unknowns are numbers).
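A small worked instance of this "particular plus homogeneous" structure (the equation $y'' + y = x$ and the choice $y_p = x$ are just for illustration):

```python
# For y'' + y = x, the function y_p = x is a particular solution, and adding
# any homogeneous solution y_h = C1*cos(x) + C2*sin(x) gives another solution.
import sympy as sp

x, C1, C2 = sp.symbols('x C1 C2')

y_p = x   # particular solution: y_p'' + y_p = 0 + x = x
assert sp.simplify(y_p.diff(x, 2) + y_p - x) == 0

y_h = C1*sp.cos(x) + C2*sp.sin(x)   # general homogeneous solution
assert sp.simplify(y_h.diff(x, 2) + y_h) == 0

# y_p + y_h still solves the nonhomogeneous equation, for any C1, C2
y = y_p + y_h
assert sp.simplify(y.diff(x, 2) + y - x) == 0
```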
In many cases, you can show that any solutions to your equations will actually be infinitely differentiable (e.g., if your $f$s are infinitely differentiable). Then you can think of the linear transformation $L$ as a linear operator on the space of infinitely differentiable functions, and eigenvectors and eigenvalues can come into play: they give you simpler ways of thinking about a linear transformation, so they give you simpler ways of thinking about this particular linear transformation (which happens to correspond to solutions of a differential equation).
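The classic example of an eigenvector in this setting: the exponentials $e^{\lambda x}$ are eigenvectors of $D$ with eigenvalue $\lambda$, since $D(e^{\lambda x}) = \lambda e^{\lambda x}$. A one-line check:

```python
# e^{lam*x} is an eigenvector of the differentiation operator D,
# with eigenvalue lam: D(e^{lam*x}) = lam * e^{lam*x}.
import sympy as sp

x, lam = sp.symbols('x lam')
v = sp.exp(lam*x)
assert sp.simplify(sp.diff(v, x) - lam*v) == 0
```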
Also, systems of linear differential equations very naturally lead to linear transformations where the eigenvectors and eigenvalues play a key role in helping you solve the system, because they "de-couple" it: they let you trade a complex system, in which each variable affects the derivatives of the others, for a system of new variables that are completely independent of one another (or, in the case of generalized eigenvectors, depend only on a few of the others in a simple way). This makes the system much easier to solve.