What a spectacularly rubbish way to define convergence. Here's how to think of it in a simple, yet rigorous way.
For example, suppose we have a sequence $S_{n} = \frac{1}{2^n}$. We know that $S_{0} = 1, S_{1} = \frac{1}{2}, S_{2} = \frac{1}{4}$, and so on and so forth, just as you would have defined it in Calculus.
We might now be interested in the sum $\sum_{n=0}^{\infty} S_{n} = 1 + \frac{1}{2} + \frac{1}{4} + ...$.
You may remember that this series will converge to 2. How should we express this analytically?
Since we cannot actually sum up an infinite number of terms, we'll never actually get to 2 if we sum this up by hand. However, suppose I only wanted to get a value CLOSE to 2. How close? Let's say we want our sum to be within 0.5 of 2. That is, we want the sum to be between $2-0.5$ and $2+0.5$. This is the interval $(1.5,2.5)$. I don't have to sum up all of the terms to get in this interval. In fact, I need to sum up only 3 terms: $1 + \frac{1}{2} + \frac{1}{4} = 1.75$ to be in this interval.
Clearly, $\sum_{n=0}^{2} S_{n} = 1.75$ is in the interval. We can, if we want to, sum up more than 3 terms, and we will still remain in the interval. The point is that any number of terms beyond 3 will still keep us in the interval. We can also say that we need at least the partial sum ending at $S_{2}$ (which is our third term).
Now, suppose we wanted our interval to be smaller, between $2-0.01$ and $2+0.01$, which is the interval $(1.99,2.01)$. It turns out we need 8 terms for the partial sums to get within this interval. From the eighth term on (i.e. from the partial sum ending at $S_7$), the sequence of partial sums stays in the interval.
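If you'd like to see these counts fall out of a computation, here is a small Python sketch (the function name and loop are my own illustration, not a standard recipe) that counts how many terms of $\frac{1}{2^n}$ we must add before the partial sum first lands in $(2-\epsilon, 2+\epsilon)$:

```python
def terms_needed(eps, limit=2.0):
    """Count how many terms of 1/2^n must be summed before the
    partial sum enters the open interval (limit - eps, limit + eps)."""
    partial = 0.0
    n = 0
    while not (limit - eps < partial < limit + eps):
        partial += 1.0 / 2**n
        n += 1
    return n

print(terms_needed(0.5))   # 3 terms (partial sum through n = 2)
print(terms_needed(0.01))  # 8 terms (partial sum through n = 7)
```

Note that the partial sums are increasing and bounded above by 2, so once they enter the interval they never leave it, which is why counting the first entry suffices here.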
In our above examples, we wanted to get within 0.5 or 0.01 of the point of convergence. These are our epsilons. That is, $(2-\epsilon, 2 + \epsilon)$ is our interval, and we can set $\epsilon$ to be anything we like. We chose to set epsilon to 0.5 or 0.01, but we could have chosen a smaller number. And therein lies the idea of convergence.
You see, no matter how small I choose my epsilon (as long as it is greater than $0$), I can still find a point where my sequence enters the interval $(2-\epsilon, 2 + \epsilon)$ and never leaves it afterwards. When epsilon was 0.5, we needed $S_2$; when epsilon was 0.01, we needed $S_7$. If we can demonstrate that no matter HOW SMALL you make your epsilon, there is an $S_n$ from which the sequence is in that interval (to stay), we have shown convergence.
How would I actually do this? Well, $\sum_{n=0}^{k} \frac{1}{2^n}$ is a geometric series equal to $2^{-k}(2^{k+1}-1)$. There is a formula for this in your textbook, I presume.
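In case that formula isn't at hand, it can be recovered from the standard finite geometric series identity (nothing here is specific to this problem):

$$\sum_{n=0}^{k} r^{n} = \frac{1-r^{k+1}}{1-r} \quad (r \neq 1),$$

so with $r = \tfrac{1}{2}$:

$$\sum_{n=0}^{k} \frac{1}{2^{n}} = \frac{1 - 2^{-(k+1)}}{1 - \tfrac{1}{2}} = 2 - 2^{-k} = 2^{-k}\left(2^{k+1}-1\right).$$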
If we want $2^{-k}(2^{k+1}-1) = 2 - 2^{-k} > 2 - \epsilon$, which is the same as $2^{-k} < \epsilon$, we can solve algebraically to get $k> -\frac{\log{\epsilon}}{\log{2}}$. If we set $\epsilon=0.01$, we get $k>6.64...$. This means that we need at least $S_7$, as I mentioned above. Either way, the mere existence of such a bound shows that for every $\epsilon$ we can find a sufficiently large $k$.
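As a sanity check (again, just an illustrative sketch, with function names of my own invention), the closed-form threshold $k > -\frac{\log{\epsilon}}{\log{2}}$ can be compared against a brute-force search for the first partial sum exceeding $2 - \epsilon$:

```python
import math

def k_from_formula(eps):
    """Smallest integer k satisfying k > -log(eps)/log(2)."""
    return math.floor(-math.log(eps) / math.log(2)) + 1

def k_brute_force(eps):
    """First index k whose partial sum 1 + 1/2 + ... + 1/2^k exceeds 2 - eps."""
    partial, k = 0.0, -1
    while partial <= 2 - eps:
        k += 1
        partial += 1.0 / 2**k
    return k

# The two approaches should agree for each epsilon we try.
for eps in [0.5, 0.1, 0.01, 0.001]:
    assert k_from_formula(eps) == k_brute_force(eps)
print("formula matches brute force")
```

For $\epsilon = 0.01$ both give $k = 7$, matching the eight-term count above.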
Why does this suffice, conceptually? In order to argue that the sequence doesn't converge to a particular point, you would need to exhibit some distance $\epsilon$ such that the sequence keeps ending up at least that far from the point. Yet no matter how small you make that distance, it is not small enough, for the sequence gets into that interval and remains there.