One handles the errors with a probability model. In this case, the typical error in a distance measurement is likely proportional to the distance itself. The errors themselves are often thought of as the accumulation of many small, nearly independent errors, which lets one invoke the Central Limit Theorem and suppose, at least as a reasonable hypothesis, that the errors are normally distributed. We usually assume there's no systematic bias in the errors: they should average out to zero in principle. A simple model is obtained by supposing the errors are independent of one another. (Other suppositions can be treated, depending on how the measurements were made, but it quickly gets complicated.)
This leaves us with just three unknowns to estimate: the true coordinates $(x,y)$ of your location and the precision of the relative errors, usually expressed as their (common) standard deviation $\sigma$. Your data consist of $n$ measured distances to the known locations $(x_i, y_i)$, say $d_i,\ i = 1, \ldots, n$. Mathematically, these assumptions translate to the following. The probability density of observing $d_i$ equals
$$\frac{1}{\sqrt{2 \pi}\, \sigma\, \delta_i} \exp \left( -\frac{(d_i - \delta_i)^2}{2 \sigma^2 \delta_i^2} \right)$$
where $\delta_i = \delta_i(x,y) = \sqrt{(x - x_i)^2 + (y - y_i)^2}$.
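To make the connection with the relative-error assumption explicit (a small step left implicit above): if the relative error $(d_i - \delta_i)/\delta_i$ is taken to be normal with mean $0$ and standard deviation $\sigma$, then

$$d_i \sim N\!\left(\delta_i,\ \sigma^2 \delta_i^2\right),$$

and the expression above is just the normal density with mean $\delta_i$ and standard deviation $\sigma\,\delta_i$, evaluated at $d_i$.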
The probability density of your data $(d_1, d_2, \ldots, d_n)$ is the product of these individual densities. Viewed as a function of the unknown parameters, this is the likelihood.
One reasonably tractable estimator maximizes the likelihood (as a function of the unknown parameters $x$, $y$, and $\sigma$). This is usually done by minimizing the negative logarithm of the likelihood, which will be a sum of terms, one for each data value. It's nonlinear and a little nasty, but we know geometrically that there will be at least one global optimum and, quite likely, exactly one. Many methods exist to solve this, available in statistical packages, numerical optimization routines, and general-purpose math software like Mathematica.
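Written out, the objective to minimize is

$$-\log L(x, y, \sigma) = \sum_{i=1}^{n}\left[\log\!\left(\sqrt{2\pi}\,\sigma\,\delta_i(x,y)\right) + \frac{\left(d_i - \delta_i(x,y)\right)^2}{2\,\sigma^2\,\delta_i(x,y)^2}\right].$$

Here is a minimal sketch of what such a fit might look like in Python, assuming `numpy` and `scipy` are available; the beacon coordinates, the measured distances, and names like `neg_log_likelihood` are made up for illustration, and any other general-purpose optimizer would do just as well.

```python
# A minimal sketch of the maximum likelihood fit described above,
# assuming numpy/scipy. All data below are toy values for illustration.
import numpy as np
from scipy.optimize import minimize

# Known locations (x_i, y_i) and measured distances d_i -- illustrative numbers.
known_points = np.array([[0.0, 0.0], [10.0, 0.0], [0.0, 10.0], [8.0, 9.0]])
measured_d   = np.array([7.2, 7.9, 7.5, 4.1])

def neg_log_likelihood(params):
    """Negative log likelihood for (x, y, log sigma).

    Optimizing over log(sigma) keeps sigma positive without constraints.
    Each term is -log of the density given in the text:
        (1 / (sqrt(2 pi) sigma delta_i)) * exp(-(d_i - delta_i)^2 / (2 sigma^2 delta_i^2))
    """
    x, y, log_sigma = params
    sigma = np.exp(log_sigma)
    delta = np.hypot(x - known_points[:, 0], y - known_points[:, 1])  # delta_i(x, y)
    resid = (measured_d - delta) / (sigma * delta)                    # standardized relative errors
    return np.sum(np.log(np.sqrt(2 * np.pi) * sigma * delta) + 0.5 * resid**2)

# Start at the centroid of the known points with a modest relative-error guess.
x0 = np.array([known_points[:, 0].mean(), known_points[:, 1].mean(), np.log(0.05)])
result = minimize(neg_log_likelihood, x0, method="BFGS")
x_hat, y_hat, sigma_hat = result.x[0], result.x[1], np.exp(result.x[2])
print(x_hat, y_hat, sigma_hat)
```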
Of course the optimal arguments $(x,y)$ tell you the most likely location. The optimal value of $\sigma$ estimates the typical relative error in the distance measurements: you can check that it's a reasonable value. There are ways to extract confidence intervals for all three parameters (from the Hessian of the log likelihood). A joint confidence region for $(x,y)$ gives you an ellipse in which the true point is likely to lie. Statistical packages will give you this information. If you are doing this by hand, you can assess any other candidate solution $(x',y')$, such as the intersection of a few circles, by evaluating the likelihood there and comparing it to the optimum likelihood. If the log likelihoods differ by less than about $2$, the candidate solution is consistent with the data.
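Continuing the illustrative sketch above (it reuses the hypothetical `neg_log_likelihood` and `result` objects), here is one rough way to pull standard errors out of the Hessian and to check a hand-picked candidate point by its log-likelihood deficit. Note that `result.hess_inv` from BFGS is only an approximation to the exact inverse Hessian, so treat these as ballpark figures.

```python
# Rough standard errors from the (approximate) inverse Hessian of the
# negative log likelihood returned by BFGS.
cov_approx = result.hess_inv                      # approx. covariance of (x, y, log sigma)
std_errors = np.sqrt(np.diag(cov_approx))
print("approx. standard errors for (x, y, log sigma):", std_errors)

# Compare a hand-picked candidate (x', y'), e.g. a circle intersection,
# against the optimum by the difference in log likelihoods. Sigma is
# reused from the fit; a more careful check would re-maximize over sigma.
candidate = np.array([4.0, 5.0, result.x[2]])     # hypothetical (x', y'), illustrative numbers
log_lik_deficit = neg_log_likelihood(candidate) - result.fun
print("log-likelihood deficit of candidate:", log_lik_deficit)
# A deficit of less than about 2 means the candidate is consistent with the data.
```

The $2 \times 2$ block of `cov_approx` corresponding to $(x,y)$ is what defines the confidence ellipse mentioned above.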
BTW, as you've seen, you're practically forced to take a statistical approach. With real data you find that some circles don't even intersect and that overall the set of data has little or no internal consistency. You have to model the error somehow.