Sure, there are many variations on least-squares approximation.
Here's an engineering answer (not really a pure math answer): on one project at my company, we had a thermal model approximating the transfer function from measured power to device temperature. The cost of error in the positive direction (underestimating temperature) was worse than the cost of the same magnitude of error in the negative direction (overestimating temperature), so we used a cost function of K * error^2, where K was 1 for negative temperature error and greater than 1 (e.g. 1.5 or 2) for positive temperature error.
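A minimal sketch of that idea in Python, using synthetic data as a hypothetical stand-in for the power/temperature measurements (the model form, constants, and data here are all illustrative assumptions, not the project's actual numbers):

```python
import numpy as np
from scipy.optimize import minimize

# Synthetic stand-in for measured power -> device temperature data.
rng = np.random.default_rng(0)
power = np.linspace(0.0, 10.0, 50)
temp = 25.0 + 3.2 * power + rng.normal(0.0, 1.0, power.size)

K = 2.0  # penalty multiplier for underestimates (positive error)

def asymmetric_cost(params):
    a, b = params
    error = temp - (a + b * power)  # positive => model underestimates temperature
    weights = np.where(error > 0, K, 1.0)
    return np.sum(weights * error**2)

fit = minimize(asymmetric_cost, x0=[0.0, 1.0])
a, b = fit.x
# Relative to an ordinary least-squares fit, the asymmetric penalty
# biases the fitted line slightly upward, so fewer points end up
# underestimated.
```

The same trick works with any fitting routine that accepts an arbitrary scalar cost, since the asymmetric weighting just replaces the usual sum of squared residuals.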
We thought about using more complicated mappings (e.g. penalizing a 10-degree underestimate even more steeply than the quadratic already does) but didn't want to go there. I assume this has some analogue to a utility function (e.g. the expected monetary gain of a system with random outcomes maps nonlinearly to "happiness" or "utility"), where the nonlinearity is introduced intentionally.
You could do something similar when approximating functions with polynomials: a standard least-squares fit weights error equally everywhere, but there may be regions of the function (e.g. at the ends or at the center of the interval) where minimizing error matters more or less.
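One way to sketch that is with the `w` argument of `np.polyfit`, which minimizes the sum of `(w * residual)**2`, so larger weights make a point's error count more (the function, degree, and weight values below are arbitrary choices for illustration):

```python
import numpy as np

x = np.linspace(-1.0, 1.0, 41)
y = np.exp(x)  # function to approximate with a cubic

# Emphasize the interval ends: np.polyfit minimizes sum((w*(y - p(x)))**2),
# so points with larger w contribute more to the cost.
w = np.where(np.abs(x) > 0.8, 5.0, 1.0)

p_uniform = np.polyfit(x, y, 3)
p_weighted = np.polyfit(x, y, 3, w=w)

# The weighted fit trades a little interior accuracy for smaller
# error near the endpoints x = -1 and x = 1.
end_err_uniform = abs(np.polyval(p_uniform, 1.0) - np.exp(1.0))
end_err_weighted = abs(np.polyval(p_weighted, 1.0) - np.exp(1.0))
```

This is the discrete analogue of choosing a weight function in a continuous weighted-least-squares (or orthogonal-polynomial) approximation.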