It seems that I am a wee bit late to this particular party, but I thought I'd show yet another way to solve the OP's algebraic system, which I'll recast here in different notation:
$$\begin{aligned}
w_1+w_2+w_3&=m_0\\
w_1 x_1+w_2 x_2+w_3 x_3&=m_1\\
w_1 x_1^2+w_2 x_2^2+w_3 x_3^2&=m_2\\
w_1 x_1^3+w_2 x_2^3+w_3 x_3^3&=m_3\\
w_1 x_1^4+w_2 x_2^4+w_3 x_3^4&=m_4\\
w_1 x_1^5+w_2 x_2^5+w_3 x_3^5&=m_5
\end{aligned}$$
and the problem is to find the $x_i$ and the $w_i$ satisfying the system of equations.
The key here is to recognize that this is exactly the problem of recovering the nodes $x_i$ and weights $w_i$ of an $n$-point Gaussian quadrature rule with some weight function $w(u)$ and some support interval $[a,b]$, given the moments $m_j=\int_a^b w(u)\,u^j\,\mathrm{d}u,\quad j=0,\dots,2n-1$. Recall that $n$-point rules are designed to exactly integrate functions of the form $w(u)p(u)$, where $p(u)$ is a polynomial of degree at most $2n-1$; this is why we have $2n$ equations.
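(As a concrete instance: the moments $m_j=j!$ used in the worked example below are the moments of the Laguerre weight $w(u)=e^{-u}$ on $[0,\infty)$, as a one-liner confirms:

Integrate[Exp[-u] u^j, {u, 0, Infinity}, Assumptions -> j >= 0]
Gamma[1 + j]

and $\Gamma(j+1)=j!$ for nonnegative integer $j$.)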
As is well known, the nodes and weights of a Gaussian quadrature can be obtained if we know the orthogonal polynomials $P_k(u)$ associated with the weight function $w(u)$, because the nodes of an $n$-point Gaussian rule are the roots of the orthogonal polynomial $P_n(u)$. The first phase of the problem, then, is determining the orthogonal polynomials from the moments.
Luckily, in 1859(!), Chebyshev obtained a method for determining the recursion coefficients $a_k$, $b_k$ for the orthogonal polynomial recurrence
$$P_{k+1}(u)=(u-a_k)P_k(u)-b_k P_{k-1}(u),\qquad P_{-1}(u)=0,\quad P_0(u)=1$$
when given the $m_j$. Chebyshev's algorithm goes as follows: initialize the quantities
$$\sigma_{-1,l}=0,\quad l=1,\dots,2n-2$$
$$\sigma_{0,l}=m_l,\quad l=0,\dots,2n-1$$
$$a_0=\frac{m_1}{m_0}$$
$$b_0=m_0$$
and then perform the recursion
$$\sigma_{k,l}=\sigma_{k-1,l+1}-a_{k-1}\sigma_{k-1,l}-b_{k-1}\sigma_{k-2,l}$$
for $l=k,\dots,2n-k-1$ and $k=1,\dots,n-1$, from which the recursion coefficients for $k=1,\dots,n-1$ are given by
$$a_k=\frac{\sigma_{k,k+1}}{\sigma_{k,k}}-\frac{\sigma_{k-1,k}}{\sigma_{k-1,k-1}}$$
$$b_k=\frac{\sigma_{k,k}}{\sigma_{k-1,k-1}}$$
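To make the bookkeeping concrete, here is the first pass for the moments $m_j=j!$ of the worked example below: $a_0=m_1/m_0=1$, $b_0=m_0=1$, and
$$\sigma_{1,l}=\sigma_{0,l+1}-a_0\sigma_{0,l}-b_0\sigma_{-1,l}=m_{l+1}-m_l,\quad l=1,\dots,4$$
so that $\sigma_{1,1}=1$, $\sigma_{1,2}=4$, $\sigma_{1,3}=18$, $\sigma_{1,4}=96$, and thus
$$a_1=\frac{\sigma_{1,2}}{\sigma_{1,1}}-\frac{\sigma_{0,1}}{\sigma_{0,0}}=4-1=3,\qquad b_1=\frac{\sigma_{1,1}}{\sigma_{0,0}}=1$$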
I'll skip the details of how the algorithm was obtained, and will instead point you to Walter Gautschi's paper "On Generating Orthogonal Polynomials", where he discusses these things.
Once the $a_k$ and $b_k$ have been obtained, solving the original set of equations can be done through the Golub-Welsch algorithm: one solves a symmetric tridiagonal eigenproblem whose diagonal entries are the $a_k$ and whose off-diagonal entries are the $\sqrt{b_k}$ (the characteristic polynomial of this symmetric tridiagonal matrix is $P_n(x)$). The $x_i$ are the eigenvalues of this matrix, and the $w_i$ are obtained from the first components of the normalized eigenvectors by multiplying their squares by $m_0$.
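Explicitly, the matrix in question is the Jacobi matrix
$$J_n=\begin{pmatrix}a_0&\sqrt{b_1}&&\\\sqrt{b_1}&a_1&\ddots&\\&\ddots&\ddots&\sqrt{b_{n-1}}\\&&\sqrt{b_{n-1}}&a_{n-1}\end{pmatrix}$$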
I have been wholly theoretical up to this point, and you and most other people would rather have code to play with. I thus offer up the following Mathematica implementation of the theory discussed earlier:
(* Chebyshev's algorithm *)
chebAlgo[mom_?VectorQ, prec_: MachinePrecision] :=
 Module[{n = Quotient[Length[mom], 2], si = mom, ak, bk, np, sp, s, v},
  (* work at the precision of the moments, falling back to prec for exact input *)
  np = Precision[mom]; If[np === Infinity, np = prec];
  (* a_0 = m_1/m_0 and b_0 = m_0; code index k is offset by one from the formulas above, since Mathematica lists are 1-based *)
  ak[1] = mom[[2]]/First[mom]; bk[1] = First[mom];
  sp = PadRight[{First[mom]}, 2 n - 1];
  Do[
   sp[[k - 1]] = si[[k - 1]];
   Do[(* v = sigma[k-2, l] and s = sigma[k-1, l]; si picks up sigma[k, l] in place while sp saves the previous row *)
    v = sp[[j]];
    sp[[j]] = s = si[[j]];
    si[[j]] = si[[j + 1]] - ak[k - 1] s - bk[k - 1] v;
    , {j, k, 2 n - k + 1}];
   ak[k] = si[[k + 1]]/si[[k]] - sp[[k]]/sp[[k - 1]];
   bk[k] = si[[k]]/sp[[k - 1]];
   , {k, 2, n}];
  N[{Table[ak[k], {k, n}], Table[bk[k], {k, n}]}, np]
  ]
(* Golub-Welsch algorithm *)
golubWelsch[d_?VectorQ, e_?VectorQ] :=
 Transpose[
  (* weights: m_0 (= First[e]) times the squared first components of the normalized eigenvectors *)
  MapAt[(First[e] Map[First, #]^2) &,
   Eigensystem[
    (* Jacobi matrix: a_k on the diagonal, Sqrt[b_k] on the off-diagonals *)
    SparseArray[{Band[{1, 1}] -> d, Band[{1, 2}] -> Sqrt[Rest[e]],
      Band[{2, 1}] -> Sqrt[Rest[e]]}, {Length[d], Length[d]}]], {2}]]
(I note that the implementation here of Chebyshev's algorithm was optimized to use two vectors instead of a two-dimensional array.)
Let's try an example. Let $m_j=j!$ and take the system given earlier ($n=3$):
{d, e} = chebAlgo[Range[0, 5]!]
{{1., 3., 5.}, {1., 1., 4.}}
xw = golubWelsch[d, e]
{{6.2899450829374794, 0.010389256501586145}, {2.2942803602790467, 0.27851773356923976}, {0.4157745567834814, 0.7110930099291743}}
We have here the equivalence xw[[i, 1]] $=x_i$ and xw[[i, 2]] $=w_i$; let's see if the original set of equations is satisfied:
Chop[Table[Sum[xw[[j, 2]] xw[[j, 1]]^i, {j, 3}] - i!, {i, 0, 5}]]
{0, 0, 0, 0, 0, 0}
and they are.
(This example corresponds to generating the three-point Gauss-Laguerre rule.)
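(As an extra check on the three-term recurrence itself, we can rebuild $P_3(u)$ from the coefficients returned by chebAlgo[], here using the exact values of d and e. The helper orthoPoly[] below is mine, not part of the machinery above:

orthoPoly[a_?VectorQ, b_?VectorQ, u_] :=
 Module[{pm = 0, p = 1, pn},
  (* run P_{k+1} = (u - a_k) P_k - b_k P_{k-1}; b_0 is never used, since P_{-1} = 0 *)
  Do[pn = Expand[(u - a[[k]]) p - If[k == 1, 0, b[[k]]] pm];
   {pm, p} = {p, pn}, {k, Length[a]}];
  p]

orthoPoly[{1, 3, 5}, {1, 1, 4}, u]
-6 + 18 u - 9 u^2 + u^3

whose roots are precisely the nodes found above; compare gaussPolyGen[] further below.)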
As a final aside, the solution given by Aryabhata is an acceptable way of generating Gaussian quadrature rules from moments, though it requires $O(n^3)$ effort for solving the linear equations, as opposed to the $O(n^2)$ effort required for the combination of Chebyshev and Golub-Welsch. Hildebrand discusses this approach in his book Introduction to Numerical Analysis.
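To spell out the elimination involved: each node $x_i$ is a root of the monic polynomial $P_n(u)=u^n+c_{n-1}u^{n-1}+\cdots+c_0$, so $\sum_i w_i x_i^j P_n(x_i)=0$ for $j=0,\dots,n-1$, which in terms of the moments reads $\sum_{k=0}^{n-1}c_k m_{j+k}=-m_{j+n}$. For $n=3$ this is the Hankel system
$$\begin{pmatrix}m_0&m_1&m_2\\m_1&m_2&m_3\\m_2&m_3&m_4\end{pmatrix}\begin{pmatrix}c_0\\c_1\\c_2\end{pmatrix}=-\begin{pmatrix}m_3\\m_4\\m_5\end{pmatrix}$$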
Here is Aryabhata's proposal in Mathematica code (after having done the elimination of the appropriate variables in the background):
gaussPolyGen[mom_?VectorQ, t_] :=
 Module[{n = Quotient[Length[mom], 2]},
  (* solve the Hankel system for the monic coefficients, then assemble the polynomial Horner-style *)
  Expand[Fold[(#1 t + #2) &, 1, Reverse[LinearSolve[
     Apply[HankelMatrix, Partition[mom, n, n - 1]], -Take[mom, -n]]]]]]
and compare, using the earlier example:
gaussPolyGen[Range[0, 5]!, t]
-6 + 18*t - 9*t^2 + t^3
% == -6 LaguerreL[3, t] // Simplify
True
Having found the roots of the polynomial generated by gaussPolyGen[], one merely solves an appropriate Vandermonde linear system to get the weights.
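For $n=3$, that system is just the first three of the original equations with the $x_i$ now known:
$$\begin{pmatrix}1&1&1\\x_1&x_2&x_3\\x_1^2&x_2^2&x_3^2\end{pmatrix}\begin{pmatrix}w_1\\w_2\\w_3\end{pmatrix}=\begin{pmatrix}m_0\\m_1\\m_2\end{pmatrix}$$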
nodes = t /. NSolve[gaussPolyGen[Range[0, 5]!, t], t, 20]
{0.4157745567834791, 2.294280360279042, 6.2899450829374794}
weights = LinearAlgebra`VandermondeSolve[nodes, Range[0, 2]!]
{0.711093009929173, 0.27851773356924087, 0.010389256501586128}
The results here and from the previous method are comparable.
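Should the undocumented LinearAlgebra`VandermondeSolve[] be unavailable in your version, a plain LinearSolve[] on the explicit Vandermonde system displayed earlier does the same job, if a bit less efficiently:

weights = LinearSolve[Table[nodes^j, {j, 0, 2}], Range[0, 2]!]

The result agrees with the weights above to roundoff.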