If $\mu^{*}$ is an outer measure, then we define the measurable sets in terms of $\mu^{*}$; the measure is then defined to be the restriction of the outer measure to the measurable sets.
To be more explicit: if $\mu^{*}$ is an outer measure defined on all subsets of a set $X$, then we say that a set $E$ is $\mu^{*}$-measurable if and only if for every $A\subseteq X$,
$$\mu^{*}(A) = \mu^{*}(A\cap E) + \mu^{*}(A\cap E').$$
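It may help to note that one of the two inequalities here is automatic: since $A = (A\cap E)\cup(A\cap E')$, subadditivity of the outer measure gives $\mu^{*}(A) \le \mu^{*}(A\cap E) + \mu^{*}(A\cap E')$ for every $A$ and every $E$. So $E$ is $\mu^{*}$-measurable precisely when the reverse inequality
$$\mu^{*}(A) \geq \mu^{*}(A\cap E) + \mu^{*}(A\cap E')$$
holds for every $A$, and this is the inequality one actually checks in practice.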
As Halmos says in his book Measure Theory,
It is rather difficult to get an intuitive understanding of the meaning of $\mu^{*}$-measurability except through familiarity with its implications.
Once you have the definition of $\mu^{*}$-measurability, you let $S$ be the collection of all $\mu^{*}$-measurable sets, and you define the measure $\mu$ on $S$ by $\mu(E) = \mu^{*}(E)$ for all $E\in S$.
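Carathéodory's theorem is what makes this definition worthwhile: the collection $S$ is a $\sigma$-algebra, and the restriction $\mu$ is countably additive on it, that is,
$$\mu\left(\bigcup_{n=1}^{\infty} E_n\right) = \sum_{n=1}^{\infty} \mu(E_n)$$
for any sequence of pairwise disjoint sets $E_n\in S$. So the restriction really is a measure, not just a set function.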
In particular, this holds for Lebesgue measure: if $E$ is Lebesgue measurable, then the Lebesgue measure of $E$ equals the outer measure of $E$, because the Lebesgue measure of $E$ is defined to be the outer measure of $E$. This holds for every Lebesgue measurable set, not just for sets of measure $0$.
The point of the theorem you state earlier is that having outer measure zero implies that the set is measurable; that's the nontrivial part of the statement (not that the Lebesgue measure of the set will then be zero).
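In fact, the verification is short once you use the one-inequality remark above. Suppose $\mu^{*}(E) = 0$. For any $A$, monotonicity gives $\mu^{*}(A\cap E)\le\mu^{*}(E) = 0$ and $\mu^{*}(A\cap E')\le\mu^{*}(A)$, so
$$\mu^{*}(A\cap E) + \mu^{*}(A\cap E') \le 0 + \mu^{*}(A) = \mu^{*}(A),$$
which is exactly the nontrivial inequality; hence $E$ is $\mu^{*}$-measurable, and then $\mu(E) = \mu^{*}(E) = 0$ by definition.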