I have an application that wants controllable random functions from $\mathbb{Z}^2$ and $\mathbb{Z}^3$ to $\{0, 1, \ldots, 2^{32}-1\}$, where by controllable I basically mean seedable by some parameters (say, on the order of 3 to 5 32-bit integers) such that the same seeds always produce the same function. The most obvious way of doing this (for the two-dimensional case, say) would seem to be computing the value at a point $(x,y)$ by using $x$, $y$, and the seed parameters as the seed for something like an LFSR generator or a Mersenne Twister, running the RNG for some fixed number of steps, and taking the resulting value as the value of the function at that point.
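To make the construction concrete, here is a minimal sketch in Python of the same idea; the folding constant and the MurmurHash3-style mixing rounds are my own illustrative choices, not anything canonical, and the mixing loop stands in for "run the RNG for some fixed number of steps":

```python
MASK32 = 0xFFFFFFFF

def f(x, y, seeds, rounds=8):
    """Fold x, y, and the seed words into a 32-bit state, then apply
    a fixed number of mixing rounds (constants borrowed from
    MurmurHash3's finalizer; any good avalanche mixer would do).
    Constant time, and a pure function of its arguments."""
    state = x & MASK32
    state = (state * 0x9E3779B9 + (y & MASK32)) & MASK32
    for s in seeds:
        state = (state * 0x9E3779B9 + (s & MASK32)) & MASK32
    # Each round is a bijection on 32-bit states, so distinct
    # inputs to the folding stage can never collide here.
    for _ in range(rounds):
        state ^= state >> 16
        state = (state * 0x85EBCA6B) & MASK32
        state ^= state >> 13
        state = (state * 0xC2B2AE35) & MASK32
    return state
```

Because every step (multiply by an odd constant mod $2^{32}$, xorshift) is invertible, the whole pipeline is a bijection of the folded state for fixed seeds, so the open question is only how *correlated* nearby outputs are, not whether they collide.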
My question is, how can I be certain that this procedure won't keep too much correlation between adjacent 'seed points', and is there either a straightforward analysis or even just some general guideline for how many iterations would be necessary to eliminate that correlation? My first back-of-the-envelope guess would be that each iteration roughly doubles the decorrelation between given seed values, so that 32 iterations would be necessary to achieve the requisite decorrelation over a range of $2^{32}$ values (and in practice I'd probably double it to 64 iterations), but that's strictly a guess and any proper analysis would be welcome!
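Short of a proper analysis, one empirical sanity check on the iteration count is to measure avalanche directly: evaluate a candidate function at adjacent seed points and count how many output bits differ. For a well-decorrelated 32-bit output, adjacent points should differ in about 16 bits on average. A sketch, using the same hypothetical MurmurHash3-style mixer as the hypothetical construction above (again my own choice, for illustration only):

```python
MASK32 = 0xFFFFFFFF

def mix32(h):
    # One MurmurHash3-finalizer-style avalanche round (illustrative).
    h ^= h >> 16
    h = (h * 0x85EBCA6B) & MASK32
    h ^= h >> 13
    h = (h * 0xC2B2AE35) & MASK32
    h ^= h >> 16
    return h

def f(x, y, seeds, rounds):
    state = x & MASK32
    state = (state * 0x9E3779B9 + (y & MASK32)) & MASK32
    for s in seeds:
        state = (state * 0x9E3779B9 + (s & MASK32)) & MASK32
    for _ in range(rounds):
        state = mix32(state)
    return state

def avg_flipped_bits(rounds, n=1000, seeds=(1, 2, 3)):
    """Average Hamming distance between outputs at horizontally
    adjacent points; ~16 out of 32 indicates good decorrelation."""
    total = 0
    for x in range(n):
        total += bin(f(x, 0, seeds, rounds) ^ f(x + 1, 0, seeds, rounds)).count("1")
    return total / n
```

Plotting this statistic as a function of `rounds` gives a crude picture of how quickly the correlation between adjacent seed points dies off for a given mixer, though of course passing this one test is necessary rather than sufficient.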
Edited for clarification: To further outline the issue, I may be sampling this random function $f$ (for given seed parameters) at arbitrary points, and I need those samples to be identical between passes; for instance, if a first application computes $f(0, 0)$, $f(437, 61)$, $f(-23, 129)$, and then $f(5,3)$, and a second (potentially concurrent) application computes $f(1,0)$ and then $f(5,3)$, both passes need to find the same value of $f$ at $(5,3)$. The points sampled may also be of arbitrary magnitude, so I'd like the evaluation to take constant time (in particular, evaluating $f(x,y)$ shouldn't take time linear in $|x|+|y|$).
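The determinism and order-independence requirements are satisfied by any construction that is a pure function of $(x, y)$ and the seeds. As one illustration (my own sketch, not a recommendation), hashing the packed coordinates and seed words with SHA-256 and keeping 32 bits of the digest is constant-time in $x$ and $y$ and trivially agrees across passes, using the exact sample points from the example above:

```python
import hashlib
import struct

def f(x, y, seeds):
    """Hash the packed signed coordinates and unsigned 32-bit seed
    words; take the first 32 bits of the digest. Overkill, but a
    cryptographic hash removes any correlation worry."""
    data = struct.pack("<2q", x, y) + struct.pack(f"<{len(seeds)}I", *seeds)
    digest = hashlib.sha256(data).digest()
    return struct.unpack("<I", digest[:4])[0]

# Two "passes" sampling in different orders must agree at (5, 3):
seeds = (7, 11, 13)
pass1 = [f(0, 0, seeds), f(437, 61, seeds), f(-23, 129, seeds), f(5, 3, seeds)]
pass2 = [f(1, 0, seeds), f(5, 3, seeds)]
assert pass1[-1] == pass2[-1]
```

This is essentially the counter-based approach: rather than iterating an RNG from a seed, treat $(x, y, \text{seeds})$ as the input to a fixed mixing/hash function, so every evaluation is independent and constant-time by construction.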