4

A matrix determinant can (naively) be computed in $O(n!)$ steps, or with a proper LU decomposition in $O(n^3)$ steps. This assumes that all the matrix elements are constants. If, however, the matrix elements are polynomials (say, univariate of maximum degree $p$), then at each step of the LU decomposition an element is multiplied by another element, producing (on average) ever larger polynomials. Each step therefore takes longer and longer, so is the cost of performing the decomposition still polynomial?
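For concreteness, here is a minimal sketch of the setup (Python with SymPy; the library choice and the sizes $n = 4$, $p = 2$ are mine, purely illustrative). It also shows that the final determinant has degree at most $np$, even though intermediate products grow:

```python
# Minimal sketch (SymPy; sizes are illustrative) of the determinant of a
# matrix whose entries are random univariate polynomials of degree <= p.
import random
import sympy as sp

t = sp.symbols('t')
n, p = 4, 2  # illustrative sizes

# n x n matrix with random degree-p polynomial entries
M = sp.Matrix(n, n, lambda i, j: sum(random.randint(1, 9) * t**k
                                     for k in range(p + 1)))

D = sp.expand(M.det(method='berkowitz'))  # division-free determinant
print(sp.degree(D, t))                    # at most n*p = 8
```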

EDIT: To clarify my reasoning a bit: if polynomial multiplication (using the FFT, as J.M. suggests in the comments) takes $O(m \log m)$ time, and we must perform $O(n^3)$ operations to get the LU decomposition, then the polynomials could effectively double in degree at each multiplication*. The running time would look something like

$$ \approx O \left( \sum_{k=1}^{n^3} (p \ln p)^{2k} \right). $$

* (it doesn't quite do that, and this is where I'm stuck)

  • 0
If it takes polynomial time to multiply two polynomials (probably less if you employ special techniques like FFT), why wouldn't an LU decomposition of a matrix of polynomials also take polynomial time? (2010-08-28)

1 Answer

5

Suppose each entry of your $n \times n$ matrix is a polynomial of degree $d$ in the variable $t$. Appealing to the cofactor expansion, we see that the determinant will be a polynomial in $t$ of degree at most $dn$; let's call it $D(t)$. So you can do the following: evaluate the determinant at the $dn+1$ points $t=0,1,\ldots,dn$ (a polynomial of degree $dn$ has $dn+1$ coefficients). You will thus know $D(0),D(1),\ldots,D(dn)$ and can find the coefficients of $D$ by interpolation.

This involves:

(i) $dn+1$ determinant evaluations, each of which takes $O(n^3)$ operations.

(ii) Interpolating a polynomial of degree $dn$. This can be done by solving a $(dn+1) \times (dn+1)$ system of equations (see http://en.wikipedia.org/wiki/Polynomial_interpolation under "Constructing the interpolating polynomial"). There are algorithms which will do this in $O(dn \log^2 dn)$ operations.

Your final number of operations will be $O(d n^4 + dn \log^2 dn)$.
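As a concrete illustration of this scheme, here is a minimal sketch (Python/NumPy; the function name `det_poly` and the coefficient-array convention are mine, not part of the answer) that performs stages (i) and (ii) directly:

```python
# Sketch of the evaluate-then-interpolate determinant (NumPy; names are mine).
import numpy as np

def det_poly(M_coeffs, d, n):
    """M_coeffs[i][j] is the coefficient list of entry (i, j), low order first,
    degree <= d. Returns the coefficients of D(t), which has degree <= d*n."""
    pts = np.arange(d * n + 1)  # d*n + 1 sample points: t = 0, 1, ..., d*n
    vals = [np.linalg.det(np.array(
                [[np.polyval(M_coeffs[i][j][::-1], x)  # polyval wants high order first
                  for j in range(n)] for i in range(n)]))
            for x in pts]
    V = np.vander(pts, increasing=True)         # Vandermonde system from (ii)
    return np.linalg.solve(V, np.array(vals))   # coefficients of D, low order first
```

For modest $dn$ this is fine; for larger degrees one would want better-conditioned interpolation points (or the FFT-friendly points discussed in the comments below).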

I would be interested in knowing if it's possible to do this faster.

Update: I corrected an error and incorporated a suggestion of J.M. in the comments.

  • 0
P.S. One can, of course, get a better running time by using the algorithms for matrix inversion which are faster than $O(n^3)$. (2010-08-28)
  • 0
alex: If the matrix is structured, then yes, he can do better than O(n³); but if you're thinking of things like Strassen's, they're not practical unless he has a huge matrix of polynomials (which I doubt he has). (2010-08-28)
  • 0
The other thing: the matrix associated with polynomial interpolation is a Vandermonde matrix, and it's just wrong *not* to exploit the structure; one can already consider O((dn)²) slow, and if one takes *special* ;) interpolation points, O((dn) log(dn)) is certainly possible. (2010-08-28)
  • 0
@J. Mangaldan - can you provide a reference for the choice of interpolation points? This is new to me... (2010-08-29)
  • 0
The O(n²) method I am alluding to is the Björck-Pereyra algorithm: http://www.ams.org/journals/mcom/1970-24-112/S0025-5718-1970-0290541-1/S0025-5718-1970-0290541-1.pdf ; the hint for O(n log n) is the observation that the fast Fourier transform is nothing more than a very efficient way to multiply a Vandermonde matrix constructed from the roots of unity with a vector (see the sketch after these comments). (2010-08-29)
  • 0
In fact, the best thing about the so-called Fourier matrix is that it is Vandermonde, and that it can be normalized so that it is unitary! (2010-08-29)
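Prompted by the roots-of-unity observation above, here is a minimal sketch (NumPy; the function name `det_poly_fft` and the conventions are mine, not from the comments) of sampling $D$ at roots of unity and recovering its coefficients with a single inverse FFT, making the interpolation step $O(dn \log dn)$:

```python
# Sketch (prompted by J.M.'s comment): sample D at roots of unity and
# recover its coefficients with one inverse FFT.
import numpy as np

def det_poly_fft(M_coeffs, d, n):
    m = d * n + 1                               # number of sample points
    w = np.exp(-2j * np.pi * np.arange(m) / m)  # conjugate m-th roots of unity
    vals = np.array([np.linalg.det(np.array(
                [[np.polyval(M_coeffs[i][j][::-1], x) for j in range(n)]
                 for i in range(n)])) for x in w])
    # With these sample points, vals is exactly the DFT (NumPy convention) of
    # the coefficient vector of D, so one inverse FFT recovers it.
    return np.fft.ifft(vals).real  # coefficients are real; imag parts are round-off
```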