Tags: c, algorithm, math, gaussian, determinants

Gaussian elimination without a result column to accelerate determinants


I'm working on a C library (for myself, code: https://github.com/BattlestarSC/matrixLibrary.git) to handle matrix functions. This is mostly a learning/practice activity.

One of my challenges is to compute the determinant of a matrix efficiently. As my current attempts have failed, I wanted to take a different approach. I was reading through this method from MIT documentation: Determinants, and it made a lot of sense.

The issue I'm having is how to get to that point. Gaussian elimination is presented as a method for multi-variable systems of equations, but my matrices are not built from equations and are therefore not part of a system. That is, the rows have no associated result values, so they don't fit the form used in that paper.

From this point, I'm at a loss as far as how to proceed with this method.

It makes a lot of sense to take the pivot from each set of equations as described in the MIT paper, but how should I set up my matrices to make that result valid?


Solution

  • When you perform Gaussian elimination, you swap rows and repeatedly subtract a multiple of one row from another to produce an upper triangular form.

    When you do this on a system of equations or an "augmented matrix", you do not use any information from the result column. The decisions about which rows to swap and which to subtract with what multiplier would be exactly the same no matter what numbers are in the result column.

    Because the "result column" is not used, you can perform the same procedure on a plain square matrix. Subtracting a multiple of one row from another doesn't change the determinant, and each row swap only flips its sign (equivalently, negate one row whenever you swap), so you end up with an upper triangular matrix with the same determinant as the original. Since the determinant of a triangular matrix is just the product of its diagonal entries, multiplying the pivots together gives the answer; a sketch of this appears after this answer.

    The MIT author calls a function lu to do this in the example near the start. It performs LU decomposition on the matrix, which returns the Gaussian-eliminated matrix as the 'U' factor: https://en.wikipedia.org/wiki/LU_decomposition.

    LU decomposition is pretty cool. It's like doing Gaussian elimination to solve all systems with the same "matrix part" all at once, which again you can do because the process doesn't need to see the result column at all; a sketch of such a factorization also appears after this answer.

    Starting with a matrix M, you get L and U such that LU = M. That means, if you want to solve:

    Mx = y

    ... where x and y are column vectors, you have:

    LUx = y

    Solve Lv = y, which is easy (just forward substitution) because L is lower-triangular. Then you have:

    Ux = v

    ... which is easy to solve (back substitution) because U is upper-triangular. Both substitution steps are sketched in the last example below.
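
For concreteness, here is a minimal sketch of the determinant-by-elimination idea in C. It is not taken from the library in the question: the function name det_gauss and the row-major flat-array layout are assumptions made for illustration.

    #include <math.h>
    #include <stddef.h>

    /* Determinant by Gaussian elimination with partial pivoting.
     * `a` is an n x n matrix in row-major order and is modified in place.
     * Hypothetical sketch -- not an API from the library in question. */
    double det_gauss(double *a, size_t n)
    {
        double det = 1.0;

        for (size_t col = 0; col < n; col++) {
            /* Partial pivoting: pick the row with the largest entry in
             * this column, for numerical stability. */
            size_t pivot = col;
            for (size_t row = col + 1; row < n; row++) {
                if (fabs(a[row * n + col]) > fabs(a[pivot * n + col]))
                    pivot = row;
            }
            if (a[pivot * n + col] == 0.0)
                return 0.0;  /* whole column is zero: singular matrix */

            if (pivot != col) {
                /* Swap rows; each swap flips the determinant's sign. */
                for (size_t k = 0; k < n; k++) {
                    double tmp = a[col * n + k];
                    a[col * n + k] = a[pivot * n + k];
                    a[pivot * n + k] = tmp;
                }
                det = -det;
            }

            /* Eliminate below the pivot; row subtraction leaves the
             * determinant unchanged. */
            for (size_t row = col + 1; row < n; row++) {
                double factor = a[row * n + col] / a[col * n + col];
                for (size_t k = col; k < n; k++)
                    a[row * n + k] -= factor * a[col * n + k];
            }

            /* The determinant of a triangular matrix is the product of
             * its diagonal, so accumulate each pivot as it is fixed. */
            det *= a[col * n + col];
        }
        return det;
    }

This is O(n^3), versus O(n!) for cofactor expansion, which is why elimination is the usual route to a fast determinant. As a quick check, det_gauss on {1, 2, 3, 4} with n = 2 returns -2.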
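Here is a sketch of the factorization step itself in the same assumed layout. This is the Doolittle variant without pivoting, so it is illustrative only: a real implementation would pivot (as the determinant sketch above does) and record the row permutation. lu_decompose is a hypothetical name, not the lu routine from the MIT paper.

    #include <stddef.h>

    /* Doolittle LU factorization without pivoting: fills L (unit
     * diagonal, zeros above it) and U (zeros below the diagonal) so
     * that L * U = A. All matrices are n x n, row-major.
     * Returns 0 on success, -1 on a zero pivot. Hypothetical sketch. */
    int lu_decompose(const double *A, double *L, double *U, size_t n)
    {
        for (size_t i = 0; i < n; i++) {
            /* Row i of U: row i of A after elimination by the rows
             * above it. */
            for (size_t k = 0; k < n; k++) {
                if (k < i) { U[i * n + k] = 0.0; continue; }
                double sum = 0.0;
                for (size_t j = 0; j < i; j++)
                    sum += L[i * n + j] * U[j * n + k];
                U[i * n + k] = A[i * n + k] - sum;
            }
            if (U[i * n + i] == 0.0)
                return -1;  /* zero pivot: row swaps would be needed */

            /* Column i of L: the elimination multipliers. */
            for (size_t k = 0; k < n; k++) {
                if (k < i) {
                    L[k * n + i] = 0.0;
                } else if (k == i) {
                    L[k * n + i] = 1.0;
                } else {
                    double sum = 0.0;
                    for (size_t j = 0; j < i; j++)
                        sum += L[k * n + j] * U[j * n + i];
                    L[k * n + i] = (A[k * n + i] - sum) / U[i * n + i];
                }
            }
        }
        return 0;
    }

Note that the diagonal of U holds exactly the pivots from the elimination, so the product of U's diagonal (times -1 per row swap, if you add pivoting) is another way to get the determinant.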
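Finally, the two substitution steps. lu_solve is again a hypothetical name; it assumes L and U came from a factorization like the one above, with nonzero diagonals and no row permutation to undo.

    #include <stddef.h>

    /* Solve M x = y given M = L * U (both n x n, row-major).
     * Hypothetical sketch. */
    void lu_solve(const double *L, const double *U, const double *y,
                  double *x, size_t n)
    {
        /* Forward substitution: L v = y. Store v in x for now. */
        for (size_t i = 0; i < n; i++) {
            double sum = y[i];
            for (size_t j = 0; j < i; j++)
                sum -= L[i * n + j] * x[j];
            x[i] = sum / L[i * n + i];
        }

        /* Back substitution: U x = v, overwriting v with the answer. */
        for (size_t i = n; i-- > 0; ) {
            double sum = x[i];
            for (size_t j = i + 1; j < n; j++)
                sum -= U[i * n + j] * x[j];
            x[i] = sum / U[i * n + i];
        }
    }

Once you have L and U, you can solve M x = y for any number of different y vectors without redoing the elimination, which is the "all at once" point above.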