Matrix inversion is a central step in data analysis: the quality of the inverse solution depends on how well we can invert the kernel matrix relating the data to the model.
d = Gm, where "d" is the N-dimensional column vector containing the data, "m" is the M-dimensional column vector containing the model parameters which we seek to invert for, and "G" is the N x M kernel matrix that maps the model vector into the data vector.
=> m = Ad, where A is the (generalized) inverse of the matrix G.
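A minimal sketch of this forward/inverse pair, using NumPy (the particular G, d, and model values here are illustrative, not from the text):

```python
import numpy as np

# Hypothetical forward problem d = G m with N = 3 data and M = 2 model parameters
G = np.array([[1.0, 0.0],
              [1.0, 1.0],
              [1.0, 2.0]])
m_true = np.array([2.0, -1.0])
d = G @ m_true            # forward modelling: map model into data

# G is rectangular, so we use the generalized (least-squares) inverse to recover m
m_est, *_ = np.linalg.lstsq(G, d, rcond=None)
print(m_est)
```

For noise-free, full-rank problems like this one, the least-squares estimate recovers the true model exactly.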
There are several matrix decomposition techniques. These techniques are useful not only for inverse problems, but also for applications such as principal component decomposition, which is widely used in seismic attribute studies.
When a matrix pre-multiplies a vector, the resulting vector can be regarded as a linearly transformed version of the original vector; matrix multiplication is therefore a linear transformation. Any matrix can generally be decomposed into a product of other matrices, each representing a simpler transformation.
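This linearity can be checked numerically. In the sketch below (the matrix and vectors are made up for illustration), a matrix scales a vector, and the defining property G(au + bv) = aGu + bGv holds:

```python
import numpy as np

# A matrix pre-multiplying a vector yields a linearly transformed vector
G = np.array([[2.0, 0.0],
              [0.0, 3.0]])          # scales the x-component by 2 and the y-component by 3
v = np.array([1.0, 1.0])
print(G @ v)                        # the transformed vector

# Linearity: transforming a linear combination equals combining the transforms
u = np.array([4.0, -1.0])
lhs = G @ (2.0 * u + 3.0 * v)
rhs = 2.0 * (G @ u) + 3.0 * (G @ v)
print(np.allclose(lhs, rhs))
```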
The decomposition of a general rectangular matrix requires the use of singular value decomposition, or SVD (Lanczos, 1961). The SVD factors any rectangular matrix A of m rows and n columns into a product of three matrices with useful properties:
A = U L V’
Here, L is the m x n rectangular diagonal matrix containing the p singular values (or principal values) of the matrix A, and U (m x m) and V (n x n) are square unitary matrices. The number of non-trivial (non-zero) singular values, p, is called the rank of the matrix.
As an example, consider
A = (10 2; -10 2)
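The SVD of this matrix can be computed with NumPy; note that numpy.linalg.svd returns U, the singular values as a 1-D array, and V' (the transpose of V) directly:

```python
import numpy as np

# The 2x2 example matrix from the text: A = (10 2; -10 2)
A = np.array([[ 10.0, 2.0],
              [-10.0, 2.0]])

U, s, Vh = np.linalg.svd(A)
print("singular values:", s)       # for this A: sqrt(200) and sqrt(8)

# Rebuild A = U L V' to check the decomposition
L = np.diag(s)
print(np.allclose(A, U @ L @ Vh))
```

Both singular values are non-zero, so this A has rank p = 2 and an exact inverse exists.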
A unitary matrix is one whose inverse is simply its conjugate transpose:
inv(U) = (U')*
inv(V) = (V')*
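For the real-valued example above, the conjugate transpose reduces to the plain transpose, and this property makes the inverse of A cheap to assemble from its SVD factors (a sketch continuing the 2x2 example; the formula A^-1 = V inv(L) U' is the standard SVD-based inverse):

```python
import numpy as np

A = np.array([[ 10.0, 2.0],
              [-10.0, 2.0]])
U, s, Vh = np.linalg.svd(A)

# For real unitary (orthogonal) matrices, the inverse is just the transpose
print(np.allclose(np.linalg.inv(U), U.T))
print(np.allclose(np.linalg.inv(Vh), Vh.T))

# Hence the inverse follows directly: A^-1 = V inv(L) U'
A_inv = Vh.T @ np.diag(1.0 / s) @ U.T
print(np.allclose(A_inv, np.linalg.inv(A)))
```

Inverting the diagonal L costs only p divisions, which is why the SVD is so convenient for inverse problems; singular values near zero signal an ill-conditioned kernel.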
—Utpal Kumar (IES, Academia Sinica)