A matrix is an array of elements organised in rows and columns.
The size of a matrix can be described as rows × columns; for example, a 2 × 4 matrix has 2 rows and 4 columns.
Matrix Operations
Addition
To add two matrices, simply add the corresponding elements. Both matrices must have the same dimensions.
Multiplication by a Scalar
To multiply a matrix by a scalar, simply multiply every individual element by the scalar.
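These two element-wise operations can be sketched in pure Python (the helper names `mat_add` and `scalar_mul` are my own, for illustration; a real project would likely use numpy):

```python
def mat_add(A, B):
    """Add two matrices of the same dimensions element-wise."""
    return [[a + b for a, b in zip(row_a, row_b)]
            for row_a, row_b in zip(A, B)]

def scalar_mul(k, A):
    """Multiply every individual element of matrix A by the scalar k."""
    return [[k * a for a in row] for row in A]

A = [[1, 2], [3, 4]]
B = [[5, 6], [7, 8]]
print(mat_add(A, B))     # [[6, 8], [10, 12]]
print(scalar_mul(2, A))  # [[2, 4], [6, 8]]
```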
Matrix Multiplication
Matrices $A$, $B$ can be multiplied if $A$ has dimensions $m \times n$ and $B$ has dimensions $n \times p$. The resulting matrix $AB$ will have dimensions $m \times p$ (if this condition is met then $A$ is said to be multiplicatively conformable with $B$). Matrix multiplication is associative but not commutative.
To compute a matrix multiplication, take the dot product of each row of the first matrix with each column of the second matrix.
Also worded as: for $C = AB$, each element $c_{ij}$ is the dot product of the $i$-th row of $A$ and the $j$-th column of $B$.
For example:

$$\begin{pmatrix}1 & 2\\3 & 4\end{pmatrix}\begin{pmatrix}5 & 6\\7 & 8\end{pmatrix} = \begin{pmatrix}1\cdot5+2\cdot7 & 1\cdot6+2\cdot8\\3\cdot5+4\cdot7 & 3\cdot6+4\cdot8\end{pmatrix} = \begin{pmatrix}19 & 22\\43 & 50\end{pmatrix}$$
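The row-by-column rule can be sketched as a small pure-Python function (`mat_mul` is a hypothetical name used for illustration):

```python
def mat_mul(A, B):
    """Multiply an m x n matrix A by an n x p matrix B."""
    n = len(B)
    assert all(len(row) == n for row in A), "A's columns must match B's rows"
    # c_ij = dot product of the i-th row of A with the j-th column of B
    return [[sum(A[i][k] * B[k][j] for k in range(n))
             for j in range(len(B[0]))]
            for i in range(len(A))]

A = [[1, 2], [3, 4]]
B = [[5, 6], [7, 8]]
print(mat_mul(A, B))  # [[19, 22], [43, 50]]
# Not commutative: mat_mul(B, A) gives a different result, [[23, 34], [31, 46]]
```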
Identity Matrix
The identity matrix $I$ is the matrix that is all zeros apart from 1's on the leading diagonal. In terms of linear transforms, it represents the transform under which all unit vectors remain where they are.
It has the unique property that $AI = IA = A$ for all matrices $A$. This makes it the identity element of a group of matrices under the operation of multiplication, hence the name.
Transpose
The transpose $A^\mathsf{T}$ of a matrix $A$ is found by interchanging the rows and the columns. For example, if $A = \begin{pmatrix}a & b\\c & d\end{pmatrix}$, then $A^\mathsf{T} = \begin{pmatrix}a & c\\b & d\end{pmatrix}$.
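Interchanging rows and columns is a one-liner in Python (a minimal sketch; `transpose` is my own illustrative name):

```python
def transpose(A):
    """Interchange rows and columns: element (i, j) moves to (j, i)."""
    return [list(col) for col in zip(*A)]

A = [[1, 2, 3],
     [4, 5, 6]]
print(transpose(A))  # [[1, 4], [2, 5], [3, 6]]
```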
Matrix Inverses
The inverse of any non-singular matrix $A$ is the matrix $A^{-1}$ such that $AA^{-1} = A^{-1}A = I$.
If $\det A \neq 0$, $A^{-1}$ is guaranteed to exist, but if $\det A = 0$ then $A^{-1}$ does not exist. This can be related to the null space of $A$. If $\det A \neq 0$ then no dimensions are lost/collapsed, so every transform can be undone. However, if $\det A = 0$ then information/space is lost and the inverse doesn't exist: multiple vectors collapse into the same vector, so the inverse transform would be multivalued and therefore not a proper linear transform.
Computation
2x2
If $A = \begin{pmatrix}a & b\\c & d\end{pmatrix}$, then $A^{-1} = \dfrac{1}{ad - bc}\begin{pmatrix}d & -b\\-c & a\end{pmatrix}$, provided $ad - bc \neq 0$.
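The 2 × 2 formula can be checked with a short sketch (`inverse_2x2` is an illustrative name, not a library function):

```python
def inverse_2x2(A):
    """Inverse of a 2x2 matrix via the 1/(ad - bc) formula; None if singular."""
    (a, b), (c, d) = A
    det = a * d - b * c
    if det == 0:
        return None  # singular: no inverse exists
    return [[d / det, -b / det],
            [-c / det, a / det]]

A = [[4, 7], [2, 6]]  # det = 4*6 - 7*2 = 10
print(inverse_2x2(A))  # [[0.6, -0.7], [-0.2, 0.4]]
```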
3x3
For a given matrix $A$, $A^{-1} = \dfrac{1}{\det A}C^\mathsf{T}$, where $C$ is the matrix of cofactors.
To find $C$:
1. Form the matrix of minors. This is where each of the nine elements of the matrix is replaced by its minor (the determinant of the $2 \times 2$ matrix that remains when that element's row and column are removed).
2. Change the signs of alternating elements according to the pattern $\begin{pmatrix}+ & - & +\\- & + & -\\+ & - & +\end{pmatrix}$.
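The whole cofactor method for a 3 × 3 inverse might be sketched as follows (function names are my own; this is an illustration of the steps above, not an authoritative implementation):

```python
def det3(A):
    """Determinant of a 3x3 matrix by cofactor expansion along the first row."""
    (a, b, c), (d, e, f), (g, h, i) = A
    return a * (e * i - f * h) - b * (d * i - f * g) + c * (d * h - e * g)

def inverse_3x3(A):
    """Inverse via the cofactor method: A^-1 = (1 / det A) * C^T."""
    det = det3(A)
    if det == 0:
        return None  # singular: no inverse

    def minor(i, j):
        # Determinant of the 2x2 matrix left after deleting row i and column j
        rows = [r for k, r in enumerate(A) if k != i]
        m = [[v for l, v in enumerate(r) if l != j] for r in rows]
        return m[0][0] * m[1][1] - m[0][1] * m[1][0]

    # Cofactors: minors with the alternating +/- sign pattern applied
    C = [[(-1) ** (i + j) * minor(i, j) for j in range(3)] for i in range(3)]
    # Transpose C and divide by the determinant
    return [[C[j][i] / det for j in range(3)] for i in range(3)]

A = [[1, 2, 3], [0, 1, 4], [5, 6, 0]]  # det A = 1
print(inverse_3x3(A))  # [[-24.0, 18.0, 5.0], [20.0, -15.0, -4.0], [-5.0, 4.0, 1.0]]
```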
Systems of Linear Equations
How to Solve Systems of Linear Equations
If $A\mathbf{x} = \mathbf{v}$ then $\mathbf{x} = A^{-1}\mathbf{v}$.
If $A$ is non-singular then a unique solution for $\mathbf{x}$ can be found for any vector $\mathbf{v}$.
To solve a given system of equations for $x$, $y$ and $z$: write the system in matrix form $A\mathbf{x} = \mathbf{v}$, find $A^{-1}$, then compute $\mathbf{x} = A^{-1}\mathbf{v}$.
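Solving via the inverse can be sketched for the 2 × 2 case (a minimal sketch using the $\frac{1}{ad-bc}$ formula; `solve_2x2` is an illustrative name):

```python
def solve_2x2(A, v):
    """Solve A x = v by computing x = A^-1 v for a 2x2 system."""
    (a, b), (c, d) = A
    det = a * d - b * c
    if det == 0:
        return None  # singular: no unique solution
    inv = [[d / det, -b / det],
           [-c / det, a / det]]
    return [inv[0][0] * v[0] + inv[0][1] * v[1],
            inv[1][0] * v[0] + inv[1][1] * v[1]]

# x + 2y = 5 and 3x + 4y = 11  ->  x = 1, y = 2
print(solve_2x2([[1, 2], [3, 4]], [5, 11]))  # [1.0, 2.0]
```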
Cramer’s Rule
Cramer’s Rule is an alternative way of finding solutions to linear systems of equations which has a very nice, simple geometric intuition. However, it is slower to compute, both by hand and for computers, than Gaussian elimination.
Given an $n \times n$ matrix $A$ with column vectors $\mathbf{a}_1, \dots, \mathbf{a}_n$ and two other vectors $\mathbf{x}$, $\mathbf{v}$ giving the equation $A\mathbf{x} = \mathbf{v}$.
You can find the solution with $x_i = \dfrac{\det A_i}{\det A}$, where $A_i$ is the matrix $A$ with column $\mathbf{a}_i$ replaced by $\mathbf{v}$.
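The rule can be sketched for a 2 × 2 system by literally replacing each column with $\mathbf{v}$ and taking a ratio of determinants (`cramer_2x2` and `det2` are illustrative names):

```python
def det2(A):
    """Determinant of a 2x2 matrix."""
    return A[0][0] * A[1][1] - A[0][1] * A[1][0]

def cramer_2x2(A, v):
    """Solve A x = v by Cramer's rule: x_i = det(A_i) / det(A),
    where A_i is A with column i replaced by v."""
    d = det2(A)
    if d == 0:
        return None  # singular: Cramer's rule does not apply
    xs = []
    for i in range(2):
        Ai = [row[:] for row in A]  # copy A
        for r in range(2):
            Ai[r][i] = v[r]         # replace column i with v
        xs.append(det2(Ai) / d)
    return xs

# Same system as before: x + 2y = 5, 3x + 4y = 11  ->  x = 1, y = 2
print(cramer_2x2([[1, 2], [3, 4]], [5, 11]))  # [1.0, 2.0]
```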
Proof
The reason why this works is that we consider volume. In $n$ dimensions, for each basis vector $\hat{e}_i$, consider the volume bound by $\mathbf{x}$ and the other $n-1$ basis vectors.
For a given basis vector $\hat{e}_i$ we can compute this signed volume with $\det I_i$, where $I_i$ is the identity matrix with the $i$-th column replaced by $\mathbf{x}$. By using some basic geometry this is simply equal to $x_i$, as the base area/volume is equal to $1$ and the perpendicular height is the coordinate of $\mathbf{x}$ in the direction of the basis vector we chose.
After the transformation the volume scales by a factor of $\det A$, so the new volume is simply $x_i \det A$. But we can compute this volume in another way, as we know all the vectors needed after the transformation: all the basis vectors transform to become the column vectors of the matrix $A$, and the vector $\mathbf{x}$ transforms to $A\mathbf{x} = \mathbf{v}$, which we know. So we can simply compute the volume with $\det A_i$. Equating these volumes gives $x_i \det A = \det A_i$, and rearranging gives the formula as required: $x_i = \dfrac{\det A_i}{\det A}$.
Linear Equation Consistency
A system of linear equations is consistent if there is at least one set of values that satisfies all equations simultaneously. If the matrix corresponding to a system is non-singular then the system has exactly one solution and is consistent.
However, if it is singular, then either:
- The system is consistent and has infinitely many solutions, or
- The system is inconsistent and has no solutions.
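These three cases can be distinguished mechanically for a 2 × 2 system; the sketch below (with my own illustrative name `classify_2x2`) checks the determinant and, in the singular case, whether the equations are proportional:

```python
def classify_2x2(A, v):
    """Classify the system A x = v for a 2x2 matrix A."""
    (a, b), (c, d) = A
    det = a * d - b * c
    if det != 0:
        return "unique solution"  # non-singular: always consistent
    # Singular. A zero row with a nonzero right-hand side is inconsistent.
    if (a == b == 0 and v[0] != 0) or (c == d == 0 and v[1] != 0):
        return "no solutions"
    # Otherwise consistent iff the rows (a, b, v0) and (c, d, v1) are
    # proportional, i.e. all cross-products vanish.
    if a * v[1] == c * v[0] and b * v[1] == d * v[0]:
        return "infinitely many solutions"
    return "no solutions"

print(classify_2x2([[1, 1], [2, 2]], [2, 4]))  # infinitely many solutions
print(classify_2x2([[1, 1], [2, 2]], [2, 5]))  # no solutions
print(classify_2x2([[1, 2], [3, 4]], [5, 11]))  # unique solution
```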