Fundamentals of Linear Algebra (I): Connections Between Determinants, Matrices, Vectors and Equations (Part 1)

I recently wanted to study computer graphics, but the course requirements list linear algebra as a prerequisite, so I reviewed some of it; so far I have covered the properties of determinants, matrices, vectors, and systems of equations.
The main purpose of this post is not to summarize the properties of matrices one by one, but to sort out a question that kept coming up during my review: why are these topics taught together, and what is the relationship between them? Along the way I also want to bring in some ideas from rendering, and how matrices are applied there.

Some understanding of the relationships between determinants, matrices, vectors, and systems of equations

  • A determinant is a value: a number, a scalar. A matrix, by contrast, is a table.
  • The value of the determinant can be regarded as a property of a matrix, and some properties of a matrix can be judged by computing its determinant.
  • A matrix is an m*n table, but if each column is viewed as a column vector, then the entire matrix can be viewed as a row of column vectors, i.e., a vector group.
  • Both matrices and vectors can express a system of equations. By converting a system of equations into matrix or vector form, we gain many properties that help us simplify the computation.

Matrices and the rendering pipeline

While a game is running, we need to turn the points of a model into pixels on the screen. This involves transforming the coordinates of each point from the model coordinate system to the world coordinate system, then to the camera's coordinate system, and so on. This series of coordinate-space transformations is really a transformation of coordinates: turning (x1, y1, z1) into (x2, y2, z2), and each such transformation is a system of three linear equations in three unknowns.
In other words, we can turn the system of equations for a coordinate transformation into a matrix multiplication.
This has two advantages. First, the matrix representation is more convenient. Second, we have many tools for simplifying matrix multiplication. We also sometimes need to transform back and forth between spaces: with a system of equations we would have to solve a new system each time, but with a matrix we can simply use the inverse matrix, and if the matrix is orthogonal, we can even use the transpose directly.
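As a concrete sketch of this idea (assuming NumPy is available; the rotation matrix and point are made-up examples, not from the text):

```python
import numpy as np

# A rotation about the z-axis, standing in for a model-to-world transform.
# Rotation matrices are orthogonal, so the inverse equals the transpose.
theta = np.pi / 4
R = np.array([
    [np.cos(theta), -np.sin(theta), 0.0],
    [np.sin(theta),  np.cos(theta), 0.0],
    [0.0,            0.0,           1.0],
])

p_model = np.array([1.0, 0.0, 0.0])  # a point in model space
p_world = R @ p_model                # model space -> world space

# Going back without re-solving a system of equations:
# use the inverse matrix; for an orthogonal matrix, the transpose suffices.
p_back = R.T @ p_world
assert np.allclose(p_back, p_model)
```

Because rotations are orthogonal, undoing the transform costs only a transpose rather than a fresh solve.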

Some properties of and relationships between determinants, matrices, vectors, and systems of equations

The expression of a determinant

A determinant is a number: an algebraic sum of products of elements taken from different rows and different columns.

Note that a determinant has the same number of rows and columns; when both are n, it is called a determinant of order n.

For determinants of order three and below, we can use the diagonal rule directly to find the value; for determinants above order three, we need to apply the formula

\begin{vmatrix} a_{11} & a_{12} & \cdots & a_{1n} \\ a_{21} & a_{22} & \cdots & a_{2n} \\ \vdots & \vdots & & \vdots \\ a_{n1} & a_{n2} & \cdots & a_{nn} \end{vmatrix} = \sum_{(j_1j_2...j_n)}(-1)^{r(j_1j_2...j_n)}a_{1j_1}a_{2j_2}...a_{nj_n}

where the sum runs over all permutations j1j2...jn of 1, 2, ..., n, and r(j1j2...jn) denotes the inversion number of j1j2...jn (a larger number appearing before a smaller one constitutes an inversion, and the total number of inversions in a permutation is its inversion number). A permutation with an even inversion number is called even; one with an odd inversion number is called odd.
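The permutation definition can be implemented directly; a minimal sketch (the function names are my own, not from the text):

```python
from itertools import permutations

def inversion_count(perm):
    # Count pairs (i, j) with i < j but perm[i] > perm[j]:
    # each such pair is one inversion.
    return sum(1 for i in range(len(perm))
                 for j in range(i + 1, len(perm))
                 if perm[i] > perm[j])

def det_by_definition(a):
    # Sum over all permutations j1...jn of
    # (-1)^r(j1...jn) * a[0][j1] * a[1][j2] * ... (0-based indices).
    n = len(a)
    total = 0
    for perm in permutations(range(n)):
        term = (-1) ** inversion_count(perm)
        for row, col in enumerate(perm):
            term *= a[row][col]
        total += term
    return total
```

This is O(n!) and only practical for tiny n; it exists to mirror the formula, not to compute efficiently.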

Properties of the determinant

  • The value of the transposed determinant remains unchanged
  • Swapping two rows (or two columns) multiplies the value of the determinant by −1.
    • Corollary 1: If two rows (or two columns) are identical, the determinant is 0

Whether the value of a determinant is zero is a very important property, closely related to whether a system of equations has a solution, whether a set of vectors is linearly independent, and so on.

  • If a row (or column) has a common factor k, you can move k outside the determinant sign; equivalently, multiplying a determinant by a number k is the same as multiplying one of its rows (or columns) by k.
    • Corollary 1: If a row (or column) is all zeros, the determinant is 0
    • Corollary 2: If the elements of two rows (or two columns) are proportional, the determinant is 0
  • If each element of a row (or column) is a sum of two terms, the determinant can be split into the sum of two determinants

\begin{vmatrix} a_1 + b_1 & a_2 + b_2 & a_3 + b_3 \\ c_1 & c_2 & c_3 \\ d_1 & d_2 & d_3 \end{vmatrix} = \begin{vmatrix} a_1 & a_2 & a_3 \\ c_1 & c_2 & c_3 \\ d_1 & d_2 & d_3 \end{vmatrix} + \begin{vmatrix} b_1 & b_2 & b_3 \\ c_1 & c_2 & c_3 \\ d_1 & d_2 & d_3 \end{vmatrix}

  • Adding k times one row (or column) to another row (or column) leaves the value of the determinant unchanged

\begin{vmatrix} a_1 & a_2 & a_3 \\ b_1 & b_2 & b_3 \\ c_1 & c_2 & c_3 \end{vmatrix} = \begin{vmatrix} a_1 & a_2 & a_3 \\ b_1 + ka_1 & b_2 + ka_2 & b_3 + ka_3 \\ c_1 & c_2 & c_3 \end{vmatrix}
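These row-operation properties are easy to check numerically; a quick sketch assuming NumPy (the matrix is an arbitrary example):

```python
import numpy as np

A = np.array([[1., 2., 3.],
              [4., 5., 6.],
              [7., 8., 10.]])

# Adding k times row 0 to row 1 leaves the determinant unchanged.
B = A.copy()
B[1] += 5.0 * B[0]
assert np.isclose(np.linalg.det(B), np.linalg.det(A))

# Swapping two rows multiplies the determinant by -1.
C = A[[1, 0, 2]]
assert np.isclose(np.linalg.det(C), -np.linalg.det(A))
```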

Expansion along a row (or column) (Laplace expansion)

Crossing out row i and column j of a_ij in an nth-order determinant leaves a determinant of order n − 1, called the minor of a_ij and denoted M_ij.

The quantity (-1)^(i+j)M_ij is called the cofactor of a_ij, denoted A_ij, i.e.:

A_{ij} = (-1)^{i + j}M_{ij}

Theorem 1: A determinant of order n equals the sum of the products of the elements of any one row (or column) with their corresponding cofactors

\begin{vmatrix} A \end{vmatrix} = a_{i1}A_{i1} + a_{i2}A_{i2} + ... + a_{in}A_{in} = \sum_{k=1}^na_{ik}A_{ik}, i = 1, 2, 3, ..., n

\begin{vmatrix} A \end{vmatrix} = a_{1i}A_{1i} + a_{2i}A_{2i} + ... + a_{ni}A_{ni} = \sum_{k=1}^na_{ki}A_{ki}, i = 1, 2, 3, ..., n

The first is called the expansion of the determinant along a row; the second is called the expansion along a column.
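Theorem 1 translates directly into a recursive determinant routine; a minimal sketch expanding along the first row (the function name is mine):

```python
def det_laplace(a):
    """Determinant via expansion along the first row:
    |A| = sum_k a[0][k] * (-1)^k * M_0k (0-based indices),
    where M_0k is the minor obtained by deleting row 0 and column k."""
    n = len(a)
    if n == 1:
        return a[0][0]
    total = 0
    for k in range(n):
        minor = [row[:k] + row[k + 1:] for row in a[1:]]
        total += (-1) ** k * a[0][k] * det_laplace(minor)
    return total

print(det_laplace([[1, 2], [3, 4]]))  # -2
```

Like the permutation definition, this is exponential-time; it exists to mirror the theorem, not to replace Gaussian elimination.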

Theorem 2: The sum of the products of the elements of one row of a determinant with the cofactors of a different row is 0

\sum_{k=1}^na_{ik}A_{jk} = 0, \quad i \neq j

There are also several special cases:

  • The value of an upper (or lower) triangular determinant is the product of its diagonal entries

  • The value of a determinant with nonzero entries only on the anti-diagonal is

(-1)^{\frac {n(n-1)}2}a_{1n}a_{2,n-1}...a_{n1}

  • If A and B are square matrices of order m and n, respectively, then

    \begin{vmatrix} A & * \\ 0 & B \end{vmatrix} = \begin{vmatrix} A & 0 \\ * & B \end{vmatrix} = \begin{vmatrix} A \end{vmatrix} \cdot \begin{vmatrix} B \end{vmatrix} , \quad \begin{vmatrix} * & A \\ B & 0 \end{vmatrix} = \begin{vmatrix} 0 & A \\ B & * \end{vmatrix} = (-1)^{mn} \begin{vmatrix} A \end{vmatrix} \cdot \begin{vmatrix} B \end{vmatrix}

  • Vandermonde determinant

\begin{vmatrix} 1 & 1 & \cdots & 1 \\ x_1 & x_2 & \cdots & x_n \\ x_1^2 & x_2^2 & \cdots & x_n^2 \\ \vdots & \vdots & & \vdots \\ x_1^{n-1} & x_2^{n-1} & \cdots & x_n^{n-1} \end{vmatrix} = \prod_{1 \le j < i \le n}(x_i - x_j)
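The Vandermonde formula can be spot-checked numerically; a sketch assuming NumPy, with arbitrary sample points:

```python
import numpy as np
from itertools import combinations

xs = [2.0, 3.0, 5.0, 7.0]
n = len(xs)
# Row i holds x^i, matching the determinant above.
V = np.array([[x ** i for x in xs] for i in range(n)])

# Product of (x_i - x_j) over all pairs with j < i.
prod = 1.0
for j, i in combinations(range(n), 2):
    prod *= xs[i] - xs[j]

assert np.isclose(np.linalg.det(V), prod)
```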

Cramer's Rule

If the coefficient determinant of a non-homogeneous system of n linear equations in n unknowns is nonzero, then the system has a unique solution, and

x_i = \frac { \begin{vmatrix} A_i \end{vmatrix} } { \begin{vmatrix} A \end{vmatrix} } , i = 1, 2, 3, ..., n

where A_i is the determinant formed by replacing the i-th column of |A| with the constant terms on the right-hand side of the system of equations

Corollary: For a homogeneous system of n linear equations in n unknowns, the coefficient determinant |A| ≠ 0 is sufficient for the system to have only the zero solution (since the constants on the right-hand side are all 0, every |A_i| is 0, so every x_i is 0).

Conversely, if the homogeneous system has a nonzero solution, then |A| = 0.
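Cramer's rule itself is straightforward to implement; a sketch assuming NumPy (`cramer_solve` is my own name):

```python
import numpy as np

def cramer_solve(A, b):
    """Solve Ax = b via Cramer's rule: x_i = |A_i| / |A|,
    where A_i replaces column i of A with the constant vector b."""
    A = np.asarray(A, dtype=float)
    b = np.asarray(b, dtype=float)
    det_A = np.linalg.det(A)
    if np.isclose(det_A, 0.0):
        raise ValueError("coefficient determinant is zero: no unique solution")
    x = np.empty(len(b))
    for i in range(len(b)):
        Ai = A.copy()
        Ai[:, i] = b            # replace the i-th column with b
        x[i] = np.linalg.det(Ai) / det_A
    return x
```

In practice `np.linalg.solve` (Gaussian elimination) is the right tool; Cramer's rule is mainly of theoretical interest since it needs n + 1 determinants.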

Matrices

Vectors

Systems of equations