Elementary matrix transformations

Elementary matrix transformations are transformations of a matrix that preserve matrix equivalence. In particular, elementary transformations do not change the solution set of the system of linear algebraic equations that the matrix represents.

Elementary transformations are used in the Gaussian elimination method to reduce a matrix to triangular or stepped (row echelon) form.

Definition

The following are called elementary row transformations:

  • permutation of any two rows of the matrix;
  • multiplication of a row of the matrix by a nonzero constant;
  • addition to one row of the matrix of another row multiplied by a constant.

In some linear algebra courses, the permutation of matrix rows is not singled out as a separate elementary transformation, because the interchange of any two rows of a matrix can be obtained by multiplying a row of the matrix by a constant k (k ≠ 0) and adding to a row of the matrix another row multiplied by a constant k.

Elementary column transformations are defined similarly.

Elementary transformations are invertible.

The notation A ∼ B indicates that the matrix A can be obtained from B by elementary transformations (and vice versa).
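To make the three transformations concrete, here is a minimal Python/NumPy sketch; the function names swap_rows, scale_row and add_row_multiple are ours, introduced only for illustration:

```python
import numpy as np

def swap_rows(A, i, k):
    """Interchange rows i and k (elementary transformation of type I)."""
    A = A.copy()
    A[[i, k]] = A[[k, i]]
    return A

def scale_row(A, i, c):
    """Multiply row i by a nonzero constant c (type II)."""
    assert c != 0, "the constant must be nonzero"
    A = A.copy()
    A[i] = c * A[i]
    return A

def add_row_multiple(A, i, k, c):
    """Add c times row k to row i (type III)."""
    A = A.copy()
    A[i] = A[i] + c * A[k]
    return A

A = np.array([[1., 2.], [3., 4.]])
print(swap_rows(A, 0, 1))               # rows interchanged
print(scale_row(A, 0, 5.0))             # first row multiplied by 5
print(add_row_multiple(A, 1, 0, -3.0))  # row 2 minus 3 * row 1
```

Each function returns a new matrix, which makes it easy to chain transformations and compare with the original.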

Properties

Rank invariance under elementary transformations

Theorem (on rank invariance under elementary transformations).
If A ∼ B, then rank A = rank B.
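A quick numerical illustration of the theorem, assuming NumPy is available (matrix_rank computes the rank):

```python
import numpy as np

A = np.array([[1., 2., 3.],
              [4., 5., 6.],
              [7., 8., 9.]])

B = A.copy()
B[[0, 1]] = B[[1, 0]]      # swap rows 1 and 2
B[2] = 4.0 * B[2]          # multiply row 3 by 4
B[1] = B[1] - 2.0 * B[0]   # subtract 2 * row 1 from row 2

# the rank is preserved by the elementary transformations
print(np.linalg.matrix_rank(A), np.linalg.matrix_rank(B))  # 2 2
```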

Equivalence of SLAEs under Elementary Transformations

The following are called elementary transformations of a system of linear algebraic equations:
  • rearrangement of equations;
  • multiplying an equation by a nonzero constant;
  • the addition of one equation to another multiplied by some constant.
These correspond exactly to elementary row transformations of the system's augmented matrix. The following statement then holds: elementary transformations take a system to an equivalent one. Recall that two systems are said to be equivalent if the sets of their solutions coincide.
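A small sketch illustrating this equivalence on a concrete system; the 2 × 2 system and the NumPy usage are our own illustrative choices:

```python
import numpy as np

# System: x + y = 3, 2x - y = 0, written via its augmented matrix.
M = np.array([[1., 1., 3.],
              [2., -1., 0.]])

# Apply elementary transformations to the augmented matrix.
M2 = M.copy()
M2[[0, 1]] = M2[[1, 0]]   # swap the equations
M2[0] = 0.5 * M2[0]       # multiply an equation by 1/2
M2[1] = M2[1] - M2[0]     # subtract one equation from another

# Both systems have the same solution set.
print(np.linalg.solve(M[:, :2], M[:, 2]))    # [1. 2.]
print(np.linalg.solve(M2[:, :2], M2[:, 2]))  # [1. 2.]
```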

Finding Inverse Matrices

Theorem (on finding the inverse matrix).
Let the determinant of the matrix A_{n×n} be nonzero, and let the matrix B be defined by the expression B = [A | E]_{n×2n}. Then, when elementary row transformations reduce the matrix A to the identity matrix E within B, they simultaneously transform E into A⁻¹.
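A sketch of the procedure described by the theorem, assuming NumPy; the function name and the partial-pivoting detail are our additions, not part of the theorem:

```python
import numpy as np

def inverse_via_gauss_jordan(A):
    """Invert A by row-reducing the block matrix B = [A | E]
    (assumes det(A) != 0; uses partial pivoting for stability)."""
    n = A.shape[0]
    B = np.hstack([A.astype(float), np.eye(n)])  # B = [A | E], size n x 2n
    for col in range(n):
        # choose a nonzero pivot (swap rows if necessary)
        pivot = col + int(np.argmax(np.abs(B[col:, col])))
        if B[pivot, col] == 0:
            raise ValueError("matrix is singular")
        B[[col, pivot]] = B[[pivot, col]]
        B[col] /= B[col, col]            # make the pivot equal to 1
        for r in range(n):               # clear the rest of the column
            if r != col:
                B[r] -= B[r, col] * B[col]
    return B[:, n:]                      # the right block is now A^{-1}

A = np.array([[2., 1.], [5., 3.]])
print(inverse_via_gauss_jordan(A))  # [[ 3. -1.] [-5.  2.]]
print(np.linalg.inv(A))             # same result
```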

Let's introduce the concept of an elementary matrix.

DEFINITION. A square matrix obtained from the identity matrix as a result of a single non-singular elementary transformation of rows (columns) is called the elementary matrix corresponding to this transformation.

So, for example, the elementary matrices of the second order are the matrices

  [λ 0]   [1 0]   [1 λ]   [1 0]
  [0 1] , [0 λ] , [0 1] , [λ 1] ,

where λ is any nonzero scalar.

An elementary matrix is obtained from the identity matrix E as a result of one of the following non-singular transformations:

1) multiplying the row (column) of the matrix E by a nonzero scalar;

2) addition (or subtraction) to any row (column) of the matrix E of another row (column) multiplied by a scalar.

Let us denote by E_i(λ) the matrix obtained from the matrix E as a result of multiplying its i-th row by a nonzero scalar λ.

Let us denote by G_ik(λ) the matrix obtained from the matrix E as a result of adding (subtracting) to its i-th row the k-th row multiplied by λ.

In general, given an elementary row transformation s, we denote by S the matrix obtained from the identity matrix E as a result of applying s; thus S is the elementary matrix corresponding to the transformation s.

Let's consider some properties of elementary matrices.

PROPERTY 2.1. Any elementary matrix is invertible. The inverse of an elementary matrix is itself elementary.

Proof. A direct check shows that, for any nonzero scalar λ and arbitrary indices i and k, the equalities E_i(λ) E_i(1/λ) = E and G_ik(λ) G_ik(−λ) = E hold.

Based on these equalities, we conclude that property 2.1 holds.
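The two equalities from the proof can be spot-checked numerically; E_i and G_ik below are illustrative constructions of the corresponding elementary matrices:

```python
import numpy as np

def E_i(n, i, lam):
    """E_i(lam): the identity matrix with its i-th row multiplied by lam."""
    M = np.eye(n)
    M[i, i] = lam
    return M

def G_ik(n, i, k, lam):
    """G_ik(lam): the identity matrix with lam placed at position (i, k)."""
    M = np.eye(n)
    M[i, k] = lam
    return M

n, i, k, lam = 3, 0, 2, 5.0
# E_i(lam) E_i(1/lam) = E  and  G_ik(lam) G_ik(-lam) = E
print(np.allclose(E_i(n, i, lam) @ E_i(n, i, 1 / lam), np.eye(n)))       # True
print(np.allclose(G_ik(n, i, k, lam) @ G_ik(n, i, k, -lam), np.eye(n)))  # True
```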

PROPERTY 2.2. The product of elementary matrices is an invertible matrix.

This property follows directly from Property 2.1 and Corollary 2.3.

PROPERTY 2.3. If a non-singular elementary row transformation takes the m × n matrix A to the matrix B, then B = SA, where S is the elementary matrix corresponding to this transformation. The converse statement is also true.

Proof. If the transformation is multiplication of the i-th row by a nonzero scalar λ, then B = E_i(λ) A.

If the transformation is the addition to the i-th row of the k-th row multiplied by λ, then B = G_ik(λ) A.

It is easy to check that the converse is also true.

PROPERTY 2.4. If the matrix C is obtained from the matrix A using a chain of non-singular elementary row transformations with corresponding elementary matrices S_1, S_2, …, S_p, then C = S_p ⋯ S_2 S_1 A. The converse is also true.

Proof. By Property 2.3, the first transformation takes the matrix A to the matrix S_1 A, the second takes S_1 A to S_2 S_1 A, and so on; finally, the last transformation yields the matrix S_p ⋯ S_2 S_1 A. Consequently, C = S_p ⋯ S_2 S_1 A.

It is easy to check that the converse is also true.

Matrix invertibility conditions. The following three lemmas are needed to prove Theorem 2.8.

LEMMA 2.4. A square matrix with a zero row (column) is not invertible.

Proof. Let A be a square matrix with a zero row and let B be an arbitrary matrix for which the product AB is defined. Let the i-th row of the matrix A be zero; then every entry of the i-th row of AB is a sum of products each containing a zero factor,

that is, the i-th row of the matrix AB is zero. Hence AB ≠ E for any B, and consequently the matrix A is not invertible.

LEMMA 2.5. If the rows of a square matrix are linearly dependent, then the matrix is not invertible.

Proof. Let A be a square matrix with linearly dependent rows. Then there is a chain of non-singular elementary row transformations that takes A to a stepped matrix with a zero row; let S_1, S_2, …, S_p be the corresponding elementary matrices. By Property 2.4 of elementary matrices, the equality

S_p ⋯ S_2 S_1 A = C     (1)

holds, where C is a matrix with a zero row.

Therefore, by Lemma 2.4, the matrix C is not invertible. On the other hand, if the matrix A were invertible, then the product on the left in equality (1) would be an invertible matrix, as a product of invertible matrices (see Corollary 2.3), which is impossible. Consequently, the matrix A is not invertible.

The following three operations are called elementary row transformations of a matrix:

1) Multiplication of the i-th row of the matrix by a number λ ≠ 0, which we will write in the form (i) → λ(i).

2) Permutation of two rows in the matrix, for example the i-th and k-th rows, which we will write in the form (i) ↔ (k).

3) Addition to the i-th row of the matrix of its k-th row with a coefficient λ, which we will write in the form (i) → (i) + λ(k).

Similar operations on columns of a matrix are called elementary column transformations.

Each elementary transformation of the rows or columns of a matrix has an inverse elementary transformation, which turns the transformed matrix back into the original one. For example, the inverse transformation for the permutation of two rows is the permutation of the same rows.

Each elementary transformation of the rows (columns) of a matrix A can be interpreted as multiplication of A on the left (right) by a matrix of a special type. This matrix is obtained by performing the same transformation on the identity matrix. Let us take a closer look at elementary row transformations.

Let the matrix B be obtained as a result of multiplying the i-th row of the m × n matrix A by a number λ ≠ 0. Then B = E_i(λ) A, where the matrix E_i(λ) is obtained from the identity matrix E of order m by multiplying its i-th row by the number λ.

Let the matrix B be obtained as a result of permuting the i-th and k-th rows of the m × n matrix A. Then B = F_ik A, where the matrix F_ik is obtained from the identity matrix E of order m by permuting its i-th and k-th rows.

Let the matrix B be obtained by adding to the i-th row of the m × n matrix A its k-th row with a coefficient λ. Then B = G_ik(λ) A, where the matrix G_ik(λ) is obtained from the identity matrix E of order m by adding the k-th row multiplied by λ to the i-th row, i.e. at the intersection of the i-th row and the k-th column of the matrix E the zero element is replaced by the number λ.

Elementary transformations of the columns of a matrix A are realized in the same way, except that the matrix is multiplied by matrices of a special type on the right rather than on the left.
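A numerical spot-check of the three identities B = E_i(λ)A, B = F_ik A and B = G_ik(λ)A (a sketch; the concrete sizes and values are arbitrary):

```python
import numpy as np

A = np.arange(1., 13.).reshape(3, 4)   # a 3 x 4 matrix
m = A.shape[0]
lam, i, k = 2.0, 0, 2

E = np.eye(m); E[i, i] = lam           # E_i(lam): scale row i
F = np.eye(m); F[[i, k]] = F[[k, i]]   # F_ik: swap rows i and k
G = np.eye(m); G[i, k] = lam           # G_ik(lam): add lam * row k to row i

print(np.allclose(E @ A, np.vstack([lam * A[0], A[1], A[2]])))          # True
print(np.allclose(F @ A, A[[2, 1, 0]]))                                 # True
print(np.allclose(G @ A, np.vstack([A[0] + lam * A[2], A[1], A[2]])))   # True
```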

Using algorithms based on elementary transformations of rows and columns, matrices can be reduced to various special forms. One of the most important such algorithms forms the basis of the proof of the following theorem.

Theorem 10.1. Using elementary row transformations, any matrix can be reduced to stepped form.

◄ The proof of the theorem consists in constructing a specific algorithm for reducing a matrix to stepped form. The algorithm consists in repeating, in a certain order, three operations associated with a current element of the matrix, which is chosen according to its position in the matrix. At the first step of the algorithm, we take the upper left element of the matrix, i.e. [A]_11, as the current element.

1*. If the current element is zero, go to operation 2*. If it is nonzero, then add the row containing the current element (the current row), with appropriate coefficients, to the rows below it, so that all matrix elements in the column below the current element vanish. For example, if the current element is [A]_ij, then for the k-th row, k = i + 1, …, the coefficient should be taken as the number −[A]_kj / [A]_ij. We then select a new current element by shifting one column to the right and one row down in the matrix, and go to the next step, repeating operation 1*. If such a shift is not possible, i.e. the last column or row has been reached, we stop the transformations.

2*. If the current element is zero, we look through the matrix elements located in its column below it. If there are no nonzero elements among them, go to operation 3*. Suppose the k-th row contains a nonzero element below the current element. Then we swap the current row and the k-th row and return to operation 1*.

3*. If the current element and all the elements below it (in the same column) are zero, we change the current element by shifting one column to the right in the matrix. If such a shift is possible, i.e. the current element is not in the rightmost column of the matrix, we return to operation 1*. If we have already reached the right edge of the matrix and changing the current element is impossible, the matrix has a stepped form, and we may stop the transformations.

Since the matrix has finite dimensions, and at each step of the algorithm the position of the current element shifts to the right by at least one column, the transformation process terminates in no more than n steps (n being the number of columns of the matrix). This means that a moment comes when the matrix has a stepped form. ►
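A sketch of the algorithm from the proof, with operations 1*-3* marked in comments; the function name and the tolerance parameter are our own choices:

```python
import numpy as np

def to_stepped_form(A, tol=1e-12):
    """Reduce A to stepped (row echelon) form following operations 1*-3*:
    eliminate below the current element, or swap in a row with a nonzero
    entry, or shift the current element one column to the right."""
    A = A.astype(float).copy()
    m, n = A.shape
    row = 0
    for col in range(n):                 # current element is A[row, col]
        if row >= m:
            break
        # operation 2*: look for a nonzero element at or below the current one
        pivot = None
        for r in range(row, m):
            if abs(A[r, col]) > tol:
                pivot = r
                break
        if pivot is None:                # operation 3*: shift right
            continue
        A[[row, pivot]] = A[[pivot, row]]
        # operation 1*: annihilate the column below the current element
        for r in range(row + 1, m):
            A[r] -= (A[r, col] / A[row, col]) * A[row]
        row += 1
    return A

A = np.array([[0., 2., 4.],
              [1., 1., 1.],
              [2., 4., 6.]])
print(to_stepped_form(A))   # [[1. 1. 1.] [0. 2. 4.] [0. 0. 0.]]
```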

Example 10.10. We transform a matrix to stepped form using elementary row transformations.

Using the algorithm from the proof of Theorem 10.1 and writing out the matrices after each of its operations, we obtain

Matrix, types of matrices, operations on matrices.

Types of matrices:


1. Rectangular: m and n are arbitrary positive integers.

2. Square: m = n

3. Row matrix: m = 1. For example, (1 3 5 7); in many practical problems such a matrix is called a vector.

4. Column matrix: n = 1. For example:

5. Diagonal matrix: m = n and a_ij = 0 if i ≠ j. For example:

6. Identity matrix: m = n and a_ij = 1 for i = j, a_ij = 0 for i ≠ j.

7. Zero matrix: a_ij = 0 for i = 1, 2, …, m; j = 1, 2, …, n.

8. Triangular matrix: all elements below the main diagonal are 0.

9. Symmetric matrix: m = n and a_ij = a_ji (i.e., entries at positions symmetric with respect to the main diagonal are equal); consequently A′ = A.

For example,

10. Skew-symmetric matrix: m = n and a_ij = −a_ji (i.e., entries at positions symmetric with respect to the main diagonal are negatives of each other). Consequently, the main diagonal consists of zeros (since for i = j we have a_ii = −a_ii).


Operations on matrices:


1. Addition of matrices - an elementwise operation

2. Subtraction of matrices - an elementwise operation

3. Multiplication of a matrix by a number - an elementwise operation

4. Multiplication A · B of matrices by the rule row by column (the number of columns of matrix A must equal the number of rows of matrix B)

A_{m×k} · B_{k×n} = C_{m×n}, where each element c_ij of the matrix C_{m×n} equals the sum of the products of the elements of the i-th row of matrix A and the corresponding elements of the j-th column of matrix B, i.e. c_ij = a_i1 b_1j + a_i2 b_2j + … + a_ik b_kj.

Let us show the operation of matrix multiplication using an example (see the sketch after this list).

5. Transposition of the matrix A. The transposed matrix is denoted by Aᵀ or A′.

For example:

Rows and columns are swapped
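As promised in item 4, here is a sketch multiplying two small matrices by the row-by-column rule and comparing the result with NumPy's built-in product (the concrete matrices are our own example):

```python
import numpy as np

A = np.array([[1, 2, 3],
              [4, 5, 6]])    # 2 x 3
B = np.array([[7, 8],
              [9, 10],
              [11, 12]])     # 3 x 2

# c_ij = sum over s of a_is * b_sj (row i of A times column j of B)
C = np.array([[sum(A[i, s] * B[s, j] for s in range(A.shape[1]))
               for j in range(B.shape[1])]
              for i in range(A.shape[0])])

print(C)        # [[ 58  64] [139 154]]
print(A @ B)    # the built-in product agrees
```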

Properties of matrix operations:


(A + B) + C = A + (B + C)

λ (A + B) = λA + λB

A (B + C) = AB + AC

(A + B) C = AC + BC

λ (AB) = (λA) B = A (λB)

A (BC) = (AB) C

(λA)′ = λA′

(A + B)′ = A′ + B′

(AB)′ = B′A′
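These identities can be spot-checked on random matrices (a sketch; random sampling is just one way to test them):

```python
import numpy as np

rng = np.random.default_rng(0)
A, B, C = (rng.integers(-5, 5, (3, 3)).astype(float) for _ in range(3))
lam = 2.5

assert np.allclose((A + B) + C, A + (B + C))
assert np.allclose(lam * (A + B), lam * A + lam * B)
assert np.allclose(A @ (B + C), A @ B + A @ C)
assert np.allclose((A + B) @ C, A @ C + B @ C)
assert np.allclose(lam * (A @ B), (lam * A) @ B)
assert np.allclose(A @ (B @ C), (A @ B) @ C)
assert np.allclose((lam * A).T, lam * A.T)
assert np.allclose((A + B).T, A.T + B.T)
assert np.allclose((A @ B).T, B.T @ A.T)
print("all identities hold on this sample")
```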



2. Determinants of the second and third order (basic concepts, properties, calculations)

Property 1. The determinant does not change under transposition, i.e. det Aᵀ = det A.

Proof.

Remark. The subsequent properties of the determinant will be stated for rows only; by Property 1, the columns have the same properties.



Property 2. When the elements of a row of the determinant are multiplied by some number, the entire determinant is multiplied by that number.

Proof.

Property 3. A determinant having a zero row equals 0.

The proof of this property follows from property 2 for k = 0.

Property 4. A determinant having two equal rows equals 0.

Proof.

Property 5. A determinant two of whose rows are proportional equals 0.

The proof follows from properties 2 and 4.

Property 6. When two rows of the determinant are swapped, it is multiplied by −1.

Proof.

Property 7.

The proof of this property can be carried out independently by comparing the values of the left and right sides of the equality, found using Definition 1.5.

Property 8. The value of the determinant will not change if we add to the elements of one row the corresponding elements of another row multiplied by the same number.
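Properties 1, 2, 6 and 8 can be verified numerically on a sample determinant (a sketch using NumPy's det; the test matrix is our own):

```python
import numpy as np

A = np.array([[1., 2., 3.],
              [0., 4., 5.],
              [1., 0., 6.]])
det = np.linalg.det

print(np.isclose(det(A.T), det(A)))        # Property 1: transposition
B = A.copy(); B[1] *= 7.0
print(np.isclose(det(B), 7.0 * det(A)))    # Property 2: scaling a row
B = A.copy(); B[[0, 2]] = B[[2, 0]]
print(np.isclose(det(B), -det(A)))         # Property 6: swapping rows
B = A.copy(); B[1] += 3.0 * B[0]
print(np.isclose(det(B), det(A)))          # Property 8: adding a multiple
```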

Minors. Algebraic complements. Laplace's theorem.

The triangular reduction method consists in transforming the given determinant so that all its elements lying on one side of one of its diagonals become equal to zero.

Example 8. Calculate the determinant by reduction to triangular form.

Solution. Subtract the first row of the determinant from each of its remaining rows. Then we get


This determinant is equal to the product of the elements of the main diagonal. Thus, we have

Remark. Everything considered above generalizes to determinants of order n.
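A sketch of the triangular reduction method in code: row swaps flip the sign of the determinant (Property 6), adding a multiple of one row to another changes nothing (Property 8), and the result is the product of the diagonal elements (the function name and tolerance are our own):

```python
import numpy as np

def det_by_triangular_reduction(A, tol=1e-12):
    """Compute det(A) by reducing A to upper triangular form with
    elementary row operations, tracking the sign of row swaps."""
    A = A.astype(float).copy()
    n = A.shape[0]
    sign = 1.0
    for col in range(n):
        pivot = col + int(np.argmax(np.abs(A[col:, col])))
        if abs(A[pivot, col]) < tol:
            return 0.0                     # zero column: determinant is 0
        if pivot != col:
            A[[col, pivot]] = A[[pivot, col]]
            sign = -sign                   # Property 6
        for r in range(col + 1, n):
            A[r] -= (A[r, col] / A[col, col]) * A[col]  # Property 8
    return sign * float(np.prod(np.diag(A)))

A = np.array([[2., 1., 3.],
              [4., 5., 6.],
              [7., 8., 9.]])
print(det_by_triangular_reduction(A))  # -9.0
print(np.linalg.det(A))                # agrees
```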

Reducing the matrix to a stepped form. Elementary transformations of rows and columns.

The following transformations are called elementary matrix transformations:

I. Permutation of two columns (rows) of a matrix.

II. Multiplication of all elements of one column (row) of the matrix by the same nonzero number.

III. Adding to the elements of one column (row) the corresponding elements of another column (row), multiplied by the same number.

A matrix obtained from the original matrix by a finite number of elementary transformations is called equivalent to it. This is denoted A ∼ B.

Elementary transformations are used to simplify matrices, which will be further used to solve various problems.

To reduce a matrix to stepped form (Fig. 1.4), proceed as follows.

1. In the first column, choose a nonzero element (the leading element). If the row containing the leading element (the leading row) is not the first, swap it into the place of the first row (a type I transformation). If the first column has no leading element (all its elements are zero), exclude this column and continue the search for a leading element in the remaining part of the matrix. The transformations end when all columns are excluded or the remaining part of the matrix consists of zeros.

2. Divide all elements of the leading row by the leading element (a type II transformation). If the leading row is the last one, the transformations end here.

3. To each row below the leading one, add the leading row multiplied by a number chosen so that the elements below the leading element become equal to zero (a type III transformation).

4. Excluding from consideration the row and the column at whose intersection the leading element stands, return to step 1 and apply all the described actions to the remaining part of the matrix (see the sketch below).
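A compact sketch of steps 1-4; unlike the earlier stepped-form sketch, this one also divides the leading row by the leading element (the type II step), so the leading entries become ones:

```python
import numpy as np

def to_stepped_form_normalized(A, tol=1e-12):
    """Steps 1-4: pick a leading element (type I swap), divide the leading
    row by it (type II), zero out the entries below (type III), then move
    on to the remaining submatrix."""
    A = A.astype(float).copy()
    m, n = A.shape
    row = 0
    for col in range(n):
        if row >= m:
            break
        pivot_rows = [r for r in range(row, m) if abs(A[r, col]) > tol]
        if not pivot_rows:
            continue                                      # no leading element
        A[[row, pivot_rows[0]]] = A[[pivot_rows[0], row]] # step 1
        A[row] /= A[row, col]                             # step 2
        for r in range(row + 1, m):                       # step 3
            A[r] -= A[r, col] * A[row]
        row += 1                                          # step 4
    return A

A = np.array([[0., 3., 6.],
              [2., 4., 2.],
              [1., 2., 1.]])
print(to_stepped_form_normalized(A))  # [[1. 2. 1.] [0. 1. 2.] [0. 0. 0.]]
```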

Example 1.29. Reduce the matrix to stepped form.

The following actions on the rows and columns of the matrix A are called elementary transformations:

1) permutation of two rows or columns of the matrix;

2) multiplying a row or column of a matrix by a nonzero number;

3) adding to one row (column) another row (column).

Theorem. Elementary transformations do not change the rank of a matrix; that is, if the matrix B is obtained from the matrix A by elementary transformations, then r(A) = r(B).

Proof. 1) When two columns of the matrix are swapped, the maximum number of linearly independent columns does not change, and hence its rank does not change either.

2) Let the matrix B be obtained from the matrix A by multiplying the i-th row by a number t ≠ 0, and let r(A) = k. Obviously, any minor of matrix B not containing the i-th row is equal to the corresponding minor of matrix A, and any minor of matrix B containing the i-th row is equal to the corresponding minor of matrix A multiplied by the number t. Therefore, the minor of order k of matrix B corresponding to the basis minor of matrix A is nonzero, and all minors of order k + 1 of matrix B, like all minors of order k + 1 of matrix A, are equal to zero. This means that r(B) = k = r(A).

3) Let the matrix B be obtained from the matrix A by adding the i-th row to the j-th row, and let r(A) = k. Minors of order k + 1 of matrix B not containing the j-th row are equal to the corresponding minors of matrix A, and are therefore zero. Each minor of order k + 1 of matrix B containing both the i-th and the j-th rows is equal to the sum of two determinants, each of which is zero: one of them contains two identical rows (its j-th row consists of the elements of the i-th row), and the other is a minor of order k + 1 of matrix A and is therefore zero. Minors of order k + 1 of matrix B containing the j-th row but not the i-th row are equal to sums of two minors of order k + 1 of matrix A and are therefore also zero. Hence all minors of order k + 1 of matrix B are zero, and r(B) ≤ k = r(A).

Let the matrix C be obtained from the matrix B by multiplying the i-th row by −1. Then the matrix A is obtained from the matrix C by adding the i-th row to the j-th row and multiplying the i-th row by −1. Therefore, as proved above, r(A) ≤ r(C) = r(B). Thus the inequalities r(B) ≤ r(A) and r(A) ≤ r(B) hold simultaneously, which implies r(A) = r(B).

This property of elementary transformations is used in practice to calculate the rank of a matrix. To do this, using elementary transformations, the given (nonzero) matrix A is reduced to a trapezoidal form, that is, to the form

B = (b_ij),

where b_ii ≠ 0 for all i = 1, 2, …, k, and b_ij = 0 for all i > j and all i > k. Obviously, r(B) = k, that is, the rank of the matrix B equals the number of its nonzero rows. This follows from the fact that the minor of order k of the matrix B located at the intersection of its first k rows and columns is a triangular determinant equal to the product b_11 b_22 ⋯ b_kk ≠ 0, while any minor of order k + 1 of the matrix B contains a zero row and is therefore equal to 0 (or, if k = n, there are no such minors at all).

Theorem. Any nonzero matrix A of size m × n can be reduced to trapezoidal form by elementary transformations.

Proof. Since A0, there exists an element of the matrix
... Rearranging the first and i-th rows, the first and j-th columns, move the element to the left top corner matrix and denote
... Then, to the i-th row of the resulting matrix (i = 2,3, ..., m) we add the first row multiplied by the number ... As a result of these elementary transformations, we obtain the matrix

A
.

If all elements of the rows 2, …, m of the resulting matrix are equal to zero, then the theorem is proved. If there is a nonzero element among them, then, by interchanging the second and i-th rows and the second and j-th columns, we move it into the second position of the main diagonal and denote it b_22 (if the element already in that position is nonzero, we immediately denote it b_22). Then to the i-th row of the resulting matrix (i = 3, …, m) we add the second row multiplied by a number chosen so that all elements of the second column below b_22 vanish. As a result, we obtain a matrix in which the first two columns have the required trapezoidal pattern.

Continuing this process, in a finite number of steps we obtain the matrix B, that is, we reduce the matrix A to a trapezoidal form.
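A sketch that follows the theorem in code and returns the rank as the number of nonzero rows of the trapezoidal form (the function name and tolerance are our own; row swaps are used instead of column swaps, which does not affect the rank):

```python
import numpy as np

def rank_by_reduction(A, tol=1e-9):
    """Compute r(A) as the number of nonzero rows after reduction to
    stepped (trapezoidal) form by elementary row transformations."""
    A = np.array(A, dtype=float)
    m, n = A.shape
    row = 0
    for col in range(n):
        if row >= m:
            break
        pivot = row + int(np.argmax(np.abs(A[row:, col])))
        if abs(A[pivot, col]) < tol:
            continue                       # no leading element in this column
        A[[row, pivot]] = A[[pivot, row]]
        for r in range(row + 1, m):
            A[r] -= (A[r, col] / A[row, col]) * A[row]
        row += 1
    return row                             # number of nonzero rows = rank

A = np.array([[1., 2., 3., 4.],
              [2., 4., 6., 8.],
              [0., 1., 1., 0.],
              [1., 3., 4., 4.]])
print(rank_by_reduction(A))           # 2
print(np.linalg.matrix_rank(A))       # agrees
```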

Example. We calculate the rank of the matrix

The arrows indicate the following elementary transformations: 1) swapped the first and second rows; 2) added the third row to the fourth row; 3) added to the third row the first row multiplied by −2, and divided the fourth row by 3; 4) divided the third row by 5 and swapped the third and fourth rows; 5) added to the third row the second row multiplied by −3, and added the third row to the fourth row. It can be seen that the matrix obtained from the matrix A by the indicated elementary transformations has a trapezoidal form with three nonzero rows. Therefore, r(A) = 3.