75. Determinants#

Let \(a, b, c, d\) be any four numbers, real or complex. Then the symbol

\[\begin{vmatrix} a & b\\ c & d\\ \end{vmatrix} \]

denotes \(ad - bc\) and is called a determinant of second order. The numbers \(a, b, c, d\) are called the elements of the determinant and \(ad - bc\) is called the value of the determinant.

As you can see, the elements of a determinant are arranged in the form of a square. The diagonal on which the elements \(a\) and \(d\) lie is called the principal (or primary) diagonal of the determinant, and the diagonal on which \(b\) and \(c\) lie is called the secondary diagonal.

A row is constituted by elements lying in the same horizontal line and a column is constituted by elements lying in the same vertical line.

Clearly, a determinant of second order has two rows and two columns, and its value is equal to the product of the elements along the principal diagonal minus the product of the elements along the secondary diagonal. Thus, by definition

\[\begin{vmatrix} 2 & 4\\ 3 & 9\\ \end{vmatrix} = 18 - 12 = 6 \]

Let \(a_1, a_2, a_3, b_1, b_2, b_3, c_1, c_2, c_3\) be any nine numbers, then the symbol

\[\begin{vmatrix} a_1 & a_2 & a_3\\ b_1 & b_2 & b_3\\ c_1 & c_2 & c_3\\ \end{vmatrix} \]

is called a determinant of third order and stands for

\[a_1\begin{vmatrix} b_2 & b_3\\ c_2 & c_3\\ \end{vmatrix} - a_2\begin{vmatrix} b_1 & b_3\\ c_1 & c_3 \end{vmatrix} + a_3\begin{vmatrix} b_1 & b_2\\ c_1 & c_2 \end{vmatrix}\]

i.e. \(a_1(b_2c_3 - b_3c_2)-a_2(b_1c_3-b_3c_1) + a_3(b_1c_2-b_2c_1)\)

Rule to put a \(+\) or \(-\) sign before any element: find the sum of the row number and the column number in which the element occurs. If the sum is even, put a \(+\) sign before the element; if the sum is odd, put a \(-\) sign before it. Since \(a_1\) occurs in the first row and first column, whose sum \(1 + 1 = 2\) is even, a \(+\) sign is placed before it. Since \(a_2\) occurs in the first row and second column, whose sum \(1 + 2 = 3\) is odd, a \(-\) sign is placed before it.

We have expanded the determinant along the first row in the previous case. The value of the determinant does not change no matter which row or column we expand it along.

Expanding the determinant along second row, we get

\[\begin{vmatrix} a_1 & a_2 & a_3\\ b_1 & b_2 & b_3\\ c_1 & c_2 & c_3 \end{vmatrix} = -b_1\begin{vmatrix} a_2 & a_3\\ c_2 & c_3 \end{vmatrix} + b_2\begin{vmatrix} a_1 & a_3\\ c_1 & c_3 \end{vmatrix} - b_3\begin{vmatrix} a_1 & a_2\\ c_1 & c_2 \end{vmatrix}\]

\(= -b_1(a_2c_3 - a_3c_2) + b_2(a_1c_3 - a_3c_1) - b_3(a_1c_2 - a_2c_1)\)

\(= a_1(b_2c_3 - b_3c_2)-a_2(b_1c_3-b_3c_1) + a_3(b_1c_2-b_2c_1)\)

Thus, we see that the value of the determinant remains unchanged irrespective of the row or column along which it is expanded.
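This can be checked numerically. The following is a small sketch in Python using NumPy (the function name `expand` is my own, not a library routine): it expands a third-order determinant along each row and each column using the \((-1)^{i+j}\) sign rule and compares the results with NumPy's built-in determinant.

```python
import numpy as np

def expand(A, index, axis):
    """Cofactor expansion of a 3x3 determinant along a row (axis=0) or a column (axis=1)."""
    total = 0.0
    for k in range(3):
        i, j = (index, k) if axis == 0 else (k, index)
        minor = np.delete(np.delete(A, i, axis=0), j, axis=1)   # delete row i and column j
        total += (-1) ** (i + j) * A[i, j] * np.linalg.det(minor)
    return total

A = np.array([[2.0, -1, 3], [4, 0, 1], [1, 5, 2]])
print([round(expand(A, r, 0), 6) for r in range(3)])   # expansion along each row
print([round(expand(A, c, 1), 6) for c in range(3)])   # expansion along each column
print(round(np.linalg.det(A), 6))                      # all six values agree
```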

Usually, an element of a determinant is denoted by a letter with two suffixes, the first indicating the row and the second indicating the column in which the element occurs. Thus, \(a_{ij}\) denotes the element lying in the \(i\)th row and \(j\)th column.

We also denote the rows by \(R_1, R_2, R_3\) and so on; \(R_i\) denotes the \(i\)th row of the determinant and \(R_j\) the \(j\)th row. Columns are denoted by \(C_1, C_2, C_3\) and so on; \(C_i\) and \(C_j\) denote the \(i\)th and \(j\)th columns of the determinant.

\(\Delta\) is the usual symbol for a determinant. Another way of denoting the determinant \(\begin{vmatrix}a_1&b_1&c_1\\a_2&b_2&c_2\\a_3&b_3&c_3 \end{vmatrix}\) is \((a_1b_2c_3)\).

The expanded form of determinant has \(n!\) terms where \(n\) is the number of rows or columns.

Ex 1. Find the value of the determinant

\[\Delta = \begin{vmatrix} 1 & 2 & 4\\ 3 & 4 & 9\\ 2 & 1 & 6 \end{vmatrix}\]

Expanding the determinant along the first row

\[\Delta = 1\begin{vmatrix} 4 & 9\\ 1 & 6 \end{vmatrix} -2\begin{vmatrix} 3 & 9\\ 2 & 6 \end{vmatrix} + 4\begin{vmatrix} 3 & 4\\ 2 & 1 \end{vmatrix}\]

\(= 1(24 - 9) - 2(18 - 18) + 4(3 - 8) = 15 - 0 - 20 = -5\)

Ex 2. Find the value of the determinant

\[\Delta = \begin{vmatrix} 3 & 1 & 7\\ 5 & 0 & 2\\ 2 & 5 & 3 \end{vmatrix}\]

Expanding the determinant along second row,

\[\Delta = -5\begin{vmatrix} 1 & 7\\ 5 & 3 \end{vmatrix} + 0\begin{vmatrix} 3 & 7\\ 2 & 3 \end{vmatrix} -2\begin{vmatrix} 3 & 1\\ 2 & 5 \end{vmatrix}\]

\(= -5(3 - 35) + 0 - 2(15 - 2) = 160 - 26 = 134\)
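Both examples can be verified with NumPy's built-in determinant routine; this is only a quick numerical check of the worked answers above.

```python
import numpy as np

d1 = np.array([[1.0, 2, 4], [3, 4, 9], [2, 1, 6]])
d2 = np.array([[3.0, 1, 7], [5, 0, 2], [2, 5, 3]])

print(int(round(np.linalg.det(d1))))   # -5, as in Ex 1
print(int(round(np.linalg.det(d2))))   # 134, as in Ex 2
```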

75.1. Minors#

Consider the determinant

\[\Delta = \begin{vmatrix} a_{11} & a_{12} & a_{13}\\ a_{21} & a_{22} & a_{23}\\ a_{31} & a_{32} & a_{33} \end{vmatrix}\]

If we delete the row and the column containing a particular element \(a_{ij},\) we obtain a second order determinant. The determinant thus obtained is called the minor of \(a_{ij}\) and is denoted by \(M_{ij}.\) Since there are \(9\) elements in the above determinant, we will have \(9\) minors.

For example, the minor of the element \(a_{21}\) is \(M_{21} = \begin{vmatrix}a_{12} & a_{13}\\ a_{32} & a_{33}\end{vmatrix}\)

The minor of the element \(a_{32}\) is \(M_{32} = \begin{vmatrix}a_{11} & a_{13}\\a_{21} & a_{23}\end{vmatrix}\)

If we write the determinant in terms of minors, then the following expression is obtained on expanding along the first row:

\(\Delta = (-1)^{1+1}a_{11}M_{11} + (-1)^{1 + 2}a_{12}M_{12} + (-1)^{1 + 3} a_{13}M_{13}\)

\(=a_{11}M_{11} - a_{12}M_{12} + a_{13}M_{13}\)

75.2. Cofactors#

The minor \(M_{ij}\) multiplied by \((-1)^{i+j}\) is known as the cofactor of the element \(a_{ij}\) and is denoted by \(A_{ij}\).

Thus, we can say that \(\Delta = a_{11}A_{11} + a_{12}A_{12} + a_{13}A_{13}\)
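A minimal sketch of these definitions in Python with NumPy (the helpers `minor` and `cofactor` are illustrative names of my own): the cofactor expansion along the first row reproduces the determinant.

```python
import numpy as np

def minor(A, i, j):
    """Minor M_ij: determinant obtained after deleting row i and column j (0-indexed)."""
    return np.linalg.det(np.delete(np.delete(A, i, axis=0), j, axis=1))

def cofactor(A, i, j):
    """Cofactor A_ij = (-1)^(i+j) * M_ij."""
    return (-1) ** (i + j) * minor(A, i, j)

A = np.array([[1.0, 2, 4], [3, 4, 9], [2, 1, 6]])
# Delta = a11*A11 + a12*A12 + a13*A13 (expansion along the first row)
expansion = sum(A[0, j] * cofactor(A, 0, j) for j in range(3))
print(round(expansion, 6), round(np.linalg.det(A), 6))   # both give the same value
```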

75.3. Theorems on Determinants#

Theorem I. The value of a determinant is not changed when rows are changed into corresponding columns.

Proof: Let \(\Delta = \begin{vmatrix}a_1 & b_1 & c_1\\a_2 & b_2 & c_2\\ a_3 & b_3 & c_3\end{vmatrix}\)

Expanding the determinant along first row, \(\Delta = a_1(b_2c_3 - b_3c_2) - b_1(a_2c_3 - a_3c_2) + c_1(a_2b_3 - a_3b_2)\)

If \(\Delta^{\prime}\) be the value of the determinant when rows of determinant \(\Delta\) are changed into corresponding columns then

\(\Delta^{\prime} = \begin{vmatrix}a_1&a_2&a_3\\b_1&b_2 & b_3\\ c_1 & c_2 & c_3\end{vmatrix}\)

\(= a_1(b_2c_3 - b_3c_2) - a_2(b_1c_3 - b_3c_1) + a_3(b_1c_2 - b_2c_1)\)

\(= a_1(b_2c_3 - b_3c_2) - a_2b_1c_3 + a_2b_3c_1 + a_3b_1c_2 - a_3b_2c_1\)

\(= a_1(b_2c_3 - b_3c_2) - b_1(a_2c_3 - a_3c_2) + c_1(a_2b_3 - a_3b_2)\)

Thus, we see that \(\Delta = \Delta^{\prime}\)
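Theorem I says that a determinant is unchanged when its rows become columns, i.e. it equals the determinant of its transpose; a quick numerical sketch:

```python
import numpy as np

A = np.array([[1.0, 2, 4], [3, 4, 9], [2, 1, 6]])
print(np.isclose(np.linalg.det(A), np.linalg.det(A.T)))   # True: rows changed into columns
```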

Theorem II. If any two rows or columns of a determinant are interchanged, the sign of the determinant is changed, but its numerical value remains the same.

Proof: Let \(\Delta = \begin{vmatrix}a_1 & b_1 & c_1\\a_2 & b_2 & c_2\\ a_3 & b_3 & c_3\end{vmatrix},\)

Expanding the determinant along first row, \(\Delta = a_1(b_2c_3 - b_3c_2) - b_1(a_2c_3 - a_3c_2) + c_1(a_2b_3 - a_3b_2)\)

Now \(\Delta^{\prime} = \begin{vmatrix}a_3 & b_3 & c_3\\a_2 & b_2 & c_2\\ a_1 & b_1 & c_1\end{vmatrix} [R_1 \leftrightarrow R_3]\)

\(= a_3(b_2c_1 - b_1c_2) - b_3(a_2c_1 - a_1c_2) + c_3(a_2b_1 - a_1b_2)\)

\(= a_3b_2c_1 - a_3b_1c_2 - a_2b_3c_1 + a_1b_3c_2 + a_2b_1c_3 - a_1b_2c_3\)

\(= -a_1(b_2c_3 - b_3c_2) + b_1(a_2c_3 - a_3c_2) - c_1(a_2b_3 - a_3b_2)\)

\(= -\Delta\)

Theorem III. The value of a determinant is zero if any two rows or columns are identical.

Proof: Let \(\Delta = \begin{vmatrix}a_1 & b_1 & c_1\\a_2 & b_2 & c_2 \\ a_1 & b_1 & c_1\end{vmatrix}\)

Interchanging \(R_1\) and \(R_3\) leaves the determinant unchanged, since these two rows are identical, but by Theorem II the interchange must change its sign. Hence \(\Delta = \begin{vmatrix}a_1 & b_1 & c_1\\a_2 & b_2 & c_2 \\ a_1 & b_1 & c_1\end{vmatrix} = - \begin{vmatrix}a_1 & b_1 & c_1\\a_2 & b_2 & c_2 \\ a_1 & b_1 & c_1\end{vmatrix} = -\Delta \; [R_1\leftrightarrow R_3]\)

Thus, \(\Delta = -\Delta \Rightarrow 2\Delta = 0 \Rightarrow \Delta = 0\)
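Theorems II and III can be illustrated numerically as follows (a sketch with arbitrarily chosen rows):

```python
import numpy as np

A = np.array([[1.0, 2, 4], [3, 4, 9], [2, 1, 6]])

# Theorem II: interchanging two rows changes only the sign of the determinant.
B = A[[2, 1, 0], :]                                        # R1 <-> R3
print(np.isclose(np.linalg.det(B), -np.linalg.det(A)))     # True

# Theorem III: a determinant with two identical rows is zero.
C = np.array([[1.0, 2, 4], [3, 4, 9], [1, 2, 4]])          # R3 = R1
print(np.isclose(np.linalg.det(C), 0))                     # True
```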

Theorem IV. A common factor of all elements of any row (or of any column) may be taken outside the sign of the determinant. In other words, if all the elements of the same row (or the same column) are multiplied by a constant, then the determinant is multiplied by that constant.

Proof: Let \(\Delta = \begin{vmatrix}a_1 & b_1 & c_1\\a_2 & b_2 & c_2 \\ a_3 & b_3 & c_3\end{vmatrix}\)

Expanding the determinant along first row, \(\Delta = a_1(b_2c_3 - b_3c_2) - b_1(a_2c_3 - a_3c_2) + c_1(a_2b_3 - a_3b_2)\)

and \(\Delta^{\prime} = \begin{vmatrix}ma_1 & mb_1 & mc_1\\a_2 & b_2 & c_2 \\ a_3 & b_3 & c_3\end{vmatrix}\)

\(= ma_1(b_2c_3 - b_3c_2) - mb_1(a_2c_3 - a_3c_2) + mc_1(a_2b_3 - a_3b_2)\)

\(= m\Delta\)
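A numerical check of Theorem IV (a sketch): multiplying one row by a constant \(m\) multiplies the determinant by \(m\).

```python
import numpy as np

A = np.array([[1.0, 2, 4], [3, 4, 9], [2, 1, 6]])
m = 5.0
B = A.copy()
B[0, :] *= m                                                # multiply the first row by m
print(np.isclose(np.linalg.det(B), m * np.linalg.det(A)))   # True
```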

Theorem V. If every element of some row or column is the sum of two terms, then the determinant is equal to the sum of two determinants; one containing only the first term in place of each sum, the other only the second term. The remaining elements of both determinants are the same as in the given determinant.

Proof: We have to prove that

\(\begin{vmatrix}a_1 + \alpha_1 & b_1 & c_1\\a_2 + \alpha_2 & b_2 & c_2\\ a_3 + \alpha_3 & b_3 & c_3\end{vmatrix} = \begin{vmatrix}a_1 & b_1 & c_1\\a_2 & b_2 & c_2 \\ a_3 & b_3 & c_3\end{vmatrix} + \begin{vmatrix}\alpha_1 & b_1 & c_1\\\alpha_2 & b_2 & c_2 \\ \alpha_3 & b_3 & c_3\end{vmatrix}\)

Let \(\Delta = \begin{vmatrix}a_1 + \alpha_1 & b_1 & c_1\\a_2 + \alpha_2 & b_2 & c_2\\ a_3 + \alpha_3 & b_3 & c_3\end{vmatrix}\)

Then, \(\Delta = (a_1 + \alpha_1)\begin{vmatrix}b_2 & c_2 \\ b_3 & c_3\end{vmatrix} - (a_2 + \alpha_2)\begin{vmatrix}b_1 & c_1\\b_3 & c_3\end{vmatrix} + (a_3 + \alpha_3)\begin{vmatrix}b_1 & c_1\\b_2 & c_2\end{vmatrix}\)

\(= a_1\begin{vmatrix}b_2 & c_2 \\ b_3 & c_3\end{vmatrix} - a_2\begin{vmatrix}b_1 & c_1\\b_3 & c_3\end{vmatrix} + a_3\begin{vmatrix}b_1 & c_1\\b_2 & c_2\end{vmatrix} + \alpha_1\begin{vmatrix}b_2 & c_2 \\ b_3 & c_3\end{vmatrix} - \alpha_2\begin{vmatrix}b_1 & c_1\\b_3 & c_3\end{vmatrix} + \alpha_3\begin{vmatrix}b_1 & c_1\\b_2 & c_2\end{vmatrix}\)

\(= \begin{vmatrix}a_1 & b_1 & c_1\\a_2 & b_2 & c_2 \\ a_3 & b_3 & c_3\end{vmatrix} + \begin{vmatrix}\alpha_1 & b_1 & c_1\\\alpha_2 & b_2 & c_2 \\ \alpha_3 & b_3 & c_3\end{vmatrix}\)

Hence proved.
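Theorem V can also be checked numerically; in the sketch below the first column is split as \(a_i + \alpha_i\) while the other two columns are kept fixed.

```python
import numpy as np

a     = np.array([1.0, 3, 2])                   # a1, a2, a3
alpha = np.array([4.0, -1, 2])                  # alpha1, alpha2, alpha3
rest  = np.array([[2.0, 4], [4, 9], [1, 6]])    # the b and c columns

split  = np.column_stack([a + alpha, rest])
first  = np.column_stack([a, rest])
second = np.column_stack([alpha, rest])
print(np.isclose(np.linalg.det(split),
                 np.linalg.det(first) + np.linalg.det(second)))   # True
```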

Theorem VI. The value of a determinant does not change when any row or column is multiplied by a number or an expression and is then added to or subtracted from any other row or column.

Proof: We have to prove that

\(\begin{vmatrix}a_1 & b_1 & c_1\\a_2 & b_2 & c_2 \\ a_3 & b_3 & c_3\end{vmatrix} = \begin{vmatrix}a_1 + mb_1 & b_1 & c_1\\a_2 + mb_2 & b_2 & c_2 \\ a_3 + mb_3 & b_3 & c_3\end{vmatrix}\)

Let \(\Delta = \begin{vmatrix}a_1 + mb_1 & b_1 & c_1\\a_2 + mb_2 & b_2 & c_2 \\ a_3 + mb_3 & b_3 & c_3\end{vmatrix}\)

then \(\Delta = \begin{vmatrix}a_1 & b_1 & c_1\\a_2 & b_2 & c_2 \\ a_3 & b_3 & c_3\end{vmatrix} + \begin{vmatrix}mb_1 & b_1 & c_1\\mb_2 & b_2 & c_2 \\mb_3 & b_3 & c_3\end{vmatrix}\)

\(= \begin{vmatrix}a_1 & b_1 & c_1\\a_2 & b_2 & c_2 \\ a_3 & b_3 & c_3\end{vmatrix} + m \begin{vmatrix}b_1 & b_1 & c_1\\b_2 & b_2 & c_2 \\ b_3 & b_3 & c_3\end{vmatrix}\)

\(= \begin{vmatrix}a_1 & b_1 & c_1\\a_2 & b_2 & c_2 \\ a_3 & b_3 & c_3\end{vmatrix} + m\cdot 0 = \begin{vmatrix}a_1 & b_1 & c_1\\a_2 & b_2 & c_2 \\ a_3 & b_3 & c_3\end{vmatrix}\), since the second determinant has two identical columns and therefore vanishes by Theorem III. This proves the result.
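A quick check of Theorem VI (a sketch): adding \(m\) times the second column to the first column leaves the determinant unchanged.

```python
import numpy as np

A = np.array([[1.0, 2, 4], [3, 4, 9], [2, 1, 6]])
m = 7.0
B = A.copy()
B[:, 0] += m * B[:, 1]                                   # C1 -> C1 + m*C2
print(np.isclose(np.linalg.det(B), np.linalg.det(A)))    # True
```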

75.4. System of Linear Equations#

75.4.1. Consistent Linear Equations#

A system of linear equations is said to be consistent if it has at least one solution.

Example: (i) The system of equations \(x + y = 2\) and \(2x + 2y = 7\) is inconsistent because it has no solution, i.e. no values of \(x\) and \(y\) exist which satisfy both equations. (ii) On the other hand, the equations \(x + y = 2\) and \(x - y = 0\) have the solution \(x = 1, y = 1,\) which satisfies both equations, making this a consistent system of linear equations.

75.5. Cramer’s Rule#

Cramer’s rule is used to solve a system of linear equations using determinants. Consider two equations \(a_1x + b_1y + c_1 = 0\) and \(a_2x + b_2y + c_2 = 0\) where \(\frac{a_1}{a_2}\neq \frac{b_1}{b_2}\)

Solving this by cross multiplication, we have,

\(\frac{x}{b_1c_2 - b_2c_1} = \frac{-y}{a_1c_2 - a_2c_1} = \frac{1}{a_1b_2 - a_2b_1}\)

i.e. \(\frac{x}{\begin{vmatrix}b_1 & c_1\\b_2 & c_2\end{vmatrix}} = \frac{-y}{\begin{vmatrix}a_1& c_1\\a_2 & c_2\end{vmatrix}} = \frac{1}{\begin{vmatrix}a_1&b_1\\a_2 & b_2\end{vmatrix}}\)
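A small sketch of the two-variable rule, with the equations written as \(a_ix + b_iy + c_i = 0\) to match the text (the numbers are arbitrary and `det2` is a helper of my own):

```python
def det2(p, q, r, s):
    """Second-order determinant |p q; r s| = p*s - q*r."""
    return p * s - q * r

# Equations: 2x + 3y - 8 = 0 and x - y - 1 = 0
a1, b1, c1 = 2.0, 3.0, -8.0
a2, b2, c2 = 1.0, -1.0, -1.0

D = det2(a1, b1, a2, b2)                     # |a1 b1; a2 b2|, assumed non-zero
x =  det2(b1, c1, b2, c2) / D                # from  x / |b1 c1; b2 c2| = 1 / D
y = -det2(a1, c1, a2, c2) / D                # from -y / |a1 c1; a2 c2| = 1 / D
print(x, y)                                  # 2.2 1.2
print(a1*x + b1*y + c1, a2*x + b2*y + c2)    # both (approximately) zero
```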

75.5.1. System of Linear Equations in Three Variables#

Let the given system of linear equations in \(x, y\) and \(z\) be

\(a_1x + b_1y + c_1z = d_1\)

\(a_2x + b_2y + c_2z = d_2\)

\(a_3x + b_3y + c_3z = d_3\)

Let \(\Delta = \begin{vmatrix}a_1&b_1&c_1\\a_2&b_2&c_2\\a_3&b_3&c_3\end{vmatrix}\)

\(\Delta_1 = \begin{vmatrix}d_1&b_1&c_1\\d_2&b_2&c_2\\d_3&b_3&c_3\end{vmatrix}\)

\(\Delta_2 = \begin{vmatrix}a_1&d_1&c_1\\a_2&d_2&c_2\\a_3&d_3&c_3\end{vmatrix}\)

\(\Delta_3 = \begin{vmatrix}a_1&b_1&d_1\\a_2&b_2&d_2\\a_3&b_3&d_3\end{vmatrix}\)

Let \(\Delta \neq 0\)

Now \(\Delta_1 = \begin{vmatrix}d_1&b_1&c_1\\d_2&b_2&c_2\\d_3&b_3&c_3\end{vmatrix} = \begin{vmatrix}a_1x + b_1y + c_1z& b_1 & c_1\\a_2x + b_2y + c_2z & b_2 & c_2 \\a_3x + b_3y + c_3z & b_3 & c_3\end{vmatrix}\)

\(= \begin{vmatrix}a_1x & b_1 & c_1\\a_2x & b_2 & c_2 \\a_3x & b_3 & c_3\end{vmatrix}[C_1\rightarrow C_1 - yC_2 -zC_3]\)

\(= x\begin{vmatrix}a_1&b_1&c_1\\a_2&b_2&c_2\\a_3&b_3&c_3\end{vmatrix} = x\Delta\)

\(\therefore x = \frac{\Delta_1}{\Delta}\)

Similarly, \(y = \frac{\Delta_2}{\Delta}, z = \frac{\Delta_3}{\Delta}\)

This rule which gives the values of \(x, y\) and \(z\) is known as Cramer’s rule.
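Cramer's rule for three variables translates directly into code; below is a minimal sketch with an arbitrary system whose solution happens to be \(x = y = z = 1\).

```python
import numpy as np

# System: x + 2y + 4z = 7, 3x + 4y + 9z = 16, 2x + y + 6z = 9
A = np.array([[1.0, 2, 4], [3, 4, 9], [2, 1, 6]])
d = np.array([7.0, 16, 9])

delta = np.linalg.det(A)                 # must be non-zero for a unique solution
solution = []
for k in range(3):
    Ak = A.copy()
    Ak[:, k] = d                         # replace the k-th column by the constants d_i
    solution.append(np.linalg.det(Ak) / delta)   # Delta_k / Delta

print(np.round(solution, 6))                     # [1. 1. 1.]
print(np.allclose(A @ np.array(solution), d))    # True: the values satisfy the system
```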

75.5.2. Nature of Solution of System of Linear Equations#

From previous section we have arrived at the fact that \(x\Delta = \Delta_1, y\Delta = \Delta_2, z\Delta = \Delta_3\)

Case I. When \(\Delta \neq 0\)

In this case unique values of \(x, y, z\) will be obtained and the system of equations will have a unique solution.

Case II. When \(\Delta = 0\)

Sub Case I. When at least one of \(\Delta_1, \Delta_2, \Delta_3\) is non-zero.

Let \(\Delta_1 \neq 0.\) Then \(x\Delta = \Delta_1\) cannot be satisfied for any value of \(x,\) because \(\Delta = 0\) makes the left-hand side zero while \(\Delta_1 \neq 0.\) Hence no value of \(x\) is possible in this case, and the same holds for \(y\) and \(z.\)

Thus, no solution is possible and the system of equations is inconsistent.

Sub Case II. When \(\Delta_1 = \Delta_2 = \Delta_3 = 0\)

In this case the system may have an infinite number of solutions; the equations must be examined further, since the system may also turn out to be inconsistent.

75.5.3. Condition for Consistency of Three Linear Equations in Two Unknowns#

Consider a system of three linear equations in \(x\) and \(y:\) \(a_1x + b_1y + c_1 = 0,\) \(a_2x + b_2y + c_2 = 0\) and \(a_3x+ b_3y + c_3 = 0.\)

The system will be consistent if the values of \(x\) and \(y\) obtained from any two of the equations satisfy the third equation.

Solving first two equations by Cramer’s rule, we have

\(\frac{x}{\begin{vmatrix}b_1 & c_1\\b_2 & c_2\end{vmatrix}} = \frac{-y}{\begin{vmatrix}a_1 & c_1\\a_2 & c_2\end{vmatrix}} = \frac{1}{\begin{vmatrix}a_1 & b_1\\a_2 & b_2\end{vmatrix}} = k(\text{say})\)

Substituting these values in the third equation, we get

\(k[a_3(b_1c_2 - b_2c_1) - b_3(a_1c_2 - a_2c_1) + c_3(a_1b_2 - a_2b_1)] = 0\)

Since \(k \neq 0,\) it follows that \(a_3(b_1c_2 - b_2c_1) - b_3(a_1c_2 - a_2c_1) + c_3(a_1b_2 - a_2b_1) = 0\)

\(\begin{vmatrix}a_1&b_1&c_1\\a_2&b_2&c_2\\a_3&b_3&c_3\end{vmatrix} = 0\)

This is the required condition for the consistency of three linear equations in two variables. If such a system of equations is consistent, then the number of solutions is one, i.e. a unique solution exists.
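A sketch of this consistency test: three lines chosen to pass through the common point \((1, 2)\) give a vanishing determinant of coefficients.

```python
import numpy as np

# Equations a_i*x + b_i*y + c_i = 0, all satisfied by (x, y) = (1, 2)
eqs = np.array([[1.0,  1, -3],    # x + y - 3 = 0
                [2.0, -1,  0],    # 2x - y = 0
                [1.0, -3,  5]])   # x - 3y + 5 = 0

print(np.isclose(np.linalg.det(eqs), 0))   # True: the system is consistent
```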

75.5.4. System of Homogeneous Linear Equations#

A system of linear equations is said to be homogeneous if the constant term in each equation is zero, i.e. the sum of the powers of the variables in every term is one.

Let the three homogeneous equations in three unknowns \(x, y, z\) be

\(a_1x + b_1y + c_1z = 0\)

\(a_2x + b_2y + c_2z = 0\)

\(a_3x + b_3y + c_3z = 0\)

Clearly, \(x = 0, y = 0, z= 0\) is a solution of the above system of equations. This solution is called the trivial solution and any other solution is called a non-trivial solution. Let the above system of equations have a non-trivial solution.

Let \(\Delta = \begin{vmatrix}a_1&b_1&c_1\\a_2&b_2&c_2\\a_3&b_3&c_3\end{vmatrix}\)

From the first two equations, we have

\(\frac{x}{\begin{vmatrix}b_1 & c_1\\b_2 & c_2\end{vmatrix}} = \frac{-y}{\begin{vmatrix}a_1 & c_1\\a_2 & c_2\end{vmatrix}} = \frac{z}{\begin{vmatrix}a_1 & b_1\\a_2 & b_2\end{vmatrix}} = k(\text{say})\)

Substituting these values in the third equation, we get

\(k[a_3(b_1c_2 - b_2c_1) - b_3(a_1c_2 - a_2c_1) + c_3(a_1b_2 - a_2b_1)] = 0\)

Since \(k \neq 0\) for a non-trivial solution, \(a_3(b_1c_2 - b_2c_1) - b_3(a_1c_2 - a_2c_1) + c_3(a_1b_2 - a_2b_1) = 0\)

\(\begin{vmatrix}a_1&b_1&c_1\\a_2&b_2&c_2\\a_3&b_3&c_3\end{vmatrix} = 0\)

This is the condition for the system of equations to have non-trivial solutions.
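Numerically, the connection between a vanishing coefficient determinant and the existence of a non-trivial solution can be illustrated as below (a sketch; the null-space vector is read off from the singular value decomposition).

```python
import numpy as np

# Homogeneous system whose third equation is the sum of the first two
A = np.array([[1.0, 2, 4],
              [3.0, 4, 9],
              [4.0, 6, 13]])

print(np.isclose(np.linalg.det(A), 0))      # True, so non-trivial solutions exist

_, _, Vt = np.linalg.svd(A)
x = Vt[-1]                                  # a non-trivial solution: A @ x is (numerically) zero
print(np.allclose(A @ x, 0), np.linalg.norm(x) > 0)   # True True
```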

75.6. Use of Determinants in Coordinate Geometry#

75.6.1. Area of a Triangle#

The area of a triangle whose vertices are \((x_1, y_1), (x_2, y_2)\) and \((x_3, y_3)\) is the absolute value of

\(\frac{1}{2}\begin{vmatrix}x_1 & y_1 & 1\\x_2 & y_2 & 1\\x_3 & y_3 & 1\end{vmatrix}\)
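A small sketch of the area formula, checked on a right triangle with legs \(3\) and \(4\) (area \(6\)):

```python
import numpy as np

def triangle_area(p1, p2, p3):
    """Half the absolute value of the determinant |x1 y1 1; x2 y2 1; x3 y3 1|."""
    M = np.array([[p1[0], p1[1], 1.0],
                  [p2[0], p2[1], 1.0],
                  [p3[0], p3[1], 1.0]])
    return 0.5 * abs(np.linalg.det(M))

print(triangle_area((0, 0), (3, 0), (0, 4)))   # 6.0
```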

75.6.2. Condition of Concurrency of Three Lines#

Three lines are said to be concurrent if they pass through a common point i.e. they meet at a point.

Let \(a_1x + b_1y + c_1 = 0\) \(a_2x + b_2y + c_2 = 0\) and \(a_3x+ b_3y + c_3 = 0\) be three lines.

These lines will be concurrent if \(\begin{vmatrix}a_1&b_1&c_1\\a_2&b_2&c_2\\a_3&b_3&c_3\end{vmatrix} = 0\)

75.6.3. Condition for General Equation in Second Degree to Represent a Pair of Straight Lines#

The general second degree equation \(ax^2 + 2hxy + by^2 + 2gx + 2fy + c = 0\) represents a pair of straight lines if

\(\begin{vmatrix}a & h & g\\h & b & f\\g & f & c\end{vmatrix} = 0\)
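As a quick check (a sketch), take the pair of lines \((x + y - 1)(x - y + 2) = 0\); expanding gives \(x^2 - y^2 + x + 3y - 2 = 0,\) so \(a = 1, b = -1, h = 0, g = \tfrac{1}{2}, f = \tfrac{3}{2}, c = -2,\) and the determinant condition is satisfied.

```python
import numpy as np

# Coefficients of x^2 - y^2 + x + 3y - 2 = 0, which factors into two straight lines
a, b, c, h, g, f = 1.0, -1.0, -2.0, 0.0, 0.5, 1.5
M = np.array([[a, h, g],
              [h, b, f],
              [g, f, c]])
print(np.isclose(np.linalg.det(M), 0))   # True
```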

75.7. Product of Two Determinants#

Let \(\Delta_1 = \begin{vmatrix}a_1 & a_2 & a_3\\b_1 & b_2 & b_3\\c_1 & c_2 & c_3\end{vmatrix}\) and \(\Delta_2 = \begin{vmatrix}x_1 & x_2 & x_3\\y_1 & y_2 & y_3\\z_1 & z_2 & z_3\end{vmatrix}\), then the product \(\Delta_1\Delta_2\) can be written (multiplying row by row) as

\(\Delta_1\Delta_2 = \begin{vmatrix}a_1x_1 + a_2x_2 + a_3x_3 & a_1y_1 + a_2y_2 + a_3y_3 & a_1z_1 + a_2z_2 + a_3z_3\\b_1x_1 + b_2x_2 + b_3x_3 & b_1y_1 + b_2y_2 + b_3y_3 & b_1z_1 + b_2z_2 + b_3z_3 \\ c_1x_1 + c_2x_2 + c_3x_3 & c_1y_1 + c_2y_2 + c_3y_3 & c_1z_1 + c_2z_2 + c_3z_3\end{vmatrix}\)
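The array on the right is obtained by combining the rows of \(\Delta_1\) with the rows of \(\Delta_2\); a numerical sketch of this product rule:

```python
import numpy as np

A = np.array([[1.0, 2, 4], [3, 4, 9], [2, 1, 6]])     # array of Delta_1
B = np.array([[2.0, 0, 1], [1, 3, 5], [4, 1, 2]])     # array of Delta_2

row_by_row = A @ B.T      # entry (i, j) = row i of A dotted with row j of B
print(np.isclose(np.linalg.det(row_by_row),
                 np.linalg.det(A) * np.linalg.det(B)))   # True
```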

75.8. An Important Result#

If \(\Delta = \begin{vmatrix}a_1 & a_2 & a_3\\b_1 & b_2 & b_3\\c_1 & c_2 & c_3\end{vmatrix}\) then \(\begin{vmatrix}A_1 & A_2 & A_3\\B_1 & B_2 & B_3\\C_1 & C_2 & C_3\end{vmatrix} = \Delta^2\)

where capital letters denote the cofactors of corresponding small letters in \(\Delta\) i.e. \(A_i =\) cofactor of \(a_i, B_i =\) cofactor of \(b_i\) and \(C_i =\) cofactor of \(c_i\) in the determinant \(\Delta\)

We know that,

\(a_1A_1 + a_2A_2 + a_3A_3 = \Delta,\) \(b_1B_1 + b_2B_2 + b_3B_3 = \Delta,\) \(c_1C_1 + c_2C_2 + c_3C_3 = \Delta\) and \(a_1B_1 + a_2B_2 + a_3B_3 = 0,\) \(b_1A_1 + b_2A_2 + b_3A_3 = 0,\) \(a_1C_1 + a_2C_2 + a_3C_3 = 0,\) \(c_1A_1 + c_2A_2 + c_3A_3 = 0,\) \(b_1C_1 + b_2C_2 + b_3C_3 = 0,\) \(c_1B_1 + c_2B_2 + c_3B_3 = 0\)

Let \(\Delta_1 = \begin{vmatrix}A_1 & A_2 & A_3\\B_1 & B_2 & B_3\\C_1 & C_2 & C_3\end{vmatrix}\)

Now, \(\Delta\Delta_1 = \begin{vmatrix}a_1 & a_2 & a_3\\b_1 & b_2 & b_3\\c_1 & c_2 & c_3\end{vmatrix}\begin{vmatrix}A_1 & A_2 & A_3\\B_1 & B_2 & B_3\\C_1 & C_2 & C_3\end{vmatrix}\)

\(= \begin{vmatrix}a_1A_1 + a_2A_2 + a_3A_3 & a_1B_1 + a_2B_2 + a_3B_3 & a_1C_1 + a_2C_2 + a_3C_3\\b_1A_1 + b_2A_2 + b_3A_3 & b_1B_1 + b_2B_2 + b_3B_3 & b_1C_1 + b_2C_2 + b_3C_3\\c_1A_1 + c_2A_2 + c_3A_3 & c_1B_1 + c_2B_2 + c_3B_3 & c_1C_1 + c_2C_2 + c_3C_3\end{vmatrix}\)

\(= \begin{vmatrix}\Delta & 0 & 0\\0 & \Delta & 0\\0 & 0 & \Delta\end{vmatrix}\)

\(\Delta\Delta_1= \Delta^3\)

\(\Rightarrow \Delta_1 = \Delta^2\) (dividing both sides by \(\Delta\), assuming \(\Delta \neq 0\))
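The result can be confirmed numerically; the cofactor array below is built element by element from the definition (a sketch, with a helper name of my own).

```python
import numpy as np

def cofactor_matrix(A):
    """Array whose (i, j) entry is the cofactor of A[i, j]."""
    n = A.shape[0]
    C = np.empty_like(A)
    for i in range(n):
        for j in range(n):
            minor = np.delete(np.delete(A, i, axis=0), j, axis=1)
            C[i, j] = (-1) ** (i + j) * np.linalg.det(minor)
    return C

A = np.array([[1.0, 2, 4], [3, 4, 9], [2, 1, 6]])
print(np.isclose(np.linalg.det(cofactor_matrix(A)),
                 np.linalg.det(A) ** 2))   # True: determinant of cofactors equals Delta^2
```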

75.9. Differential Coefficient of Determinant#

Let \(y = \begin{vmatrix}f_1(x) & f_2(x) & f_3(x)\\g_1(x) & g_2(x) & g_3(x)\\h_1(x) & h_2(x) & h_3(x)\end{vmatrix}\) where \(f_i(x), g_i(x), h_i(x), i= 1, 2, 3\) are differentiable functions of \(x.\)

Now, \(y = f_1(x)[g_2(x)h_3(x) - g_3(x)h_2(x)] - f_2(x)[g_1(x)h_3(x) - g_3(x)h_1(x)] +\) \(f_3(x)[g_1(x)h_2(x) - g_2(x)h_1(x)]\)

\(\therefore \frac{dy}{dx} = f_1^{\prime}(x)[g_2(x)h_3(x) - g_3(x)h_2(x)] + f_1(x)[g_2^{\prime}(x)h_3(x) - g_3^{\prime}(x)h_2(x) + g_2(x)h_3^{\prime}(x) - g_3(x)h_2^{\prime}(x)] - f_2^{\prime}(x)[g_1(x)h_3(x) - g_3(x)h_1(x)] - f_2(x)[g_1^{\prime}(x)h_3(x) - g_3^{\prime}(x)h_1(x) + g_1(x)h_3^{\prime}(x) - g_3(x)h_1^{\prime}(x)] + f_3^{\prime}(x)[g_1(x)h_2(x) - g_2(x)h_1(x)] + f_3(x)[g_1^{\prime}(x)h_2(x) - g_2^{\prime}(x)h_1(x) + g_1(x)h_2^{\prime}(x) - g_2(x)h_1^{\prime}(x)]\)

\(= \begin{vmatrix}f_1^{\prime}(x) & f_2^{\prime}(x) & f_3^{\prime}(x)\\g_1(x) & g_2(x) & g_3(x)\\h_1(x) & h_2(x) & h_3(x)\end{vmatrix} + \begin{vmatrix}f_1(x) & f_2(x) & f_3(x)\\g_1^{\prime}(x) & g_2^{\prime}(x) & g_3^{\prime}(x) \\h_1(x) & h_2(x) & h_3(x) \end{vmatrix} + \begin{vmatrix}f_1(x) & f_2(x) & f_3(x)\\g_1(x) & g_2(x) & g_3(x)\\h_1^{\prime}(x) & h_2^{\prime}(x) & h_3^{\prime}(x)\end{vmatrix}\)
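The row-by-row differentiation rule can be verified symbolically with SymPy; the sketch below compares the direct derivative of a determinant of functions of \(x\) with the sum of the three determinants above (the entries are arbitrary differentiable functions chosen for illustration).

```python
import sympy as sp

x = sp.symbols('x', positive=True)
M = sp.Matrix([[x,         x**2,      sp.sin(x)],
               [sp.exp(x), x,         sp.cos(x)],
               [1,         sp.log(x), x**3     ]])

direct = sp.diff(M.det(), x)       # differentiate the expanded determinant

# Sum of three determinants, each with exactly one row differentiated
row_wise = sp.S.Zero
for i in range(3):
    rows = [list(M.row(k)) for k in range(3)]
    rows[i] = [sp.diff(entry, x) for entry in rows[i]]
    row_wise += sp.Matrix(rows).det()

print(sp.simplify(direct - row_wise))   # 0, confirming the rule
```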