1911 Encyclopædia Britannica/Algebraic Forms

1911 Encyclopædia Britannica, Volume 1 — Algebraic Forms, by Percy Alexander MacMahon

ALGEBRAIC FORMS. The subject-matter of algebraic forms is to a large extent connected with the linear transformation of algebraical polynomials which involve two or more variables. The theories of determinants and of symmetric functions and of the algebra of differential operations have an important bearing upon this comparatively new branch of mathematics. They are the chief instruments of research, and have themselves much benefited by being so employed. When a homogeneous polynomial is transformed by general linear substitutions as hereafter explained, and is then expressed in the original form with new coefficients affecting the new variables, certain functions of the new coefficients and variables are numerical multiples of the same functions of the original coefficients and variables. The investigation of the properties of these functions, as well for a single form as for a simultaneous set of forms, and as well for one as for many series of variables, is included in the theory of invariants. As far back as 1773 Joseph Louis Lagrange, and later Carl Friedrich Gauss, had met with simple cases of such functions; George Boole, in 1841 (Camb. Math. Journ. iii. pp. 1-20), made important steps, but it was not till 1845 that Arthur Cayley (Coll. Math. Papers, i. pp. 80-94, 95-112) showed by his calculus of hyper-determinants that an infinite series of such functions might be obtained systematically. The subject was carried on over a long series of years by himself, J. J. Sylvester, G. Salmon, L. O. Hesse, S. H. Aronhold, C. Hermite, Francesco Brioschi, R. F. A. Clebsch, P. Gordan, &c. The year 1868 saw a considerable enlargement of the field of operations. This arose from the study by Felix Klein and Sophus Lie of a new theory of groups of substitutions; it was shown that there exists an invariant theory connected with every group of linear substitutions. The invariant theory then existing was classified by them as appertaining to “finite continuous groups.” Other “Galois” groups were defined whose substitution coefficients have fixed numerical values, and are particularly associated with the theory of equations. Arithmetical groups, connected with the theory of quadratic forms and other branches of the theory of numbers, which are termed “discontinuous,” and infinite groups connected with differential forms and equations, came into existence, and also particular linear and higher transformations connected with analysis and geometry. The effect of this was to co-ordinate many branches of mathematics and greatly to increase the number of workers. The subject of transformation in general has been treated by Sophus Lie in the classical work Theorie der Transformationsgruppen. The present article is merely concerned with algebraical linear transformation. Two methods of treatment have been carried on in parallel lines, the unsymbolic and the symbolic; both of these originated with Cayley, but he with Sylvester and the English school have in the main confined themselves to the former, whilst Aronhold, Clebsch, Gordan, and the continental schools have principally restricted themselves to the latter. The two methods have been conducted so as to be in constant touch, though the nature of the results obtained by the one differs much from those which flow naturally from the other. Each has been singularly successful in discovering new lines of advance and in encouraging the other to renewed efforts.
P. Gordan first proved that for any system of forms there exists a finite number of covariants, in terms of which all others are expressible as rational and integral functions. This enabled David Hilbert to produce a very simple unsymbolic proof of the same theorem. So the theory of the forms appertaining to a binary form of unrestricted order was first worked out by Cayley and P. A. MacMahon by unsymbolic methods, and later G. E. Stroh, from a knowledge of the results, was able to verify and extend the results by the symbolic method. The partition method of treating symmetrical algebra is one which has been singularly successful in indicating new paths of advance in the theory of invariants; the important theorem of expressibility is, directly we exclude unity from the partitions, a theorem concerning the expressibility of covariants, and involves the theory of the reducible forms and of the syzygies. The theory brought forward has not yet found a place in any systematic treatise in any language, so that it has been judged proper to give a fairly complete account of it.[1]

I. The Theory of Determinants.[1]

Let there be given quantities

and form from them a product of quantities

where the first suffixes are the natural numbers taken in order, and is some permutation of these numbers. This permutation by a transposition of two numbers, say becomes and by successively transposing pairs of letters the permutation can be reduced to the form Let such transpositions be necessary; then the expression

the summation being for all permutations of the numbers, is called the determinant of the quantities. The quantities are called the elements of the determinant; the term is called a member of the determinant, and there are evidently members corresponding to the permutations of the numbers The determinant is usually written

the square array being termed the matrix of the determinant. A matrix has in many parts of mathematics a signification apart from its evaluation as a determinant. A theory of matrices has been constructed by Cayley in connexion particularly with the theory of linear transformation. The matrix consists of rows and columns. Each row as well as each column supplies one and only one element to each member of the determinant. Consideration of the definition of the determinant shows that the value is unaltered when the suffixes in each element are transposed.

Theorem.—If the determinant is transformed so as to read by columns as it formerly did by rows its value is unchanged. The leading member of the determinant is a11a22 … ann, and corresponds to the principal diagonal of the matrix.

We write frequently

If the first two columns of the determinant be transposed the expression for the determinant becomes , viz. and are transposed, and it is clear that the number of transpositions necessary to convert the permutation of the second suffixes to the natural order is changed by unity. Hence the transposition of columns merely changes the sign of the determinant. Similarly it is shown that the transposition of any two columns or of any two rows merely changes the sign of the determinant.

Theorem.—Interchange of any two rows or of any two columns merely changes the sign of the determinant.

Corollary.—If any two rows or any two columns of a determinant be identical the value of the determinant is zero.
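The permutation definition lends itself to direct verification. The following Python sketch (an illustration only; the function name and test matrix are arbitrary) forms the determinant as the signed sum over all permutations and checks the two results just stated: transposition of the array leaves the value unchanged, while interchange of two rows merely changes the sign.

```python
from itertools import permutations

def det(m):
    """Determinant as the signed sum over all permutations (the definition in the text)."""
    n = len(m)
    total = 0
    for perm in permutations(range(n)):
        # sign = (-1)**(number of inversions), i.e. of transpositions needed
        inversions = sum(1 for i in range(n) for j in range(i + 1, n) if perm[i] > perm[j])
        term = (-1) ** inversions
        for row, col in enumerate(perm):
            term *= m[row][col]
        total += term
    return total

a = [[2, 1, 3],
     [0, 4, 5],
     [1, 6, 7]]
transpose = [list(col) for col in zip(*a)]
swapped = [a[1], a[0], a[2]]          # first two rows interchanged

print(det(a), det(transpose), det(swapped))   # -11, -11, 11
assert det(a) == det(transpose) == -det(swapped)
```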

Minors of a Determinant.—From the value of we may separate those members which contain a particular element as a factor, and write the portion ; , the cofactor of , is called a minor of order of the determinant.

Now , wherein is not to be changed, but the second suffixes in the product assume all permutations, the number of transpositions necessary determining the sign to be affixed to the member.

Hence , where the cofactor of is clearly the determinant obtained by erasing the first row and the first column.

Hence

Similarly , the cofactor of , is shown to be the product of and the determinant obtained by erasing from the i th row and k th column. No member of a determinant can involve more than one element from the first row. Hence we have the development

,

proceeding according to the elements of the first row and the corresponding minors.

Similarly we have a development proceeding according to the elements contained in any row or in any column, viz.



This theory enables the evaluation of a determinant by successive reduction of the orders of the determinants involved.
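A minimal recursive sketch of this successive reduction, in Python (illustrative only, with hypothetical helper names): the determinant is developed along the elements of the first row, each multiplied by its signed minor.

```python
def det_by_minors(m):
    """Develop the determinant along the first row, as in the expansion just described."""
    n = len(m)
    if n == 1:
        return m[0][0]
    total = 0
    for k in range(n):
        # minor: erase the first row and the k-th column
        minor = [row[:k] + row[k + 1:] for row in m[1:]]
        total += (-1) ** k * m[0][k] * det_by_minors(minor)
    return total

print(det_by_minors([[2, 1, 3], [0, 4, 5], [1, 6, 7]]))   # -11, agreeing with the permutation definition
```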

Ex. gr.

Since the determinant

, having two identical rows,

vanishes identically; we have by development according to the elements of the first row

;

and, in general, since

,

if we suppose the ith and kth rows identical

;

and proceeding by columns instead of rows,

identical relations always satisfied by these minors.

If in the first relation of we write we find that so that breaks up into a sum of determinants, and we also obtain a theorem for the addition of determinants which have rows in common. If we multiply the elements of the second row by an arbitrary magnitude , and add to the corresponding elements of the first row, becomes , showing that the value of the determinant is unchanged. In general we can prove in the same way the—

Theorem.—The value of a determinant is unchanged if we add to the elements of any row or column the corresponding elements of the other rows or other columns respectively each multiplied by an arbitrary magnitude, such magnitude remaining constant in respect of the elements in a particular row or a particular column.

Observation.—Every factor common to all the elements of a row or of a column is obviously a factor of the determinant, and may be taken outside the determinant brackets.
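Both facts are easily checked numerically; the following Python sketch (illustrative, using NumPy) adds a multiple of one row to another and factors a common multiplier out of a row.

```python
import numpy as np

a = np.array([[2., 1., 3.],
              [0., 4., 5.],
              [1., 6., 7.]])

# add an arbitrary multiple of the second row to the first: value unchanged
b = a.copy()
b[0] += 10 * a[1]
print(np.isclose(np.linalg.det(a), np.linalg.det(b)))       # True

# a factor common to all elements of a row comes outside the determinant
c = a.copy()
c[2] *= 5
print(np.isclose(np.linalg.det(c), 5 * np.linalg.det(a)))   # True
```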

Ex. gr.

The minor is , and is itself a determinant of order . We may therefore differentiate again in regard to any element where , ; we will thus obtain a minor of , which is a minor also of of order . It will be and will be obtained by erasing from the determinant the row and column containing the element ; this was originally the r th row and the sth column of ; the r th row of is the r th or (r–1)th row of according as and the sth column of is the sth or (s−1)th column of according as . Hence, if denote the number of transpositions necessary to bring the succession into ascending order of magnitude, the sign to be attached to the determinant arrived at by erasing the ith and r th rows and the k th and s th columns from in order produce will be raised to the power of .

Similarly proceeding to the minors of order , we find that is obtained from by erasing the i th, r th, t th, rows, the k th, s th, u th columns, and multiplying the resulting determinant by raised to the power and the general law is clear.

Corresponding Minors.—In obtaining the minor in the form of a determinant we erased certain rows and columns, and we would have erased in an exactly similar manner had we been forming the determinant associated with , since the deleting lines intersect in two pairs of points. In the latter case the sign is determined by raised to the same power as before, with the exception that , replaces ; but if one of these numbers be even the other must be uneven; hence

.

Moreover

,

where the determinant factor is given by the four points in which the deleting lines intersect. This determinant and that associated with are termed corresponding determinants. Similarly lines of deletion intersecting in points yield corresponding determinants of orders and respectively. Recalling the formula

,

it will be seen that and involve corresponding determinants. Since is a determinant we similarly obtain

,

and thence

;

and as before

,

an important expansion of .

Similarly

,

and the general theorem is manifest, and yields a development in a sum of products of corresponding determinants. If the jth column be identical with the ith the determinant vanishes identically; hence if be not equal to , or ,

.

Similarly, by putting one or more of the deleted rows or columns equal to rows or columns which are not deleted, we obtain, with Laplace, a number of identities between products of determinants of complementary orders.

Multiplication.—From the theorem given above for the expansion of a determinant as a sum of products of pairs of corresponding determinants it will be plain that the product of and may be written as a determinant of order , viz.


Multiply the 1st, 2nd ... nth rows by respectively, and add to the (n+1)th row; by , and add to the (n+2)th row; by and add to the (n+3)rd row, &c. C then becomes

and all the elements of D become zero. Now by the expansion theorem the determinant becomes

We thus obtain for the product a determinant of order . We may say that, in the resulting determinant, the element in the i th row and kth column is obtained by multiplying the elements in the kth row of the first determinant severally by the elements in the i th row of the second, and has the expression

,

and we obtain other expressions by transforming either or both determinants so as to read by columns as they formerly did by rows.

Remark.—In particular the square of a determinant is a determinant of the same order such that ; it is for this reason termed symmetrical.
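The multiplication theorem and the remark on the square may be illustrated as follows (a Python/NumPy sketch; the random test matrices are arbitrary). The row-by-row composition described above is, in matrix language, the product of the second array with the transpose of the first, so the determinants multiply.

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.integers(-3, 4, (3, 3)).astype(float)
B = rng.integers(-3, 4, (3, 3)).astype(float)

# element (i, k) of the product determinant: sum over j of A[k][j] * B[i][j]
C = np.array([[sum(A[k][j] * B[i][j] for j in range(3)) for k in range(3)]
              for i in range(3)])
print(np.isclose(np.linalg.det(C), np.linalg.det(A) * np.linalg.det(B)))    # True

# the square of a determinant, composed row by row with itself, is a symmetrical determinant
S = A @ A.T
print(np.allclose(S, S.T), np.isclose(np.linalg.det(S), np.linalg.det(A) ** 2))   # True True
```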

The Adjoint or Reciprocal Determinant arises from by substituting for each element the corresponding minor so as to form . If we form the product by the theorem for the multiplication of determinants we find that the element in the i th row and kth column of the product is

,

the value of which is zero when is different from , whilst it has the value when . Hence the product determinant has the principal diagonal elements each equal to and the remaining elements zero. Its value is therefore and we have the identity

or .

It can now be proved that the first minor of the adjoint determinant, say is equal to .

From the equations

we derive


and thence


and comparison of the first and third systems yields

.

In general it can be proved that any minor of order of the adjoint is equal to the complementary of the corresponding minor of the original multiplied by the (p – 1)th power of the original determinant.

Theorem.—The adjoint determinant is the (n – 1)th power of the original determinant. The adjoint determinant will be seen subsequently to present itself in the theory of linear equations and in the theory of linear transformation.
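A short SymPy sketch (illustrative only) confirms both statements for a general determinant of order 3; SymPy's adjugate is the transposed array of cofactors, which has the same determinant as the adjoint array described above.

```python
from sympy import Matrix, symbols, eye, expand

a, b, c, d, e, f, g, h, i = symbols('a b c d e f g h i')
A = Matrix([[a, b, c], [d, e, f], [g, h, i]])
adj = A.adjugate()     # transposed matrix of cofactors

# A multiplied by its adjugate gives det(A) times the identity matrix
print((A * adj - A.det() * eye(3)).expand())   # zero matrix

# the adjoint determinant is the (n - 1)-th power of the original, here n = 3
print(expand(adj.det() - A.det() ** 2))        # 0
```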

Determinants of Special Forms.—It was observed above that the square of a determinant when expressed as a determinant of the same order is such that its elements have the property expressed by . Such determinants are called symmetrical. It is easy to see that the adjoint determinant is also symmetrical, viz. such that , for the determinant got by suppressing the i th row and kth column differs only by an interchange of rows and columns from that got by suppressing the kth row and i th column. If any symmetrical determinant vanish and be bordered as shown below

it is a perfect square when considered as a function of . For since , with similar relations, we have a number of relations similar to , and either or for all different values of and . Now the determinant has the value

in general, and hence by substitution

A skew symmetric determinant has and for all values of and . Such a determinant when of uneven degree vanishes, for if we multiply each row by we multiply the determinant by , and the effect of this is otherwise merely to transpose the determinant so that it reads by rows as it formerly did by columns, an operation which we know leaves the determinant unaltered. Hence or . When a skew symmetric determinant is of even degree it is a perfect square. This theorem is due to Cayley, and reference may be made to Salmon’s Higher Algebra, 4th ed. Art. 39. In the case of the determinant of order 4 the square root is

a12a34 − a13a24 + a14a23.
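A SymPy sketch (illustrative; the letters a, …, f stand for a12, a13, a14, a23, a24, a34) checks that the determinant of order 4 is the square of this expression, and that a skew symmetric determinant of uneven degree vanishes.

```python
from sympy import Matrix, symbols, expand

a, b, c, d, e, f = symbols('a b c d e f')
S = Matrix([[ 0,  a,  b,  c],
            [-a,  0,  d,  e],
            [-b, -d,  0,  f],
            [-c, -e, -f,  0]])

# for order 4 the square root (the Pfaffian) is a*f - b*e + c*d
print(expand(S.det() - (a*f - b*e + c*d)**2))    # 0

# a skew symmetric determinant of uneven degree vanishes
T = Matrix([[0, a, b], [-a, 0, c], [-b, -c, 0]])
print(T.det())                                   # 0
```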

A skew determinant is one which is skew symmetric in all respects, except that the elements of the leading diagonal are not all zero. Such a determinant is of importance in the theory of orthogonal substitution. In the theory of surfaces we transform from one set of three rectangular axes to another by the substitutions

where . This relation implies six equations between the coefficients, so that only three of them are independent. Further we find

and the problem is to express the nine coefficients in terms of three independent quantities.

In general in space of dimensions we have substitutions similar to

,

and we have to express the coefficients in terms of independent quantities; which must be possible, because

where and for all values of and . There are then quantities . Let the determinant of the b’s be and , the minor corresponding to . We can eliminate the quantities and obtain relations

and from these another equivalent set

and now writing

we have a transformation which is orthogonal, because and the elements , are functions of the independent quantities . We may therefore form an orthogonal transformation in association with every skew determinant which has its leading diagonal elements unity, for the quantities are clearly arbitrary.
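The construction can be illustrated numerically. A common way to realise it (a sketch under the assumption that the Cayley transform is the intended association) is: with S skew symmetric, the matrix (I − S)(I + S)^(−1) is orthogonal, the entries of S supplying the three arbitrary quantities.

```python
import numpy as np

lam, mu, nu = 0.3, -1.2, 0.7          # the three independent quantities
S = np.array([[0.0,   nu,  -mu],
              [-nu,  0.0,  lam],
              [ mu, -lam,  0.0]])
I = np.eye(3)
A = (I - S) @ np.linalg.inv(I + S)    # skew determinant with leading diagonal unity

print(np.allclose(A @ A.T, I))            # True: the substitution is orthogonal
print(np.isclose(np.linalg.det(A), 1.0))  # True: a rotation of the axes
```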

For the second order we may take

,

and the adjoint determinant is the same; hence

Similarly, for the order 3, we take

and the adjoint is

,

leading to the orthogonal substitution

.
Functional determinants were first investigated by Jacobi in a work De Determinantibus Functionalibus. Suppose n dependent variables y1, y2, … yn, each of which is a function of n independent variables x1, x2, … xn, so that ys = ƒs(x1, x2, … xn). From the differential coefficients of the y’s with regard to the x’s we form the functional determinant

If we have new variables z such that zs = φs(y1, y2, … yn), we have also zs = ψs(x1, x2, … xn), and we may consider the three determinants

∂(y1, y2, … yn)/∂(x1, x2, … xn),  ∂(z1, z2, … zn)/∂(y1, y2, … yn),  ∂(z1, z2, … zn)/∂(x1, x2, … xn).

Forming the product of the first two by the product theorem, we obtain for the element in the ith row and kth column

∂zi/∂y1 · ∂y1/∂xk + ∂zi/∂y2 · ∂y2/∂xk + … + ∂zi/∂yn · ∂yn/∂xk,

which is ∂zi/∂xk, the partial differential coefficient of zi with regard to xk. Hence the product theorem

∂(z1, z2, … zn)/∂(y1, y2, … yn) · ∂(y1, y2, … yn)/∂(x1, x2, … xn) = ∂(z1, z2, … zn)/∂(x1, x2, … xn);

and as a particular case

∂(y1, y2, … yn)/∂(x1, x2, … xn) · ∂(x1, x2, … xn)/∂(y1, y2, … yn) = 1.

Theorem.—If the functions y1, y2,...yn be not independent of one another the functional determinant vanishes, and conversely if the determinant vanishes, y1, y2,...yn are not independent functions of x1, x2,...xn.
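A small SymPy sketch (illustrative functions chosen at random) exhibits the product theorem and the vanishing of the functional determinant for dependent functions.

```python
from sympy import symbols, Matrix, simplify

x1, x2 = symbols('x1 x2')
y1, y2 = x1 + x2, x1 * x2                      # y as functions of x
u1, u2 = symbols('u1 u2')                      # stand-ins for y1, y2 when differentiating z

Jyx = Matrix([y1, y2]).jacobian([x1, x2]).det()
Jzy = Matrix([u1**2, u1 + 3*u2]).jacobian([u1, u2]).det().subs({u1: y1, u2: y2})
Jzx = Matrix([y1**2, y1 + 3*y2]).jacobian([x1, x2]).det()

# the product theorem for functional determinants
print(simplify(Jzy * Jyx - Jzx))               # 0

# dependent functions give a vanishing functional determinant
w1, w2 = x1 + x2, (x1 + x2)**3
print(Matrix([w1, w2]).jacobian([x1, x2]).det().expand())   # 0
```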
Linear Equations.—It is of importance to study the application of the theory of determinants to the solution of a system of linear equations. Suppose given the n equations

ƒ1a11x1+ a12x2+ ... a1nxn=0,
ƒ2a21x1+ a22x2+ ... a2nxn=0,
.......
ƒnan1x1+ an2x2+ ... annxn=0.

Denote by Δ the determinant (a11a22...ann).

Multiplying the equations by the minors A1μ, A2μ,...Anμ respectively, and adding, we obtain

xμ(a1μA1μ+a2μA2μ+...+anμAnμ)=xμΔ=0,

since from results already given the remaining coefficients of x1, x2,...xμ–1, xμ+1,...xn vanish identically.

Hence if Δ does not vanish x1 = x2 = … = xn = 0 is the only solution; but if Δ vanishes the equations can be satisfied by a system of values other than zeros. For in this case the n equations are not independent since identically

A1μƒ1 + A2μƒ2+...+Anμƒn=0,

and assuming that the minors do not all vanish the satisfaction of n–1 of the equations implies the satisfaction of the nth.

Consider then the system of n–1 equations

a21x1+ a22x2 +...+ a2nxn=0
a31x1+ a32x2 +...+ a3nxn=0
......
an1x1+ an2x2 +...+ annxn=0,
which becomes on writing xs/xnys,
a21y1+ a22y2 +...+ a2,n−1yn−1 +a2n=0
a31y1+ a32y2 +...+ a3,n−1yn−1 +a3n=0
.......
an1y1+ an2y2 +...+ an,n−1yn−1 +ann=0.
We can solve these, assuming them independent, for the n−1 ratios y1, y2,...yn−1.
Now
a21A11 + a22A12+...+a2nA1n=0
a31A11 + a32A12+...+a3nA1n=0
.......
an1A11 + an2A12+...+annA1n=0
and therefore, by comparison with the given equations, xiρA1i, where ρ is an arbitrary factor which remains constant as i varies.

Hence yiA1i/A1n where A li and A1n, are minors of the complete determinant
(a11a22...ann).

y_i = (-1)^{i+n} \frac{\begin{vmatrix} a_{21} & a_{22} & \cdots & a_{2,i-1} & a_{2,i+1} & \cdots & a_{2n} \\ a_{31} & a_{32} & \cdots & a_{3,i-1} & a_{3,i+1} & \cdots & a_{3n} \\ \vdots & & & & & & \vdots \\ a_{n1} & a_{n2} & \cdots & a_{n,i-1} & a_{n,i+1} & \cdots & a_{nn} \end{vmatrix}}{\begin{vmatrix} a_{21} & a_{22} & \cdots & a_{2,n-1} \\ a_{31} & a_{32} & \cdots & a_{3,n-1} \\ \vdots & & & \vdots \\ a_{n1} & a_{n2} & \cdots & a_{n,n-1} \end{vmatrix}},

or, in words, yi is the quotient of the determinant obtained by erasing the i th column by that obtained by erasing the nth column, multiplied by (–1)i+n. For further information concerning the compatibility and independence of a system of linear equations, see Gordan, Vorlesungen über Invariantentheorie, Bd. 1, § 8.
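The proportionality of the solution to the minors may be checked directly; the following SymPy sketch (with an arbitrary singular matrix) computes the cofactors of the first row and verifies that they satisfy the system.

```python
from sympy import Matrix

# a system whose determinant vanishes (the third row is the sum of the first two)
A = Matrix([[1, 2, 3],
            [4, 5, 6],
            [5, 7, 9]])
print(A.det())                                   # 0, so solutions other than zeros exist

# a non-zero solution proportional to the cofactors of the first row, x_i = rho * A_1i
x = Matrix([A.cofactor(0, j) for j in range(3)])
print(x.T, A * x)                                # Matrix([[3, -6, 3]]) and the zero vector
```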

Resultants.—When we are given k homogeneous equations in k variables or k non-homogeneous equations in k − 1 variables, the equations being independent, it is always possible to derive from them a single equation R=0, where in R the variables do not appear. R is a function of the coefficients which is called the "resultant" or "eliminant" of the k equations, and the process by which it is obtained is termed "elimination." We cannot combine the equations so as to eliminate the variables unless on the supposition that the equations are simultaneous, i.e. each of them satisfied by a common system of values; hence the equation R=0 is derived on this supposition, and the vanishing of R expresses the condition that the equations can be satisfied by a common system of values assigned to the variables.

Consider two binary equations of orders m and n respectively expressed in non-homogeneous form, viz.

ƒ(x) = ƒ = a_0x^m − a_1x^{m−1} + a_2x^{m−2} − … = 0,
φ(x) = φ = b_0x^n − b_1x^{n−1} + b_2x^{n−2} − … = 0.
If α1, α2, ...αm be the roots of ƒ=0, β1, β2, ...βn the roots of φ=0, the condition that some root of φ=0 may cause ƒ to vanish is clearly
Rƒ,φ = ƒ(β1)ƒ(β2)…ƒ(βn) = 0;
so that Rƒ,φ is the resultant of ƒ and φ, and expressed as a function of the roots, it is of degree m in each root β, and of degree n in each root α, and also a symmetric function alike of the roots α and of the roots β; hence, expressed in terms of the coefficients, it is homogeneous and of degree n in the coefficients of ƒ, and homogeneous and of degree m in the coefficients of φ
Ex. gr.
ƒ = a_0x^2 − a_1x + a_2 = 0,  φ = b_0x^2 − b_1x + b_2 = 0.

We have to multiply a_0β_1^2 − a_1β_1 + a_2 by a_0β_2^2 − a_1β_2 + a_2, and we obtain

a_0^2β_1^2β_2^2 − a_0a_1(β_1^2β_2 + β_1β_2^2) + a_0a_2(β_1^2 + β_2^2) + a_1^2β_1β_2 − a_1a_2(β_1 + β_2) + a_2^2,

where

β_1 + β_2 = b_1/b_0,  β_1β_2 = b_2/b_0,  β_1^2 + β_2^2 = (b_1^2 − 2b_0b_2)/b_0^2,

and clearing of fractions

Rƒ,φ = (a_0b_2 − a_2b_0)^2 + (a_1b_0 − a_0b_1)(a_1b_2 − a_2b_1).

We may equally express the result as

φ(α_1)φ(α_2) … φ(α_m) = 0,

or as

Π_{s,t} (α_s − β_t) = 0.

This expression of R shows that, as will afterwards appear, the resultant is a simultaneous invariant of the two forms.

The resultant being a product of mn root differences, is of degree mn in the roots, and hence is of weight mn in the coefficients of the forms; i.e. the sum of the suffixes in each term of the resultant is equal to mn.
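The Ex. gr. above can be reproduced mechanically; the following SymPy sketch (illustrative; it assumes SymPy's resultant is the usual Sylvester determinant) recovers the closed form found there.

```python
from sympy import symbols, resultant, expand

x, a0, a1, a2, b0, b1, b2 = symbols('x a0 a1 a2 b0 b1 b2')
f   = a0*x**2 - a1*x + a2
phi = b0*x**2 - b1*x + b2

R = resultant(f, phi, x)
closed_form = (a0*b2 - a2*b0)**2 + (a1*b0 - a0*b1)*(a1*b2 - a2*b1)
print(expand(R - closed_form))        # 0, reproducing the Ex. gr. above
```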

Resultant Expressible as a Determinant.—From the theory of linear equations it can be gathered that the condition that p linear equations in p variables (homogeneous and independent) may be simultaneously satisfied is expressible as a determinant, viz. if
a11x1 + a12x2 +...+ a1pxp=0,
a21x1 + a22x2 +...+ a2pxp=0,
......
ap1x1 + ap2x2 +...+ appxp=0,

be the system the condition is, in determinant form

(a11a22...app)=0;

in fact the determinant is the resultant of the equations.

Now, suppose ƒ and φ to have a common factor xγ,

ƒ(x)=ƒ1(x)(xγ); φ(x)=φ1(x)(xγ),

ƒ1 and φ1 being of degrees m – 1 and n – 1 respectively; we have the identity φ1(x)ƒ(x) = ƒ1(x)φ(x) of degree m + n – 1.

Assuming then φ1 to have the coefficients B1, B2,...Bn
and ƒ1 the coefficients A1, A2,...Am,

we may equate coefficients of like powers of x in the identity, and obtain m + n homogeneous linear equations satisfied by the m + n quantities B1, B2,...Bn, A1, A2,...Am. Forming the resultant of these equations we evidently obtain the resultant of ƒ and φ.

Thus to obtain the resultant of

ƒ = a0x³ + a1x² + a2x + a3,  φ = b0x² + b1x + b2,

we assume the identity

(B0x + B1)(a0x3 + a1x2 + a2x+ a3)=(A0x2 + A1x+ A2)(b0x2 + b1x+ b2),

and derive the linear equations

B0a0 −A0b0 =0,
B0a1 +B1a0 −A0b1 −A1b0 =0,
B0a2 +B1a1 −A0b2 −A1b1 −A2b0 =0,
B0a3 +B1a2 −A1b2 −A2b1 =0,
B1a3 −A2b2 =0,
and by elimination we obtain the resultant

This is Euler’s method. Sylvester’s leads to the same expression, but in a simpler manner.

He forms n equations from ƒ by separate multiplication by x^{n−1}, x^{n−2}, … x, 1, in succession, and similarly treats φ with m multipliers x^{m−1}, x^{m−2}, … x, 1. From these m + n equations he eliminates the m + n powers x^{m+n−1}, x^{m+n−2}, … x, 1, treating them as independent unknowns. Taking the same example as before the process leads to the system of equations

a0x4+ a1x3+ a2x2+ a3x =0,
a0x3+ a1x2+ a2x+ a3 =0,
b0x4+ b1x3+ b2x2 =0,
b0x3+ b1x2+ b2x =0,
b0x2+ b1x+ b2 =0,

whence by elimination the resultant

a0 a1 a2 a3 0
0 a0 a1 a2 a3
b0 b1 b2 0 0
0 b0 b1 b2 0
0 0 b0 b1 b2

which reads by columns as the former determinant reads by rows, and is therefore identical with the former. E. Bézout’s method gives the resultant in the form of a determinant of order m or n, according as m is ≷ n. As modified by Cayley it takes a very simple form. He forms the equation
ƒ(x)φ(x ′) − ƒ(x ′)φ(x)=0,
which can be satisfied when ƒ and φ possess a common factor. He first divides by the factor xx ′, reducing it to the degree m − 1 in both x and x ′ where m > n; he then forms m equations by equating to zero the coefficients of the various powers of x ′; these equations involve the m powers x0, x, x2,... xm−1 of x, and regarding these as the unknowns of a system of linear equations the resultant is reached in the form of a determinant of order m. Ex. gr. Put
(a0x³ + a1x² + a2x + a3)(b0x′² + b1x′ + b2) − (a0x′³ + a1x′² + a2x′ + a3)(b0x² + b1x + b2) = 0;
after division by xx ′ the three equations are formed

a0b0x² + a0b1x + a0b2 = 0,
a0b1x² + (a0b2 + a1b1 − a2b0)x + a1b2 − a3b0 = 0,
a0b2x² + (a1b2 − a3b0)x + a2b2 − a3b1 = 0,

and thence the resultant

a0b0   a0b1   a0b2
a0b1   a0b2 + a1b1 − a2b0   a1b2 − a3b0
a0b2   a1b2 − a3b0   a2b2 − a3b1

which is a symmetrical determinant.
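A SymPy sketch (illustrative) confirms these entries: since the degrees here differ by one, the symmetrical determinant agrees with the resultant only up to a factor involving the leading coefficient a0.

```python
from sympy import symbols, Matrix, resultant, cancel

x, a0, a1, a2, a3, b0, b1, b2 = symbols('x a0 a1 a2 a3 b0 b1 b2')
f   = a0*x**3 + a1*x**2 + a2*x + a3
phi = b0*x**2 + b1*x + b2

# the symmetrical determinant of Cayley's modification of Bezout's method
B = Matrix([[a0*b0, a0*b1,                 a0*b2        ],
            [a0*b1, a0*b2 + a1*b1 - a2*b0, a1*b2 - a3*b0],
            [a0*b2, a1*b2 - a3*b0,         a2*b2 - a3*b1]])

# the ratio of the determinant to the resultant is a constant multiple of a0
print(cancel(B.det() / resultant(f, phi, x)))   # -a0
```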

Case of Three Variables.—In the next place we consider the resultants of three homogeneous polynomials in three variables. We can prove that if the three equations be satisfied by a system of values of the variable, the same system will also satisfy the Jacobian or functional determinant. For if u, v, w be the polynomials of orders m, n, p respectively, the Jacobian is (u1 v2 w3), and by Euler’s theorem of homogeneous functions
xu1 + yu2 + zu3 = mu,
xv1 + yv2 + zv3 = nv,
xw1 + yw2 + zw3 = pw;

denoting now the reciprocal determinant by (U1 V2 W3) we obtain Jx = muU1 + nvV1 + pwW1; Jy = …, Jz = …, and it appears that the vanishing of u, v, and w implies the vanishing of J. Further, if m = n = p, we obtain by differentiation

J + x ∂J/∂x = m(u ∂U1/∂x + v ∂V1/∂x + w ∂W1/∂x + u1U1 + v1V1 + w1W1),

or

x ∂J/∂x = (m − 1)J + m(u ∂U1/∂x + v ∂V1/∂x + w ∂W1/∂x).

Hence the system of values also causes ∂J/∂x to vanish in this case; and by symmetry ∂J/∂y and ∂J/∂z also vanish.

The proof being of general application we may state that a system of values which causes the vanishing of k polynomials in k variables causes also the vanishing of the Jacobian, and in particular, when the forms are of the same degree, the vanishing also of the differential coefficients of the Jacobian in regard to each of the variables.

There is no difficulty in expressing the resultant by the method of symmetric functions. Taking two of the equations
axm + (by + cz) xm–1 +... =0,
a′xn + (b′y + c′z) xn–1 +... =0,
we find that, eliminating x, the resultant is a homogeneous function of y and z of degree mn; equating this to zero and solving for the ratio of y to z we obtain mn solutions; if values of y and z, given by any solution, be substituted in each of the two equations, they will possess a common factor which gives a value of x which, combined with the chosen values of y and z, yields a system of values which satisfies both equations. Hence in all there are mn such systems. If, therefore, we have a third equation, and we substitute each system of values in it successively and form the product of the mn expressions thus formed, we obtain a function which vanishes if any one system of values, common to the first two equations, also satisfies the third. Hence this product is the required resultant of the three equations.

Now by the theory of symmetric functions, any symmetric functions of the mn values which satisfy the two equations can be expressed in terms of the coefficients of those equations. Hence, finally, the resultant is expressed in terms of the coefficients of the three equations, and since it is at once seen to be of degree mn in the coefficients of the third equation, by symmetry it must be of degrees np and pm in the coefficients of the first and second equations respectively. Its weight will be mnp (see Salmon’s Higher Algebra, 4th ed. § 77). The general theory of the resultant of k homogeneous equations in k variables presents no further difficulties when viewed in this manner.

The expression in form of a determinant presents in general considerable difficulties. If three equations, each of the second degree, in three variables be given, we have merely to eliminate the six products x², y², z², yz, zx, xy from the six equations
u = v = w = ∂J/∂x = ∂J/∂y = ∂J/∂z = 0; if we apply the same process to these equations each of degree three, we obtain similarly a determinant of order 21, but thereafter the process fails. Cayley, however, has shown that, whatever be the degrees of the three equations, it is possible to represent the resultant as the quotient of two determinants (Salmon, l.c. p. 89).

Discriminants.—The discriminant of a homogeneous polynomial in k variables is the resultant of the k polynomials formed by differentiations in regard to each of the variables.

It is the resultant of k polynomials each of degree m − 1, and thus contains the coefficients of each form to the degree (m − 1)^{k−1}; hence the total degree in the coefficients of the k forms is, by addition, k(m − 1)^{k−1}; it may further be shown that the weight of each term of the resultant is constant and equal to m(m − 1)^{k−1} (Salmon, l.c. p. 100).

A binary form which has a square factor has its discriminant equal to zero. This can be seen at once because the factor in question being once repeated in both differentials, the resultant of the latter must vanish.

Similarly, if a form in k variables be expressible as a quadratic function of k − 1 linear functions X1, X2, ... Xk−1, the coefficients being any polynomials, it is clear that the k differentials have, in common, the system of roots derived from X1 = X2 = ... = Xk−1 = 0, and have in consequence a vanishing resultant. This implies the vanishing of the discriminant of the original form.

Expression in Terms of Roots.—Since x ∂ƒ/∂x + y ∂ƒ/∂y = mƒ, if we take any root x1, y1 of ∂ƒ/∂x, and substitute in mƒ we must obtain y1(∂ƒ/∂y)_{x=x1, y=y1}; hence the resultant of ∂ƒ/∂x and ƒ is, disregarding numerical factors, y1y2…ym−1 × discriminant of ƒ = a0 × disct. of ƒ.

Now

ƒ = (xy1 − x1y)(xy2 − x2y) … (xym − xmy),
∂ƒ/∂x = Σ y1(xy2 − x2y) … (xym − xmy),

and substituting in the latter any root of ƒ and forming the product, we find the resultant of ƒ and ∂ƒ/∂x, viz.

y1y2…ym (x1y2 − x2y1)^2 (x1y3 − x3y1)^2 … (xrys − xsyr)^2 …

and, dividing by y1y2...ym, the discriminant of ƒ is seen to be equal to the product of the squares of all the differences of any two roots of the equation. The discriminant of the product of two forms is equal to the product of their discriminants multiplied by the square of their resultant. This follows at once from the fact that the discriminant is
II(αrαs)2II(βrβs)2{II(αrβs}2.

References for the Theory of Determinants.—T. Muir’s “List of Writings on Determinants,” Quarterly Journal of Mathematics. vol. xviii. pp. 110-149, October 1881, is the most important bibliographical article on the subject in any language; it contains 589 entries, arranged in chronological order, the first date being 1693 and the last 1880. The bibliography has been continued, and published at various dates (vol. xxi. pp. 299-320; vol. xxxvi. pp. 171-267) in the same periodical. These lists contain 1740 entries. T. Muir, History of the Theory of Determinants (2nd ed., London, 1906). School treatises are those of Thomson, Mansion, Bartl, Mollame, in English, French, German and Italian respectively.—Advanced treatises are those of William Spottiswoode (1851), Francesco Brioschi (1854), Richard Baltzer (1857), George Salmon (1859), N. Trudi (1862), Giovanni Garbieri (1874), Siegmund Gunther (1875), Georges J. Dostor (1877), Baraniecki (the most extensive of all) (1879), R. F. Scott (2nd ed., 1904), T. Muir (1881).

II. The Theory Of Symmetric Functions

Consider quantities .

Every rational integral function of these quantities, which does not alter its value however the suffixes be permuted, is a rational integral symmetric function of the quantities. If we write , are called the elementary symmetric functions.




The general monomial symmetric function is

,

the summation being for all permutations of the indices which result in different terms. The function is written

for brevity, and repetitions of numbers in the bracket are indicated by exponents, so that is written The weight of the function is the sum of the numbers in the bracket, and the degree the highest of those numbers.

Ex. gr. The elementary functions are denoted by

,

are all of the first degree, and are of weights respectively.

Remark.—In this notation ; ; ... , &c. The binomial coefficients appear, in fact, as symmetric functions, and this is frequently of importance.

The order of the numbers in the bracket is immaterial; we may therefore always place them, as is most convenient, in descending order of magnitude; the numbers then constitute an ordered partition of the weight , and the leading number denotes the degree.

The sum of the monomial functions of a given weight is called the homogeneous-product-sum or complete symmetric function of that weight; it is denoted by ; it is connected with the elementary functions by the formula

,

which remains true when the symbols and are interchanged, as is at once evident by writing for . This proves, also, that in any formula connecting with the symbols and may be interchanged.

Ex. gr. from we derive .
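The formula referred to above is, in its usual statement, h_w − a_1h_{w−1} + a_2h_{w−2} − … = 0; a Python sketch checking it for three quantities (illustrative only, and assuming that standard form of the relation):

```python
from itertools import combinations_with_replacement
from math import prod
from sympy import symbols, expand

# three quantities and their elementary symmetric functions a1, a2, a3
al, be, ga = symbols('alpha beta gamma')
qs = [al, be, ga]
a1, a2, a3 = al + be + ga, al*be + be*ga + ga*al, al*be*ga

def h(w):
    """Homogeneous-product-sum (complete symmetric function) of weight w."""
    return sum(prod(c) for c in combinations_with_replacement(qs, w)) if w else 1

# h_n - a1 h_{n-1} + a2 h_{n-2} - a3 h_{n-3} = 0, checked for n = 3 and 4
for n in (3, 4):
    print(expand(h(n) - a1*h(n-1) + a2*h(n-2) - a3*h(n-3)))   # 0
```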

The function being as above denoted by a partition of the weight, viz. , it is necessary to bring under view other functions associated with the same series of numbers: such, for example, as

.

The expression just written is in fact a partition of a partition, and to avoid confusion of language will be termed a separation of a partition. A partition is separated into separates so as to produce a separation of the partition by writing down a set of partitions, each separate partition in its own brackets, so that when all the parts of these partitions are reassembled in a single bracket the partition which is separated is reproduced. It is convenient to write the distinct partitions or separates in descending order as regards weight. If the successive weights of the separates be enclosed in a bracket we obtain a partition of the weight which appertains to the separated partition. This partition is termed the specification of the separation. The degree of the separation is the sum of the degrees of the component separates. A separation is the symbolic representation of a product of monomial symmetric functions. A partition, , can be separated in the manner , and we may take the general form of a partition to be and that of a separation when denote the distinct separates involved.

Theorem.— The function symbolized by , viz. the sum of the nth powers of the quantities, is expressible in terms of functions which are symbolized by separations of any partition of the number . The expression is—


,

being a separation of and the summation being in regard to all such separations. For the particular case

To establish this write—

,

the product on the right involving a factor for each of the quantities , and being arbitrary.

Multiplying out the right-hand side and comparing coefficients

,
,
,
,

,

the summation being for all partitions of .

Auxiliary Theorem.—The coefficient of in the product is where is a separation of of specification , and the sum is for all such separations.

To establish this observe the result.

and remark that is a separation of of specification . A similar remark may be made in respect of

,

and therefore of the product of those expressions. Hence the theorem.

Now


whence, expanding by the exponential and multinomial theorems, a comparison of the coefficients of gives


and, by the auxiliary theorem, any term on the right-hand side is such that the coefficient of in is

,

where since is the specification of , . Comparison of the coefficients of therefore yields the result


,

for the expression of in terms of products of symmetric functions symbolized by separations of .

Let denote the sums of the nth powers of quantities whose elementary symmetric functions are ; ; respectively: then the result arrived at above from the logarithmic expansion may be written

,

exhibiting as an invariant of the transformation given by the expressions of in terms of .

The inverse question is the expression of any monomial symmetric function by means of the power functions .

Theorem of Reciprocity.—If

,

where is a numerical coefficient, then also

.

We have found above that the coefficient of in the product is

,

the sum being for all separations of which have the specification . We can multiply out this expression so as to obtain a series of monomials of the form . It can be shown that the number enumerates distributions of a certain nature defined by the partitions , , and it is seen intuitively that the number remains unaltered when the first two of these partitions are interchanged (see Combinatorial Analysis). Hence the theorem is established.

Putting and we find a particular law of reciprocity given by Cayley and Betti,

and another by putting , for then becomes , and we have

Theorem of Expressibility.—“If a symmetric function be symbolized by and be any partitions of respectively, the function is expressible by means of functions symbolized by separation of

For, writing as before,

is a linear function of separations of of specification and if is a linear function of separations of of specification Suppose the separations of to involve different specifications and form the identities

where is one of the specifications.

The law of reciprocity shows that

viz.: a linear function of symmetric functions symbolized by the specifications; and that A table may be formed expressing the expressions as linear functions of the expressions , , and the numbers occurring therein possess row and column symmetry. By solving linear equations we similarly express the latter functions as linear functions of the former, and this table will also be symmetrical.

Theorem.—“The symmetric function whose partition is a specification of a separation of the function symbolized by is expressible as a linear function of symmetric functions symbolized by separations of and a symmetrical table may be thus formed.” It is now to be remarked that the partition can be derived from by substituting for the numbers certain partitions of those numbers (vide the definition of the specification of a separation).

Hence the theorem of expressibility enunciated above. A new statement of the law of reciprocity can be arrived at as follows:—Since.

where is a separation of of specification placing under the summation sign to denote the specification involved,

where .

Theorem of Symmetry.—If we form the separation function

appertaining to the function each separation having a specification , multiply by and take therein the coefficient of the function we obtain the same result as if we formed the separation function in regard to the specification multiplied by and took therein the coefficient of the function

Ex. gr., take we find

The Differential Operators.—Starting with the relation

multiply each side by thus introducing a new quantity we obtain

so that a rational integral function of the elementary functions, is converted into

where

and denotes, not successive operations of but the operator of order obtained by raising to the power symbolically as in Taylor’s theorem in the Differential Calculus.

Write also so that

The introduction of the quantity converts the symmetric function into

Hence, if

Comparing coefficients of like powers of we obtain

while unless the partition contains a part Further, if denote successive operations of and

and the operations are evidently commutative.

Also and the law of operation of the operators upon a monomial symmetric function is clear.

We have obtained the equivalent operations

where denotes (by the rule over ) that the multiplication of operators is symbolic as in Taylor’s theorem. denotes, in fact, an operator of order but we may transform the right-hand side so that we are only concerned with the successive performance of linear operations. For this purpose write

It has been shown (vide “Memoir on Symmetric Functions of the Roots of Systems of Equations,” Phil. Trans. 1890, p. 490) that

where now the multiplications on the dexter denote successive operations, provided that

being an undetermined algebraic quantity.

Hence we derive the particular cases

and we can express in terms of products denoting successive operations, by the same law which expresses the elementary function in terms of the sums of powers Further, we can express in terms of by the same law which expresses the power function in terms of the elementary functions

Operation of upon a Product of Symmetric Functions.—Suppose to be a product of symmetric functions If in the identity we introduce a new root we change into and we obtain

and now expanding and equating coefficients of like powers of

the summation in a term covering every distribution of the operators of the type presenting itself in the term.

Writing these results

we may write in general

the summation being for every partition of and being

Ex. gr. To operate with upon we have

and hence

Application to Symmetric Function Multiplication.—An example will explain this. Suppose we wish to find the coefficient of in the product .

Write

then

every other term disappearing by the fundamental property of Since

we have:—

where ultimately disappearing terms have been struck out. Finally
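The kind of computation described in this section, finding the coefficient of a given monomial symmetric function in a product, can also be carried out by brute force for small weights; a Python/SymPy sketch (illustrative; the particular product (2)(1) is chosen arbitrarily):

```python
from itertools import permutations
from math import prod
from sympy import symbols, expand, Poly

xs = symbols('x1:5')          # four quantities suffice for the weights used here

def m(part):
    """Monomial symmetric function whose bracket (partition) gives the exponents."""
    exps = tuple(part) + (0,) * (len(xs) - len(part))
    return sum(prod(x**e for x, e in zip(xs, p)) for p in set(permutations(exps)))

# coefficient of the monomial function (21) in the product (2)(1)
product = expand(m((2,)) * m((1,)))
coeff = Poly(product, *xs).coeff_monomial(xs[0]**2 * xs[1])   # representative term x1^2 x2
print(coeff)                  # 1, so that (2)(1) = (3) + (21)
```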

The operator which is satisfied by every symmetric function whose partition contains no unit (called by Cayley non-unitary symmetric functions), is of particular importance in algebraic theories. This arises from the circumstance that the general operator

is transformed into the operator by the substitution

so that the theory of the general operator is coincident with that of the particular operator For example, the theory of invariants may be regarded as depending upon the consideration of the symmetric functions of the differences of the roots of the equation

and such functions satisfy the differential equation

For such functions remain unaltered when each root receives the same infinitesimal increment but writing for causes to become respectively and becomes

and hence the functions satisfy the differential equation. The important result is that the theory of invariants is from a certain point of view coincident with the theory of non-unitary symmetric functions of the roots of are symmetric functions of differences of the roots of

and on the other hand that symmetric functions of the differences of the roots of

are non-unitary symmetric functions of the roots of

An important notion in the theory of linear operators in general is that of MacMahon’s multilinear operator (“Theory of a Multilinear partial Differential Operator with Applications to the Theories of Invariants and Reciprocants,” Proc. Lond. Math. Soc. t. xviii. (1886), pp. 61-88). It is defined as having four elements, and is written

the coefficient of being The operators are seen to be and respectively. Also the operator of the Theory of Pure Reciprocants (see Sylvester, Lectures on the New Theory of Reciprocants, Oxford, 1888) is

It will be noticed that

The importance of the operator consists in the fact that taking any two operators of the system

the operator equivalent to

where

and we conclude that quâ “alternation” the operators of the system form a “group.” It is thus possible to study simultaneously all the theories which depend upon operations of the group.

Symbolic Representation of Symmetric Functions.—Denote the elementary symmetric function by at pleasure; then, taking equal to we may write

where

Further, let

so that

and, by multiplication,

Denote by brackets and symmetric functions of the quantities and respectively. Then

Expanding the right-hand side by the exponential theorem, and then expressing the symmetric functions of which arise, in terms of we obtain by comparison with the middle series the symbolical representation of all symmetric functions in brackets appertaining to the quantities To obtain particular theorems the quantities are auxiliaries which are at our entire disposal. Thus to obtain Stroh’s theory of seminvariants put

we then obtain the expression of non-unitary symmetric functions of the quantities as functions of differences of the symbols

Ex. gr. with must be a term in

and since we must have

as is well known.

Again, if be the roots of and leading to

and

and we see further that vanishes identically unless . If be infinite and

we have the symbolic identity

and

Instead of the above symbols we may use equivalent differential operators. Thus let

and let be equivalent quantities. Any function of differences of being formed, the expansion being carried out, an operand or or being taken and being subsequently put equal to , a non-unitary symmetric function will be produced.

Ex. gr.

The whole theory of these forms is consequently contained implicitly in the operation

Symmetric Functions of Several Systems of Quantities.—It will suffice to consider two systems of quantities as the corresponding theory for three or more systems is obtainable by an obvious enlargement of the nomenclature and notation.

Taking the systems of quantities to be

we start with the fundamental relation

As shown by L. Schläfli[2] this equation may be directly formed and exhibited as the resultant of two given equations, and an arbitrary linear non-homogeneous equation in two variables. The right-hand side may be also written

The most general symmetric function to be considered is

conveniently written in the symbolic form

Observe that the summation is in regard to the expressions obtained by permuting the suffixes The weight of the function is bipartite and consists of the two numbers and the symbolic expression of the symmetric function is a partition into biparts (multiparts) of the bipartite (multipartite) number Each part of the partition is a bipartite number, and in representing the partition it is convenient to indicate repetitions of parts by power symbols. In this notation the fundamental relation is written

where in general

All symmetric functions are expressible in terms of the quantities in a rational integral form; from this property they are termed elementary functions; further they are said to be single-unitary since each part of the partition denoting involves but a single unit.

The number of partitions of a biweight into exactly biparts is given (after Euler) by the coefficient of in the expansion of the generating function

The partitions with one bipart correspond to the sums of powers in the single system or unipartite theory; they are readily expressed in terms of the elementary functions. For write and take logarithms of both sides of the fundamental relation; we obtain

and

From this formula we obtain by elementary algebra

corresponding to Edward Waring’s formula for the single system. The analogous formula appertaining to systems of quantities which express in terms of elementary functions can be at once written down.

Ex. gr. We can verify the relations

The formula actually gives the expression of by means of separations of

which is one of the partitions of This is the true standpoint from which the theorem should be regarded. It is but a particular case of a general theory of expressibility.

To invert the formula we may write

and thence derive the formula—

which expresses the elementary function in terms of the single bipart functions. The similar theorem for systems of quantities can be at once written down.

It will be shown later that every rational integral symmetric function is similarly expressible.

The Function .—As the definition of we take

and now expanding the right-hand side

the summation being for all partitions of the biweight. Further writing

we find that the effect of changing the signs of both and is merely to interchange the symbols and hence in any relation connecting the quantities with the quantities we are at liberty to interchange the symbols and By the exponential and multinomial theorems we obtain the results—

Differential Operations.—If, in the identity

we multiply each side by the right-hand side becomes

hence any rational integral function of the coefficients say is converted into

The rule over will serve to denote that is to be raised to the various powers symbolically as in Taylor’s theorem.

Writing

now, since the introduction of the new quantities results in the addition to the function of the new terms

we find

and thence

while unless the part is involved in We may then state that is an operation which obliterates one part when such part is present, but in the contrary case causes the function to vanish. From the above Dpq is an operator of order pq, but it is convenient for some purposes to obtain its expression in the form of a number of terms, each of which denotes pq successive linear operations: to accomplish this write

dpqarsd/ap+r,q+s

and note the general result[3]
exp (m10d10 + m01d01 + … + mpqdpq + …)
= exp (M10d10 + M01d01 + … + Mpqdpq + …);
where the multiplications on the left- and right-hand sides of the equation are symbolic and unsymbolic respectively, provided that mpq, Mpq are quantities which satisfy the relation
= exp (M10ξ + M01η + … + Mpq ξ^p η^q + …) = 1 + m10ξ + m01η + … + mpq ξ^p η^q + …;
where ξ, η are undetermined algebraic quantities. In the present particular case putting m10 = μ, m01 = ν and mpq = 0 otherwise
M10ξ + M01η + … + Mpqξ pηq + … = log (1 + μξ + νη)
or
Mpq = (−1)^{p+q−1} ((p + q − 1)!/(p! q!)) μ^p ν^q;
and the result is thus
exp (μd10 + νd01) = exp { μd10 + νd01 − ½ (μ^2 d20 + 2μν d11 + ν^2 d02) + … } = 1 + μD10 + νD01 + … + μ^p ν^q Dpq + …;
and thence
μd10 + νd011/2 (μ2d20 + 2μνd11 + ν2d02) + …
=log (1 + μD10 + νD01 + … + μpνqDpq + …).
From these formulae we derive two important relations, viz.
(−1)^{p+q−1} ((p + q − 1)!/(p! q!)) dpq = Σ (−1)^{Σπ−1} ((Σπ − 1)!/(π1! π2! …)) D_{p1q1}^{π1} D_{p2q2}^{π2} …,
(−1)^{p+q−1} Dpq = Σ (−1)^{Σπ−1} ((p1 + q1 − 1)!/(p1! q1!))^{π1} ((p2 + q2 − 1)!/(p2! q2!))^{π2} … (1/(π1! π2! …)) d_{p1q1}^{π1} d_{p2q2}^{π2} …,
the last written relation having, in regard to each term on the right-hand side, to do with Σπ successive linear operations. Recalling the formulae above which connect spq and apq, we see that dpq and Dpq are in co-relation with these quantities respectively, and may be said to be operations which correspond to the partitions (pq), (10p 01q) respectively. We might conjecture from this observation that every partition is in correspondence with some operation; this is found to be the case, and it has been shown (loc. cit. p. 493) that the operation
(1/(π1! π2! …)) d_{p1q1}^{π1} d_{p2q2}^{π2} … (multiplication symbolic)

corresponds to the partition ((p1q1)^{π1} (p2q2)^{π2} …). The partitions being taken as denoting symmetric functions we have complete correspondence between the algebras of quantity and operation, and from any algebraic formula we can at once write down an operation formula. This fact is of extreme importance in the theory of algebraic forms, and is easily representable whatever be the number of the systems of quantities.

We may remark the particular result
(−1)^{p+q−1} ((p + q − 1)!/(p! q!)) dpq spq = Dpq (pq) = 1;
dpq causes every other single part function to vanish, and must cause any monomial function to vanish which does not comprise one of the partitions of the biweight pq amongst its parts.

Since
dpq = (−1)^{p+q−1} ((p + q − 1)!/(p! q!)) ∂/∂spq
the solutions of the partial differential equation dpq = 0 are the single bipart forms, omitting spq, and we have seen that the solutions of Dpq = 0 are those monomial functions in which the part pq is absent.

One more relation is easily obtained, viz.
∂/∂apq = dpq − h10 d_{p+1,q} − h01 d_{p,q+1} + … + (−1)^{r+s} hrs d_{p+r,q+s} + … .

References for Symmetric Functions.—Albert Girard, Invention nouvelle en l’algèbre (Amsterdam, 1629); Edward Waring, Meditationes Algebraicae (London, 1782); Lagrange, Mém. de l’acad. de Berlin (1768); Meyer-Hirsch, Sammlung von Aufgaben aus der Theorie der algebraischen Gleichungen (Berlin, 1809); Serret, Cours d'algèbre supérieure, t. iii. (Paris, 1885); Unferdinger, Sitzungsber. d. Acad. d. Wissensch. i. Wien, Bd. lx. (Vienna, 1869); L. Schläfli, “Ueber die Resultante eines Systemes mehrerer algebraischen Gleichungen,” Vienna Transactions, t. iv. 1852; MacMahon, “Memoirs on a New Theory of Symmetric Functions,” American Journal of Mathematics, Baltimore, Md. 1888–1890; “Memoir on Symmetric Functions of Roots of Systems of Equations,” Phil. Trans. 1890.

III. The Theory of Binary Forms

A binary form of order n is a homogeneous polynomial of the nth degree in two variables. It may be written in the form
a x1^n + b x1^{n−1}x2 + c x1^{n−2}x2^2 + …;

or in the form

a x1^n + \binom{n}{1} b x1^{n−1}x2 + \binom{n}{2} c x1^{n−2}x2^2 + …;

which Cayley denotes by

(a, b, c, …)(x1, x2)^n,

\binom{n}{1}, \binom{n}{2}, … being a notation for the successive binomial coefficients n, ½n(n − 1), …. Other forms are

a x1^n + n b x1^{n−1}x2 + n(n − 1) c x1^{n−2}x2^2 + …,

the binomial coefficients \binom{n}{s} being replaced by s!\binom{n}{s}, and

a x1^n + (1/1!) b x1^{n−1}x2 + (1/2!) c x1^{n−2}x2^2 + …,

the special convenience of which will appear later. For present purposes the form will be written

ā0 x1^n + \binom{n}{1} ā1 x1^{n−1}x2 + \binom{n}{2} ā2 x1^{n−2}x2^2 + … + ān x2^n,

the notation adopted by German writers; the literal coefficients have a rule placed over them to distinguish them from umbral coefficients which are introduced almost at once. The coefficients ā0, ā1, ā2, … ān, n + 1 in number, are arbitrary. If the form, sometimes termed a quantic, be equated to zero the n + 1 coefficients are equivalent to but n, since one can be made unity by division and the equation is to be regarded as one for the determination of the ratio of the variables.

If the variables of the quantic 𝑓(x1, x2) be subjected to the linear transformation
x1 = α11ξ1 + α12ξ2,
x2 = α21ξ1 + α22ξ2,
ξ1, ξ2 being new variables replacing x1, x2 and the coefficients α11, α12, α21, α22, termed the coefficients of substitution (or of transformation), being constants, we arrive at a transformed quantic
𝑓(ξ1, ξ2) = a0ξ1^n + \binom{n}{1} a1ξ1^{n−1}ξ2 + \binom{n}{2} a2ξ1^{n−2}ξ2^2 + … + anξ2^n

in the new variables, which is of the same order as the original quantic; the new coefficients a0, a1, a2, … an are linear functions of the original coefficients, and also linear functions of products, of the coefficients of substitution, of the nth degree.

By solving the equations of transformation we obtain
rξ1 = α22x1 − α12x2,
rξ2 = −α21x1 + α11x2,

where r = \begin{vmatrix} α11 & α12 \\ α21 & α22 \end{vmatrix} = α11α22 − α12α21; r is termed the determinant of substitution or modulus of transformation; we assume x1, x2 to be independent, so that r must differ from zero.

In the theory of forms we seek functions of the coefficients and variables of the original quantic which, save as to a power of the modulus of transformation, are equal to the like functions of the coefficients and variables of the transformed quantic. We may have such a function which does not involve the variables, viz.
F(a0, a1, a2, … an) = r^λ F(ā0, ā1, ā2, … ān),

the function F(ā0, ā1, ā2, … ān) is then said to be an invariant of the quantic quâ linear transformation. If, however, F involve as well the variables, viz.

F(a0, a1, a2, … ; ξ1, ξ2) = r^λ F(ā0, ā1, ā2, … ; x1, x2),

the function F(ā0, ā1, ā2, … ; x1, x2) is said to be a covariant of the quantic. The expression “invariantive forms” includes both invariants and covariants, and frequently also other analogous forms which will be met with. Occasionally the word “invariants” includes covariants; when this is so it will be implied by the text. Invariantive forms will be found to be homogeneous functions alike of the coefficients and of the variables. Instead of a single quantic we may have several

𝑓(ā0, ā1, ā2, … ; x1, x2), φ(b̄0, b̄1, b̄2, … ; x1, x2), …

which have different coefficients, the same variables, and are of the same or different degrees in the variables; we may transform them all by the same substitution, so that they become

𝑓(a0, a1, a2, … ; ξ1, ξ2), φ(b0, b1, b2, … ; ξ1, ξ2), …

If then we find

F(a0, a1, a2, … b0, b1, b2, …, … ; ξ1, ξ2) = r^λ F(ā0, ā1, ā2, … b̄0, b̄1, b̄2, …, … ; x1, x2),

the function F on the right, which is multiplied by r^λ, is said to be a simultaneous invariant or covariant of the system of quantics. This notion is fundamental in the present theory because we will find that one of the most valuable artifices for finding invariants of a single quantic is first to find simultaneous invariants of several different quantics, and subsequently to make all the quantics identical. Moreover, instead of having one pair of variables x1, x2 we may have several pairs y1, y2; z1, z2; … in addition, and transform each pair to a new pair by substitutions, having the same coefficients α11, α12, α21, α22 and arrive at functions of the original coefficients and variables (of one or more quantics) which possess the above defined invariant property. A particular quantic of the system may be of the same or different degrees in the pairs of variables which it involves, and these degrees may vary from quantic to quantic of the system. Such quantics have been termed by Cayley multipartite.

Symbolic Form.—Restricting consideration, for the present, to binary forms in a single pair of variables, we must introduce the symbolic form of Aronhold, Clebsch and Gordan; they write the form
(a1x1 + a2x2)n = an
1
xn
1
+ (n
1
)an−1
1
a2xn−1
1
x2 + ... + an
2
xn
3
= an
x

wherein a1, a2 are umbrae, such that
an
1
, an−1
1
a2, ... a1an−1
2
. an
2

are symbolical representations of the real coefficients a0, a1, ... an−1, an, and in general ank
1
ak
2
is the symbol for ak. If we restrict ourselves to this set of symbols we can uniquely pass from a product of real coefficients to the symbolic representations of such product, but we cannot, uniquely, from the symbols recover the real form, This is clear because we can write
a1a2 = an−1
1
a2. an−2
1
a2
2
= a2n−3
1
a3
2

while the same product of umbrae arises from
a0a3 = an
1
.an−3
1
a3
2
= a2n−3
1
a3
2

Hence it becomes necessary to have more than one set of umbrae, so that we may have more than one symbolical representation of the same real coefficients. We consider the quantic to have any number of equivalent representations an
x
bn
x
cn
x
≡ …. So that ank
1
ak
2
bnk
1
bk
2
cnk
1
ck
2
≡ … = ak; and if we wish to denote, by umbrae, a product of coefficients of degree s we employ s sets of umbrae.

Ex. gr. We write a1a2 = an−1
1
a2.bn−2
1
b2
2

/a2
2
= an−3
1
a3
2
.bn−3
1
b3
2
.cn−3
1
c3
2
,
and so on whenever we require to represent a product of real coefficients symbolically; we then have a one-to-one correspondence between the products of real coefficients and their symbolic forms. If we have a function of degree s in the coefficients, we may select any s sets of umbrae for use, and having made a selection we may when only one quantic is under consideration at any time permute the sets of umbrae in any manner without altering the real significance of the symbolism.Ex. gr. To express the function a0a2a2
1
, which is the discriminant of the binary quadratic a0x2
1
+ 2 a1x1x2 + a2x2
2
= a2
x
= b2
x
, in a symbolic form we have
2(a0a2a2
1
) = a0a2 + a1a2 − 2a1 . a1 = a2
1
b2
1
+ a2
2
b2
1
− 2a1a2b1b2
= (a1b2a2b1)2.

Such an expression as a1b2a2b1 which is
ax/x1bx/x2ax/x2bx/x1,
is usually written (ab) for brevity; in the same notation the determinant, whose rows are al, a2, a3; b1, b2, b3; c1, c2, c3 respectively, is written (abc) and so on. It should be noticed that the real function denoted by (ab)2 is not the square of a real function denoted by (ab). For a single quantic of the first order (ab) is the symbol of a function of the coefficients which vanishes identically; thus
(ab) = a1b2a2b1 = a0a1a1a0 = 0
and, indeed, from a remark made above we see that (ab) remains unchanged by interchange of a and b; but (ab), = −(ba), and these two facts necessitate (ab) = 0.

To find the effect of linear transformation on the symbolic form of quantic we will disuse the coefficients a11, a12, a21, a22, and employ λ1, μ1, λ2, μ2. For the substitution
x1 = λ1ξ1 + μ1ξ2, x2 = λ2ξ1 + μ2ξ2,
of modulus |λ1
λ2
μ1
μ2
| = (λ1μ2λ2μ1) = (λμ),
the quadratic form a0x2
1
+ 2ax1x2 + a2x2
2
= 2
x
= ƒ(x),
becomes
A0ξ2
1
+ 2A1ξ1ξ2 + A2ξ2
2
= A2
ξ
= φ(ξ),
where
A0 = a0λ2
1
+ 2a1λ1λ2 + a2λ2
2
,
A1 = a0λ1μ1 + a1(λ1μ2 + λ2μ1) + a2λ2μ2,
A2 = a0μc+ 2a1μ1μ2 + a2μ2
2
.

We pass to the symbolic forms
a2
x
= (a1x1 + a2x2)2,A2
ξ
= (A1ξ1 + A2ξ2)2,
by writing for
a0, a1, a2 the symbols a2
1
, a1a2, a2
2

A0, A1, A2 the symbols A2
1
, A1A2, A2
2

and then
A0 = a2
1
λ2
1
+ 2a1a2λ1λ2 + a2
2
λ2
2
= (a1λ1 + a2λ2)2 = a2
λ
,
A1 = (a1λ1 + a2λ2) (a1μ1 + a2μ2) = aλaμ,
A2 = (a1μ1 + a2μ2)2 = a2
μ
;
so that
A2
ξ
= a2
λ
ξ2
1
+ 2aλaμξ1ξ2 + a2
μ
ξ2
2
= (aλξ1 + aμξ2)2;
whence A1, A2 become aλ, 'aμ respectively and
φ(ξ) = (aλξ1 + aμξ2)2.
The practical result of the transformation is to change the umbrae al, a2 into the umbrae
aλ = a1λ1 + a2λ1,aμ = a1μ1 + a2μ2
respectively.

By similarly transforming the binary nic form an
x
we find
A0 = (a1λ1 + a2λ2)n = an
λ
+ An
1
,
A1 = (a1λ1 + a2λ2)n−1 (a1μ1 + a2μ2) = an−1
λ
aμ = An−1
1
A2,
········ Ak = (a1λ1 + a2λ2)nk (a1μ1 + a2μ2)k = ank
λ
ak
μ
= Ank
1
Ank
2
,
so that the umbrae A1, A2 are aλ, aμ respectively.

Theorem.-When the binary form
an
x
= (a1x1 + a2x2)n
is transformed to
An
ξ
= (A1ξ1 + A2ξ2)n
by the substitutions
x1 = λ1ξ1 + μ1ξ2, x2 = λ2ξ1 + μ2ξ2,
the umbrae A1, A2 are expressed in terms of the umbrae a1, a2 by the formulae
A1 = λ1a1 + λ2a2, A2 = μ1a1 + μ2a2,
We gather that A1, A2 are transformed to a1, a2 in such wise that the determinant of transformation reads by rows as the original determinant reads by columns, and that the modulus of the transformation is, as before, (λμ). For this reason the umbrae A1, A2 are said to be contragredient to x1, x2. If we solve the equations connecting the original and transformed unbrae we find
(λμ)(−a2) = λ1(−A2) + μ1A1,
(λμ)a1 = λ2(−A2) + μ2A1,
and we find that, except for the factor (λμ), −a2 and +a1 are transformed to −A2 and +A1 by the same substitutions as x1 and x2 are transformed to ξ1 and ξ2. For this reason the umbrae −a2, a1 are said to be cogredient to x1 and x2. We frequently meet with cogredient and contragedient quantities, and we have in general the following definitions:-(1) "If two equally numerous sets of quantities x, y, z, ... x′, y′, z′, ... are such that whenever one set x, y, z,... is expressed in terms of new quantities X, Y, Z, ... the second set x′, y′, z′, ... is expressed in terms of other new quantities X′, Y′, Z′, .... by the same scheme of linear substitution the two sets are said to be cogredient quantities." (2) "Two sets of quantities x, y, z, ...; ξ, ηζ, ... are said to be contragredient when the linear substitutions for the first set are
x = λ1X + μ1Y + ν1Z +
y = λ2X + μ2Y + ν2Z +
z = λ3X + μ3Y + ν3Z +
····· and these are associated with the following formulae appertaining to the second set,
Ξ = λ1ξ + λ2η + λ3ζ + ,
Η = μ1ξ + μ2η + μ3ζ + ,
Ζ = ν1ξ + ν2η + ν3ζ + ,
···· wherein it should be noticed that new quantities are expressed in terms of the old, as regards the latter set, and not vice versa."

Ex. gr. The symbols d/dx, d/dy, d/dz, ... are contragredient with the variables x, y, z, ... for when
(x, y, z, ...) = (λ1, μ1, ν1, ...)(X, Y, Z, ...)
( x , z, ���) = (A l, �i, VI I ���)

(X, Y, Z, ���), I A 2, / 2 2, Y2, ... I I A S, 1 2 3, Y 3, .... 1

(Tr (T d d d d d d ,.. rd Y' ' ...) = 01, A2, A 3, ...)

(d ' ' z / 2 1, /22, / 1 3, ... Pl, P2, P3, ... we find
(d/dX, d/dY, d/dZ, …) = (λ1, λ2, λ3, …) (d/dx, d/dy, d/dz, …)

μ1, μ2, μ3,
ν1, ν2, ν3,
. . . .

Observe the notation, which is that introduced by Cayley into the theory of matrices which he himself created.

Just as cogrediency leads to a theory of covariants, so contragrediency leads to a theory of contravariants. If u, a quantic in x, y, z, …, be expressed in terms of new variables X, Y, Z …; and if, ξ, η, ζ, …, be quantities contragredient to x, y, z, …; there are found to exist functions of ξ, η, ζ …, and of the coefficients in u, which need, at most, be multiplied by powers of the modulus to be made equal to the same functions of Ξ, Η, Ζ, … of the transformed coefficients of u; such functions are called contravariants of u. There also exist functions, which involve both sets of variables as well as the coefficients of u, possessing a like property; such have been termed mixed concomitants, and they, like contravariants, may appertain as well to a system of forms as to a single form.

As between the original and transformed quantic we have the umbral relations
A1 = λ1a1 + λ2a2, A2 = μ1a1 + μ2a2,
and for a second form
B1 = λ1b1 + λ2b2, B2 = μ1b1 + μ2b2.
The original forms are an
x
, bn
x
, and we may regard them either as different forms or as equivalent representations of the same form. In other words, B, b may be regarded as different or alternative symbols to A, a. In either case
(AB) = A1B2 − A2B1 = (λμ)(ab);
and, from the definition, (ab) possesses the invariant property. We cannot, however, say that it is an invariant unless it is expressible in terms of the real coefficients. Since (ab) = a1b2a2b1, that this may be the case each form must be linear; and if the forms be different (ab) is an invariant (simultaneous) of the two forms, its real expression being a0b1a1b0. This will be recognized as the resultant of the two linear forms. If the two linear forms be identical, the umbral sets a1, a2; b1, b2 are alternative, are ultimately put equal to one another and (ab) vanishes. A single linear form has, in fact, no invariant. When either of the forms is of an order higher than the first (ab), as not being expressible in terms of the actual coefficients of the forms, is not an invariant and has no significance. Introducing now other sets of symbols C, D, …; c, d, … we may write
(AB)i(AC)j(BC)k… = (λμ)i+j+k+…(ab)i(ac)j(bc)k…,
so that the symbolic product
(ab)i(ac)j(bc)k…,
possesses the invariant property. If the forms be all linear and different, the function is an invariant, viz. the ith power of that appertaining to ax and bx multiplied by the jth power of that appertaining to ax and cx multiplied by &c. If any two of the linear forms, say px, qx, be supposed identical, any symbolic expression involving the factor (pq) is zero. Notice, therefore, that the symbolic product (ab)i(ac)j(bc)k… may be always viewed as a simultaneous invariant of a number of different linear forms ax, bx, cx, …. In order that (ab)i(ac)j(bc)k… may be a simultaneous invariant of a number of different forms an1
x
, bn2
x
, cn3
x
,…, where n1, n2, n3, … may be the same or different, it is necessary that every product of umbrae which arises in the expansion of the symbolic product be of degree n1 in a1, a2; in the case of b1, b2 of degree n2; in the case of c 1, c2 of degree n3; and so on. For these only will the symbolic product be replaceable by a linear function of products of real coefficients. Hence the condition is
i + j + … = n1,
i + k + … = n2,
j + k + … = n3,
....
If the forms an
x
, bn
x
, cn
x
, … be identical the symbols are alternative, and provided that the form does not vanish it denotes an invariant of the single form an
x
.

There may be a number of forms an
x
, bn
x
, cn
x
, … and we may suppose such identities between the symbols that on the whole only two, three, or more of the sets of umbrae are not equivalent; we will then obtain invariants of two, three, or more sets of binary forms. The symbolic expression of a covariant is equally simple, because we see at once that since Aξ, Bξ, Cξ, … are equal to ax, bx, cx, … respectively, the linear forms ax, bx, cx, … possess the invariant property, and we may write
(AB)i(AC)j(BC)k…Aρ
ξ
Bσ
ξ
Cτ
ξ

= (λμ)i+j+k+…(ab)i(ac)j(bc)kaρ
x
bσ
x
cτ
x
…,
and assert that the symbolic product
(ab)i(ac)j(bc)kaρ
x
bσ
x
cτ
x
…,
possesses the invariant property. It is always an invariant or covariant appertaining to a number of different linear forms, and as before it may vanish if two such linear forms be identical. In general it will be simultaneous covariant of the different forms an1
x
, bn2
x
, cn3
x
, … if
i + j + … + ρ = n1,
i + j + … + σ = n2,
i + j + … + τ = n3,
. . ..

It will also be a covariant if the symbolic product be factorizable into portions each of which satisfies these conditions. If the forms be identical the sets of symbols are ultimately equated, and the form, provided it does not vanish, is a covariant of the form an
x
.

The expression (ab)4 properly appertains to a quartic; for a quadratic it may also be written (ab)2 (cd)2, and would denote the square of the discriminant to a factor près. For the quartic
(ab)4 = (a1b2a2b1)4 = a4
1
b4
2
− 4a4
1
a2b1b3
2
+ 6a2
1
a2
2
b2
1
b2
2

− 4a1a3
2
}b3
1
b2 + a4
2
b4
1
= a0a4 − 4a1a3 + 6a2
2
− 4a1a3 + a0a4
= 2(a0a4 − 4a1a3 + 3a2
2
),
one of the well-known invariants of the quartic.

For the cubic (ab)2axbx is a covariant because each symbol a, b occurs three times; we can first of all find its real expression as a simultaneous covariant of two cubics, and then, by supposing the two cubics to merge into identity, find the expression of the quadratic covariant, of the single cubic, commonly known as the Hessian.

By simple multiplication
(a3
1
b1b2
2
− 2a2
1
a2b2
1
b2 + a1a2
2
b3
1
)x2
1

+(a3
1
b3
2
- a1a2
2
b2
1
b2 - a2
1
a2b1b2
2
+ a3
2
b3
1
)x1x2
+ (a2
1
a2b3
2
- 2a1a2
2
b1b3
2
+ a3
2
b2
1
b2)x2
2
;
and transforming to the real form,
(a0b2 − 2a1b1 + a2b0)x2
1
(a0b3a1b2a2b1 + a3b0)x1x2
+ (a1b3 − 2a2b2 + a3b1)x2
2
,
the simultaneous covariant; and now, putting b = a, we obtain twice. the Hessian
(a0a2a2
1
)x2
2
+ (a0a3a1a2)x1x2 + (a1a3a2
2
)x2
2
.

It will be shown later that all invariants, single or simultaneous, are expressible in terms of symbolic products. The degree of the covariant in the coefficients is equal to the number of different symbols a, b, c, … that occur in the symbolic expression; the degree in the variables (i.e. the order of the covariant) is ρ + σ + τ … and the weight[4] of the coefficient of the leading term xρ + σ + τ
1
is equal to i + j + k + …. It will be apparent that there are four numbers associated with a covariant, viz. the orders of the quantic and covariant, and the degree and weight of the leading coefficient; calling these n, ε, θ, w respectively we can see that they are not independent integers, but that they are invariably connected by a certain relation nθ − 2w = ε. For, if φ(a0,…x1, x2) be a covariant of order ε appertaining to a quantic of order n,
φ(A0,…ξ1ξ2) = (λμ)w φ(a0,…λ1ξ1 + μ1ξ2, λ2ξ1 + μ2ξ2)
we find that the left- and right-hand sides are of degrees nθ and 2w + ε respectively in λ1, μ1, λ2, μ2, and thence nθ = 2w + ε.

Symbolic Identities.— For the purpose of manipulating symbolic expressions it is necessary to be in possession of certain simple identities which connect certain symbolic products. From the three equations
ax = a1x1 + a2x2, bx = b1x1 + b2x2, cx = c1x1 + c2x2,
we find by eliminating x1, and x2 the relation
ax(bc) + bx(ca) + cx(ab) = 0...(I.)
Introduce now new umbrae d1, d2 and recall that +d2d1 are cogredient with x1, and x2. We may in any relation substitute for any pair of quantities any other cogredient pair so that writing +d2, −d1 for x1 and x2, and noting that gx then becomes (gd), the above-written identity becomes
(ad)(bc) + (bd)(ca) + (cd)(ab) = 0... (II.)
Similarly in (I.), writing for c1, c2 the cogredient pair -y2, +y1, we obtain
axbyaybx = (ab)(xy). ... (III.)
Again in (I.) transposing ax(bc) to the other side and squaring, we obtain
2(ac)(bc)axbx = (bc)2a2
x
+ (ac)2bx − (ab)2c2
x
. (IV.)
and herein writing d2, −d1 for x1, x2,
2(ac)(bc)(ad)(bd) − (bc)2(ad)2 + (ac)2(bd)2 − (ab)2(cd)2. (V.)

As an illustration multiply (IV.) throughout by an−2
x
bn−2
x
cn−2
x
so that each term may denote a covariant of an nic.
2(ac)(bc)an−1
x
bn−1
x
cn−1
x

= (bc)2an
x
bn−2
x
cn−2
x
+ (ac)2an−2
x
bn
x
cn−2
x
− (ab)2an−2
x
bn−2
x
cn
x
,

Each term on the right-hand side may be shown by permutation of a, b, c to be the symbolical representation of the same covariant; they are equivalent symbolic products, and we may accordingly write
2(ac) (bc)ai -1 bi -1 cx 2 =(ab)2a:-2b:-2c:,
a relation which shows that the form on the left is the product of the two covariants
n (ab) ay 2 by 2 and cZ.

The identities are, in particular, of service in reducing symbolic products to standard forms. A symbolical expression may be always so transformed that the power of any determinant factor (ab) is even. For we may in any product interchange a and b without altering its signification; therefore
(ab) 2m+1 4) 1 = - (ab) 2 " 4)2,
where 4,1 becomes by the interchange, and hence
(ab)2m+14)1= Z (ab) 2m+1 (4) 1 - 02);
and identity (I.) will always result in transforming 01-02 so as to make it divisible by (ab).

Ex. gr.
(ab)(ac)bxcx = - (ab)(bc)axcx = 2(ab)c x {(ac)bx-(bc)axi = 1(ab)2ci;
so that the covariant of the quadratic on the left is half the product of the quadratic itself and its only invariant. To obtain the corresponding theorem concerning the general form of even order we multiply throughout by (ab)2' 2c272 and obtain (ab)2m-1(ac)bxc2:^1=(ab)2mc2

Paying attention merely to the determinant factors there is no form with one factor since (ab) vanishes identically. For two factors the standard form is (ab) 2; for three factors (ab) 2 (ac); for four factors (ab) 4 and (ab) 2 (cd) 2; for five factors (ab) 4 (ac) and (ab) 2 (ac)(de) 2; for six factors (ab) 6, (ab) 2 (bc) 2 (ca) 2 , and (ab) 2 (cd) 2 (ef) 2 . It will be a useful exercise for the reader to interpret the corresponding covariants of the general quantic, to show that some of them are simple powers or products of other covariants of lower degrees and order.

The Polar Process.—The �th polar of ax with regard to y is
n-� a aye i.e. of the symbolic factors of the form are replaced by IA others in which new variables y1, y2 replace the old variables x1, x 2 . The operation of taking the polar results in a symbolic product, and the repetition of the process in regard to new cogredient sets of variables results in symbolic forms. It is therefore an invariant process. All the forms obtained are invariants in regard to linear transformations, in accordance with the same scheme of substitutions, of the several sets of variables.

An important associated operation is a ? 32 ax l ay 2 ax2ay1' which, operating upon any polar, causes it to vanish. Moreover, its operation upon any invariant form produces an invariant form. Every symbolic product, involving several sets of cogredient variables, can be exhibited as a sum of terms, each of which is a polar multiplied by a product of powers of the determinant factors ( xy), (xz), (yz),... Transvection. - We have seen that (ab) is a simultaneous invariant of the two different linear forms a x, bx, and we observe that (ab) is equivalent to where f =a x, 4)=b. If f =ay, 4 = b' be any two binary forms, we generalize by forming the function (m-k)! (n-k)! of a4) of a 4) k m! l ax 2 2 ax i l This is called the kth transvectant of f over 4); it may be conveniently denoted by (f, (15)k. (a m b n) k (ab) kamkbn-k x, x - x it is clear that the k th transvectant is a simultaneous covariant of the two forms.

It has been shown by Gordan that every symbolic product is expressible as a sum of transvectants.

If m > n there are n +1 transvectants corresponding to the values o, t, 2,... n of k; if k = o we have the product of the two forms, and for all values of k>n the transvectants vanish. In general we may have any two forms 01/1X1+ 'II � Yy + 02x2) p Y'x =, / / being the umbrae, as usual, and for the kth transvectant we have (4)1,,, 4)Q) k = (4)) k 4)2 -krk, a simultaneous covariant of the two forms. We may suppose of, 4 ,2 to be any two covariants appertaining to a system, and the process of transvection supplies a means of proceeding from them to other covariants.

The two forms ax, bx, or of, 0, may be identical; we then have the kth transvectant of a form over itself which may, or may not, vanish identically; and, in the latter case, is a covariant of the single form. It is obvious that, when k is uneven, the kth transvectant of a form over itself does vanish. We have seen that transvection is equivalent to the performance of partial differential operations upon the two forms, but, practically, we may regard the process as merely substituting (ab) k, (OW for azbx, 4x t ' respectively in the symbolic product subjected to transvection. It is essentially an operation performed upon the product of �two forms. If, then, we require the transvectants of the two forms f+Xf', 0+14', we take their product fc5+xf'95+,-ifct'+atif'cb', and the kth transvectant is simply obtained by operating upon each term separately, viz.

(f, 4)) k +(f, 4)) k +�(f, 4/) k +a�(1, 4)')k; and, moreover, if we require to find the kth transvectant of one linear system of forms over another we have merely to multiply the two systems, and take the k th transvectant of the separate products.

The process of transvection is connected with the operations 12; for ?k (a m b n) = (ab)kam-kbn-k, (x y x y or S 2 k (a x by) x = 4))k; so also is the polar process, for since f k m-k k k n - k k y = a x by, 4)y = bx by, if we take the k th transvectant of f i x; over 4 k, regarding y,, y 2 as the variables, (f k, 4)y) k (ab) ka x -kb k (f, 15)k; or the k th transvectant of the k th polars, in regard to y, is equal to the kth transvectant of the forms. Moreover, the kth transvectant (ab) k a m-k b: -k is derivable from the kth polar of ax, viz. ai by substituting for y 1, y 2 the cogredient quantities b2,-b1, and multiplying by by-k.

First and Second Transvectants.- A few words must be said about the first two transvectants as they are of exceptional Interest. Since, If F = An, 4) = By, 1 = I

(Df A4) Of A?) Ab A"'^1Bz 1=, (F, Mn Ax I Ax 2 Axe Ax1) J

The First Transvectant Differs But By A Numerical Factor From The Jacobian Or Functional Determinant, Of The Two Forms. We Can Find An Expression For The First Transvectant Of (F, �) 1 Over Another Form Cp. For (M N)(F,4)), =Nf.4Y Mfy.4), And F,4, F 5.4)= (Axby A Y B X) A X B X 1= (Xy)(F,4))1; (F,Ct)1=F5.D' 7,(Xy)(F4)1. Put M 1 For M, N I For N, And Multiply Through By (Ab); Then { (F ,C6) } = (Ab) A X 2A Y B X 1 M N I 2 (Xy) ,?) 2, = (A B)Ax 1B X 2B Y L I Multiply By Cp 1 And For Y L, Y2 Write C 2, C1;

Then The Right Hand Side Becomes

(Ab)(Bc)Am Lbn 2Cp 1 M I C P (F?) 2 M { N2 X, Of Which The First Term, Writing C P = ,,T, Is Mn 2 A B (Ab)(Bc)Axcx 1 M 2 N 2 P 2 2222 2 2 _2 A X B X C (Bc) A C Bx M N 2 2 2 M2°N 2 N 2 M 2 2 A X (Bc) B C P C P (Ab) A B B(Ac) Ax Cp 2 = 2 (04) 2 1 (F,0) 2.4 (F,Y') 2 �?;

And, If

(F,4)) 1 = Km " 2, (F??) 1 1 M N S X X X Af A _Af A Ax, Ax Ax Ax1 and this, on writing c2, − c1 For y1, y2, becomes

(kc)K X 'T 3C X 1= (ƒ,0 1 ', G 1; �

∴1{F,O}1 M 1=1 M 2 0`,4)) (ƒ,φ2).ψ+ (0,0 2 .F '

and thence it appears that the first transvectant of (ƒ, (φ)1 over ψ) is always expressible by means of forms of lower degree in the coefficients wherever each of the forms F, 0, 4, is of higher degree than the first in x1, x2.

The second transvectant of a form over itself is called the Hessian of the form. It is

(ƒ,ƒ′)2 = (ab)2 a n-2 r7 2 =Hx - =H;

unsymbolically it is a numerical multiple of the determinant ∂2ƒ a2f (32 f) It is also the first transvectant of the differxi ax axa x 2 ential coefficients of the form with regard to the variables, viz. (L, _f_)'. For the quadratic it is the discriminant (ab) 2 and for ax2 the cubic the quadratic covariant (ab) 2 axbx.

In general for a form in n variables the Hessian is 3 2 f 3 2 f a2f ax i ax n ax 2 ax " �� ' axn and there is a remarkable theorem which states that if H =o and n=2, 3, or 4 the original form can be exhibited as a form in 1, 2, 3 variables respectively.

The Form ƒ+λφ. - An important method for the formation of covariants is connected with the form ƒ+λφ, where ƒ and φ are of the same order in the variables and X is an arbitrary constant. If the invariants and covariants of this composite quantic be formed we obtain functions of X such that the coefficients of the various powers of X are simultaneous invariants of f and 4). In particular, when 4) is a covariant of f, we obtain in this manner covariants of f. The Partial Differential Equations.--It will be shown later that covariants may be studied by restricting attention to the leading coefficient, viz. that affecting xi where e is the order of the covariant.

An important fact, discovered by Cayley, is that these coefficients, and also the complete covariants, satisfy certain partial differential equations which suffice to determine them, and to ascertain many of their properties. These equations can be arrived at in many ways; the method here given is due to Gordan. X1, X 2, u1, /22 being as usual the coefficients of substitution, let x1a ? + X 2 - = D, X 1 -' j +X 2 =D 2 AA' ?2 / 2 1 3 - 5 -, =112 87,2 = ?1a a + ?2a a =D��, 1 be linear operators. Then if j, J be the original and transformed forms of an invariant J= (a1)wj, w being the weight of the invariant.

Operation upon J results as follows D AA J = wJ; D A J=0; D �A J =0;D �� J = wJ.

The first and fourth of these indicate that (a 2) w is a homogeneous function of X i, X2, and of /u1, � 2 separately, and the second and third arise from the fact that (X / 1) is caused to vanish by both Da � and D�A. Since J= F(A0,A11...Ak,�..), where A k= we find that the results are equivalent to. aJ - ., _ A aJ �. k (DwAk) Ak 0; (D (� A k) Ak =wJ.

k k According to the well-known law for the changes of independent variables. Now D A xA k = (n - k) A k; A� A k = k A?1; D �A A k = (n - k) A k+1;D m� A k = kA k; (n - k)A ka - w Ak - 1 aA k = O; a _ J (n - k) A k +l A k = O; kA k Ak = wJ; equations which are valid when X 1, X 2, � 1, �2 have arbitrary values, and therefore when the values are such that J =j, A k =ak� Hence °a-do +(n -1)71 (a2aa-+... =wj, - aj aj - aj a °aa1 +2a 1aa2 +3a 2aa3 +... =0, - aj aj aj nal aao +(n-1)a2 at i -} (n - 2)a 3aa2+... =0, a 1 a ? +2a 2 a? +3a 3 a +... = wj, aa 1 aa 2 a a 3 the complete system of equations satisfied by an invariant. The fourth shows that every term of the invariant is of the same weight. Moreover, if we add the first to the fourth we obtain aj 2w ak = 7 1=6, j, =0j, where 0 is the degree of the invariant; this shows, as we have before observed, that for an invariant w= - n0. The second and third are those upon the solution of which the theory of the invariant may be said to depend. An instantaneous deduction from the relation w= 2 n0 is that forms of uneven orders possess only invariants of even degree in the coefficients. The two operators - a a - a = a °aa 1 +2 a 1aa2 +... +na" -laan -a a O = na laao + (n 1)a 2aa1 +�.. +a"aa"-1 have been much studied by Sylvester, Hammond, Hilbert and Elliott (Elliott, Algebra of Quantics, ch. vi.). An important reference is “The Differential Equations satisfied by Concomitants of Quantics,” by A. R. Forsyth, Proc. Lond. Math. Soc. vol. xix.

The Evectant Process.—If we have a symbolic product, which contains the symbol a only in determinant factors such as (ab), we may write x 2 ,-x 1 for a 1, a 2 , and thus obtain a product in which (ab) is replaced by b x, (ac) by c x and so on. In particular, when the product denotes an invariant we may transform each of the symbols a, b,...to x in succession, and take the sum of the resultant products; we thus obtain a covariant which is called the first evectant of the original invariant. The second evectant is obtained by similarly operating upon all the symbols remaining which only occur in determinant factors, and so on for the higher evectants.

Ex. gr. From (ac) 2 (bd) 2 (ad)(bc) we obtain (bd) 2 (bc) cyd x +(ac) 2 (ad) c xdx - (bd) 2 (ad)axb x - (ac)2(bc)axbx =4(bd) 2 (bc)c 2. d x the first evectant; and thence 4cxdi the second evectant; in fact the two evectants are to numerical factors pres, the cubic covariant Q, and the square of the original cubic.

If θ be the degree of an invariant j

aj aj a; oj =a ° a a o +al aa l +... +anaan naj n.-1 aj naj =a l aa ° +a 1 a2c3a1...+a2aan

and, herein transforming from a to x, we obtain the first evectant

(-) k, x1x2 aak k

Combinants. - An important class of invariants, of several binary forms of the same order, was discovered by Sylvester. The invariants in question are invariants quâ linear transformation of the forms themselves as well as quâ linear transformation of the variables. If the forms be ax, b2, cy,... The Aronhold process, given by the operation a as between any two of the forms, causes such an invariant to vanish. Thus it has annihilators of the forms

a0 db - 0 +al d 1+a2d 22+... °c - iao l a12da2+'..

and Gordan, in fact, takes the satisfaction of these conditions as defining those invariants which Sylvester termed " combinants." The existence of such forms seems to have been brought to Sylvester's notice by observation of the fact that the resultant of of and b must be a factor of the resultant of Xax+ 12 by and X'a +tA2 for a common factor of the first pair must be also a common factor so we obtain P: = of the second pair; so that the condition for the existence of such common factor must be the same in the two cases. A leading proposition states that, if an invariant of Xax and i ubi be considered as a form in the variables X and ,u, and an invariant of the latter be taken, the result will be a combinant of cif and b1'. The idea_can be generalized so as to have regard to ternary and higher forms each of the same order and of the same number of variables.

For further information see Gordan, Vorlesungen Tiber Invariantentheorie, Bd. ii. � 6 (Leipzig, 1887); E. B. Elliott, Algebra of Quantics, Art. 264 (Oxford, 1895).

Associated Forms.-A system of forms, such that every form appertaining to the binary form is expressible as a rational and integral function of the members of the system, is difficult to obtain. If, however, we specify that all forms are to be rational, but not necessarily integral functions, a new system of forms arises which is easily obtainable. A binary form of order n contains n independent constants, three of which by linear transformation can be given determinate values; the remaining n-3 coefficients, together with the determinant of transformation, give us n -2 parameters, and in consequence one relation must exist between any n - I invariants of the form, and fixing upon n-2 invariants every other invariant is a rational function of its members. Similarly regarding 1 x 2 as additional parameters, we see that every covariant is expressible as a rational function of n fixed covariants. We can so determine these n covariants that every other covariant is expressed in terms of them by a fraction whose denominator is a power of the binary form.

First observe that with f x =a: = b z = ���,f1 = a l a z ', f 2 = a 2 az-', f x =f,x i +f 2 x i, we find (ab) - (a f) bx - (b f) ax. fx ? and that thence every symbolic product is equal to a rational function of covariants in the form of a fraction whose denominator is a power of f x. Making the substitution in any symbolic product the only determinant factors that present themselves in the numerator are of the form (af), (bf), (cf),...and every symbol a finally appears in the form.

% -k Y k = (af) k a n x. 'hc has f as a factor, and may be written f. uk; for observing that 1,to =f. =f. uo; 4, 1=0=f.; where u 0 =1, u1=o, assume that tfik = (af) k ay -k = f. u k =�y. ukx(n-2) � Taking the first polar with regard to y (n - k) (a f) xa x -k-l ay+ k (af) k-l ay -k (ab) (n -1) b12by n kn-2k-1 n-1 k(n-2) =k(n- 2)a u x u5+nax ayux and, writing f 2 and -f l for y1 and 3,21 (n-k)(a f) k+ta i k-1 + k (n - 1)(ab)(a f) k-1 (b f)4 1 k by-2 = (uf)u xn-2k-1? Moreover the second term on the left contains ( a f)' c -2b z 2 = 2 (a f) k-2b x 2 - (b) /0-2a 2 � if k be uneven, and (af)?'bx (i f) of) '-la if k be even; in either case the factor (af) bx - (bf) ax = (ab) f, and therefore (n-k),bk+1 +M�f = k(n-2)f.(uf)uxn-2k-1; and 4 ' +1 is seen to be of the form f .14+1. We may write therefore 1 These forms, n in number, are called " associated forms " of f (" Schwesterformen," " formes associbes ").

Every covariant is rationally expressible by means of the forms f, u 2, u3,... u n since, as we have seen uo =I, u 1 =o. It is easy to find the relations u2 =2(f u3 = ((f ,f')2,f") 114=2(f,f') 4 �f 2 41(1,f')212, and so on.

To exhibit any covariant as a function of uo, ul, a n = (aiy1+a2y2) n and transform it by the substitution fi y 1+f2 y where f l = aay 1 ,f2 = a2ay -1, x y - x y = X x thence f . y1 = x 15+f2n; f� y2 =x2-f?n, f .a b = ax+ (a f) n, l; n u 2 " 2 22 2 +` n) u3 n-3n3+...+U 2jn� 3 n Now a covariant of ax =f is obtained from the similar covariant of ab by writing therein x i, x 2, for yl, y2, and, since y?, Y2 have been linearly transformed to and n, it is merely necessary to form the covariants in respect of the form (u1E+u2n) n, and then division, by the proper power of f, gives the covariant in question as a function of f, u0 = I, u2, u3,...un.

Summary of Results.-We will now give a short account of the results to which the foregoing processes lead. Of any form az there exists a finite number of invariants and covariants, in terms of which all other covariants are rational and integral functions (cf. Gordan,, Bd. ii. � 21). This finite number of forms is said to constitute the complete system. Of two or more binary forms there are also complete systems containing a finite number of forms. There are also algebraic systems, as above mentioned, involving fewer covariants which are such that all other covariants are rationally expressible in terms of them; but these smaller systems do not possess the same mathematical interest as those first mentioned.

The Binary Quadratic.-The complete system consists of the form itself, ax, and the discriminant, which is the second transvectant of the form upon itself, viz.: (f, f') 2 = (ab) 2; or, in real coefficients, 2(a 0 a 2 a 2 1). The first transvectant, (f,f') 1 = (ab) a x b x ,vanishes identically. Calling the discriminate D, the solution of the quadratic as =o is given by the formula a: = o ( a0+a12_x2 (a0x+aix2 If the form a 2 be written as the product of its linear factors p.a., the discriminant takes the form -2(pq) 2. The vanishing of this invariant is the condition for equal roots. The simultaneous system of two quadratic forms ai, ay, say f and 0, consists of six forms, viz.

the two quadratic forms f, 4); the two discriminants (f, f')2,(0,4')2, and the first and second transvectants of f upon 4, (f, ,>) 1 and (f, 402, which may be written (aa)a x a x and (aa) 2 . These fundamental or ground forms are connected by the relation - 2 1 (f,4) 1) 2 = -2f4,(f ,4,)2+ 02(f,f')2.

If the covariant (f,4) 1 vanishes f and 4 are clearly proportional, and if the second transvectant of (f, 4 5) 1 upon itself vanishes, f and 4) possess a common linear factor; and the condition is both necessary and sufficient. In this case (f, �) 1 is a perfect square, since its discriminant vanishes. If (f,4) 1 be not a perfect square, and rx, s x be its linear factors, it is possible to express f and 4, in the canonical forms Xi(rx)2+X2(sx)2, 111(rx)2+1.2 (sx) 2 respectively. In fact, if f and 4, have these forms, it is easy to verify that (f, 4,)i= (A j z) (rs)r x s x . The fundamental system connected with n quadratic forms consists of (i.) the n forms themselves f i, f2,�� fn, (ii.) the (2) functional determinants (f i ,f k) 1 , (iii.) the (n 2 1) in variants (f l, fk) 2, (iv.) the (3) forms (f i, (f k, f ni)) 2 , each such form remaining unaltered for any permutations of i, k, m. Between these forms various relations exist (cf. Gordan, � 134).

The Binary Cubic.-The complete system consists of f=aa,(f,f')'=(ab)2a b =0 2 ,(f 0)= (ab) 2 (ca)b c=Q3, x x x x x x and (0,0')2 (ab) 2 (cd) 2 (ad) (bc) = R.

To prove that this system is complete we have to consider (f, o) 2, 04') 1, (f,Q) 1, (f,Q) 2, (f,Q) 3, 0,Q) 1, (o,Q)2, and each of these can be shown either to be zero or to be a rational integral function of f, 0 Q and R. These forms are connected by the relation 2Q2+ 3+Rf2=0.

The discriminant of f is equal to the discriminant of 0, and is therefore (0, 0') 2 = R; if it vanishes both f and 0 have two roots equal, 0 is a rational factor of f and Q is a perfect cube; the cube root being equal, to a numerical factor pres, to the square root of A. The Hessian 0 =A 2 is such that (f, 2 and if f is expressible in the form X(p x) 3 +,i(g x) 3 , that is as the sum of two perfect cubes,. we find that Di must be equal to p x g x for then t x (p x) 3 +, u (g x) 3, Hence, if px, qx be the linear factors of the Hessian 64, the cubic can be put into the form A(p x) 3 +�(g x) 3 and immediately solved. This method of solution fails when the discriminant R vanishes, for then the Hessian has equal roots, as also the cubic f. The Hessian in that case is a factor of f, and Q is the third power of u2,... linear factor which occurs to the second power in . If, moreover, vanishes identically is a perfect cube.

The Binary Quartic.—The fundamental system consists of five forms ; ; ; ; , viz. two invariants, two quartics and a sextic. They are connected by the relation

.

The discriminant, whose vanishing is the condition that f may possess two equal roots, has the expression j 2 - 6 i 3; it is nine times the discriminant of the cubic resolvent k 3 - 2 ik- 3j , and has also the expression 4(1, t') 6 . The quartic has four equal roots, that is to say, is a perfect fourth power, when the Hessian vanishes identically; and conversely. This can be verified by equating to zero the five coefficients of the Hessian (ab) 2 axb2. Gordan has also shown that the vanishing of the Hessian of the binary n ic is the necessary and sufficient condition to ensure the form being a perfect n th power. The vanishing of the invariants i and j is the necessary and sufficient condition to ensure the quartic having three equal roots. On the one hand, assuming the quartic to have the form 4xix 2, we find i=j=o, and on the other hand, assuming i=j=o, we find that the quartic must have the form a o xi+4a 1 xix 2 which proves the proposition. The quartic will have two pairs of equal roots, that is, will be a perfect square, if it and its Hessian merely differ by a numerical factor. For it is easy to establish] the formula (yx) 2 0 4 = 2f.4-2(f y 1 ) 2 connecting the Hessian with the quartic and its first and second polars; now a, a root of f, is also a root of Ox, and con se uentl the first polar 1 of of q y p f? =y la xl -i-y2a x2 must also vanish for the root a, and thence ax, and a must also vanish for the same root; which proves that a is a double root of f, and f therefore a perfect square. When f = 6xix2 it will be found that 0 = -f. The simplest form to which the quartic is in general reducible is +6mxix2+x2, involving one parameter m; then Ox = 2m (xi +x2) +2 (1-3m2) x2 ix2; i = 2 (t +3m2) ;j= '6m (1 - m) 2; t= (1 - 9m 2) (xi - x2) (x21 + x2) x i x 2. The .sextic covariant t is seen to be factorizable into three quadratic factors 4 = x 1 x 2, =x 2 1 - 1 - 2 2, 4) - x, which are such that the three mutual second transvectants vanish identically; they are for this reason termed conjugate quadratic factors. It is on a consideration of these factors of t that Cayley bases his solution of the quartic equation. For, since -2t 2 =0 3 -21f 2 ,6,-3j(-f) 3, he compares the right-hand side with cubic resolvent k 3 -21X 2 k - j 2. of f=0, :and notices that they become identical on substituting 0 for k, and -f for X; hence, if k1, k2, k 3 be the roots of the resolvent -21 2 = (o + k if) (A + k 2f)(o + k 3f); and now, if all the roots of f be different, so also are those of the resolvent, since the latter, and f, have practically the same discriminant; consequently each of the three factors, of -21 2, must be perfect squares and taking the square root 1 t = -' (1)�x4; and it can be shown that 0, x, 1P are the three conjugate quadratic factors of t above mentioned. We have A +k 1 f =0 2, O+k 2 f = x2, O+k3f =4) 2 , and Cayley shows that a root of the quartic can be xpressed in the determinant form 1, k, 0.1y the remaining roots being obtained by varying 1, k, x the signs which occur in the radicals 2 u The transformation to the normal form reduces 1, k 3 ,? the quartic to a quadratic. The new variables y1= 0 are the linear factors of 0. If 4) = rx.sx, the Y2 =1 normal form of a:, can be shown to be given by (rs) 4 .a x 4 = (ar) 4s: 6 (ar) 2 (as) 2rxsy -I- (as) 4rx; 4) is any one of the conjugate quadratic factors of t, so that, in determining rx, sx from J z+k 1 f =o, k 1 is any root of the resolvent. 
The transformation to the normal form, by the solution of a cubic and a quadratic, therefore, supplies a solution of the quartic. If (X�) is the modulus of the transformation by which a2 is reduced to 3 the normal form, i becomes (X /2) 4 i, and j, (Ap) 3 j; hence ? 3 is absolutely unaltered by transformation, and is termed the absolute invariant. Since therefore ? 2 - 9 m 2 (1 3 m 2)) 2 we have a cubic equation for determining m 2 as a function of the absolute invariant.

Remark.—Hermite has shown (Crelle, Bd. lii.) that the substitution, , reduces to the form

.

The Binary Quintic.—The complete system consists of 23 forms, of which the simplest are f =a:; the Hessian H = (f, f') 2 = (ab) 2axbz; the quadratic covariant i= (f, f) 4 = (ab) 4axbx; and the nonic co variant T = (f, (f', f") 2) 1 = (f, H) 1 = (aH) azHi = (ab) 2 (ca) axbycy; the remaining 19 are expressible as transvectants of compounds of these four.

There are four invariants (i, i')2; (13, H)6; (f2, 151c.; (f t, 17)14 four linear forms (f, i 2) 4; (f, i 3) 5; (i 4, T) 8; ( 2 5 , T)9 three quadratic forms i; (H, i 2)4; (H, 23)5 three cubic forms (f, i)2; (f, i 2) 3; (13, T)6 two quartic forms (H, i) 2; (H, 12)3. three quintic forms f; (f, i) 1; (i 2, T)4 two sextic forms H; (H, 1)1 one septic form (i, T)2 one nonic form T.

We will write the cubic covariant (f, i) 2 =j, and then remark that the result, (f,j) 3 = o, can be readily established. The form j is completely defined by the relation (f,j) 3 =o as no other covariant possesses this property.

Certain convariants of the quintic involve the same determinant factors as appeared in the system of the quartic; these are f, H, i, T and j, and are of special importance. Further, it is convenient to have before us two other quadratic covariants, viz. T = (j, j) 2 jxjx; 0 = (iT)i x r x; four other linear covariants, viz. a = - (ji) 2 jx; s = (ia)ix; Y = (ra)r x: (3= (T0)T x . Further, in the case of invariants, we write A= (1, i') 2 and take three new forms B = (i, T) 2; C = (r, r`) 2; R = (/y). Hermite expresses the quintic in a forme-type in which the constants are invariants and the variables linear covariants. If a, a be the linear forms, above defined, he raises the identity ax(0) =ax(aJ3) - (3x(aa) to the fifth power (and in general to the power n) obtaining (aa) 5 f = (a13) 5 az - 5 (a0) 4 (aa) ax?3 -F... - (aa) and then expresses the coefficients, on the right, in terms of the fundamental invariants. On this principle the covariant j is expressible in the form R 2 j =5 3 + BS 2 a+4ACSa 2 + C(3AB -4C)a3 when S, a are the above defined linear forms.

Hence, solving the cubic, R 2 j = (S -m i a) (S - m 2 a) (S - m3a) wherein m 1 m2, m 3 are invariants.

Sylvester showed that the quintic might, in general, be expressed as the sum of three fifth powers, viz. in the canonical form

f=k1(px)5 +k2(gx) 5 +k3(rx) 5 .

Now, evidently, the third transvectant of f, expressed in this form, with the cubic pxgxrx is zero, and hence from a property of the covariant j we must have j = pxgxrx; showing that the linear forms involved are the linear factors of j. We may therefore write I. / f = k1.(S-mia)5+k2(S-m2a)5+k3(6-m3a)5; and we have merely to determine the constants k1, k2, k3. To determine them notice that R = (a6) and then (f, a 5) 5 = - R 5 (k1 +k2+k3) (f, a 4 5) 5 = - 5R5 ( m 1 k 1+ m 2 k 2+ m 3 k 3), (f, a352) 5 = -10R5 (m21ke +m2k2+m3k3) three equations for determining k 1, k2, k3. This canonical form depends upon j having three unequal linear factors. When C vanishes j has the form j = pxg x , and (f,j) 3 = (ap) 2 (aq)ax = o. Hence, from the identity ax (pq) = px (aq) -qx (ap), we obtain (pet' = (aq) 5px - 5 (ap) (aq) 4 pxg x - (ap) 5 gi, the required canonical form. Now, when C = o, clearly (see ante) R 2 j = 6 2 p where p = S +2 B a; and Gordan then proves the relation 6R 4 .f = B65�5B64p - 4A2p5, which is Bring's form of quintic at which we can always arrive, by linear transformation, whenever the invariant C vanishes. Remark.-The invariant C is a numerical multiple of the resultant of the covariants i and j, and if C = o, p is the common factor of i and j. The discriminant is the resultant of ax and ax and of degree 8 in the coefficients; since it is a rational and integral function of the fundamental invariants it is expressible as a linear function of A 2 and B; it is independent of C, and is therefore unaltered when C vanishes; we may therefore take f in the canonical form

6R 4 f = BS5+5BS4p-4A2p5. The two equations

,
,

yield by elimination of and the discriminant

.

The general equation of degree 5 cannot be solved algebraically, but the roots can be expressed by means of elliptic modular functions. For an algebraic solution the invariants must fulfil certain conditions. When , and neither of the expressions , vanishes, the covariant is a linear factor of ; but, when , also vanishes, and then is a product of the form and of the Hessian of . When and the invariants and all vanish, either A or must vanish; in the former case is a perfect cube, its Hessian vanishing, and further contains as a factor; in the latter case, if , be the linear factors of , can be expressed as ; if both and vanish also vanishes identically, and so also does . If, however, the condition be the vanishing of , contains a linear factor to the fourth power.

The Binary Sextic.—The complete system consists of 26 forms, of which the simplest are ; the Hessian ; the quartic ; the covariants ; ; and the invariants ; . There are

5 invariants: ;
6 of order 2: ;
5 of order 4: ;
5 of order 6: ;
3 of order 8: ;
1 of order 10: ;
1 of order 12: .

For a further discussion of the binary sextic see Gordan, loc. cit., Clebsch, loc. cit. The complete systems of the quintic and sextic were first obtained by Gordan in 1868 (Journ. f. Math. lxix. 323-354). August von Gall in 1880 obtained the complete system of the binary octavic (Math. Ann. xvii. 31-52, 139-152, 456); and, in 1888, that of the binary septimic, which proved to be much more complicated (Math. Ann. xxxi. 318-336). Single binary forms of higher and finite order have not been studied with complete success, but the system of the binary form of infinite order has been completely determined by Sylvester, Cayley, MacMahon and Stroh, each of whom contributed to the theory.

As regards simultaneous binary forms, the system of two quadratics, and of any number of quadratics, is alluded to above and has long been known. The system of the quadratic and cubic, consisting of 15 forms, and that of two cubics, consisting of 26 forms, were obtained by Salmon and Clebsch; that of the cubic and quartic we owe to Sigmund Gundelfinger (Programm Stuttgart, 1869, 1-43); that of the quadratic and quintic to Winter (Programm Darmstadt, 1880); that of the quadratic and sextic to von Gall (Programm Lemgo, 1873); that of two quartics to Gordan (Math. Ann. ii. 227-281, 1870); and to Eugenio Bertini (Batt. Giorn. xiv. 1-14, 1876; also Math. Ann. xi. 30-41, 1877). The system of four forms, of which two are linear and two quadratic, has been investigated by Perrin (S. M. F. Bull. xv. 45-61, 1887).

Ternary and Higher Forms.—The ternary form of order is represented symbolically by

;

and, as usual, are alternative symbols, so that

.

To form an invariant or covariant we have merely to form a product of factors of two kinds, viz. determinant factors , , , etc.…, and other factors in such manner, that each of the symbols occurs times. Such a symbolic product, if its does not vanish identically, denotes an invariant or a covariant, according as factors do not or do appear. To obtain the real form we multiply out, and, in the result, substitute for the products of symbols the real coefficients which they denote.

For example, take the ternary quadratic

,

or in real form . We can see that is not a covariant, because it vanishes identically, the interchange of and changing its sign instead of leaving it unchanged; but is an invariant. If , , be different forms we obtain, after development of the squared determinant and conversion to the real form (employing single and double dashes to distinguish the real coefficients of and ),



;

a simultaneous invariant of the three forms, and now suppressing the dashes we obtain

,

the expression in brackets being the well-known invariant of , the vanishing of which expresses the condition that the form may break up into two linear factors, or, geometrically, that the conic may represent two right lines. The complete system consists of the form itself and this invariant.

The ternary cubic has been investigated by Cayley, Aronhold, Hermite, Brioschi and Gordan. The principal reference is to Gordan (Math. Ann. i. 90-128, 1869, and vi. 436-512, 1873). The complete covariant and contravariant system includes no fewer than 34 forms; from its complexity it is desirable to consider the cubic in a simple canonical form; that chosen by Cayley was (Amer. J. Math. iv. 1-16, 1881). Another form, associated with the theory of elliptic functions, has been considered by Dingeldey (Math. Ann. xxxi. 157-176, 1888), viz. , and also the special form of the cuspidal cubic. An investigation, by non-symbolic methods, is due to F. C. J. Mertens (Wien. Ber. xcv. 942-991, 1887). Hesse showed independently that the general ternary cubic can be reduced, by linear transformation, to the form

,

a form which involves 9 independent constants, as should be the case; it must, however, be remarked that the counting of constants is not a sure guide to the existence of a conjectured canonical form. Thus the ternary quartic is not, in general, expressible as a sum of five 4th powers as the counting of constants might have led one to expect, a theorem due to Sylvester. Hesse’s canonical form shows at once that there cannot be more than two independent invariants; for if there were three we could, by elimination of the modulus of transformation, obtain two functions of the coefficients equal to functions of , and thus, by elimination of , obtain a relation between the coefficients, showing them not to be independent, which is contrary to the hypothesis.

The simplest invariant is cf degree 4, which for the canonical form of Hesse is ; its vanishing indicates that the form is expressible as a sum of three cubes. The Hessian is symbolically , and for the canonical form . By the process of Aronhold we can form the invariant for the cubic , and then the coefficient of is the second invariant . Its symbolic expression, to a numerical factor près, is

,

and it is clearly of degree 6.

One more covariant is requisite to make an algebraically complete set. This is of degree 8 in the coefficients, and degree 6 in the variables, and, for the canonical form, has the expression


.

Passing on to the ternary quartic we find that the number of ground forms is apparently very great. Gordan (Math. Ann. xvii. 217-233), limiting himself to a particular case of the form, has determined 54 ground forms, and G. Maisano (Batt. G. xix. 198-237, 1881) has determined all up to and including the 5th degree in the coefficients.

The system of two ternary quadratics consists of 20 forms; it has been investigated by Gordan (Clebsch-Lindemann’s Vorlesungen i. 288, also Math. Ann. xix. 529-552); Perrin (S. M. F. Bull. xviii. 1-80, 1890); Rosanes (Math. Ann. vi. 264); and Gerbaldi (Annali (2), xvii. 161-196).

Ciamberlini has found a system of 127 forms appertaining to three ternary quadratics (Batt. G. xxiv. 141-157).

A. R. Forsyth has discussed the algebraically complete sets of ground forms of ternary and quaternary forms (see Amer. J. xii. 1-60, 115-160, and Camb. Phil. Trans. xiv. 409-466, 1889). He proves, by means of the six linear partial differential equations satisfied by the concomitants, that, if any concomitant be expanded in powers of , , , the point variables—and of , , , the contragredient line variables—it is completely determinate if its leading coefficient be known. For the unipartite ternary quantic of order he finds that the fundamental system contains individuals. He successfully considers the systems of two and three simultaneous ternary quadratics. In Part III. of the Memoir he discusses bi-ternary quantics, and in particular those which are lineo-linear, quadrato-linear, cubo-linear, quadrato-quadratic, cubo-cubic, and the system of two lineo-linear quantics. He shows that the system of the bi-ternary comprises

individuals.

Bibliographical references to ternary forms are given by Forsyth (Amer. J. xii. p. 16) and by Cayley (Amer. J. iv., 1881). Clebsch, in 1872, in papers in Abh. d. K. Akad. d. U. zu Göttingen, t. xvii. and Math. Ann. t. v., established the important result that in the case of a form in variables, the concomitants of the form, or of a system of such forms, involve in the aggregate classes of variables. For instance, those of a ternary form involve two classes which may be geometrically interpreted as point and line co-ordinates in a plane; those of a quaternary form involve three classes which may be geometrically interpreted as point, line and plane coordinates in space.

IV. Enumerating Generating Functions

Professor Michael Roberts (Quart. Math. J. iv.) was the first to remark that the study of covariants may be reduced to the study of their leading coefficients, and that from any relations connecting the latter are immediately derivable the relations connecting the former. It has been shown above that a covariant, in general, satisfies four partial differential equations. Two of these show that the leading coefficient of any covariant is an isobaric and homogeneous function of the coefficients of the form; the remaining two may be regarded as operators which cause the vanishing of the covariant. These may be written, for the binary ,

;
;

or in the form

, ;

where

,
.

Let a covariant of degree in the variables, and of degree in the coefficients (the weight of the leading coefficient being and ), be

.

Operating with we find ; that is to say, satisfies one of the two partial differential equations satisfied by an invariant. It is for this reason called a seminvariant, and every seminvariant is the leading coefficient of a covariant. The whole theory of invariants of a binary form depends upon the solutions of the equation . Before discussing these it is best to transform the binary form by substituting , for respectively; it then becomes

,

and takes the simpler form

.

One advantage we have obtained is that, if we now write , and substitute for , when > 0, we obtain

which is the form of for a binary .

Hence by merely diminishing each suffix in a seminvariant by unity, we obtain another seminvariant of the same degree, and of weight , appertaining to the . Also, if we increase each suffix in a seminvariant, we obtain terms, free from , of some seminvariant of degree and weight . Ex. gr. from the invariant of the quartic the diminishing process yields , the leading coefficient of the Hessian of the cubic, and the increasing process leads to which only requires the additional term to become a seminvariant of the sextic. A more important advantage, springing from the new form of , arises from the fact that if

,

the sums of powers all satisfy the equation . Hence, excluding , we may, in partition notation, write down the fundamental solutions of the equation, viz.—

,

and say that with , we have an algebraically complete system. Every symmetric function denoted by partitions, not involving the figure unity (say a non-unitary symmetric function), which remains unchanged by any increase of , is also a seminvariant, and we may take if we please another fundamental system, viz.—

or .

Observe that, if we subject any symmetric function to the diminishing process, it becomes .

Next consider the solutions of $\Omega=0$ which are of degree $\theta$ and weight $w$. The general term in a solution involves the product $a_0^{\pi_0}a_1^{\pi_1}a_2^{\pi_2}\ldots a_n^{\pi_n}$, wherein $\Sigma\pi=\theta$ and $\Sigma s\pi_s=w$; the number of such products that may appear depends upon the number of partitions of $w$ into $\theta$ or fewer parts limited not to exceed $n$ in magnitude. Let this number be denoted by $(w;\theta,n)$. In order to obtain the seminvariants we would write down the $(w;\theta,n)$ terms each associated with a literal coefficient; if we now operate with $\Omega$ we obtain a linear function of $(w-1;\theta,n)$ products, for the vanishing of which the literal coefficients must satisfy $(w-1;\theta,n)$ linear equations; hence $(w;\theta,n)-(w-1;\theta,n)$ of these coefficients may be assumed arbitrarily, and the number of linearly independent solutions of $\Omega=0$, of the given degree and weight, is precisely $(w;\theta,n)-(w-1;\theta,n)$. This theory is due to Cayley; its validity depends upon showing that the linear equations satisfied by the literal coefficients are independent; this has only recently been established by E. B. Elliott. These seminvariants are said to form an asyzygetic system. It is shown in the article on Combinatorial Analysis that $(w;\theta,n)$ is the coefficient of $a^\theta z^w$ in the ascending expansion of the fraction

$\frac{1}{(1-a)(1-az)(1-az^2)\ldots(1-az^n)}$.

Hence $(w;\theta,n)-(w-1;\theta,n)$ is given by the coefficient of $a^\theta z^w$ in the fraction

$\frac{1-z}{(1-a)(1-az)(1-az^2)\ldots(1-az^n)}$,

the enumerating generating function of asyzygetic seminvariants. We may, by a well-known theorem, write the result as a coefficient of $z^w$ in the expansion of

$\frac{(1-z^{n+1})(1-z^{n+2})\ldots(1-z^{n+\theta})}{(1-z^2)(1-z^3)\ldots(1-z^\theta)}$;

and since this expression is unaltered by the interchange of $n$ and $\theta$ we prove Hermite’s Law of Reciprocity, which states that the asyzygetic forms of degree $\theta$ for the $n^{\mathrm{ic}}$ are equinumerous with those of degree $n$ for the $\theta^{\mathrm{ic}}$.

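Cayley’s count and Hermite’s law of reciprocity lend themselves to direct numerical verification. The sketch below is not part of the original text and its function names are illustrative; it counts partitions of $w$ into at most $\theta$ parts none exceeding $n$ and forms the difference $(w;\theta,n)-(w-1;\theta,n)$.

```python
# Sketch: Cayley's enumeration of asyzygetic seminvariants and Hermite reciprocity.
from functools import lru_cache

@lru_cache(maxsize=None)
def p(w, parts, limit):
    """Partitions of w into at most `parts` parts, each at most `limit`."""
    if w == 0:
        return 1
    if w < 0 or parts == 0 or limit == 0:
        return 0
    # either no part equals `limit`, or remove one part equal to `limit`
    return p(w, parts, limit - 1) + p(w - limit, parts - 1, limit)

def seminvariants(w, theta, n):
    """(w; theta, n) - (w - 1; theta, n), meaningful for w <= n*theta/2."""
    return p(w, theta, n) - p(w - 1, theta, n)

# Hermite's law of reciprocity: degree theta for the n-ic vs. degree n for the theta-ic
for w in range(13):
    assert seminvariants(w, 3, 5) == seminvariants(w, 5, 3)

print([seminvariants(w, 2, 4) for w in range(5)])   # quartic, degree 2: [1, 0, 1, 0, 1]
```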
The degree of the covariant in the variables is ; consequently we are only concerned with positive terms in the developments and will be negative unless . It is convenient to enumerate the seminvariants of degree and order by a generating function; so, in the first written generating function for seminvariants, write for and for ; we obtain

in which we have to take the coefficient of , the expansion being in ascending powers of . As we have to do only with that part of the expansion which involves positive powers of , we must try to isolate that portion, say . For we can prove that the complete function may be written

,

where

;

and this is the reduced generating function which tells us, by its denominator factors, that the complete system of the quadratic is composed of the form itself of degree order 1, 2 shown by $1-ax^2$, and of the Hessian of degree order 2, 0 shown by $1-a^2$.

Again, for the cubic, we can find

$\frac{1-a^6x^6}{(1-ax^3)(1-a^2x^2)(1-a^3x^3)(1-a^4)}$,

where the ground forms are indicated by the denominator factors, viz.: these are the cubic itself of degree order 1, 3; the Hessian of degree order 2, 2; the cubi-covariant G of degree order 3, 3, and the quartic invariant of degree order 4, 0. Further, the numerator factor establishes that these are not all algebraically independent, but are connected by a syzygy of degree order 6, 6.

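Taking the reduced generating function for the cubic as printed above (itself a reconstruction), its expansion can be compared term by term with Cayley’s enumeration, the order of a covariant of degree $\theta$ and weight $w$ being $3\theta-2w$. The sketch below (illustrative names; sympy assumed) performs the comparison up to degree 6.

```python
# Sketch: the reduced generating function of the binary cubic agrees with
# Cayley's count of asyzygetic covariants of degree theta and order 3*theta - 2*w.
import sympy as sp
from functools import lru_cache

a, x = sp.symbols('a x')

@lru_cache(maxsize=None)
def p(w, parts, limit):
    if w == 0:
        return 1
    if w < 0 or parts == 0 or limit == 0:
        return 0
    return p(w, parts, limit - 1) + p(w - limit, parts - 1, limit)

def cayley_count(theta, order, n=3):
    w = n*theta - order
    if w < 0 or w % 2:
        return 0
    w //= 2
    return p(w, theta, n) - p(w - 1, theta, n)

gf = (1 - a**6*x**6) / ((1 - a*x**3)*(1 - a**2*x**2)*(1 - a**3*x**3)*(1 - a**4))
poly = sp.Poly(sp.series(gf, a, 0, 7).removeO().expand(), a, x)

for theta in range(7):
    for order in range(3*theta + 1):
        assert poly.coeff_monomial(a**theta * x**order) == cayley_count(theta, order)
print("reduced generating function agrees with Cayley's enumeration up to degree 6")
```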
Similarly for the quartic

$\frac{1-a^6x^{12}}{(1-ax^4)(1-a^2x^4)(1-a^2)(1-a^3x^6)(1-a^3)}$,

establishing the 5 ground forms and the syzygy which connects them.

The process is not applicable with complete success to quintic and higher ordered binary forms. This arises from the circumstance that the simple syzygies between the ground forms are not all independent, but are connected by second syzygies, and these again by third syzygies, and so on; this introduces new difficulties which have not been completely overcome. As regards invariants a little further progress has been made by Cayley, who established the two generating functions for the quintic

,

and for the sextic

.

Accounts of further attempts in this direction will be found in Cayley’s Memoirs on Quantics (Collected Papers), in the papers of Sylvester and Franklin (Amer. J. i.-iv.), and in Elliott’s Algebra of Quantics, chap. viii.

Perpetuants.—Many difficulties, connected with binary forms of finite order, disappear altogether when we come to consider the form of infinite order. In this case the ground forms, called also perpetuants, have been enumerated and actual representative seminvariant forms established. Putting $n$ equal to ∞, in a generating function obtained above, we find that the function, which enumerates the asyzygetic seminvariants of degree $\theta$, is

$\frac{1}{(1-z^2)(1-z^3)(1-z^4)\ldots(1-z^\theta)}$;

that is to say, of the weight $w$, we have one form corresponding to each non-unitary partition of $w$ into the parts 2, 3, 4,...$\theta$. The extraordinary advantage of the transformation of $\Omega=0$ to association with non-unitary symmetric functions is now apparent; for we may take, as representative forms, the symmetric functions which are symbolically denoted by the partitions referred to. Ex. gr., of degree 3 weight 8, we have the two forms $(3^22)$, $(2^4)$. If we wish merely to enumerate those whose partitions contain the figure $\theta$, and do not therefore contain any power of $a_0$ as a factor, we have the generator

$\frac{z^\theta}{(1-z^2)(1-z^3)\ldots(1-z^\theta)}$.

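The correspondence with non-unitary partitions can be exhibited by simply listing them. The short sketch below (illustrative only) enumerates the partitions of a given weight into parts between 2 and $\theta$; for degree 3 and weight 8 it returns the two forms just mentioned.

```python
# Sketch: non-unitary partitions (parts 2..theta) representing asyzygetic seminvariants
# of degree theta for the form of infinite order.
def non_unitary_partitions(w, max_part):
    """All partitions of w into parts between 2 and max_part, weakly decreasing."""
    if w == 0:
        return [[]]
    result = []
    for part in range(min(w, max_part), 1, -1):
        for rest in non_unitary_partitions(w - part, part):
            result.append([part] + rest)
    return result

print(non_unitary_partitions(8, 3))                                # [[3, 3, 2], [2, 2, 2, 2]]
print([len(non_unitary_partitions(w, 4)) for w in range(2, 12)])   # weights 2..11, parts at most 4
```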
If $\theta=2$, every form is obviously a ground form or perpetuant, and the series of forms is denoted by $(2),\ (2^2),\ (2^3),\ldots$. Similarly, if $\theta=3$, every form is a perpetuant. For these two cases the perpetuants are enumerated by

$\frac{z^2}{1-z^2}$, and $\frac{z^3}{(1-z^2)(1-z^3)}$

respectively.

When $\theta=4$ it is clear that no form, whose partition contains a part 3, can be reduced; but every form, whose partition is composed of the parts 4 and 2, is by elementary algebra reducible by means of perpetuants of degree 2. These latter forms are enumerated by $\frac{z^4}{(1-z^2)(1-z^4)}$; hence the generator of quartic perpetuants must be

$\frac{z^4}{(1-z^2)(1-z^3)(1-z^4)}-\frac{z^4}{(1-z^2)(1-z^4)}=\frac{z^7}{(1-z^2)(1-z^3)(1-z^4)}$;

and the general form of perpetuants is $(4^{\lambda+1}3^{\mu+1}2^\nu)$.

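The subtraction of generating functions carried out for the quartic can be checked symbolically; the sketch below (sympy assumed, the generating functions being those reconstructed above) verifies that the difference reduces to $z^7/((1-z^2)(1-z^3)(1-z^4))$.

```python
# Sketch: the quartic-perpetuant generator as the difference of two enumerators.
import sympy as sp

z = sp.symbols('z')
asyzygetic = z**4 / ((1 - z**2)*(1 - z**3)*(1 - z**4))   # degree-4 forms containing a part 4
reducible  = z**4 / ((1 - z**2)*(1 - z**4))              # those composed of parts 4 and 2 only
generator  = z**7 / ((1 - z**2)*(1 - z**3)*(1 - z**4))   # quartic perpetuants

print(sp.simplify(asyzygetic - reducible - generator))   # -> 0
```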
When $\theta>4$, the reducible forms are connected by syzygies which there is some difficulty in enumerating. Sylvester, Cayley and MacMahon succeeded, by a laborious process, in establishing the generators for $\theta=5$ and $\theta=6$, viz.:

$\frac{z^{15}}{(1-z^2)(1-z^3)(1-z^4)(1-z^5)}$, $\frac{z^{31}}{(1-z^2)(1-z^3)(1-z^4)(1-z^5)(1-z^6)}$;

but the true method of procedure is that of Stroh which we are about to explain.

Method of Stroh.—In the section on “Algebraic Forms,” it was noted that Stroh considers

,

where and symbolically, to be the fundamental form of seminvariant of degree and weight ; he observes that every form of this degree and weight is a linear function of such symbolic expressions. We may write

.

If we expand the symbolic expression by the multinomial theorem, and remember that any symbolic product retains the same value, however the suffixes be permuted, we shall obtain a sum of terms, such as , which in real form is ; and, if we express in terms of , and arrange the whole as a linear function of products of , each coefficient will be a seminvariant, and the aggregate of the coefficients will give us the complete asyzygetic system of the given degree and weight.

When the proper degree is less than $\theta$ a power of $a_0$ must of course be understood as a factor.

Ex. gr.


.

In general any product will have, as coefficient, a seminvariant which, when expressed by partitions, will have as leading partition (preceding in dictionary order all others) the partition . Now the symbolic expression of the seminvariant can be expanded by the binomial theorem so as to be exhibited as a sum of products of seminvariants of lower degrees, if can be broken up into any two portions

such that , for then

and each portion raised to any power denotes a seminvariant. Stroh assumes that every reducible seminvariant can in this way be reduced. The existence of such a relation, as , necessitates the vanishing of a certain function of the coefficients , and as a consequence one product of these coefficients can be eliminated from the expanding form and no seminvariant, which appears as a coefficient to such a product (which may be the whole or only a part of the complete product, with which the seminvariant is associated), will be capable of reduction.

Ex. gr. for , ; either or will vanish if ; but every term, in the development, is of the form and therefore vanishes; so that none are left to undergo reduction. Therefore every form of degree 2, except of course that one whose weight is zero, is a perpetuant. The generating function is $\frac{z^2}{1-z^2}$.

For , ; the condition is clearly , and since every seminvariant, of proper degree 3, is associated, as coefficient, with a product containing , all such are perpetuants. The general form is $(3^{\lambda+1}2^\mu)$ and the generating function $\frac{z^3}{(1-z^2)(1-z^3)}$.

For , ; the condition is

.

Hence every product of , , , , which contains the product disappears before reduction; this means that every seminvariant, whose partition contains the parts 4, 3, is a perpetuant. The general form of perpetuant is $(4^{\lambda+1}3^{\mu+1}2^\nu)$ and the generating function

$\frac{z^7}{(1-z^2)(1-z^3)(1-z^4)}$.

In general when is even and , the condition is

;

and we can determine the lowest weight of a perpetuant; the degree in the quantities is

.

Again, if is uneven , the condition is

;

and the degree, in the quantities , is


.

Hence the lowest weight of a perpetuant is $2^{\theta-1}-1$, when $\theta$ is $>2$. The generating function is thus

$\frac{z^{2^{\theta-1}-1}}{(1-z^2)(1-z^3)(1-z^4)\ldots(1-z^\theta)}$.

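Expanding the generating function just given (here taken as reconstructed) shows the lowest weights $3,\ 7,\ 15,\ldots$ directly. A brief sketch, with sympy and illustrative names:

```python
# Sketch: expand the perpetuant generating function of degree theta; the first
# non-vanishing coefficient appears at weight 2**(theta-1) - 1.
import sympy as sp

z = sp.symbols('z')

def perpetuant_gf(theta, order=20):
    gf = z**(2**(theta - 1) - 1) / sp.prod([1 - z**k for k in range(2, theta + 1)])
    return sp.Poly(sp.series(gf, z, 0, order).removeO(), z)

for theta in (3, 4, 5):
    poly = perpetuant_gf(theta)
    print(theta, [poly.coeff_monomial(z**w) for w in range(20)])
```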
The actual form of a perpetuant of degree has been shown by MacMahon to be

,


being given any zero or positive integer values.

Simultaneous Seminvariants of two Binary Forms.—Taking the two forms to be

$a_0x^m+ma_1x^{m-1}y+m(m-1)a_2x^{m-2}y^2+\ldots+m!a_my^m$,
$b_0x^n+nb_1x^{n-1}y+n(n-1)b_2x^{n-2}y^2+\ldots+n!b_ny^n$,

every leading coefficient of a simultaneous covariant vanishes by the operation of

$\Omega=a_0\frac{\partial}{\partial a_1}+a_1\frac{\partial}{\partial a_2}+\ldots+a_{m-1}\frac{\partial}{\partial a_m}+b_0\frac{\partial}{\partial b_1}+b_1\frac{\partial}{\partial b_2}+\ldots+b_{n-1}\frac{\partial}{\partial b_n}$.

Observe that we may employ the principle of suffix diminution to obtain from any seminvariant one appertaining to a and a , and that suffix augmentation produces a portion of a higher seminvariant, the degree in each case remaining unaltered. Remark, too, that we are in association with non-unitary symmetric functions of two systems of quantities which will be denoted by partitions in brackets , respectively. Solving the equation

,

by the ordinary theory of linear partial differential equations, we obtain $m+n+1$ independent solutions, of which $m$ appertain to the first form and $n$ to the second; the remaining one is $a_0b_1-a_1b_0$, the leading coefficient of the Jacobian of the two forms. This constitutes an algebraically complete system, and, in terms of its members, all seminvariants can be rationally expressed. A similar theorem holds in the case of any number of binary forms, the mixed seminvariants being derived from the Jacobians of the several pairs of forms. If the seminvariant be of degrees $\theta$, $\phi$ in the coefficients, the forms of orders $m$, $n$ respectively, and the weight $w$, the degree of the covariant in the variables will be $m\theta+n\phi-2w$, an easy generalization of the theorem connected with a single form. The general term of a seminvariant of degrees $\theta$, $\phi$ and weight $w$ will be

$a_0^{\pi_0}a_1^{\pi_1}\ldots a_m^{\pi_m}\,b_0^{\rho_0}b_1^{\rho_1}\ldots b_n^{\rho_n}$, where $\Sigma\pi=\theta$, $\Sigma\rho=\phi$, and $\Sigma s\pi_s+\Sigma s\rho_s=w$.

The number of such terms is the number of partitions of into parts, the part magnitudes, in the two portions, being limited not to exceed and respectively. Denote this number by . The number of linearly independent seminvariants of the given type will then be denoted by

;

and will be given by the coefficient of in

;

that is, by the coefficient of in

;

which preserves its expression when and and and are separately or simultaneously interchanged.

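The enumeration extends to two forms in the obvious way and can again be checked numerically. In the sketch below (no part of the original; the letters $\theta,\ \phi$ for the two partial degrees, $m,\ n$ for the two orders, and all function names are assumptions) the weight is split between a partition confined by the first form and one confined by the second, and the first difference of the count is taken.

```python
# Sketch: counting simultaneous seminvariants of two binary forms of orders m, n
# and partial degrees theta, phi; the count is the first difference of N(w).
from functools import lru_cache

@lru_cache(maxsize=None)
def p(w, parts, limit):
    if w == 0:
        return 1
    if w < 0 or parts == 0 or limit == 0:
        return 0
    return p(w, parts, limit - 1) + p(w - limit, parts - 1, limit)

def N(w, theta, m, phi, n):
    """Pairs of partitions of total weight w, confined by (theta, m) and (phi, n)."""
    if w < 0:
        return 0
    return sum(p(u, theta, m) * p(w - u, phi, n) for u in range(w + 1))

def simultaneous_seminvariants(w, theta, m, phi, n):
    return N(w, theta, m, phi, n) - N(w - 1, theta, m, phi, n)

# unchanged when each degree is interchanged with the corresponding order
print(simultaneous_seminvariants(3, 2, 3, 1, 2),
      simultaneous_seminvariants(3, 3, 2, 2, 1))    # equal
```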
Taking the first generating function, and writing , and for , and respectively, we obtain the coefficient of , that is of , in

;

the unreduced generating function which enumerates the covariants of degrees in the coefficients and order in the variables. Thus, for two linear forms, , we find

,

the positive part of which is

;

establishing the ground forms of degrees-order (1, 0; 1), (0, 1; 1), (1, 1; 0), viz:—the linear forms themselves and their Jacobian $a_0b_1-a_1b_0$. Similarly, for a linear and a quadratic, , , and the reduced form is found to be

,

where the denominator factors indicate the forms themselves, their Jacobian, the invariant of the quadratic and their resultant; connected, as shown by the numerator, by a syzygy of degrees-order (2, 2; 2).

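The statement about the Jacobian can be verified directly for two linear forms: its single coefficient $a_0b_1-a_1b_0$ merely takes up the modulus of the substitution. A sketch with sympy (the substitution letters are assumptions):

```python
# Sketch: the Jacobian a0*b1 - a1*b0 of two linear forms reproduces itself,
# multiplied by the modulus, under a general linear substitution.
import sympy as sp

a0, a1, b0, b1, l1, l2, m1, m2, X, Y = sp.symbols('a0 a1 b0 b1 l1 l2 m1 m2 X Y')

x = l1*X + m1*Y
y = l2*X + m2*Y

fa = sp.expand(a0*x + a1*y)
fb = sp.expand(b0*x + b1*y)

A0, A1 = fa.coeff(X), fa.coeff(Y)        # new coefficients of the first form
B0, B1 = fb.coeff(X), fb.coeff(Y)        # new coefficients of the second form

modulus = l1*m2 - l2*m1
print(sp.expand(A0*B1 - A1*B0 - modulus*(a0*b1 - a1*b0)))   # -> 0
```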
The complete theory of the perpetuants appertaining to two or more forms of infinite order has not yet been established. For two forms the seminvariants of degrees 1, 1 are enumerated by $\frac{1}{1-z}$, and the only one which is reducible is of weight zero; hence the perpetuants of degrees 1, 1 are enumerated by

$\frac{z}{1-z}$;

and the series is evidently

$a_0b_1-a_1b_0$,
$a_0b_2-a_1b_1+a_2b_0$,
$a_0b_3-a_1b_2+a_2b_1-a_3b_0$,

one for each of the weights 1, 2, 3,...ad infin.

For the degrees 1, 2, the asyzygetic forms are enumerated by $\frac{1}{(1-z)(1-z^2)}$, and the actual forms for the first three weights are

,
,
,
,
,
;

amongst these forms are included all the asyzygetic forms of degrees 1, 1, multiplied by $b_0$, and also all the perpetuants of the second binary form multiplied by $a_0$; hence we have to subtract from the generating function $\frac{1}{1-z}$ and $\frac{z^2}{1-z^2}$, and obtain the generating function of perpetuants of degrees 1, 2,

$\frac{1}{(1-z)(1-z^2)}-\frac{1}{1-z}-\frac{z^2}{1-z^2}=\frac{z^3}{(1-z)(1-z^2)}$.

The first perpetuant is the last seminvariant written, viz.:—

,

or, in partition notation,

;

and, in this form, it is at once seen to satisfy the partial differential equation. It is important to notice that the expression

denotes a seminvariant, if be neither of them unity, for, after operation, the terms destroy one another in pairs: when , must be taken to denote and so for . In general it is a seminvariant of degrees , and weight ; for this there is an exception, viz., when , or when , the corresponding partial degrees are 1 and 1. When , we have the general perpetuant of degrees 1, 1. There is a still more general form of the seminvariant; we may have instead of any collections of non-unitary integers not exceeding in magnitude respectively, Ex. gr.





,

is a seminvariant; and since these terms are clearly enumerated by

,

an expression which also enumerates the asyzygetic seminvariants, we may regard the form, written, as denoting the general form of asyzygetic seminvariant; a very important conclusion. For the case in hand, from the simplest perpetuant of degrees 1, 2, we derive the perpetuants of weight ,

,
,
,

a series of or of forms according as is even or uneven. Their number for any weight is the number of ways of composing with the parts 1, 2, and thus the generating function is verified. We cannot, by this method, easily discuss the perpetuants of degrees 2, 2, because a syzygy presents itself as early as weight 2. It is better now to proceed by the method of Stroh.

We have the symbolic expression of a seminvariant.

where

; ;

and .

Proceeding as we did in the case of the single binary form we find that for a given total degree , the condition which expresses reducibility is of total degree in the coefficients and ; combining this with the knowledge of the generating function of asyzygetic forms of degrees , , we find that the perpetuants of these degrees are enumerated by

,

and this is true for as well as for other values of (compare the case of the single binary form).

Observe that, if there be more than two binary forms, the weight of the simplest perpetuant of degrees is , as can be seen by reasoning of a similar kind.

To obtain information concerning the actual forms of the perpetuants, write


where .

For the case , , the condition is

,

which since , is really a condition of weight unity. For the form is , which we may write ; the remaining perpetuants, enumerated by , have been set forth above.

For the case , , the condition is ; and the simplest perpetuant, derived directly from the product , is ; the remainder of those enumerated by may be represented by the form

;

and each assuming all integer (including zero) values. For the case , the condition is

.

To represent the simplest perpetuant, of weight 7, we may take as base either or , and since the former is equivalent to and the latter to ; so that we have, apparently, a choice of four products. gives and , ; these two merely differ in sign; and similarly yields , and that due to merely differs from it in sign. We will choose from the forms in such manner that the product of letters is either a power of , or does not contain ; this rule leaves us with and ; of these forms we will choose that one which in letters is earliest in ascending dictionary order; this is and our earliest perpetuant is

and thence the general form enumerated by the generating function is

For the case , , the condition is

By the rules adopted we take , which gives

the simplest perpetuant of weight 7; and thence the general form enumerated by the generating function

viz:—  

For the case , , the condition is

The calculation results in

By the rules adopted we take , giving the simplest perpetuant of weight 15, viz:—

and thence the general form

due to the generating function

For the case , , the condition is

the calculation gives

Selecting the product , we find the simplest perpetuant

and thence the general form

due to the generating function

The series may be continued, but the calculations soon become very laborious.

V. Restricted Substitutions

We may regard the factors of a binary equated to zero as denoting straight lines through the origin, the co-ordinates being Cartesian and the axes inclined at any angle. Taking the variables to be and effecting the linear transformation


so that

it is seen that the two lines, on which lie , , have a definite projective correspondence. The linear transformation replaces points on lines through the origin by corresponding points on projectively corresponding lines through the origin; it therefore replaces a pencil of lines by another pencil, which corresponds projectively, and harmonic and other properties of pencils which are unaltered by linear transformation we may expect to find indicated in the invariant system. Or, instead of looking upon a linear substitution as replacing a pencil of lines by a projectively corresponding pencil retaining the same axes of co-ordinates, we may look upon the substitution as changing the axes of co-ordinates retaining the same pencil. Then a binary , equated to zero, represents straight lines through the origin, and the of any line through the origin are given constant multiples of the sines of the angles which that line makes with two fixed lines, the axes of co-ordinates. As new axes of co-ordinates we may take any other pair of lines through the origin, and for the corresponding to any new constant multiples of the sines of the angles which the line makes with the new axes. The substitution for in terms of is the most general linear substitution in virtue of the four degrees of arbitrariness introduced, viz. two by the choice of axes, two by the choice of multiples. If now the denote a given pencil of lines, an invariant is the criterion of the pencil possessing some particular property which is independent alike of the axes and of the multiples, and a covariant expresses that the pencil of lines which it denotes is a fixed pencil whatever be the axes or the multiples.

Besides the invariants and covariants, hitherto studied, there are others which appertain to particular cases of the general linear substitution. Thus what have been called seminvariants are not all of them invariants for the general substitution, but are invariants for the particular substitution

$x=x_1+\lambda y_1,\qquad y=y_1.$

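The particular substitution in question is easily tested on the simplest case: the seminvariant $a_0a_2-a_1^2$ of the binary quadratic (written with binomial coefficients) is literally unchanged by it. A sketch with sympy (letters assumed):

```python
# Sketch: a seminvariant is an absolute invariant for the substitution x -> X + t*Y, y -> Y.
import sympy as sp

a0, a1, a2, t, X, Y = sp.symbols('a0 a1 a2 t X Y')

q = sp.expand(a0*(X + t*Y)**2 + 2*a1*(X + t*Y)*Y + a2*Y**2)
A0 = q.coeff(X, 2)
A1 = q.coeff(X, 1).coeff(Y, 1) / 2
A2 = q.coeff(Y, 2)

print(sp.expand(A0*A2 - A1**2 - (a0*a2 - a1**2)))   # -> 0
```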
Again, in plane geometry, the most general equations of substitution which change from old axes inclined at to new axes inclined at , and inclined at angles to the old axis of , without change of origin, are


a transformation of modulus

The theory of invariants originated in the discussion, by George Boole, of this system so important in geometry. Of the quadratic

he discovered the two invariants

and it may be verified that, if the transformed of the quadratic be

The fundamental fact that he discovered was the invariance of , viz.—

from which it appears that the Boolian invariants of are nothing more than the full invariants of the simultaneous quadratics

the word invariant including here covariant. In general the Boolian system, of the general , is coincident with the simultaneous system of the and the quadratic .

Orthogonal System.—In particular, if we consider the transformation from one pair of rectangular axes to another pair of rectangular axes we obtain an orthogonal system which we will now briefly inquire into. We have and the substitution


with modulus unity. This is called the direct orthogonal substitution, because the sense of rotation from the axis of to the axis of is the same as that from that of to that of . If the senses of rotation be opposite we have the skew orthogonal substitution


of modulus $-1$. In both cases and are cogredient with and ; for, in the case of direct substitution,


and for skew substitution


Hence, in both cases, contragrediency and cogrediency are identical, and contravariants are included in covariants. Consider the binary , and the direct substitution


where ; replacing respectively. In the notation

,

observe that

,
.

Suppose that

is transformed into

then of course the fundamental fact which appertains to the theory of the general linear substitution; now here we have additional and equally fundamental facts; for since




showing that, in the present theory, and possess the invariant property. Since we have six types of symbolic factors which may be used to form invariants and covariants, viz.—

The general form of covariant is therefore



If this be of order and appertain to an




viz., the symbols a, b, c,... must each occur n times. It may denote a simultaneous orthogonal invariant of forms of orders ; the symbols must then present themselves times respectively. The number of different symbols denotes the degree of the covariant in the coefficients. The coefficients of the covariants are homogeneous, but not in general isobaric functions, of the coefficients of the original form or forms. Of the above general form of covariant there are important transformations due to the symbolic identities:—

as a consequence any even power of a determinant factor may be expressed in terms of the other symbolic factors, and any uneven power may be expressed as the product of its first power and a function of the other symbolic factors. Hence in the above general form of covariant we may suppose the exponents

of the determinant factors to be, each of them, either zero or unity. Or, if we please, we may leave the determinant factors untouched and consider the exponents to be, each of them, either zero or unity. Or, lastly, we may leave the exponents h, k, j, l,... untouched and consider the product

to be reduced either to the form where g is a symbol of the series or to a power of To assist us in handling the symbolic products we have not only the identity

but also


and many others which may be derived from these in the manner which will be familiar to students of the works of Aronhold, Clebsch and Gordan. Previous to continuing the general discussion it is useful to have before us the orthogonal invariants and covariants of the binary linear and quadratic forms.

For the linear forms there are four fundamental forms

(i.)  of degree-order (1, 1),
(ii.)  ”    (0, 2),
(iii.)  ”    (1, 1),
(iv.)  ”    (2, 0),

(iii.) and (iv.) being the linear covariant and the quadrinvariant respectively. Every other concomitant is a rational integral function of these four forms. The linear covariant, obviously the Jacobian of and is the line perpendicular to and the vanishing of the quadrinvariant is the condition that passes through one of the circular points at infinity. In general any pencil of lines, connected with the line by descriptive or metrical properties, has for its equation a rational integral function of the four forms equated to zero.

For the quadratic we have

(i.) 
(ii.) 
(iii.) 
(iv.) 
(v.) 

This is the fundamental system; we may, if we choose, replace by since the identity shows the syzygetic relation

There is no linear covariant, since it is impossible to form a symbolic product which will contain once and at the same time appertain to a quadratic. (v.) is the Jacobian; geometrically it denotes the bisectors of the angles between the lines or, as we may say, the common harmonic conjugates of the lines and the lines The linear invariant is such that, when equated to zero, it determines the lines as harmonically conjugate to the lines or, in other words, it is the condition that may denote lines at right angles.

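As a closing illustration (not in the original), the orthogonal invariants of the quadratic just described can be checked with sympy: under a rotation of rectangular axes both $a+b$ and $ab-h^2$ are reproduced unchanged. The substitution letters are assumptions.

```python
# Sketch: orthogonal invariance of a + b and a*b - h^2 for a*x^2 + 2*h*x*y + b*y^2.
import sympy as sp

a, b, h, t, X, Y = sp.symbols('a b h theta X Y')

x = sp.cos(t)*X - sp.sin(t)*Y      # direct orthogonal substitution (rotation)
y = sp.sin(t)*X + sp.cos(t)*Y

q = sp.expand(a*x**2 + 2*h*x*y + b*y**2)
A = q.coeff(X, 2)
B = q.coeff(Y, 2)
H = q.coeff(X, 1).coeff(Y, 1) / 2

print(sp.simplify(A + B - (a + b)))              # -> 0 (linear invariant)
print(sp.simplify(A*B - H**2 - (a*b - h**2)))    # -> 0 (discriminant)
```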
References.—Cayley, “Memoirs on Quantics,” in the Collected Mathematical Papers (Cambridge, 1898); Salmon, Lessons Introductory to the Modern Higher Algebra (Dublin, 1885); E. B. Elliott, Algebra of Quantics (Oxford, 1895); F. Brioschi, Teoria dei Covarianti (Rome, 1861); W. Fiedler, Die Elemente der neueren Geometrie und der Algebra der binären Formen (Leipzig, 1862); A. Clebsch, Theorie der binären Algebraischen Formen (Leipzig, 1872); Vorlesungen über Geometrie (Leipzig, 1875); Faà de Bruno, Théorie des formes binaires (Turin, 1876); P. Gordan, Vorlesungen über Invariantentheorie, Bd. i. “Determinanten” (Leipzig, 1885); Bd. ii. “Binäre Formen” (Leipzig, 1887); G. Rubini, Teoria delle forme in generale, e specialmente delle binarie (Leue, 1886); E. Study, Methoden zur Theorie der Ternären Formen (Leipzig, 1889); Lie, Theorie der Transformationsgruppen (Leipzig, 1888–1890); Franz Meyer, Bericht über den gegenwärtigen Stand der Invariantentheorie, Jahresbericht der Deutschen Mathematiker-Vereinigung, Bd. i. (Berlin, 1892); Encyklopädie der mathematischen Wissenschaften, Bd. i., Heft 3, 4, by Heinrich Burkhardt and Franz Meyer (Leipzig, 1899); J. H. Grace and A. Young, The Algebra of Invariants (Cambridge, 1903).  (P. A. M.) 


  1. The elementary theory is given in the article Determinant.
  2. Vienna Transactions, t. iv. 1852.
  3. Phil. Trans., 1890, p. 490.
  4. The weight of a term $a_0^{k_0}a_1^{k_1}\ldots a_n^{k_n}$ is defined as being $k_1+2k_2+\ldots+nk_n$.