# 1911 Encyclopædia Britannica/Differential Equation

**DIFFERENTIAL EQUATION,** in mathematics, a relation between one or more functions and their differential coefficients. The subject is treated here in two parts: (1) an elementary introduction dealing with the more commonly recognized types of differential equations which can be solved by rule; and (2) the general theory.

*Part I.—Elementary Introduction.*

Of equations involving only one independent variable, *x* (known as *ordinary* differential equations), and one dependent variable, *y*, and containing only the first differential coefficient *dy*/*dx* (and therefore said to be of the first *order*), the simplest form is that reducible to the type

*dy*/*dx* = ƒ(*x*)/F(*y*),

leading to the result ∫F(*y*)*dy* − ∫ƒ(*x*)*dx* = A, where A is an arbitrary constant; this result is said to solve the differential equation, the problem of evaluating the integrals belonging to the integral calculus.
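The rule may be checked with a modern computer-algebra system; the following sketch (not part of the original article) takes the illustrative choice ƒ(*x*) = *x*, F(*y*) = *y*, and uses the sympy library.

```python
import sympy as sp

x = sp.symbols('x')
y = sp.Function('y')

# dy/dx = f(x)/F(y) with the illustrative choice f(x) = x, F(y) = y;
# separating variables gives y^2/2 - x^2/2 = A, a one-parameter family
ode = sp.Eq(y(x).diff(x), x / y(x))
sols = sp.dsolve(ode, y(x))
if not isinstance(sols, list):
    sols = [sols]

# confirm each branch returned by dsolve satisfies the equation
checks = [sp.checkodesol(ode, s)[0] for s in sols]
```

Each branch is of the form *y* = ±√(C + *x*²), agreeing with the separated integral above.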

Another simple form is

*dy*/*dx* + *y*P = Q,

where P, Q are functions of *x* only; this is known as the linear equation, since it contains *y* and *dy*/*dx* only to the first degree. If ∫P*dx* = *u*, we clearly have

*d*(*ye*^{u})/*dx* = *e*^{u}(*dy*/*dx* + *y*P) = Q*e*^{u},

so that *y* = *e*^{-u}(∫*e*^{u}Q*dx* + A) solves the equation, and is the only possible solution, A being an arbitrary constant. The rule for the solution of the linear equation is thus to multiply the equation by *e*^{u}, where *u* = ∫P*dx*.
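The rule for the linear equation may likewise be verified symbolically; the sketch below (a modern aid, with P = 1 and Q = *x* chosen for illustration) builds the solution exactly as the rule prescribes.

```python
import sympy as sp

x, A = sp.symbols('x A')

# dy/dx + yP = Q with the illustrative choice P = 1, Q = x
P, Q = sp.Integer(1), x

# the rule: multiply by e^u where u = integral of P dx
u = sp.integrate(P, x)                                      # u = x
ysol = sp.exp(-u) * (sp.integrate(sp.exp(u) * Q, x) + A)    # y = e^{-u}(∫e^u Q dx + A)

# the constructed y satisfies the equation identically
residual = sp.simplify(ysol.diff(x) + ysol * P - Q)
```

Here the solution reduces to *y* = *x* − 1 + A*e*^{−x}.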

A third simple and important form is that denoted by

*y* = *px* + ƒ(*p*),

where *p* is an abbreviation for *dy*/*dx*; this is known as Clairaut’s form. By differentiation in regard to *x* it gives

*p* = *p* + [*x* + ƒ′(*p*)]*dp*/*dx*,

whence

[*x* + ƒ′(*p*)]*dp*/*dx* = 0;

thus, either (*i*.) *dp*/*dx* = 0, that is, *p* is constant on the curve satisfying the differential equation, which curve is thus any one of the straight lines *y* = *cx* + ƒ(*c*), where *c* is an arbitrary constant, or else, (ii.) *x* + ƒ′(*p*) = 0; if this latter hypothesis be taken, and *p* be eliminated between *x* + ƒ′(*p*) = 0 and *y* = *px* + ƒ(*p*), a relation connecting *x* and *y*, not containing an arbitrary constant, will be found, which obviously represents the envelope of the straight lines *y* = *cx* + ƒ(*c*).

In general if a differential equation φ(*x*, *y*, *dy*/*dx*) = 0 be satisfied by any one of the curves F(*x*, *y*, *c*) = 0, where *c* is an arbitrary constant, it is clear that the envelope of these curves, when existent, must also satisfy the differential equation; for this equation prescribes a relation connecting only the co-ordinates *x*, *y* and the differential coefficient *dy*/*dx*, and these three quantities are the same at any point of the envelope for the envelope and for the particular curve of the family which there touches the envelope. The relation expressing the equation of the envelope is called a *singular* solution of the differential equation, meaning an *isolated* solution, as not being one of a family of curves depending upon an arbitrary parameter.
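For a concrete instance (chosen for illustration, with a modern symbolic check not in the original article), take ƒ(*p*) = *p*² in Clairaut’s form; elimination of *p* between *x* + 2*p* = 0 and *y* = *px* + *p*² gives the envelope *y* = −*x*²/4.

```python
import sympy as sp

x, c = sp.symbols('x c')

# Clairaut's form y = px + f(p) with the illustrative choice f(p) = p^2
f = lambda p: p**2

# (i) each straight line y = cx + f(c) satisfies y = px + f(p) with p = dy/dx
line = c*x + f(c)
p_line = line.diff(x)                                  # p = c on the line
res_line = sp.simplify(p_line*x + f(p_line) - line)

# (ii) eliminating p between x + f'(p) = 0 and y = px + f(p)
# gives x = -2p, hence the envelope y = -x^2/4, the singular solution
envelope = -x**2/4
p_env = envelope.diff(x)                               # p = -x/2 on the envelope
res_env = sp.simplify(p_env*x + f(p_env) - envelope)
```

Both residuals vanish: the family of lines and their envelope satisfy the same equation, though the envelope contains no arbitrary constant.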

An extended form of Clairaut’s equation expressed by

*y* = *x*F(*p*) + ƒ(*p*)

may be similarly solved by first differentiating in regard to *p*, when it reduces to a linear equation of which *x* is the dependent and *p* the independent variable; from the integral of this linear equation, and the original differential equation, the quantity *p* is then to be eliminated.

Other types of solvable differential equations of the first order are (1)

M *dy*/*dx* = N,

where M, N are homogeneous polynomials in *x* and *y*, of the same order; by putting *v* = *y*/*x* and eliminating *y*, the equation becomes of the first type considered above, in *v* and *x*. An equation (*a*B ≷ *b*A)

(*ax* + *by* + *c*)*dy*/*dx* = A*x* + B*y* + C

may be reduced to this rule by first putting *x* + *h*, *y* + *k* for *x* and *y*, and determining *h*, *k* so that *ah* + *bk* + *c* = 0, A*h* + B*k* + C = 0.
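A small illustration of the homogeneous type (the particular polynomials are an assumption for the example; the symbolic check is a modern aid):

```python
import sympy as sp

x = sp.symbols('x', positive=True)
y = sp.Function('y')

# M dy/dx = N with M = x, N = x + y, homogeneous of the same order;
# putting v = y/x reduces it to the separable equation x dv/dx = 1
ode = sp.Eq(x * y(x).diff(x), x + y(x))
sol = sp.dsolve(ode, y(x))        # y = x*(C1 + log(x))
ok, _ = sp.checkodesol(ode, sol)
```

The solution *y* = *x*(C + log *x*) exhibits the single arbitrary constant expected of a first-order equation.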

(2) An equation in which *y* does not explicitly occur,

ƒ(*x*, *dy*/*dx*) = 0,

may, theoretically, be reduced to the type *dy*/*dx* = F(*x*); similarly an equation F(*y*, *dy*/*dx*) = 0.

(3) An equation

ƒ(*dy*/*dx*, *x*, *y*) = 0,

which is an integral polynomial in *dy*/*dx*, may, theoretically, be solved for *dy*/*dx*, as an algebraic equation; to any root *dy*/*dx* = F_{1}(*x*, *y*) corresponds, suppose, a solution φ_{1}(*x*, *y*, *c*) = 0, where *c* is an arbitrary constant; the product equation φ_{1}(*x*, *y*, *c*)φ_{2}(*x*, *y*, *c*) ... = 0, consisting of as many factors as there were values of *dy*/*dx*, is effectively as general as if we wrote φ_{1}(*x*, *y*, *c*_{1})φ_{2}(*x*, *y*, *c*_{2}) ... = 0; for, to evaluate the first form, we must necessarily consider the factors separately, and nothing is then gained by the multiple notation for the various arbitrary constants. The equation φ_{1}(*x*, *y*, *c*)φ_{2}(*x*, *y*, *c*) ... = 0 is thus the solution of the given differential equation.
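As a worked instance of this last type (the particular polynomial being chosen for illustration):

```latex
% take f(p, x, y) = p^2 - (x + e^x)p + x e^x = 0, where p = dy/dx; it factors as
\left(p - x\right)\left(p - e^{x}\right) = 0 ;
% the root p = x gives y = x^2/2 + c, and the root p = e^x gives y = e^x + c,
% so the solution of the differential equation is the product equation
\left(y - \tfrac{1}{2}x^{2} - c\right)\left(y - e^{x} - c\right) = 0 ,
% a single arbitrary constant c sufficing, as explained in the text.
```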

In all these cases there is, except for cases of singular solutions, one and only one arbitrary constant in the most general solution of the differential equation; that this must necessarily be so we may take as obvious, the differential equation being supposed to arise by elimination of this constant from the equation expressing its solution and the equation obtainable from this by differentiation in regard to *x*.

A further type of differential equation of the first order, of the form

*dy*/*dx* = A + B*y* + C*y*²

in which A, B, C are functions of *x*, will be briefly considered below under differential equations of the second order.

When we pass to ordinary differential equations of the second order, that is, those expressing a relation between *x*, *y*, *dy*/*dx* and *d*²*y*/*dx*², the number of types for which the solution can be found by a known procedure is very considerably reduced. Consider the general linear equation

*d*²*y*/*dx*² + P*dy*/*dx* + Q*y* = R,

where P, Q, R are functions of *x* only. There is no method always effective; the main general result for such a linear equation is that if any particular function of *x*, say *y*_{1}, can be discovered, for which

*d*²*y*_{1}/*dx*² + P*dy*_{1}/*dx* + Q*y*_{1} = 0,

then the substitution *y* = *y*_{1}η in the original equation, with R on the right side, reduces this to a linear equation of the first order with the dependent variable *d*η/*dx*. In fact, if *y* = *y*_{1}η we have

*dy*/*dx* = *y*_{1}*d*η/*dx* + η*dy*_{1}/*dx*,

and

*d*²*y*/*dx*² = *y*_{1}*d*²η/*dx*² + 2(*dy*_{1}/*dx*)(*d*η/*dx*) + η*d*²*y*_{1}/*dx*²,

and thus

*d*²*y*/*dx*² + P*dy*/*dx* + Q*y* = *y*_{1}*d*²η/*dx*² + (2*dy*_{1}/*dx* + P*y*_{1})*d*η/*dx* + η(*d*²*y*_{1}/*dx*² + P*dy*_{1}/*dx* + Q*y*_{1});

if then

*d*²*y*_{1}/*dx*² + P*dy*_{1}/*dx* + Q*y*_{1} = 0,

and *z* denote *d*η/*dx*, the original differential equation becomes

*y*_{1}*dz*/*dx* + (2*dy*_{1}/*dx* + P*y*_{1})*z* = R.

From this equation *z* can be found by the rule given above for the linear equation of the first order, and will involve one arbitrary constant; thence *y* = *y*_{1}η = *y*_{1} ∫ *zdx* + A*y*_{1}, where A is another arbitrary constant, will be the general solution of the original equation, and, as was to be expected, involves two arbitrary constants.
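The reduction may be illustrated thus (the coefficients and the known solution *y*_{1} are assumptions for the example; the symbolic check is a modern aid): for *y*″ − 2*y*′ + *y* = 0 the function *y*_{1} = *e*^{x} is a particular solution, and the auxiliary equation for *z* becomes *e*^{x}*dz*/*dx* = 0, so that η = B*x* + A.

```python
import sympy as sp

x, A, B = sp.symbols('x A B')

# y'' + P y' + Q y = 0 with P = -2, Q = 1; y1 = e^x satisfies it
P, Q = -2, 1
y1 = sp.exp(x)

# y = y1*eta, z = d(eta)/dx gives y1 z' + (2 y1' + P y1) z = 0;
# here 2 y1' + P y1 = 0, so z' = 0, z = B, eta = B x + A
ysol = (A + B*x) * y1

# confirm the resulting two-constant family solves the equation
residual = sp.simplify(ysol.diff(x, 2) + P*ysol.diff(x) + Q*ysol)
```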

The case of most frequent occurrence is that in which the coefficients P, Q are constants; we consider this case in some detail. If θ be a root of the quadratic equation θ² + θP + Q = 0, it can be at once seen that a particular integral of the differential equation with zero on the right side is *y*_{1} = *e*^{θx}. Supposing first the roots of the quadratic equation to be different, and φ to be the other root, so that φ + θ = -P, the auxiliary differential equation for *z*, referred to above, becomes *dz*/*dx* + (θ − φ)*z* = R*e*^{-θx}, which leads to *ze*^{(θ-φ)x} = B + ∫ R*e*^{-φx}*dx*, where B is an arbitrary constant, and hence to

*y* = A*e*^{θx} + *e*^{θx} ∫ *e*^{(φ-θ)x} [B + ∫ R*e*^{-φx}*dx*] *dx*,

or say to *y* = A*e*^{θx} + C*e*^{φx} + U, where A, C are arbitrary constants and U is a function of *x*, not present at all when R = 0. If the quadratic equation θ² + Pθ + Q = 0 has equal roots, so that 2θ = -P, the auxiliary equation in *z* becomes *dz*/*dx* = R*e*^{-θx}, giving *z* = B + ∫ R*e*^{-θx}*dx*, where B is an arbitrary constant, and hence

*y* = A*e*^{θx} + *e*^{θx} ∫ [B + ∫ R*e*^{-θx}*dx*] *dx*,

or, say, *y* = (A + B*x*)*e*^{θx} + U, where A, B are arbitrary constants, and U is a function of *x* not present at all when R = 0. The portion A*e*^{θx} + C*e*^{φx} or (A + B*x*)*e*^{θx} of the solution, which is known as the *complementary function*, can clearly be written down at once by inspection of the given differential equation. The remaining portion U may, by taking the constants in the complementary function properly, be replaced by any particular solution whatever of the differential equation

*d*²*y*/*dx*² + P*dy*/*dx* + Q*y* = R;

for if *u* be any particular solution, this has a form

*u* = A_{0}*e*^{θx} + C_{0}*e*^{φx} + U,

or a form

*u* = (A_{0} + B_{0}*x*)*e*^{θx} + U;

thus the general solution can be written

*y* = (A − A_{0})*e*^{θx} + (C − C_{0})*e*^{φx} + *u*,

or

*y* = [A − A_{0} + (B − B_{0})*x*]*e*^{θx} + *u*,

where A − A_{0}, B − B_{0}, like A, B, are arbitrary constants.
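For a concrete case (the coefficients and right side are assumptions for the example; the check uses the modern sympy library): with P = 3, Q = 2, R = *e*^{x}, the quadratic θ² + 3θ + 2 = 0 has roots −1, −2, so the complementary function is A*e*^{−x} + C*e*^{−2x}, and *e*^{x}/6 serves as a particular integral.

```python
import sympy as sp

x = sp.symbols('x')
y = sp.Function('y')

# y'' + 3y' + 2y = e^x; roots of theta^2 + 3 theta + 2 = 0 are -1, -2
ode = sp.Eq(y(x).diff(x, 2) + 3*y(x).diff(x) + 2*y(x), sp.exp(x))
sol = sp.dsolve(ode, y(x))
ok, _ = sp.checkodesol(ode, sol)

# e^x/6 is a particular integral: substituting gives (1 + 3 + 2)/6 e^x = e^x
u = sp.exp(x)/6
particular_ok = sp.simplify(u.diff(x, 2) + 3*u.diff(x) + 2*u - sp.exp(x)) == 0
```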

A similar result holds for a linear differential equation of any order, say

*d*^{n}*y*/*dx*^{n} + P_{1}*d*^{n-1}*y*/*dx*^{n-1} + ... + P_{n}*y* = R,

where P_{1}, P_{2}, ... P_{n} are constants, and R is a function of *x*. If we form the algebraic equation θ^{n} + P_{1}θ^{n-1} + ... + P_{n} = 0, and all the roots of this equation be different, say they are θ_{1}, θ_{2}, ... θ_{n}, the general solution of the differential equation is

*y* = A_{1}*e*^{θ_{1}x} + A_{2}*e*^{θ_{2}x} + ... + A_{n}*e*^{θ_{n}x} + *u*,

where A_{1}, A_{2}, ... A_{n} are arbitrary constants, and *u* is any particular solution whatever; but if there be one root θ_{1} repeated *r* times, the terms A_{1}*e*^{θ_{1}x} + ... + A_{r}*e*^{θ_{r}x} must be replaced by (A_{1} + A_{2}*x* + ... + A_{r}*x*^{r-1})*e*^{θ_{1}x}, where A_{1}, ... A_{n} are arbitrary constants; the remaining terms in the complementary function will similarly need alteration of form if there be other repeated roots.

To complete the solution of the differential equation we need some method of determining a particular integral *u*; we explain a procedure which is effective for this purpose in the cases in which R is a sum of terms of the form *e*^{ax}φ(*x*), where φ(*x*) is an integral polynomial in *x*; this includes cases in which R contains terms of the form cos *bx*·φ(*x*) or sin *bx*·φ(*x*). Denote *d*/*dx* by D; it is clear that if *u* be any function of *x*, D(*e*^{ax}*u*) = *e*^{ax}D*u* + *ae*^{ax}*u*, or say, D(*e*^{ax}*u*) = *e*^{ax}(D + *a*)*u*; hence D²(*e*^{ax}*u*), *i.e.* *d*²/*dx*² (*e*^{ax}*u*), being equal to D(*e*^{ax}*v*), where *v* = (D + *a*)*u*, is equal to *e*^{ax}(D + *a*)*v*, that is to *e*^{ax}(D + *a*)²*u*. In this way we find D^{n}(*e*^{ax}*u*) = *e*^{ax}(D + *a*)^{n}*u*, where *n* is any positive integer. Hence if ψ(D) be any polynomial in D with constant coefficients, ψ(D) (*e*^{ax}*u*) = *e*^{ax}ψ(D + *a*)*u*. Next, denoting ∫ *udx* by D^{-1}*u*, and any solution of the differential equation *dz*/*dx* + *az* = *u* by *z* = (D + *a*)^{-1}*u*, we have D[*e*^{ax}(D + *a*)^{-1}*u*] = D(*e*^{ax}*z*) = *e*^{ax}(D + *a*)*z* = *e*^{ax}*u*, so that we may write D^{-1}(*e*^{ax}*u*) = *e*^{ax}(D + *a*)^{-1}*u*, where the meaning is that one value of the left side is equal to one value of the right side; from this, the expression D^{-2}(*e*^{ax}*u*), which means D^{-1}[D^{-1}(*e*^{ax}*u*)], is equal to D^{-1}(*e*^{ax}*z*) and hence to *e*^{ax}(D + *a*)^{-1}*z*, which we write *e*^{ax}(D + *a*)^{-2}*u*; proceeding thus we obtain

D^{-n}(*e*^{ax}*u*) = *e*^{ax}(D + *a*)^{-n}*u*,

where *n* is any positive integer, and the meaning, as before, is that one value of the first expression is equal to one value of the second. More generally, if ψ(D) be any polynomial in D with constant coefficients, and we agree to denote by [1/ψ(D)]*u* any solution *z* of the differential equation ψ(D)*z* = *u*, we have, if *v* = [1/ψ(D + *a*)]*u*, the identity ψ(D)(*e*^{ax}*v*) = *e*^{ax}ψ(D + *a*)*v* = *e*^{ax}*u*, which we write in the form

[1/ψ(D)](*e*^{ax}*u*) = *e*^{ax}[1/ψ(D + *a*)]*u*.

This gives us the first step in the method we are explaining, namely that a solution of the differential equation ψ(D)*y* = *e*^{ax}*u* + *e*^{bx}*v* + ... where *u*, *v*, ... are any functions of *x*, is any function denoted by the expression

*e*^{ax}[1/ψ(D + *a*)]*u* + *e*^{bx}[1/ψ(D + *b*)]*v* + ....

It is now to be shown how to obtain one value of [1/ψ(D + *a*)]*u*, when *u* is a polynomial in *x*, namely one solution of the differential equation ψ(D + *a*)*z* = *u*. Let the highest power of *x* entering in *u* be *x*^{m}; if *t* were a variable quantity, the rational fraction in *t*, 1/ψ(*t* + *a*), by first writing it as a sum of partial fractions, or otherwise, could be identically written in the form

K_{r}*t*^{-r} + K_{r-1}*t*^{-r+1} + ... + K_{1}*t*^{-1} + H + H_{1}*t* + ... + H_{m}*t*^{m} + *t*^{m+1}φ(*t*)/ψ(*t* + *a*),

where φ(*t*) is a polynomial in *t*; this shows that there exists an identity of the form

1 = ψ(*t* + *a*)(K_{r}*t*^{−r} + ... + K_{1}*t*^{−1} + H + H_{1}*t* + ... + H_{m}*t*^{m}) + φ(*t*)t^{m+1},

and hence an identity

*u* = ψ(D + *a*) [K_{r}D^{−r} + ... + K_{1}D^{−1} + H + H_{1}D + ... + H_{m}D^{m}] *u* + φ(D) D^{m+1}*u*;

in this, since *u* contains no power of *x* higher than *x*^{m}, the second term on the right may be omitted. We thus reach the conclusion that a solution of the differential equation ψ(D + *a*)z = *u* is given by

*z* = (K_{r}D^{−r} + ... + K_{1}D^{−1} + H + H_{1}D + ... + H_{m}D^{m})*u*,

of which the operator on the right is obtained simply by expanding 1/ψ(D + *a*) in ascending powers of D, as if D were a numerical quantity, the expansion being carried as far as the highest power of D which, operating upon *u*, does not give zero. In this form every term in *z* is capable of immediate calculation.
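The expansion rule may be illustrated on a small case (the operator ψ and the right side are assumptions for the example; the check is a modern aid): for ψ(D) = D² + 3D + 2 and right side *xe*^{x}, one value of [1/ψ(D + 1)]*x* comes from expanding 1/(D² + 5D + 6) = 1/6 − 5D/36 + ... as far as D¹, since *u* = *x*.

```python
import sympy as sp

x = sp.symbols('x')

# psi(D) y = e^{ax} u with psi(D) = D^2 + 3D + 2, a = 1, u = x;
# 1/psi(D+1) = 1/(D^2 + 5D + 6) = 1/6 - (5/36)D + ...  (powers beyond D
# annihilate u = x), so z = x/6 - 5/36 and y = e^x (x/6 - 5/36)
u = x
z = u/6 - sp.Rational(5, 36) * u.diff(x)
ypart = sp.exp(x) * z

# confirm the particular integral: y'' + 3y' + 2y = x e^x
residual = sp.simplify(ypart.diff(x, 2) + 3*ypart.diff(x) + 2*ypart - x*sp.exp(x))
```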

*Example.*—For the equation

*d*⁴*y*/*dx*⁴ + 2*d*²*y*/*dx*² + *y* = *x*³ cos *x*,

or

(D² + 1)²*y* = *x*³ cos *x*,
the roots of the associated algebraic equation (θ² + 1)² = 0 are θ = ±*i*, each repeated; the complementary function is thus

(A + B*x*)e^{ix} + (C + D*x*)e^{−ix},

where A, B, C, D are arbitrary constants; this is the same as

(H + K*x*) cos *x* + (M + N*x*) sin *x*,

where H, K, M, N are arbitrary constants. To obtain a particular integral we must find a value of (1 + D²)^{−2}*x*³ cos *x*; this is the real part of (1 + D²)^{−2}*e*^{ix}*x*³ and hence of *e*^{ix} [1 + (D + *i*)²]^{−2} *x*³
or

*e*^{ix} [2*i*D(1 − ½*i*D)]^{−2} *x*³,

or

−¼*e*^{ix} D^{−2} (1 + *i*D − ¾D² − ½*i*D³ + ^{5}⁄_{16}D^{4} + ^{3}⁄_{16}*i*D^{5} ...)*x*³,

or

−¼*e*^{ix} (^{1}⁄_{20}*x*^{5} + ¼ix^{4} − ¾*x*³ − ^{3}⁄_{2} ix² + ^{15}⁄_{8} *x* + ^{9}⁄_{8} *i*);

the real part of this is

−¼ (^{1}⁄_{20} *x*^{5} − ¾*x*³ + ^{15}⁄_{8}*x*) cos *x* + ¼ (¼*x*^{4} − ^{3}⁄_{2}*x*² + ^{9}⁄_{8}) sin *x*.

This expression added to the complementary function found above gives the complete integral; and no generality is lost by omitting from the particular integral the terms −^{15}⁄_{32}*x* cos *x* + ^{9}⁄_{32} sin *x*, which are of the types of terms already occurring in the complementary function.
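One may check directly, by a modern symbolic computation (not part of the article), that the expression −¼(*x*⁵/20 − ¾*x*³ + ^{15}⁄_{8}*x*) cos *x* + ¼(¼*x*⁴ − ^{3}⁄_{2}*x*² + ^{9}⁄_{8}) sin *x* does satisfy (D² + 1)²*y* = *x*³ cos *x*.

```python
import sympy as sp

x = sp.symbols('x')

# the particular integral of (D^2 + 1)^2 y = x^3 cos x
u = (-sp.Rational(1, 4) * (x**5/20 - sp.Rational(3, 4)*x**3 + sp.Rational(15, 8)*x) * sp.cos(x)
     + sp.Rational(1, 4) * (x**4/4 - sp.Rational(3, 2)*x**2 + sp.Rational(9, 8)) * sp.sin(x))

# (D^2 + 1)^2 = D^4 + 2 D^2 + 1; the residual should vanish identically
residual = sp.simplify(u.diff(x, 4) + 2*u.diff(x, 2) + u - x**3*sp.cos(x))
```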

The symbolical method which has been explained has wider applications than that to which we have, for simplicity of explanation, restricted it. For example, if ψ(*x*) be any function of *x*, and *a*_{1}, *a*_{2}, ... *a*_{n} be different constants, and [(*t* + *a*_{1}) (*t* + *a*_{2}) ... (*t* + *a*_{n})]^{−1} when expressed in partial fractions be written Σ*c*_{m}(*t* + *a*_{m})^{−1}, a particular integral of the differential equation (D + *a*_{1})(D + *a*_{2}) ... (D + *a*_{n})*y* = ψ(*x*) is given by

*y* = Σ*c*_{m}(D + *a*_{m})^{−1}ψ(*x*) = Σ*c*_{m}(D + *a*_{m})^{−1}*e*^{−a_{m}x}*e*^{a_{m}x}ψ(*x*) = Σ*c*_{m}*e*^{−a_{m}x}D^{−1}(*e*^{a_{m}x}ψ(*x*)) = Σ*c*_{m}*e*^{−a_{m}x} ∫ *e*^{a_{m}x}ψ(*x*)*dx*.

The particular integral is thus expressed as a sum of *n* integrals. A linear differential equation of which the left side has the form

*x*^{n}*d*^{n}*y*/*dx*^{n} + P_{1}*x*^{n-1}*d*^{n-1}*y*/*dx*^{n-1} + ... + P_{n-1}*x* *dy*/*dx* + P_{n}*y*,

where P_{1}, ... P_{n} are constants, can be reduced to the case considered above. Writing *x* = *e*^{t} we have the identity

*x*^{r}*d*^{r}*y*/*dx*^{r} = θ(θ − 1)(θ − 2) ... (θ − *r* + 1)*y*,

where θ = *d*/*dt*.
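For instance (the equation is an illustrative assumption; the check is a modern aid), for *x*²*y*″ + *xy*′ − *y* = 0 the substitution *x* = *e*^{t} gives θ(θ − 1)*y* + θ*y* − *y* = (θ² − 1)*y* = 0, whence *y* = A*e*^{t} + B*e*^{−t} = A*x* + B/*x*.

```python
import sympy as sp

x = sp.symbols('x', positive=True)
y = sp.Function('y')

# x^2 y'' + x y' - y = 0: with x = e^t the operator becomes theta^2 - 1,
# theta = d/dt, so the general solution is A x + B/x
ode = sp.Eq(x**2*y(x).diff(x, 2) + x*y(x).diff(x) - y(x), 0)
sol = sp.dsolve(ode, y(x))
ok, _ = sp.checkodesol(ode, sol)
```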

When the linear differential equation, which we take to be of the second order, has variable coefficients, though there is no general rule for obtaining a solution in finite terms, there are some results which it is of advantage to have in mind. We have seen that if one solution of the equation obtained by putting the right side zero, say *y*_{1}, be known, the equation can be solved. If *y*_{2} be another solution of

*d*²*y*/*dx*² + P*dy*/*dx* + Q*y* = 0,

there being no relation of the form *my*_{1} + *ny*_{2} = *k*, where *m*, *n*, *k* are constants, it is easy to see that

*d*(*y*_{1}′*y*_{2} − *y*_{1}*y*_{2}′)/*dx* + P(*y*_{1}′*y*_{2} − *y*_{1}*y*_{2}′) = 0,

so that we have

*y*_{1}′*y*_{2} − *y*_{1}*y*_{2}′ = A exp. (−∫ P*dx*),

where A is a suitably chosen constant, and exp. *z* denotes *e*^{z}. In terms of the two solutions *y*_{1}, *y*_{2} of the differential equation having zero on the right side, the general solution of the equation with R = φ(*x*) on the right side can at once be verified to be A*y*_{1} + B*y*_{2} + *y*_{1}*u* − *y*_{2}*v*, where *u*, *v* respectively denote the integrals

*u* = ∫ *y*_{2}φ(*x*) (*y*_{1}′*y*_{2} − *y*_{2}′*y*_{1})^{−1}*dx*, *v* = ∫ *y*_{1}φ(*x*) (*y*_{1}′*y*_{2} − *y*_{2}′*y*_{1})^{−1}*dx*.

The equation

*d*²*y*/*dx*² + P*dy*/*dx* + Q*y* = 0,

by writing *y* = *v* exp. (-½ ∫ P*dx*), is at once seen to be reduced to *d*²*v*/*dx*² + I*v* = 0, where I = Q − ½*d*P/*dx* − ¼P². If η = − (1/*v*) *dv*/*dx*, the equation *d*²*v*/*dx*² + I*v* = 0 becomes *d*η/*dx* = I + η², a non-linear equation of the first order.
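The reduction to the form *d*²*v*/*dx*² + I*v* = 0 may be illustrated on a concrete equation (chosen for the example; the symbolic check is a modern aid): with P = 2/*x*, Q = 1 we have exp.(−½∫P*dx*) = 1/*x* and I = 1 + 1/*x*² − 1/*x*² = 1, so *y* = *v*/*x* should reduce the equation to *v*″ + *v* = 0.

```python
import sympy as sp

x = sp.symbols('x', positive=True)
v = sp.Function('v')

# y'' + P y' + Q y with P = 2/x, Q = 1; substitute y = v * exp(-1/2 ∫P dx) = v/x
P, Q = 2/x, sp.Integer(1)
yexpr = v(x) / x

# multiplying back by x should leave exactly v'' + I v with I = 1
lhs = sp.expand(x * (yexpr.diff(x, 2) + P*yexpr.diff(x) + Q*yexpr))
residual = sp.simplify(lhs - (v(x).diff(x, 2) + v(x)))
```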

More generally the equation

*d*η/*dx* = A + Bη + Cη²,

where A, B, C are functions of *x*, is, by the substitution

η = − (1/C)(1/*y*)(*dy*/*dx*),

reduced to the linear equation

*d*²*y*/*dx*² − [B + (1/C)(*d*C/*dx*)]*dy*/*dx* + AC*y* = 0.
The equation

*d*η/*dx* = A + Bη + Cη²,

known as Riccati’s equation, is transformed into an equation of the same form by a substitution of the form η = (*a*Y + *b*)/(*c*Y + *d*), where *a*, *b*, *c*, *d* are any functions of *x*, and this fact may be utilized to obtain a solution when A, B, C have special forms; in particular if any particular solution of the equation be known, say η_{0}, the
substitution η = η_{0} − 1/Y enables us at once to obtain the general solution; for instance, when

B = ½ *d*[log (−A/C)]/*dx*,

a particular solution is η_{0} = √(-A/C). This is a case of the remark, often useful in practice, that the linear equation

φ(*x*)*d*²*y*/*dx*² + ½(*d*φ/*dx*)(*dy*/*dx*) + μ*y* = 0,

where μ is a constant, is reducible to a standard form by taking a new independent variable *z* = ∫ *dx*[φ(*x*)]^{-½}.

We pass to other types of equations of which the solution can be obtained by rule. We may have cases in which there are two dependent variables, *x* and *y*, and one independent variable *t*, the differential coefficients *dx*/*dt*, *dy*/*dt* being given as functions of *x*, *y* and *t*. Of such equations a simple case is expressed by the pair

*dx*/*dt* = *ax* + *by* + *c*, *dy*/*dt* = *a*′*x* + *b*′*y* + *c*′,

wherein the coefficients *a*, *b*, *c*, *a*′, *b*′, *c*′, are constants. To integrate these, form with the constant λ the differential coefficient of *z* = *x* + λ*y*, that is *dz*/*dt* = (*a* + λ*a*′)*x* + (*b* + λ*b*′)*y* + *c* + λ*c*′, the quantity λ being so chosen that *b* + λ*b*′ = λ(*a* + λ*a*′), so that we have *dz*/*dt* = (*a* + λ*a*′)*z* + *c* + λ*c*′; this last equation is at once integrable in the form *z*(a + λ*a*′) + *c* + λ*c*′ = A*e*^{(a + λa′)t}, where A is an arbitrary constant. In general, the condition *b* + λ*b*′ = λ(*a* + λ*a*′) is satisfied by two different values of λ, say λ_{1}, λ_{2}; the solutions corresponding to these give the values of *x* +λ_{1}*y* and *x* + λ_{2}*y*, from which *x* and *y* can be found as functions of *t*, involving two arbitrary constants. If, however, the two roots of the quadratic equation for λ are equal, that is, if (*a* − *b*′)² + 4*a*′b = 0, the method described gives only one equation, expressing *x* + λ*y* in terms of *t*; by means of this equation *y* can be eliminated from *dx*/*dt* = *ax* + *by* + *c*, leading to an equation of the form *dx*/*dt* = P*x* + Q + R*e*^{(a + λa′)t}, where P, Q, R are constants. The integration of this gives *x*, and thence *y* can be found.
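The λ-method may be illustrated on a concrete pair (the coefficients are assumptions for the example, with *c* = *c*′ = 0; the check is a modern aid): for *dx*/*dt* = *x* + 4*y*, *dy*/*dt* = *x* + *y*, the condition *b* + λ*b*′ = λ(*a* + λ*a*′) gives λ² = 4, λ = ±2, whence *x* + 2*y* = A*e*^{3t} and *x* − 2*y* = B*e*^{−t}.

```python
import sympy as sp

t, A, B = sp.symbols('t A B')

# from x + 2y = A e^{3t} and x - 2y = B e^{-t}, solve for x and y
xs = (A*sp.exp(3*t) + B*sp.exp(-t)) / 2
ys = (A*sp.exp(3*t) - B*sp.exp(-t)) / 4

# confirm both members of the original pair are satisfied
r1 = sp.simplify(xs.diff(t) - (xs + 4*ys))
r2 = sp.simplify(ys.diff(t) - (xs + ys))
```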

A similar process is applicable when we have three or more dependent variables whose differential coefficients in regard to the single independent variables are given as linear functions of the dependent variables with constant coefficients.

Another method of solution of the equations

*dx*/*dt* = *ax* + *by* + *c*, *dy*/*dt* = *a*′*x* + *b*′*y* + *c*′,

consists in differentiating the first equation, thereby obtaining

*d*²*x*/*dt*² = *a*(*dx*/*dt*) + *b*(*dy*/*dt*);

from the two given equations, by elimination of *y*, we can express *dy*/*dt* as a linear function of *x* and *dx*/*dt*; we can thus form an equation of the shape *d*²x/*dt*² = P + Q*x* + R*dx*/*dt*, where P, Q, R are constants; this can be integrated by methods previously explained, and the integral, involving two arbitrary constants, gives, by the equation *dx*/*dt* = *ax* + *by* + *c*, the corresponding value of *y*. Conversely it should be noticed that any single linear differential equation

*d*²*x*/*dt*² = *u* + *vx* + *w*(*dx*/*dt*),

where *u*, *v*, *w* are functions of *t*, by writing *y* for *dx*/*dt*, is equivalent with the two equations *dx*/*dt* = *y*, *dy*/*dt* = *u* + *vx* + *wy*. In fact a similar reduction is possible for any system of differential equations with one independent variable.

Equations occur to be integrated of the form

Xdx + Ydy + Zdz = 0,

where X, Y, Z are functions of *x*, *y*, *z*. We consider only the case in which there exists an equation φ(*x*, *y*, *z*) = C whose differential

*d*φ = (∂φ/∂*x*)*dx* + (∂φ/∂*y*)*dy* + (∂φ/∂*z*)*dz*

is equivalent with the given differential equation; that is, μ being a proper function of *x*, *y*, *z*, we assume that there exist equations

∂φ/∂*x* = μX, ∂φ/∂*y* = μY, ∂φ/∂*z* = μZ;

these equations require

∂(μX)/∂*y* = ∂(μY)/∂*x*,

&c., and hence

X(∂Y/∂*z* − ∂Z/∂*y*) + Y(∂Z/∂*x* − ∂X/∂*z*) + Z(∂X/∂*y* − ∂Y/∂*x*) = 0;

conversely it can be proved that this is sufficient in order that μ may exist to render μ(X*dx* + Y*dy* + Z*dz*) a perfect differential; in particular it may be satisfied in virtue of the three equations such as

∂X/∂*y* = ∂Y/∂*x*,
in which case we may take μ = 1. Assuming the condition in its general form, take in the given differential equation a plane section of the surface φ = C parallel to the plane *z* = 0, viz. put *z* constant, and consider the resulting differential equation in the two variables *x*, *y*, namely X*dx* + Y*dy* = 0; let ψ(*x*, *y*, *z*) = constant, be its integral, the constant *z* entering, as a rule, in ψ because it enters in X and Y. Now differentiate the relation ψ(*x*, *y*, *z*) = ƒ(*z*), where ƒ is a function to be determined, so obtaining

(∂ψ/∂*x*)*dx* + (∂ψ/∂*y*)*dy* + [∂ψ/∂*z* − ƒ′(*z*)]*dz* = 0;

there exists a function σ of *x*, *y*, *z* such that

∂ψ/∂*x* = σX, ∂ψ/∂*y* = σY,

because ψ = constant, is the integral of X*dx* + Y*dy* = 0; we desire to prove that ƒ can be chosen so that also, in virtue of ψ(*x*, *y*, *z*) = ƒ(*z*), we have

∂ψ/∂*z* − ƒ′(*z*) = σZ,

namely

ƒ′(*z*) = ∂ψ/∂*z* − σZ;
if this can be proved the relation ψ(*x*, *y*, *z*) − ƒ(*z*) = constant, will be the integral of the given differential equation. To prove this it is enough to show that, in virtue of ψ(*x*, *y*, *z*) = ƒ(*z*), the function ∂ψ/∂*z* − σZ can be expressed in terms of *z* only. Now in consequence of the originally assumed relations,

∂φ/∂*x* = μX, ∂φ/∂*y* = μY, ∂ψ/∂*x* = σX, ∂ψ/∂*y* = σY,

we have

(∂ψ/∂*x*)(∂φ/∂*y*) − (∂ψ/∂*y*)(∂φ/∂*x*) = σμ(XY − YX),

and hence

(∂ψ/∂*x*)(∂φ/∂*y*) − (∂ψ/∂*y*)(∂φ/∂*x*) = 0;

this shows that, as functions of *x* and *y*, ψ is a function of φ (see the note at the end of part *i*. of this article, on Jacobian determinants), so that we may write ψ = F(*z*, φ), from which

∂ψ/∂*x* = (∂F/∂φ)(∂φ/∂*x*), ∂ψ/∂*z* = ∂F/∂*z* + (∂F/∂φ)(∂φ/∂*z*);

then σ = μ(∂F/∂φ), or

∂ψ/∂*z* − σZ = ∂F/∂*z* + (∂F/∂φ)μZ − μ(∂F/∂φ)Z = ∂F/∂*z*;

in virtue of ψ(*x*, *y*, *z*) = ƒ(*z*), and ψ = F(*z*, φ), the function φ can be written in terms of *z* only, thus ∂F/∂*z* can be written in terms of *z* only, and what we required to prove is proved.
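A simple instance of the theory (the particular X, Y, Z are an assumption for the example; the check is a modern aid): for *yz dx* + *zx dy* + *xy dz* = 0 the condition of integrability holds, indeed each of the three equations such as ∂X/∂*y* = ∂Y/∂*x* holds separately, so μ = 1 and φ = *xyz* gives the integral *xyz* = C.

```python
import sympy as sp

x, y, z = sp.symbols('x y z')

# X dx + Y dy + Z dz = 0 with X = yz, Y = zx, Z = xy
X, Y, Z = y*z, z*x, x*y

# the integrability condition X(Y_z - Z_y) + Y(Z_x - X_z) + Z(X_y - Y_x) = 0
condition = sp.simplify(X*(Y.diff(z) - Z.diff(y))
                        + Y*(Z.diff(x) - X.diff(z))
                        + Z*(X.diff(y) - Y.diff(x)))

# phi = xyz is a perfect-differential integral: d(phi) = X dx + Y dy + Z dz
phi = x*y*z
exact = (sp.simplify(phi.diff(x) - X) == 0
         and sp.simplify(phi.diff(y) - Y) == 0
         and sp.simplify(phi.diff(z) - Z) == 0)
```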

Consider lastly a simple type of differential equation containing *two* independent variables, say *x* and *y*, and one dependent variable *z*, namely the equation

P(∂*z*/∂*x*) + Q(∂*z*/∂*y*) = R,

where P, Q, R are functions of *x*, *y*, *z*. This is known as Lagrange’s linear partial differential equation of the first order. To integrate this, consider first the ordinary differential equations *dx*/*dz* = P/R, *dy*/*dz* = Q/R, and suppose that two functions *u*, *v*, of *x*, *y*, *z* can be determined, independent of one another, such that the equations *u* = *a*, *v* = *b*, where *a*, *b* are arbitrary constants, lead to these ordinary differential equations, namely such that

P(∂*u*/∂*x*) + Q(∂*u*/∂*y*) + R(∂*u*/∂*z*) = 0,

and

P(∂*v*/∂*x*) + Q(∂*v*/∂*y*) + R(∂*v*/∂*z*) = 0.

Then if F(*x*, *y*, *z*) = 0 be a relation satisfying the original differential equation, this relation giving rise to

∂*z*/∂*x* = −(∂F/∂*x*)/(∂F/∂*z*), ∂*z*/∂*y* = −(∂F/∂*y*)/(∂F/∂*z*),

and we have

P(∂F/∂*x*) + Q(∂F/∂*y*) + R(∂F/∂*z*) = 0.

It follows that the determinant of three rows and columns vanishes whose first row consists of the three quantities ∂F/∂*x*, ∂F/∂*y*, ∂F/∂*z*, whose second row consists of the three quantities ∂*u*/∂*x*, ∂*u*/∂*y*, ∂*u*/∂*z*, whose third row consists similarly of the partial derivatives of *v*. The vanishing of this so-called Jacobian determinant is known to imply that F is expressible as a function of *u* and *v*, unless these are themselves functionally related, which is contrary to hypothesis (see the note below on Jacobian determinants). Conversely, any relation φ(*u*, *v*) = 0 can easily be proved, in virtue of the equations satisfied by *u* and *v*, to lead to

P(∂*z*/∂*x*) + Q(∂*z*/∂*y*) = R.
The solution of this partial equation is thus reduced to the solution of the two ordinary differential equations expressed by *dx*/P = *dy*/Q = *dz*/R. In regard to this problem one remark may be made which is often of use in practice: when one equation *u* = *a* has been found to satisfy the differential equations, we may utilize this to obtain the second equation *v* = *b*; for instance, we may, by means of *u* = *a*, eliminate *z*; when then, from the resulting equations in *x* and *y*, a relation *v* = *b* has been found containing *x*, *y* and *a*, the substitution *a* = *u* will give a relation involving *x*, *y*, *z*.
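For a concrete case of Lagrange’s equation (the coefficients are an assumption for the example; the check is a modern aid): with P = *x*, Q = *y*, R = *z*, the ordinary equations *dx*/*x* = *dy*/*y* = *dz*/*z* give *u* = *y*/*x* = *a* and *v* = *z*/*x* = *b*, so any relation φ(*y*/*x*, *z*/*x*) = 0, for instance *z* = *x*ƒ(*y*/*x*) with ƒ arbitrary, is a solution.

```python
import sympy as sp

x, y = sp.symbols('x y')
f = sp.Function('f')

# P dz/dx + Q dz/dy = R with P = x, Q = y, R = z;
# general solution z = x f(y/x) for an arbitrary function f
zsol = x * f(y/x)
residual = sp.simplify(x*zsol.diff(x) + y*zsol.diff(y) - zsol)
```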

*Note on Jacobian Determinants.*—The fact assumed above that the vanishing of the Jacobian determinant whose elements are the partial derivatives of three functions F, *u*, *v*, of three variables *x*, *y*, *z*,
involves that there exists a functional relation connecting the three functions F, *u*, *v*, may be proved somewhat roughly as follows:—

The corresponding theorem is true for any number of variables. Consider first the case of two functions *p*, *q*, of two variables *x*, *y*. The function *p*, not being constant, must contain one of the variables, say *x*; we can then suppose *x* expressed in terms of *y* and the function *p*; thus the function *q* can be expressed in terms of *y* and the function *p*, say *q* = Q(*p*, *y*). This is clear enough in the simplest cases which arise, when the functions are rational. Hence we have

∂*q*/∂*x* = (∂Q/∂*p*)(∂*p*/∂*x*),

and

∂*q*/∂*y* = (∂Q/∂*p*)(∂*p*/∂*y*) + ∂Q/∂*y*;

these give

(∂*p*/∂*x*)(∂*q*/∂*y*) − (∂*p*/∂*y*)(∂*q*/∂*x*) = (∂*p*/∂*x*)(∂Q/∂*y*);

by hypothesis ∂*p*/∂*x* is not identically zero; therefore if the Jacobian determinant of *p* and *q* in regard to *x* and *y* is zero identically, so is ∂Q/∂*y*, or Q does not contain *y*, so that *q* is expressible as a function of *p* only. Conversely, such an expression can be seen at once to make the Jacobian of *p* and *q* vanish identically.

Passing now to the case of three variables, suppose that the Jacobian determinant of the three functions F, *u*, *v* in regard to *x*, *y*, *z* is identically zero. We prove that if *u*, *v* are not themselves functionally connected, F is expressible as a function of *u* and *v*. Suppose first that the minors of the elements ∂F/∂*x*, ∂F/∂*y*, ∂F/∂*z* in the determinant are all identically zero, namely the three determinants such as

(∂*u*/∂*y*)(∂*v*/∂*z*) − (∂*u*/∂*z*)(∂*v*/∂*y*);

then by the case of two variables considered above there exist three functional relations, ψ_{1}(*u*, *v*, *x*) = 0, ψ_{2}(*u*, *v*, *y*) = 0, ψ_{3}(*u*, *v*, *z*) = 0, of which the first, for example, follows from the vanishing of

(∂*u*/∂*y*)(∂*v*/∂*z*) − (∂*u*/∂*z*)(∂*v*/∂*y*).
We cannot assume that *x* is absent from ψ_{1}, or *y* from ψ_{2}, or *z* from ψ_{3}; but conversely we cannot simultaneously have *x* entering in ψ_{1}, and *y* in ψ_{2}, and *z* in ψ_{3}, or else by elimination of *u* and *v* from the three equations ψ_{1} = 0, ψ_{2} = 0, ψ_{3} = 0, we should find a necessary relation connecting the three independent quantities *x*, *y*, *z*; which is absurd. Thus when the three minors of ∂F/∂*x*, ∂F/∂*y*, ∂F/∂*z* in the Jacobian determinant are all zero, there exists a functional relation connecting *u* and *v* only. Suppose no such relation to exist; we can then suppose, for example, that

(∂*u*/∂*y*)(∂*v*/∂*z*) − (∂*u*/∂*z*)(∂*v*/∂*y*)

is not zero. Then from the equations *u*(*x*, *y*, *z*) = *u*, *v*(*x*, *y*, *z*) = *v* we can express *y* and *z* in terms of *u*, *v*, and *x* (the attempt to do this could only fail by leading to a relation connecting *u*, *v* and *x*, and the existence of such a relation would involve that the determinant

(∂*u*/∂*y*)(∂*v*/∂*z*) − (∂*u*/∂*z*)(∂*v*/∂*y*)

was zero), and so write F in the form F(*x*, *y*, *z*) = Φ(*u*, *v*, *x*). We then have

∂F/∂*x* = ∂Φ/∂*x* + (∂Φ/∂*u*)(∂*u*/∂*x*) + (∂Φ/∂*v*)(∂*v*/∂*x*), ∂F/∂*y* = (∂Φ/∂*u*)(∂*u*/∂*y*) + (∂Φ/∂*v*)(∂*v*/∂*y*), ∂F/∂*z* = (∂Φ/∂*u*)(∂*u*/∂*z*) + (∂Φ/∂*v*)(∂*v*/∂*z*);

thereby the Jacobian determinant of F, *u*, *v* is reduced to

(∂Φ/∂*x*)[(∂*u*/∂*y*)(∂*v*/∂*z*) − (∂*u*/∂*z*)(∂*v*/∂*y*)];

by hypothesis the second factor of this does not vanish identically; hence ∂Φ/∂*x* = 0 identically, and Φ does not contain *x*; so that F is expressible in terms of *u*, *v* only; as was to be proved.
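The direct half of the theorem may be illustrated numerically (the particular functions are an assumption for the example; the computation is a modern aid): when F is a function of *u* and *v* only, the Jacobian determinant of F, *u*, *v* in regard to *x*, *y*, *z* vanishes identically.

```python
import sympy as sp

x, y, z = sp.symbols('x y z')

# u, v chosen for illustration; F = u^2 - 2v is a function of u, v only
u = x + y + z
v = x*y + y*z + z*x
F = u**2 - 2*v          # equals x^2 + y^2 + z^2

# Jacobian determinant of F, u, v with respect to x, y, z
J = sp.Matrix([[F.diff(s) for s in (x, y, z)],
               [u.diff(s) for s in (x, y, z)],
               [v.diff(s) for s in (x, y, z)]])
jac = sp.simplify(J.det())
```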

*Part II.—General Theory.*

Differential equations arise in the expression of the relations between quantities by the elimination of details, either unknown or regarded as unessential to the formulation of the relations in question. They give rise, therefore, to the two closely connected problems of determining what arrangement of details is consistent with them, and of developing, apart from these details, the general properties expressed by them. Very roughly, two methods of study can be distinguished, with the names Transformation-theories, Function-theories; the former is concerned with the reduction of the algebraical relations to the fewest and simplest forms, eventually with the hope of obtaining explicit expressions of the dependent variables in terms of the independent variables; the latter is concerned with the determination of the general descriptive relations among the quantities which are involved by the differential equations, with as little use of algebraical calculations as may be possible. Under the former heading we may, with the assumption of a few theorems belonging to the latter, arrange the theory of partial differential equations and Pfaff’s problem, with their geometrical interpretations, as at present developed, and the applications of Lie’s theory of transformation-groups to partial and to ordinary equations; under the latter, the study of linear differential equations in the manner initiated by Riemann, the applications of discontinuous groups, the theory of the singularities of integrals, and the study of potential equations with existence-theorems arising therefrom. In order to be clear we shall enter into some detail in regard to partial differential equations of the first order, both those which are linear in any number of variables and those not linear in two independent variables, and also in regard to the function-theory of linear differential equations of the second order. 
Space renders impossible anything further than the briefest account of many other matters; in particular, the theories of partial equations of higher than the first order, the function-theory of the singularities of ordinary equations not linear and the applications to differential geometry, are taken account of only in the bibliography. It is believed that on the whole the article will be more useful to the reader than if explanations of method had been further curtailed to include more facts.

When we speak of a function without qualification, it is to be understood that in the immediate neighbourhood of a particular set *x*_{0}, *y*_{0}, ... of values of the independent variables *x*, *y*, ... of the function, at whatever point of the range of values for *x*, *y*, ... under consideration *x*_{0}, *y*_{0}, ... may be chosen, the function can be expressed as a series of positive integral powers of the differences *x* − *x*_{0}, *y* − *y*_{0}, ..., convergent when these are sufficiently small (see Function: Functions of Complex Variables). Without this condition, which we express by saying that the function is developable about *x*_{0}, *y*_{0}, ..., many results provisionally stated in the transformation theories would be unmeaning or incorrect. If, then, we have a set of *k* functions, ƒ_{1} ... ƒ_{k} of *n* independent variables *x*_{1} ... *x*_{n}, we say that they are independent when *n* ≥ *k* and not every determinant of *k* rows and columns vanishes of the matrix of *k* rows and *n* columns whose *r*-th row has the constituents *d*ƒ_{r}/*dx*_{1}, ... *d*ƒ_{r}/*dx*_{n}; the justification being in the theorem, which we assume, that if the determinant involving, for instance, the first *k* columns be not zero for *x*_{1} = *x*º_{1} ... *x*_{n} = *x*º_{n}, and the functions be developable about this point, then from the equations ƒ_{1} = *c*_{1}, ... ƒ_{k} = *c*_{k} we can express *x*_{1}, ... *x*_{k} by convergent power series in the differences *x*_{k+1} − *x*º_{k+1}, ... *x*_{n} − *x*_{nº}, and so regard *x*_{1}, ... *x*_{k} as functions of the remaining variables. This we often express by saying that the equations ƒ_{1} = *c*_{1}, ... ƒ_{k} = *c*_{k} can be solved for *x*_{1}, ... *x*_{k}. The explanation is given as a type of explanation often understood in what follows.

*Ordinary equations of the first order.* We may conveniently begin by stating the theorem: If each of the *n* functions φ_{1}, ... φ_{n} of the (*n* + 1) variables *x*_{1}, ... *x*_{n}, *t* be developable about the values *x*º_{1}, ... *x*º_{n}, *t*^{0}, the *n* differential equations of the form *dx*_{1}/*dt* = φ_{1}(*x*_{1}, ... *x*_{n}, *t*) are satisfied by convergent power series

*x*_{r} = *x*º_{r} + (*t* − *t*^{0}) A_{r1} + (*t* − *t*^{0})² A_{r2} + ...

reducing respectively to *x*º_{1}, ... *x*º_{n} when *t* = *t*^{0}; and the only functions satisfying the equations and reducing respectively to *x*º_{1}, ... *x*º_{n} when *t* = *t*^{0} are those determined by continuation of these series. If the result of solving these *n* equations for *x*º_{1}, ... *x*º_{n} be written in the form ω_{1}(*x*_{1}, ... *x*_{n}, *t*) = *x*º_{1}, ... ω_{n}(*x*_{1}, ... *x*_{n}, *t*) = *x*º_{n}, Single homogeneous partial equation of the first order. it is at once evident that the differential equation

*d*ƒ/*dt* + φ_{1}*d*ƒ/*dx*_{1} + ... + φ_{n}*d*ƒ/*dx*_{n} = 0

possesses *n* integrals, namely, the functions ω_{1}, ... ω_{n}, which are developable about the values (*x*º_{1}, ... *x*º_{n}, *t*^{0}) and reduce respectively to *x*_{1}, ... *x*_{n} when *t* = *t*^{0}. And in fact it has no other integrals so reducing. Thus this equation also possesses a unique integral reducing when *t* = *t*^{0} to an arbitrary function ψ(*x*_{1}, ... *x*_{n}), this integral being ψ(ω_{1}, ... ω_{n}). Conversely the existence of these *principal* integrals ω_{1}, ... ω_{n} of the partial equation establishes the existence of the specified solutions of the ordinary equations *dx*_{i}/*dt* = φ_{i}. The following sketch of the proof of the existence of these principal integrals for the case *n* = 2 will show the character of more general investigations. Put *x* for *x* − *x*^{0}, &c., and consider the equation *a*(*xyt*) *d*ƒ/*dx* + *b*(*xyt*) *d*ƒ/*dy* = *d*ƒ/*dt*, wherein the functions *a*, *b* are developable about *x* = 0, *y* = 0, *t* = 0; say

*a*(*xyt*) = *a*_{0} + *ta*_{1} + *t*²*a*_{2}/2! + ..., *b*(*xyt*) = *b*_{0} + *tb*_{1} + *t*²*b*_{2}/2! + ...,

so that

*ad*/*dx* + *bd*/*dy* = δ_{0} + *t*δ_{1} + ½*t*²δ_{2} + ...,

where δ_{r} = *a*_{r}*d*/*dx* + *b*_{r}*d*/*dy*. In order that

ƒ = *p*_{0} + *tp*_{1} + *t*²*p*_{2}/2! + ...

wherein *p*_{0}, *p*_{1} ... are power series in *x*, *y*, should satisfy the equation, it is necessary, as we find by equating like terms, that

*p*_{1} = δ_{0}*p*_{0}, *p*_{2} = δ_{0}*p*_{1} + δ_{1}*p*_{0}, &c.

and in general Proof of the existence of integrals.

*p*_{s+1} = δ_{0}*p*_{s} + *s*_{1}δ_{1}*p*_{s-1} + *s*_{2}δ_{2}*p*_{s-2} +... + δ_{s}*p*_{0},

where

*s*_{r} = *s*!/[*r*! (*s* − *r*)!].
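The recurrence may be traced in a small computation; the sketch below takes the illustrative equation *x*·*d*ƒ/*dx* = *d*ƒ/*dt* (so *a* = *x*, *b* = 0, and since *a* does not involve *t*, δ_{0} = *x*·*d*/*dx* while every other δ_{r} vanishes) and builds the coefficients *p*_{s} from *p*_{0} = *x*.

```python
# p_{s+1} = δ0 p_s + s_1 δ1 p_{s-1} + ... + δs p_0, with only δ0 surviving.
# A polynomial in x is held as a {degree: coefficient} dict.
from math import factorial

def delta0(p):
    """Apply the operator x*d/dx to a polynomial in x."""
    return {k: k*c for k, c in p.items() if k and c}

p = [{1: 1}]                 # p_0 = x, the value of f at t = 0
for s in range(6):           # here p_{s+1} = δ0 p_s, the other δr being 0
    p.append(delta0(p[-1]))

# f = Σ t^s p_s/s!; every p_s equals x, so f = x·e^t, and indeed
# x·df/dx = x·e^t = df/dt.
coeffs = [q.get(1, 0) / factorial(s) for s, q in enumerate(p)]
print(coeffs)   # 1, 1, 1/2, 1/6, ... -- the exponential series
```

The same recurrence, with general *a*, *b*, yields all coefficients of the formal series for ƒ; the comparison equation of the text then guarantees convergence.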

Now compare with the given equation another equation

A(*xyt*)*d*F/*dx* + B(*xyt*)*d*F/*dy* = *d*F/*dt*,

wherein each coefficient in the expansion of either A or B is real and positive, and not less than the absolute value of the corresponding coefficient in the expansion of *a* or *b*. In the second equation let us substitute a series

F = P_{0} + *t*P_{1} + *t*²P_{2}/2! + ...,

wherein the coefficients in P_{0} are real and positive, and each not less than the absolute value of the corresponding coefficient in *p*_{0}; then putting Δ_{r} = A_{r}*d*/*dx* + B_{r}*d*/*dy* we obtain necessary equations of the same form as before, namely,

P_{1} = Δ_{0}P_{0}, P_{2} = Δ_{0}P_{1} + Δ_{1}P_{0}, ...

and in general P_{s+1} = Δ_{0}P_{s} + *s*_{1}Δ_{1}P_{s-1} + ... + Δ_{s}P_{0}. These give for every coefficient in P_{s+1} an integral aggregate with real positive coefficients of the coefficients in P_{s}, P_{s-1}, ..., P_{0} and the coefficients in A and B; and they are the same aggregates as would be given by the previously obtained equations for the corresponding coefficients in *p*_{s+1} in terms of the coefficients in *p*_{s}, *p*_{s-1}, ..., *p*_{0} and the coefficients in *a* and *b*. Hence as the coefficients in P_{0} and also in A, B are real and positive, it follows that the values obtained in succession for the coefficients in P_{1}, P_{2}, ... are real and positive; and further, taking account of the fact that the absolute value of a sum of terms is not greater than the sum of the absolute values of the terms, it follows, for each value of *s*, that every coefficient in *p*_{s+1} is, in absolute value, not greater than the corresponding coefficient in P_{s+1}. Thus if the series for F be convergent, the series for ƒ will also be; and we are thus reduced to (1), specifying functions A, B with real positive coefficients, each in absolute value not less than the corresponding coefficient in *a*, *b*; (2) proving that the equation

A*d*F/*dx* + B*d*F/*dy* = *d*F/*dt*

possesses an integral P_{0} + *t*P_{1} + *t*²P_{2}/2! + ... in which the coefficients in P_{0} are real and positive, and each not less than the absolute value of the corresponding coefficient in *p*_{0}. If *a*, *b* be developable for *x*, *y* both in absolute value less than *r* and for *t* less in absolute value than R, and for such values *a*, *b* be both less in absolute value than the real positive constant M, it is not difficult to verify that we may take A = B = M[1 − (*x* + *y*)/*r*]^{-1} (1 − *t*/R)^{-1}, and obtain

and that this solves the problem when *x*, *y*, *t* are sufficiently small for the two cases *p*_{0} = *x*, *p*_{0} = *y*. One obvious application of the general theorem is to the proof of the existence of an integral of an ordinary linear differential equation given by the *n* equations *dy*/*dx* = *y*_{1}, *dy*_{1}/*dx* = *y*_{2}, ...,

*dy*_{n-1}/*dx* = *p* − *p*_{1}*y*_{n-1} − ... − *p*_{n}*y*;

but in fact any simultaneous system of ordinary equations is reducible to a system of the form

*dx*_{i}/*dt* = φ_{i}(*t*, *x*_{1}, ... *x*_{n}).
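The reduction just described can be exercised numerically; the sketch below rewrites the illustrative equation *y*″ = −*y* as the system *dy*/*dx* = *y*_{1}, *dy*_{1}/*dx* = −*y*, and continues the solution by a standard fourth-order Runge-Kutta step, a numerical stand-in for the power-series continuation of the theorem.

```python
# y'' = -y as a first-order system, integrated from y(0) = 1, y'(0) = 0;
# the exact solution is y = cos x.
import math

def phi(x, v):
    y, y1 = v
    return [y1, -y]            # dy/dx = y1,  dy1/dx = -y

def rk4(v, x, h):
    def shift(u, w, s): return [a + s*b for a, b in zip(u, w)]
    k1 = phi(x, v)
    k2 = phi(x + h/2, shift(v, k1, h/2))
    k3 = phi(x + h/2, shift(v, k2, h/2))
    k4 = phi(x + h, shift(v, k3, h))
    return [v[i] + h/6*(k1[i] + 2*k2[i] + 2*k3[i] + k4[i]) for i in range(2)]

v, x, h = [1.0, 0.0], 0.0, 0.01
for _ in range(100):           # advance to x = 1
    v = rk4(v, x, h); x += h
print(v[0], math.cos(1.0))     # the two agree closely
```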

Suppose we have *k* homogeneous linear partial equations of the first order in *n* independent variables, the general equation being *a*_{σ1}*d*ƒ/*dx*_{1} + ... + *a*_{σn}*d*ƒ/*dx*_{n} = 0, where σ = 1, ... *k*, and that Simultaneous linear partial equations. we desire to know whether the equations have common solutions, and if so, how many. It is to be understood that the equations are linearly independent, which implies that *k* ≤ *n* and not every determinant of *k* rows and columns is identically zero in the matrix in which the *i*-th element of the σ-th row is *a*_{σi} (*i* = 1, ... *n*, σ = 1, ... *k*). Denoting the left side of the σ-th equation by P_{σ}ƒ, it is clear that every common solution of the two equations P_{σ}ƒ = 0, P_{ρ}ƒ = 0, is also a solution of the equation P_{ρ}(P_{σ}ƒ) − P_{σ}(P_{ρ}ƒ) = 0. We immediately find, however, that this is also a linear equation, namely, ΣH_{i}*d*ƒ/*dx*_{i} = 0, where H_{i} = P_{ρ}*a*_{σi} − P_{σ}*a*_{ρi}, and if it be not already contained among the given equations, or be linearly deducible from them, it may be added to them, as not introducing any additional limitation of the possibility of their having common solutions. Proceeding thus with every pair of the original equations, and then with every pair of the possibly augmented system so obtained, and so on continually, we shall arrive at a system of equations, linearly independent of each other and therefore not more than *n* in number, such that the combination, in the way described, of every pair of them, leads to an equation which is linearly deducible from them. If the number of this so-called *complete system* is *n*, the equations give *d*ƒ/*dx*_{1} = 0 ... *d*ƒ/*dx*_{n} = 0, leading to the nugatory result ƒ = a constant. Suppose, then, the number of this system to be *r* < *n*; suppose, further, that from the Complete systems of linear partial equations. 
matrix of the coefficients a determinant of *r* rows and columns not vanishing identically is that formed by the coefficients of the differential coefficients of ƒ in regard to *x*_{1} ... *x*_{r}; also that the coefficients are all developable about the values *x*_{1} = *x*º_{1}, ... *x*_{n}= *x*º_{n}, and that for these values the determinant just spoken of is not zero. Then the main theorem is that the complete system of *r* equations, and therefore the originally given set of *k* equations, have in common *n* − *r* solutions, say ω_{r+1}, ... ω_{n}, which reduce respectively to *x*_{r+1}, ... *x*_{n} when in them for *x*_{1}, ... *x*_{r} are respectively put *x*º_{1}, ... *x*º_{r}; so that also the equations have in common a solution reducing when *x*_{1} = *x*º_{1}, ... *x*_{r} = *x*º_{r} to an arbitrary function ψ(*x*_{r+1}, ... *x*_{n}) which is developable about *x*º_{r+1}, ... *x*º_{n}, namely, this common solution is ψ(ω_{r+1}, ... ω_{n}). It is seen at once that this result is a generalization of the theorem for *r* = 1, and its proof is conveniently given by induction from that case. It can be verified without difficulty (1) that if from the *r* equations of the complete system we form *r* independent linear aggregates, with coefficients not necessarily constants, the new system is also a complete system; (2) that if in place of the independent variables *x*_{1}, ... *x*_{n} we introduce any other variables which are independent functions of the former, the new equations also form a complete system. It is convenient, then, from the complete system of *r* equations to form *r* new equations by solving separately for *d*ƒ/*dx*_{1}, ..., *d*ƒ/*dx*_{r}; suppose the general equation of the new system to be

Q_{σ}ƒ = *d*ƒ/*dx*_{σ} + *c*_{σ,r+1}*d*ƒ/*dx*_{r+1} + ... + *c*_{σn}*d*ƒ/*dx*_{n} = 0 (σ = 1, ... *r*).

Then it is easily seen that the equation Q_{ρ}Q_{σ}ƒ − Q_{σ}Q_{ρ}ƒ = 0 contains only the differential coefficients of ƒ in regard to *x*_{r+1} ... *x*_{n}; as it is at most a linear function of Q_{1}ƒ, ... Q_{r}ƒ, it must be identically zero. So reduced the system is called a Jacobian system. Of this system Q_{1}ƒ = 0 has *n* − 1 principal solutions reducing respectively Jacobian systems. to *x*_{2}, ... *x*_{n} when

*x*_{1} = *x*º_{1},

and its form shows that of these the first *r* − 1 are exactly *x*_{2} ... *x*_{r}. Let these *n* − 1 functions together with *x*_{1} be introduced as *n* new independent variables in all the *r* equations. Since the first equation is satisfied by *n* − 1 of the new independent variables, it will contain no differential coefficients in regard to them, and will reduce therefore simply to *d*ƒ/*dx*_{1} = 0, expressing that any common solution of the *r* equations is a function only of the *n* − 1 remaining variables. Thereby the investigation of the common solutions is reduced to the same problem for *r* − 1 equations in *n* − 1 variables. Proceeding thus, we reach at length one equation in *n* − *r* + 1 variables, from which, by retracing the analysis, the proposition stated is seen to follow.

The analogy with the case of one equation is, however, still closer. With the coefficients *c*_{σj} of the equations Q_{σ}ƒ = 0 in transposed array (σ = 1, ... *r*, *j* = *r* + 1, ... *n*) we can put down the (*n* − *r*) equations, *dx*_{j} = *c*_{1j}*dx*_{1} + ... + *c*_{rj}*dx*_{r}, equivalent to System of total differential equations. the *r*(*n* − *r*) equations *dx*_{j}/*dx*_{σ} = *c*_{σj}. That consistent with them we may be able to regard *x*_{r+1}, ... *x*_{n} as functions of *x*_{1}, ... *x*_{r}, these being regarded as independent variables, it is clearly necessary that when we differentiate *c*_{σj} in regard to *x*_{ρ} on this hypothesis the result should be the same as when we differentiate *c*_{ρj} in regard to *x*_{σ} on this hypothesis. The differential coefficient of a function ƒ of *x*_{1}, ... *x*_{n} on this hypothesis, in regard to *x*_{ρ}, is, however,

*d*ƒ/*dx*_{ρ} + *c*_{ρ,r+1}*d*ƒ/*dx*_{r+1} + ... + *c*_{ρn}*d*ƒ/*dx*_{n},

namely, is Q_{ρ}ƒ. Thus the consistence of the *n* − *r* total equations requires the conditions Q_{ρ}*c*_{σj} − Q_{σ}*c*_{ρj} = 0, which are, however, verified in virtue of Q_{ρ}(Q_{σ}ƒ) − Q_{σ}(Q_{ρ}ƒ) = 0. And it can in fact be easily verified that if ω_{r+1}, ... ω_{n} be the principal solutions of the Jacobian system, Q_{σ}ƒ = 0, reducing respectively to *x*_{r+1}, ... *x*_{n} when *x*_{1} = *x*º_{1}, ... *x*_{r} = *x*º_{r}, and the equations ω_{r+1} = *x*^{0}_{r+1}, ... ω_{n} = *x*º_{n} be solved for *x*_{r+1}, ... *x*_{n} to give *x*_{j} = ψ_{j}(*x*_{1}, ... *x*_{r}, *x*^{0}_{r+1}, ... *x*º_{n}), these values solve the total equations and reduce respectively to *x*^{0}_{r+1}, ... *x*º_{n} when *x*_{1} = *x*º_{1} ... *x*_{r} = *x*º_{r}. And the total equations have no other solutions with these initial values. Conversely, the existence of these solutions of the total equations can be deduced a priori and the theory of the Jacobian system based upon them. The theory of such total equations, in general, finds its natural place under the heading *Pfaffian Expressions*, below.

A practical method of reducing the solution of the *r* equations of a Jacobian system to that of a single equation in *n* − *r* + 1 variables may be explained in connexion with a geometrical interpretation which will perhaps be clearer in a particular Geometrical interpretation and solution. case, say *n* = 3, *r* = 2. There is then only one total equation, say *dz* = *adx* + *bdy*; if we do not take account of the condition of integrability, which is in this case *da*/*dy* + *bda*/*dz* = *db*/*dx* + *adb*/*dz*, this equation may be regarded as defining through an arbitrary point (*x*_{0}, *y*_{0}, *z*_{0}) of three-dimensioned space (about which *a*, *b* are developable) a plane, namely, *z* − *z*_{0} = *a*_{0}(*x* − *x*_{0}) + *b*_{0}(*y* − *y*_{0}), and therefore, through this arbitrary point ∞^{2} directions, namely, all those in the plane. If now there be a surface *z* = ψ(*x*, *y*), satisfying *dz* = *adx* + *bdy* and passing through (*x*_{0}, *y*_{0}, *z*_{0}), this plane will touch the surface, and the operations of passing along the surface from (*x*_{0}, *y*_{0}, *z*_{0}) to

(*x*_{0} + *dx*_{0}, *y*_{0}, *z*_{0} + *dz*_{0})

and then to (*x*_{0} + *dx*_{0}, *y*_{0} + *dy*_{0}, *z*_{0} + *d*^{1}*z*_{0}), ought to lead to the same value of *d*^{1}*z*_{0} as do the operations of passing along the surface from (*x*_{0}, *y*_{0}, *z*_{0}) to (*x*_{0}, *y*_{0} + *dy*_{0}, *z*_{0} + δ*z*_{0}), and then to

(*x*_{0} + *dx*_{0}, *y*_{0} + *dy*_{0}, *z*_{0} + δ^{1}*z*_{0}),

namely, δ^{1}*z*_{0} ought to be equal to *d*^{1}*z*_{0}. But we find

*d*^{1}*z*_{0} − δ^{1}*z*_{0} = *dx*_{0}*dy*_{0}(*db*/*dx* + *adb*/*dz* − *da*/*dy* − *bda*/*dz*),
and so at once reach the condition of integrability. If now we put
*x* = *x*_{0} + *t*, *y* = *y*_{0} + *mt*, and regard *m* as constant, we shall in fact be considering the section of the surface by a fixed plane *y* − *y*_{0} = *m*(*x* − *x*_{0}); along this section *dz* = *dt*(*a* + *bm*); if we then integrate the equation *dz*/*dt* = *a* + *bm*, where *a*, *b* are expressed as functions of *m* and *t*, with *m* kept constant, finding the solution which reduces to *z*_{0} for *t* = 0, and in the result again replace *m* by (*y* − *y*_{0})/(*x* − *x*_{0}), we shall have the surface in question. In the general case the equations

*dx*_{j} = *c*_{1j}*dx*_{1} + ... + *c*_{rj}*dx*_{r}

similarly determine through an arbitrary point *x*º_{1}, ... *x*º_{n} Mayer’s method of integration. a planar manifold of *r* dimensions in space of *n* dimensions, and when the conditions of integrability are satisfied, every direction in this manifold through this point is tangent to the manifold of *r* dimensions, expressed by ω_{r+1} = *x*º_{r+1}, ... ω_{n} = *x*º_{n}, which satisfies the equations and passes through this point. If we put *x*_{1} − *x*º_{1} = *t*, *x*_{2} − *x*º_{2} = *m*_{2}*t*, ... *x*_{r} − *x*º_{r} = *m*_{r}*t*, and regard *m*_{2}, ... *m*_{r} as fixed, the (*n* − *r*) total equations take the form *dx*_{j}/*dt* = *c*_{1j} + *m*_{2}*c*_{2j} + ... + *m*_{r}*c*_{rj}, and their integration is equivalent to that of the single partial equation
*d*ƒ/*dt* + (*c*_{1,r+1} + *m*_{2}*c*_{2,r+1} + ... + *m*_{r}*c*_{r,r+1})*d*ƒ/*dx*_{r+1} + ... + (*c*_{1n} + *m*_{2}*c*_{2n} + ... + *m*_{r}*c*_{rn})*d*ƒ/*dx*_{n} = 0
in the *n* − *r* + 1 variables *t*, *x*_{r+1}, ... *x*_{n}. Determining the solutions Ω_{r+1}, ... Ω_{n} which reduce respectively to *x*_{r+1}, ... *x*_{n} when *t* = 0, and substituting *t* = *x*_{1} − *x*º_{1}, *m*_{2} = (*x*_{2} − *x*º_{2})/(*x*_{1} − *x*º_{1}), ... *m*_{r} = (*x*_{r} − *x*º_{r})/(*x*_{1} − *x*º_{1}), we obtain the solutions of the original system of partial equations previously denoted by ω_{r+1}, ... ω_{n}. It is to be remarked, however, that the presence of the fixed parameters *m*_{2}, ... *m*_{r} in the single integration may frequently render it more difficult than if they were assigned numerical quantities.
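Mayer's method can be followed on a small example. For *dz* = *a* *dx* + *b* *dy* with the illustrative choice *a* = *y*, *b* = *x* (which satisfies the condition of integrability), integrating *dz*/*dt* = *a* + *bm* along the rays *x* = *x*_{0} + *t*, *y* = *y*_{0} + *mt* recovers the surface *z* = *xy* through the origin.

```python
# Integrate dz/dt = a + b*m with m held fixed along a ray from (x0, y0).
def z_along_ray(x0, y0, z0, m, t_end, steps=2000):
    t, z, dt = 0.0, z0, t_end / steps
    for _ in range(steps):          # midpoint rule; exact here, since
        tm = t + dt/2               # a + b*m is linear in t
        x, y = x0 + tm, y0 + m*tm
        z += dt * (y + x*m)         # a = y, b = x
        t += dt
    return z

# reach (x, y) = (1.5, 0.75) from (0, 0): t = 1.5, m = (y - y0)/(x - x0) = 0.5
z = z_along_ray(0.0, 0.0, 0.0, m=0.5, t_end=1.5)
print(z)   # 1.125 = 1.5 * 0.75, i.e. the surface z = xy
```

Replacing *m* by (*y* − *y*_{0})/(*x* − *x*_{0}) after the one-parameter integration, as the text prescribes, gives the surface for every direction at once.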

We have above considered the integration of an equation

*dz* = *adx* + *bdy*

on the hypothesis that the condition

*da*/*dy* + *bda*/*dz* = *db*/*dx* + *adb*/*dz* is satisfied.

It is natural to inquire what relations among *x*, *y*, *z*, if any, Pfaffian Expressions. are implied by, or are consistent with, a differential relation *adx* + *bdy* + *cdz* = 0, when *a*, *b*, *c* are unrestricted functions of *x*, *y*, *z*. This problem leads to the consideration of the so-called *Pfaffian Expression* *adx* + *bdy* + *cdz*. It can be shown (1) that if each of the quantities *db*/*dz* − *dc*/*dy*, *dc*/*dx* − *da*/*dz*, *da*/*dy* − *db*/*dx*, which we shall denote respectively by *u*_{23}, *u*_{31}, *u*_{12}, be identically zero, the expression is the differential of a function of *x*, *y*, *z*, equal to *dt* say; (2) that if the quantity *au*_{23} + *bu*_{31} + *cu*_{12} is identically zero, the expression is of the form *udt*, *i.e.* it can be made a perfect differential by multiplication by the factor 1/*u*; (3) that in general the expression is of the form *dt* + *u*_{1}*dt*_{1}. Consider the matrix of four rows and three columns, in which the elements of the first row are *a*, *b*, *c*, and the elements of the (*r* + 1)-th row, for *r* = 1, 2, 3, are the quantities *u*_{r1}, *u*_{r2}, *u*_{r3}, where *u*_{11} = *u*_{22} = *u*_{33} = 0. Then it is easily seen that the cases (1), (2), (3) above correspond respectively to the cases when (1) every determinant of this matrix of two rows and columns is zero, (2) every determinant of three rows and columns is zero, (3) no condition is assumed. This result can be generalized as follows: if *a*_{1}, ... *a*_{n} be any functions of *x*_{1}, ... *x*_{n}, the so-called Pfaffian expression *a*_{1}*dx*_{1} + ... + *a*_{n}*dx*_{n} can be reduced to one or other of the two forms

*u*_{1}*dt*_{1} + ... + *u*_{k}*dt*_{k}, *dt* + *u*_{1}*dt*_{1} + ... + *u*_{k-1}*dt*_{k-1},

wherein *t*, *u*_{1}, ..., *t*_{1}, ... are independent functions of *x*_{1}, ... *x*_{n}, and *k* is such that in these two cases respectively 2*k* or 2*k* − 1 is the rank of a certain matrix of *n* + 1 rows and *n* columns, that is, the greatest number of rows and columns in a non-vanishing determinant of the matrix; the matrix is that whose first row is constituted by the quantities *a*_{1}, ... *a*_{n}, whose *s*-th element in the (*r* + 1)-th row is the quantity *da*_{r}/*dx*_{s} − *da*_{s}/*dx*_{r}. The proof of such a reduced form can be obtained from the two results: (1) If *t* be any given function of the 2*m* independent variables *u*_{1}, ... *u*_{m}, *t*_{1}, ... *t*_{m}, the expression *dt* + *u*_{1}*dt*_{1} + ... + *u*_{m}*dt*_{m} can be put into the form *u*′_{1}*dt*′_{1} + ... + *u*′_{m}*dt*′_{m}. (2) If the quantities *u*_{1}, ... *u*_{m}, *t*_{1}, ... *t*_{m} be connected by a relation, the expression *u*_{1}*dt*_{1} + ... + *u*_{m}*dt*_{m} can be put into the form *dt*′ + *u*′_{1}*dt*′_{1} + ... + *u*′_{m-1}*dt*′_{m-1}; and if the relation connecting *u*_{1}, ... *u*_{m}, *t*_{1}, ... *t*_{m} be homogeneous in *u*_{1}, ... *u*_{m}, then *t*′ can be taken to be zero. These two results are deductions from the theory of *contact transformations* (see below), and their demonstration requires, beside elementary algebraical considerations, only the theory of complete systems of linear homogeneous partial differential equations of the first order. When the existence of the reduced form of the Pfaffian expression containing only independent quantities is thus once assured, the identification of the number *k* with that defined by the specified matrix may, with some difficulty, be made *a posteriori*.
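The threefold classification of *adx* + *bdy* + *cdz* described above can be tested numerically; the coefficient functions below are arbitrary illustrative choices, and the quantities *u*_{23}, *u*_{31}, *u*_{12} are formed by central differences.

```python
h = 1e-5
def dnum(f, i, p):                  # central-difference df/dx_i
    q = list(p); q[i] += h; up = f(*q)
    q = list(p); q[i] -= h; dn = f(*q)
    return (up - dn) / (2*h)

def classify(a, b, c, p):
    u23 = dnum(b, 2, p) - dnum(c, 1, p)
    u31 = dnum(c, 0, p) - dnum(a, 2, p)
    u12 = dnum(a, 1, p) - dnum(b, 0, p)
    return u23, u31, u12, a(*p)*u23 + b(*p)*u31 + c(*p)*u12

pt = (0.7, 1.3, -0.4)
# (1) y dx + x dy + dz = d(xy + z): all three u's vanish, an exact dt
case1 = classify(lambda x, y, z: y, lambda x, y, z: x, lambda x, y, z: 1.0, pt)
# (2) -y dx + x dy = x²·d(y/x): u12 = -2, yet a·u23 + b·u31 + c·u12 = 0,
#     so an integrating factor (here 1/x²) exists -- the form u dt
case2 = classify(lambda x, y, z: -y, lambda x, y, z: x, lambda x, y, z: 0.0, pt)
# (3) dz - y dx: a·u23 + b·u31 + c·u12 = -1, the general form dt + u1 dt1
case3 = classify(lambda x, y, z: -y, lambda x, y, z: 0.0, lambda x, y, z: 1.0, pt)
print(case1, case2, case3)
```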

In all cases of a single Pfaffian equation we are thus led to consider what is implied by a relation *dt* − *u*_{1}*dt*_{1} − ... − *u*_{m}*dt*_{m} = 0, in which *t*, *u*_{1}, ... *u*_{m}, *t*_{1}, ... *t*_{m} are, except for this equation, independent variables. This is to be satisfied in virtue of Single linear Pfaffian equation. one or several relations connecting the variables; these must involve relations connecting *t*, *t*_{1}, ... *t*_{m} only, and in one of these at least *t* must actually enter. We can then suppose that in one actual system of relations in virtue of which the Pfaffian equation is satisfied, all the relations connecting *t*, *t*_{1} ... *t*_{m} only are given by

*t* = ψ(*t*_{s+1} ... *t*_{m}), *t*_{1} = ψ_{1}(*t*_{s+1} ... *t*_{m}), ... *t*_{s} = ψ_{s}(*t*_{s+1} ... *t*_{m});

so that the equation

*d*ψ − *u*_{1}*d*ψ_{1} − ... − *u*_{s}*d*ψ_{s} − *u*_{s+1}*dt*_{s+1} − ... − *u*_{m}*dt*_{m} = 0

is identically true in regard to *u*_{1}, ... *u*_{m}, *t*_{s+1}, ... *t*_{m}; equating to zero the coefficients of the differentials of these variables, we thus obtain *m* − *s* relations of the form

*d*ψ/*dt*_{j} − *u*_{1}*d*ψ_{1}/*dt*_{j} − ... − *u*_{s}*d*ψ_{s}/*dt*_{j} − *u*_{j} = 0;

these *m* − *s* relations, with the previous *s* + 1 relations, constitute a set of *m* + 1 relations connecting the 2*m* + 1 variables in virtue of which the Pfaffian equation is satisfied independently of the form of the functions ψ, ψ_{1}, ... ψ_{s}. There is clearly such a set for each of the values *s* = 0, *s* = 1, ..., *s* = *m* − 1, *s* = *m*. And for any value of *s* there may exist relations additional to the specified *m* + 1 relations, provided they do not involve any relation connecting *t*, *t*_{1}, ... *t*_{m} only, and are consistent with the *m* − *s* relations connecting *u*_{1}, ... *u*_{m}. It is now evident that, essentially, the integration of a Pfaffian equation

*a*_{1}*dx*_{1} + ... + *a*_{n}*dx*_{n} = 0,

wherein *a*_{1}, ... *a*_{n} are functions of *x*_{1}, ... *x*_{n}, is effected by the processes necessary to bring it to its reduced form, involving only independent variables. And it is easy to see that if we suppose this reduction to be carried out in all possible ways, there is no need to distinguish the classes of integrals corresponding to the various values of *s*; for it can be verified without difficulty that by putting *t*′ = *t* − *u*_{1}*t*_{1} − ... − *u*_{s}*t*_{s}, *t*′_{1} = *u*_{1}, ... *t*′_{s} = *u*_{s}, *u*′_{1} = −*t*_{1}, ..., *u*′_{s} = −*t*_{s}, *t*′_{s+1} = *t*_{s+1}, ... *t*′_{m} = *t*_{m}, *u*′_{s+1} = *u*_{s+1}, ... *u*′_{m} = *u*_{m}, the reduced equation becomes changed to *dt*′ − *u*′_{1}*dt*′_{1} − ... − *u*′_{m}*dt*′_{m} = 0, and the general relations changed to

*t*′ = ψ(*t*′_{s+1}, ... *t*′_{m}) − *t*′_{1}ψ_{1}(*t*′_{s+1}, ... *t*′_{m}) − ... − *t*′_{s}ψ_{s}(*t*′_{s+1}, ... *t*′_{m}) = φ,

say, together with *u*′_{1} = *d*φ/*dt*′_{1}, ..., *u*′_{m} = *d*φ/*dt*′_{m}, which involve only one relation connecting the variables *t*′, *t*′_{1}, ... *t*′_{m}.
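The case *s* = 0 of these relations, with *m* = 2, is easily checked numerically: taking *t* = ψ(*t*_{1}, *t*_{2}) and *u*_{j} = *d*ψ/*dt*_{j}, the expression *dt* − *u*_{1}*dt*_{1} − *u*_{2}*dt*_{2} vanishes to second order along any small displacement. The function ψ below is an arbitrary illustrative choice.

```python
import math

psi = lambda t1, t2: math.sin(t1) * t2      # t = ψ(t1, t2)
h = 1e-6

def grad(f, t1, t2):                        # (dψ/dt1, dψ/dt2) by differences
    return ((f(t1 + h, t2) - f(t1 - h, t2)) / (2*h),
            (f(t1, t2 + h) - f(t1, t2 - h)) / (2*h))

t1, t2, eps = 0.3, 0.8, 1e-4
dt1, dt2 = 2.0*eps, -1.0*eps                # an arbitrary small displacement
u1, u2 = grad(psi, t1, t2)
resid = (psi(t1 + dt1, t2 + dt2) - psi(t1, t2)) - u1*dt1 - u2*dt2
print(resid)                                # ~0, second order in eps
```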

This method for a single Pfaffian equation can, strictly speaking, be generalized to a simultaneous system of (*n* − *r*) Pfaffian equations *dx*_{j} = *c*_{1j}*dx*_{1} + ... + *c*_{rj}*dx*_{r} only in the case already treated, Simultaneous Pfaffian equations. when this system is satisfied by regarding *x*_{r+1}, ... *x*_{n} as suitable functions of the independent variables *x*_{1}, ... *x*_{r}; in that case the integral manifolds are of *r* dimensions. When these are non-existent, there may be integral manifolds of higher dimensions; for if

*d*φ = φ_{1}*dx*_{1} + ... + φ_{r}*dx*_{r} + φ_{r+1}(*c*_{1,r+1}*dx*_{1} + ... + *c*_{r,r+1}*dx*_{r}) + φ_{r+2}( ) + ...

be identically zero, then φ_{σ} + *c*_{σ,r+1}φ_{r+1} + ... + *c*_{σn}φ_{n} = 0, or φ satisfies the *r* partial differential equations previously associated with the total equations; when these are not a complete system, but included in a complete system of *r* + μ equations, having therefore *n* − *r* − μ independent integrals, the total equations are satisfied over a manifold of *r* + μ dimensions (see E. v. Weber, *Math. Annal.* lv. (1901), p. 386).

It seems desirable to add here certain results, largely of algebraic character, which naturally arise in connexion with the theory of contact transformations. For any two functions of the 2*n* Contact transformations. independent variables *x*_{1}, ... *x*_{n}, *p*_{1}, ... *p*_{n} we denote by (φψ) the sum of the *n* terms such as (*d*φ/*dp*_{i})(*d*ψ/*dx*_{i}) − (*d*ψ/*dp*_{i})(*d*φ/*dx*_{i}). For two functions of the (2*n* + 1) independent variables *z*, *x*_{1}, ... *x*_{n}, *p*_{1}, ... *p*_{n} we denote by [φψ] the sum of the *n* terms such as
(*d*φ/*dp*_{i})(*d*ψ/*dx*_{i} + *p*_{i}*d*ψ/*dz*) − (*d*ψ/*dp*_{i})(*d*φ/*dx*_{i} + *p*_{i}*d*φ/*dz*).
It can at once be verified that for any three functions [ƒ[φψ]] + [φ[ψƒ]] + [ψ[ƒφ]] = *d*ƒ/*dz* [φψ] + *d*φ/*dz* [ψƒ] + *d*ψ/*dz* [ƒφ], which when ƒ, φ, ψ do not contain *z* becomes the identity (ƒ(φψ)) + (φ(ψƒ)) + (ψ(ƒφ)) = 0. Then, if X_{1}, ... X_{n}, P_{1}, ... P_{n} be such functions of *x*_{1}, ... *x*_{n}, *p*_{1}, ... *p*_{n} that P_{1}*d*X_{1} + ... + P_{n}*d*X_{n} is identically equal to *p*_{1}*dx*_{1} + ... + *p*_{n}*dx*_{n}, it can be shown by elementary algebra, after equating coefficients of independent differentials, (1) that the functions X_{1}, ... P_{n} are independent functions of the 2*n* variables *x*_{1}, ... *p*_{n}, so that the equations *x*′_{i} = X_{i}, *p*′_{i} = P_{i} can be solved for *x*_{1}, ... *x*_{n}, *p*_{1}, ... *p*_{n}, and represent therefore a transformation, which we call a homogeneous contact transformation; (2) that the X_{1}, ... X_{n} are homogeneous functions of *p*_{1}, ... *p*_{n} of zero dimensions, the P_{1}, ... P_{n} are homogeneous functions of *p*_{1}, ... *p*_{n} of dimension one, and the ½*n*(*n* − 1) relations (X_{i}X_{j}) = 0 are verified. So also are the *n*² relations (P_{i}X_{i}) = 1, (P_{i}X_{j}) = 0, (P_{i}P_{j}) = 0. Conversely, if X_{1}, ... X_{n} be independent functions, each homogeneous of zero dimension in *p*_{1}, ... *p*_{n}, satisfying the ½*n*(*n* − 1) relations (X_{i}X_{j}) = 0, then P_{1}, ... P_{n} can be uniquely determined, by solving linear algebraic equations, such that P_{1}*d*X_{1} + ... + P_{n}*d*X_{n} = *p*_{1}*dx*_{1} + ... + *p*_{n}*dx*_{n}. If now we put *n* + 1 for *n*, put *z* for *x*_{n+1}, Z for X_{n+1}, Q_{i} for −P_{i}/P_{n+1}, for *i* = 1, ... *n*, put *q*_{i} for −*p*_{i}/*p*_{n+1} and σ for *p*_{n+1}/P_{n+1}, and then finally write P_{1}, ... P_{n}, *p*_{1}, ... *p*_{n} for Q_{1}, ... Q_{n}, *q*_{1}, ... *q*_{n}, we obtain the following results: If Z, X_{1}, ... X_{n}, P_{1}, ... P_{n} be functions of *z*, *x*_{1}, ... *x*_{n}, *p*_{1}, ... 
*p*_{n}, such that the expression *d*Z − P_{1}*d*X_{1} − ... − P_{n}*d*X_{n} is identically equal to σ(*dz* − *p*_{1}*dx*_{1} − ... − *p*_{n}*dx*_{n}), and σ not zero, then (1) the functions Z, X_{1}, ... X_{n}, P_{1}, ... P_{n} are independent functions of *z*, *x*_{1}, ... *x*_{n}, *p*_{1}, ... *p*_{n}, so that the equations *z*′ = Z, *x*′_{i} = X_{i}, *p*′_{i} = P_{i} can be solved for *z*, *x*_{1}, ... *x*_{n}, *p*_{1}, ... *p*_{n} and determine a transformation which we call a (non-homogeneous) contact transformation; (2) the Z, X_{1}, ... X_{n} verify the ½*n*(*n* + 1)
identities [ZX_{i}] = 0, [X_{i}X_{j}] = 0. And the further identities

are also verified. Conversely, if Z, X_{1}, ... X_{n} be independent functions satisfying the identities [ZX_{i}] = 0, [X_{i}X_{j}] = 0, then σ, other than zero, and P_{1}, ... P_{n} can be uniquely determined, by solution of algebraic equations, such that

*d*Z − P_{1}*d*X_{1} − ... − P_{n}*d*X_{n} = σ(*dz* − *p*_{1}*dx*_{1} − ... − *p*_{n}*dx*_{n}).
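A small numerical check of this defining identity, for *n* = 1 and the Legendre-like map Z = *xp* − *z*, X = *p*, P = *x* (an illustrative example, for which σ = −1):

```python
# Verify dZ - P dX = σ(dz - p dx) along a small arbitrary displacement.
def transform(z, x, p):
    return x*p - z, p, x          # Z, X, P

z, x, p = 0.5, 1.2, -0.7
dz, dx, dp = 3e-6, -2e-6, 5e-6    # an arbitrary small displacement

Z0, X0, P0 = transform(z, x, p)
Z1, X1, P1 = transform(z + dz, x + dx, p + dp)
lhs = (Z1 - Z0) - P0*(X1 - X0)    # dZ - P dX
rhs = -1.0*(dz - p*dx)            # σ = -1
err = lhs - rhs
print(err)                        # ~0, second order in the displacement
```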

Finally, there is a particular case of great importance arising when σ = 1, which gives the results: (1) If U, X_{1}, ... X_{n}, P_{1}, ... P_{n} be 2*n* + 1 functions of the 2*n* independent variables *x*_{1}, ... *x*_{n}, *p*_{1}, ... *p*_{n}, satisfying the identity

*d*U + P_{1}*d*X_{1} + ... + P_{n}*d*X_{n} = *p*_{1}*dx*_{1} + ... + *p*_{n}*dx*_{n},

then the 2*n* functions P_{1}, ... P_{n}, X_{1}, ... X_{n} are independent, and we have

(X_{i}X_{j}) = 0, (X_{i}U) = δX_{i}, (P_{i}X_{i}) = 1, (P_{i}X_{j}) = 0, (P_{i}P_{j}) = 0, (P_{i}U) + P_{i} = δP_{i},

where δ denotes the operator *p*_{1}*d*/*dp*_{1} + ... + *p*_{n}*d*/*dp*_{n}; (2) If X_{1}, ... X_{n} be independent functions of *x*_{1}, ... *x*_{n}, *p*_{1}, ... *p*_{n}, such that (X_{i}X_{j}) = 0, then U can be found by a quadrature, such that

(X_{i}U) = δX_{i};

and when X_{1}, ... X_{n}, U satisfy these ½*n*(*n* + 1) conditions, then P_{1}, ... P_{n} can be found, by solution of linear algebraic equations, to render true the identity *d*U + P_{1}*d*X_{1} + ... + P_{n}*d*X_{n} = *p*_{1}*dx*_{1} + ... + *p*_{n}*dx*_{n}; (3) Functions X_{1}, ... X_{n}, P_{1}, ... P_{n} can be found to satisfy this differential identity when U is an arbitrary given function of *x*_{1}, ... *x*_{n}, *p*_{1}, ... *p*_{n}; but this requires integrations. In order to see what integrations, it is only necessary to verify the statement that if U be an arbitrary given function of *x*_{1}, ... *x*_{n}, *p*_{1}, ... *p*_{n}, and, for *r* < *n*, X_{1}, ... X_{r} be independent functions of these variables, such that (X_{σ}U) = δX_{σ}, (X_{ρ}X_{σ}) = 0, for ρ, σ = 1, ... *r*, then the *r* + 1 homogeneous linear partial differential equations of the first order (Uƒ) + δƒ = 0, (X_{ρ}ƒ) = 0, form a complete system. It will be seen that the assumptions above made for the reduction of Pfaffian expressions follow from the results here enunciated for contact transformations.
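The identity (ƒ(φψ)) + (φ(ψƒ)) + (ψ(ƒφ)) = 0 used in this theory can be verified exactly for sample polynomials; the sketch below implements the bracket for *n* = 1 with exact polynomial arithmetic, the particular polynomials being arbitrary illustrative choices.

```python
# Polynomials in (x, p) are held as {(i, j): coeff} dicts, x^i p^j.
def poly_add(u, v):
    w = dict(u)
    for k, c in v.items():
        w[k] = w.get(k, 0) + c
    return {k: c for k, c in w.items() if c != 0}

def poly_mul(u, v):
    w = {}
    for (i, j), c in u.items():
        for (a, b), d in v.items():
            k = (i + a, j + b)
            w[k] = w.get(k, 0) + c*d
    return {k: c for k, c in w.items() if c != 0}

def dx(u): return {(i - 1, j): c*i for (i, j), c in u.items() if i}
def dp(u): return {(i, j - 1): c*j for (i, j), c in u.items() if j}

def br(u, v):        # (uv) = du/dp · dv/dx − dv/dp · du/dx
    return poly_add(poly_mul(dp(u), dx(v)),
                    {k: -c for k, c in poly_mul(dp(v), dx(u)).items()})

f   = {(2, 1): 1}                   # x²p
phi = {(1, 2): 3, (0, 1): 1}        # 3xp² + p
psi = {(3, 0): 2, (1, 1): -1}       # 2x³ − xp

total = poly_add(poly_add(br(f, br(phi, psi)),
                          br(phi, br(psi, f))),
                 br(psi, br(f, phi)))
print(total)   # {} -- the identity holds exactly
```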

We pass on now to consider the solution of any partial differential equation of the first order; we attempt to explain certain ideas relatively to a single equation with any number of independent variables (in particular, an Partial differential equation of the first order. ordinary equation of the first order with one independent variable) by speaking of a single equation with two independent variables *x*, *y*, and one dependent variable *z*. It will be seen that we are naturally led to consider systems of such simultaneous equations, which we consider below. The central discovery of the transformation theory of the solution of an equation F(*x*, *y*, *z*, *dz*/*dx*, *dz*/*dy*) = 0 is that its solution can always be reduced to the solution of partial equations which are *linear*. For this, however, we must regard *dz*/*dx*, *dz*/*dy*, during the process of integration, not as the differential coefficients of a function *z* in regard to *x* and *y*, but as variables independent of *x*, *y*, *z*, the too great indefiniteness that might thus appear to be introduced being provided for in another way. We notice that if *z* = ψ(*x*, *y*) be a solution of the differential equation, then *dz* = *dxd*ψ/*dx* + *dyd*ψ/*dy*; thus if we denote the equation by F(*x*, *y*, *z*, *p*, *q*) = 0, and prescribe the condition *dz* = *pdx* + *qdy* for every solution, any solution such as *z* = ψ(*x*, *y*) will necessarily be associated with the equations *p* = *dz*/*dx*, *q* = *dz*/*dy*, and *z* will satisfy the equation in its original form. We have previously seen (under *Pfaffian Expressions*) that if five variables *x*, *y*, *z*, *p*, *q*, otherwise independent, be subject to *dz* − *pdx* − *qdy* = 0, they must in fact be subject to at least three mutual relations. If we associate with a point (*x*, *y*, *z*) the plane

Z − *z* = *p*(X − *x*) + *q*(Y − *y*)

passing through it, where X, Y, Z are current co-ordinates, and call this association a surface-element; and if two consecutive elements of which the point (*x* + *dx*, *y* + *dy*, *z* + *dz*) of one lies on the plane of the other, for which, that is, the condition *dz* = *pdx* + *qdy* is satisfied, be said to be *connected*, and an infinity of connected elements following one another continuously be called a *connectivity*, then our statement is that a connectivity consists of not more than ∞² elements, the whole number of elements (*x*, *y*, *z*, *p*, *q*) that are possible being called ∞^{5}. The solution of an equation F(*x*, *y*, *z*, *dz*/*dx*, *dz*/*dy*) = 0 is then to be understood to mean finding in all possible ways, from the ∞^{4} elements (*x*, *y*, *z*, *p*, *q*) which satisfy F(*x*, *y*, *z*, *p*, *q*) = 0, a set of ∞² elements forming a connectivity; or, more analytically, finding in all possible ways two relations G = 0, H = 0 connecting *x*, *y*, *z*, *p*, *q* and independent of F = 0, so that the three relations together may involve

*dz* = *pdx* + *qdy*.

Such a set of three relations may, for example, be of the form *z* = ψ(*x*, *y*), *p* = *d*ψ/*dx*, *q* = *d*ψ/*dy*; but it may also, as another case, involve two relations *z* = ψ(*y*), *x* = ψ_{1}(*y*) connecting *x*, *y*, *z*, the third relation being

ψ′(*y*) = *p*ψ′_{1}(*y*) + *q*,

the connectivity consisting in that case, geometrically, of a curve in space taken with ∞¹ of its tangent planes; or, finally, a connectivity is constituted by a fixed point and all the planes passing through that point. This generalized view of the meaning of a solution of F = 0 is of advantage, moreover, in view of anomalies otherwise arising from special forms of the equation Meaning of a solution of the equation. itself. For instance, we may include the case, sometimes arising when the equation to be solved is obtained by transformation from another equation, in which F does not contain either *p* or *q*. Then the equation has ∞² solutions, each consisting of an arbitrary point of the surface F = 0 and all the ∞² planes passing through this point; it also has ∞² solutions, each consisting of a curve drawn on the surface F = 0 and all the tangent planes of this curve, the whole consisting of ∞² elements; finally, it has also an isolated (or singular) solution consisting of the points of the surface, each associated with the tangent plane of the surface thereat, also ∞² elements in all. Or again, a linear equation F = P*p* + Q*q* − R = 0, wherein P, Q, R are functions of *x*, *y*, *z* only, has ∞² solutions, each consisting of one of the curves defined by

*dx*/P = *dy*/Q = *dz*/R

taken with all the tangent planes of this curve; and the same equation has ∞² solutions, each consisting of the points of a surface containing ∞¹ of these curves and the tangent planes of this surface. And for the case of *n* variables there is similarly the possibility of *n* + 1 kinds of solution of an equation F(*x*_{1}, ... *x*_{n}, *z*, *p*_{1}, ... *p*_{n}) = 0; these can, however, by a simple contact transformation be reduced to one kind, in which there is only one relation *z*′ = ψ(*x*′_{1}, ... *x*′_{n}) connecting the new variables *x*′_{1}, ... *x*′_{n}, *z*′ (see under Pfaffian Expressions); just as in the case of the solution

*z* = ψ(*y*), *x* = ψ_{1}(*y*), ψ′(*y*) = *p*ψ′_{1}(*y*) + *q*

of the equation P*p* + Q*q* = R the transformation *z*′ = *z* − *px*, *x*′ = *p*, *p*′ = −*x*, *y*′ = *y*, *q*′ = *q* gives the solution

*z*′ = ψ(*y*′) + *x*′ψ_{1}(*y*′), *p*′ = *dz*′/*dx*′, *q*′ = *dz*′/*dy*′

of the transformed equation. These explanations take no account of the possibility of *p* and *q* being infinite; this can be dealt with by writing *p* = −*u*/*w*, *q* = −*v*/*w*, and considering homogeneous equations in *u*, *v*, *w*, with *udx* + *vdy* + *wdz* = 0 as the differential relation necessary for a connectivity; in practice we use the ideas associated with such a procedure more often without the appropriate notation.
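
The linear case P*p* + Q*q* = R described above lends itself to a direct numerical check. The following sketch (Python, in modern terms; the choice P = *x*, Q = *y*, R = *z*, the step sizes, and all names are illustrative, not the article's) integrates the curves *dx*/P = *dy*/Q = *dz*/R and verifies that the ratios *y*/*x* and *z*/*x* are constant along each curve, so that every surface *z* = *x*ƒ(*y*/*x*), for arbitrary ƒ, is woven out of such curves:

```python
# Characteristics of the linear equation P*p + Q*q = R, here with the
# illustrative choice P = x, Q = y, R = z: the curves
# dx/P = dy/Q = dz/R.  Along any such curve, y/x and z/x are first
# integrals and should stay constant.

def rk4_step(f, state, h):
    """One Runge-Kutta step for dstate/dt = f(state)."""
    k1 = f(state)
    k2 = f([s + 0.5 * h * k for s, k in zip(state, k1)])
    k3 = f([s + 0.5 * h * k for s, k in zip(state, k2)])
    k4 = f([s + h * k for s, k in zip(state, k3)])
    return [s + h * (a + 2*b + 2*c + d) / 6
            for s, a, b, c, d in zip(state, k1, k2, k3, k4)]

def characteristic(state):
    x, y, z = state
    return [x, y, z]          # dx/dt = P, dy/dt = Q, dz/dt = R

state = [1.0, 2.0, 3.0]       # an arbitrary starting point
for _ in range(100):
    state = rk4_step(characteristic, state, 0.01)

x, y, z = state
print(y / x, z / x)           # the first integrals stay at 2.0 and 3.0
```

The ∞¹ points of one such curve carry the ∞¹ tangent planes referred to in the text; a surface containing ∞¹ of these curves supplies the other kind of solution.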

In utilizing these general notions we shall first consider the theory of characteristic chains, initiated by Cauchy, which shows well the nature of the relations implied by the given differential equation; the alternative ways of carrying Order of the ideas. out the necessary integrations are suggested by considering the method of Jacobi and Mayer, while a good summary is obtained by the formulation in terms of a Pfaffian expression.

Consider a solution of F = 0 expressed by the three independent equations F = 0, G = 0, H = 0. If it be a solution in which there is more than one relation connecting *x*, *y*, *z*, let new variables *x*′, *y*′, *z*′, *p*′, *q*′ be introduced, as before explained under Pfaffian Expressions, Characteristic chains. in which *z*′ is of the form

*z*′ = *z* − *p*_{1}*x*_{1} − ... − *p*_{s}*x*_{s} (*s* = 1 or 2),

so that the solution becomes of a form *z*′ = ψ(*x*′, *y*′), *p*′ = *d*ψ/*dx*′, *q*′ = *d*ψ/*dy*′, which then will identically satisfy the transformed equations F′ = 0, G′ = 0, H′ = 0. The equation F′ = 0, if *x*′, *y*′, *z*′ be regarded as fixed, states that the plane Z − *z*′ = *p*′(X − *x*′) + *q*′(Y − *y*′) is tangent to a certain cone whose vertex is (*x*′, *y*′, *z*′), the consecutive point (*x*′ + *dx*′, *y*′ + *dy*′, *z*′ + *dz*′) of the generator of contact being such that

*dx*′ / (*d*F′/*dp*′) = *dy*′ / (*d*F′/*dq*′) = *dz*′ / (*p*′ *d*F′/*dp*′ + *q*′ *d*F′/*dq*′).

Passing in this direction on the surface *z*′ = ψ(*x*′, *y*′) the tangent
plane of the surface at this consecutive point is (*p*′ + *dp*′, *q*′ + *dq*′), where, since F′(*x*′, *y*′, ψ, *d*ψ/*dx*′, *d*ψ/*dy*′) = 0 is identical, we have *dx*′ (*d*F′/*dx*′ + *p*′ *d*F′/*dz*′) + *dp*′ *d*F′/*dp*′ = 0. Thus the equations, which we shall call the characteristic equations,

*dx*′ / (*d*F′/*dp*′) = *dy*′ / (*d*F′/*dq*′) = *dz*′ / (*p*′ *d*F′/*dp*′ + *q*′ *d*F′/*dq*′) = −*dp*′ / (*d*F′/*dx*′ + *p*′ *d*F′/*dz*′) = −*dq*′ / (*d*F′/*dy*′ + *q*′ *d*F′/*dz*′)

are satisfied along a connectivity of ∞¹ elements consisting of a curve on *z*′ = ψ(*x*′, *y*′) and the tangent planes of the surface along this curve. The equation F′ = 0, when *p*′, *q*′ are fixed, represents a curve in the plane Z − *z*′ = *p*′(X − *x*′) + *q*′(Y − *y*′) passing through (*x*′, *y*′, *z*′); if (*x*′ + δ*x*′, *y*′ + δ*y*′, *z*′ + δ*z*′) be a consecutive point of this curve, we find at once

δ*x*′ (*d*F′/*dx*′ + *p*′ *d*F′/*dz*′) + δ*y*′ (*d*F′/*dy*′ + *q*′ *d*F′/*dz*′) = 0;

thus the equations above give δ*x*′*dp*′ + δ*y*′*dq*′ = 0; that is, the tangent line of the plane curve is, on the surface *z*′ = ψ(*x*′, *y*′), in a direction conjugate to that of the generator of the cone. Putting each of the fractions in the characteristic equations equal to *dt*, the equations enable us, starting from an arbitrary element *x*′_{0}, *y*′_{0}, *z*′_{0}, *p*′_{0}, *q*′_{0}, about which all the quantities F′, *d*F′/*dp*′, &c., occurring in the denominators, are developable, to define, from the differential equation F′ = 0 alone, a connectivity of ∞¹ elements, which we call a *characteristic chain*; and it is remarkable that when we transform again to the original variables (*x*, *y*, *z*, *p*, *q*), the form of the differential equations for the chain is unaltered, so that they can be written down at once from the equation F = 0. Thus we have proved that the characteristic chain starting from any ordinary element of any integral of this equation F = 0 consists only of elements belonging to this integral. For instance, if the equation do not contain *p*, *q*, the characteristic chain, starting from an arbitrary plane through an arbitrary point of the surface F = 0, consists of a pencil of planes whose axis is a tangent line of the surface F = 0. Or if F = 0 be of the form P*p* + Q*q* = R, the chain consists of a curve satisfying *dx*/P = *dy*/Q = *dz*/R and a single infinity of tangent planes of this curve, determined by the tangent plane chosen at the initial point. In all cases there are ∞³ characteristic chains, whose aggregate may therefore be expected to exhaust the ∞^{4} elements satisfying F = 0.
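
That the characteristic equations can be written down at once from F = 0 and followed from an arbitrary element is easily verified numerically. A minimal sketch (Python; the concrete equation F = *pq* − 1 and all names are our illustrative choices, not the article's):

```python
# Characteristic equations written down directly from F = 0, for the
# illustrative equation F = p*q - 1:
#   dx/dt = dF/dp = q,      dy/dt = dF/dq = p,
#   dz/dt = p*dF/dp + q*dF/dq = 2*p*q,
#   dp/dt = -(dF/dx + p*dF/dz) = 0,
#   dq/dt = -(dF/dy + q*dF/dz) = 0.
# Since p and q are constant along the chain here, Euler steps are exact.

def chain(element, h, steps):
    """Follow the characteristic chain from the element (x, y, z, p, q)."""
    x, y, z, p, q = element
    for _ in range(steps):
        x += h * q
        y += h * p
        z += h * 2 * p * q
    return x, y, z, p, q

# start from an element satisfying F = 0, e.g. p = 2, q = 1/2
x, y, z, p, q = chain((0.0, 0.0, 0.0, 2.0, 0.5), 0.01, 100)

print(p * q - 1)              # F = 0 is preserved along the chain (~0)
print(z - (2 * x + 0.5 * y))  # the chain lies on the integral
                              # surface z = 2x + y/2 (~0)
```

The chain consists only of elements of the integral *z* = 2*x* + *y*/2 through its initial element, illustrating the statement just proved.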

Consider, in fact, a single infinity of connected elements each satisfying F = 0, say a chain connectivity T, consisting of elements specified by *x*_{0}, *y*_{0}, *z*_{0}, *p*_{0}, *q*_{0}, which we suppose expressed as Complete integral constructed with characteristic chains. functions of a parameter *u*, so that

U_{0} = *dz*_{0}/*du* − *p*_{0}*dx*_{0}/*du* − *q*_{0}*dy*_{0}/*du*

is everywhere zero on this chain; further, suppose that each of F, *d*F/*dp*, ... , *d*F/*dx* + *pd*F/*dz* is developable about each element of this chain T, and that T is *not* a characteristic chain. Then consider the aggregate of the characteristic chains issuing from all the elements of T. The ∞² elements, consisting of the aggregate of these characteristic chains, satisfy F = 0, provided the chain connectivity T consists of elements satisfying F = 0; for each characteristic chain satisfies *d*F = 0. It can be shown that these chains are connected; in other words, that if *x*, *y*, *z*, *p*, *q*, be any element of one of these characteristic chains, not only is

*dz*/*dt* − *pdx*/*dt* − *qdy*/*dt* = 0,

as we know, but also U = *dz*/*du* − *pdx*/*du* − *qdy*/*du* is zero. For we have

*d*U/*dt* = *dp*/*du* *dx*/*dt* + *dq*/*du* *dy*/*dt* − *dp*/*dt* *dx*/*du* − *dq*/*dt* *dy*/*du*,

which is equal to

(*d*F/*dp*) *dp*/*du* + (*d*F/*dq*) *dq*/*du* + (*d*F/*dx* + *p* *d*F/*dz*) *dx*/*du* + (*d*F/*dy* + *q* *d*F/*dz*) *dy*/*du*, that is, since *d*F/*du* = 0, to −U *d*F/*dz*.

As *d*F/*dz* is a developable function of *t*, this, giving

U = U_{0} exp ( −∫^{*t*}_{*t*_{0}} *d*F/*dz* *dt* ),

shows that U is everywhere zero. Thus integrals of F = 0 are obtainable by considering the aggregate of characteristic chains issuing from arbitrary chain connectivities T satisfying F = 0; and such connectivities T are, it is seen at once, determinable without integration. Conversely, as such a chain connectivity T can be taken out from the elements of any given integral, all possible integrals are obtainable in this way. For instance, an arbitrary curve in space, given by *x*_{0} = θ(*u*), *y*_{0} = φ(*u*), *z*_{0} = ψ(*u*), determines by the two equations F(*x*_{0}, *y*_{0}, *z*_{0}, *p*_{0}, *q*_{0}) = 0, ψ′(*u*) = *p*_{0}θ′(*u*) + *q*_{0}φ′(*u*), such a chain connectivity T, through which there passes a perfectly definite integral of the equation F = 0. By taking ∞² initial chain connectivities T, as for instance by taking the curves *x*_{0} = θ, *y*_{0} = φ, *z*_{0} = ψ to be the ∞² curves upon an arbitrary surface, we thus obtain ∞² integrals, and so ∞^{4} elements satisfying F = 0. In general, if functions G, H, independent of F, be obtained, such that the equations F = 0, G = *b*, H = *c* represent an integral for all values of the constants *b*, *c*, these equations are said to constitute a *complete integral*. Then ∞^{4} elements satisfying F = 0 are known, and in fact every other form of integral can be obtained without further integrations.
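
The construction of the integral through an arbitrary initial curve can be made concrete. In the sketch below (Python; the equation F = *p*² + *q*² − 1 and the unit-circle initial curve are our illustrative choices, not the article's), the two equations F = 0 and ψ′(*u*) = *p*_{0}θ′(*u*) + *q*_{0}φ′(*u*) determine *p*_{0}, *q*_{0} on the curve, and the characteristic chains issuing from its elements sweep out a definite integral surface:

```python
import math

# Initial curve: x0 = cos(u), y0 = sin(u), z0 = 0.  With
# F = p^2 + q^2 - 1, the strip condition
#   psi'(u) = p0*theta'(u) + q0*phi'(u)
# reads 0 = -p0*sin(u) + q0*cos(u), which with F = 0 gives
# p0 = cos(u), q0 = sin(u).  The characteristic chains
#   dx/dt = 2p, dy/dt = 2q, dz/dt = 2(p^2 + q^2), dp/dt = dq/dt = 0
# are then explicit, and sweep out the surface z = sqrt(x^2+y^2) - 1.

def integral_point(u, t):
    p0, q0 = math.cos(u), math.sin(u)
    x = math.cos(u) + 2 * t * p0
    y = math.sin(u) + 2 * t * q0
    z = 2 * t
    return x, y, z

for u in (0.3, 1.1, 2.9):
    for t in (0.0, 0.25, 0.7):
        x, y, z = integral_point(u, t)
        print(abs(z - (math.hypot(x, y) - 1)))   # ~0 at every element
```

Through the chain connectivity T furnished by this curve there passes, as the text says, a perfectly definite integral, here *z* = √(*x*² + *y*²) − 1.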

In the foregoing discussion of the differential equations of a characteristic chain, the denominators *d*F/*dp*, ... may be supposed to be modified in form by means of F = 0 in any way conducive to a simple integration. In the immediately following explanation of ideas, however, we consider indifferently all equations F = constant; when a function of *x*, *y*, *z*, *p*, *q* is said to be zero, it is meant that this is so identically, not in virtue of F = 0; in other words, we consider the integration of F = *a*, where *a* is an arbitrary constant. In the theory of linear partial equations we have seen that the integration Operations necessary for integration of F = *a*. of the equations of the characteristic chains, from which, as has just been seen, that of the equation F = *a* follows at once, would be involved in completely integrating the single linear homogeneous partial differential equation of the first order [Fƒ] = 0, where the notation is that explained above under Contact Transformations. One obvious integral is ƒ = F. Putting F = *a*, where *a* is arbitrary, and eliminating one of the independent variables, we can reduce this equation [Fƒ] = 0 to one in four variables; and so on. Calling, then, the determination of a single integral of a single homogeneous partial differential equation of the first order in *n* independent variables, *an operation of order* *n* − 1, the characteristic chains, and therefore the most general integral of F = *a*, can be obtained by successive operations of orders 3, 2, 1. If, however, an integral of F = *a* be represented by F = *a*, G = *b*, H = *c*, where *b* and *c* are arbitrary constants, the expression of the fact that a characteristic chain of F = *a* satisfies *d*G = 0 gives [FG] = 0; similarly, [FH] = 0 and [GH] = 0, these three relations being identically true. Conversely, suppose that an integral G, independent of F, has been obtained of the equation [Fƒ] = 0, which is an operation of order three.
Then it follows from the identity [ƒ[φψ]] + [φ[ψƒ]] + [ψ[ƒφ]] = *d*ƒ/*dz* [ψφ] + *d*φ/*dz* [ψƒ] + *d*ψ/*dz* [ƒφ] before remarked, by putting φ = F, ψ = G, and then [Fƒ] = A(ƒ), [Gƒ] = B(ƒ), that AB(ƒ) − BA(ƒ) = *d*F/*dz* B(ƒ) − *d*G/*dz* A(ƒ), so that the two linear equations [Fƒ] = 0, [Gƒ] = 0 form a complete system; as two integrals F, G are known, they have a common integral H, independent of F, G, determinable by an operation of order one only. The three functions F, G, H thus identically satisfy the relations [FG] = [GH] = [FH] = 0. The ∞² elements satisfying F = *a*, G = *b*, H = *c*, wherein *a*, *b*, *c* are assigned constants, can then be seen to constitute an integral of F = *a*. For the conditions that a characteristic chain of G = *b* issuing from an element satisfying F = *a*, G = *b*, H = *c* should consist only of elements satisfying these three equations are simply [FG] = 0, [GH] = 0. Thus, starting from an arbitrary element of (F = *a*, G = *b*, H = *c*), we can single out a connectivity of elements of (F = *a*, G = *b*, H = *c*) forming a characteristic chain of G = *b*; then the aggregate of the characteristic chains of F = *a* issuing from the elements of this characteristic chain of G = *b* will be a connectivity consisting only of elements of

(F = *a*, G = *b*, H = *c*),

and will therefore constitute an integral of F = *a*; further, it will include all elements of (F = *a*, G = *b*, H = *c*). This result follows also from a theorem given under *Contact Transformations*, which shows, moreover, that though the characteristic chains of F = *a* are not determined by the three equations F = *a*, G = *b*, H = *c*, no further integration is now necessary to find them. By this theorem, since identically [FG] = [GH] = [FH] = 0, we can find, by the solution of linear algebraic equations only, a non-vanishing function σ and two functions A, C, such that

*d*G − A*d*F − C*d*H = σ(*dz* − *pdx* − *qdy*);

thus all the elements satisfying F = *a*, G = *b*, H = *c* satisfy *dz* = *pdx* + *qdy* and constitute a connectivity, which is therefore an integral of F = *a*. Further, from the associated theorems, F, G, H, A, C are independent functions and [FC] = 0. Thus C may be taken to be the remaining integral independent of G, H, of the equation [Fƒ] = 0, whereby the characteristic chains are entirely determined.
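
The identical relations [FG] = [GH] = [FH] = 0 can be tested numerically for a concrete triple. The sketch below (Python, finite differences; the triple F = *pq* − 1, G = *p*, H = *z* − *px* − *qy* is our illustrative choice, whose simultaneous level sets F = 0, G = *a*, H = *b* recover the complete integral *z* = *ax* + *y*/*a* + *b* of *pq* = 1) codes the bracket in the form belonging to the characteristic chains of the text:

```python
# Finite-difference check of [FG] = [GH] = [FH] = 0 for functions of
# x, y, z, p, q, with the bracket
#   [FG] = F_p(G_x + p G_z) + F_q(G_y + q G_z)
#        - G_p(F_x + p F_z) - G_q(F_y + q F_z).

EPS = 1e-6

def partials(fn, e):
    """Numerical first partials of fn(x, y, z, p, q) at the element e."""
    grads = []
    for i in range(5):
        hi = list(e); lo = list(e)
        hi[i] += EPS; lo[i] -= EPS
        grads.append((fn(*hi) - fn(*lo)) / (2 * EPS))
    return grads   # [f_x, f_y, f_z, f_p, f_q]

def bracket(F, G, e):
    x, y, z, p, q = e
    Fx, Fy, Fz, Fp, Fq = partials(F, e)
    Gx, Gy, Gz, Gp, Gq = partials(G, e)
    return (Fp * (Gx + p * Gz) + Fq * (Gy + q * Gz)
            - Gp * (Fx + p * Fz) - Gq * (Fy + q * Fz))

F = lambda x, y, z, p, q: p * q - 1
G = lambda x, y, z, p, q: p
H = lambda x, y, z, p, q: z - p * x - q * y

e = (0.7, -1.3, 2.1, 2.0, 0.5)   # an arbitrary element
print(bracket(F, G, e), bracket(G, H, e), bracket(F, H, e))  # all ~0
```

With F = 0, G = *a*, H = *b* one finds *p* = *a*, *q* = 1/*a*, *z* = *ax* + *y*/*a* + *b*, the ∞² elements of an integral for each pair *a*, *b*.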

When we consider the particular equation F = 0, neglecting the case when neither *p* nor *q* enters, and supposing *p* to enter, we may express *p* from F = 0 in terms of *x*, *y*, *z*, *q*, and then eliminate it from all other equations. Then instead of the equation [Fƒ] = 0, we have, if F = 0 give *p* = ψ(*x*, *y*, *z*, *q*), the equation

*d*ƒ/*dx* − (*d*ψ/*dq*) *d*ƒ/*dy* + (ψ − *q* *d*ψ/*dq*) *d*ƒ/*dz* + (*d*ψ/*dy* + *q* *d*ψ/*dz*) *d*ƒ/*dq* = 0,

moreover obtainable by omitting the term in *d*ƒ/*dp* in [*p* − ψ, ƒ] = 0. Let *x*_{0}, *y*_{0}, *z*_{0}, *q*_{0} be values about which the coefficients in The single equation F = 0 and Pfaffian formulations. this equation are developable, and let ζ, η, ω be the principal solutions reducing respectively to *z*, *y* and *q* when *x* = *x*_{0}. Then the equations *p* = ψ, ζ = *z*_{0}, η = *y*_{0}, ω = *q*_{0} represent a characteristic chain issuing from the element *x*_{0}, *y*_{0}, *z*_{0}, ψ_{0}, *q*_{0}; we have seen that the aggregate of such chains issuing from the elements of an arbitrary chain satisfying

*dz*_{0} − *p*_{0}*dx*_{0} − *q*_{0}*dy*_{0} = 0

constitute an integral of the equation *p* = ψ. Let this arbitrary
chain be taken so that *x*_{0} is constant; then the condition for initial values is only

*dz*_{0} − *q*_{0}*dy*_{0} = 0,

and the elements of the integral constituted by the characteristic chains issuing therefrom satisfy

*d*ζ − ω*d*η = 0.

Hence this equation involves *dz* − ψ*dx* − *qdy* = 0, or we have

*dz* − ψ*dx* − *qdy* = σ(*d*ζ − ω*d*η),

where σ is not zero. Conversely, the integration of *p* = ψ is, essentially, the problem of writing the expression *dz* − ψ*dx* − *qdy* in the form σ(*d*ζ − ω*d*η), as must be possible (from what was said under Pfaffian Expressions).

To integrate a system of simultaneous equations of the first order X_{1} = *a*_{1}, ... X_{r} = *a*_{r} in *n* independent variables *x*_{1}, ... *x*_{n} and one dependent variable *z*, we write *p*_{1} for *dz*/*dx*_{1}, &c., System of equations of the first order. and attempt to find *n* + 1 − *r* further functions Z, X_{r+1}, ... X_{n}, such that the equations Z = *a*, X_{i} = *a*_{i} (*i* = 1, ... *n*) involve *dz* − *p*_{1}*dx*_{1} − ... − *p*_{n}*dx*_{n} = 0. By an argument already given, the common integral, if existent, must be satisfied by the equations of the characteristic chains of any one equation X_{i} = *a*_{i}; thus each of the expressions [X_{i}X_{j}] must vanish in virtue of the equations expressing the integral, and we may without loss of generality assume that each of the corresponding ½*r*(*r* − 1) expressions formed from the *r* given differential equations vanishes in virtue of these equations. The determination of the remaining *n* + 1 − *r* functions may, as before, be made to depend on characteristic chains, which in this case, however, are manifolds of *r* dimensions obtained by integrating the equations [X_{1}ƒ] = 0, ... [X_{r}ƒ] = 0; or having obtained one integral of this system other than X_{1}, ... X_{r}, say X_{r+1}, we may consider the system [X_{1}ƒ] = 0, ... [X_{r+1}ƒ] = 0, for which, again, we have a choice; and at any stage we may use Mayer’s method and reduce the simultaneous linear equations to one equation involving parameters; while if at any stage of the process we find some but not all of the integrals of the simultaneous system, they can be used to simplify the remaining work; this can only be clearly explained in connexion with the theory of so-called function groups, for which we have no space. One result arising is that the simultaneous system *p*_{1} = φ_{1}, ... *p*_{r} = φ_{r}, wherein *p*_{1}, ... *p*_{r} are not involved in φ_{1}, ... φ_{r}, if it satisfies the ½*r*(*r* − 1) relations [*p*_{i} − φ_{i}, *p*_{j} − φ_{j}] = 0, has a solution *z* = ψ(*x*_{1}, ... *x*_{n}), *p*_{1} = *d*ψ/*dx*_{1}, ... *p*_{n} = *d*ψ/*dx*_{n}, reducing to an arbitrary function of *x*_{r+1}, ... *x*_{n} only, when *x*_{1} = *x*º_{1}, ... *x*_{r} = *x*º_{r}, under certain conditions as to developability; a generalization of the theorem for linear equations. The problem of integration of this system is, as before, to put

*dz* − φ_{1}*dx*_{1} − ... − φ_{r}*dx*_{r} − *p*_{r+1}*dx*_{r+1} − ... − *p*_{n}*dx*_{n}

into the form σ(*d*ζ − ω_{r+1}*d*ξ_{r+1} − ... − ω_{n}*d*ξ_{n}); and here ζ, ξ_{r+1}, ... ξ_{n}, ω_{r+1}, ... ω_{n} may be taken, as before, to be principal integrals of a certain complete system of linear equations; those, namely, determining the characteristic chains.
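
For a system in which the φ's are free of the *p*'s, the relations [*p*_{i} − φ_{i}, *p*_{j} − φ_{j}] = 0 reduce to *d*φ_{i}/*dx*_{j} + φ_{j}*d*φ_{i}/*dz* = *d*φ_{j}/*dx*_{i} + φ_{i}*d*φ_{j}/*dz*, and compatibility shows itself numerically as path-independence of the integration. A sketch (Python; the system *dz*/*dx* = *z*, *dz*/*dy* = *z* and all names are our illustrative choices):

```python
import math

# Illustrative compatible system: dz/dx = z, dz/dy = z, which
# satisfies the compatibility condition; its solution z = z0*exp(x+y)
# reduces to the arbitrary datum z0 at x = y = 0.

phi1 = lambda x, y, z: z     # dz/dx
phi2 = lambda x, y, z: z     # dz/dy

def march(z, a, b, g, steps=2000):
    """RK4 for dz/dt = g(t, z) from t = a to t = b."""
    h = (b - a) / steps
    t = a
    for _ in range(steps):
        k1 = g(t, z)
        k2 = g(t + h/2, z + h*k1/2)
        k3 = g(t + h/2, z + h*k2/2)
        k4 = g(t + h, z + h*k3)
        z += h * (k1 + 2*k2 + 2*k3 + k4) / 6
        t += h
    return z

# integrate from (0,0) to (0.5, 0.8) along the two edge paths
z_xy = march(march(1.0, 0.0, 0.5, lambda x, z: phi1(x, 0.0, z)),
             0.0, 0.8, lambda y, z: phi2(0.5, y, z))
z_yx = march(march(1.0, 0.0, 0.8, lambda y, z: phi2(0.0, y, z)),
             0.0, 0.5, lambda x, z: phi1(x, 0.8, z))
print(z_xy, z_yx, math.exp(1.3))   # all three agree
```

The agreement of the two paths is the numerical counterpart of the theorem that the compatible system has a solution reducing to an arbitrary function on the initial manifold.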

If L be a function of *t* and of the 2*n* quantities *x*_{1}, ... *x*_{n}, ẋ_{1}, ... ẋ_{n}, where ẋ_{i} denotes *dx*_{i}/*dt*, &c., and if in the *n* equations

*d*/*dt* (*d*L/*d*ẋ_{i}) = *d*L/*dx*_{i} (*i* = 1, ... *n*)

we put *p*_{i} = *d*L/*d*ẋ_{i}, and so express ẋ_{1}, ... ẋ_{n} in terms of *t*, *x*_{1}, ... *x*_{n}, *p*_{1}, ... *p*_{n}, assuming that the determinant of the quantities *d*²L/*d*ẋ_{i}*d*ẋ_{j} is not zero; if, further, H denote the function of *t*, *x*_{1}, ... *x*_{n}, *p*_{1}, ... *p*_{n}, numerically equal to *p*_{1}ẋ_{1} + ... + *p*_{n}ẋ_{n} − L, it is easy Equations of dynamics. to prove that *dp*_{i}/*dt* = −*d*H/*dx*_{i}, *dx*_{i}/*dt* = *d*H/*dp*_{i}. These so-called *canonical* equations form part of those for the characteristic chains of the single partial equation *dz*/*dt* + H(*t*, *x*_{1}, ... *x*_{n}, *dz*/*dx*_{1}, ..., *dz*/*dx*_{n}) = 0, to which then the solution of the original equations for *x*_{1}, ... *x*_{n} can be reduced. It may be shown (1) that if *z* = ψ(*t*, *x*_{1}, ... *x*_{n}, *c*_{1}, ... *c*_{n}) + *c* be a complete integral of this equation, then *p*_{i} = *d*ψ/*dx*_{i}, *d*ψ/*dc*_{i} = *e*_{i} are 2*n* equations giving the solution of the canonical equations referred to, where *c*_{1}, ... *c*_{n} and *e*_{1}, ... *e*_{n} are arbitrary constants; (2) that if *x*_{i} = X_{i}(*t*, *x*º_{1}, ... *p*º_{n}), *p*_{i} = P_{i}(*t*, *x*º_{1}, ... *p*º_{n}) be the principal solutions of the canonical equations for *t* = *t*º, and ω denote the result of substituting these values in *p*_{1}*d*H/*dp*_{1} + ... + *p*_{n}*d*H/*dp*_{n} − H, and Ω = ∫^{*t*}_{*t*º} ω*dt*, where, after integration, Ω is to be expressed as a function of *t*, *x*_{1}, ... *x*_{n}, *x*º_{1}, ... *x*º_{n}, then *z* = Ω + *z*º is a complete integral of the partial equation.
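
The passage from L to the canonical equations can be followed on the simplest concrete case. In the sketch below (Python; the Lagrangian L = ½ẋ² − ½*x*² of a unit oscillator is our illustrative choice, not the article's) we have *p* = *d*L/*d*ẋ = ẋ and H = *p*ẋ − L = ½(*p*² + *x*²), and the canonical equations reproduce the Lagrangian motion while conserving H:

```python
import math

# Canonical equations for H = (1/2)(p^2 + x^2):
#   dx/dt =  dH/dp = p,    dp/dt = -dH/dx = -x,
# equivalent to Lagrange's equation x'' = -x.

def canonical_flow(x, p, t_end, steps=10000):
    h = t_end / steps
    for _ in range(steps):
        # one RK4 step for the canonical system (dx/dt, dp/dt) = (p, -x)
        k1 = (p, -x)
        k2 = (p + h*k1[1]/2, -(x + h*k1[0]/2))
        k3 = (p + h*k2[1]/2, -(x + h*k2[0]/2))
        k4 = (p + h*k3[1], -(x + h*k3[0]))
        x += h * (k1[0] + 2*k2[0] + 2*k3[0] + k4[0]) / 6
        p += h * (k1[1] + 2*k2[1] + 2*k3[1] + k4[1]) / 6
    return x, p

x0, p0 = 1.0, 0.0
x, p = canonical_flow(x0, p0, 2.0)
print(x, math.cos(2.0))                        # x(t) = cos t
print(0.5*(p*p + x*x), 0.5*(p0*p0 + x0*x0))    # H is conserved
```

For this H the partial equation *dz*/*dt* + H(*x*, *dz*/*dx*) = 0 is the Hamilton-Jacobi equation of the oscillator, whose characteristic chains contain exactly this flow.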

A system of differential equations is said to allow a certain continuous group of transformations (see Groups, Theory of) when the introduction for the variables in the differential equations of the new variables given by the Application of theory of continuous groups to formal theories. equations of the group leads, for all values of the parameters of the group, to the same differential equations in the new variables. It would be interesting to verify in examples that this is the case in at least the majority of the differential equations which are known to be integrable in finite terms. We give a theorem of very general application for the case of a simultaneous complete system of linear partial homogeneous differential equations of the first order, to the solution of which the various differential equations discussed have been reduced. It will be enough to consider whether the given differential equations allow the infinitesimal transformations of the group.

It can be shown easily that sufficient conditions in order that a complete system Π_{1}ƒ = 0, ... Π_{k}ƒ = 0, in *n* independent variables, should allow the infinitesimal transformation Pƒ = 0 are expressed by *k* equations Π_{i}Pƒ − PΠ_{i}ƒ = λ_{i1}Π_{1}ƒ + ... + λ_{ik}Π_{k}ƒ. Suppose now a complete system of *n* − *r* equations in *n* variables to allow a group of *r* infinitesimal transformations (P_{1}ƒ, ..., P_{r}ƒ) which has an invariant subgroup of *r* − 1 parameters (P_{1}ƒ, ..., P_{r−1}ƒ), it being supposed that the *n* quantities Π_{1}ƒ, ..., Π_{n−r}ƒ, P_{1}ƒ, ..., P_{r}ƒ are not connected by an identical linear equation (with coefficients even depending on the independent variables). Then it can be shown that one solution of the complete system is determinable by a quadrature. For each of Π_{i}P_{σ}ƒ − P_{σ}Π_{i}ƒ is a linear function of Π_{1}ƒ, ..., Π_{n−r}ƒ and the simultaneous system of independent equations Π_{1}ƒ = 0, ... Π_{n−r}ƒ = 0, P_{1}ƒ = 0, ... P_{r−1}ƒ = 0 is therefore a complete system, allowing the infinitesimal transformation P_{r}ƒ. This complete system of *n* − 1 equations has therefore one common solution ω, and P_{r}(ω) is a function of ω. By choosing ω suitably, we can then make P_{r}(ω) = 1. From this equation and the *n* − 1 equations Π_{i}ω = 0, P_{σ}ω = 0, we can determine ω by a quadrature only. Hence can be deduced a much more general result, *that if the group of *r* parameters be integrable, the complete system can be entirely solved by quadratures*; it is only necessary to introduce the solution found by the first quadrature as an independent variable, whereby we obtain a complete system of *n* − *r* equations in *n* − 1 variables, subject to an integrable group of *r* − 1 parameters, and to continue this process. We give some examples of the application of the theorem.
(1) If an equation of the first order *y*′ = ψ(*x*, *y*) allow the infinitesimal transformation ξ*d*ƒ/*dx* + η*d*ƒ/*dy*, the integral curves ω(*x*, *y*) = *y*º, wherein ω(*x*, *y*) is the solution of *d*ƒ/*dx* + ψ(*x*, *y*) *d*ƒ/*dy* = 0 reducing to *y* for *x* = *x*º, are interchanged among themselves by the infinitesimal transformation, or ω(*x*, *y*) can be chosen to make ξ*d*ω/*dx* + η*d*ω/*dy* = 1; this, with *d*ω/*dx* + ψ*d*ω/*dy* = 0, determines ω as the integral of the complete differential (*dy* − ψ*dx*)/(η − ψξ). This result itself shows that every ordinary differential equation of the first order is subject to an infinite number of infinitesimal transformations. But every infinitesimal transformation ξ*d*ƒ/*dx* + η*d*ƒ/*dy* can by change of variables (after integration) be brought to the form *d*ƒ/*dy*, and all differential equations of the first order allowing this group can then be reduced to the form F(*x*, *dy*/*dx*) = 0. (2) In an ordinary equation of the second order *y*″ = ψ(*x*, *y*, *y*′), equivalent to *dy*/*dx* = *y*_{1}, *dy*_{1}/*dx* = ψ(*x*, *y*, *y*_{1}), if H, H_{1} be the solutions for *y* and *y*_{1} chosen to reduce to *y*º and *y*º_{1} when *x* = *x*º, and the equations H = *y*, H_{1} = *y*_{1} be equivalent to ω = *y*º, ω_{1} = *y*º_{1}, then ω, ω_{1} are the principal solutions of Πƒ = *d*ƒ/*dx* + *y*_{1}*d*ƒ/*dy* + ψ*d*ƒ/*dy*_{1} = 0. If the original equation allow an infinitesimal transformation whose first *extended* form (see Groups) is Pƒ = ξ*d*ƒ/*dx* + η*d*ƒ/*dy* + η_{1}*d*ƒ/*dy*_{1}, where η_{1}δ*t* is the increment of *dy*/*dx* when ξδ*t*, ηδ*t* are the increments of *x*, *y*, and is to be expressed in terms of *x*, *y*, *y*_{1}, then each of Pω and Pω_{1} must be functions of ω and ω_{1}, or the partial differential equation Πƒ = 0 must allow the group Pƒ.
Thus by our general theorem, if the differential equation allow a group of two parameters (and such a group is always integrable), it can be solved by quadratures, our explanation sufficing, however, only provided the form Πƒ and the two infinitesimal transformations are not linearly connected. It can be shown, from the fact that η_{1} is a quadratic polynomial in *y*_{1}, that no differential equation of the second order can allow more than 8 really independent infinitesimal transformations, and that every homogeneous linear differential equation of the second order allows just 8, being in fact reducible to *d*²*y*/*dx*² = 0. Since every group of more than two parameters has subgroups of two parameters, a differential equation of the second order allowing a group of more than two parameters can, as a rule, be solved by quadratures. By transforming the group we see that if a differential equation of the second order allows a single infinitesimal transformation, it can be transformed to the form F(*x*, *dy*/*dx*, *d*²*y*/*dx*²) = 0; this is not the case for every differential equation of the second order. (3) For an ordinary differential equation of the third order, allowing an integrable group of three parameters whose infinitesimal transformations are not linearly connected with the partial equation to which the solution of the given ordinary equation is reducible, the similar result follows that it can be integrated by quadratures. But if the group of three parameters be simple, this result must be replaced by the statement that the integration is reducible to quadratures and that of a so-called Riccati equation of the first order, of the form *dy*/*dx* = A + B*y* + C*y*², where A, B, C are functions of *x*. (4) Similarly for the integration by quadratures of an ordinary equation *y*_{n} = ψ(*x*, *y*, *y*_{1}, ... *y*_{n−1}) of any order. Moreover, the group allowed by the equation may quite well consist of extended contact transformations.
An important application is to the case where the differential equation is the resolvent equation defining the group of
transformations or rationality group of another differential equation (see below); in particular, when the rationality group of an ordinary linear differential equation is integrable, the equation can be solved by quadratures.
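
Example (1) above can be made concrete. The equation *y*′ = 1 + *y*/*x* (our illustrative choice) allows the scaling transformation ξ = *x*, η = *y*, since *y*/*x* and *dy*/*dx* are unchanged by *x* → λ*x*, *y* → λ*y*; the recipe then makes (*dy* − ψ*dx*)/(η − ψξ) = (*dy* − (1 + *y*/*x*)*dx*)/(−*x*) an exact differential, with integral ω = ln *x* − *y*/*x*, constant on every integral curve *y* = *x* ln *x* + C*x*. A numerical check (Python):

```python
import math

# y' = 1 + y/x allows the scaling group (xi, eta) = (x, y); the
# function omega = ln(x) - y/x is the integral of the exact
# differential (dy - psi dx)/(eta - psi*xi), and must stay constant
# along any numerically integrated solution curve.

psi = lambda x, y: 1 + y / x
omega = lambda x, y: math.log(x) - y / x

x, y = 1.0, 2.0          # initial point, so C = 2 and omega = -2
h = 0.001
for _ in range(1500):    # RK4 for y' = psi(x, y), from x = 1 to x = 2.5
    k1 = psi(x, y)
    k2 = psi(x + h/2, y + h*k1/2)
    k3 = psi(x + h/2, y + h*k2/2)
    k4 = psi(x + h, y + h*k3)
    y += h * (k1 + 2*k2 + 2*k3 + k4) / 6
    x += h
print(omega(x, y))       # stays at -2
```

One also verifies ξ*d*ω/*dx* + η*d*ω/*dy* = *x*(1/*x* + *y*/*x*²) − *y*/*x* = 1, as the construction requires.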

Following the practical and provisional division of theories of differential equations, to which we alluded at starting, into transformation theories and function theories, we pass now to give some account of the latter. These are both Consideration of function theories of differential equations. a necessary logical complement of the former, and the only remaining resource when the expedients of the former have been exhausted. While in the former investigations we have dealt only with values of the independent variables about which the functions are developable, the leading idea now becomes, as was long ago remarked by G. Green, the consideration of the neighbourhood of the values of the variables for which this developable character ceases. Beginning, as before, with existence theorems applicable for ordinary values of the variables, we are to consider the cases of failure of such theorems.

When in a given set of differential equations the number of equations is greater than the number of dependent variables, the equations cannot be expected to have common solutions unless certain conditions of compatibility, obtainable by equating different forms of the same differential coefficients deducible from the equations, are satisfied. We have had examples in systems of linear equations, and in the case of a set of equations *p*_{1} = φ_{1}, ..., *p*_{r} = φ_{r}. For the case when the number of equations is the same as that of dependent variables, the following is a general theorem which should be referred to: Let there be *r* equations in *r* dependent variables *z*_{1}, ... *z*_{r} and *n* independent A general existence theorem. variables *x*_{1}, ... *x*_{n}; let the differential coefficient of *z*_{σ} of highest order which enters be of order *h*_{σ}, and suppose *d*^{hσ}*z*_{σ} / *dx*_{1}^{hσ} to enter, so that the equations can be written *d*^{hσ}*z*_{σ} / *dx*_{1}^{hσ} = Φ_{σ}, where in the general differential coefficient of *z*_{ρ} which enters in Φ_{σ}, say

*d*^{k1 + ... + kn} *z*_{ρ} / *dx*_{1}^{k1} ... *dx*_{n}^{kn},

we have *k*_{1} < *h*_{ρ} and *k*_{1} + ... + *k*_{n} ≤ *h*_{ρ}. Let *a*_{1}, ... *a*_{n}, *b*_{1}, ... *b*_{r}, and *b*^{ρ}_{k1 ... kn} be a set of values of

*x*_{1}, ... *x*_{n}, *z*_{1}, ... *z*_{r}

and of the differential coefficients entering in Φ_{σ} about which all the functions Φ_{1}, ... Φ_{r} are developable. Corresponding to each dependent variable *z*_{σ}, we take now a set of *h*_{σ} functions of *x*_{2}, ... *x*_{n}, say φ_{σ}, φ_{σ}^{(1)}, ..., φ_{σ}^{(hσ−1)}, arbitrary save that they must be developable about *a*_{2}, *a*_{3}, ... *a*_{n}, and such that for these values of *x*_{2}, ... *x*_{n}, the function φ_{ρ} reduces to *b*_{ρ}, and the differential coefficient

*d*^{k2 + ... + kn} φ_{ρ}^{(k1)} / *dx*_{2}^{k2} ... *dx*_{n}^{kn}

reduces to *b*^{ρ}_{k1 ... kn}. Then the theorem is that there exists one, and only one, set of functions *z*_{1}, ... *z*_{r}, of *x*_{1}, ... *x*_{n}, developable about *a*_{1}, ... *a*_{n}, satisfying the given differential equations, and such that for *x*_{1} = *a*_{1} we have

*z*_{σ} = φ_{σ}, *dz*_{σ} / *dx*_{1} = φ_{σ}^{(1)}, ... *d*^{hσ−1}*z*_{σ} / *dx*_{1}^{hσ−1} = φ_{σ}^{(hσ−1)}.

And, moreover, if the arbitrary functions φ_{σ}, φ_{σ}^{(1)} ... contain a certain number of arbitrary variables *t*_{1}, ... *t*_{m}, and be developable about the values *t*º_{1}, ... *t*º_{m} of these variables, the solutions *z*_{1}, ... *z*_{r} will contain *t*_{1}, ... *t*_{m}, and be developable about *t*º_{1}, ... *t*º_{m}.

The proof of this theorem may be given by showing that if ordinary power series in *x*_{1} − *a*_{1}, ... *x*_{n} − *a*_{n}, *t*_{1} − *t*º_{1}, ... *t*_{m} − *t*º_{m} be substituted in the equations, wherein in *z*_{σ} the coefficients of (*x*_{1} − *a*_{1})º, *x*_{1} − *a*_{1}, ..., (*x*_{1} − *a*_{1})^{hσ−1} are the arbitrary functions φ_{σ}, φ_{σ}^{(1)}, ..., φ_{σ}^{(hσ−1)}, divided respectively by 1, 1!, 2!, &c., then the differential equations determine uniquely all the other coefficients, and that the resulting series are convergent. We rely, in fact, upon the theory of monogenic analytical functions (see Function), a function being determined entirely by its development in the neighbourhood of one set of values of the independent variables, from which all its other values arise by *continuation*; it being of course understood that the coefficients in the differential equations are to be continued at the same time. But it is to be remarked that there is no ground for believing, if this method of continuation be utilized, that the function is single-valued; we may quite well return to the same values of the independent variables with a different Singular points of solutions. value of the function, belonging, as we say, to a different branch of the function; and there is even no reason for assuming that the number of branches is finite, or that different branches have the same singular points and regions of existence. Moreover, and this is the most difficult consideration of all, all these circumstances may be dependent upon the values supposed given to the arbitrary constants of the integral; in other words, the singular points may be either *fixed*, being determined by the differential equations themselves, or they may be *movable* with the variation of the arbitrary constants of integration. Such difficulties arise even in establishing the reversion of an elliptic integral, in solving the equation

(*dx*/*ds*)² = (*x* − *a*_{1})(*x* − *a*_{2})(*x* − *a*_{3})(*x* − *a*_{4});

about an ordinary value the right side is developable; if we put *x* − *a*_{1} = *t*_{1}², the right side becomes developable about *t*_{1} = 0; if we put *x* = 1/*t*, the right side of the changed equation is developable about *t* = 0; it is quite easy to show that the integral reducing to a definite value *x*_{0} for a value *s*_{0} is obtainable by a series in integral powers; this, however, must be supplemented by showing that for no value of *s* does the value of *x* become entirely undetermined.
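The development of the integral in integral powers about an ordinary value can be checked numerically. The following sketch (the quartic with roots 0, 1, 2, 3 and the starting value *x*₀ = 1.5 are illustrative choices, not from the text) integrates *dx*/*ds* = √P(*x*) by a Runge-Kutta step and compares the result with the Taylor development *x*(*s*) = *x*₀ + √P(*x*₀)·*s* + ¼P′(*x*₀)·*s*² + ..., which follows from *d*²*x*/*ds*² = ½P′(*x*).

```python
# Sketch: integrate dx/ds = sqrt((x - a1)(x - a2)(x - a3)(x - a4))
# for the illustrative roots 0, 1, 2, 3, starting at x0 = 1.5,
# where the quartic is positive.
def P(x):
    return x * (x - 1) * (x - 2) * (x - 3)

def f(x):
    return P(x) ** 0.5

def rk4(x0, s_end, n):
    """Classical 4th-order Runge-Kutta for dx/ds = f(x)."""
    h = s_end / n
    x = x0
    for _ in range(n):
        k1 = f(x)
        k2 = f(x + h / 2 * k1)
        k3 = f(x + h / 2 * k2)
        k4 = f(x + h * k3)
        x += h / 6 * (k1 + 2 * k2 + 2 * k3 + k4)
    return x

x0, s = 1.5, 0.01
# P(1.5) = 0.5625, sqrt = 0.75; P'(1.5) = 0, so the s^2 term vanishes.
taylor = x0 + f(x0) * s
print(abs(rk4(x0, s, 100) - taylor))
```

The agreement to within the cubic remainder illustrates the developability claimed in the text for an ordinary value.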

(*Linear differential equations with rational coefficients.*) These remarks will show the place of the theory now to be sketched, of a particular class of ordinary linear homogeneous differential equations with rational coefficients, whose importance arises from the completeness and generality with which they can be discussed. We have seen that if in the equations

*dy*/*dx* = *y*_{1}, *dy*_{1}/*dx* = *y*_{2}, ..., *dy*_{n−2}/*dx* = *y*_{n−1},

*dy*_{n−1}/*dx* = *a*_{n}*y* + *a*_{n−1}*y*_{1} + ... + *a*_{1}*y*_{n−1},

where *a*_{1}, *a*_{2}, ..., *a*_{n} are now to be taken to be rational functions of *x*, the value *x* = *x*º be one for which no one of these rational functions is infinite, and *y*º, *y*º_{1}, ..., *y*º_{n−1} be quite arbitrary finite values, then the equations are satisfied by

*y* = *y*º*u* + *y*º_{1}*u*_{1} + ... + *y*º_{n−1}*u*_{n−1},

where *u*, *u*_{1}, ..., *u*_{n−1} are functions of *x*, independent of *y*º, ... *y*º_{n−1}, developable about *x* = *x*º; this value of *y* is such that for *x* = *x*º the functions *y*, *y*_{1} ... *y*_{n−1} reduce respectively to *y*º, *y*º_{1}, ... *y*º_{n−1}; it can be proved that the region of existence of these series extends within a circle centre *x*º and radius equal to the distance from *x*º of the nearest point at which one of *a*_{1}, ... *a*_{n} becomes infinite. Now consider a region enclosing *x*º and only one of the places, say Σ, at which one of *a*_{1}, ... *a*_{n} becomes infinite. When *x* is made to describe a closed curve in this region, including this point Σ in its interior, it may well happen that the continuations of the functions *u*, *u*_{1}, ..., *u*_{n−1} give, when we have returned to the point *x*, values *v*, *v*_{1}, ..., *v*_{n−1}, so that the integral under consideration becomes changed to *y*º*v* + *y*º_{1}*v*_{1} + ... + *y*º_{n−1}*v*_{n−1}. At *x*º let this branch and the corresponding values of *y*_{1}, ... *y*_{n−1} be ηº, ηº_{1}, ... ηº_{n−1}; then, as there is only one series satisfying the equation and reducing to (ηº, ηº_{1}, ... ηº_{n−1}) for *x* = *x*º, and the coefficients in the differential equation are single-valued functions, we must have ηº*u* + ηº_{1}*u*_{1} + ... + ηº_{n−1}*u*_{n−1} = *y*º*v* + *y*º_{1}*v*_{1} + ... + *y*º_{n−1}*v*_{n−1}; as this holds for arbitrary values of *y*º, ... *y*º_{n−1}, upon which *u*, ... *u*_{n−1} and *v*, ... *v*_{n−1} do not depend, it follows that each of *v*, ... *v*_{n−1} is a linear function of *u*, ... *u*_{n−1} with constant coefficients, say *v*_{i} = A_{i1}*u* + ... + A_{in}*u*_{n−1}. Then

*y*º*v* + ... + *y*º_{n−1}*v*_{n−1} = (Σ_{i} A_{i1} *y*º_{i})*u* + ... + (Σ_{i} A_{in} *y*º_{i}) *u*_{n−1};

this is equal to μ(*y*º*u* + ... + *y*º_{n−1}*u*_{n−1}) if Σ_{i} A_{ir} *y*º_{i} = μ*y*º_{r−1}; eliminating *y*º, ... *y*º_{n−1} from these linear equations, we have a determinantal equation of order *n* for μ; let μ_{1} be one of its roots; determining the ratios of *y*º, *y*º_{1}, ... *y*º_{n−1} to satisfy the linear equations, we have thus proved that there exists an integral, H, of the equation, which when continued round the point Σ and back to the starting-point, becomes changed to H_{1} = μ_{1}H. Let now ξ be the value of *x* at Σ and *r*_{1} one of the values of (2π*i*)^{−1} log μ_{1}; consider the function (*x* − ξ)^{−r1}H; when *x* makes a circuit round *x* = ξ, this becomes changed to

exp (−2π*ir*_{1}) (*x* − ξ)^{−r1} μ_{1}H,

that is, is unchanged; thus we may put H = (*x* − ξ)^{r1}φ_{1}, φ_{1} being a function single-valued for paths in the region considered described about Σ, and therefore, by Laurent’s Theorem (see Function), capable of expression in the annular region about this point by a series of positive and negative integral powers of *x* − ξ, which in general may contain an infinite number of negative powers; there is, however, no reason to suppose *r*_{1} to be an integer, or even real. Thus, if all the roots of the determinantal equation in μ are different, we obtain *n* integrals of the forms (*x* − ξ)^{r1}φ_{1}, ..., (*x* − ξ)^{rn}φ_{n}. In general we obtain as many integrals of this form as there are really different roots; and the problem arises to discover, in case a root be *k* times repeated, *k* − 1 equations of as simple a form as possible to replace the *k* − 1 equations of the form *y*º*v* + ... + *y*º_{n−1}*v*_{n−1} = μ(*y*º*u* + ... + *y*º_{n−1}*u*_{n−1}) which would have existed had the roots been different. The most natural method of obtaining a suggestion lies probably in remarking that if *r*_{2} = *r*_{1} + *h*, there is an integral [(*x* − ξ)^{r1 + h}φ_{2} − (*x* − ξ)^{r1}φ_{1}] / *h*, where the coefficients in φ_{2} are
the same functions of *r*_{1} + *h* as are the coefficients in φ_{1} of *r*_{1}; when *h* vanishes, this integral takes the form

(*x* − ξ)^{r1} [*d*φ_{1}/*dr*_{1} + φ_{1} log (*x* − ξ)],

or say

(*x* − ξ)^{r1} [ψ_{1} + φ_{1} log (*x* − ξ)];

denoting this by 2π*i*μ_{1}K, and (*x* − ξ)^{r1} φ_{1} by H, a circuit of the point ξ changes K into μ_{1}K + H.

A similar artifice suggests itself when three of the roots of the determinantal equation are the same, and so on. We are thus led to the result, which is justified by an examination of the algebraic conditions, that whatever may be the circumstances as to the roots of the determinantal equation, *n* integrals exist, breaking up into batches, the values of the constituents H_{1}, H_{2}, ... of a batch after circuit about *x* = ξ being H_{1}′ = μ_{1}H_{1}, H_{2}′ = μ_{1}H_{2} + H_{1}, H_{3}′ = μ_{1}H_{3} + H_{2}, and so on. And this is found to lead to the forms (*x* − ξ)^{r1}φ_{1}, (*x* − ξ)^{r1} [ψ_{1} + φ_{1} log (*x* − ξ)], (*x* − ξ)^{r1} [χ_{1} + χ_{2} log (*x* − ξ) + φ_{1}(log(*x* − ξ) )²], and so on. Here each of φ_{1}, ψ_{1}, χ_{1}, χ_{2}, ... is a series of positive and negative integral powers of *x* − ξ in which the number of negative powers may be infinite.
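The multiplier μ attached to a circuit about a singular point can be exhibited numerically. The sketch below takes the illustrative first-order case *dy*/*dx* = *ry*/(*x* − ξ) (a choice made here for brevity, not from the text), whose solution behaves as (*x* − ξ)^{r}, carries it once round the point by Runge-Kutta integration along the circle *x* = ξ + *e*^{iθ}, and recovers the factor *e*^{2πir}.

```python
import cmath
import math

# Sketch: for dy/dx = r*y/(x - xi) a solution behaves like (x - xi)^r,
# so a positive circuit about xi multiplies it by exp(2*pi*i*r).
def monodromy_multiplier(r, xi=0.0, steps=4000):
    y = 1.0 + 0j
    h = 2 * math.pi / steps

    def F(t, y):
        x = xi + cmath.exp(1j * t)       # point on the circuit
        dx_dt = 1j * cmath.exp(1j * t)   # derivative of the path
        return (r / (x - xi)) * y * dx_dt

    t = 0.0
    for _ in range(steps):
        k1 = F(t, y)
        k2 = F(t + h / 2, y + h / 2 * k1)
        k3 = F(t + h / 2, y + h / 2 * k2)
        k4 = F(t + h, y + h * k3)
        y += h / 6 * (k1 + 2 * k2 + 2 * k3 + k4)
        t += h
    return y  # started from y = 1, so this is the multiplier

mu = monodromy_multiplier(0.3)
print(abs(mu - cmath.exp(2j * math.pi * 0.3)))
```

The computed multiplier agrees with *e*^{2πir}, in accordance with the relation *r*₁ = (2π*i*)^{−1} log μ₁ of the text.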

It appears natural enough now to inquire whether, under proper conditions for the forms of the rational functions *a*_{1}, ... *a*_{n}, it may be possible to ensure that in each of the series φ_{1}, ψ_{1}, χ_{1}, ... the number of negative powers shall be finite. (*Regular equations.*) Herein lies, in fact, the limitation which experience has shown to be justified by the completeness of the results obtained. Assuming *n* integrals in which in each of φ_{1}, ψ_{1}, χ_{1}, ... the number of negative powers is finite, there is a definite homogeneous linear differential equation having these integrals; this is found by forming it to have the form

*y*^{(n)} = (*x* − ξ)^{−1} *b*_{1}*y*^{(n−1)} + (*x* − ξ)^{−2} *b*_{2}*y*^{(n−2)} + ... + (*x* − ξ)^{−n} *b*_{n}*y*,

where *b*_{1}, ... *b*_{n} are finite for *x* = ξ. Conversely, assume the equation to have this form. Then on substituting a series of the form (*x* − ξ)^{r} [1 + A_{1}(*x* − ξ) + A_{2}(*x* − ξ)² + ... ] and equating the coefficients of like powers of *x* − ξ, it is found that *r* must be a root of an algebraic equation of order *n*; this equation, which we shall call the index equation, can be obtained at once by substituting for *y* only (*x* − ξ)^{r} and replacing each of *b*_{1}, ... *b*_{n} by their values at *x* = ξ; arrange the roots *r*_{1}, *r*_{2}, ... of this equation so that the real part of *r*_{i} is equal to, or greater than, the real part of *r*_{i+1}, and take *r* equal to *r*_{1}; it is found that the coefficients A_{1}, A_{2} ... are uniquely determinate, and that the series converges within a circle about *x* = ξ which includes no other of the points at which the rational functions *a*_{1} ... *a*_{n} become infinite. We have thus a solution H_{1} = (*x* − ξ)^{r1}φ_{1} of the differential equation. If we now substitute in the equation *y* = H_{1}∫η*dx*, it is found to reduce to an equation of order *n* − 1 for η of the form

η^{(n−1)} = (*x* − ξ)^{−1} *c*_{1}η^{(n−2)} + ... + (*x* − ξ)^{−(n−1)} *c*_{n−1}η,

where *c*_{1}, ... *c*_{n−1} are not infinite at *x* = ξ. To this equation precisely similar reasoning can then be applied; its index equation has in fact the roots *r*_{2} − *r*_{1} − 1, ..., *r*_{n} − *r*_{1} − 1; if *r*_{2} − *r*_{1} be zero, the integral (*x* − ξ)^{−1}ψ_{1} of the η equation will give an integral of the original equation containing log (*x* − ξ); if *r*_{2} − *r*_{1} be an integer, and therefore a negative integer, the same will be true, unless in ψ_{1} the term in (*x* − ξ)^{r1 − r2} be absent; if neither of these arise, the original equation will have an integral (*x* − ξ)^{r2}φ_{2}. The η equation can now, by means of the one integral of it belonging to the index *r*_{2} − *r*_{1} − 1, be similarly reduced to one of order *n* − 2, and so on. The result will be that stated above. We shall say that an equation of the form in question is *regular* about *x* = ξ.
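The process of substituting (*x* − ξ)^{r}[1 + A₁(*x* − ξ) + ...] and determining the coefficients can be carried out mechanically. As a sketch, take Bessel's equation *x*²*y*″ + *xy*′ + (*x*² − *n*²)*y* = 0, regular about *x* = 0 (an illustrative choice): its index equation there is *r*² − *n*² = 0, with roots ±*n*, and the recurrence A_{k}((*r* + *k*)² − *n*²) = −A_{k−2} determines the remaining coefficients uniquely.

```python
def frobenius_bessel(n, r, terms=8):
    """Coefficients A_k of y = x^r * sum_k A_k x^k for Bessel's equation,
    from the recurrence A_k * ((r + k)**2 - n**2) = -A_{k-2}, with A_0 = 1."""
    A = [1.0, 0.0]  # A_1 = 0, since the recurrence couples k with k - 2
    for k in range(2, terms):
        A.append(-A[k - 2] / ((r + k) ** 2 - n ** 2))
    return A

# Index equation r^2 - n^2 = 0: roots r = n and r = -n; take n = 1, r = 1.
A = frobenius_bessel(1, 1)
# A_2 = -1/((1+2)^2 - 1) = -1/8; A_4 = -A_2/((1+4)^2 - 1) = 1/192.
print(A[2], A[4])
```

Taking *r* equal to the root of greater real part, as in the text, keeps every divisor (*r* + *k*)² − *n*² different from zero, so the coefficients are uniquely determinate.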

We may examine in this way the behaviour of the integrals at all the points at which any one of the rational functions *a*_{1}, ... *a*_{n} becomes infinite; in general we must expect that beside these the value *x* = ∞ will be a singular point for the solutions of the differential equation. To test this we put *x* = 1/*t* throughout, and examine as before at *t* = 0. For instance, the ordinary linear equation with constant coefficients has no singular point for finite values of *x*; at *x* = ∞ it has a singular point and is not regular; or again, Bessel’s equation *x*²*y*″ + *xy*′ + (*x*² − *n*²)*y* = 0 is regular about *x* = 0, but not about *x* = ∞. (*Fuchsian equations.*) An equation regular at all the finite singularities and also at *x* = ∞ is called a Fuchsian equation. We proceed to examine particularly the case of an equation of the second order

*y*″ + *ay*′ + *by* = 0.

Putting *x* = 1/*t*, it becomes

*d*²*y*/*dt*² + (2*t*^{−1} − *at*^{−2}) *dy*/*dt* + *bt*^{−4}*y* = 0,

which is not regular about *t* = 0 unless 2 − *at*^{−1} and *bt*^{−2}, that is, unless *ax* and *bx*² are finite at *x* = ∞; which we thus assume; putting *y* = *t*^{r}(1 + A_{1}*t* + ... ), we find for the index equation at *x* = ∞ the equation *r*(*r* − 1) + *r*(2 − *ax*)_{0} + (*bx*²)_{0} = 0. (*Equation of the second order.*) If there be finite singular points at ξ_{1}, ... ξ_{m}, where we assume *m* > 1, the cases *m* = 0, *m* = 1 being easily dealt with, and if φ(*x*) = (*x* − ξ_{1}) ... (*x* − ξ_{m}), we must have *a*·φ(*x*) and *b*·[φ(*x*)]² finite for all finite values of *x*, equal say to the respective polynomials ψ(*x*) and θ(*x*), of which by the conditions at *x* = ∞ the highest respective orders possible are *m* − 1 and 2(*m* − 1). The index equation at *x* = ξ_{1} is *r*(*r* − 1) + *r*ψ(ξ_{1}) / φ′(ξ_{1}) + θ(ξ_{1}) / [φ′(ξ_{1})]² = 0, and if α_{1}, β_{1} be its roots, we have α_{1} + β_{1} = 1 − ψ(ξ_{1}) / φ′(ξ_{1}) and α_{1}β_{1} = θ(ξ_{1}) / [φ′(ξ_{1})]². Thus by an elementary theorem of algebra, the sum Σ(1 − α_{i} − β_{i}) / (*x* − ξ_{i}), extended to the *m* finite singular points, is equal to ψ(*x*) / φ(*x*), and the sum Σ(1 − α_{i} − β_{i}) is equal to the ratio of the coefficients of the highest powers of *x* in ψ(*x*) and φ(*x*), and therefore equal to 1 + α + β, where α, β are the indices at *x* = ∞. Further, if (*x*, 1)_{m−2} denote the integral part of the quotient θ(*x*) / φ(*x*), we have Σ α_{i}β_{i}φ′(ξ_{i}) / (*x* − ξ_{i}) equal to −(*x*, 1)_{m−2} + θ(*x*)/φ(*x*), and the coefficient of *x*^{m−2} in (*x*, 1)_{m−2} is αβ. Thus the differential equation has the form

*y*″ + *y*′Σ (1 − α_{i} − β_{i}) / (*x* − ξ_{i}) + *y*[(*x*, 1)_{m-2} + Σ α_{i}β_{i}φ′(ξ_{i}) / (*x* − ξ_{i})]/φ(*x*) = 0.
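The statement that the index equation at ξ₁ has the assigned indices as its roots can be verified directly: with α₁ + β₁ = 1 − ψ(ξ₁)/φ′(ξ₁) and α₁β₁ = θ(ξ₁)/[φ′(ξ₁)]², the equation *r*(*r* − 1) + *r*ψ(ξ₁)/φ′(ξ₁) + θ(ξ₁)/[φ′(ξ₁)]² = 0 collapses to *r*² − (α₁ + β₁)*r* + α₁β₁ = 0, whose roots are precisely α₁, β₁. A minimal computational sketch (the numerical indices ¼ and −½ are illustrative choices, not from the text):

```python
import math

# Sketch: the index equation r(r - 1) + r*(1 - a - b) + a*b = 0 is
# r^2 - (a + b)*r + a*b = 0, whose roots are the assigned indices a, b.
def index_roots(a, b):
    s, p = a + b, a * b
    d = math.sqrt(s * s - 4 * p)  # discriminant (real indices assumed)
    return sorted([(s - d) / 2, (s + d) / 2])

print(index_roots(0.25, -0.5))  # recovers the indices -0.5, 0.25
```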

If, however, we make a change in the dependent variable, putting *y* = (*x* − ξ_{1})^{α1} ... (*x* − ξ_{m})^{αm}η, it is easy to see that the equation changes into one having the same singular points about each of which it is regular, and that the indices at *x* = ξ_{i} become 0 and β_{i} − α_{i}, which we shall denote by λ_{i}, for, when *j* is different from *i*, (*x* − ξ_{j})^{αj} can be developed in positive integral powers of *x* − ξ_{i} about *x* = ξ_{i}; by this transformation the indices at *x* = ∞ are changed to

α + α_{1} + ... + α_{m}, β + β_{1} + ... + β_{m}

which we shall denote by λ, μ. If we suppose this change to have been introduced, and still denote the dependent variable by *y*, the equation has the form

*y*″ + *y*′Σ (1 − λ_{i}) / (*x* − ξ_{i}) + *y*(x, 1)_{m−2} / φ(*x*) = 0,

while λ + μ + λ_{1} + ... + λ_{m} = *m* − 1. Conversely, it is easy to verify that if λμ be the coefficient of *x*^{m−2} in (*x*, 1)_{m−2}, this equation has the specified singular points and indices whatever be the other coefficients in (*x*, 1)_{m−2}.

Thus we see that (beside the cases *m* = 0, *m* = 1) the “Fuchsian equation” of the second order with *two* finite singular points is distinguished by the fact that it has a definite form when the singular points and the indices are assigned. (*Hypergeometric equation.*) In that case, putting (*x* − ξ_{1}) / (*x* − ξ_{2}) = *t* / (*t* − 1), the singular points are transformed to 0, 1, ∞, and, as is clear, without change of indices. Still denoting the independent variable by *x*, the equation then has the form

*x*(1 − *x*)y″ + *y*′[1 − λ_{1} − *x*(1 + λ + μ)] − λμ*y* = 0,

which is the ordinary hypergeometric equation. Provided none of λ_{1}, λ_{2}, λ − μ be zero or integral, it has, about *x* = 0, the solutions

F(λ, μ, 1 − λ_{1}, *x*), *x*^{λ1} F(λ + λ_{1}, μ + λ_{1}, 1 + λ_{1}, *x*);

about *x* = 1 it has the solutions

F(λ, μ, 1 − λ_{2}, 1 − *x*), (1 − *x*)^{λ2} F(λ + λ_{2}, μ + λ_{2}, 1 + λ_{2}, 1 − *x*),

where λ + μ + λ_{1} + λ_{2} = 1; about *x* = ∞ it has the solutions

*x*^{−λ} F(λ, λ + λ_{1}, λ − μ + 1, *x*^{−1}), *x*^{−μ} F(μ, μ + λ_{1}, μ − λ + 1, *x*^{−1}),

where F(α, β, γ, *x*) is the series

F(α, β, γ, *x*) = 1 + (αβ / 1! γ)*x* + [α(α + 1)β(β + 1) / 2! γ(γ + 1)]*x*² + ...,

which converges when |*x*| < 1, whatever α, β, γ may be; converges for all values of *x* for which |*x*| = 1 provided the real part of γ − α − β > 0 algebraically; and converges for all these values except *x* = 1 provided the real part of γ − α − β > −1 algebraically.
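The series can be summed term by term, and checked against cases in which F reduces to an elementary function; the sketch below uses the classical identities F(*a*, β, β, *x*) = (1 − *x*)^{−a}, taking *a* = 1 and *a* = −2 (illustrative checks, not from the text).

```python
def F(a, b, c, x, terms=200):
    """Partial sum of the hypergeometric series
    1 + (a*b)/(1!*c) x + a(a+1)b(b+1)/(2!*c(c+1)) x^2 + ..."""
    total, term = 1.0, 1.0
    for k in range(terms):
        term *= (a + k) * (b + k) / ((c + k) * (k + 1)) * x
        total += term
    return total

print(F(1, 1, 1, 0.5))   # should approach 1/(1 - 0.5) = 2
print(F(-2, 1, 1, 0.3))  # should equal (1 - 0.3)^2 = 0.49 (series terminates)
```

When the first parameter is a negative integer the series terminates, giving the integral polynomial solutions noticed in the next paragraph.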

In accordance with our general theory, logarithms are to be expected in the solution when one of λ_{1}, λ_{2}, λ − μ is zero or integral. Indeed when λ_{1} is a negative integer, not zero, the second solution about *x* = 0 would contain vanishing factors in the denominators of its coefficients; in case λ or μ be one of the positive integers 1, 2, ... (−λ_{1}), vanishing factors occur also in the numerators; and then, in fact, the second solution about *x* = 0 becomes *x*^{λ1} times an integral polynomial of degree (−λ_{1}) − λ or of degree (−λ_{1}) − μ. But when λ_{1} is a negative integer including zero, and neither λ nor μ is one of the positive integers 1, 2, ... (−λ_{1}), the second solution about *x* = 0 involves a term having the factor log *x*. When λ_{1} is a positive integer, not zero, the second solution about *x* = 0 persists as a solution, in accordance with the order of arrangement of the roots of the index equation in our theory; the first solution is then replaced by an integral polynomial of degree −λ or −μ, when λ or μ is one of the negative integers 0, −1, −2, ..., 1 − λ_{1}, but otherwise contains a logarithm. Similarly for the solutions about *x* = 1 or *x* = ∞; it will be seen below how the results are deducible from those for *x* = 0.

(*March of the Integral.*) Denote now the solutions about *x* = 0 by *u*_{1}, *u*_{2}; those about *x* = 1 by *v*_{1}, *v*_{2}; and those about *x* = ∞ by *w*_{1}, *w*_{2}; in the region (S_{0}S_{1}) common to the circles S_{0}, S_{1} of radius 1 whose centres are the points *x* = 0, *x* = 1, all the first four are valid, and there exist equations *u*_{1} = A*v*_{1} + B*v*_{2}, *u*_{2} = C*v*_{1} + D*v*_{2}, where A, B, C, D are constants; in the region (S_{1}S) lying inside the circle S_{1} and outside the circle S_{0}, those that are valid are *v*_{1}, *v*_{2}, *w*_{1}, *w*_{2}, and there exist equations *v*_{1} = P*w*_{1} + Q*w*_{2}, *v*_{2} = R*w*_{1} + T*w*_{2}, where P, Q, R, T are constants; thus considering any integral whose expression within the circle S_{0} is *au*_{1} + *bu*_{2}, where *a*, *b* are constants, the same integral will be represented within the circle S_{1} by (*a*A + *b*C)*v*_{1} + (*a*B + *b*D)*v*_{2}, and outside these circles will be represented by

[(*a*A + *b*C)P + (*a*B + *b*D)R]*w*_{1} + [(*a*A + *b*C)Q + (*a*B + *b*D)T]*w*_{2}.

A single-valued branch of such integral can be obtained by making a barrier in the plane joining ∞ to 0 and 1 to ∞; for instance, by excluding the consideration of real negative values of *x* and of real
positive values greater than 1, and defining the phase of *x* and *x* − 1 for real values between 0 and 1 as respectively 0 and π.

We can form the Fuchsian equation of the second order with three arbitrary singular points ξ_{1}, ξ_{2}, ξ_{3}, and no singular point at *x* = ∞, and with respective indices α_{1}, β_{1}, α_{2}, β_{2}, α_{3}, β_{3} such that α_{1} + β_{1} + α_{2} + β_{2} + α_{3} + β_{3} = 1. (*Transformation of the equation into itself.*) This equation can then be transformed into the hypergeometric equation in 24 ways; for out of ξ_{1}, ξ_{2}, ξ_{3} we can in six ways choose two, say ξ_{1}, ξ_{2}, which are to be transformed respectively into 0 and 1, by (*x* − ξ_{1})/(*x* − ξ_{2}) = *t*/(*t* − 1); and then there are four possible transformations of the dependent variable which will reduce one of the indices at *t* = 0 to zero and one of the indices at *t* = 1 also to zero, namely, we may reduce either α_{1} or β_{1} at *t* = 0, and simultaneously either α_{2} or β_{2} at *t* = 1. Thus the hypergeometric equation itself can be transformed into itself in 24 ways, and from the expression F(λ, μ, 1 − λ_{1}, *x*) which satisfies it follow 23 other forms of solution; they involve four series in each of the arguments, *x*, *x* − 1, 1/*x*, 1/(1 − *x*), (*x* − 1)/*x*, *x*/(*x* − 1). Five of the 23 solutions agree with the fundamental solutions already described about *x* = 0, *x* = 1, *x* = ∞; and from the principles by which these were obtained it is immediately clear that the 24 forms are, in value, equal in fours.

(*Inversion. Modular functions.*) The quarter periods K, K′ of Jacobi’s theory of elliptic functions, of which K = ∫_{0}^{π/2} (1 − *h* sin²θ)^{−½}*d*θ, and K′ is the same function of 1 − *h*, can easily be proved to be the solutions of a hypergeometric equation of which *h* is the independent variable. When K, K′ are regarded as defined in terms of *h* by the differential equation, the ratio K′/K is an infinitely many valued function of *h*. But it is remarkable that Jacobi’s own theory of theta functions leads to an expression for *h* in terms of K′/K by means of single-valued functions (see Function). We may then attempt to investigate, in general, in what cases the independent variable *x* of a hypergeometric equation is a single-valued function of the ratio ς of two independent integrals of the equation. The same inquiry is suggested by the problem of ascertaining in what cases the hypergeometric series F(α, β, γ, *x*) is the expansion of an algebraic (irrational) function of *x*. In order to explain the meaning of the question, suppose that the plane of *x* is divided along the real axis from −∞ to 0 and from 1 to +∞, and, supposing logarithms not to enter about *x* = 0, choose two quite definite integrals *y*_{1}, *y*_{2} of the equation, say
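That K satisfies a hypergeometric equation is reflected in the classical evaluation K = ½π F(½, ½, 1, *h*). The sketch below (the value *h* = 0.3 and the midpoint quadrature are illustrative choices) compares the defining integral with the hypergeometric series.

```python
import math

def F(a, b, c, x, terms=400):
    """Partial sum of the hypergeometric series F(a, b, c, x)."""
    total, term = 1.0, 1.0
    for k in range(terms):
        term *= (a + k) * (b + k) / ((c + k) * (k + 1)) * x
        total += term
    return total

def K_integral(h, n=20000):
    """Midpoint rule for K = int_0^{pi/2} (1 - h sin^2 t)^(-1/2) dt."""
    w = (math.pi / 2) / n
    return w * sum((1 - h * math.sin((j + 0.5) * w) ** 2) ** -0.5
                   for j in range(n))

h = 0.3
print(K_integral(h), math.pi / 2 * F(0.5, 0.5, 1, h))
```

The two values agree closely, exhibiting K as a hypergeometric integral of the equation with *h* as independent variable.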

*y*_{1} = F(λ, μ, 1 − λ_{1}, *x*), *y*_{2} = *x*^{λ1} F(λ + λ_{1}, μ + λ_{1}, 1 + λ_{1}, *x*),

with the condition that the phase of *x* is zero when *x* is real and between 0 and 1. Then the value of ς = *y*_{2}/*y*_{1} is definite for all values of *x* in the divided plane, ς being a single-valued monogenic branch of an analytical function existing and without singularities all over this region. If, now, the values of ς that so arise be plotted on to another plane, a value *p* + *iq* of ς being represented by a point (*p*, *q*) of this ς-plane, and the value of *x* from which it arose being mentally associated with this point of the ς-plane, these points will fill a connected region therein, with a continuous boundary formed of four portions corresponding to the two sides of the two barriers of the *x*-plane. The question is then, firstly, whether the same value of ς can arise for two different values of *x*, that is, whether the same point (*p*, *q*) of the ς-plane can arise twice, or in other words, whether the region of the ς-plane overlaps itself or not. Supposing this is not so, a second part of the question presents itself. If in the *x*-plane the barrier joining −∞ to 0 be momentarily removed, and *x* describe a small circle with centre at *x* = 0 starting from a point *x* = −*h* − *ik*, where *h*, *k* are small, real, and positive, and coming back to this point, the original value ς at this point will be changed to a value σ, which in the original case did not arise for this value of *x*, and possibly not at all. If, now, after restoring the barrier the values arising by continuation from σ be similarly plotted on the ς-plane, we shall again obtain a region which, while not overlapping itself, may quite possibly overlap the former region. In that case two values of *x* would arise for the same value or values of the quotient *y*_{2}/*y*_{1}, arising from two different branches of this quotient.
We shall understand then, by the condition that *x* is to be a single-valued function of ς, that the region in the ς-plane corresponding to any branch is not to overlap itself, and that no two of the regions corresponding to the different branches are to overlap. Now in describing the circle about *x* = 0 from *x* = −*h* − *ik* to −*h* + *ik*, where *h* is small and *k* evanescent,

ς = *x*^{λ1} F(λ + λ_{1}, μ + λ_{1}, 1 + λ_{1}, *x*) / F(λ, μ, 1 − λ_{1}, *x*)

is changed to σ = ς*e*^{2πiλ1}. Thus the two portions of boundary of the ς-region corresponding to the two sides of the barrier (−∞, 0) meet (at ς = 0 if the real part of λ_{1} be positive) at an angle 2πL_{1}, where L_{1} is the absolute value of the real part of λ_{1}; the same is true for the σ-region representing the branch σ. The condition that the ς-region shall not overlap itself requires, then, L_{1} ≤ 1. But, further, we may form an infinite number of branches σ = ς*e*^{2πiλ1}, σ_{1} = σ*e*^{2πiλ1}, ... in the same way, and the corresponding regions in the plane upon which *y*_{2}/*y*_{1} is represented will have a common point and each have an angle 2πL_{1}; if neither overlaps the preceding, it will happen, if L_{1} is not zero, that at length one is reached overlapping the first, unless for some positive integer α we have 2παL_{1} = 2π, in other words L_{1} = 1/α. If this be so, the branch σ_{α−1} = ς*e*^{2πiαλ1} will be represented by a region having the angle at the common point common with the region for the branch ς; but not altogether coinciding with this last region unless λ_{1} be real, and therefore = ±1/α; then there is only a finite number, α, of branches obtainable in this way by crossing the barrier (−∞, 0). In precisely the same way, if we had begun by taking the quotient

ς′ = (*x* − 1)^{λ2} F(λ + λ_{2}, μ + λ_{2}, 1 + λ_{2}, 1 − *x*) / F(λ, μ, 1 − λ_{2}, 1 − *x*)

of the two solutions about *x* = 1, we should have found that *x* is not a single-valued function of ς′ unless λ_{2} is the inverse of an integer, or is zero; as ς′ is of the form (Aς + B)/(Cς + D), A, B, C, D constants, the same is true in our case; equally, by considering the integrals about *x* = ∞ we find, as a third condition necessary in order that *x* may be a single-valued function of ς, that λ − μ must be the inverse of an integer or be zero. These three differences of the indices, namely, λ_{1}, λ_{2}, λ − μ, are the quantities which enter in the differential equation satisfied by *x* as a function of ς, which is easily found to be

*x*_{3}/*x*_{1} − (3/2)(*x*_{2}/*x*_{1})² + ½*x*_{1}² [*h*_{1}/*x*² + *h*_{2}/(1 − *x*)² + (*h*_{3} − *h*_{1} − *h*_{2})/*x*(1 − *x*)] = 0,

where *x*_{1} = *dx*/*d*ς, &c.; and *h*_{1} = 1 − λ_{1}², *h*_{2} = 1 − λ_{2}², *h*_{3} = 1 − (λ − μ)². Into the converse question whether the three conditions are sufficient to ensure (1) that the ς-region corresponding to any branch does not overlap itself, (2) that no two such regions overlap, we have no space to enter. The second question clearly requires the inquiry whether the group (that is, the monodromy group) of the differential equation is properly discontinuous. (See Groups, Theory of.)

The foregoing account will give an idea of the nature of the function theories of differential equations; it appears essential not to exclude some explanation of a theory intimately related both to such theories and to transformation theories, which is a generalization of Galois’s theory of algebraic equations. We deal only with the application to homogeneous linear differential equations.

In general a function of variables *x*_{1}, *x*_{2}, ... is said to be rational when it can be formed from them and the integers 1, 2, 3, ... by a finite number of additions, subtractions, multiplications and divisions. We generalize this definition. (*Rationality group of a linear equation.*) Assume that we have assigned a fundamental series of quantities and functions of *x*, in which *x* itself is included, such that all quantities formed by a finite number of additions, subtractions, multiplications, divisions *and differentiations in regard to x*, of the terms of this series, are themselves members of this series. Then the quantities of this series, and only these, are called *rational*. By a rational function of quantities *p*, *q*, *r*, ... is meant a function formed from them and any of the fundamental rational quantities by a finite number of the five fundamental operations. Thus it is a function which would be called, simply, rational if the fundamental series were widened by the addition to it of the quantities *p*, *q*, *r*, ... and those derivable from them by the five fundamental operations. A rational ordinary differential equation, with *x* as independent and *y* as dependent variable, is then one which equates to zero a rational function of *y*, the order *k* of the differential equation being that of the highest differential coefficient *y*^{(k)} which enters; only such equations are here discussed. (*Irreducibility of a rational equation.*) Such an equation P = 0 is called *irreducible* when, firstly, being arranged as an integral polynomial in *y*^{(k)}, this polynomial is not the product of other polynomials in *y*^{(k)} also of rational form; and, secondly, the equation has no solution satisfying also a rational equation of lower order. From this it follows that if an irreducible equation P = 0 have one solution satisfying another rational equation Q = 0 of the same or higher order, then all the solutions of P = 0 also satisfy Q = 0.
For from the equation P = 0 we can by differentiation express *y*^{(k+1)}, *y*^{(k+2)}, ... in terms of *x*, *y*, *y*^{(1)}, ..., *y*^{(k)}, and so put the function Q rationally in terms of these quantities only. It is sufficient, then, to prove the result when the equation Q = 0 is of the same order as P = 0. Let both the equations be arranged as integral polynomials in *y*^{(k)}; their algebraic eliminant in regard to *y*^{(k)} must then vanish identically, for they are known to have one common solution not satisfying an equation of lower order; thus the equation Q = 0 is satisfied by all the solutions of P = 0.

Now let *y*^{(n)} = *a*_{1}*y*^{(n−1)} + ... + *a*_{n}*y* be a given rational homogeneous linear differential equation; let *y*_{1}, ... *y*_{n} be *n* particular functions of *x*, unconnected by any equation with constant coefficients of the form *c*_{1}*y*_{1} + ... + *c*_{n}*y*_{n} = 0, all satisfying the differential equation; let η_{1}, ... η_{n} be linear functions of *y*_{1}, ... *y*_{n}, say η_{i} = A_{i1}*y*_{1} + ... + A_{in}*y*_{n}, where the constant coefficients A_{ij} have a non-vanishing determinant; write (η) = A(*y*), these being the equations of a general linear homogeneous group whose transformations may be denoted by A, B, .... (*The variant function for a linear equation.*) We desire to form a rational function φ(η), or say φ(A(*y*)), of η_{1}, ... η_{n}, in which the *n*² constants A_{ij} shall all be essential, and not reduce effectively to a fewer number, as they would, for instance, if the *y*_{1}, ... *y*_{n} were connected by a linear equation with constant coefficients. Such a function is in fact given, if the solutions *y*_{1}, ... *y*_{n} be developable
in positive integral powers about *x* = *a*, by φ(η) = η_{1} + (*x* − *a*)^{n} η_{2} + ... + (*x* − *a*)^{(n−1)n} η_{n}. Such a function, V, we call a *variant*.

(*The resolvent equation.*) Then differentiating V in regard to *x*, and replacing η_{i}^{(n)} by its value *a*_{1}η_{i}^{(n−1)} + ... + *a*_{n}η_{i}, we can arrange *d*V/*dx*, and similarly each of *d*²V/*dx*², ... *d*^{N}V/*dx*^{N}, where N = *n*², as a linear function of the N quantities η_{1}, ... η_{n}, ... η_{1}^{(n−1)}, ... η_{n}^{(n−1)}, and thence by elimination obtain a linear differential equation for V of order N with rational coefficients. This we denote by F = 0. Further, each of η_{1}, ... η_{n} is expressible as a linear function of V, *d*V/*dx*, ... *d*^{N−1}V/*dx*^{N−1}, with rational coefficients not involving any of the *n*² coefficients A_{ij}, since otherwise V would satisfy a linear equation of order less than N, which is impossible, as it involves (linearly) the *n*² arbitrary coefficients A_{ij}, which would not enter into the coefficients of the supposed equation. In particular, *y*_{1}, ... *y*_{n} are expressible rationally as linear functions of ω, *d*ω/*dx*, ... *d*^{N−1}ω/*dx*^{N−1}, where ω is the particular function φ(*y*). Any solution W of the equation F = 0 is derivable from functions ζ_{1}, ... ζ_{n}, which are linear functions of *y*_{1}, ... *y*_{n}, just as V was derived from η_{1}, ... η_{n}; but it does not follow that these functions ζ_{1}, ... ζ_{n} are obtained from *y*_{1}, ... *y*_{n} by a transformation of the linear group A, B, ...; for it may happen that the determinant *d*(ζ_{1}, ... ζ_{n})/*d*(*y*_{1}, ... *y*_{n}) is zero. In that case ζ_{1}, ... ζ_{n} may be called a singular set, and W a singular solution; it satisfies an equation of lower than the N-th order. But every solution V, W, ordinary or singular, of the equation F = 0, is expressible rationally in terms of ω, *d*ω/*dx*, ... *d*^{N−1}ω/*dx*^{N−1}; we shall write, simply, V = *r*(ω). Consider now the rational irreducible equation of lowest order, not necessarily a linear equation, which is satisfied by ω; as *y*_{1}, ...
*y*_{n} are particular functions, it may quite well be of order less than N; we call it the *resolvent equation*, suppose it of order *p*, and denote it by γ(*v*) = 0. Upon it the whole theory turns. In the first place, as γ(*v*) = 0 is satisfied by the solution ω of F = 0, all the solutions of γ(*v*) = 0 are solutions of F = 0, and are therefore rationally expressible by ω; any one may then be denoted by *r*(ω). If this solution of F = 0 be not singular, it corresponds to a transformation A of the linear group (A, B, ...), effected upon *y*_{1}, ... *y*_{n}. The coefficients A_{ij} of this transformation follow from the expressions before mentioned for η_{1}, ... η_{n} in terms of V, *d*V/*dx*, *d*²V/*dx*², ... by substituting V = *r*(ω); thus they depend on the *p* arbitrary parameters which enter into the general expression for the integral of the equation γ(*v*) = 0. Without going into further details, it is then clear enough that the resolvent equation, being irreducible and such that any solution is expressible rationally, with *p* parameters, in terms of the solution ω, enables us to define a linear homogeneous group of transformations of *y*_{1}, ... *y*_{n} depending on *p* parameters; and every operation of this (continuous) group corresponds to a rational transformation of the solution of the resolvent equation. This is the group called the *rationality group*, or the *group of transformations* of the original homogeneous linear differential equation.

The group must not be confounded with a subgroup of itself, the *monodromy group* of the equation, often called simply the group of the equation, which is a set of transformations, not depending on arbitrary variable parameters, arising for one particular fundamental set of solutions of the linear equation (see Groups, Theory of).

The importance of the rationality group consists in three propositions. (1) Any rational function of *y*_{1}, ... *y*_{n} which is unaltered in value by the transformations of the group can be written in rational form. (2) If any rational function be changed in form, becoming a rational function of *y*_{1}, ... *y*_{n}, a transformation of the group applied to its new form will leave its value unaltered. (3) Any homogeneous linear transformation leaving unaltered the value of every rational function of *y*_{1}, ... *y*_{n} which has a rational value, belongs to the group. It follows from these that any group of linear homogeneous transformations having the properties (1) and (2) is identical with the group in question. It is clear that with these properties the group must be of the greatest importance in attempting to discover what functions of *x* must be regarded as rational in order that the values of *y*_{1}, ... *y*_{n} may be expressed. And this is the problem of solving the equation from another point of view.
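A trivial case, supplied here by way of illustration and not part of the original argument, shows the propositions at work when *n* = 1. Take the equation 2*x*(*dy*₁/*dx*) = *y*₁, whose solution is *y*₁ = *x*^{1/2}:

```latex
% Rationality group of 2x y' = y (illustrative example, n = 1).
\begin{aligned}
&2x\frac{dy_1}{dx}=y_1,\qquad y_1=x^{1/2};\\
&y_1^2=x \text{ has rational value, so a transformation } y_1\mapsto c\,y_1
\text{ of the group}\\
&\qquad\text{must leave } y_1^2 \text{ unaltered, whence } c^2=1;\\
&\text{the rationality group is } \{\,y_1\mapsto y_1,\ \ y_1\mapsto -y_1\,\}.
\end{aligned}
```

By contrast, for *dy*₁/*dx* = *y*₁, with *y*₁ = *e*^{x}, no rational function of *y*₁ other than a constant has a rational value, so that by proposition (3) every transformation *y*₁ → *cy*₁ (*c* ≠ 0) belongs to the group, which is thus the whole multiplicative group in one variable.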

Literature.—(α) *Formal or Transformation Theories for Equations of the First Order*:—E. Goursat, *Leçons sur l’intégration des équations aux dérivées partielles du premier ordre* (Paris, 1891); E. v. Weber, *Vorlesungen über das Pfaff’sche Problem und die Theorie der partiellen Differentialgleichungen erster Ordnung* (Leipzig, 1900); S. Lie und G. Scheffers, *Geometrie der Berührungstransformationen*, Bd. i. (Leipzig, 1896); Forsyth, *Theory of Differential Equations, Part i., Exact Equations and Pfaff’s Problem* (Cambridge, 1890); S. Lie, “Allgemeine Untersuchungen über Differentialgleichungen, die eine continuirliche endliche Gruppe gestatten” (Memoir), *Mathem. Annal.* xxv. (1885), pp. 71-151; S. Lie und G. Scheffers, *Vorlesungen über Differentialgleichungen mit bekannten infinitesimalen Transformationen* (Leipzig, 1891). A very full bibliography is given in the book of E. v. Weber referred to; those here named are perhaps sufficiently representative of modern works. Of classical works may be named: Jacobi, *Vorlesungen über Dynamik* (von A. Clebsch, Berlin, 1866); *Werke, Supplementband*; G. Monge, *Application de l’analyse à la géométrie* (par M. Liouville, Paris, 1850); J. L. Lagrange, *Leçons sur le calcul des fonctions* (Paris, 1806), and *Théorie des fonctions analytiques* (Paris, Prairial, an V); G. Boole, *A Treatise on Differential Equations* (London, 1859); and *Supplementary Volume* (London, 1865); Darboux, *Leçons sur la théorie générale des surfaces*, tt. i.-iv. (Paris, 1887-1896); S. Lie, *Theorie der Transformationsgruppen*, ii. (on Contact Transformations) (Leipzig, 1890).

(β) *Quantitative or Function Theories for Linear Equations*:—C. Jordan, *Cours d’analyse*, t. iii. (Paris, 1896); E. Picard, *Traité d’analyse*, tt. ii. and iii. (Paris, 1893, 1896); Fuchs, various memoirs, beginning with that in *Crelle’s Journal*, Bd. lxvi. p. 121; Riemann, *Werke*, 2^{te} Aufl. (1892); Schlesinger, *Handbuch der Theorie der linearen Differentialgleichungen*, Bde. i.-ii. (Leipzig, 1895-1898); Heffter, *Einleitung in die Theorie der linearen Differentialgleichungen mit einer unabhängigen Variablen* (Leipzig, 1894); Klein, *Vorlesungen über lineare Differentialgleichungen der zweiten Ordnung* (Autographed, Göttingen, 1894); and *Vorlesungen über die hypergeometrische Function* (Autographed, Göttingen, 1894); Forsyth, *Theory of Differential Equations, Linear Equations*.

(γ) *Rationality Group (of Linear Differential Equations)*:—Picard, *Traité d’Analyse*, as above, t. iii.; Vessiot, *Annales de l’École Normale*, série III. t. ix. p. 199 (Memoir); S. Lie, *Transformationsgruppen*, as above, iii. A connected account is given in Schlesinger, as above, Bd. ii., erster Theil.

(δ) *Function Theories of Non-Linear Ordinary Equations*:—Painlevé, *Leçons sur la théorie analytique des équations différentielles* (Paris, 1897, Autographed); Forsyth, *Theory of Differential Equations, Part ii., Ordinary Equations not Linear* (two volumes, ii. and iii.) (Cambridge, 1900); Königsberger, *Lehrbuch der Theorie der Differentialgleichungen* (Leipzig, 1889); Painlevé, *Leçons sur l’intégration des équations différentielles de la mécanique et applications* (Paris, 1895).

(ε) *Formal Theories of Partial Equations of the Second and Higher Orders*:—E. Goursat, *Leçons sur l’intégration des équations aux dérivées partielles du second ordre*, tt. i. and ii. (Paris, 1896, 1898); Forsyth, *Treatise on Differential Equations* (London, 1889); and *Phil. Trans. Roy. Soc.* (A.), vol. cxci. (1898), pp. 1-86.

(ζ) See also the six extensive articles in the second volume of the German *Encyclopaedia of Mathematics*. (H. F. Ba.)