1911 Encyclopædia Britannica/Infinitesimal Calculus/Outlines 2


III. Outlines of the Infinitesimal Calculus (§40-46)

40. Let u or ƒ(x, y) denote a function of two variables x and y. If we regard y as constant, u or ƒ becomes a function of one variable x, and we may seek to differentiate it with respect to x. If the function of x is differentiable, the differential coefficient which is formed in this way is called the “partial differential coefficient” of u or ƒ with respect to x, and is denoted by ∂u/∂x or ∂ƒ/∂x. The symbol “∂” was appropriated for partial differentiation by C. G. J. Jacobi (1841). It had before been written indifferently with “d” as a symbol of differentiation. Euler had written (dƒ/dx) for the partial differential coefficient of ƒ with respect to x. Sometimes it is desirable to put in evidence the variable which is treated as constant, and then the partial differential coefficient is written “(∂u/∂x)y” or “(∂ƒ/∂x)y”. This course is often adopted by writers on Thermodynamics. Sometimes the symbols d or ∂ are dropped, and the partial differential coefficient is denoted by ux or ƒx. As a definition of the partial differential coefficient we have the formula

∂u/∂x = lim_(Δx→0) {ƒ(x + Δx, y) − ƒ(x, y)}/Δx.

In the same way we may form the partial differential coefficient with respect to y by treating x as a constant.
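The definition may be illustrated by a short computation in a modern computer-algebra system (an addition to this text; the function ƒ is an arbitrary illustrative choice):

```python
# A minimal SymPy sketch of partial differentiation: each coefficient is
# formed by treating the other variable as a constant.
import sympy as sp

x, y = sp.symbols('x y')
f = x**2 * y + sp.sin(x * y)   # an arbitrary illustrative function

print(sp.diff(f, x))   # 2*x*y + y*cos(x*y), i.e. df/dx with y constant
print(sp.diff(f, y))   # x**2 + x*cos(x*y), i.e. df/dy with x constant
```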

The introduction of partial differential coefficients enables us to solve at once for a surface a problem analogous to the problem of tangents for a curve; and it also enables us to take the first step in the solution of the problem of maxima and minima for a function of several variables. If the equation of a surface is expressed in the form z = ƒ(x, y), the direction cosines of the normal to the surface at any point are in the ratios ∂ƒ/∂x : ∂ƒ/∂y : −1. If ƒ is a maximum or a minimum at (x, y), then ∂ƒ/∂x and ∂ƒ/∂y vanish at that point.

In applications of the differential calculus to mathematical physics we are in general concerned with functions of three variables x, y, z, which represent the coordinates of a point; and then considerable importance attaches to partial differential coefficients which are formed by a particular rule. Let F(x, y, z) be the function, P a point (x, y, z), P′ a neighbouring point (x + Δx, y + Δy, z + Δz), and let Δs be the length of PP′. The value of F(x, y, z) at P may be denoted shortly by F(P). A limit of the same nature as a partial differential coefficient is expressed by the formula

lim_(Δs→0) {F(P′) − F(P)}/Δs,

in which Δs is diminished indefinitely by bringing P′ up to P, and P′ is supposed to approach P along a straight line, for example, the tangent to a curve or the normal to a surface. The limit in question is denoted by ∂F/∂h, in which it is understood that h indicates a direction, that of PP′. If l, m, n are the direction cosines of the limiting direction of the line PP′, supposed drawn from P to P′, then

∂F/∂h = l(∂F/∂x) + m(∂F/∂y) + n(∂F/∂z).

The operation of forming ∂F/∂h is called “differentiation with respect to an axis” or “vector differentiation.”
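The rule may be checked with a modern sketch (the function F and the symbolic direction cosines are illustrative choices):

```python
# A SymPy sketch of differentiation with respect to an axis: for direction
# cosines (l, m, n), dF/dh = l*dF/dx + m*dF/dy + n*dF/dz.
import sympy as sp

x, y, z, l, m, n = sp.symbols('x y z l m n')
F = x**2 + y**2 + z**2   # illustrative: squared distance from the origin

dF_dh = l*sp.diff(F, x) + m*sp.diff(F, y) + n*sp.diff(F, z)
print(dF_dh)   # 2*l*x + 2*m*y + 2*n*z
```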

41. The most important theorem in regard to partial differential coefficients is the theorem of the total differential. We may write down the equation

ƒ(a + h, b + k) − ƒ(a, b) = {ƒ(a + h, b + k) − ƒ(a, b + k)} + {ƒ(a, b + k) − ƒ(a, b)}.

If ƒx is a continuous function of x when x lies between a and a + h and y = b + k, and if further ƒy is a continuous function of y when y lies between b and b + k, there exist values of θ and η which lie between 0 and 1 and have the properties expressed by the equations

ƒ(a + h, b + k) − ƒ(a, b + k) = hƒx(a + θh, b + k),
ƒ(a, b + k) − ƒ(a, b) = kƒy(a, b + ηk).

Further, ƒx(a + θh, b + k) and ƒy(a, b + ηk) tend to the limits ƒx(a, b) and ƒy(a, b) when h and k tend to zero, provided the differential coefficients ƒx, ƒy are continuous at the point (a, b). Hence in this case the above equation can be written

ƒ(a + h, b + k) − ƒ(a, b) = h{ƒx(a, b) + ε1} + k{ƒy(a, b) + ε2},

where

ε1 and ε2 tend to zero as h and k tend to zero.

In accordance with the notation of differentials this equation gives

dƒ = (∂ƒ/∂x)dx + (∂ƒ/∂y)dy.

Just as in the case of functions of one variable, dx and dy are arbitrary finite differences, and dƒ is not the difference of two values of ƒ, but is so much of this difference as need be retained for the purpose of forming differential coefficients.
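The distinction may be made concrete by a modern sketch: for an illustrative ƒ, the increment of ƒ differs from the total differential only by terms of higher order in the finite differences dx, dy:

```python
# A SymPy sketch of the theorem of the total differential.
import sympy as sp

x, y, dx, dy = sp.symbols('x y dx dy')
f = x**2 * y   # an arbitrary illustrative function

df = sp.diff(f, x)*dx + sp.diff(f, y)*dy        # 2*x*y*dx + x**2*dy
increment = f.subs({x: x + dx, y: y + dy}) - f  # the exact difference
print(sp.expand(increment - df))   # only second- and third-order terms remain
```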

The theorem of the total differential is immediately applicable to the differentiation of implicit functions. When y is a function of x which is given by an equation of the form ƒ(x, y) = 0, and it is either impossible or inconvenient to solve this equation so as to express y as an explicit function of x, the differential coefficient dy/dx can be formed without solving the equation. We have at once

dy/dx = −(∂ƒ/∂x) / (∂ƒ/∂y).

This rule was known, in all essentials, to Fermat and de Sluse before the invention of the algorithm of the differential calculus.
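The rule may be illustrated as follows (a modern sketch; the circle is an arbitrary example, and idiff is SymPy's built-in form of the same rule):

```python
# A sketch of dy/dx = -(df/dx)/(df/dy) for an implicit f(x, y) = 0,
# illustrated on the circle x**2 + y**2 - 1 = 0.
import sympy as sp

x, y = sp.symbols('x y')
f = x**2 + y**2 - 1

print(-sp.diff(f, x)/sp.diff(f, y))   # -x/y
print(sp.idiff(f, y, x))              # -x/y, SymPy's built-in equivalent
```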

An important theorem, first proved by Euler, is immediately deducible from the theorem of the total differential. If ƒ(x, y) is a homogeneous function of degree n then

x(∂ƒ/∂x) + y(∂ƒ/∂y) = nƒ.

The theorem is applicable to functions of any number of variables and is generally known as Euler’s theorem of homogeneous functions.
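A modern sketch verifies the theorem for an illustrative homogeneous function of degree 3:

```python
# A SymPy sketch of Euler's theorem of homogeneous functions:
# x*df/dx + y*df/dy = n*f for f homogeneous of degree n (here n = 3).
import sympy as sp

x, y = sp.symbols('x y')
f = x**3 + x*y**2   # homogeneous of degree 3

print(sp.simplify(x*sp.diff(f, x) + y*sp.diff(f, y) - 3*f))   # 0
```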

42. Many problems in which partial differential coefficients occur are simplified by the introduction of certain determinants called “Jacobians” or “functional determinants.” They were introduced into Analysis by C. G. J. Jacobi (J. f. Math., Crelle, Bd. 22, 1841, p. 319). The Jacobian of u1, u2, . . . un with respect to x1, x2, . . . xn is the determinant

| ∂u1/∂x1   ∂u1/∂x2   . . .   ∂u1/∂xn |
| ∂u2/∂x1   ∂u2/∂x2   . . .   ∂u2/∂xn |
| . . . . . . . . . . . . . . . . . . |
| ∂un/∂x1   ∂un/∂x2   . . .   ∂un/∂xn |

in which the constituents of the rth row are the n partial differential coefficients of ur with respect to the n variables x. This determinant is expressed shortly by

∂(u1, u2, . . . un) / ∂(x1, x2, . . . xn).

Jacobians possess many properties analogous to those of ordinary differential coefficients, for example, the following:—

∂(u, v)/∂(x, y) · ∂(x, y)/∂(u, v) = 1,

∂(u, v)/∂(x, y) · ∂(x, y)/∂(ξ, η) = ∂(u, v)/∂(ξ, η).

If n functions (u1, u2, . . . un) of n variables (x1, x2, . . . , xn) are not independent, but are connected by a relation ƒ(u1, u2, . . . un) = 0, then

∂(u1, u2, . . . un) / ∂(x1, x2, . . . xn) = 0,

and, conversely, when this condition is satisfied identically the functions u1, u2 . . . , un are not independent.
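The reciprocal property noted above may be checked on the familiar polar transformation (a modern illustrative sketch):

```python
# A SymPy sketch of a Jacobian and its reciprocal property, using
# x = r*cos(theta), y = r*sin(theta) as an illustration.
import sympy as sp

r, th = sp.symbols('r theta', positive=True)
x = r*sp.cos(th)
y = r*sp.sin(th)

J = sp.Matrix([x, y]).jacobian([r, th])
print(sp.simplify(J.det()))         # r:   d(x, y)/d(r, theta)
print(sp.simplify(J.inv().det()))   # 1/r: the reciprocal Jacobian,
                                    # so the product of the two is 1
```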

43. Partial differential coefficients of the second and higher orders can be formed in the same way as those of the first order. For example, when there are two variables x, y, the first partial derivatives ∂ƒ/∂x and ∂ƒ/∂y are functions of x and y, which we may seek to differentiate partially with respect to x or y. The most important theorem in relation to partial differential coefficients of orders higher than the first is the theorem that the values of such coefficients do not depend upon the order in which the differentiations are performed. For example, we have the equation

∂²ƒ/∂x∂y = ∂²ƒ/∂y∂x.   (i.)

This theorem is not true without limitation. The conditions for its validity have been investigated very completely by H. A. Schwarz (see his Ges. math. Abhandlungen, Bd. 2, Berlin, 1890, p. 275). It is a sufficient, though not a necessary, condition that all the differential coefficients concerned should be continuous functions of x, y. In consequence of the relation (i.) the differential coefficients expressed in the two members of this relation are written

∂²ƒ/∂x∂y   or   ƒxy.
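A modern sketch checks equation (i.) on a function whose differential coefficients are continuous everywhere, so that the theorem applies:

```python
# A SymPy sketch of the interchange of the order of differentiations.
import sympy as sp

x, y = sp.symbols('x y')
f = sp.exp(x*y)*sp.sin(x)   # an arbitrary smooth illustrative function

print(sp.simplify(sp.diff(f, x, y) - sp.diff(f, y, x)))   # 0
```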

The differential coefficient

∂^nƒ / ∂x^p ∂y^q ∂z^r,

in which p+q+r = n, is formed by differentiating p times with respect to x, q times with respect to y, r times with respect to z, the differentiations being performed in any order. Abbreviated notations are sometimes used in such forms as

ƒ_(x^p y^q z^r)   or   D_x^p D_y^q D_z^r ƒ.

Differentials of higher orders are introduced by the defining equation

d^nƒ = (dx·∂/∂x + dy·∂/∂y)^n ƒ,

in which the expression (dx·∂/∂x + dy·∂/∂y)^n is developed by the binomial theorem in the same way as if dx·∂/∂x and dy·∂/∂y were numbers, and (∂/∂x)^r·(∂/∂y)^(n−r) ƒ is replaced by ∂^nƒ/∂x^r∂y^(n−r). When there are more than two variables the multinomial theorem must be used instead of the binomial theorem.

The problem of forming the second and higher differential coefficients of implicit functions can be solved at once by means of partial differential coefficients. For example, if ƒ(x, y) = 0 is the equation defining y as a function of x, we have

dy/dx = −ƒx/ƒy,   d²y/dx² = −(ƒxx ƒy² − 2ƒxy ƒx ƒy + ƒyy ƒx²)/ƒy³.

The differential expression Xdx + Ydy, in which both X and Y are functions of the two variables x and y, is a total differential if there exists a function ƒ of x and y which is such that

∂ƒ/∂x = X,   ∂ƒ/∂y = Y.

When this is the case we have the relation

∂X/∂y = ∂Y/∂x.   (ii.)

Conversely, when this equation is satisfied there exists a function ƒ which is such that

∂ƒ/∂x = X,   ∂ƒ/∂y = Y.

The expression Xdx + Ydy in which X and Y are connected by the relation (ii.) is often described as a “perfect differential.” The theory of the perfect differential can be extended to functions of n variables, and in this case there are ½n(n − 1) such relations as (ii.).
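The test (ii.) and the recovery of ƒ may be illustrated thus (a modern sketch; X and Y are arbitrary choices satisfying the relation):

```python
# A SymPy sketch: test X dx + Y dy for a perfect differential via
# relation (ii.), then recover f.
import sympy as sp

x, y = sp.symbols('x y')
X = 2*x*y + 1
Y = x**2 + 3*y**2

assert sp.diff(X, y) == sp.diff(Y, x)    # relation (ii.) is satisfied

f = sp.integrate(X, x)                   # x**2*y + x, up to a g(y)
f += sp.integrate(Y - sp.diff(f, y), y)  # fix g(y) = y**3
print(f)                                 # x**2*y + x + y**3
```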

In the case of a function of two variables x, y an abbreviated notation is often adopted for differential coefficients. The function being denoted by z, we write

p, q, r, s, t   for   ∂z/∂x, ∂z/∂y, ∂²z/∂x², ∂²z/∂x∂y, ∂²z/∂y².

Partial differential coefficients of the second order are important in geometry as expressing the curvature of surfaces. When a surface is given by an equation of the form z = ƒ(x, y), the lines of curvature are determined by the equation

| dy²      −dx dy   dx²    |
| 1 + p²   pq       1 + q² |  = 0,
| r        s        t      |

and the principal radii of curvature are the values of R which satisfy the equation

(rt − s²)R² − {(1 + q²)r − 2pqs + (1 + p²)t}√(1 + p² + q²)·R + (1 + p² + q²)² = 0.
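A modern sketch applies this equation to an illustrative surface, the paraboloid z = ½(x² + y²), at the origin:

```python
# A SymPy sketch of the equation for the principal radii of curvature.
import sympy as sp

x, y, R = sp.symbols('x y R')
z = (x**2 + y**2)/2                  # an illustrative paraboloid
p, q = sp.diff(z, x), sp.diff(z, y)
r, s, t = sp.diff(z, x, 2), sp.diff(z, x, y), sp.diff(z, y, 2)
w2 = 1 + p**2 + q**2

eq = (r*t - s**2)*R**2 - sp.sqrt(w2)*((1 + q**2)*r - 2*p*q*s + (1 + p**2)*t)*R + w2**2
print(sp.solve(eq.subs({x: 0, y: 0}), R))   # [1]: both radii equal 1 here
```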

44. The problem of change of variables was first considered by Brook Taylor in his Methodus incrementorum. In the case considered by Taylor y is expressed as a function of z, and z as a function of x, and it is desired to express the differential coefficients of y with respect to x without eliminating z. The result can be obtained at once by the rules for differentiating a product and a function of a function. We have

dy/dx = (dy/dz)(dz/dx),
d²y/dx² = (d²y/dz²)(dz/dx)² + (dy/dz)(d²z/dx²),
d³y/dx³ = (d³y/dz³)(dz/dx)³ + 3(d²y/dz²)(dz/dx)(d²z/dx²) + (dy/dz)(d³z/dx³),
. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .
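These formulae may be checked with a modern computer-algebra sketch (the functions y = z³ and z = sin x are arbitrary illustrative choices):

```python
# A SymPy sketch of Taylor's change of variable: check
# d2y/dx2 = (d2y/dz2)*(dz/dx)**2 + (dy/dz)*(d2z/dx2).
import sympy as sp

x, z = sp.symbols('x z')
z_of_x = sp.sin(x)   # z as a function of x (illustrative)
y_of_z = z**3        # y as a function of z (illustrative)

direct = sp.diff(y_of_z.subs(z, z_of_x), x, 2)
rule = (sp.diff(y_of_z, z, 2).subs(z, z_of_x)*sp.diff(z_of_x, x)**2
        + sp.diff(y_of_z, z).subs(z, z_of_x)*sp.diff(z_of_x, x, 2))
print(sp.simplify(direct - rule))   # 0
```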

The introduction of partial differential coefficients enables us to deal with more general cases of change of variables than that considered above. If u, v are new variables, and x, y are connected with them by equations of the type

x = φ(u, v),   y = ψ(u, v),   (i.)

while y is either an explicit or an implicit function of x, we have the problem of expressing the differential coefficients of various orders of y with respect to x in terms of the differential coefficients of v with respect to u. We have

dy/dx = {∂ψ/∂u + (∂ψ/∂v)(dv/du)} / {∂φ/∂u + (∂φ/∂v)(dv/du)}

by the rule of the total differential. In the same way, by means of differentials of higher orders, we may express d²y/dx², and so on.

Equations such as (i.) may be interpreted as effecting a transformation by which a point (u, v) is made to correspond to a point (x, y). The whole theory of transformations, and of functions, or differential expressions, which remain invariant under groups of transformations, has been studied exhaustively by Sophus Lie (see, in particular, his Theorie der Transformationsgruppen, Leipzig, 1888–1893). (See also Differential Equations and Groups).

A more general problem of change of variables is presented when it is desired to express the partial differential coefficients of a function V with respect to x, y, . . . in terms of those with respect to u, v, . . ., where u, v, . . . are connected with x, y, . . . by any functional relations. When there are two variables x, y, and u, v are given functions of x, y, we have

∂V/∂x = (∂V/∂u)(∂u/∂x) + (∂V/∂v)(∂v/∂x),

∂V/∂y = (∂V/∂u)(∂u/∂y) + (∂V/∂v)(∂v/∂y),

and the differential coefficients of higher orders are to be formed by repeated applications of the rule for differentiating a product and the rules of the type

∂/∂x = (∂u/∂x)(∂/∂u) + (∂v/∂x)(∂/∂v).

When x, y are given functions of u, v, . . . we have, instead of the above, such equations as

∂V/∂u = (∂V/∂x)(∂x/∂u) + (∂V/∂y)(∂y/∂u),   ∂V/∂v = (∂V/∂x)(∂x/∂v) + (∂V/∂y)(∂y/∂v),

and ∂V/∂x, ∂V/∂y can be found by solving these equations, provided the Jacobian ∂(x, y)/∂(u, v) is not zero. The generalization of this method for the case of more than two variables need not detain us.
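A modern sketch checks the rule ∂V/∂x = (∂V/∂u)(∂u/∂x) + (∂V/∂v)(∂v/∂x) for the illustrative transformation u = x + y, v = x − y:

```python
# A SymPy sketch of the chain rule for a change of variables.
import sympy as sp

x, y, u, v = sp.symbols('x y u v')
V = u**2 * v               # V given as a function of u, v (illustrative)
u_e, v_e = x + y, x - y    # the transformation

chain = (sp.diff(V, u)*sp.diff(u_e, x)
         + sp.diff(V, v)*sp.diff(v_e, x)).subs({u: u_e, v: v_e})
direct = sp.diff(V.subs({u: u_e, v: v_e}), x)
print(sp.simplify(direct - chain))   # 0
```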

In cases like that here considered it is sometimes more convenient not to regard the equations connecting x, y with u, v as effecting a point transformation, but to consider the loci u = const., v = const. as two “families” of curves. Then in any region of the plane of (x, y) in which the Jacobian ∂(x, y)/∂(u, v) does not vanish or become infinite, any point (x, y) is uniquely determined by the values of u and v which belong to the curves of the two families that pass through the point. Such variables as u, v are then described as “curvilinear coordinates” of the point. This method is applicable to any number of variables. When the loci u = const., . . . intersect each other at right angles, the variables are “orthogonal” curvilinear coordinates. Three-dimensional systems of such coordinates have important applications in mathematical physics. Reference may be made to G. Lamé, Leçons sur les coordonnées curvilignes (Paris, 1859), and to G. Darboux, Leçons sur les coordonnées curvilignes et systèmes orthogonaux (Paris, 1898).

When such a coordinate as u is connected with x and y by a functional relation of the form ƒ(x, y, u) = 0 the curves u = const. are a family of curves, and this family may be such that no two curves of the family have a common point. When this is not the case the points in which a curve ƒ(x, y, u) = 0 is intersected by a curve ƒ(x, y, u + Δu) = 0 tend to limiting positions as Δu is diminished indefinitely. The locus of these limiting positions is the “envelope” of the family, and in general it touches all the curves of the family. It is easy to see that, if u, v are the parameters of two families of curves which have envelopes, the Jacobian ∂(x, y)/∂(u, v) vanishes at all points on these envelopes. It is easy to see also that at any point where the reciprocal Jacobian ∂(u, v)/∂(x, y) vanishes, a curve of the family u touches a curve of the family v.

If three variables x, y, z are connected by a functional relation ƒ(x, y, z) = 0, one of them, z say, may be regarded as an implicit function of the other two, and the partial differential coefficients of z with respect to x and y can be formed by the rule of the total differential. We have

∂z/∂x = −(∂ƒ/∂x)/(∂ƒ/∂z),   ∂z/∂y = −(∂ƒ/∂y)/(∂ƒ/∂z),

and there is no difficulty in proceeding to express the higher differential coefficients. There arises the problem of expressing the partial differential coefficients of x with respect to y and z in terms of those of z with respect to x and y. The problem is known as that of “changing the dependent variable.” It is solved by applying the rule of the total differential. Similar considerations are applicable to all cases in which n variables are connected by fewer than n equations.
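The rule may be illustrated on the unit sphere (a modern sketch):

```python
# A SymPy sketch of dz/dx = -(df/dx)/(df/dz), dz/dy = -(df/dy)/(df/dz)
# for an implicit f(x, y, z) = 0.
import sympy as sp

x, y, z = sp.symbols('x y z')
f = x**2 + y**2 + z**2 - 1   # the unit sphere, as an illustration

print(-sp.diff(f, x)/sp.diff(f, z))   # -x/z
print(-sp.diff(f, y)/sp.diff(f, z))   # -y/z
```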

45. Taylor’s theorem can be extended to functions of several variables. In the case of two variables the general formula, with a remainder after n terms, can be written most simply in the form

ƒ(a + h, b + k) = ƒ(a, b) + dƒ + d²ƒ/2! + . . . + d^(n−1)ƒ/(n − 1)! + Rn,

in which

d^rƒ = (h·∂/∂x + k·∂/∂y)^r ƒ(x, y), taken at x = a, y = b,

and

Rn = (1/n!)(h·∂/∂x + k·∂/∂y)^n ƒ(x, y), taken at x = a + θh, y = b + θk.

The last expression is the remainder after n terms, and in it θ denotes some particular number between 0 and 1. The results for three or more variables can be written in the same form. The extension of Taylor’s theorem was given by Lagrange (1797); the form written above is due to Cauchy (1823). For the validity of the theorem in this form it is necessary that all the differential coefficients up to the nth should be continuous in a region bounded by x = a ± h, y = b ± k. When all the differential coefficients, no matter how high the order, are continuous in such a region, the theorem leads to an expansion of the function in a multiple power series. Such expansions are just as important in analysis, geometry and mechanics as expansions of functions of one variable. Among the problems which are solved by means of such expansions are the problem of maxima and minima for functions of more than one variable (see Maxima and Minima).
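A modern sketch forms the first terms of such an expansion by repeated application of the operator h·∂/∂x + k·∂/∂y (the function and the point (0, 0) are illustrative choices):

```python
# A SymPy sketch of the double Taylor series of f(a + h, b + k), built
# term by term with (h*d/dx + k*d/dy)**r / r!.
import sympy as sp

x, y, h, k = sp.symbols('x y h k')
f = sp.exp(x)*sp.cos(y)   # an arbitrary illustrative function

def term(r):
    """(1/r!) * (h*d/dx + k*d/dy)^r f, expanded by the binomial theorem."""
    return sum(sp.binomial(r, i)*h**i*k**(r - i)*sp.diff(f, (x, i), (y, r - i))
               for i in range(r + 1))/sp.factorial(r)

poly = sum(term(r) for r in range(3)).subs({x: 0, y: 0})
print(sp.expand(poly))   # 1 + h + h**2/2 - k**2/2 (up to ordering)
```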

46. In treatises on the differential calculus much space is usually devoted to the differential geometry of curves and surfaces. A few remarks and results relating to the differential geometry of plane curves are set down here.

(i.) If ψ denotes the angle which the radius vector drawn from the origin makes with the tangent to a curve at a point whose polar coordinates are r, θ and if p denotes the perpendicular from the origin to the tangent, then

cos ψ = dr/ds,   sin ψ = rdθ/ds = p/r,

where ds denotes the element of arc. The curve may be determined by an equation connecting p with r.

(ii.) The locus of the foot of the perpendicular let fall from the origin upon the tangent to a curve at a point is called the pedal of the curve with respect to the origin. The angle ψ for the pedal is the same as the angle ψ for the curve. Hence the (p, r) equation of the pedal can be deduced. If the pedal is regarded as the primary curve, the curve of which it is the pedal is the “negative pedal” of the primary. We may have pedals of pedals and so on, also negative pedals of negative pedals and so on. Negative pedals are usually determined as envelopes.

(iii.) If φ denotes the angle which the tangent at any point makes with a fixed line, we have

r² = p² + (dp/dφ)².

(iv.) The “average curvature” of the arc Δs of a curve between two points is measured by the quotient

|Δφ/Δs|,

where the upright lines denote, as usual, that the absolute value of the included expression is to be taken, and φ is the angle which the tangent makes with a fixed line, so that Δφ is the angle between the tangents (or normals) at the points. As one of the points moves up to coincidence with the other this average curvature tends to a limit which is the “curvature” of the curve at the point. It is denoted by

|dφ/ds|.

Sometimes the upright lines are omitted and a rule of signs is given:—Let the arc s of the curve be measured from some point along the curve in a chosen sense, and let the normal be drawn towards that side to which the curve is concave; if the normal is directed towards the left of an observer looking along the tangent in the chosen sense of description the curvature is reckoned positive, in the contrary case negative. The differential dφ is often called the “angle of contingence.” In the 14th century the size of the angle between a curve and its tangent seems to have been seriously debated, and the name “angle of contingence” was then given to the supposed angle.

(v.) The curvature of a curve at a point is the same as that of a certain circle which touches the curve at the point, and the “radius of curvature” ρ is the radius of this circle. We have ρ = ds/dφ. The centre of the circle is called the “centre of curvature”; it is the limiting position of the point of intersection of the normal at the point and the normal at a neighbouring point, when the second point moves up to coincidence with the first. If a circle is described to intersect the curve at the point P and at two other points, and one of these two points is moved up to coincidence with P, the circle touches the curve at the point P and meets it in another point; the centre of the circle is then on the normal. As the third point now moves up to coincidence with P, the centre of the circle moves to the centre of curvature. The circle is then said to “osculate” the curve, or to have “contact of the second order” with it at P.

(vi.) The following are formulae for the radius of curvature:—

ρ = {1 + (dy/dx)²}^(3/2) / (d²y/dx²),

ρ = r(dr/dp).
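The first formula may be applied to the parabola y = x² (a modern illustrative sketch):

```python
# A SymPy sketch of rho = {1 + (dy/dx)**2}**(3/2) / (d2y/dx2).
import sympy as sp

x = sp.symbols('x')
y = x**2   # the parabola, as an illustration

rho = (1 + sp.diff(y, x)**2)**sp.Rational(3, 2)/sp.diff(y, x, 2)
print(rho.subs(x, 0))   # 1/2, the radius of curvature at the vertex
```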

(vii.) The points at which the curvature vanishes are “points of inflection.” If P is a point of inflection and Q a neighbouring point, then, as Q moves up to coincidence with P, the distance from P to the point of intersection of the normals at P and Q becomes greater than any distance that can be assigned. The equation which gives the abscissae of the points in which a straight line meets the curve being expressed in the form ƒ(x) = 0, the function ƒ(x) has a factor (x − x0)³, where x0 is the abscissa of the point of inflection P, and the line is the tangent at P. When the factor (x − x0) occurs (n + 1) times in ƒ(x), the curve is said to have “contact of the nth order” with the line. There is an obvious modification when the line is parallel to the axis of y.

(viii.) The locus of the centres of curvature, or envelope of the normals, of a curve is called the “evolute.” A curve which has a given curve as evolute is called an “involute” of the given curve. All the involutes are “parallel” curves, that is to say, they are such that one is derived from another by marking off a constant distance along the normal. The involutes are “orthogonal trajectories” of the tangents to the common evolute.

(ix.) The equation of an algebraic curve of the nth degree can be expressed in the form u0 + u1 + u2 + . . . + un = 0, where u0 is a constant, and ur is a homogeneous rational integral function of x, y of the rth degree. When the origin is on the curve, u0 vanishes, and u1 = 0 represents the tangent at the origin. If u1 also vanishes, the origin is a double point and u2 = 0 represents the tangents at the origin. If u2 has distinct factors, or is of the form a(y − p1x)(y − p2x), the value of y on either branch of the curve can be expressed (for points sufficiently near the origin) in a power series, which is either

y = p1x + q1x² + . . .   or   y = p2x + q2x² + . . .,

where q1, . . . and q2, . . . are determined without ambiguity. If p1 and p2 are real the two branches have radii of curvature ρ1, ρ2 determined by the formulae

ρ1 = (1 + p1²)^(3/2) / 2q1,   ρ2 = (1 + p2²)^(3/2) / 2q2.

When p1 and p2 are imaginary the origin is the real point of intersection of two imaginary branches. In the real figure of the curve it is an isolated point. If u2 is a square, a(y − px)², the origin is a cusp, and in general there is not a series for y in integral powers of x which is valid in the neighbourhood of the origin. The further investigation of cusps and multiple points belongs rather to analytical geometry and the theory of algebraic functions than to the differential calculus.

(x.) When the equation of a curve is given in the form u0 + u1 + . . . + un−1 + un = 0 where the notation is the same as that in (ix.), the factors of un determine the directions of the asymptotes. If these factors are all real and distinct, there is an asymptote corresponding to each factor. If un = L1L2 . . . Ln, where L1, . . . are linear in x, y, we may resolve un−1/un into partial fractions according to the formula

un−1/un = A1/L1 + A2/L2 + . . . + An/Ln,

and then L1 + A1 = 0, L2 + A2 = 0, . . . are the equations of the asymptotes. When a real factor of un is repeated we may have two parallel asymptotes or we may have a “parabolic asymptote.” Sometimes the parallel asymptotes coincide, as in the curve x²(x² + y² − a²) = a⁴, where x = 0 is the only real asymptote. The whole theory of asymptotes belongs properly to analytical geometry and the theory of algebraic functions.
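The partial-fraction rule may be illustrated on a cubic constructed for the purpose (a modern sketch; the curve is an arbitrary example whose highest terms have three real distinct linear factors):

```python
# A SymPy sketch of the asymptote rule of (x.) for the cubic
# x*(x - y)*(x + y) + x**2 + y**2 = 0.
import sympy as sp

x, y = sp.symbols('x y')
u3 = x*(x - y)*(x + y)   # u_n: its factors give the asymptote directions
u2 = x**2 + y**2         # u_{n-1}

print(sp.apart(u2/u3, x))
# -1/x + 1/(x - y) + 1/(x + y), giving the asymptotes
# x - 1 = 0, x - y + 1 = 0 and x + y + 1 = 0.
```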