

as the “infinite part” of φ(x). The resolution of a function which becomes infinite into an infinite part and a finite part can often be effected by taking the infinite part to be infinite of the same order as one of the functions in the scale written above, or in some more comprehensive scale. This resolution is the inverse of the process of evaluating an indeterminate form of the type ∞ − ∞.

For example, $\lim_{x\to 0}\{(e^x - 1)^{-1} - x^{-1}\}$ is finite and equal to $-\tfrac{1}{2}$, and the function $(e^x - 1)^{-1} - x^{-1}$ can be expanded in a power series in x.
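The resolution here in view may be exhibited, for this function, by the expansion

$$\frac{1}{e^x - 1} = \frac{1}{x} - \frac{1}{2} + \frac{x}{12} - \frac{x^3}{720} + \cdots,$$

in which 1/x is the infinite part, infinite of the same order as 1/x in the scale referred to, while the remaining series is the finite part and tends to the limit −1/2 as x tends to 0.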

39. Functions of several variables. The nature of a function of two or more variables, and the meaning to be attached to continuity and limits in respect of such functions, have been explained under Function. The theorems of differential calculus which relate to such functions are in general the same whether the number of variables is two or any greater number, and it will generally be convenient to state the theorems for two variables.

40. Partial differentiation. Let u or ƒ(x, y) denote a function of two variables x and y. If we regard y as constant, u or ƒ becomes a function of one variable x, and we may seek to differentiate it with respect to x. If the function of x is differentiable, the differential coefficient which is formed in this way is called the “partial differential coefficient” of u or ƒ with respect to x, and is denoted by ∂u/∂x or ∂ƒ/∂x. The symbol “∂” was appropriated for partial differentiation by C. G. J. Jacobi (1841). It had before been written indifferently with “d” as a symbol of differentiation. Euler had written (dƒ/dx) for the partial differential coefficient of ƒ with respect to x. Sometimes it is desirable to put in evidence the variable which is treated as constant, and then the partial differential coefficient is written “(∂ƒ/∂x)_y” or “(dƒ/dx)_y”. This course is often adopted by writers on Thermodynamics. Sometimes the symbols d or ∂ are dropped, and the partial differential coefficient is denoted by ux or ƒx. As a definition of the partial differential coefficient we have the formula

$$\frac{\partial f}{\partial x} = \lim_{h \to 0} \frac{f(x+h,\,y) - f(x,\,y)}{h}.$$

In the same way we may form the partial differential coefficient with respect to y by treating x as a constant.
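Taking, by way of illustration, the function ƒ(x, y) = x²y + y³, the two partial differential coefficients are

$$\frac{\partial f}{\partial x} = 2xy, \qquad \frac{\partial f}{\partial y} = x^2 + 3y^2,$$

the first being formed with y treated as constant, the second with x treated as constant.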

The introduction of partial differential coefficients enables us to solve at once for a surface a problem analogous to the problem of tangents for a curve; and it also enables us to take the first step in the solution of the problem of maxima and minima for a function of several variables. If the equation of a surface is expressed in the form z = ƒ(x, y), the direction cosines of the normal to the surface at any point are in the ratios ∂ƒ/∂x : ∂ƒ/∂y : −1. If ƒ is a maximum or a minimum at (x, y), then ∂ƒ/∂x and ∂ƒ/∂y vanish at that point.
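For the surface z = x² + y², for example, the direction cosines of the normal at the point (x, y) are in the ratios

$$2x : 2y : -1,$$

and the two partial differential coefficients vanish together only at the origin, where the function has a minimum.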

In applications of the differential calculus to mathematical physics we are in general concerned with functions of three variables x, y, z, which represent the coordinates of a point; and then considerable importance attaches to partial differential coefficients which are formed by a particular rule. Let F(x, y, z) be the function, P a point (x, y, z), P′ a neighbouring point (x + Δx, y + Δy, z + Δz), and let Δs be the length of PP′. The value of F(x, y, z) at P may be denoted shortly by F(P). A limit of the same nature as a partial differential coefficient is expressed by the formula

$$\lim_{\Delta s \to 0} \frac{F(P') - F(P)}{\Delta s},$$

in which Δs is diminished indefinitely by bringing P′ up to P, and P′ is supposed to approach P along a straight line, for example, the tangent to a curve or the normal to a surface. The limit in question is denoted by ∂F/∂h, in which it is understood that h indicates a direction, that of PP′. If l, m, n are the direction cosines of the limiting direction of the line PP′, supposed drawn from P to P′, then

$$\frac{\partial F}{\partial h} = l\,\frac{\partial F}{\partial x} + m\,\frac{\partial F}{\partial y} + n\,\frac{\partial F}{\partial z}.$$

The operation of forming ∂F/∂h is called “differentiation with respect to an axis” or “vector differentiation.”
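If, for example, F(x, y, z) = x² + y² + z², the rule just stated gives

$$\frac{\partial F}{\partial h} = 2(lx + my + nz),$$

which is greatest when the direction (l, m, n) is that of the radius vector drawn from the origin through the point (x, y, z).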

41. Theorem of the Total Differential. The most important theorem in regard to partial differential coefficients is the theorem of the total differential. We may write down the equation

$$f(a+h,\,b+k) - f(a,\,b) = \{f(a+h,\,b+k) - f(a,\,b+k)\} + \{f(a,\,b+k) - f(a,\,b)\}.$$

If ƒx is a continuous function of x when x lies between a and a + h and y = b + k, and if further ƒy is a continuous function of y when y lies between b and b + k, there exist values of θ and η which lie between 0 and 1 and have the properties expressed by the equations

$$f(a+h,\,b+k) - f(a,\,b+k) = h\,f_x(a+\theta h,\,b+k),$$
$$f(a,\,b+k) - f(a,\,b) = k\,f_y(a,\,b+\eta k).$$
Further, ƒx(a + θh, b + k) and ƒy(a, b + ηk) tend to the limits ƒx(a, b) and ƒy(a, b) when h and k tend to zero, provided the differential coefficients ƒx, ƒy are continuous at the point (a, b). Hence in this case the above equation can be written

$$f(a+h,\,b+k) - f(a,\,b) = h\{f_x(a,\,b) + \epsilon_1\} + k\{f_y(a,\,b) + \epsilon_2\},$$

where

$$\lim_{h,\,k \to 0} \epsilon_1 = 0, \qquad \lim_{h,\,k \to 0} \epsilon_2 = 0.$$
In accordance with the notation of differentials this equation gives

$$df = \frac{\partial f}{\partial x}\,dx + \frac{\partial f}{\partial y}\,dy.$$

Just as in the case of functions of one variable, dx and dy are arbitrary finite differences, and dƒ is not the difference of two values of ƒ, but is so much of this difference as need be retained for the purpose of forming differential coefficients.
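Taking, for instance, ƒ(x, y) = x²y, the theorem gives

$$df = 2xy\,dx + x^2\,dy,$$

and the actual difference ƒ(x + dx, y + dy) − ƒ(x, y) exceeds dƒ by the terms y dx² + 2x dx dy + dx² dy, which are of the second and third orders in dx and dy.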

The theorem of the total differential is immediately applicable to the differentiation of implicit functions. When y is a function of x which is given by an equation of the form ƒ(x, y) = 0, and it is either impossible or inconvenient to solve this equation so as to express y as an explicit function of x, the differential coefficient dy/dx can be formed without solving the equation. We have at once

$$\frac{dy}{dx} = -\,\frac{\partial f}{\partial x} \bigg/ \frac{\partial f}{\partial y}.$$

This rule was known, in all essentials, to Fermat and de Sluse before the invention of the algorithm of the differential calculus.
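If, for example, y is given as a function of x by the equation x² + y² − 1 = 0, the rule gives

$$\frac{dy}{dx} = -\,\frac{2x}{2y} = -\,\frac{x}{y}$$

at every point at which ∂ƒ/∂y, that is 2y, does not vanish.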

An important theorem, first proved by Euler, is immediately deducible from the theorem of the total differential. If ƒ(x, y) is a homogeneous function of degree n then

$$x\,\frac{\partial f}{\partial x} + y\,\frac{\partial f}{\partial y} = nf.$$

The theorem is applicable to functions of any number of variables and is generally known as Euler’s theorem of homogeneous functions.
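Thus the function ƒ(x, y) = x³ + xy², which is homogeneous of degree 3, gives

$$x\,\frac{\partial f}{\partial x} + y\,\frac{\partial f}{\partial y} = x(3x^2 + y^2) + y(2xy) = 3(x^3 + xy^2) = 3f.$$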

42. Jacobians. Many problems in which partial differential coefficients occur are simplified by the introduction of certain determinants called “Jacobians” or “functional determinants.” They were introduced into Analysis by C. G. J. Jacobi (J. f. Math., Crelle, Bd. 22, 1841, p. 319). The Jacobian of u1, u2, . . . un with respect to x1, x2, . . . xn is the determinant

$$\begin{vmatrix} \dfrac{\partial u_1}{\partial x_1} & \dfrac{\partial u_1}{\partial x_2} & \cdots & \dfrac{\partial u_1}{\partial x_n} \\ \dfrac{\partial u_2}{\partial x_1} & \dfrac{\partial u_2}{\partial x_2} & \cdots & \dfrac{\partial u_2}{\partial x_n} \\ \vdots & \vdots & & \vdots \\ \dfrac{\partial u_n}{\partial x_1} & \dfrac{\partial u_n}{\partial x_2} & \cdots & \dfrac{\partial u_n}{\partial x_n} \end{vmatrix}$$

in which the constituents of the rth row are the n partial differential coefficients of ur with respect to the n variables x. This determinant is expressed shortly by

$$\frac{\partial(u_1,\,u_2,\,\ldots,\,u_n)}{\partial(x_1,\,x_2,\,\ldots,\,x_n)}.$$
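If, for example, u1 = x1 cos x2 and u2 = x1 sin x2, the Jacobian is

$$\frac{\partial(u_1,\,u_2)}{\partial(x_1,\,x_2)} = \begin{vmatrix} \cos x_2 & -x_1 \sin x_2 \\ \sin x_2 & x_1 \cos x_2 \end{vmatrix} = x_1,$$

the factor which presents itself in passing from rectangular to polar coordinates.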

Jacobians possess many properties analogous to those of ordinary differential coefficients, for example, the following:—

If n functions (u1, u2, . . . un) of n variables (x1, x2, . . . , xn) are not independent, but are connected by a relation ƒ(u1, u2, . . . un) = 0, then

$$\frac{\partial(u_1,\,u_2,\,\ldots,\,u_n)}{\partial(x_1,\,x_2,\,\ldots,\,x_n)} = 0,$$

and, conversely, when this condition is satisfied identically the functions u1, u2 . . . , un are not independent.
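Thus the functions u1 = x1 + x2 and u2 = (x1 + x2)², which are connected by the relation u2 − u1² = 0, give

$$\frac{\partial(u_1,\,u_2)}{\partial(x_1,\,x_2)} = \begin{vmatrix} 1 & 1 \\ 2(x_1 + x_2) & 2(x_1 + x_2) \end{vmatrix} = 0.$$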

43. Interchange of order of differentiations. Partial differential coefficients of the second and higher orders can be formed in the same way as those of the first order. For example, when there are two variables x, y, the first partial derivatives ∂ƒ/∂x and ∂ƒ/∂y are functions of x and y, which we may seek to differentiate partially with respect to x or y. The most important theorem in relation to partial differential coefficients of orders higher than the first is the theorem that the values of such coefficients do not depend upon the order in which the differentiations are performed. For example, we have the equation

$$\frac{\partial}{\partial x}\left(\frac{\partial f}{\partial y}\right) = \frac{\partial}{\partial y}\left(\frac{\partial f}{\partial x}\right). \qquad \text{(i.)}$$
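Thus, for the function ƒ(x, y) = x³y², either order of differentiation yields

$$\frac{\partial}{\partial x}\left(\frac{\partial f}{\partial y}\right) = \frac{\partial}{\partial y}\left(\frac{\partial f}{\partial x}\right) = 6x^2 y.$$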

This theorem is not true without limitation. The conditions for its validity have been investigated very completely by H. A. Schwarz (see his Ges. math. Abhandlungen, Bd. 2, Berlin, 1890, p. 275). It is a sufficient, though not a necessary, condition that all the differential coefficients concerned should be continuous functions of x, y. In consequence of the relation (i.) the differential coefficients expressed in the two members of this relation are written

$$\frac{\partial^2 f}{\partial x\,\partial y}$$

or