# 1911 Encyclopædia Britannica/Infinitesimal Calculus/Outlines 2


## III. Outlines of the Infinitesimal Calculus (§40-46)

40. Let u or ƒ(x, y) denote a function of two variables x and y. If we regard y as constant, u or ƒ becomes a function of one variable x, and we may seek to differentiate it with respect to x. If the function of x is differentiable, the differential coefficient which is formed in this way is called the “partial differential coefficient” of u or ƒ with respect to x, and is denoted by ∂u/∂x or ∂ƒ/∂x. The symbol “∂” was appropriated for partial differentiation by C. G. J. Jacobi (1841). It had before been written indifferently with “d” as a symbol of differentiation. Euler had written (dƒ/dx) for the partial differential coefficient of ƒ with respect to x. Sometimes it is desirable to put in evidence the variable which is treated as constant, and then the partial differential coefficient is written “${\displaystyle \left({\frac {df}{dx}}\right)_{y}}$ ” or “${\displaystyle \left({\frac {\partial f}{\partial x}}\right)_{y}}$ ”. This course is often adopted by writers on Thermodynamics. Sometimes the symbols d or ∂ are dropped, and the partial differential coefficient is denoted by ux or ƒx. As a definition of the partial differential coefficient we have the formula

${\displaystyle {\frac {\partial f}{\partial x}}=\lim _{h=0}{\frac {f(x+h,y)-f(x,y)}{h}}}$

In the same way we may form the partial differential coefficient with respect to y by treating x as a constant.
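The defining limit can be illustrated numerically (the function ƒ(x, y) = x²y below is an assumed example, not from the original): holding y constant, the difference quotient in h approaches ∂ƒ/∂x.

```python
# Numerical sketch of the partial differential coefficient: hold y
# constant and form the difference quotient in h.
def f(x, y):
    return x**2 * y

def partial_x(f, x, y, h=1e-6):
    # (f(x + h, y) - f(x, y)) / h approximates the limit for small h
    return (f(x + h, y) - f(x, y)) / h

# exact value of df/dx is 2*x*y, which is 30 at (3, 5)
print(partial_x(f, 3.0, 5.0))
```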

The introduction of partial differential coefficients enables us to solve at once for a surface a problem analogous to the problem of tangents for a curve; and it also enables us to take the first step in the solution of the problem of maxima and minima for a function of several variables. If the equation of a surface is expressed in the form z = ƒ(x, y), the direction cosines of the normal to the surface at any point are in the ratios ∂ƒ/∂x : ∂ƒ/∂y : −1. If ƒ is a maximum or a minimum at (x, y), then ∂ƒ/∂x and ∂ƒ/∂y vanish at that point.

In applications of the differential calculus to mathematical physics we are in general concerned with functions of three variables x, y, z, which represent the coordinates of a point; and then considerable importance attaches to partial differential coefficients which are formed by a particular rule. Let F(x, y, z) be the function, P a point (x, y, z), P′ a neighbouring point (x + Δx, y + Δy, z + Δz), and let Δs be the length of PP′. The value of F(x, y, z) at P may be denoted shortly by F(P). A limit of the same nature as a partial differential coefficient is expressed by the formula

${\displaystyle \lim _{\Delta s=0}{\frac {{\text{F}}({\text{P}}^{\prime })-{\text{F}}({\text{P}})}{\Delta s}},}$

in which Δs is diminished indefinitely by bringing P′ up to P, and P′ is supposed to approach P along a straight line, for example, the tangent to a curve or the normal to a surface. The limit in question is denoted by ∂F/∂h, in which it is understood that h indicates a direction, that of PP′. If l, m, n are the direction cosines of the limiting direction of the line PP′, supposed drawn from P to P′, then

${\displaystyle {\frac {\partial {\text{F}}}{\partial h}}=l{\frac {\partial {\text{F}}}{\partial x}}+m{\frac {\partial {\text{F}}}{\partial y}}+n{\frac {\partial {\text{F}}}{\partial z}}.}$

The operation of forming ∂F/∂h is called “differentiation with respect to an axis” or “vector differentiation.”
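The rule just stated can be checked numerically (the function F and the direction below are assumed examples): a small step of length t along the direction (l, m, n) gives a difference quotient that agrees with l·∂F/∂x + m·∂F/∂y + n·∂F/∂z.

```python
# Differentiation with respect to an axis: step a small distance t
# along the direction (l, m, n) and form the difference quotient.
def F(x, y, z):
    return x * y + z**2

def along_axis(F, p, direction, t=1e-6):
    l, m, n = direction
    x, y, z = p
    return (F(x + t * l, y + t * m, z + t * n) - F(x, y, z)) / t

p = (1.0, 2.0, 3.0)
d = (1 / 3, 2 / 3, 2 / 3)            # direction cosines, l^2+m^2+n^2 = 1
grad = (p[1], p[0], 2 * p[2])        # exact dF/dx, dF/dy, dF/dz
expected = sum(c * g for c, g in zip(d, grad))
print(along_axis(F, p, d), expected)  # both approximately 16/3
```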

41. The most important theorem in regard to partial differential coefficients is the theorem of the total differential. We may write down the equation

${\displaystyle f(a+h,b+k)-f(a,b)=f(a+h,b+k)-f(a,b+k)\,+f(a,b+k)-f(a,b).\,}$

If ƒx is a continuous function of x when x lies between a and a + h and y = b + k, and if further ƒy is a continuous function of y when y lies between b and b + k, there exist values of θ and η which lie between 0 and 1 and have the properties expressed by the equations

${\displaystyle {\begin{matrix}f(a+h,b+k)-f(a,b+k)&=&hf_{x}(a+\theta h,b+k)\\f(a,b+k)-f(a,b)&=&kf_{y}(a,b+\eta k)\end{matrix}}}$

Further, ƒx(a + θh, b + k) and ƒy(a, b + ηk) tend to the limits ƒx(a, b) and ƒy(a, b) when h and k tend to zero, provided the differential coefficients ƒx, ƒy, are continuous at the point (a, b). Hence in this case the above equation can be written

${\displaystyle f(a+h,b+k)-f(a,b)=hf_{x}(a,b)+kf_{y}(a,b)+{\text{R}},\,}$

where

${\displaystyle \lim _{h=0,\,k=0}{\frac {\text{R}}{h}}=0{\mbox{ and }}\lim _{h=0,\,k=0}{\frac {\text{R}}{k}}=0.}$

In accordance with the notation of differentials this equation gives

${\displaystyle df={\frac {\partial f}{\partial x}}dx+{\frac {\partial f}{\partial y}}dy.}$

Just as in the case of functions of one variable, dx and dy are arbitrary finite differences, and dƒ is not the difference of two values of ƒ, but is so much of this difference as need be retained for the purpose of forming differential coefficients.
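The theorem of the total differential can be illustrated numerically (the function ƒ(x, y) = y·sin x is an assumed example): the remainder R, after the terms hƒx + kƒy are subtracted from the increment of ƒ, is small compared with both h and k.

```python
import math

# The increment of f is h*fx + k*fy + R, with R/h and R/k both small.
def f(x, y):
    return math.sin(x) * y

a, b, h, k = 1.0, 2.0, 1e-4, 1e-4
fx = math.cos(a) * b      # exact df/dx at (a, b)
fy = math.sin(a)          # exact df/dy at (a, b)
increment = f(a + h, b + k) - f(a, b)
R = increment - (h * fx + k * fy)
print(R / h, R / k)  # both tend to 0 with h and k
```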

The theorem of the total differential is immediately applicable to the differentiation of implicit functions. When y is a function of x which is given by an equation of the form ƒ(x, y) = 0, and it is either impossible or inconvenient to solve this equation so as to express y as an explicit function of x, the differential coefficient dy/dx can be formed without solving the equation. We have at once

${\displaystyle {\frac {dy}{dx}}=-{\frac {\partial f}{\partial x}}\left/{\frac {\partial f}{\partial y}}\right.}$

This rule was known, in all essentials, to Fermat and de Sluse before the invention of the algorithm of the differential calculus.
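A numerical sketch of the rule for implicit functions (the circle ƒ(x, y) = x² + y² − 1 = 0 is an assumed example): dy/dx formed as −(∂ƒ/∂x)/(∂ƒ/∂y) agrees with the slope −x/y obtained by solving for y.

```python
# Implicit differentiation: dy/dx = -(df/dx)/(df/dy) on f(x, y) = 0.
def f(x, y):
    return x**2 + y**2 - 1

def dydx(x, y, h=1e-6):
    fx = (f(x + h, y) - f(x, y)) / h
    fy = (f(x, y + h) - f(x, y)) / h
    return -fx / fy

# on the circle at (0.6, 0.8) the exact slope is -x/y = -0.75
print(dydx(0.6, 0.8))
```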

An important theorem, first proved by Euler, is immediately deducible from the theorem of the total differential. If ƒ(x, y) is a homogeneous function of degree n then

${\displaystyle x{\frac {\partial f}{\partial x}}+y{\frac {\partial f}{\partial y}}=nf(x,y).}$

The theorem is applicable to functions of any number of variables and is generally known as Euler’s theorem of homogeneous functions.
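Euler’s theorem can be verified numerically (the homogeneous function ƒ(x, y) = x³ + xy² of degree 3 is an assumed example):

```python
# Euler's theorem: x*fx + y*fy = n*f for f homogeneous of degree n.
def f(x, y):
    return x**3 + x * y**2

def partials(x, y, h=1e-6):
    fx = (f(x + h, y) - f(x - h, y)) / (2 * h)   # central differences
    fy = (f(x, y + h) - f(x, y - h)) / (2 * h)
    return fx, fy

x, y, n = 2.0, 3.0, 3
fx, fy = partials(x, y)
print(x * fx + y * fy, n * f(x, y))  # both approximately 78
```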

42. Many problems in which partial differential coefficients occur are simplified by the introduction of certain determinants called “Jacobians” or “functional determinants.” They were introduced into Analysis by C. G. J. Jacobi (J. f. Math., Crelle, Bd. 22, 1841, p. 319). The Jacobian of u1, u2, . . . un with respect to x1, x2, . . . xn is the determinant

${\displaystyle {\begin{vmatrix}{\frac {\partial u_{1}}{\partial x_{1}}}&{\frac {\partial u_{1}}{\partial x_{2}}}&\cdots &{\frac {\partial u_{1}}{\partial x_{n}}}\\{\frac {\partial u_{2}}{\partial x_{1}}}&{\frac {\partial u_{2}}{\partial x_{2}}}&\cdots &{\frac {\partial u_{2}}{\partial x_{n}}}\\\vdots &&&\\{\frac {\partial u_{n}}{\partial x_{1}}}&{\frac {\partial u_{n}}{\partial x_{2}}}&\cdots &{\frac {\partial u_{n}}{\partial x_{n}}}\end{vmatrix}}}$

in which the constituents of the rth row are the n partial differential coefficients of ur with respect to the n variables x. This determinant is expressed shortly by

${\displaystyle {\frac {\partial (u_{1},u_{2},\ldots ,u_{n})}{\partial (x_{1},x_{2},\ldots ,x_{n})}}.}$

Jacobians possess many properties analogous to those of ordinary differential coefficients, for example, the following:—

${\displaystyle {\frac {\partial (u_{1},u_{2},\ldots ,u_{n})}{\partial (x_{1},x_{2},\ldots ,x_{n})}}\times {\frac {\partial (x_{1},x_{2},\ldots ,x_{n})}{\partial (u_{1},u_{2},\ldots ,u_{n})}}=1,}$

${\displaystyle {\frac {\partial (u_{1},u_{2},\ldots ,u_{n})}{\partial (y_{1},y_{2},\ldots ,y_{n})}}\times {\frac {\partial (y_{1},y_{2},\ldots ,y_{n})}{\partial (x_{1},x_{2},\ldots ,x_{n})}}={\frac {\partial (u_{1},u_{2},\ldots ,u_{n})}{\partial (x_{1},x_{2},\ldots ,x_{n})}}.}$

If n functions (u1, u2, . . . un) of n variables (x1, x2, . . . , xn) are not independent, but are connected by a relation ƒ(u1, u2, . . . un) = 0, then

${\displaystyle {\frac {\partial (u_{1},u_{2},\ldots ,u_{n})}{\partial (x_{1},x_{2},\ldots ,x_{n})}}=0;}$

and, conversely, when this condition is satisfied identically the functions u1, u2 . . . , un are not independent.
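The first of the properties above can be checked numerically (polar coordinates are taken as an assumed example): the Jacobian of (x, y) with respect to (r, θ) is r, the reciprocal Jacobian is 1/r, and their product is 1.

```python
import math

# Product of a Jacobian and its reciprocal Jacobian equals 1.
def jacobian2(f, g, a, b, h=1e-6):
    # determinant of the 2x2 matrix of partial derivatives at (a, b)
    fa = (f(a + h, b) - f(a - h, b)) / (2 * h)
    fb = (f(a, b + h) - f(a, b - h)) / (2 * h)
    ga = (g(a + h, b) - g(a - h, b)) / (2 * h)
    gb = (g(a, b + h) - g(a, b - h)) / (2 * h)
    return fa * gb - fb * ga

r, theta = 2.0, 0.7
x = lambda r, t: r * math.cos(t)
y = lambda r, t: r * math.sin(t)
rr = lambda x, y: math.hypot(x, y)
tt = lambda x, y: math.atan2(y, x)

J_forward = jacobian2(x, y, r, theta)                    # equals r
J_inverse = jacobian2(rr, tt, x(r, theta), y(r, theta))  # equals 1/r
print(J_forward * J_inverse)  # approximately 1
```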

43. Partial differential coefficients of the second and higher orders can be formed in the same way as those of the first order. For example, when there are two variables x, y, the first partial derivatives ∂ƒ/∂x and ∂ƒ/∂y are functions of x and y, which we may seek to differentiate partially with respect to x or y. The most important theorem in relation to partial differential coefficients of orders higher than the first is the theorem that the values of such coefficients do not depend upon the order in which the differentiations are performed. For example, we have the equation

 ${\displaystyle {\frac {\partial }{\partial x}}\left({\frac {\partial f}{\partial y}}\right)={\frac {\partial }{\partial y}}\left({\frac {\partial f}{\partial x}}\right)}$ (i.)

This theorem is not true without limitation. The conditions for its validity have been investigated very completely by H. A. Schwarz (see his Ges. math. Abhandlungen, Bd. 2, Berlin, 1890, p. 275). It is a sufficient, though not a necessary, condition that all the differential coefficients concerned should be continuous functions of x, y. In consequence of the relation (i.) the differential coefficients expressed in the two members of this relation are written

${\displaystyle {\frac {\partial ^{2}f}{\partial x\partial y}}}$  or ${\displaystyle {\frac {\partial ^{2}f}{\partial y\partial x}}.}$
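The interchange of the order of differentiations can be illustrated numerically (the function ƒ(x, y) = x²y³ is an assumed example); the symmetric cross stencil below approximates the mixed coefficient without fixing an order.

```python
# Mixed partial d2f/(dx dy) by a symmetric finite-difference stencil;
# the stencil makes no distinction between the two orders.
def f(x, y):
    return x**2 * y**3

def mixed_xy(x, y, h=1e-4):
    return (f(x + h, y + h) - f(x + h, y - h)
            - f(x - h, y + h) + f(x - h, y - h)) / (4 * h * h)

# exact d2f/(dx dy) = 6*x*y**2, which is 108 at (2, 3)
print(mixed_xy(2.0, 3.0))
```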

The differential coefficient

${\displaystyle {\frac {\partial ^{n}f}{\partial x^{p}\partial y^{q}\partial z^{r}}},}$

in which p+q+r = n, is formed by differentiating p times with respect to x, q times with respect to y, r times with respect to z, the differentiations being performed in any order. Abbreviated notations are sometimes used in such forms as

${\displaystyle f_{{x^{p}}{y^{q}}{z^{r}}}}$  or ${\displaystyle f_{x,y,z}^{(p,q,r)}.}$

Differentials of higher orders are introduced by the defining equation

 ${\displaystyle d^{n}f\,}$ ${\displaystyle =\left(dx{\frac {\partial }{\partial x}}+dy{\frac {\partial }{\partial y}}\right)^{n}f}$ ${\displaystyle =(dx)^{n}{\frac {\partial ^{n}f}{\partial x^{n}}}+n(dx)^{n-1}dy{\frac {\partial ^{n}f}{\partial x^{n-1}\partial y}}+\ldots }$

in which the expression (dx·∂/∂x + dy·∂/∂y)n is developed by the binomial theorem in the same way as if dx·∂/∂x and dy·∂/∂y were numbers, and (∂/∂x)r·(∂/∂y)n−r ƒ is replaced by ∂nƒ/∂xr∂yn−r. When there are more than two variables the multinomial theorem must be used instead of the binomial theorem.

The problem of forming the second and higher differential coefficients of implicit functions can be solved at once by means of partial differential coefficients. For example, if ƒ(x, y) = 0 is the equation defining y as a function of x, we have

${\displaystyle {\frac {d^{2}y}{dx^{2}}}=-\left({\frac {\partial f}{\partial y}}\right)^{-3}\left\{\left({\frac {\partial f}{\partial y}}\right)^{2}{\frac {\partial ^{2}f}{\partial x^{2}}}-2{\frac {\partial f}{\partial x}}\cdot {\frac {\partial f}{\partial y}}\cdot {\frac {\partial ^{2}f}{\partial x\partial y}}+\left({\frac {\partial f}{\partial x}}\right)^{2}{\frac {\partial ^{2}f}{\partial y^{2}}}\right\}.}$
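A numerical check (the circle ƒ(x, y) = x² + y² − 1 = 0 is an assumed example): the combination −(ƒy²ƒxx − 2ƒxƒyƒxy + ƒx²ƒyy)/ƒy³ agrees with the second derivative −1/y³ found by solving for y explicitly.

```python
# Second differential coefficient of an implicit function of x.
def d2ydx2(fx, fy, fxx, fxy, fyy):
    return -(fy**2 * fxx - 2 * fx * fy * fxy + fx**2 * fyy) / fy**3

x, y = 0.6, 0.8
# for f = x**2 + y**2 - 1: fx = 2x, fy = 2y, fxx = fyy = 2, fxy = 0
value = d2ydx2(2 * x, 2 * y, 2.0, 0.0, 2.0)
# direct computation: y = sqrt(1 - x**2) gives d2y/dx2 = -1/y**3
print(value, -1 / y**3)  # both -1.953125
```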

The differential expression Xdx + Ydy, in which both X and Y are functions of the two variables x and y, is a total differential if there exists a function ƒ of x and y which is such that

${\displaystyle \partial f/\partial x={\text{X}},\quad \partial f/\partial y={\text{Y}}.}$

When this is the case we have the relation

 ${\displaystyle \partial {\text{Y}}/\partial x=\partial {\text{X}}/\partial y.}$ (ii.)

Conversely, when this equation is satisfied there exists a function ƒ which is such that

${\displaystyle df={\text{X}}dx+{\text{Y}}dy.\,}$

The expression Xdx + Ydy in which X and Y are connected by the relation (ii.) is often described as a “perfect differential.” The theory of the perfect differential can be extended to functions of n variables, and in this case there are ½n(n−1) such relations as (ii.).
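The condition (ii.) can be illustrated numerically (the pair X = 2xy, Y = x² + 3y², with ƒ = x²y + y³, is an assumed example):

```python
# X dx + Y dy is a perfect differential when dY/dx = dX/dy;
# here f = x**2*y + y**3 satisfies df/dx = X and df/dy = Y.
def X(x, y):
    return 2 * x * y

def Y(x, y):
    return x**2 + 3 * y**2

def f(x, y):
    return x**2 * y + y**3

h = 1e-6
x0, y0 = 1.3, 0.7
dYdx = (Y(x0 + h, y0) - Y(x0 - h, y0)) / (2 * h)
dXdy = (X(x0, y0 + h) - X(x0, y0 - h)) / (2 * h)
dfdx = (f(x0 + h, y0) - f(x0 - h, y0)) / (2 * h)
print(dYdx, dXdy)       # both 2*x0
print(dfdx, X(x0, y0))  # df/dx agrees with X
```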

In the case of a function of two variables x, y an abbreviated notation is often adopted for differential coefficients. The function being denoted by z, we write

${\displaystyle p,q,r,s,t\,}$  for ${\displaystyle {\frac {\partial z}{\partial x}},{\frac {\partial z}{\partial y}},{\frac {\partial ^{2}z}{\partial x^{2}}},{\frac {\partial ^{2}z}{\partial x\partial y}},{\frac {\partial ^{2}z}{\partial y^{2}}}.}$

Partial differential coefficients of the second order are important in geometry as expressing the curvature of surfaces. When a surface is given by an equation of the form z = ƒ(x, y), the lines of curvature are determined by the equation

${\displaystyle \left\{(1+q^{2})s-pqt\right\}(dy)^{2}+\left\{(1+q^{2})r-(1+p^{2})t\right\}dxdy}$

${\displaystyle -\left\{(1+p^{2})s-pqr\right\}(dx)^{2}=0,}$

and the principal radii of curvature are the values of R which satisfy the equation

${\displaystyle {\text{R}}^{2}(rt-s^{2})-{\text{R}}\left\{(1+q^{2})r-2pqs+(1+p^{2})t\right\}{\sqrt {1+p^{2}+q^{2}}}+(1+p^{2}+q^{2})^{2}=0.}$
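The quadratic in R can be tested on a surface whose curvature is known (a sphere is taken as an assumed example): for z = √(a² − x² − y²) both principal radii of curvature have magnitude a.

```python
import math

# Principal radii of curvature of the sphere z = sqrt(a^2 - x^2 - y^2),
# from the quadratic R^2(rt - s^2) - R{...}sqrt(1+p^2+q^2) + (1+p^2+q^2)^2 = 0.
a = 2.0
x, y = 0.5, 0.3
z = math.sqrt(a**2 - x**2 - y**2)
p, q = -x / z, -y / z                 # first derivatives of z
r = -(z**2 + x**2) / z**3             # second derivatives of z
s = -x * y / z**3
t = -(z**2 + y**2) / z**3

w = 1 + p**2 + q**2
A = r * t - s**2
B = -((1 + q**2) * r - 2 * p * q * s + (1 + p**2) * t) * math.sqrt(w)
C = w**2
disc = B**2 - 4 * A * C               # zero for a sphere (equal radii)
R1 = (-B + math.sqrt(max(disc, 0.0))) / (2 * A)
R2 = (-B - math.sqrt(max(disc, 0.0))) / (2 * A)
print(abs(R1), abs(R2))  # both equal to a = 2
```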

44. The problem of change of variables was first considered by Brook Taylor in his Methodus incrementorum. In the case considered by Taylor y is expressed as a function of z, and z as a function of x, and it is desired to express the differential coefficients of y with respect to x without eliminating z. The result can be obtained at once by the rules for differentiating a product and a function of a function. We have

 ${\displaystyle {\frac {dy}{dx}}={\frac {dy}{dz}}\cdot {\frac {dz}{dx}},}$ ${\displaystyle {\frac {d^{2}y}{dx^{2}}}={\frac {dy}{dz}}\cdot {\frac {d^{2}z}{dx^{2}}}+{\frac {d^{2}y}{dz^{2}}}\cdot \left({\frac {dz}{dx}}\right)^{2},}$ ${\displaystyle {\frac {d^{3}y}{dx^{3}}}={\frac {dy}{dz}}\cdot {\frac {d^{3}z}{dx^{3}}}+3{\frac {d^{2}y}{dz^{2}}}\cdot {\frac {dz}{dx}}\cdot {\frac {d^{2}z}{dx^{2}}}+{\frac {d^{3}y}{dz^{3}}}\cdot \left({\frac {dz}{dx}}\right)^{3},}$ and so on.
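Taylor’s rule for the second derivative can be checked numerically (the functions y = sin z, z = x² are assumed examples):

```python
import math

# d2y/dx2 = (dy/dz)(d2z/dx2) + (d2y/dz2)(dz/dx)^2, with y = sin z, z = x^2.
x = 0.8
z = x**2
formula = math.cos(z) * 2 + (-math.sin(z)) * (2 * x)**2

h = 1e-4
yx = lambda x: math.sin(x**2)     # y regarded directly as a function of x
direct = (yx(x + h) - 2 * yx(x) + yx(x - h)) / h**2
print(formula, direct)  # agree to several decimals
```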

The introduction of partial differential coefficients enables us to deal with more general cases of change of variables than that considered above. If u, v are new variables, and x, y are connected with them by equations of the type

 ${\displaystyle x=f_{1}(u,v),\quad y=f_{2}(u,v),}$ (i.)

while y is either an explicit or an implicit function of x, we have the problem of expressing the differential coefficients of various orders of y with respect to x in terms of the differential coefficients of v with respect to u. We have

${\displaystyle {\frac {dy}{dx}}=\left({\frac {\partial f_{2}}{\partial u}}+{\frac {\partial f_{2}}{\partial v}}{\frac {dv}{du}}\right){\Bigg /}\left({\frac {\partial f_{1}}{\partial u}}+{\frac {\partial f_{1}}{\partial v}}{\frac {dv}{du}}\right)}$

by the rule of the total differential. In the same way, by means of differentials of higher orders, we may express d2y/dx2, and so on.

Equations such as (i.) may be interpreted as effecting a transformation by which a point (u, v) is made to correspond to a point (x, y). The whole theory of transformations, and of functions, or differential expressions, which remain invariant under groups of transformations, has been studied exhaustively by Sophus Lie (see, in particular, his Theorie der Transformationsgruppen, Leipzig, 1888–1893). (See also Differential Equations and Groups).

A more general problem of change of variables is presented when it is desired to express the partial differential coefficients of a function V with respect to x, y, . . . in terms of those with respect to u, v, . . ., where u, v, . . . are connected with x, y, . . . by any functional relations. When there are two variables x, y, and u, v are given functions of x, y, we have

${\displaystyle {\frac {\partial V}{\partial x}}={\frac {\partial V}{\partial u}}{\frac {\partial u}{\partial x}}+{\frac {\partial V}{\partial v}}{\frac {\partial v}{\partial x}},}$

${\displaystyle {\frac {\partial V}{\partial y}}={\frac {\partial V}{\partial u}}{\frac {\partial u}{\partial y}}+{\frac {\partial V}{\partial v}}{\frac {\partial v}{\partial y}},}$

and the differential coefficients of higher orders are to be formed by repeated applications of the rule for differentiating a product and the rules of the type

${\displaystyle {\frac {\partial }{\partial x}}={\frac {\partial u}{\partial x}}{\frac {\partial }{\partial u}}+{\frac {\partial v}{\partial x}}{\frac {\partial }{\partial v}}.}$
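The rule for ∂V/∂x can be checked numerically (the substitution u = x + y, v = x − y with V = uv is an assumed example):

```python
# dV/dx = (dV/du)(du/dx) + (dV/dv)(dv/dx) with u = x + y, v = x - y.
def V_of_uv(u, v):
    return u * v

def V_of_xy(x, y):
    return V_of_uv(x + y, x - y)   # V regarded as a function of x, y

x, y = 1.5, 0.4
u, v = x + y, x - y
chain = v * 1 + u * 1              # dV/du = v, du/dx = 1; dV/dv = u, dv/dx = 1

h = 1e-6
direct = (V_of_xy(x + h, y) - V_of_xy(x - h, y)) / (2 * h)
print(chain, direct)  # both equal 2*x
```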

When x, y are given functions of u, v, . . . we have, instead of the above, such equations as

${\displaystyle {\frac {\partial V}{\partial u}}={\frac {\partial V}{\partial x}}{\frac {\partial x}{\partial u}}+{\frac {\partial V}{\partial y}}{\frac {\partial y}{\partial u}};}$

and ∂V/∂x, ∂V/∂y can be found by solving these equations, provided the Jacobian ∂(x, y)/∂(u, v) is not zero. The generalization of this method for the case of more than two variables need not detain us.

In cases like that here considered it is sometimes more convenient not to regard the equations connecting x, y with u, v as effecting a point transformation, but to consider the loci u = const., v = const. as two “families” of curves. Then in any region of the plane of (x, y) in which the Jacobian ∂(x, y)/∂(u, v) does not vanish or become infinite, any point (x, y) is uniquely determined by the values of u and v which belong to the curves of the two families that pass through the point. Such variables as u, v are then described as “curvilinear coordinates” of the point. This method is applicable to any number of variables. When the loci u = const., . . . intersect each other at right angles, the variables are “orthogonal” curvilinear coordinates. Three-dimensional systems of such coordinates have important applications in mathematical physics. Reference may be made to G. Lamé, Leçons sur les coordonnées curvilignes (Paris, 1859), and to G. Darboux, Leçons sur les coordonnées curvilignes et systèmes orthogonaux (Paris, 1898).

When such a coordinate as u is connected with x and y by a functional relation of the form ƒ(x, y, u) = 0 the curves u = const. are a family of curves, and this family may be such that no two curves of the family have a common point. When this is not the case the points in which a curve ƒ(x, y, u) = 0 is intersected by a curve ƒ(x, y, u + Δu) = 0 tend to limiting positions as Δu is diminished indefinitely. The locus of these limiting positions is the “envelope” of the family, and in general it touches all the curves of the family. It is easy to see that, if u, v are the parameters of two families of curves which have envelopes, the Jacobian ∂(x, y)/∂(u, v) vanishes at all points on these envelopes. It is easy to see also that at any point where the reciprocal Jacobian ∂(u, v)/∂(x, y) vanishes, a curve of the family u touches a curve of the family v.

If three variables x, y, z are connected by a functional relation ƒ(x, y, z) = 0, one of them, z say, may be regarded as an implicit function of the other two, and the partial differential coefficients of z with respect to x and y can be formed by the rule of the total differential. We have

${\displaystyle {\frac {\partial z}{\partial x}}=-{\frac {\partial f}{\partial x}}\left/{\frac {\partial f}{\partial z}}\right.,\quad {\frac {\partial z}{\partial y}}=-{\frac {\partial f}{\partial y}}\left/{\frac {\partial f}{\partial z}}\right.,}$

and there is no difficulty in proceeding to express the higher differential coefficients. There arises the problem of expressing the partial differential coefficients of x with respect to y and z in terms of those of z with respect to x and y. The problem is known as that of “changing the dependent variable.” It is solved by applying the rule of the total differential. Similar considerations are applicable to all cases in which n variables are connected by fewer than n equations.

45. Taylor’s theorem can be extended to functions of several variables. In the case of two variables the general formula, with a remainder after n terms, can be written most simply in the form

${\displaystyle f(a+h,b+k)=f(a,b)+df(a,b)+{\frac {1}{2!}}d^{2}f(a,b)+\ldots +{\frac {1}{(n-1)!}}d^{n-1}f(a,b)+{\frac {1}{n!}}d^{n}f(a+\theta h,b+\theta k),}$

in which

${\displaystyle d^{r}f(a,b)=\left\lbrack \left(h{\frac {\partial }{\partial x}}+k{\frac {\partial }{\partial y}}\right)^{r}f(x,y)\right\rbrack _{x=a,y=b},}$

and

${\displaystyle d^{n}f(a+\theta h,b+\theta k)=\left\lbrack \left(h{\frac {\partial }{\partial x}}+k{\frac {\partial }{\partial y}}\right)^{n}f(x,y)\right\rbrack _{x=a+\theta h,y=b+\theta k}.}$

The last expression is the remainder after n terms, and in it θ denotes some particular number between 0 and 1. The results for three or more variables can be written in the same form. The extension of Taylor’s theorem was given by Lagrange (1797); the form written above is due to Cauchy (1823). For the validity of the theorem in this form it is necessary that all the differential coefficients up to the nth should be continuous in a region bounded by x = a ± h, y = b ± k. When all the differential coefficients, no matter how high the order, are continuous in such a region, the theorem leads to an expansion of the function in a multiple power series. Such expansions are just as important in analysis, geometry and mechanics as expansions of functions of one variable. Among the problems which are solved by means of such expansions are the problem of maxima and minima for functions of more than one variable (see Maxima and Minima).
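The two-variable expansion can be illustrated numerically (the function ƒ(x, y) = eˣ sin y is an assumed example): the terms up to d²ƒ/2! reproduce ƒ(a + h, b + k) with an error of the third order in h, k.

```python
import math

# Taylor's theorem in two variables: f(a+h, b+k) = f + df + d2f/2! + O(h^3).
a, b, h, k = 0.3, 0.5, 0.01, 0.01
f = lambda x, y: math.exp(x) * math.sin(y)
fx, fy = math.exp(a) * math.sin(b), math.exp(a) * math.cos(b)
fxx, fxy, fyy = fx, fy, -fx        # exact second derivatives of exp(x)*sin(y)

series = (f(a, b)
          + h * fx + k * fy
          + (h**2 * fxx + 2 * h * k * fxy + k**2 * fyy) / 2)
exact = f(a + h, b + k)
print(exact - series)  # of the third order in h and k
```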

46. In treatises on the differential calculus much space is usually devoted to the differential geometry of curves and surfaces. A few remarks and results relating to the differential geometry of plane curves are set down here.

(i.) If ψ denotes the angle which the radius vector drawn from the origin makes with the tangent to a curve at a point whose polar coordinates are r, θ, and if p denotes the perpendicular from the origin to the tangent, then

cos ψ = dr/ds,   sin ψ = rdθ/ds = p/r,

where ds denotes the element of arc. The curve may be determined by an equation connecting p with r.

(ii.) The locus of the foot of the perpendicular let fall from the origin upon the tangent to a curve at a point is called the pedal of the curve with respect to the origin. The angle ψ for the pedal is the same as the angle ψ for the curve. Hence the (p, r) equation of the pedal can be deduced. If the pedal is regarded as the primary curve, the curve of which it is the pedal is the “negative pedal” of the primary. We may have pedals of pedals and so on, also negative pedals of negative pedals and so on. Negative pedals are usually determined as envelopes.

(iii.) If φ denotes the angle which the tangent at any point makes with a fixed line, we have

r2 = p2 + (dp/dφ)2.
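The formulae in (i.) can be checked numerically (the logarithmic spiral r = e^{kθ}, along which ψ is constant, is an assumed example):

```python
import math

# For the spiral r = exp(k*theta): cos(psi) = dr/ds = k/sqrt(1 + k^2)
# and sin(psi) = r*dtheta/ds = 1/sqrt(1 + k^2), both constant.
k, theta = 0.5, 1.2
r = lambda t: math.exp(k * t)

dt = 1e-6
dr = r(theta + dt) - r(theta)
ds = math.sqrt(dr**2 + (r(theta) * dt)**2)   # element of arc in polars
cos_psi = dr / ds
sin_psi = r(theta) * dt / ds
print(cos_psi, k / math.sqrt(1 + k**2))   # agree
print(sin_psi, 1 / math.sqrt(1 + k**2))   # agree
```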

(iv.) The “average curvature” of the arc Δs of a curve between two points is measured by the quotient

${\displaystyle \left|{\frac {\Delta \phi }{\Delta s}}\right|}$

where the upright lines denote, as usual, that the absolute value of the included expression is to be taken, and φ is the angle which the tangent makes with a fixed line, so that Δφ is the angle between the tangents (or normals) at the points. As one of the points moves up to coincidence with the other this average curvature tends to a limit which is the “curvature” of the curve at the point. It is denoted by

${\displaystyle \left|{\frac {d\phi }{ds}}\right|}$

Sometimes the upright lines are omitted and a rule of signs is given:—Let the arc s of the curve be measured from some point along the curve in a chosen sense, and let the normal be drawn towards that side to which the curve is concave; if the normal is directed towards the left of an observer looking along the tangent in the chosen sense of description the curvature is reckoned positive, in the contrary case negative. The differential dφ is often called the “angle of contingence.” In the 14th century the size of the angle between a curve and its tangent seems to have been seriously debated, and the name “angle of contingence” was then given to the supposed angle.

(v.) The curvature of a curve at a point is the same as that of a certain circle which touches the curve at the point, and the “radius of curvature” ρ is the radius of this circle. We have ${\displaystyle {\frac {1}{\rho }}=\left|{\frac {d\phi }{ds}}\right|}$ . The centre of the circle is called the “centre of curvature”; it is the limiting position of the point of intersection of the normal at the point and the normal at a neighbouring point, when the second point moves up to coincidence with the first. If a circle is described to intersect the curve at the point P and at two other points, and one of these two points is moved up to coincidence with P, the circle touches the curve at the point P and meets it in another point; the centre of the circle is then on the normal. As the third point now moves up to coincidence with P, the centre of the circle moves to the centre of curvature. The circle is then said to “osculate” the curve, or to have “contact of the second order” with it at P.

(vi.) The following are formulae for the radius of curvature:—

${\displaystyle {\frac {1}{\rho }}=\left|\left\{1+\left({\frac {dy}{dx}}\right)^{2}\right\}^{-{\frac {3}{2}}}{\frac {d^{2}y}{dx^{2}}}\right\vert ,}$

${\displaystyle \rho =\left|r{\frac {dr}{dp}}\right\vert =\left|p+{\frac {d^{2}p}{d\phi ^{2}}}\right\vert .}$
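The Cartesian formula can be tested numerically (the parabola y = x², with ρ = ½ at the vertex, is an assumed example):

```python
# Radius of curvature 1/rho = |y''| / (1 + y'^2)^(3/2) for y = x^2
# at the vertex, where the osculating circle has radius 1/2.
def y(x):
    return x**2

x0, h = 0.0, 1e-4
y1 = (y(x0 + h) - y(x0 - h)) / (2 * h)           # dy/dx
y2 = (y(x0 + h) - 2 * y(x0) + y(x0 - h)) / h**2  # d2y/dx2
rho = (1 + y1**2)**1.5 / abs(y2)
print(rho)  # 0.5
```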

(vii.) The points at which the curvature vanishes are “points of inflection.” If P is a point of inflection and Q a neighbouring point, then, as Q moves up to coincidence with P, the distance from P to the point of intersection of the normals at P and Q becomes greater than any distance that can be assigned. The equation which gives the abscissae of the points in which a straight line meets the curve being expressed in the form ƒ(x) = 0, the function ƒ(x) has a factor (x − x0)3, where x0 is the abscissa of the point of inflection P, and the line is the tangent at P. When the factor (x − x0) occurs (n + 1) times in ƒ(x), the curve is said to have “contact of the nth order” with the line. There is an obvious modification when the line is parallel to the axis of y.

(viii.) The locus of the centres of curvature, or envelope of the normals, of a curve is called the “evolute.” A curve which has a given curve as evolute is called an “involute” of the given curve. All the involutes are “parallel” curves, that is to say, they are such that one is derived from another by marking off a constant distance along the normal. The involutes are “orthogonal trajectories” of the tangents to the common evolute.

(ix.) The equation of an algebraic curve of the nth degree can be expressed in the form u0 + u1 + u2 + . . . + un = 0, where u0 is a constant, and ur is a homogeneous rational integral function of x, y of the rth degree. When the origin is on the curve, u0 vanishes, and u1 = 0 represents the tangent at the origin. If u1 also vanishes, the origin is a double point and u2 = 0 represents the tangents at the origin. If u2 has distinct factors, or is of the form a(y − p1x)(y − p2x), the value of y on either branch of the curve can be expressed (for points sufficiently near the origin) in a power series, which is either

${\displaystyle p_{1}x+{\frac {1}{2}}q_{1}x^{2}+\ldots ,}$  or ${\displaystyle p_{2}x+{\frac {1}{2}}q_{2}x^{2}+\ldots ,}$

where q1, . . . and q2, . . . are determined without ambiguity. If p1 and p2 are real the two branches have radii of curvature ρ1, ρ2 determined by the formulae

${\displaystyle {\frac {1}{\rho _{1}}}=\left|\left(1+p_{1}^{2}\right)^{-{\frac {3}{2}}}q_{1}\right\vert ,\quad {\frac {1}{\rho _{2}}}=\left|\left(1+p_{2}^{2}\right)^{-{\frac {3}{2}}}q_{2}\right\vert .}$

When p1 and p2 are imaginary the origin is the real point of intersection of two imaginary branches. In the real figure of the curve it is an isolated point. If u2 is a square, a(y − px)2, the origin is a cusp, and in general there is not a series for y in integral powers of x, which is valid in the neighbourhood of the origin. The further investigation of cusps and multiple points belongs rather to analytical geometry and the theory of algebraic functions than to differential calculus.

(x.) When the equation of a curve is given in the form u0 + u1 + . . . + un−1 + un = 0 where the notation is the same as that in (ix.), the factors of un determine the directions of the asymptotes. If these factors are all real and distinct, there is an asymptote corresponding to each factor. If un = L1L2 . . . Ln, where L1, . . . are linear in x, y, we may resolve un−1/un into partial fractions according to the formula

${\displaystyle {\frac {u_{n-1}}{u_{n}}}={\frac {{\text{A}}_{1}}{{\text{L}}_{1}}}+{\frac {{\text{A}}_{2}}{{\text{L}}_{2}}}+\ldots +{\frac {{\text{A}}_{n}}{{\text{L}}_{n}}},}$

and then L1 + A1 = 0, L2 + A2 = 0, . . . are the equations of the asymptotes. When a real factor of un is repeated we may have two parallel asymptotes or we may have a “parabolic asymptote.” Sometimes the parallel asymptotes coincide, as in the curve x2(x2 + y2a2) = a4, where x = 0 is the only real asymptote. The whole theory of asymptotes belongs properly to analytical geometry and the theory of algebraic functions.