Scientific Papers of Josiah Willard Gibbs, Volume 2/Chapter III

III.

ELEMENTS OF VECTOR ANALYSIS.

[Privately printed, New Haven, pp. 17–50, 1881; pp. 50–90, 1884.]

(The fundamental principles of the following analysis are such as are familiar under a slightly different form to students of quaternions. The manner in which the subject is developed is somewhat different from that followed in treatises on quaternions, since the object of the writer does not require any use of the conception of the quaternion, being simply to give a suitable notation for those relations between vectors, or between vectors and scalars, which seem most important, and which lend themselves most readily to analytical transformations, and to explain some of these transformations. As a precedent for such a departure from quaternionic usage, Clifford's Kinematic may be cited. In this connection, the name of Grassmann may also be mentioned, to whose system the following method attaches itself in some respects more closely than to that of Hamilton.)

CHAPTER I.

concerning the algebra of vectors.

Fundamental Notions.

1. Definition.—If anything has magnitude and direction, its magnitude and direction taken together constitute what is called a vector.

The numerical description of a vector requires three numbers, but nothing prevents us from using a single letter for its symbolical designation. An algebra or analytical method in which a single letter or other expression is used to specify a vector may be called a vector algebra or vector analysis.

Def.—As distinguished from vectors the real (positive or negative) quantities of ordinary algebra are called scalars.[1]

As it is convenient that the form of the letter should indicate whether a vector or a scalar is denoted, we shall use the small Greek letters to denote vectors, and the small English letters to denote scalars. (The three letters, ${\displaystyle i,j,k}$, will make an exception, to be mentioned more particularly hereafter. Moreover, ${\displaystyle \pi }$ will be used in its usual scalar sense, to denote the ratio of the circumference of a circle to its diameter.)

2. Def.—Vectors are said to be equal when they are the same both in direction and in magnitude. This equality is denoted by the ordinary sign, as ${\displaystyle \alpha =\beta }$. The reader will observe that this vector equation is the equivalent of three scalar equations.

A vector is said to be equal to zero, when its magnitude is zero. Such vectors may be set equal to one another, irrespectively of any considerations relating to direction.

3. Perhaps the most simple example of a vector is afforded by a directed straight line, as the line drawn from ${\displaystyle {\text{A}}}$ to ${\displaystyle {\text{B}}}$. We may use the notation ${\displaystyle {\overline {\text{AB}}}}$ to denote this line as a vector, i.e., to denote its length and direction without regard to its position in other respects. The points ${\displaystyle {\text{A}}}$ and ${\displaystyle {\text{B}}}$ may be distinguished as the origin and the terminus of the vector. Since any magnitude may be represented by a length, any vector may be represented by a directed line; and it will often be convenient to use language relating to vectors, which refers to them as thus represented.

Reversal of Direction, Scalar Multiplication and Division.

4. The negative sign (${\displaystyle -}$) reverses the direction of a vector. (Sometimes the sign ${\displaystyle +}$ may be used to call attention to the fact that the vector has not the negative sign.)

Def.—A vector is said to be multiplied or divided by a scalar when its magnitude is multiplied or divided by the numerical value of the scalar and its direction is either unchanged or reversed according as the scalar is positive or negative. These operations are represented by the same methods as multiplication and division in algebra, and are to be regarded as substantially identical with them. The terms scalar multiplication and scalar division are used to denote multiplication and division by scalars, whether the quantity multiplied or divided is a scalar or a vector.

5. Def.—A unit vector is a vector of which the magnitude is unity.

Any vector may be regarded as the product of a positive scalar (the magnitude of the vector) and a unit vector.

The notation ${\displaystyle \alpha _{0}}$ may be used to denote the magnitude of the vector ${\displaystyle \alpha }$.

6. Def.—The sum of the vectors ${\displaystyle \alpha ,\beta }$, etc. (written ${\displaystyle \alpha +\beta +{\text{etc.}}}$) is the vector found by the following process. Assuming any point ${\displaystyle {\text{A}}}$, we determine successively the points ${\displaystyle {\text{B, C}}}$, etc., so that ${\displaystyle {\overline {\text{AB}}}=\alpha ,{\overline {\text{BC}}}=\beta }$, etc. The vector drawn from ${\displaystyle {\text{A}}}$ to the last point thus determined is the sum required. This is sometimes called the geometrical sum, to distinguish it from an algebraic sum or an arithmetical sum. It is also called the resultant, and ${\displaystyle \alpha ,\beta }$, etc. are called the components. When the vectors to be added are all parallel to the same straight line, geometrical addition reduces to algebraic; when they have all the same direction, geometrical addition like algebraic reduces to arithmetical. It may easily be shown that the value of a sum is not affected by changing the order of two consecutive terms, and therefore that it is not affected by any change in the order of the terms. Again, it is evident from the definition that the value of a sum is not altered by uniting any of its terms in brackets, as ${\displaystyle \alpha +[\beta +\gamma ]+{\text{etc.,}}}$ which is in effect to substitute the sum of the terms enclosed for the terms themselves among the vectors to be added. In other words, the commutative and associative principles of arithmetical and algebraic addition hold true of geometrical addition.

7. Def.— A vector is said to be subtracted when it is added after reversal of direction. This is indicated by the use of the sign ${\displaystyle -}$ instead of ${\displaystyle +}$.

8. It is easily shown that the distributive principle of arithmetical and algebraic multiplication applies to the multiplication of sums of vectors by scalars or sums of scalars, i.e.,

 {\displaystyle {\begin{aligned}(m+n+{\text{etc.}})[\alpha +\beta +{\text{etc.}}]&=m\alpha +n\alpha &+{\text{etc.}}\\&+m\beta +n\beta &+{\text{etc.}}\\&&+{\text{etc.}}\end{aligned}}}
9. Vector Equations.— If we have equations between sums and differences of vectors, we may transpose terms in them, multiply or divide by any scalar, and add or subtract the equations, precisely as in the case of the equations of ordinary algebra. Hence, if we have several such equations containing known and unknown vectors, the processes of elimination and reduction by which the unknown vectors may be expressed in terms of the known are precisely the same, and subject to the same limitations, as if the letters representing vectors represented scalars. This will be evident if we consider that in the multiplications incident to elimination in the supposed scalar equations the multipliers are the coefficients of the unknown quantities, or functions of these coefficients, and that such multiplications may be applied to the vector equations, since the coefficients are scalars.

10. Linear relation of four vectors, Coordinates.—If ${\displaystyle \alpha ,\beta }$, and ${\displaystyle \gamma }$ are any given vectors not parallel to the same plane, any other vector ${\displaystyle \rho }$ may be expressed in the form

 ${\displaystyle \rho =a\alpha +b\beta +c\gamma .}$
If ${\displaystyle \alpha ,\beta }$, and ${\displaystyle \gamma }$ are unit vectors, ${\displaystyle a,b}$, and ${\displaystyle c}$ are the ordinary scalar components of ${\displaystyle \rho }$ parallel to ${\displaystyle \alpha ,\beta }$, and ${\displaystyle \gamma }$. If ${\displaystyle \rho ={\overline {\text{OP}}}}$, (${\displaystyle \alpha ,\beta ,\gamma }$ being unit vectors), ${\displaystyle a,b}$, and ${\displaystyle c}$ are the cartesian coordinates of the point ${\displaystyle {\text{P}}}$ referred to axes through ${\displaystyle {\text{O}}}$ parallel to ${\displaystyle \alpha ,\beta }$, and ${\displaystyle \gamma }$. When the values of these scalars are given, ${\displaystyle \rho }$ is said to be given in terms of ${\displaystyle \alpha ,\beta }$, and ${\displaystyle \gamma }$. It is generally in this way that the value of a vector is specified, viz., in terms of three known vectors. For such purposes of reference, a system of three mutually perpendicular vectors has certain evident advantages.

11. Normal systems of unit vectors.—The letters ${\displaystyle i,j,k}$ are appropriated to the designation of a normal system of unit vectors, i.e., three unit vectors, each of which is at right angles to the other two and determined in direction by them in a perfectly definite manner. We shall always suppose that ${\displaystyle k}$ is on the side of the ${\displaystyle i{\text{-}}j}$ plane on which a rotation from ${\displaystyle i}$ to ${\displaystyle j}$ (through one right angle) appears counter-clockwise. In other words, the directions of ${\displaystyle i,j}$, and ${\displaystyle k}$ are to be so determined that if they be turned (remaining rigidly connected with each other) so that ${\displaystyle i}$ points to the east, and ${\displaystyle j}$ to the north, ${\displaystyle k}$ will point upward. When rectangular axes of ${\displaystyle {\text{X}},{\text{Y}}}$, and ${\displaystyle {\text{Z}}}$ are employed, their directions will be conformed to a similar condition, and ${\displaystyle i,j,k}$ (when the contrary is not stated) will be supposed parallel to these axes respectively. We may have occasion to use more than one such system of unit vectors, just as we may use more than one system of coordinate axes. In such cases, the different systems may be distinguished by accents or otherwise.

12. Numerical computation of a geometrical sum.—If

 ${\displaystyle \rho =a\alpha +b\beta +c\gamma ,}$ ${\displaystyle \sigma =a'\alpha +b'\beta +c'\gamma ,}$ etc.,
then
 ${\displaystyle \rho +\sigma +{\text{etc.}}=(a+a'+{\text{etc.}})\alpha +(b+b'+{\text{etc.}})\beta +(c+c'+{\text{etc.}})\gamma ,}$
i.e., the coefficients by which a geometrical sum is expressed in terms of three vectors are the sums of the coefficients by which the separate terms of the geometrical sum are expressed in terms of the same three vectors.
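The rule of No. 12 lends itself directly to computation. The following Python fragment is a modern illustration, not part of the original text; the function and variable names are invented for the example. Each vector is represented by the triple of its coefficients with respect to three fixed vectors, and a geometrical sum is formed coefficient by coefficient.

```python
# A minimal computational sketch of No. 12 (an illustration, not part of
# the original text): vectors given by their coefficients (a, b, c) with
# respect to alpha, beta, gamma are summed coefficient by coefficient.
def vec_sum(*vectors):
    """Geometrical sum: add corresponding coefficients."""
    return tuple(sum(cs) for cs in zip(*vectors))

rho   = (1.0, 2.0, 3.0)   # rho   = 1*alpha + 2*beta + 3*gamma
sigma = (4.0, -1.0, 0.5)  # sigma = 4*alpha - 1*beta + 0.5*gamma

print(vec_sum(rho, sigma))  # (5.0, 1.0, 3.5)
```

Any number of terms may be summed at once, since the coefficients of the sum are simply the sums of the corresponding coefficients.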

Direct and Skew Products of Vectors.

13. Def.—The direct product of ${\displaystyle \alpha }$ and ${\displaystyle \beta }$ (written ${\displaystyle \alpha .\beta }$) is the scalar quantity obtained by multiplying the product of their magnitudes by the cosine of the angle made by their directions.

14. Def.—The skew product of ${\displaystyle \alpha }$ and ${\displaystyle \beta }$ (written ${\displaystyle \alpha \times \beta }$) is a vector function of ${\displaystyle \alpha }$ and ${\displaystyle \beta }$. Its magnitude is obtained by multiplying the product of the magnitudes of ${\displaystyle \alpha }$ and ${\displaystyle \beta }$ by the sine of the angle made by their directions. Its direction is at right angles to ${\displaystyle \alpha }$ and ${\displaystyle \beta }$, and on that side of the plane containing ${\displaystyle \alpha }$ and ${\displaystyle \beta }$ (supposed drawn from a common origin) on which a rotation from ${\displaystyle \alpha }$ to ${\displaystyle \beta }$ through an arc of less than 180° appears counter-clockwise.

The direction of ${\displaystyle \alpha \times \beta }$ may also be defined as that in which an ordinary screw advances as it turns so as to carry ${\displaystyle \alpha }$ toward ${\displaystyle \beta }$.

Again, if ${\displaystyle \alpha }$ be directed toward the east, and ${\displaystyle \beta }$ lie in the same horizontal plane and on the north side of ${\displaystyle \alpha }$, ${\displaystyle \alpha \times \beta }$ will be directed upward.

15. It is evident from the preceding definitions that

 ${\displaystyle \alpha .\beta =\beta .\alpha ,}$⁠and⁠${\displaystyle \alpha \times \beta =-\beta \times \alpha .}$

16. Moreover,${\displaystyle [n\alpha ].\beta =\alpha .[n\beta ],}$

and${\displaystyle [n\alpha ]\times \beta =\alpha \times [n\beta ].}$

The brackets may therefore be omitted in such expressions.

17. From the definitions of No. 11 it appears that

 ${\displaystyle i.i=j.j=k.k=1,}$
 ${\displaystyle i.j=j.i=i.k=k.i=j.k=k.j=0,}$
 ${\displaystyle i\times i=0,}$ ${\displaystyle j\times j=0,}$ ${\displaystyle k\times k=0,}$
 ${\displaystyle i\times j=k,}$ ${\displaystyle j\times k=i,}$ ${\displaystyle k\times i=j,}$
 ${\displaystyle j\times i=-k,}$ ${\displaystyle k\times j=-i,}$ ${\displaystyle i\times k=-j.}$
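The multiplication table of No. 17 can be verified with components. In the sketch below (a modern illustration, not part of the original text), ${\displaystyle i,j,k}$ are taken as the standard basis tuples, which is an assumption of this example.

```python
# A small check that the component representation i=(1,0,0), j=(0,1,0),
# k=(0,0,1) -- an assumption of this illustration -- reproduces the
# multiplication table of No. 17.
def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

def cross(a, b):
    return (a[1]*b[2] - a[2]*b[1],
            a[2]*b[0] - a[0]*b[2],
            a[0]*b[1] - a[1]*b[0])

i, j, k = (1, 0, 0), (0, 1, 0), (0, 0, 1)

assert dot(i, i) == dot(j, j) == dot(k, k) == 1
assert dot(i, j) == dot(i, k) == dot(j, k) == 0
assert cross(i, i) == cross(j, j) == cross(k, k) == (0, 0, 0)
assert cross(i, j) == k and cross(j, k) == i and cross(k, i) == j
assert cross(j, i) == (0, 0, -1) and cross(k, j) == (-1, 0, 0)
```

The last line checks the anticommutative entries ${\displaystyle j\times i=-k}$ and ${\displaystyle k\times j=-i}$.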

18. If we resolve ${\displaystyle \beta }$ into two components ${\displaystyle \beta '}$ and ${\displaystyle \beta ''}$, of which the first is parallel and the second perpendicular to ${\displaystyle \alpha }$, we shall have

 ${\displaystyle \alpha .\beta =\alpha .\beta '}$⁠and⁠${\displaystyle \alpha \times \beta =\alpha \times \beta ''.}$

19. ${\displaystyle \alpha .[\beta +\gamma ]=\alpha .\beta +\alpha .\gamma }$ and ${\displaystyle \alpha \times [\beta +\gamma ]=\alpha \times \beta +\alpha \times \gamma .}$

To prove this, let ${\displaystyle \sigma =\beta +\gamma }$, and resolve each of the vectors ${\displaystyle \alpha ,\beta ,\sigma }$ into two components, one parallel and the other perpendicular to ${\displaystyle \alpha }$. Let these be ${\displaystyle \beta ',\beta '',\gamma ',\gamma '',\sigma ',\sigma ''.}$ Then the equations to be proved will reduce by the last section to

 ${\displaystyle \alpha .\sigma '=\alpha .\beta '+\alpha .\gamma '}$⁠and⁠${\displaystyle \alpha \times \sigma ''=\alpha \times \beta ''+\alpha \times \gamma ''.}$
Now since ${\displaystyle \sigma =\beta +\gamma }$ we may form a triangle in space, the sides of which shall be ${\displaystyle \beta ,\gamma }$, and ${\displaystyle \sigma .}$ Projecting this on a plane perpendicular to ${\displaystyle \alpha }$, we obtain a triangle having the sides ${\displaystyle \beta '',\gamma '',}$ and ${\displaystyle \sigma '',}$ which affords the relation ${\displaystyle \sigma ''=\beta ''+\gamma ''.}$ If we pass planes perpendicular to ${\displaystyle \alpha }$ through the vertices of the first triangle, they will give on a line parallel to ${\displaystyle \alpha }$ segments equal to ${\displaystyle \beta ',\gamma ',\sigma '.}$ Thus we obtain the relation ${\displaystyle \sigma '=\beta '+\gamma '.}$ Therefore ${\displaystyle \alpha .\sigma '=\alpha .\beta '+\alpha .\gamma ',}$ since all the cosines involved in these products are equal to unity. Moreover, if ${\displaystyle \alpha }$ is a unit vector, we shall evidently have ${\displaystyle \alpha \times \sigma ''=\alpha \times \beta ''+\alpha \times \gamma '',}$ since the effect of the skew multiplication by ${\displaystyle \alpha }$ upon vectors in a plane perpendicular to ${\displaystyle \alpha }$ is simply to rotate them all 90° in that plane. But any case may be reduced to this by dividing both sides of the equation to be proved by the magnitude of ${\displaystyle \alpha .}$ The propositions are therefore proved.
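The distributive laws just proved may also be spot-checked numerically. The Python sketch below is a modern illustration with arbitrarily chosen integer components; the names are invented for the example.

```python
# A numerical spot-check of No. 19 (illustrative only): the direct and
# skew products distribute over vector addition.
def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

def cross(a, b):
    return (a[1]*b[2] - a[2]*b[1],
            a[2]*b[0] - a[0]*b[2],
            a[0]*b[1] - a[1]*b[0])

def add(a, b):
    return tuple(x + y for x, y in zip(a, b))

alpha, beta, gamma = (1, -2, 4), (3, 0, 5), (-1, 2, 2)

# alpha.(beta + gamma) = alpha.beta + alpha.gamma
assert dot(alpha, add(beta, gamma)) == dot(alpha, beta) + dot(alpha, gamma)
# alpha x (beta + gamma) = alpha x beta + alpha x gamma
assert cross(alpha, add(beta, gamma)) == add(cross(alpha, beta), cross(alpha, gamma))
```

With integer components the equalities hold exactly, since both laws are identities in the components.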

20. Hence,

 ${\displaystyle [\alpha +\beta ].\gamma =\alpha .\gamma +\beta .\gamma ,}$
 ${\displaystyle [\alpha +\beta ]\times \gamma =\alpha \times \gamma +\beta \times \gamma ,}$
 ${\displaystyle [\alpha +\beta ].[\gamma +\delta ]=\alpha .\gamma +\alpha .\delta +\beta .\gamma +\beta .\delta ,}$
 ${\displaystyle [\alpha +\beta ]\times [\gamma +\delta ]=\alpha \times \gamma +\alpha \times \delta +\beta \times \gamma +\beta \times \delta ;}$
and, in general, direct and skew products of sums of vectors may be expanded precisely as the products of sums in algebra, except that in skew products the order of the factors must not be changed without compensation in the sign of the term. If any of the terms in the factors have negative signs, the signs of the expanded product (when there is no change in the order of the factors) will be determined by the same rules as in algebra. It is on account of this analogy with algebraic products that these functions of vectors are called products and that other terms relating to multiplication are applied to them.

21. Numerical calculation of direct and skew products.—The properties demonstrated in the last two paragraphs (which may be briefly expressed by saying that the operations of direct and skew multiplication are distributive) afford the rule for the numerical calculation of a direct product, or of the components of a skew product, when the rectangular components of the factors are given numerically. In fact, if

 ${\displaystyle \alpha =xi+yj+zk,}$⁠and⁠${\displaystyle \beta =x'i+y'j+z'k;}$
 ${\displaystyle \alpha .\beta =xx'+yy'+zz',}$
and
${\displaystyle \alpha \times \beta =(yz'-zy')i+(zx'-xz')j+(xy'-yx')k.}$
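The component formulas of No. 21 may be checked against the geometric definitions of Nos. 13 and 14. The following Python sketch (a modern illustration; all names and the sample components are invented) confirms that the component computation reproduces the product of the magnitudes with the cosine, respectively the sine, of the included angle.

```python
# A numerical check of No. 21 against the geometric definitions of
# Nos. 13-14: the component formulas give |a||b|cos(theta) for the direct
# product and |a||b|sin(theta) for the magnitude of the skew product.
import math

def dot(a, b):
    return a[0]*b[0] + a[1]*b[1] + a[2]*b[2]

def cross(a, b):
    return (a[1]*b[2] - a[2]*b[1],
            a[2]*b[0] - a[0]*b[2],
            a[0]*b[1] - a[1]*b[0])

def magnitude(a):
    return math.sqrt(dot(a, a))

alpha, beta = (1.0, 2.0, 2.0), (3.0, 0.0, 4.0)
theta = math.acos(dot(alpha, beta) / (magnitude(alpha) * magnitude(beta)))

# direct product = product of magnitudes times cosine of the included angle
assert math.isclose(dot(alpha, beta),
                    magnitude(alpha) * magnitude(beta) * math.cos(theta))
# magnitude of skew product = product of magnitudes times sine of the angle
assert math.isclose(magnitude(cross(alpha, beta)),
                    magnitude(alpha) * magnitude(beta) * math.sin(theta))
# the skew product is at right angles to both factors
assert math.isclose(dot(cross(alpha, beta), alpha), 0.0, abs_tol=1e-12)
```

Here ${\displaystyle \alpha }$ has magnitude 3 and ${\displaystyle \beta }$ magnitude 5, so the checks exercise non-unit vectors.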

22. Representation of the area of a parallelogram by a skew product.—It will be easily seen that ${\displaystyle \alpha \times \beta }$ represents in magnitude the area of the parallelogram of which ${\displaystyle \alpha }$ and ${\displaystyle \beta }$ (supposed drawn from a common origin) are the sides, and that it represents in direction the normal to the plane of the parallelogram on the side on which the rotation from ${\displaystyle \alpha }$ toward ${\displaystyle \beta }$ appears counter-clockwise.

23. Representation of the volume of a parallelopiped by a triple product.—It will also be seen that ${\displaystyle \alpha \times \beta .\gamma }$[2] represents in numerical value the volume of the parallelopiped of which ${\displaystyle \alpha ,\beta }$, and ${\displaystyle \gamma }$ (supposed drawn from a common origin) are the edges, and that the value of the expression is positive or negative according as ${\displaystyle \gamma }$ lies on the side of the plane of ${\displaystyle \alpha }$ and ${\displaystyle \beta }$ on which the rotation from ${\displaystyle \alpha }$ to ${\displaystyle \beta }$ appears counter-clockwise, or on the opposite side.

24. Hence,

 {\displaystyle {\begin{aligned}\alpha \times \beta .\gamma &=\gamma \times \alpha .\beta =\gamma .\alpha \times \beta =\alpha .\beta \times \gamma \\&=\beta .\gamma \times \alpha =-\beta \times \alpha .\gamma =-\gamma \times \beta .\alpha =-\alpha \times \gamma .\beta \\&=-\gamma .\beta \times \alpha =-\alpha .\gamma \times \beta =-\beta .\alpha \times \gamma .\end{aligned}}}
It will be observed that all the products of this type, which can be made with three given vectors, are the same in numerical value, and that any two such products are of the same or opposite character in respect to sign, according as the cyclic order of the letters is the same or different. The product vanishes when two of the vectors are parallel to the same line, or when the three are parallel to the same plane.
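The behaviour described in No. 24 is easily exhibited numerically. The Python sketch below (a modern illustration with arbitrarily chosen components) checks invariance under cyclic permutation, the sign change under a transposition, and the vanishing of the product for three vectors parallel to the same plane.

```python
# Sketch of the properties stated in No. 24: the scalar triple product is
# unchanged by cyclic permutation, changes sign when two factors are
# interchanged, and vanishes for coplanar vectors.
def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

def cross(a, b):
    return (a[1]*b[2] - a[2]*b[1],
            a[2]*b[0] - a[0]*b[2],
            a[0]*b[1] - a[1]*b[0])

def triple(a, b, c):
    """Scalar triple product a.(b x c): signed volume of the parallelopiped."""
    return dot(a, cross(b, c))

alpha, beta, gamma = (1, 2, 0), (0, 1, 3), (2, 0, 1)
v = triple(alpha, beta, gamma)

assert triple(beta, gamma, alpha) == v == triple(gamma, alpha, beta)
assert triple(beta, alpha, gamma) == -v
# coplanar case: the third vector lies in the plane of the first two
coplanar = tuple(x + y for x, y in zip(alpha, beta))
assert triple(alpha, beta, coplanar) == 0
```

The sign of ${\displaystyle v}$ tells on which side of the plane of the first two vectors the third lies.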

This kind of product may be called the scalar product of the three vectors. There are two other kinds of products of three vectors, both of which are vectors, viz., products of the type ${\displaystyle (\alpha .\beta )\gamma }$ or ${\displaystyle \gamma (\alpha .\beta )}$, and products of the type ${\displaystyle \alpha \times [\beta \times \gamma ]}$ or ${\displaystyle [\gamma \times \beta ]\times \alpha }$.

25. ${\displaystyle i.j\times k=j.k\times i=k.i\times j=1.}$${\displaystyle i.k\times j=k.j\times i=j.i\times k=-1.}$

From these equations, which follow immediately from those of No. 17, the propositions of the last section might have been derived, viz., by substituting for ${\displaystyle \alpha ,\beta }$, and ${\displaystyle \gamma }$, respectively, expressions of the form ${\displaystyle xi+yj+zk,x'i+y'j+z'k}$, and ${\displaystyle x''i+y''j+z''k}$.[3] Such a method, which may be called expansion in terms of ${\displaystyle i,j}$, and ${\displaystyle k}$, will on many occasions afford very simple, although perhaps lengthy, demonstrations.

26. Triple products containing only two different letters.—The significance and the relations of ${\displaystyle (\alpha .\alpha )\beta ,(\alpha .\beta )\alpha }$, and ${\displaystyle \alpha \times [\alpha \times \beta ]}$ will be most evident, if we consider ${\displaystyle \beta }$ as made up of two components, ${\displaystyle \beta '}$ and ${\displaystyle \beta ''}$, respectively parallel and perpendicular to ${\displaystyle \alpha }$. Then

 ${\displaystyle \beta =\beta '+\beta '',}$
 ${\displaystyle (\alpha .\beta )\alpha =(\alpha .\beta ')\alpha =(\alpha .\alpha )\beta ',}$
 ${\displaystyle \alpha \times [\alpha \times \beta ]=\alpha \times [\alpha \times \beta '']=-(\alpha .\alpha )\beta ''.}$
Hence,
 ${\displaystyle \alpha \times [\alpha \times \beta ]=(\alpha .\beta )\alpha -(\alpha .\alpha )\beta .}$

27. General relation of the vector products of three factors.—In the triple product ${\displaystyle \alpha \times [\beta \times \gamma ]}$ we may set

 ${\displaystyle \alpha =l\beta +m\gamma +n\beta \times \gamma ,}$
unless ${\displaystyle \beta }$ and ${\displaystyle \gamma }$ have the same direction. Then
 {\displaystyle {\begin{aligned}\alpha \times [\beta \times \gamma ]&=l\beta \times [\beta \times \gamma ]+m\gamma \times [\beta \times \gamma ]\\&=l(\beta .\gamma )\beta -l(\beta .\beta )\gamma -m(\gamma .\beta )\gamma +m(\gamma .\gamma )\beta \\&=(l\beta .\gamma +m\gamma .\gamma )\beta -(l\beta .\beta +m\gamma .\beta )\gamma .\end{aligned}}}
But
${\displaystyle l\beta .\gamma +m\gamma .\gamma =\alpha .\gamma ,}$⁠and⁠${\displaystyle l\beta .\beta +m\gamma .\beta =\alpha .\beta .}$
Therefore
${\displaystyle \alpha \times [\beta \times \gamma ]=(\alpha .\gamma )\beta -\gamma (\beta .\alpha ),}$
which is evidently true, when ${\displaystyle \beta }$ and ${\displaystyle \gamma }$ have the same direction. It may also be written
 ${\displaystyle [\gamma \times \beta ]\times \alpha =\beta (\gamma .\alpha )-\gamma (\beta .\alpha ).}$
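The expansion just derived, ${\displaystyle \alpha \times [\beta \times \gamma ]=(\alpha .\gamma )\beta -\gamma (\beta .\alpha )}$, can be checked on arbitrary components. The Python sketch below is a modern illustration; the names and sample values are invented for the example.

```python
# A numerical check (illustrative values) of the expansion of No. 27:
# alpha x [beta x gamma] = (alpha.gamma) beta - (alpha.beta) gamma.
def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

def cross(a, b):
    return (a[1]*b[2] - a[2]*b[1],
            a[2]*b[0] - a[0]*b[2],
            a[0]*b[1] - a[1]*b[0])

def scale(s, a):
    return tuple(s * x for x in a)

alpha, beta, gamma = (2, -1, 3), (1, 4, 0), (0, 2, 5)

lhs = cross(alpha, cross(beta, gamma))
rhs = tuple(p - q for p, q in zip(scale(dot(alpha, gamma), beta),
                                  scale(dot(alpha, beta), gamma)))
assert lhs == rhs
```

With integer components the two sides agree exactly, as the expansion is an identity in the components.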
28. This principle may be used in the transformation of more complex products. It will be observed that its application will always simultaneously eliminate, or introduce, two signs of skew multiplication.

The student will easily prove the following identical equations, which, although of considerable importance, are here given principally as exercises in the application of the preceding formulæ.

29. ${\displaystyle \alpha \times [\beta \times \gamma ]+\beta \times [\gamma \times \alpha ]+\gamma \times [\alpha \times \beta ]=0.}$

 30. ${\displaystyle [\alpha \times \beta ].[\gamma \times \delta ]=(\alpha .\gamma )(\beta .\delta )-(\alpha .\delta )(\beta .\gamma ).}$

 31. ${\displaystyle [\alpha \times \beta ]\times [\gamma \times \delta ]=(\alpha .\gamma \times \delta )\beta -(\beta .\gamma \times \delta )\alpha =(\alpha .\beta \times \delta )\gamma -(\alpha .\beta \times \gamma )\delta .}$

 32. ${\displaystyle \alpha \times [\beta \times [\gamma \times \delta ]]=(\alpha .\gamma \times \delta )\beta -(\alpha .\beta )\gamma \times \delta =(\beta .\delta )\alpha \times \gamma -(\beta .\gamma )\alpha \times \delta .}$

 33. ${\displaystyle [\alpha \times \beta ].[\gamma \times \delta ]\times [\epsilon \times \zeta ]=(\alpha .\beta \times \delta )(\gamma .\epsilon \times \zeta )-(\alpha .\beta \times \gamma )(\delta .\epsilon \times \zeta )}$
 ${\displaystyle =(\alpha .\beta \times \epsilon )(\zeta .\gamma \times \delta )-(\alpha .\beta \times \zeta )(\epsilon .\gamma \times \delta )}$
 ${\displaystyle =(\gamma .\delta \times \alpha )(\beta .\epsilon \times \zeta )-(\gamma .\delta \times \beta )(\alpha .\epsilon \times \zeta ).}$

 34. ${\displaystyle [\alpha \times \beta ].[\beta \times \gamma ]\times [\gamma \times \alpha ]=(\alpha .\beta \times \gamma )^{2}.}$
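Two of the identities listed above can be spot-checked numerically: the direct product of two skew products, ${\displaystyle [\alpha \times \beta ].[\gamma \times \delta ]=(\alpha .\gamma )(\beta .\delta )-(\alpha .\delta )(\beta .\gamma )}$, and the product ${\displaystyle [\alpha \times \beta ].[\beta \times \gamma ]\times [\gamma \times \alpha ]=(\alpha .\beta \times \gamma )^{2}}$. The Python sketch below is a modern illustration with arbitrarily chosen integer components.

```python
# Spot-checks (with arbitrary integer components) of two identities from
# the list above: the direct product of two skew products, and the triple
# product of the three pairwise skew products.
def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

def cross(a, b):
    return (a[1]*b[2] - a[2]*b[1],
            a[2]*b[0] - a[0]*b[2],
            a[0]*b[1] - a[1]*b[0])

alpha, beta, gamma, delta = (1, 0, 2), (0, 3, 1), (2, 1, 0), (1, 1, 1)

# [alpha x beta].[gamma x delta] = (alpha.gamma)(beta.delta) - (alpha.delta)(beta.gamma)
assert dot(cross(alpha, beta), cross(gamma, delta)) == \
    dot(alpha, gamma) * dot(beta, delta) - dot(alpha, delta) * dot(beta, gamma)

# [alpha x beta].[beta x gamma] x [gamma x alpha] = (alpha.beta x gamma)^2
lhs = dot(cross(alpha, beta), cross(cross(beta, gamma), cross(gamma, alpha)))
assert lhs == dot(alpha, cross(beta, gamma)) ** 2
```

With integer components both sides are computed exactly, so the assertions test the identities without rounding error.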

35. The student will also easily convince himself that a product formed of any number of letters (representing vectors) combined in any possible way by scalar, direct, and skew multiplications may be reduced by the principles of Nos. 24 and 27 to a sum of products, each of which consists of scalar factors of the forms ${\displaystyle \alpha .\beta }$ and ${\displaystyle \alpha .\beta \times \gamma }$, with a single vector factor of the form ${\displaystyle \alpha }$ or ${\displaystyle \alpha \times \beta }$, when the original product is a vector.

36. Elimination of scalars from vector equations.—It has already been observed that the elimination of vectors from equations of the form

 ${\displaystyle a\alpha +b\beta +c\gamma +d\delta +{\text{etc.}}=0}$
is performed by the same rule as the eliminations of ordinary algebra. (See No. 9.) But the elimination of scalars from such equations is at least formally different. Since a single vector equation is the equivalent of three scalar equations, we must be able to deduce from such an equation a scalar equation from which two of the scalars which appear in the original vector equation have been eliminated. We shall see how this may be done, if we consider the scalar equation
 ${\displaystyle a\alpha .\lambda +b\beta .\lambda +c\gamma .\lambda +d\delta .\lambda +{\text{etc.}}=0,}$
which is derived from the above vector equation by direct multiplication by a vector ${\displaystyle \lambda }$. We may regard the original equation as the equivalent of the three scalar equations obtained by substituting for ${\displaystyle \alpha ,\beta ,\gamma ,\delta }$, etc., their ${\displaystyle {\text{X-}},{\text{Y-}}}$, and ${\displaystyle {\text{Z-}}}$components. The second equation would be derived from these by multiplying them respectively by the ${\displaystyle {\text{X-}},{\text{Y-}}}$, and ${\displaystyle {\text{Z-}}}$components of ${\displaystyle \lambda }$ and adding. Hence the second equation may be regarded as the most general form of a scalar equation of the first degree in ${\displaystyle a,b,c,d,}$ etc., which can be derived from the original vector equation or its equivalent three scalar equations. If we wish to have two of the scalars, as ${\displaystyle b}$ and ${\displaystyle c}$, disappear, we have only to choose for ${\displaystyle \lambda }$ a vector perpendicular to ${\displaystyle \beta }$ and ${\displaystyle \gamma }$. Such a vector is ${\displaystyle \beta \times \gamma }$. We thus obtain
 ${\displaystyle a\alpha .\beta \times \gamma +d\delta .\beta \times \gamma +{\text{etc.}}=0.}$
37. Relations of four vectors.—By this method of elimination we may find the values of the coefficients ${\displaystyle a,b}$, and ${\displaystyle c}$ in the equation
 ${\displaystyle \rho =a\alpha +b\beta +c\gamma ,}$ (1)
by which any vector ${\displaystyle \rho }$ is expressed in terms of three others. (See No. 10.) If we multiply directly by ${\displaystyle \beta \times \gamma ,\gamma \times \alpha }$, and ${\displaystyle \alpha \times \beta }$, we obtain
 ${\displaystyle \rho .\beta \times \gamma =a\alpha .\beta \times \gamma ,}$⁠${\displaystyle \rho .\gamma \times \alpha =b\beta .\gamma \times \alpha ,}$⁠${\displaystyle \rho .\alpha \times \beta =c\gamma .\alpha \times \beta ;}$ (2)
whence
 ${\displaystyle a={\frac {\rho .\beta \times \gamma }{\alpha .\beta \times \gamma }},}$⁠${\displaystyle b={\frac {\rho .\gamma \times \alpha }{\alpha .\beta \times \gamma }},}$⁠${\displaystyle c={\frac {\rho .\alpha \times \beta }{\alpha .\beta \times \gamma }}\cdot }$ (3)
By substitution of these values, we obtain the identical equation,
 ${\displaystyle (\alpha .\beta \times \gamma )\rho =(\rho .\beta \times \gamma )\alpha +(\rho .\gamma \times \alpha )\beta +(\rho .\alpha \times \beta )\gamma .}$ (4)
(Compare No. 31.) If we wish the four vectors to appear symmetrically in the equation we may write
 ${\displaystyle (\alpha .\beta \times \gamma )\rho -(\beta .\gamma \times \rho )\alpha +(\gamma .\rho \times \alpha )\beta -(\rho .\alpha \times \beta )\gamma =0.}$ (5)
If we wish to express ${\displaystyle \rho }$ as a sum of vectors having directions perpendicular to the planes of ${\displaystyle \alpha }$ and ${\displaystyle \beta ,}$ of ${\displaystyle \beta }$ and ${\displaystyle \gamma }$, and of ${\displaystyle \gamma }$ and ${\displaystyle \alpha }$, we may write
 ${\displaystyle \rho =e\beta \times \gamma +f\gamma \times \alpha +g\alpha \times \beta .}$ (6)
To obtain the values of ${\displaystyle e,f,g}$, we multiply directly by ${\displaystyle \alpha }$, by ${\displaystyle \beta }$, and by ${\displaystyle \gamma }$. This gives
 ${\displaystyle e={\frac {\rho .\alpha }{\beta .\gamma \times \alpha }},}$⁠${\displaystyle f={\frac {\rho .\beta }{\gamma .\alpha \times \beta }},}$⁠${\displaystyle g={\frac {\rho .\gamma }{\alpha .\beta \times \gamma }}\cdot }$ (7)
Substituting these values we obtain the identical equation
 ${\displaystyle (\alpha .\beta \times \gamma )\rho =(\rho .\alpha )\beta \times \gamma +(\rho .\beta )\gamma \times \alpha +(\rho .\gamma )\alpha \times \beta .}$ (8)
(Compare No. 32.)
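Equations (3) give the coefficients of ${\displaystyle \rho }$ as ratios of scalar triple products, and substituting them must reproduce ${\displaystyle \rho }$. The Python sketch below is a modern illustration; the sample components are invented, chosen so that ${\displaystyle \alpha ,\beta ,\gamma }$ are not parallel to the same plane.

```python
# A computational sketch of equations (3) of No. 37: the coefficients of
# rho in terms of alpha, beta, gamma are ratios of scalar triple products.
def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

def cross(a, b):
    return (a[1]*b[2] - a[2]*b[1],
            a[2]*b[0] - a[0]*b[2],
            a[0]*b[1] - a[1]*b[0])

def triple(a, b, c):
    return dot(a, cross(b, c))

alpha, beta, gamma = (1, 1, 0), (0, 1, 1), (1, 0, 1)  # not coplanar
rho = (1, 2, 3)

p = triple(alpha, beta, gamma)
a = triple(rho, beta, gamma) / p
b = triple(rho, gamma, alpha) / p
c = triple(rho, alpha, beta) / p

recombined = tuple(a*x + b*y + c*z for x, y, z in zip(alpha, beta, gamma))
assert recombined == rho   # a*alpha + b*beta + c*gamma reproduces rho
print(a, b, c)  # 0.0 2.0 1.0
```

The division by ${\displaystyle \alpha .\beta \times \gamma }$ fails only when the three reference vectors are parallel to the same plane, in which case no such expression for ${\displaystyle \rho }$ exists.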

38. Reciprocal systems of vectors.— The results of the preceding section may be more compactly expressed if we use the abbreviations

 ${\displaystyle \alpha '={\frac {\beta \times \gamma }{\alpha .\beta \times \gamma }},}$⁠${\displaystyle \beta '={\frac {\gamma \times \alpha }{\beta .\gamma \times \alpha }},}$⁠ ${\displaystyle \gamma '={\frac {\alpha \times \beta }{\gamma .\alpha \times \beta }}\cdot }$ (1)
The identical equations (4) and (8) of the preceding number thus become
 ${\displaystyle \rho =(\rho .\alpha ')\alpha +(\rho .\beta ')\beta +(\rho .\gamma ')\gamma ,}$ (2)
 ${\displaystyle \rho =(\rho .\alpha )\alpha '+(\rho .\beta )\beta '+(\rho .\gamma )\gamma '.}$ (3)
We may infer from the similarity of these equations that the relations of ${\displaystyle \alpha ,\beta ,\gamma }$, and ${\displaystyle \alpha ',\beta ',\gamma '}$ are reciprocal, a proposition which is easily proved directly. For the equations
 ${\displaystyle \alpha ={\frac {\beta '\times \gamma '}{\alpha '.\beta '\times \gamma '}},}$⁠${\displaystyle \beta ={\frac {\gamma '\times \alpha '}{\beta '.\gamma '\times \alpha '}},}$⁠${\displaystyle \gamma ={\frac {\alpha '\times \beta '}{\gamma '.\alpha '\times \beta '}}}$ (4)
are satisfied identically by the substitution of the values of ${\displaystyle \alpha ',\beta '}$, and ${\displaystyle \gamma '}$ given in equations (1). (See Nos. 31 and 34.)

Def.—It will be convenient to use the term reciprocal to designate these relations, i.e., we shall say that three vectors are reciprocals of three others, when they satisfy relations similar to those expressed in equations (1) or (4).

With this understanding we may say:—

The coefficients by which any vector is expressed in terms of three other vectors are the direct products of that vector with the reciprocals of the three.

Among other relations which are satisfied by reciprocal systems of vectors are the following:

 ${\displaystyle \alpha .\alpha '=\beta .\beta '=\gamma .\gamma '=1,}$
 ${\displaystyle \alpha .\beta '=0,}$⁠${\displaystyle \alpha .\gamma '=0,}$⁠${\displaystyle \beta .\alpha '=0,}$⁠${\displaystyle \beta .\gamma '=0,}$⁠${\displaystyle \gamma .\alpha '=0,}$⁠${\displaystyle \gamma .\beta '=0.}$ (5)
These nine equations may be regarded as defining the relations between ${\displaystyle \alpha ,\beta ,\gamma }$, and ${\displaystyle \alpha ',\beta ',\gamma '}$ as reciprocals.
 ${\displaystyle (\alpha .\beta \times \gamma )(\alpha '.\beta '\times \gamma ')=1.}$ (6)
(See No. 34.)
 ${\displaystyle \alpha \times \alpha '+\beta \times \beta '+\gamma \times \gamma '=0.}$ (7)
(See No. 29.)

A system of three mutually perpendicular unit vectors is reciprocal to itself, and only such a system. The identical equation

 ${\displaystyle \rho =(\rho .i)i+(\rho .j)j+(\rho .k)k}$ (8)
may be regarded as a particular case of equation (2).

The system reciprocal to ${\displaystyle \alpha \times \beta ,\beta \times \gamma ,\gamma \times \alpha }$ is

 ${\displaystyle \alpha '\times \beta ',}$⁠${\displaystyle \beta '\times \gamma ',}$⁠${\displaystyle \gamma '\times \alpha ',}$
or
 ${\displaystyle {\frac {\alpha }{\alpha .\beta \times \gamma }},}$⁠${\displaystyle {\frac {\beta }{\alpha .\beta \times \gamma }},}$⁠${\displaystyle {\frac {\gamma }{\alpha .\beta \times \gamma }}\cdot }$
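These relations admit a direct numerical check. The following sketch is a modern addition, not part of the original text; it assumes the NumPy library, constructs the reciprocal system of an arbitrary random triad by equations (1), and verifies equations (5), (6), and (7):

```python
import numpy as np

rng = np.random.default_rng(0)
a, b, c = rng.standard_normal((3, 3))   # an arbitrary non-coplanar triad alpha, beta, gamma

vol = a @ np.cross(b, c)                # the triple product alpha.beta x gamma
ar = np.cross(b, c) / vol               # alpha' = beta x gamma / (alpha.beta x gamma)
br = np.cross(c, a) / vol               # beta'
cr = np.cross(a, b) / vol               # gamma'

# equations (5): the nine direct products
assert np.isclose(a @ ar, 1) and np.isclose(b @ br, 1) and np.isclose(c @ cr, 1)
for u, v in [(a, br), (a, cr), (b, ar), (b, cr), (c, ar), (c, br)]:
    assert np.isclose(u @ v, 0)

# equation (6): the product of the two triple products is unity
assert np.isclose(vol * (ar @ np.cross(br, cr)), 1)

# equation (7): alpha x alpha' + beta x beta' + gamma x gamma' vanishes
assert np.allclose(np.cross(a, ar) + np.cross(b, br) + np.cross(c, cr), 0)
```

Any non-coplanar triad will serve; a different random seed gives the same result.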
38a. If we multiply the identical equation (8) of No. 37 by ${\displaystyle \sigma \times \tau }$, we obtain the equation
 ${\displaystyle (\alpha .\beta \times \gamma )(\rho .\sigma \times \tau )=\alpha .\rho (\beta .\sigma \,\gamma .\tau -\beta .\tau \,\gamma .\sigma )+\beta .\rho (\gamma .\sigma \,\alpha .\tau -\gamma .\tau \,\alpha .\sigma )+\gamma .\rho (\alpha .\sigma \,\beta .\tau -\alpha .\tau \,\beta .\sigma ),}$
which is therefore identical. But this equation cannot subsist identically, unless
 ${\displaystyle (\alpha .\beta \times \gamma )\sigma \times \tau =\alpha (\beta .\sigma \,\gamma .\tau -\beta .\tau \,\gamma .\sigma )+\beta (\gamma .\sigma \,\alpha .\tau -\gamma .\tau \,\alpha .\sigma )+\gamma (\alpha .\sigma \,\beta .\tau -\alpha .\tau \,\beta .\sigma )}$
is also an identical equation. (The reader will observe that in each of these equations the second member may be expressed as a determinant.)

From these transformations, with those already given, it follows that a product formed of any number of letters (representing vectors and scalars), combined in any possible way by scalar, direct, and skew multiplications, may be reduced to a sum of products, containing each the sign ${\displaystyle \times }$ once and only once, when the original product contains it an odd number of times, or entirely free from the sign, when the original product contains it an even number of times.

39. Scalar equations of the first degree with respect to an unknown vector.—It is easily shown that any scalar equation of the first degree with respect to an unknown vector ${\displaystyle \rho }$, in which all the other quantities are known, may be reduced to the form

 ${\displaystyle \rho .\alpha =a,}$
in which ${\displaystyle \alpha }$ and ${\displaystyle a}$ are known. (See No. 35.) Three such equations will afford the value of ${\displaystyle \rho }$ (by equation (8) of No. 37, or equation (3) of No. 38), which may be used to eliminate ${\displaystyle \rho }$ from any other equation, either scalar or vector.

When we have four scalar equations of the first degree with respect to ${\displaystyle \rho }$, the elimination may be performed most symmetrically by substituting the values of ${\displaystyle \rho .\alpha }$, etc., in the equation

 ${\displaystyle (\rho .\alpha )(\beta .\gamma \times \delta )-(\rho .\beta )(\gamma .\delta \times \alpha )+(\rho .\gamma )(\delta .\alpha \times \beta )-(\rho .\delta )(\alpha .\beta \times \gamma )=0,}$
which is obtained from equation (8) of No. 37 by multiplying directly by ${\displaystyle \delta }$. It may also be obtained from equation (5) of No. 37 by writing ${\displaystyle \delta }$ for ${\displaystyle \rho }$, and then multiplying directly by ${\displaystyle \rho }$.
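The four-term identity used in this elimination may itself be verified numerically. The sketch below is a modern addition assuming NumPy; the five vectors are arbitrary:

```python
import numpy as np

rng = np.random.default_rng(1)
rho, a, b, c, d = rng.standard_normal((5, 3))   # arbitrary rho, alpha, beta, gamma, delta

def triple(u, v, w):
    """The scalar triple product u.v x w."""
    return u @ np.cross(v, w)

# (rho.alpha)(beta.gamma x delta) - (rho.beta)(gamma.delta x alpha)
#   + (rho.gamma)(delta.alpha x beta) - (rho.delta)(alpha.beta x gamma) = 0
lhs = ((rho @ a) * triple(b, c, d) - (rho @ b) * triple(c, d, a)
       + (rho @ c) * triple(d, a, b) - (rho @ d) * triple(a, b, c))
assert np.isclose(lhs, 0)   # the identity holds for every choice of vectors
```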

40. Solution of a vector equation of the first degree with respect to the unknown vector.—It is now easy to solve an equation of the form

 ${\displaystyle \delta =\alpha (\lambda .\rho )+\beta (\mu .\rho )+\gamma (\nu .\rho ),}$ (1)
where ${\displaystyle \alpha ,\beta ,\gamma ,\delta ,\lambda ,\mu }$, and ${\displaystyle \nu }$ represent known vectors. Multiplying directly by ${\displaystyle \beta \times \gamma }$, by ${\displaystyle \gamma \times \alpha }$, and by ${\displaystyle \alpha \times \beta }$, we obtain
 ${\displaystyle \beta .\gamma \times \delta =(\beta .\gamma \times \alpha )(\lambda .\rho ),}$⁠${\displaystyle \gamma .\alpha \times \delta =(\gamma .\alpha \times \beta )(\mu .\rho ),}$
 ${\displaystyle \alpha .\beta \times \delta =(\alpha .\beta \times \gamma )(\nu .\rho );}$
or
${\displaystyle \alpha '.\delta =\lambda .\rho ,}$${\displaystyle \beta '.\delta =\mu .\rho ,}$${\displaystyle \gamma '.\delta =\nu .\rho ,}$
where ${\displaystyle \alpha ',\beta ',\gamma '}$ are the reciprocals of ${\displaystyle \alpha ,\beta ,\gamma }$. Substituting these values in the identical equation
 ${\displaystyle \rho =\lambda '(\lambda .\rho )+\mu '(\mu .\rho )+\nu '(\nu .\rho ),}$
in which ${\displaystyle \lambda ',\mu ',\nu '}$ are the reciprocals of ${\displaystyle \lambda ,\mu ,\nu }$ (see No. 38), we have
 ${\displaystyle \rho =\lambda '(\alpha '.\delta )+\mu '(\beta '.\delta )+\nu '(\gamma '.\delta ),}$ (2)
which is the solution required.

It results from the principle stated in No. 35, that any vector equation of the first degree with respect to ${\displaystyle \rho }$ may be reduced to the form

 ${\displaystyle \delta =\alpha (\lambda .\rho )+\beta (\mu .\rho )+\gamma (\nu .\rho )+a\rho +\epsilon \times \rho .}$
But
 ${\displaystyle a\rho =a\lambda '(\lambda .\rho )+a\mu '(\mu .\rho )+a\nu '(\nu .\rho ),}$
and
 ${\displaystyle \epsilon \times \rho =\epsilon \times \lambda '(\lambda .\rho )+\epsilon \times \mu '(\mu .\rho )+\epsilon \times \nu '(\nu .\rho ),}$

where ${\displaystyle \lambda ',\mu ',\nu '}$ represent, as before, the reciprocals of ${\displaystyle \lambda ,\mu ,\nu }$. By substitution of these values the equation is reduced to the form of equation (1), which may therefore be regarded as the most general form of a vector equation of the first degree with respect to ${\displaystyle \rho }$.
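Formula (2) lends itself to a numerical trial: choose ${\displaystyle \rho }$, form ${\displaystyle \delta }$ by equation (1), and recover ${\displaystyle \rho }$. The sketch below is a modern addition assuming NumPy; the function name `reciprocals` is illustrative only:

```python
import numpy as np

def reciprocals(u, v, w):
    """Return the system reciprocal to u, v, w (equations (1) of the text)."""
    vol = u @ np.cross(v, w)
    return np.cross(v, w) / vol, np.cross(w, u) / vol, np.cross(u, v) / vol

rng = np.random.default_rng(2)
a, b, c, lam, mu, nu, rho = rng.standard_normal((7, 3))

# equation (1): delta = alpha(lambda.rho) + beta(mu.rho) + gamma(nu.rho)
delta = a * (lam @ rho) + b * (mu @ rho) + c * (nu @ rho)

ar, br, cr = reciprocals(a, b, c)
lr, mr, nr = reciprocals(lam, mu, nu)

# equation (2): rho = lambda'(alpha'.delta) + mu'(beta'.delta) + nu'(gamma'.delta)
rho_solved = lr * (ar @ delta) + mr * (br @ delta) + nr * (cr @ delta)
assert np.allclose(rho_solved, rho)
```

The two triads need only be non-coplanar, which random Gaussian vectors are with probability one.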

41. Relations between two normal systems of unit vectors.—If ${\displaystyle i,j,k}$, and ${\displaystyle i',j',k'}$ are two normal systems of unit vectors, we have

 ${\displaystyle i'=(i.i')i+(j.i')j+(k.i')k,}$
 ${\displaystyle j'=(i.j')i+(j.j')j+(k.j')k,}$ (1)
 ${\displaystyle k'=(i.k')i+(j.k')j+(k.k')k,}$
and
 ${\displaystyle i=(i.i')i'+(i.j')j'+(i.k')k',}$
 ${\displaystyle j=(j.i')i'+(j.j')j'+(j.k')k',}$ (2)
 ${\displaystyle k=(k.i')i'+(k.j')j'+(k.k')k'.}$

(See equation (8) of No. 38.)

The nine coefficients in these equations are evidently the cosines of the nine angles made by a vector of one system with a vector of the other system. The principal relations of these cosines are easily deduced. By direct multiplication of each of the preceding equations with itself, we obtain six equations of the type

 ${\displaystyle (i.i')^{2}+(j.i')^{2}+(k.i')^{2}=1.}$ (3)
By direct multiplication of equations (1) with each other, and of equations (2) with each other, we obtain six of the type
 ${\displaystyle (i.i')(i.j')+(j.i')(j.j')+(k.i')(k.j')=0.}$ (4)
By skew multiplication of equations (1) with each other, we obtain three of the type
 {\displaystyle {\begin{aligned}k'=\{(j.i')(k.j')-(k.i')(j.j')\}i+\{(k.i')(i.j')&-(i.i')(k.j')\}j\\&+\{(i.i')(j.j')-(j.i')(i.j')\}k.\end{aligned}}}
Comparing these three equations with the original three, we obtain nine of the type
 ${\displaystyle i.k'=(j.i')(k.j')-(k.i')(j.j').}$ (5)
Finally, if we equate the scalar product of the three right hand members of (1) with that of the three left hand members, we obtain
 {\displaystyle {\begin{aligned}(i.i')(j&.j')(k.k')+(i.j')(j.k')(k.i')+(i.k')(j.i')(k.j')\\&-(k.i')(j.j')(i.k')-(k.j')(j.k')(i.i')-(k.k')(j.i')(i.j')=1.\end{aligned}}} (6)
Equations (1) and (2) (if the expressions in the parentheses are supposed replaced by numerical values) represent the linear relations which subsist between one vector of one system and the three vectors of the other system. If we desire to express the similar relations which subsist between two vectors of one system and two of the other, we may take the skew products of equations (1) with equations (2), after transposing all terms in the latter. This will afford nine equations of the type
 ${\displaystyle (i.j')k'-(i.k')j'=(k.i')j-(j.i')k.}$ (7)
We may divide an equation by an indeterminate direct factor. [MS. note by author.]
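In modern terms the nine cosines form an orthogonal matrix of determinant one, and the relations of this article may be tried on a random rotation. The sketch below is a modern addition assuming NumPy; the construction via a QR factorization is incidental:

```python
import numpy as np

rng = np.random.default_rng(3)
Q, _ = np.linalg.qr(rng.standard_normal((3, 3)))
if np.linalg.det(Q) < 0:
    Q[:, 0] = -Q[:, 0]          # force a right-handed (proper) rotation

# Q[m, n] is the table of cosines: rows indexed by i, j, k; columns by i', j', k'
assert np.allclose((Q ** 2).sum(axis=0), 1)        # type (3): unit magnitudes
assert np.isclose(Q[:, 0] @ Q[:, 1], 0)            # type (4): perpendicularity
# type (5): i.k' = (j.i')(k.j') - (k.i')(j.j'), a cofactor of the table
assert np.isclose(Q[0, 2], Q[1, 0] * Q[2, 1] - Q[2, 0] * Q[1, 1])
assert np.isclose(np.linalg.det(Q), 1)             # equation (6)
```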

CHAPTER II.

concerning the differential and integral calculus of vectors.

42. Differentials of vectors.—The differential of a vector is the geometrical difference of two values of that vector which differ infinitely little. It is itself a vector, and may make any angle with the vector differentiated. It is expressed by the same sign (${\displaystyle d}$) as the differentials of ordinary analysis.

With reference to any fixed axes, the components of the differential of a vector are manifestly equal to the differentials of the components of the vector, i.e., if ${\displaystyle \alpha ,\beta }$, and ${\displaystyle \gamma }$ are fixed unit vectors, and

 ${\displaystyle \rho =x\alpha +y\beta +z\gamma ,}$
 ${\displaystyle d\rho =dx\,\alpha +dy\,\beta +dz\,\gamma .}$
43. Differential of a function of several variables.—The differential of a vector or scalar function of any number of vector or scalar variables is evidently the sum (geometrical or algebraic, according as the function is vector or scalar) of the differentials of the function due to the separate variation of the several variables.

44. Differential of a product.—The differential of a product of any kind due to the variation of a single factor is obtained by prefixing the sign of differentiation to that factor in the product. This is evidently true of differentials, since it will hold true even of finite differences.

45. From these principles we obtain the following identical equations:

 ${\displaystyle d(\alpha +\beta )=d\alpha +d\beta ,}$ (1)
 ${\displaystyle d(n\alpha )=dn\,\alpha +n\,d\alpha ,}$ (2)
 ${\displaystyle d(\alpha .\beta )=d\alpha .\beta +\alpha .d\beta ,}$ (3)
 ${\displaystyle d[\alpha \times \beta ]=d\alpha \times \beta +\alpha \times d\beta ,}$ (4)
 ${\displaystyle d(\alpha .\beta \times \gamma )=d\alpha .\beta \times \gamma +\alpha .d\beta \times \gamma +\alpha .\beta \times d\gamma ,}$ (5)
 ${\displaystyle d[(\alpha .\beta )\gamma ]=(d\alpha .\beta )\gamma +(\alpha .d\beta )\gamma +(\alpha .\beta )d\gamma .}$ (6)
46. Differential coefficient with respect to a scalar.—The quotient obtained by dividing the differential of a vector due to the variation of any scalar of which it is a function by the differential of that scalar is called the differential coefficient of the vector with respect to the scalar, and is indicated in the same manner as the differential coefficients of ordinary analysis.

If we suppose the quantities occurring in the six equations of the last section to be functions of a scalar ${\displaystyle t}$, we may substitute ${\displaystyle {\frac {d}{dt}}}$ for ${\displaystyle d}$ in those equations since this is only to divide all terms by the scalar ${\displaystyle dt}$.
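Equations (3) and (4), read with ${\displaystyle {\frac {d}{dt}}}$ in place of ${\displaystyle d}$, may be confirmed by finite differences. A modern sketch assuming NumPy; the particular vector functions are arbitrary:

```python
import numpy as np

h = 1e-6                                            # step for central differences
alpha = lambda t: np.array([np.cos(t), t * t, np.sin(2 * t)])
beta = lambda t: np.array([t, np.exp(-t), 1 + t])

def ddt(f, t):
    """Central-difference approximation to df/dt."""
    return (f(t + h) - f(t - h)) / (2 * h)

t0 = 0.7
# equation (3): d(alpha.beta)/dt = (dalpha/dt).beta + alpha.(dbeta/dt)
lhs3 = ddt(lambda t: alpha(t) @ beta(t), t0)
rhs3 = ddt(alpha, t0) @ beta(t0) + alpha(t0) @ ddt(beta, t0)
assert np.isclose(lhs3, rhs3, atol=1e-6)

# equation (4): d[alpha x beta]/dt = (dalpha/dt) x beta + alpha x (dbeta/dt)
lhs4 = ddt(lambda t: np.cross(alpha(t), beta(t)), t0)
rhs4 = np.cross(ddt(alpha, t0), beta(t0)) + np.cross(alpha(t0), ddt(beta, t0))
assert np.allclose(lhs4, rhs4, atol=1e-6)
```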

47. Successive differentiations.—The differential coefficient of a vector with respect to a scalar is of course a finite vector, of which we may take the differential, or the differential coefficient with respect to the same or any other scalar. We thus obtain differential coefficients of the higher orders, which are indicated as in the scalar calculus.

A few examples will serve for illustration.

If ${\displaystyle \rho }$ is the vector drawn from a fixed origin to a moving point at any time ${\displaystyle t,{\frac {d\rho }{dt}}}$ will be the vector representing the velocity of the point, and ${\displaystyle {\frac {d^{2}\rho }{dt^{2}}}}$ the vector representing its acceleration.

If ${\displaystyle \rho }$ is the vector drawn from a fixed origin to any point on a curve, and ${\displaystyle s}$ the distance of that point measured on the curve from any fixed point, ${\displaystyle {\frac {d\rho }{ds}}}$ is a unit vector, tangent to the curve and having the direction in which ${\displaystyle s}$ increases; ${\displaystyle {\frac {d^{2}\rho }{ds^{2}}}}$ is a vector directed from a point on the curve to the center of curvature, and equal to the curvature; ${\displaystyle {\frac {d\rho }{ds}}\times {\frac {d^{2}\rho }{ds^{2}}}}$ is the normal to the osculating plane, directed to the side on which the curve appears described counter-clockwise about the center of curvature, and equal to the curvature. The tortuosity (or rate of rotation of the osculating plane, considered as positive when the rotation appears counter-clockwise as seen from the direction in which ${\displaystyle s}$ increases) is represented by

 ${\displaystyle {\frac {{\frac {d\rho }{ds}}\cdot {\frac {d^{2}\rho }{ds^{2}}}\times {\frac {d^{3}\rho }{ds^{3}}}}{{\frac {d^{2}\rho }{ds^{2}}}\cdot {\frac {d^{2}\rho }{ds^{2}}}}}\cdot }$
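These formulæ may be tried on the circular helix, for which the curvature is ${\displaystyle R/(R^{2}+p^{2})}$ and the tortuosity ${\displaystyle p/(R^{2}+p^{2})}$. A modern numerical sketch assuming NumPy, with the derivatives with respect to arc length taken by central differences:

```python
import numpy as np

R, P = 2.0, 0.5                        # radius and pitch of the helix
c = np.sqrt(R * R + P * P)

def rho(s):                            # position as a function of arc length s
    t = s / c
    return np.array([R * np.cos(t), R * np.sin(t), P * t])

h, s0 = 1e-3, 0.7                      # central finite differences at s = s0
d1 = (rho(s0 + h) - rho(s0 - h)) / (2 * h)
d2 = (rho(s0 + h) - 2 * rho(s0) + rho(s0 - h)) / h**2
d3 = (rho(s0 + 2*h) - 2*rho(s0 + h) + 2*rho(s0 - h) - rho(s0 - 2*h)) / (2 * h**3)

assert np.isclose(np.linalg.norm(d1), 1, atol=1e-5)           # unit tangent
assert np.isclose(np.linalg.norm(d2), R / c**2, atol=1e-5)    # curvature
tortuosity = d1 @ np.cross(d2, d3) / (d2 @ d2)                # the formula above
assert np.isclose(tortuosity, P / c**2, atol=1e-5)
```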

48. Integration of an equation between differentials.—If ${\displaystyle t}$ and ${\displaystyle u}$ are two single-valued continuous scalar functions of any number of scalar or vector variables, and

 ${\displaystyle dt=du,}$ then ${\displaystyle t=u+a,}$
where ${\displaystyle a}$ is a scalar constant.

Or, if ${\displaystyle \tau }$ and ${\displaystyle \omega }$ are two single-valued continuous vector functions of any number of scalar or vector variables, and

 ${\displaystyle d\tau =d\omega ,}$ then ${\displaystyle \tau =\omega +\alpha ,}$
where ${\displaystyle \alpha }$ is a vector constant.

When the above hypotheses are not satisfied in general, but will be satisfied if the variations of the independent variables are confined within certain limits, then the conclusions will hold within those limits, provided that we can pass by continuous variation of the independent variables from any values within the limits to any other values within them, without transgressing the limits.

49. So far, it will be observed, all operations have been entirely analogous to those of the ordinary calculus.

Functions of Position in Space.

50. Def.—If ${\displaystyle u}$ is any scalar function of position in space (i.e., any scalar quantity having continuously varying values in space), ${\displaystyle \nabla u}$ is the vector function of position in space which has everywhere the direction of the most rapid increase of ${\displaystyle u}$, and a magnitude equal to the rate of that increase per unit of length. ${\displaystyle \nabla u}$ may be called the derivative of ${\displaystyle u}$, and ${\displaystyle u}$, the primitive of ${\displaystyle \nabla u.}$

We may also take any one of the Nos. 51, 52, 53 for the definition of ${\displaystyle \nabla u.}$

51. If ${\displaystyle \rho }$ is the vector defining the position of a point in space,

 ${\displaystyle du=\nabla u.d\rho .}$
52.
${\displaystyle \nabla u=i{\frac {du}{dx}}+j{\frac {du}{dy}}+k{\frac {du}{dz}}\cdot }$
53.
${\displaystyle {\frac {du}{dx}}=i.\nabla u,}$${\displaystyle {\frac {du}{dy}}=j.\nabla u,}$${\displaystyle {\frac {du}{dz}}=k.\nabla u.}$

54. Def.—If ${\displaystyle \omega }$ is a vector having continuously varying values in space,

 ${\displaystyle \nabla .\,\omega =i.{\frac {d\omega }{dx}}+j.{\frac {d\omega }{dy}}+k.{\frac {d\omega }{dz}},}$ (1)
and
 ${\displaystyle \nabla \times \omega =i\times {\frac {d\omega }{dx}}+j\times {\frac {d\omega }{dy}}+k\times {\frac {d\omega }{dz}}.}$ (2)

${\displaystyle \nabla .\,\omega }$ is called the divergence of ${\displaystyle \omega }$ and ${\displaystyle \nabla \times \omega }$ its curl.

If we set

 ${\displaystyle \omega ={\text{X}}i+{\text{Y}}j+{\text{Z}}k,}$
we obtain by substitution the equations
 ${\displaystyle \nabla .\,\omega ={\frac {d{\text{X}}}{dx}}+{\frac {d{\text{Y}}}{dy}}+{\frac {d{\text{Z}}}{dz}}}$
and
${\displaystyle \nabla \times \omega =i\left({\frac {d{\text{Z}}}{dy}}-{\frac {d{\text{Y}}}{dz}}\right)+j\left({\frac {d{\text{X}}}{dz}}-{\frac {d{\text{Z}}}{dx}}\right)+k\left({\frac {d{\text{Y}}}{dx}}-{\frac {d{\text{X}}}{dy}}\right),}$
which may also be regarded as defining ${\displaystyle \nabla .\,\omega }$ and ${\displaystyle \nabla \times \omega .}$
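The two definitions agree, as a finite-difference computation shows. The sketch below is a modern addition assuming NumPy; the field ${\displaystyle \omega =x^{2}y\,i+y^{2}z\,j+z^{2}x\,k}$ is an arbitrary choice:

```python
import numpy as np

def omega(p):
    x, y, z = p
    return np.array([x*x*y, y*y*z, z*z*x])     # components X, Y, Z

p0 = np.array([0.3, -0.7, 1.1])
h, I = 1e-5, np.eye(3)
# d omega/dx, d omega/dy, d omega/dz by central differences
D = [(omega(p0 + h*I[n]) - omega(p0 - h*I[n])) / (2*h) for n in range(3)]

div = sum(D[n][n] for n in range(3))            # i.dw/dx + j.dw/dy + k.dw/dz
curl = sum(np.cross(I[n], D[n]) for n in range(3))

x, y, z = p0
assert np.isclose(div, 2*x*y + 2*y*z + 2*z*x, atol=1e-6)      # dX/dx + dY/dy + dZ/dz
assert np.allclose(curl, [-y*y, -z*z, -x*x], atol=1e-6)       # the component formula
```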

55. Surface integrals.—The integral ${\displaystyle \iint \omega .d\sigma ,}$ in which ${\displaystyle d\sigma }$ represents an element of some surface, is called the surface-integral of ${\displaystyle \omega }$ for that surface. It is understood here and elsewhere, when a vector is said to represent a plane surface (or an element of surface which may be regarded as plane), that the magnitude of the vector represents the area of the surface, and that the direction of the vector represents that of the normal drawn toward the positive side of the surface. When the surface is defined as the boundary of a certain space, the outside of the surface is regarded as positive.

The surface-integral of any given space (i.e., the surface-integral of the surface bounding that space) is evidently equal to the sum of the surface-integrals of all the parts into which the original space may be divided. For the integrals relating to the surfaces dividing the parts will evidently cancel in such a sum.

The surface-integral of ${\displaystyle \omega }$ for a closed surface bounding a space ${\displaystyle dv}$ infinitely small in all its dimensions is

 ${\displaystyle \nabla .\,\omega \,dv.}$
This follows immediately from the definition of ${\displaystyle \nabla .\,\omega ,}$ when the space is a parallelopiped bounded by planes perpendicular to ${\displaystyle i,j,k.}$ In other cases, we may imagine the space—or rather a space nearly coincident with the given space and of the same volume ${\displaystyle dv}$—to be divided up into such parallelopipeds. The surface-integral for the space made up of the parallelopipeds will be the sum of the surface-integrals of all the parallelopipeds, and will therefore be expressed by ${\displaystyle \nabla .\,\omega \,dv.}$ The surface-integral of the original space will have sensibly the same value, and will therefore be represented by the same formula. It follows that the value of ${\displaystyle \nabla .\,\omega }$ does not depend upon the system of unit vectors employed in its definition.

It is possible to attribute such a physical signification to the quantities concerned in the above proposition, as shall make it evident almost without demonstration. Let us suppose ${\displaystyle \omega }$ to represent a flux of any substance. The rate of decrease of the density of that substance at any point will be obtained by dividing the surface-integral of the flux for any infinitely small closed surface about the point by the volume enclosed. This quotient must therefore be independent of the form of the surface. We may define ${\displaystyle \nabla .\,\omega }$ as representing that quotient, and then obtain equation (1) of No. 54 by applying the general principle to the case of the rectangular parallelopiped.

56. Skew surface-integrals.—The integral ${\displaystyle \iint d\sigma \times \omega }$ may be called the skew surface-integral of ${\displaystyle \omega .}$ It is evidently a vector. For a closed surface bounding a space ${\displaystyle dv}$ infinitely small in all dimensions this integral reduces to ${\displaystyle \nabla \times \omega dv,}$ as is easily shown by reasoning like that of No. 55.

57. Integration.—If ${\displaystyle dv}$ represents an element of any space, and ${\displaystyle d\sigma }$ an element of the bounding surface,

 ${\displaystyle \iiint \nabla .\omega dv=\iint \omega .d\sigma .}$
For the first member of this equation represents the sum of the surface-integrals of all the elements of the given space. We may regard this principle as affording a means of integration, since we may use it to reduce a triple integral (of a certain form) to a double integral.

The principle may also be expressed as follows:

The surface-integral of any vector function of position in space for a closed surface is equal to the volume-integral of the divergence of that function for the space enclosed.
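This theorem may be verified numerically for the unit cube with the field ${\displaystyle \omega =x^{2}y\,i+y^{2}z\,j+z^{2}x\,k}$, whose divergence is ${\displaystyle 2xy+2yz+2zx}$. The sketch below is a modern addition assuming NumPy; both members come out 3/2 by midpoint quadrature:

```python
import numpy as np

def omega(x, y, z):
    return np.array([x*x*y, y*y*z, z*z*x])

N = 100
t = (np.arange(N) + 0.5) / N                       # midpoint grid on [0, 1]
X, Y, Z = np.meshgrid(t, t, t, indexing="ij")
volume_integral = (2*X*Y + 2*Y*Z + 2*Z*X).sum() / N**3   # integral of div omega

U, V = np.meshgrid(t, t, indexing="ij")
flux = 0.0                                         # outward flux through the six faces
flux += (omega(1.0, U, V)[0] - omega(0.0, U, V)[0]).sum() / N**2
flux += (omega(U, 1.0, V)[1] - omega(U, 0.0, V)[1]).sum() / N**2
flux += (omega(U, V, 1.0)[2] - omega(U, V, 0.0)[2]).sum() / N**2

assert np.isclose(volume_integral, flux, atol=1e-6)
assert np.isclose(flux, 1.5, atol=1e-6)
```

The midpoint rule is exact here because the integrands are polynomials of low degree in each variable.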

58. Line-integrals.—The integral ${\displaystyle \int \omega .d\rho ,}$ in which ${\displaystyle d\rho }$ denotes the element of a line, is called the line-integral of ${\displaystyle \omega }$ for that line. It is implied that one of the directions of the line is distinguished as positive. When the line is regarded as bounding a surface, that side of the surface will always be regarded as positive, on which the surface appears to be circumscribed counter-clockwise.

59. Integration.—From No. 51 we obtain directly

 ${\displaystyle \int \nabla u.d\rho =u''-u',}$
where the single and double accents distinguish the values relating to the beginning and end of the line.

In other words,—The line-integral of the derivative of any (continuous and single-valued) scalar function of position in space is equal to the difference of the values of the function at the extremities of the line. For a closed line the integral vanishes.

60. Integration.—The following principle may be used to reduce double integrals of a certain form to simple integrals.

If ${\displaystyle d\sigma }$ represents an element of any surface, and ${\displaystyle d\rho }$ an element of the bounding line,

 ${\displaystyle \iint \nabla \times \omega .d\sigma =\int \omega .d\rho .}$
In other words,—The line-integral of any vector function of position in space for a closed line is equal to the surface-integral of the curl of that function for any surface bounded by the line.

To prove this principle, we will consider the variation of the line-integral which is due to a variation in the closed line for which the integral is taken. We have, in the first place,

 ${\displaystyle \delta \int \omega .d\rho =\int \delta \omega .d\rho +\int \omega .\delta \,d\rho .}$
But
 ${\displaystyle \omega .\delta \,d\rho =d(\omega .\delta \rho )-d\omega .\delta \rho .}$

Therefore, since ${\displaystyle \int d(\omega .\delta \rho )=0}$ for a closed line,

 ${\displaystyle \delta \int \omega .d\rho =\int \delta \omega .d\rho -\int d\omega .\delta \rho .}$
 Now ${\displaystyle \delta \omega =\textstyle \sum \displaystyle \left[{\frac {d\omega }{dx}}\delta x\right]=\textstyle \sum \displaystyle \left[{\frac {d\omega }{dx}}(i.\delta \rho )\right],}$
and ${\displaystyle d\omega =\textstyle \sum \displaystyle \left[{\frac {d\omega }{dx}}dx\right]=\textstyle \sum \displaystyle \left[{\frac {d\omega }{dx}}(i.d\rho )\right],}$

where the summation relates to the coordinate axes and connected quantities. Substituting these values in the preceding equation,

 ${\displaystyle \delta \int \omega .d\rho =\int \textstyle \sum \displaystyle \left((i.\delta \rho )\left({\frac {d\omega }{dx}}.d\rho \right)-(i.d\rho )\left({\frac {d\omega }{dx}}.\delta \rho \right)\right),}$
or by No. 30,
 ${\displaystyle \delta \int \omega .d\rho =\int \textstyle \sum \displaystyle \left[i\times {\frac {d\omega }{dx}}\right].[\delta \rho \times d\rho ]=\int \nabla \times \omega .[\delta \rho \times d\rho ].}$
But ${\displaystyle \delta \rho \times d\rho }$ represents an element of the surface generated by the motion of the element ${\displaystyle d\rho ,}$ and the last member of the equation is the surface-integral of ${\displaystyle \nabla \times \omega }$ for the infinitesimal surface generated by the motion of the whole line. Hence, if we conceive of a closed curve passing gradually from an infinitesimal loop to any finite form, the differential of the line-integral of ${\displaystyle \omega }$ for that curve will be equal to the differential of the surface integral of ${\displaystyle \nabla \times \omega }$ for the surface generated: therefore, since both integrals commence with the value zero, they must always be equal to each other. Such a mode of generation will evidently apply to any surface closing any loop.

61. The line-integral of ${\displaystyle \omega }$ for a closed line bounding a plane surface ${\displaystyle d\sigma }$ infinitely small in all its dimensions is therefore

 ${\displaystyle \nabla \times \omega .d\sigma .}$
This principle affords a definition of ${\displaystyle \nabla \times \omega }$ which is independent of any reference to coordinate axes. If we imagine a circle described about a fixed point to vary its orientation while keeping the same size, there will be a certain position of the circle for which the line-integral of ${\displaystyle \omega }$ will be a maximum, unless the line-integral vanishes for all positions of the circle. The axis of the circle in this position, drawn toward the side on which a positive motion in the circle appears counter-clockwise, gives the direction of ${\displaystyle \nabla \times \omega ,}$ and the quotient of the integral divided by the area of the circle gives the magnitude of ${\displaystyle \nabla \times \omega .}$
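The theorem of No. 60 may be checked numerically on the unit circle. The sketch below is a modern addition assuming NumPy; for the arbitrary field ${\displaystyle \omega =-y\,i+x^{2}j}$ the curl is ${\displaystyle (2x+1)k}$, and both integrals equal ${\displaystyle \pi }$:

```python
import numpy as np

# line-integral of omega = (-y, x^2, 0) around the unit circle in the plane z = 0
M = 4000
t = np.arange(M) * 2 * np.pi / M                      # uniform rule on the period
x, y = np.cos(t), np.sin(t)
integrand = (-y) * (-np.sin(t)) + x * x * np.cos(t)   # omega . (drho/dt)
line_integral = integrand.sum() * 2 * np.pi / M

# surface-integral of the curl, (2x + 1)k, over the unit disk (polar midpoint rule)
r = (np.arange(400) + 0.5) / 400
th = (np.arange(400) + 0.5) * 2 * np.pi / 400
Rr, Th = np.meshgrid(r, th, indexing="ij")
surface_integral = ((2 * Rr * np.cos(Th) + 1) * Rr).sum() * (1 / 400) * (2 * np.pi / 400)

assert np.isclose(line_integral, surface_integral, atol=1e-6)
assert np.isclose(line_integral, np.pi, atol=1e-6)
```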

${\displaystyle \nabla ,\nabla .,}$ and ${\displaystyle \nabla \times }$ applied to Functions of Functions of Position.

62. A constant scalar factor after ${\displaystyle \nabla ,\nabla .,}$ or ${\displaystyle \nabla \times }$ may be placed before the symbol.

63. If ${\displaystyle f(u)}$ denotes any scalar function of ${\displaystyle u}$, and ${\displaystyle f'(u)}$ the derived function,

 ${\displaystyle \nabla f(u)=f'(u)\nabla u.}$
64. If ${\displaystyle u}$ or ${\displaystyle \omega }$ is a function of several scalar or vector variables which are themselves functions of the position of a single point, the value of ${\displaystyle \nabla u}$ or ${\displaystyle \nabla .\omega }$ or ${\displaystyle \nabla \times \omega }$ will be equal to the sum of the values obtained by making successively all but each one of these variables constant.

65. By the use of this principle we easily derive the following identical equations:

 ${\displaystyle \nabla (t+u)=\nabla t+\nabla u.}$ (1)
 ${\displaystyle \nabla .(\tau +\omega )=\nabla .\tau +\nabla .\omega .}$⁠${\displaystyle \nabla \times [\tau +\omega ]=\nabla \times \tau +\nabla \times \omega .}$ (2)
 ${\displaystyle \nabla (tu)=u\nabla t+t\nabla u.}$ (3)
 ${\displaystyle \nabla .(u\omega )=\omega .\nabla u+u\nabla .\omega .}$ (4)
 ${\displaystyle \nabla \times [u\omega ]=u\nabla \times \omega -\omega \times \nabla u.}$ (5)
 ${\displaystyle \nabla .[\tau \times \omega ]=\omega .\nabla \times \tau -\tau .\nabla \times \omega .}$ (6)
The student will observe an analogy between these equations and the formulæ of multiplication. (In the last four equations the analogy appears most distinctly when we regard all the factors but one as constant.) Some of the more curious features of this analogy are due to the fact that the ${\displaystyle \nabla }$ contains implicitly the vectors ${\displaystyle i,j}$ and ${\displaystyle k,}$ which are to be multiplied into the following quantities.

Combinations of the Operators ${\displaystyle \nabla ,\nabla .,}$ and ${\displaystyle \nabla \times .}$

66. If ${\displaystyle u}$ is any scalar function of position in space,

 ${\displaystyle \nabla \times \nabla u=0,}$
as may be derived directly from the definitions of these operators.

67. Conversely, if ${\displaystyle \omega }$ is such a vector function of position in space that

 ${\displaystyle \nabla \times \omega =0,}$
${\displaystyle \omega }$ is the derivative of a scalar function of position in space. This will appear from the following considerations: The line-integral ${\displaystyle \int \omega .d\rho }$ will vanish for any closed line, since it may be expressed as the surface-integral of ${\displaystyle \nabla \times \omega .}$ (No. 60.) The line-integral taken from one given point ${\displaystyle P'}$ to another given point ${\displaystyle P''}$ is independent of the line between the points for which the integral is taken. (For, if two lines joining the same points gave different values, by reversing one we should obtain a closed line for which the integral would not vanish.) If we set ${\displaystyle u}$ equal to this line-integral, supposing ${\displaystyle P''}$ to be variable and ${\displaystyle P'}$ to be constant in position, ${\displaystyle u}$ will be a scalar function of the position of the point ${\displaystyle P'',}$ satisfying the condition ${\displaystyle du=\omega .d\rho ,}$ or, by No. 51, ${\displaystyle \nabla u=\omega .}$ There will evidently be an infinite number of functions satisfying this condition, which will differ from one another by constant quantities.

If the region for which ${\displaystyle \nabla \times \omega =0}$ is unlimited, these functions will be single-valued. If the region is limited, but acyclic,[4] the functions will still be single-valued and satisfy the condition ${\displaystyle \nabla u=\omega }$ within the same region. If the region is cyclic, we may determine functions satisfying the condition ${\displaystyle \nabla u=\omega }$ within the region, but they will not necessarily be single-valued.

68. If ${\displaystyle \omega }$ is any vector function of position in space, ${\displaystyle \nabla .\nabla \times \omega =0.}$ This may be deduced directly from the definitions of No. 54.

The converse of this proposition will be proved hereafter.
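Both statements (Nos. 66 and 68) admit a finite-difference verification. The sketch below is a modern addition assuming NumPy; the sample fields are arbitrary, and nested central differences approximate the repeated operators:

```python
import numpy as np

h, I = 1e-3, np.eye(3)

def grad(f, p):
    return np.array([(f(p + h*I[n]) - f(p - h*I[n])) / (2*h) for n in range(3)])

def curl(F, p):
    return sum(np.cross(I[n], (F(p + h*I[n]) - F(p - h*I[n])) / (2*h)) for n in range(3))

def div(F, p):
    return sum((F(p + h*I[n])[n] - F(p - h*I[n])[n]) / (2*h) for n in range(3))

u = lambda p: np.sin(p[0]) * p[1] * np.exp(p[2])                   # a scalar field
w = lambda p: np.array([p[1] * p[2], np.cos(p[0]), p[0] * p[1] * p[2]])  # a vector field

p0 = np.array([0.4, -1.2, 0.3])
assert np.allclose(curl(lambda q: grad(u, q), p0), 0, atol=1e-4)   # No. 66: curl grad u = 0
assert np.isclose(div(lambda q: curl(w, q), p0), 0, atol=1e-4)     # No. 68: div curl w = 0
```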

69. If ${\displaystyle u}$ is any scalar function of position in space, we have by Nos. 52 and 54

 ${\displaystyle \nabla .\nabla u=\left({\frac {d^{2}}{dx^{2}}}+{\frac {d^{2}}{dy^{2}}}+{\frac {d^{2}}{dz^{2}}}\right)u.}$

70. Def.—If ${\displaystyle \omega }$ is any vector function of position in space, we may define ${\displaystyle \nabla .\nabla \omega }$ by the equation

 ${\displaystyle \nabla .\nabla \omega =\left({\frac {d^{2}}{dx^{2}}}+{\frac {d^{2}}{dy^{2}}}+{\frac {d^{2}}{dz^{2}}}\right)\omega ,}$
the expression ${\displaystyle \nabla .\nabla }$ being regarded, for the present at least, as a single operator when applied to a vector. (It will be remembered that no meaning has been attributed to ${\displaystyle \nabla }$ before a vector.) It should be noticed that if
 ${\displaystyle \omega =i{\text{X}}+j{\text{Y}}+k{\text{Z}},}$${\displaystyle \nabla .\nabla \omega =i\nabla .\nabla {\text{X}}+j\nabla .\nabla {\text{Y}}+k\nabla .\nabla {\text{Z}},}$
that is, the operator ${\displaystyle \nabla .\nabla }$ applied to a vector affects separately its scalar components.

71. From the above definition with those of Nos. 52 and 54 we may easily obtain

 ${\displaystyle \nabla .\nabla \omega =\nabla \nabla .\omega -\nabla \times \nabla \times \omega .}$
The effect of the operator ${\displaystyle \nabla .\nabla }$ is therefore independent of the directions of the axes used in its definition.
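This identity too can be checked by nested central differences. The sketch below is a modern addition assuming NumPy; a polynomial field is used so that the differences are nearly exact:

```python
import numpy as np

h, I = 1e-3, np.eye(3)
w = lambda p: np.array([p[0]**2 * p[1], p[1]**2 * p[2], p[2]**2 * p[0]])

def div(F, p):
    return sum((F(p + h*I[n])[n] - F(p - h*I[n])[n]) / (2*h) for n in range(3))

def curl(F, p):
    return sum(np.cross(I[n], (F(p + h*I[n]) - F(p - h*I[n])) / (2*h)) for n in range(3))

def grad(f, p):
    return np.array([(f(p + h*I[n]) - f(p - h*I[n])) / (2*h) for n in range(3)])

def laplacian(F, p):    # (d2/dx2 + d2/dy2 + d2/dz2) applied to each component
    return sum((F(p + h*I[n]) - 2*F(p) + F(p - h*I[n])) / h**2 for n in range(3))

p0 = np.array([0.5, -0.8, 1.3])
lhs = laplacian(w, p0)                                        # del.del w
rhs = grad(lambda q: div(w, q), p0) - curl(lambda q: curl(w, q), p0)
assert np.allclose(lhs, rhs, atol=1e-4)
```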

72. The expression ${\displaystyle -{\tfrac {1}{6}}a^{2}\nabla .\nabla u}$, where ${\displaystyle a}$ is any infinitesimal scalar, evidently represents the excess of the value of the scalar function ${\displaystyle u}$ at the point considered above the average of its values at six points at the following vector distances: ${\displaystyle ai,-ai,aj,-aj,ak,-ak.}$ Since the directions of ${\displaystyle i,j,}$ and ${\displaystyle k}$ are immaterial (provided that they are at right angles to each other), the excess of the value of ${\displaystyle u}$ at the central point above its average value in a spherical surface of radius a constructed about that point as the center will be represented by the same expression, ${\displaystyle -{\tfrac {1}{6}}a^{2}\nabla .\nabla u.}$

Precisely the same is true of a vector function, if it is understood that the additions and subtractions implied in the terms average and excess are geometrical additions and subtractions.

Maxwell has called ${\displaystyle -\nabla .\nabla u}$ the concentration of ${\displaystyle u,}$ whether ${\displaystyle u}$ is scalar or vector. We may call ${\displaystyle \nabla .\nabla u}$ (or ${\displaystyle \nabla .\nabla \omega }$), which is proportional to the excess of the average value of the function in an infinitesimal spherical surface above the value at the center, the dispersion of ${\displaystyle u}$ (or ${\displaystyle \omega }$).
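For a quadratic function the six-point relation of No. 72 is exact, and the spherical average may be estimated by sampling. The sketch below is a modern addition assuming NumPy; the field and radius are arbitrary, and antithetic pairs ${\displaystyle \pm n}$ cancel the odd terms of the Monte Carlo estimate:

```python
import numpy as np

def u(p):                      # a quadratic scalar field, for which the relation is exact
    x, y, z = p
    return x*x + 2*y*y + 3*z*z + x*y

lap_u = 2 + 4 + 6              # del.del u = 12, a constant

a = 0.05
c0 = np.array([0.3, 0.1, -0.2])
I = np.eye(3)

# excess of the central value over the average at the six axial points
six = np.array([c0 + s * a * I[n] for s in (1, -1) for n in range(3)])
excess_six = u(c0) - np.mean(u(six.T))
assert np.isclose(excess_six, -a**2 / 6 * lap_u)

# the same excess for the average over the sphere of radius a (Monte Carlo, antithetic)
rng = np.random.default_rng(4)
n = rng.standard_normal((100000, 3))
n /= np.linalg.norm(n, axis=1, keepdims=True)
pts = np.concatenate([c0 + a * n, c0 - a * n]).T
excess_sphere = u(c0) - np.mean(u(pts))
assert np.isclose(excess_sphere, -a**2 / 6 * lap_u, atol=1e-4)
```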

Transformation of Definite Integrals.

73. From the equations of No. 65, with the principles of integration of Nos. 57, 59, and 60, we may deduce various transformations of definite integrals, which are entirely analogous to those known in the scalar calculus under the name of integration by parts. The following formulæ (like those of Nos. 57, 59, and 60) are written for the case of continuous values of the quantities (scalar and vector) to which the signs ${\displaystyle \nabla ,\nabla .,}$ and ${\displaystyle \nabla \times }$ are applied. It is left to the student to complete the formulæ for cases of discontinuity in these values. The manner in which this is to be done may in each case be inferred from the nature of the formula itself. The most important discontinuities of scalars are those which occur at surfaces: in the case of vectors discontinuities at surfaces, at lines, and at points, should be considered.

74. From equation (3) we obtain

 ${\displaystyle \int \nabla (tu).d\rho =t''u''-t'u'=\int u\nabla t.d\rho +\int t\nabla u.d\rho ,}$
where the accents distinguish the quantities relating to the limits of the line-integrals. We are thus able to reduce a line-integral of the form ${\displaystyle \int u\nabla t.d\rho }$ to the form ${\displaystyle -\int t\nabla u.d\rho }$ with quantities free from the sign of integration.
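This identity of No. 74 may be checked numerically. The following sketch (a modern Python illustration; the functions ${\displaystyle t=xy,}$ ${\displaystyle u=z,}$ and the straight path from the origin to ${\displaystyle (1,1,1)}$ are chosen arbitrarily) verifies that the two line-integrals sum to ${\displaystyle t''u''-t'u'=1}$:

```python
# Check of No. 74 on the path rho(s) = (s, s, s), 0 <= s <= 1, with t = xy, u = z.
# Along the path: grad t = (y, x, 0), grad u = (0, 0, 1), drho/ds = (1, 1, 1),
# so u grad(t).drho/ds = z(y + x) = 2 s^2 and t grad(u).drho/ds = xy = s^2.

def integrate(f, n=4000):          # midpoint rule on [0, 1]
    h = 1.0 / n
    return sum(f((k + 0.5) * h) for k in range(n)) * h

I1 = integrate(lambda s: s * (s + s))   # int u grad(t).drho  = 2/3
I2 = integrate(lambda s: s * s)         # int t grad(u).drho  = 1/3
assert abs((I1 + I2) - 1.0) < 1e-6      # t''u'' - t'u' = 1*1 - 0*0 = 1
```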

75. From equation (5) we obtain

 ${\displaystyle \iint \nabla \times (u\omega ).d\sigma =\int u\omega .d\rho =\iint u\nabla \times \omega .d\sigma -\iint \omega \times \nabla u.d\sigma ,}$
where, as elsewhere in these equations, the line-integral relates to the boundary of the surface-integral.

From this, by substitution of ${\displaystyle \nabla t}$ for ${\displaystyle \omega ,}$ we may derive as a particular case

 ${\displaystyle \iint \nabla u\times \nabla t.d\sigma =\int u\nabla t.d\rho =-\int t\nabla u.d\rho .}$
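The surface-to-line relation of No. 75 may be verified on a plane surface. The following sketch (a modern Python illustration; the unit square in the plane ${\displaystyle z=0}$ with ${\displaystyle u=y}$ and ${\displaystyle t=xy}$ is chosen arbitrarily) checks ${\displaystyle \iint \nabla u\times \nabla t.d\sigma =\int u\nabla t.d\rho }$ for a counterclockwise boundary:

```python
# On z = 0 with dsigma = k dx dy: (grad u x grad t).k = u_x t_y - u_y t_x = -y
# for u = y, t = xy.  Boundary integral: oint y (y dx + x dy); the bottom
# (y = 0) and left (x = 0) edges contribute nothing.

def mid(f, n=200):                 # midpoint rule on [0, 1]
    h = 1.0 / n
    return sum(f((k + 0.5) * h) for k in range(n)) * h

surface = mid(lambda y: mid(lambda x: -y))   # iint -y dx dy = -1/2
line = (mid(lambda y: y)                     # right edge x = 1: int y dy = 1/2
        - mid(lambda x: 1.0))                # top edge y = 1, x from 1 to 0: -1
assert abs(surface - line) < 1e-9            # both equal -1/2
```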
76. From equation (4) we obtain
 ${\displaystyle \iiint \nabla .[u\omega ]dv=\iint u\omega .d\sigma =\iiint \omega .\nabla udv+\iiint u\nabla .\omega dv,}$
where, as elsewhere in these equations, the surface-integral relates to the boundary of the volume-integrals.

From this, by substitution of ${\displaystyle \nabla t}$ for ${\displaystyle \omega ,}$ we derive as a particular case

 ${\displaystyle \iiint \nabla t.\nabla udv=\iint u\nabla t.d\sigma -\iiint u\nabla .\nabla tdv=\iint t\nabla u.d\sigma -\iiint t\nabla .\nabla udv,}$
which is Green's Theorem. The substitution of ${\displaystyle s\nabla t}$ for ${\displaystyle \omega }$ gives the more general form of this theorem which is due to Thomson, viz.,
 {\displaystyle {\begin{aligned}\iiint s\nabla t.\nabla udv&=\iint us\nabla t.d\sigma -\iiint u\nabla .[s\nabla t]dv\\&=\iint ts\nabla u.d\sigma -\iiint t\nabla .[s\nabla u]dv.\end{aligned}}}
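Green's theorem admits a simple numerical verification. The following sketch (a modern Python illustration; the unit cube with ${\displaystyle t=x^{2}}$ and ${\displaystyle u=x}$ is chosen arbitrarily) checks ${\displaystyle \iiint \nabla t.\nabla u\,dv=\iint u\nabla t.d\sigma -\iiint u\nabla .\nabla t\,dv}$ by midpoint quadrature:

```python
# For t = x^2, u = x on the unit cube: grad t . grad u = 2x, del.del t = 2,
# and u grad(t).n is nonzero only on the face x = 1, where it equals 2.

def mid(f, n=20):                  # midpoint rule on [0, 1]
    h = 1.0 / n
    return sum(f((k + 0.5) * h) for k in range(n)) * h

def vol(f, n=20):                  # triple integral over the unit cube
    return mid(lambda x: mid(lambda y: mid(lambda z: f(x, y, z), n), n), n)

lhs  = vol(lambda x, y, z: 2 * x)            # iiint grad t . grad u dv = 1
surf = mid(lambda y: mid(lambda z: 2.0))     # face x = 1: iint u (grad t.n) dA = 2
rhs  = surf - vol(lambda x, y, z: 2 * x)     # minus iiint u del.del t dv
assert abs(lhs - rhs) < 1e-9                 # both sides equal 1
```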
77. From equation (6) we obtain
 ${\displaystyle \iiint \nabla .[\tau \times \omega ]dv=\iint \tau \times \omega .d\sigma =\iiint \omega .\nabla \times \tau dv-\iiint \tau .\nabla \times \omega dv.}$
A particular case is
 ${\displaystyle \iiint \nabla u.\nabla \times \omega dv=\iint \omega \times \nabla u.d\sigma .}$

Integration of Differential Equations.

78. If throughout any continuous space (or in all space)

 ${\displaystyle \nabla u=0,}$
then throughout the same space
 ${\displaystyle u={\text{constant.}}}$
79. If throughout any continuous space (or in all space)

 ${\displaystyle \nabla .\nabla u=0,}$
and in any finite part of that space, or in any finite surface in or bounding it,
 ${\displaystyle \nabla u=0,}$
then throughout the whole space
 ${\displaystyle \nabla u=0,}$⁠and⁠${\displaystyle u={\text{constant.}}}$
This will appear from the following considerations:

If ${\displaystyle \nabla u=0}$ in any finite part of the space, ${\displaystyle u}$ is constant in that part.

If ${\displaystyle u}$ is not constant throughout, let us imagine a sphere situated principally in the part in which ${\displaystyle u}$ is constant, but projecting slightly into a part in which ${\displaystyle u}$ has a greater value, or else into a part in which ${\displaystyle u}$ has a less. The surface-integral of ${\displaystyle \nabla u}$ for the part of the spherical surface in the region where ${\displaystyle u}$ is constant will have the value zero: for the other part of the surface, the integral will be either greater than zero, or less than zero. Therefore the whole surface-integral for the spherical surface will not have the value zero, which is required by the general condition, ${\displaystyle \nabla .\nabla u=0.}$

Again, if ${\displaystyle \nabla u=0}$ only in a surface in or bounding the space in which ${\displaystyle \nabla .\nabla u=0,u}$ will be constant in this surface, and the surface will be contiguous to a region in which ${\displaystyle \nabla .\nabla u=0}$ and ${\displaystyle u}$ has a greater value than in the surface, or else a less value than in the surface. Let us imagine a sphere lying principally on the other side of the surface, but projecting slightly into this region, and let us particularly consider the surface-integral of ${\displaystyle \nabla u}$ for the small segment cut off by the surface ${\displaystyle \nabla u=0.}$ The integral for that part of the surface of the segment which consists of part of the surface ${\displaystyle \nabla u=0}$ will have the value zero, the integral for the spherical part will have a value either greater than zero or else less than zero. Therefore the integral for the whole surface of the segment cannot have the value zero, which is demanded by the general condition, ${\displaystyle \nabla .\nabla u=0.}$

80. If throughout a certain space (which need not be continuous, and which may extend to infinity)

 ${\displaystyle \nabla .\nabla u=0,}$
and in all the bounding surfaces
 ${\displaystyle u={\text{constant}}=a,}$
and (in case the space extends to infinity) if at infinite distances within the space ${\displaystyle u=a,}$—then throughout the space
 ${\displaystyle \nabla u=0,}$⁠and⁠${\displaystyle u=a.}$
For, if anywhere in the interior of the space ${\displaystyle \nabla u}$ has a value different from zero, we may find a point ${\displaystyle P}$ where such is the case, and where ${\displaystyle u}$ has a value ${\displaystyle b}$ different from ${\displaystyle a,}$—to fix our ideas we will say less. Imagine a surface enclosing all of the space in which ${\displaystyle u<b.}$ (This must be possible, since that part of the space does not reach to infinity.) The surface-integral of ${\displaystyle \nabla u}$ for this surface has the value zero in virtue of the general condition ${\displaystyle \nabla .\nabla u=0.}$ But, from the manner in which the surface is defined, no part of the integral can be negative. Therefore no part of the integral can be positive, and the supposition made with respect to the point ${\displaystyle P}$ is untenable. That the supposition that ${\displaystyle b>a}$ is untenable may be shown in a similar manner. Therefore the value of ${\displaystyle u}$ is constant.

This proposition may be generalized by substituting the condition ${\displaystyle \nabla .[t\nabla u]=0}$ for ${\displaystyle \nabla .\nabla u=0,}$ ${\displaystyle t}$ denoting any positive (or any negative) scalar function of position in space. The conclusion would be the same, and the demonstration similar.
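The conclusion of No. 80 may be illustrated discretely. The following sketch (a modern Python illustration on a two-dimensional grid for brevity; the boundary value ${\displaystyle a=3}$ and the grid size are arbitrary) solves the discrete Laplace equation by Jacobi iteration and confirms that a constant boundary value forces the constant solution throughout:

```python
# Discrete analogue of No. 80: del.del u = 0 in the interior (each interior
# value equals the average of its four neighbors) with u = a on the boundary
# implies u = a everywhere.  Jacobi iteration converges to that solution.
a, n = 3.0, 10
u = [[a if i in (0, n - 1) or j in (0, n - 1) else 0.0
      for j in range(n)] for i in range(n)]
for _ in range(2000):
    u = [[a if i in (0, n - 1) or j in (0, n - 1) else
          0.25 * (u[i - 1][j] + u[i + 1][j] + u[i][j - 1] + u[i][j + 1])
          for j in range(n)] for i in range(n)]
assert all(abs(v - a) < 1e-6 for row in u for v in row)
```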

81. If throughout a certain space (which need not be continuous, and which may extend to infinity)

 ${\displaystyle \nabla .\nabla u=0,}$
and in all the bounding surfaces the normal component of ${\displaystyle \nabla u}$ vanishes, and at infinite distances within the space (if such there are) ${\displaystyle r^{2}{\frac {du}{dr}}=0,}$ where ${\displaystyle r}$ denotes the distance from a fixed origin, then throughout the space
 ${\displaystyle \nabla u=0,}$
and in each continuous portion of the same
 ${\displaystyle u={\text{constant.}}}$
For, if anywhere in the space in question ${\displaystyle \nabla u}$ has a value different from zero, let it have such a value at a point ${\displaystyle P,}$ and let ${\displaystyle u}$ be there equal to ${\displaystyle b.}$ Imagine a spherical surface about the above-mentioned origin as center, enclosing the point ${\displaystyle P,}$ and with a radius ${\displaystyle r.}$ Consider that portion of the space to which the theorem relates which is within the sphere and in which ${\displaystyle u<b.}$ The surface-integral of ${\displaystyle \nabla u}$ for this space is equal to zero in virtue of the general condition ${\displaystyle \nabla .\nabla u=0.}$ That part of the integral (if any) which relates to a portion of the spherical surface has a value numerically not greater than ${\displaystyle 4\pi r^{2}\left({\frac {du}{dr}}\right)',}$ where ${\displaystyle \left({\frac {du}{dr}}\right)'}$ denotes the greatest numerical value of ${\displaystyle {\frac {du}{dr}}}$ in the portion of the spherical surface considered. Hence, the value of this part of the surface-integral may be made less (numerically) than any assignable quantity by giving to ${\displaystyle r}$ a sufficiently great value. Hence, the other part of the surface-integral (viz., that relating to the surface in which ${\displaystyle u=b,}$ and to the boundary of the space to which the theorem relates) may be given a value differing from zero by less than any assignable quantity. But no part of the integral relating to this surface can be negative. Therefore no part can be positive, and the supposition relative to the point ${\displaystyle P}$ is untenable.

This proposition also may be generalized by substituting ${\displaystyle \nabla .[t\nabla u]=0}$ for ${\displaystyle \nabla .\nabla u=0,}$ and ${\displaystyle tr^{2}{\frac {du}{dr}}=0}$ for ${\displaystyle r^{2}{\frac {du}{dr}}=0.}$

82. If throughout any continuous space (or in all space)

 ${\displaystyle \nabla t=\nabla u,}$
then throughout the same space
 ${\displaystyle t=u+{\text{const.}}}$
The truth of this and the three following theorems will be apparent if we consider the difference ${\displaystyle t-u.}$

83. If throughout any continuous space (or in all space)

 ${\displaystyle \nabla .\nabla t=\nabla .\nabla u,}$
and in any finite part of that space, or in any finite surface in or bounding it,
 ${\displaystyle \nabla t=\nabla u,}$
then throughout the whole space
 ${\displaystyle \nabla t=\nabla u,}$⁠and⁠${\displaystyle t=u+{\text{const.}}}$
84. If throughout a certain space (which need not be continuous, and which may extend to infinity)
 ${\displaystyle \nabla .\nabla t=\nabla .\nabla u,}$
and in all the bounding surfaces
 ${\displaystyle t=u,}$
and at infinite distances within the space (if such there are)
 ${\displaystyle t=u,}$
then throughout the space
 ${\displaystyle t=u.}$

85. If throughout a certain space (which need not be continuous, and which may extend to infinity)

 ${\displaystyle \nabla .\nabla t=\nabla .\nabla u,}$
and in all the bounding surfaces the normal components of ${\displaystyle \nabla t}$ and ${\displaystyle \nabla u}$ are equal, and at infinite distances within the space (if such there are) ${\displaystyle r^{2}\left({\frac {dt}{dr}}-{\frac {du}{dr}}\right)=0,}$ where ${\displaystyle r}$ denotes the distance from some fixed origin,—then throughout the space
 ${\displaystyle \nabla t=\nabla u,}$
and in each continuous portion of the space
 ${\displaystyle t-u={\text{constant.}}}$

86. If throughout any continuous space (or in all space)

 ${\displaystyle \nabla \times \tau =\nabla \times \omega }$⁠and⁠${\displaystyle \nabla .\tau =\nabla .\omega ,}$
and in any finite part of that space, or in any finite surface in or bounding it,
 ${\displaystyle \tau =\omega ,}$
then throughout the whole space
 ${\displaystyle \tau =\omega .}$
For, since ${\displaystyle \nabla \times (\tau -\omega )=0,}$ we may set ${\displaystyle \nabla u=\tau -\omega ,}$ making the space acyclic (if necessary) by diaphragms. Then in the whole space ${\displaystyle u}$ is single-valued and ${\displaystyle \nabla .\nabla u=0,}$ and in a part of the space, or in a surface in or bounding it, ${\displaystyle \nabla u=0.}$ Hence throughout the space ${\displaystyle \nabla u=\tau -\omega =0.}$

87. If throughout an aperiphractic[5] space contained within finite boundaries but not necessarily continuous

 ${\displaystyle \nabla \times \tau =\nabla \times \omega }$⁠and⁠${\displaystyle \nabla .\tau =\nabla .\omega ,}$
and in all the bounding surfaces the tangential components of ${\displaystyle \tau }$ and ${\displaystyle \omega }$ are equal, then throughout the space
 ${\displaystyle \tau =\omega .}$
It is evidently sufficient to prove this proposition for a continuous space. Setting ${\displaystyle \nabla u=\tau -\omega ,}$ we have ${\displaystyle \nabla .\nabla u=0}$ for the whole space, and ${\displaystyle u={\text{constant}}}$ for its boundary, which will be a single surface for a continuous aperiphractic space. Hence throughout the space
 ${\displaystyle \nabla u=\tau -\omega =0.}$
88. If throughout an acyclic space contained within finite boundaries but not necessarily continuous
 ${\displaystyle \nabla \times \tau =\nabla \times \omega }$⁠and⁠${\displaystyle \nabla .\tau =\nabla .\omega ,}$
and in all the bounding surfaces the normal components of ${\displaystyle \tau }$ and ${\displaystyle \omega }$ are equal, then throughout the whole space
 ${\displaystyle \tau =\omega .}$
Setting ${\displaystyle \nabla u=\tau -\omega ,}$ we have ${\displaystyle \nabla .\nabla u=0,}$ throughout the space, and the normal component of ${\displaystyle \nabla u}$ at the boundary equal to zero. Hence throughout the whole space ${\displaystyle \nabla u=\tau -\omega =0.}$

89. If throughout a certain space (which need not be continuous, and which may extend to infinity)

 ${\displaystyle \nabla .\nabla \tau =\nabla .\nabla \omega }$
and in all the bounding surfaces
 ${\displaystyle \tau =\omega ,}$
and at infinite distances within the space (if such there are)
 ${\displaystyle \tau =\omega ,}$
then throughout the whole space
 ${\displaystyle \tau =\omega .}$
This will be apparent if we consider separately each of the scalar components of ${\displaystyle \tau }$ and ${\displaystyle \omega .}$

Minimum values of the Volume-integral ${\displaystyle \iiint u\omega .\omega dv.}$

(Thomson's Theorems.)

90. Let it be required to determine for a certain space a vector function of position ${\displaystyle \omega }$ subject to certain conditions (to be specified hereafter), so that the volume-integral

 ${\displaystyle \iiint u\omega .\omega dv}$
for that space shall have a minimum value, ${\displaystyle u}$ denoting a given positive scalar function of position.

a. In the first place, let the vector ${\displaystyle \omega }$ be subject to the conditions that ${\displaystyle \nabla .\omega }$ is given within the space, and that the normal component of ${\displaystyle \omega }$ is given for the bounding surface. (This component must of course be such that the surface-integral of ${\displaystyle \omega }$ shall be equal to the volume-integral ${\displaystyle \iiint \nabla .\omega dv.}$ If the space is not continuous, this must be true of each continuous portion of it. See No. 57.) The solution is that ${\displaystyle \nabla \times (u\omega )=0,}$ or more generally, that the line-integral of ${\displaystyle u\omega }$ for any closed curve in the space shall vanish.

The existence of the minimum requires that

 ${\displaystyle \iiint u\omega .\delta \omega dv=0,}$
while ${\displaystyle \delta \omega }$ is subject to the limitation that
 ${\displaystyle \nabla .\delta \omega =0,}$
and that the normal component of ${\displaystyle \delta \omega }$ at the bounding surface vanishes. To prove that the line-integral of ${\displaystyle u\omega }$ vanishes for any closed curve within the space, let us imagine the curve to be surrounded by an infinitely slender tube of normal section ${\displaystyle dz,}$ which may be either constant or variable. We may satisfy the equation ${\displaystyle \nabla .\delta \omega =0}$ by making ${\displaystyle \delta \omega =0}$ outside of the tube, and ${\displaystyle \delta \omega \,dz=\delta a{\frac {d\rho }{ds}}}$ within it, ${\displaystyle \delta a}$ denoting an arbitrary infinitesimal constant, ${\displaystyle \rho }$ the position-vector, and ${\displaystyle ds}$ an element of the length of the tube or closed curve. We have then
 ${\displaystyle \iiint u\omega .\delta \omega \,dv=\int u\omega .\delta \omega \,dz\,ds=\int u\omega .d\rho \,\delta a=\delta a\int u\omega .d\rho =0,}$
whence
 ${\displaystyle \int u\omega .d\rho =0.}$⁠q.e.d.
We may express this result by saying that ${\displaystyle u\omega }$ is the derivative of a single-valued scalar function of position in space. (See No. 67.)

If for certain parts of the surface the normal component of ${\displaystyle \omega }$ is not given for each point, but only the surface-integral of ${\displaystyle \omega }$ for each such part, then the above reasoning will apply not only to closed curves, but also to curves commencing and ending in such a part of the surface. The primitive of ${\displaystyle u\omega }$ will then have a constant value in each such part.

If the space extends to infinity and there is no special condition respecting the value of ${\displaystyle \omega }$ at infinite distances, the primitive of ${\displaystyle u\omega }$ will have a constant value at infinite distances within the space or within each separate continuous part of it.

If we except those cases in which the problem has no definite meaning because the data are such that the integral ${\displaystyle \iiint u\omega .\omega \,dv}$ must be infinite, it is evident that a minimum must always exist, and (on account of the quadratic form of the integral) that it is unique. That the conditions just found are sufficient to insure this minimum, is evident from the consideration that any allowable values of ${\displaystyle \delta \omega }$ may be made up of such values as we have supposed. Therefore, there will be one and only one vector function of position in space which satisfies these conditions together with those enumerated at the beginning of this number.

b. In the second place, let the vector ${\displaystyle \omega }$ be subject to the conditions that ${\displaystyle \nabla \times \omega }$ is given throughout the space, and that the tangential component of ${\displaystyle \omega }$ is given at the bounding surface. The solution is that

 ${\displaystyle \nabla .[u\omega ]=0,}$
and, if the space is periphractic, that the surface-integral of ${\displaystyle u\omega }$ vanishes for each of the bounding surfaces.

The existence of the minimum requires that

 ${\displaystyle \iiint u\omega .\delta \omega \,dv=0,}$
while ${\displaystyle \delta \omega }$ is subject to the conditions that
 ${\displaystyle \nabla \times \delta \omega =0,}$
and that the tangential component of ${\displaystyle \delta \omega }$ in the bounding surface vanishes. In virtue of these conditions we may set
 ${\displaystyle \delta \omega =\nabla \delta q,}$
where ${\displaystyle \delta q}$ is an arbitrary infinitesimal scalar function of position, subject only to the condition that it is constant in each of the bounding surfaces. (See No. 67.) By substitution of this value we obtain
 ${\displaystyle \iiint u\omega .\nabla \,\delta q\,dv=0,}$
or integrating by parts (No. 76)
 ${\displaystyle \iint u\omega .d\sigma \,\delta q-\iiint \nabla .[u\omega ]\delta q\,dv=0.}$
Since ${\displaystyle \delta q}$ is arbitrary in the volume-integral, we have throughout the whole space
 ${\displaystyle \nabla .[u\omega ]=0;}$
and since ${\displaystyle \delta q}$ has an arbitrary constant value in each of the bounding surfaces (if the boundary of the space consists of separate parts), we have for each such part
 ${\displaystyle \iint u\omega .d\sigma =0.}$

Potentials, Newtonians, Laplacians.

91. Def.—If ${\displaystyle u'}$ is the scalar quantity of something situated at a certain point ${\displaystyle \rho ',}$ the potential of ${\displaystyle u'}$ for any point ${\displaystyle \rho }$ is a scalar function of ${\displaystyle \rho ,}$ defined by the equation

 ${\displaystyle {\text{pot }}u'={\frac {u'}{[\rho '-\rho ]_{0}}},}$
and the Newtonian of ${\displaystyle u'}$ for any point ${\displaystyle \rho }$ is a vector function of ${\displaystyle \rho }$ defined by the equation
 ${\displaystyle {\text{new }}u'={\frac {\rho '-\rho }{[\rho '-\rho ]_{0}^{3}}}u'.}$
Again, if ${\displaystyle \omega '}$ is the vector representing the quantity and direction of something situated at the point ${\displaystyle \rho ',}$ the potential and the Laplacian of ${\displaystyle \omega '}$ for any point ${\displaystyle \rho }$ are vector functions of ${\displaystyle \rho }$ defined by the equations
 {\displaystyle {\begin{aligned}{\text{pot }}\omega '&={\frac {\omega '}{[\rho '-\rho ]_{0}}},\\{\text{lap }}\omega '&={\frac {\rho '-\rho }{[\rho '-\rho ]_{0}^{3}}}\times \omega '.\end{aligned}}}
92. If ${\displaystyle u}$ or ${\displaystyle \omega }$ is a scalar or vector function of position in space, we may write ${\displaystyle {\text{Pot }}u,{\text{New }}u,{\text{Pot }}\omega ,{\text{Lap }}\omega }$ for the volume-integrals of ${\displaystyle {\text{pot }}u',}$ etc., taken as functions of ${\displaystyle \rho ';}$ i.e., we may set
 {\displaystyle {\begin{aligned}{\text{Pot }}u&=\iiint {\text{pot }}u'\,dv'=\iiint {\frac {u'}{[\rho '-\rho ]_{0}}}dv',\\{\text{New }}u&=\iiint {\text{new }}u'\,dv'=\iiint {\frac {\rho '-\rho }{[\rho '-\rho ]_{0}^{3}}}u'\,dv',\\{\text{Pot }}\omega &=\iiint {\text{pot }}\omega '\,dv'=\iiint {\frac {\omega '}{[\rho '-\rho ]_{0}}}dv',\\{\text{Lap }}\omega &=\iiint {\text{lap }}\omega '\,dv'=\iiint {\frac {\rho '-\rho }{[\rho '-\rho ]_{0}^{3}}}\times \omega '\,dv',\end{aligned}}}
where ${\displaystyle \rho }$ is to be regarded as constant in the integration. The integration extends over all space, or wherever ${\displaystyle u'}$ or ${\displaystyle \omega '}$ have values other than zero. These integrals may themselves be called (integral) potentials, Newtonians, and Laplacians.
93.
${\displaystyle {\frac {d{\text{ Pot }}u}{dx}}={\text{Pot }}{\frac {du}{dx}},}$${\displaystyle {\frac {d{\text{ Pot }}\omega }{dx}}={\text{Pot }}{\frac {d\omega }{dx}}\cdot }$
This will be evident with respect both to scalar and to vector functions, if we suppose that when we differentiate the potential with respect to ${\displaystyle x}$ (thus varying the position of the point for which the potential is taken) each element of volume ${\displaystyle dv'}$ in the implied integral remains fixed, not in absolute position, but in position relative to the point for which the potential is taken. This supposition is evidently allowable whenever the integration indicated by the symbol ${\displaystyle {\text{Pot}}}$ tends to a definite limit when the limits of integration are indefinitely extended.

Since we may substitute ${\displaystyle y}$ and ${\displaystyle z}$ for ${\displaystyle x}$ in the preceding formula, and since a constant factor of any kind may be introduced under the sign of integration, we have

 {\displaystyle {\begin{aligned}\nabla {\text{ Pot }}u&={\text{ Pot }}\nabla u,\\\nabla .{\text{ Pot }}\omega &={\text{ Pot }}\nabla .\omega ,\\\nabla \times {\text{ Pot }}\omega &={\text{ Pot }}\nabla \times \omega ,\\\nabla .\nabla {\text{ Pot }}u&={\text{ Pot }}\nabla .\nabla u,\\\nabla .\nabla {\text{ Pot }}\omega &={\text{ Pot }}\nabla .\nabla \omega ,\end{aligned}}}
i.e., the symbols ${\displaystyle \nabla ,\nabla .,\nabla \times ,\nabla .\nabla }$ may be applied indifferently before or after the sign ${\displaystyle {\text{Pot}}.}$

Yet a certain restriction is to be observed. When the operation of taking the (integral) potential does not give a definite finite value, the first members of these equations are to be regarded as entirely indeterminate, but the second members may have perfectly definite values. This would be the case, for example, if ${\displaystyle u}$ or ${\displaystyle \omega }$ had a constant value throughout all space. It might seem harmless to set an indefinite expression equal to a definite one, but it would be dangerous, since we might with equal right set the indefinite expression equal to other definite expressions, and then be misled into supposing these definite expressions to be equal to one another. It will be safe to say that the above equations will hold, provided that the potential of ${\displaystyle u}$ or ${\displaystyle \omega }$ has a definite value. It will be observed that whenever ${\displaystyle {\text{Pot }}u}$ or ${\displaystyle {\text{Pot }}\omega }$ has a definite value in general (i.e., with the possible exception of certain points, lines, and surfaces),[6] the first members of all these equations will have definite values in general, and therefore the second members of the equations, being necessarily equal to the first members when these have definite values, will also have definite values in general.

94. Again, whenever ${\displaystyle {\text{Pot }}u}$ has a definite value we may write

 ${\displaystyle \nabla {\text{Pot }}u=\nabla \iiint {\frac {u'}{r}}dv'=\iiint \nabla {\frac {1}{r}}u'\,dv',}$
where ${\displaystyle r}$ stands for ${\displaystyle [\rho '-\rho ]_{0}.}$ But
 ${\displaystyle \nabla {\frac {1}{r}}={\frac {\rho '-\rho }{r^{3}}},}$
whence
${\displaystyle \nabla {\text{Pot }}u={\text{New }}u.}$
Moreover, ${\displaystyle {\text{New }}u}$ will in general have a definite value, if ${\displaystyle {\text{Pot }}u}$ has.
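For a single point element the result of No. 94 reduces to ${\displaystyle \nabla \,{\text{pot }}u'={\text{new }}u',}$ which may be checked by finite differences. The following sketch (a modern Python illustration; the magnitude ${\displaystyle u'}$ and the positions are chosen arbitrarily) compares the central-difference gradient of ${\displaystyle {\text{pot }}u'}$ with ${\displaystyle {\text{new }}u'}$:

```python
# pot u' = u'/r and new u' = u'(rho' - rho)/r^3, with r = |rho' - rho|.
# The gradient is taken with respect to rho, the point of evaluation.
import math

up = 2.5                        # scalar magnitude u' (arbitrary)
rp = (1.0, -2.0, 0.5)           # position rho' of the element (arbitrary)

def pot(x, y, z):
    return up / math.dist(rp, (x, y, z))

def new(x, y, z):
    r = math.dist(rp, (x, y, z))
    return tuple(up * (rp[i] - (x, y, z)[i]) / r**3 for i in range(3))

p, h = (0.2, 0.3, -0.4), 1e-5
grad = tuple(
    (pot(*[p[j] + (h if j == i else 0.0) for j in range(3)])
     - pot(*[p[j] - (h if j == i else 0.0) for j in range(3)])) / (2 * h)
    for i in range(3))
assert all(abs(g - c) < 1e-6 for g, c in zip(grad, new(*p)))
```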

95. In like manner, whenever ${\displaystyle {\text{Pot }}\omega }$ has a definite value,

 ${\displaystyle \nabla \times {\text{Pot }}\omega =\nabla \times \iiint {\frac {\omega '}{r}}dv'=\iiint \nabla \times {\frac {\omega '}{r}}dv'=\iiint \nabla {\frac {1}{r}}\times \omega '\,dv'.}$
Substituting the value of ${\displaystyle \nabla {\frac {1}{r}}}$ given above we have
 ${\displaystyle \nabla \times {\text{Pot }}\omega ={\text{Lap }}\omega .}$
${\displaystyle {\text{Lap }}\omega }$ will have a definite value in general whenever ${\displaystyle {\text{Pot }}\omega }$ has.
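The corresponding point-element form of No. 95, ${\displaystyle \nabla \times {\text{pot }}\omega '={\text{lap }}\omega ',}$ may be checked the same way. The following sketch (a modern Python illustration with arbitrarily chosen ${\displaystyle \omega '}$ and positions) compares the central-difference curl of ${\displaystyle \omega '/r}$ with ${\displaystyle {\frac {\rho '-\rho }{r^{3}}}\times \omega '}$:

```python
# Since omega' is constant, curl(omega'/r) = grad(1/r) x omega'
# = ((rho' - rho)/r^3) x omega' = lap omega'.
import math

wp = (0.3, -1.2, 2.0)           # vector element omega' (arbitrary)
rp = (1.0, 0.5, -0.7)           # its position rho' (arbitrary)

def field(x, y, z):             # pot omega' = omega'/r
    r = math.dist(rp, (x, y, z))
    return tuple(w / r for w in wp)

def lap(x, y, z):               # ((rho' - rho)/r^3) x omega'
    r = math.dist(rp, (x, y, z))
    g = tuple((rp[i] - (x, y, z)[i]) / r**3 for i in range(3))
    return (g[1] * wp[2] - g[2] * wp[1],
            g[2] * wp[0] - g[0] * wp[2],
            g[0] * wp[1] - g[1] * wp[0])

p, h = (0.0, 0.0, 0.0), 1e-5
def d(i, j):                    # d(field_i)/d(x_j) by central difference
    pp = [p[k] + (h if k == j else 0.0) for k in range(3)]
    pm = [p[k] - (h if k == j else 0.0) for k in range(3)]
    return (field(*pp)[i] - field(*pm)[i]) / (2 * h)

curl = (d(2, 1) - d(1, 2), d(0, 2) - d(2, 0), d(1, 0) - d(0, 1))
assert all(abs(c - l) < 1e-6 for c, l in zip(curl, lap(*p)))
```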

96. Hence, with the aid of No. 93, we obtain

 {\displaystyle {\begin{aligned}\nabla \times {\text{Lap }}\omega &={\text{Lap }}\nabla \times \omega ,\\\nabla .{\text{Lap }}\omega &=0,\end{aligned}}}
whenever ${\displaystyle {\text{Pot }}\omega }$ has a definite value.

97. By the method of No. 93 we obtain

 ${\displaystyle \nabla .{\text{New }}u=\nabla .\iiint {\frac {\rho '-\rho }{r^{3}}}u'\,dv'=\iiint \nabla u'.{\frac {\rho '-\rho }{r^{3}}}dv'.}$
To find the value of this integral, we may regard the point ${\displaystyle \rho ,}$ which is constant in the integration, as the center of polar coordinates. Then ${\displaystyle r}$ becomes the radius vector of the point ${\displaystyle \rho ',}$ and we may set
 ${\displaystyle dv'=r^{2}\,dq\,dr,}$
where ${\displaystyle r^{2}dq}$ is the element of a spherical surface having center at ${\displaystyle \rho }$ and radius ${\displaystyle r.}$ We may also set
 ${\displaystyle \nabla u'.{\frac {\rho '-\rho }{r}}={\frac {du'}{dr}}\cdot }$
We thus obtain
 ${\displaystyle \nabla .{\text{New }}u=\iiint {\frac {du'}{dr}}dq\,dr=4\pi \int {\frac {d{\bar {u}}'}{dr}}dr=4\pi {\bar {u}}'_{r=\infty }-4\pi {\bar {u}}'_{r=0},}$
where ${\displaystyle {\bar {u}}}$ denotes the average value of ${\displaystyle u}$ in a spherical surface of radius ${\displaystyle r}$ about the point ${\displaystyle \rho }$ as center.

Now if ${\displaystyle {\text{Pot }}u}$ has in general a definite value, we must have ${\displaystyle {\bar {u}}'=0}$ for ${\displaystyle r=\infty .}$ Also, ${\displaystyle \nabla .{\text{New }}u}$ will have in general a definite value. For ${\displaystyle r=0,}$ the value of ${\displaystyle {\bar {u}}'}$ is evidently ${\displaystyle u.}$ We have, therefore,

 {\displaystyle {\begin{aligned}\nabla .{\text{New }}u&=-4\pi u,\\\nabla .\nabla {\text{Pot }}u&=-4\pi u.\end{aligned}}}[7]
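The relation ${\displaystyle \nabla .\nabla {\text{Pot }}u=-4\pi u}$ may be verified on a case where the potential is known in closed form. The following sketch (a modern Python illustration) assumes the classical interior potential of a uniform unit ball of density ${\displaystyle u=1,}$ namely ${\displaystyle 2\pi (1-{\tfrac {1}{3}}r^{2}),}$ and checks by finite differences that its Laplacian is ${\displaystyle -4\pi }$:

```python
# Interior potential of the uniform unit ball (density 1), a classical closed
# form assumed here: Pot u = 2 pi (1 - r^2/3) for r < 1.  Its Laplacian should
# equal -4 pi u = -4 pi at interior points.
import math

def Pot(x, y, z):
    r2 = x * x + y * y + z * z
    return 2 * math.pi * (1 - r2 / 3)

p, h = (0.2, -0.1, 0.3), 1e-3       # an interior point (r < 1)
lap = sum(
    (Pot(*[p[k] + (h if k == i else 0.0) for k in range(3)])
     + Pot(*[p[k] - (h if k == i else 0.0) for k in range(3)])
     - 2 * Pot(*p))
    for i in range(3)) / h**2
assert abs(lap + 4 * math.pi) < 1e-6
```

The finite-difference Laplacian is exact for a quadratic, so the agreement is limited only by rounding.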
98. If ${\displaystyle {\text{Pot }}\omega }$ has in general a definite value,
 {\displaystyle {\begin{aligned}\nabla .\nabla {\text{Pot }}\omega &=\nabla .\nabla {\text{Pot }}[ui+vj+wk]\\&=\nabla .\nabla {\text{Pot }}ui+\nabla .\nabla {\text{Pot }}vj+\nabla .\nabla {\text{Pot }}wk\\&=-4\pi ui-4\pi vj-4\pi wk\\&=-4\pi \omega .\end{aligned}}}
Hence, by No. 71,
 ${\displaystyle \nabla \times \nabla \times {\text{Pot }}\omega -\nabla \nabla .{\text{Pot }}\omega =4\pi \omega .}$
That is,
${\displaystyle {\text{Lap }}\nabla \times \omega -{\text{New }}\nabla .\omega =4\pi \omega .}$
If we set
 ${\displaystyle \omega _{1}={\frac {1}{4\pi }}{\text{Lap }}\nabla \times \omega ,}$⁠${\displaystyle \omega _{2}={\frac {-1}{4\pi }}{\text{New }}\nabla .\omega ,}$
we have
${\displaystyle \omega =\omega _{1}+\omega _{2},}$
where ${\displaystyle \omega _{1}}$ and ${\displaystyle \omega _{2}}$ are such functions of position that ${\displaystyle \nabla .\omega _{1}=0,}$ and ${\displaystyle \nabla \times \omega _{2}=0.}$ This is expressed by saying that ${\displaystyle \omega _{1}}$ is solenoidal, and ${\displaystyle \omega _{2}}$ irrotational. ${\displaystyle {\text{Pot }}\omega _{1}}$ and ${\displaystyle {\text{Pot }}\omega _{2},}$ like ${\displaystyle {\text{Pot }}\omega ,}$ will have in general definite values.
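The defining conditions of the two parts may be illustrated on concrete fields. The following sketch (a modern Python illustration; the fields are chosen arbitrarily and serve only to exhibit the local conditions ${\displaystyle \nabla .\omega _{1}=0}$ and ${\displaystyle \nabla \times \omega _{2}=0,}$ not the integral construction by Lap and New) checks a solenoidal and an irrotational field by central differences:

```python
# omega_1 = (-y, x, 0) is solenoidal (del . omega_1 = 0);
# omega_2 = (x, y, z) = grad(r^2/2) is irrotational (del x omega_2 = 0).
w1 = lambda x, y, z: (-y, x, 0.0)
w2 = lambda x, y, z: (x, y, z)

p, h = (0.4, -0.2, 0.9), 1e-5
def d(f, i, j):                 # d(f_i)/d(x_j) by central difference at p
    pp = [p[k] + (h if k == j else 0.0) for k in range(3)]
    pm = [p[k] - (h if k == j else 0.0) for k in range(3)]
    return (f(*pp)[i] - f(*pm)[i]) / (2 * h)

div1 = d(w1, 0, 0) + d(w1, 1, 1) + d(w1, 2, 2)
curl2 = (d(w2, 2, 1) - d(w2, 1, 2),
         d(w2, 0, 2) - d(w2, 2, 0),
         d(w2, 1, 0) - d(w2, 0, 1))
assert abs(div1) < 1e-9
assert all(abs(c) < 1e-9 for c in curl2)
```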

It is worth while to notice that there is only one way in which a vector function of position in space having a definite potential can be thus divided into solenoidal and irrotational parts having definite potentials. For if ${\displaystyle \omega _{1}+\epsilon ,\,\omega _{2}-\epsilon }$ are two other such parts,

 ${\displaystyle \nabla .\epsilon =0}$⁠and⁠${\displaystyle \nabla \times \epsilon =0.}$
Moreover, ${\displaystyle {\text{Pot }}\epsilon }$ has in general a definite value, and therefore
q.e.d.
${\displaystyle \epsilon ={\frac {1}{4\pi }}{\text{Lap }}\nabla \times \epsilon -{\frac {1}{4\pi }}{\text{New }}\nabla .\epsilon =0.}$
99. To assist the memory of the student, some of the principal results of Nos. 93–98 may be expressed as follows:

Let ${\displaystyle \omega _{1}}$ be any solenoidal vector function of position in space, ${\displaystyle \omega _{2}}$ any irrotational vector function, and ${\displaystyle u}$ any scalar function, satisfying the conditions that their potentials have in general definite values.

With respect to the solenoidal function ${\displaystyle \omega _{1},{\frac {1}{4\pi }}{\text{Lap}}}$ and ${\displaystyle \nabla \times }$ are inverse operators; i.e.,

 ${\displaystyle {\frac {1}{4\pi }}{\text{Lap }}\nabla \times \omega _{1}=\nabla \times {\frac {1}{4\pi }}{\text{Lap }}\omega _{1}=\omega _{1}.}$
Applied to the irrotational function ${\displaystyle \omega _{2},}$ either of these operators gives zero; i.e.,
 ${\displaystyle {\text{Lap }}\omega _{2}=0,}$⁠${\displaystyle \nabla \times \omega _{2}=0.}$
With respect to the irrotational function ${\displaystyle \omega _{2},}$ or the scalar function ${\displaystyle u,{\frac {1}{4\pi }}{\text{New}}}$ and ${\displaystyle -\nabla .}$ are inverse operators; i.e.,
 ${\displaystyle -{\frac {1}{4\pi }}{\text{New }}\nabla .\omega _{2}=\omega _{2},}$⁠${\displaystyle -\nabla .{\frac {1}{4\pi }}{\text{New }}u=u.}$
Applied to the solenoidal function ${\displaystyle \omega _{1},}$ the operator ${\displaystyle \nabla .}$ gives zero; i.e.
 ${\displaystyle \nabla .\omega _{1}=0.}$
Since the most general form of a vector function having in general a definite potential may be written ${\displaystyle \omega _{1}+\omega _{2},}$ the effect of these operators on such a function needs no especial mention.

With respect to the solenoidal function ${\displaystyle \omega _{1},{\frac {1}{4\pi }}{\text{Pot }}}$ and ${\displaystyle \nabla \times \nabla \times }$ are inverse operators; i.e.,

 ${\displaystyle {\frac {1}{4\pi }}{\text{Pot }}\nabla \times \nabla \times \omega _{1}=\nabla \times {\frac {1}{4\pi }}{\text{Pot }}\nabla \times \omega _{1}=\nabla \times \nabla \times {\frac {1}{4\pi }}{\text{Pot }}\omega _{1}=\omega _{1}.}$
With respect to the irrotational function ${\displaystyle \omega _{2},{\frac {1}{4\pi }}{\text{Pot }}}$ and ${\displaystyle -\nabla \nabla .}$ are inverse operators; i.e.,
 ${\displaystyle -{\frac {1}{4\pi }}{\text{Pot }}\nabla \nabla .\omega _{2}=-\nabla {\frac {1}{4\pi }}{\text{Pot }}\nabla .\omega _{2}=-\nabla \nabla .{\frac {1}{4\pi }}{\text{Pot }}\omega _{2}=\omega _{2}.}$
With respect to any scalar or vector function having in general a definite potential ${\displaystyle {\frac {1}{4\pi }}{\text{Pot}}}$ and ${\displaystyle -\nabla .\nabla }$ are inverse operators; i.e.,
 ${\displaystyle -{\frac {1}{4\pi }}{\text{Pot }}\nabla .\nabla u=-\nabla {\frac {1}{4\pi }}{\text{Pot }}\nabla u=-\nabla .\nabla {\frac {1}{4\pi }}{\text{Pot }}u=u,}$${\displaystyle -{\frac {1}{4\pi }}{\text{Pot }}\nabla .\nabla [\omega _{1}+\omega _{2}]=-\nabla .\nabla {\frac {1}{4\pi }}{\text{Pot }}[\omega _{1}+\omega _{2}]=\omega _{1}+\omega _{2}.}$
With respect to the solenoidal function ${\displaystyle \omega _{1},-\nabla .\nabla }$ and ${\displaystyle \nabla \times \nabla \times }$ are equivalent; with respect to the irrotational function ${\displaystyle \omega _{2},\nabla .\nabla }$ and ${\displaystyle \nabla \nabla .}$ are equivalent; i.e.,
 ${\displaystyle -\nabla .\nabla \omega _{1}=\nabla \times \nabla \times \omega _{1},}$⁠${\displaystyle \nabla .\nabla \omega _{2}=\nabla \nabla .\omega _{2}.}$
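Both equivalences are consequences of the identity ${\displaystyle \nabla \times \nabla \times \omega =\nabla \nabla .\omega -\nabla .\nabla \omega ,}$ which may be verified symbolically. The following sketch is a modern aside, not part of the text; the particular vector function chosen is arbitrary.

```python
# Symbolic check of curl curl w = grad div w - laplacian w, from which the
# equivalences follow: for solenoidal w (div w = 0), curl curl w = -lap w;
# for irrotational w (curl w = 0), lap w = grad div w.
import sympy as sp

x, y, z = sp.symbols("x y z")
# an arbitrary vector function of position (a hypothetical example)
w = [x * y * sp.sin(z), sp.exp(x) * z**2, x**3 + y * z]

def grad(f):    return [sp.diff(f, v) for v in (x, y, z)]
def div(F):     return sum(sp.diff(F[i], v) for i, v in enumerate((x, y, z)))
def curl(F):
    return [sp.diff(F[2], y) - sp.diff(F[1], z),
            sp.diff(F[0], z) - sp.diff(F[2], x),
            sp.diff(F[1], x) - sp.diff(F[0], y)]
def vec_lap(F): return [div(grad(c)) for c in F]

lhs = curl(curl(w))
rhs = [g - l for g, l in zip(grad(div(w)), vec_lap(w))]
assert all(sp.simplify(a - b) == 0 for a, b in zip(lhs, rhs))
print("curl curl = grad div - laplacian verified")
```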
100. On the interpretation of the preceding formulæ.— Infinite values of the quantity which occurs in a volume-integral as the coefficient of the element of volume will not necessarily make the value of the integral infinite, when they are confined to certain surfaces, lines, or points. Yet these surfaces, lines, or points may contribute a certain finite amount to the value of the volume-integral, which must be separately calculated, and in the case of surfaces or lines is naturally expressed as a surface- or line-integral. Such cases are easily treated by substituting for the surface, line, or point, a very thin shell, or filament, or a solid very small in all dimensions, within which the function may be supposed to have a very large value.

The only cases which we shall here consider in detail are those of surfaces at which the functions of position (${\displaystyle u}$ or ${\displaystyle \omega }$) are discontinuous, and the values of ${\displaystyle \nabla u,\nabla \times \omega ,\nabla .\omega }$ thus become infinite. Let the function ${\displaystyle u}$ have the value ${\displaystyle u_{1}}$ on the side of the surface which we regard as the negative, and the value ${\displaystyle u_{2}}$ on the positive side. Let ${\displaystyle \Delta u=u_{2}-u_{1}.}$ If we substitute for the surface a shell of very small thickness ${\displaystyle a,}$ within which the value of ${\displaystyle u}$ varies uniformly as we pass through the shell, we shall have ${\displaystyle \nabla u=\nu {\frac {\Delta u}{a}}}$ within the shell, ${\displaystyle \nu }$ denoting a unit normal on the positive side of the surface. The elements of volume which compose the shell may be expressed by ${\displaystyle a[d\sigma ]_{0},}$ where ${\displaystyle [d\sigma ]_{0}}$ is the magnitude of an element of the surface, ${\displaystyle d\sigma }$ being the vector element. Hence,

 ${\displaystyle \nabla u\,dv=\nu \Delta u[d\sigma ]_{0}=\Delta u\,d\sigma .}$
Hence, when there are surfaces at which the values of ${\displaystyle u}$ are discontinuous, the full value of ${\displaystyle {\text{Pot }}\nabla u}$ should always be understood as including the surface-integral
 ${\displaystyle \iint {\frac {\Delta u'}{[\rho '-\rho ]_{0}}}d\sigma '}$
relating to such surfaces. (${\displaystyle \Delta u'}$ and ${\displaystyle d\sigma '}$ are accented in the formula to indicate that they relate to the point ${\displaystyle \rho '.}$)
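The thin-shell argument above admits a simple numerical illustration (a modern aside, not in the text; the planar surface and smoothing function are arbitrary choices): if the discontinuity at a plane ${\displaystyle x=0}$ is replaced by a smooth transition of width ${\displaystyle a,}$ the integral of ${\displaystyle \nabla u}$ through the shell, per unit area of surface, equals ${\displaystyle \Delta u}$ no matter how small ${\displaystyle a}$ is taken.

```python
# Sketch: replace a jump from u1 to u2 at x = 0 by a tanh transition of
# width ~a, and check that the integral of du/dx across the shell equals
# the jump u2 - u1, independently of the shell thickness a.
import numpy as np

u1, u2 = 1.0, 3.0
for a in (0.1, 0.01, 0.001):
    x = np.linspace(-1, 1, 20001)
    u = u1 + (u2 - u1) * 0.5 * (1 + np.tanh(x / a))   # smoothed jump
    g = np.gradient(u, x)                             # du/dx within the shell
    integral = np.sum(0.5 * (g[1:] + g[:-1]) * np.diff(x))  # trapezoidal rule
    assert abs(integral - (u2 - u1)) < 1e-6
print("integral of grad u per unit area = jump in u, for every thickness a")
```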

In the case of a vector function which is discontinuous at a surface, the expressions ${\displaystyle \nabla .\omega \,dv}$ and ${\displaystyle \nabla \times \omega \,dv,}$ relating to the element of the shell which we substitute for the surface of discontinuity, are easily transformed by the principle that these expressions are the direct and skew surface-integrals of ${\displaystyle \omega }$ for the element of the shell. (See Nos. 55, 56.) The part of the surface-integrals relating to the edge of the element may evidently be neglected, and we shall have

 {\displaystyle {\begin{aligned}\nabla .\omega \,dv&=\omega _{2}.d\sigma -\omega _{1}.d\sigma =\Delta \omega .d\sigma ,\\\nabla \times \omega \,dv&=d\sigma \times \omega _{2}-d\sigma \times \omega _{1}=d\sigma \times \Delta \omega .\end{aligned}}}
Whenever, therefore, ${\displaystyle \omega }$ is discontinuous at surfaces, the expressions ${\displaystyle {\text{Pot }}\nabla .\omega }$ and ${\displaystyle {\text{New }}\nabla .\omega }$ must be regarded as implicitly including the surface-integrals
 ${\displaystyle \iint {\frac {1}{[\rho '-\rho ]_{0}}}\Delta \omega '.d\sigma '}$⁠and⁠${\displaystyle \iint {\frac {\rho '-\rho }{[\rho '-\rho ]_{0}^{3}}}\Delta \omega '.d\sigma '}$
respectively, relating to such surfaces, and the expressions ${\displaystyle {\text{Pot }}\nabla \times \omega }$ and ${\displaystyle {\text{Lap }}\nabla \times \omega }$ as including the surface-integrals
 ${\displaystyle \iint {\frac {1}{[\rho '-\rho ]_{0}}}d\sigma '\times \Delta \omega '}$⁠and⁠${\displaystyle \iint {\frac {\rho '-\rho }{[\rho '-\rho ]_{0}^{3}}}\times [d\sigma '\times \Delta \omega ']}$
respectively, relating to such surfaces.

101. We have already seen that if ${\displaystyle \omega }$ is the curl of any vector function of position, ${\displaystyle \nabla .\omega =0.}$ (No. 68.) The converse is evidently true, whenever the equation ${\displaystyle \nabla .\omega =0}$ holds throughout all space, and ${\displaystyle \omega }$ has in general a definite potential; for then

 ${\displaystyle \omega =\nabla \times {\frac {1}{4\pi }}{\text{Lap }}\omega .}$
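The direct statement, that every curl satisfies ${\displaystyle \nabla .\omega =0,}$ may be verified symbolically. The following is a modern sketch (not part of the text); the vector function ${\displaystyle \alpha }$ below is an arbitrary choice.

```python
# Sketch: for an arbitrary vector function alpha, the divergence of its
# curl vanishes identically, so every curl is solenoidal.
import sympy as sp

x, y, z = sp.symbols("x y z")
# an arbitrary vector function of position (hypothetical example)
alpha = [sp.sin(y * z), x**2 * z, sp.exp(x) * y]

# w = curl alpha
w = [sp.diff(alpha[2], y) - sp.diff(alpha[1], z),
     sp.diff(alpha[0], z) - sp.diff(alpha[2], x),
     sp.diff(alpha[1], x) - sp.diff(alpha[0], y)]

# div w should reduce to zero identically
div_w = sp.diff(w[0], x) + sp.diff(w[1], y) + sp.diff(w[2], z)
assert sp.simplify(div_w) == 0
print("div curl alpha = 0 verified")
```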
Again, if ${\displaystyle \nabla .\omega =0}$ within any aperiphractic space ${\displaystyle {\text{A}},}$ contained within finite boundaries, we may suppose that space to be enclosed by a shell ${\displaystyle {\text{B}}}$ having its inner surface coincident with the surface of ${\displaystyle {\text{A}}.}$ We may imagine a function of position ${\displaystyle \omega ',}$ such that