FUNCTION,^{[1]} in mathematics, a variable number the value of which depends upon the values of one or more other variable numbers. The theory of functions is conveniently divided into (I.) Functions of Real Variables, wherein real, and only real, numbers are involved, and (II.) Functions of Complex Variables, wherein complex or imaginary numbers are involved.
I. Functions of Real Variables
3. Historical.—The word function, defined in the above sense, was introduced by Leibnitz in a short note of date 1694 concerning the construction of what we now call an “envelope” (Leibnizens mathematische Schriften, edited by C. I. Gerhardt, Bd. v. p. 306), and was there used to denote a variable length related in a defined way to a variable point of a curve. In 1698 James Bernoulli used the word in a special sense in connexion with some isoperimetric problems (Joh. Bernoulli, Opera, t. i. p. 255). He said that when it is a question of selecting from an infinite set of like curves that one which best fulfils some function, then of two curves whose intersection determines the thing sought one is always the “line of the function” (Linea functionis). In 1718 John Bernoulli (Opera, t. ii. p. 241) defined a “function of a variable magnitude” as a quantity made up in any way of this variable magnitude and constants; and in 1730 (Opera, t. iii. p. 174) he noted a distinction between “algebraic” and “transcendental” functions. By the latter he meant integrals of algebraic functions. The notation ƒ(x) for a function of a variable x was introduced by Leonhard Euler in 1734 (Comm. Acad. Petropol. t. vii. p. 186), in connexion with the theorem of the interchange of the order of differentiations. The notion of functionality or functional relation of two magnitudes was thus of geometrical origin; but a function soon came to be regarded as an analytical expression, not necessarily an algebraic expression, containing the variable or variables. Thus we may have rational integral algebraic functions, such as x^{2} + 2x + 3, or rational algebraic functions which are not integral, such as
(x^{2} + 1) / (x + 2),
or irrational algebraic functions, such as √(1 + x^{2}), or, more generally, the algebraic functions that are determined implicitly by an algebraic equation, as, for instance,
ƒ_{n}(x, y) + ƒ_{n−1}(x, y) + ... + ƒ_{1}(x, y) + a = 0,
where ƒ_{n}, ƒ_{n−1}, ... ƒ_{1} mean homogeneous expressions in x and y having constant coefficients, and having the degrees indicated by the suffixes, and a is a constant. Or again we may have trigonometrical functions, such as sin x and tan x, or inverse trigonometrical functions, such as sin^{−1} x, or exponential functions, such as e^{x} and a^{x}, or logarithmic functions, such as log x and log (1 + x). We may have these functional symbols combined in various ways, and thus there arises a great number of functions. Further we may have functions of more than one variable, as, for instance, the expression x^{2} + y^{2}, in which both x and y are regarded as variable. Such functions were introduced into analysis somewhat unsystematically as the need for them arose, and the later developments of analysis led to the introduction of other classes of functions.
2. Graphic Representation.—In the case of a function of one variable x, any value of x and the corresponding value y of the function can be the co-ordinates of a point in a plane. To any value of x there corresponds a point M on the axis of x, in accordance with the rule that x is the abscissa of M. The corresponding value of y determines a point P in accordance with the rule that x is the abscissa and y the ordinate of P. The ordinate y gives the value of the function which corresponds to that value of the variable which is specified by M; and it may be described as “the value of the function at M.” Since there is a one-to-one correspondence of the points M and the numbers x, we may also describe the ordinate as “the value of the function at x.” In simple cases the aggregate of the points P which are determined by any particular function (of one variable) is a curve, called the “graph of the function” (see §14). In like manner a function of two variables defines a surface.
3. The Variable.—Graphic methods of representation, such as those just described, enabled mathematicians to deal with irrational values of functions and variables at the time when there was no theory of irrational numbers other than Euclid’s theory of incommensurables. In that theory an irrational number was the ratio of two incommensurable geometric magnitudes. In the modern theory of number irrational numbers are defined in a purely arithmetical manner, independent of the measurement of any quantities or magnitudes, whether geometric or of any other kind. The definition is effected by means of the system of ordinal numbers (see Number). When this formal system is established, the theory of measurement may be founded upon it; and, in particular, the co-ordinates of a point are defined as numbers (not lengths), which are assigned in accordance with a rule. This rule involves the measurement of lengths. The theory of functions can be developed without any reference to graphs, or co-ordinates or lengths. The process by which analysis has been freed from any consideration of measurable quantities has been called the “arithmetization of analysis.” In the theory so developed, the variable upon which a function depends is always to be regarded as a number, and the corresponding value of the function is also a number. Any reference to points or co-ordinates is to be regarded as a picturesque mode of expression, pointing to a possible application of the theory to geometry. The development of “arithmetized analysis” in the 19th century is associated with the name of Karl Weierstrass.
All possible values of a variable are numbers. In what follows we shall confine our attention to the case where the numbers are real. When complex numbers are introduced instead of real ones, the theory of functions receives a wide extension, which is accompanied by appropriate limitations (see below, II. Functions of Complex Variables). The set of all real numbers forms a continuum. In fact the notion of a one-dimensional continuum first becomes precise in virtue of the establishment of the system of real numbers.
4. Domain of a Variable.—Theory of Aggregates.—The notion of a “variable” is that of a number to which we may assign at pleasure any one of the values that belong to some chosen set, or aggregate, of numbers; and this set, or aggregate, is called the “domain of the variable.” This domain may be an “interval”; that is to say, it may consist of two terminal numbers, all the numbers between them, and no others. When this is the case the number is said to be “continuously variable.” When the domain consists of all real numbers, the variable is said to be “unrestricted.” A domain which consists of all the real numbers which exceed some fixed number may be described as an “interval unlimited towards the right”; similarly we may have an “interval unlimited towards the left.”
In more complicated cases we must have some rule or process for assigning the aggregate of numbers which constitute the domain of a variable. The methods of definition of particular types of aggregates, and the theorems relating to them, form a branch of analysis called the “theory of aggregates” (Mengenlehre, Théorie des ensembles, Theory of sets of points). The notion of an “aggregate” in general underlies the system of ordinal numbers. An aggregate is said to be “infinite” when it is possible to effect a one-to-one correspondence of all its elements to some of its elements. For example, we may make all the integers correspond to the even integers, by making 1 correspond to 2, 2 correspond to 4, and generally n to 2n. The aggregate of positive integers is an infinite aggregate. The aggregates of all rational numbers and of all real numbers and of points on a line are other examples of infinite aggregates. An aggregate whose elements are real numbers is said to “extend to infinite values” if, after any number N, however great, is specified, it is possible to find in the aggregate numbers which exceed N in absolute value. Such an aggregate is always infinite. The “neighbourhood of a number (or point) a for a positive number h” is the aggregate of all numbers (or points) x for which the absolute value of x − a, denoted by |x − a|, does not exceed h.
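By way of modern illustration (the following code and its examples are additions, not part of the original article), the correspondence n → 2n and the test for membership in a neighbourhood can be sketched thus:

```python
# A minimal sketch of two notions from the text: the one-to-one
# correspondence n -> 2n, which shows the aggregate of positive integers
# is "infinite", and the "neighbourhood of a for h", i.e. all x with
# |x - a| <= h. The function names are our own, chosen for illustration.

def correspond(n):
    """Map the positive integer n to the even integer 2n."""
    return 2 * n

# Every positive integer has a distinct image among the even integers.
images = [correspond(n) for n in range(1, 6)]
print(images)  # [2, 4, 6, 8, 10]

def in_neighbourhood(x, a, h):
    """True when x lies in the neighbourhood of a for the positive number h,
    i.e. when the absolute value |x - a| does not exceed h."""
    return abs(x - a) <= h

print(in_neighbourhood(1.05, 1.0, 0.1))  # True
print(in_neighbourhood(1.25, 1.0, 0.1))  # False
```

The mapping is one-to-one onto a proper part of the integers, which is exactly the defining mark of an infinite aggregate given above.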
5. General Notion of Functionality.—A function of one variable was for a long time commonly regarded as the ordinate of a curve; and the two notions (1) that which is determined by a curve supposed drawn, and (2) that which is determined by an analytical expression supposed written down, were not for a long time clearly distinguished. It was for this reason that Fourier’s discovery that a single analytical expression is capable of representing (in different parts of an interval) what would in his time have been called different functions so profoundly struck mathematicians (§23). The analysts who, in the middle of the 19th century, occupied themselves with the theory of the convergence of Fourier’s series were led to impose a restriction on the character of a function in order that it should admit of such representation, and thus the door was opened for the introduction of the general notion of functional dependence. This notion may be expressed as follows: We have a variable number, y, and another variable number, x, a domain of the variable x, and a rule for assigning one or more definite values to y when x is any point in the domain; then y is said to be a “function” of the variable x, and x is called the “argument” of the function. According to this notion a function is, as it were, an indefinitely extended table, like a table of logarithms; to each point in the domain of the argument there correspond values for the function, but it remains arbitrary what values the function is to have at any such point.
For the specification of any particular function two things are requisite: (1) a statement of the values of the variable, or of the aggregate of points, to which values of the function are to be made to correspond, i.e. of the “domain of the argument”; (2) a rule for assigning the value or values of the function that correspond to any point in this domain. We may refer to the second of these two essentials as “the rule of calculation.” The relation of functions to analytical expressions may then be stated in the form that the rule of calculation is: “Give the function the value of the expression at any point at which the expression has a determinate value,” or again more generally, “Give the function the value of the expression at all points of a definite aggregate included in the domain of the argument.” The former of these is the rule of those among the earlier analysts who regarded an analytical expression and a function as the same thing, and their usage may be retained without causing confusion and with the advantage of brevity, the analytical expression serving to specify the domain of the argument as well as the rule of calculation, e.g. we may speak of “the function 1/x.” This function is defined by the analytical expression 1/x at all points except the point x = 0. But in complicated cases separate statements of the domain of the argument and the rule of calculation cannot be dispensed with. In general, when the rule of calculation is determined as above by an analytical expression at any aggregate of points, the function is said to be “represented” by the expression at those points.
When the rule of calculation assigns a single definite value for a function at each point in the domain of the argument the function is “uniform” or “one-valued.” In what follows it is to be understood that all the functions considered are one-valued, and the values assigned by the rule of calculation real. In the most important cases the domain of the argument of a function of one variable is an interval, with the possible exception of isolated points.
6. Limits.—Let ƒ(x) be a function of a variable number x; and let a be a point such that there are points of the domain of the argument x in the neighbourhood of a for any number h, however small. If there is a number b which has the property that, after any positive number ε, however small, has been specified, it is possible to find a positive number h, so that |b − ƒ(x)| < ε for all points x of the domain (other than a) for which |x − a| ≤ h, then b is the “limit of ƒ(x) at the point a.” The condition for the existence of b is that, after the positive number ε has been specified, it must be possible to find a positive number h, so that |ƒ(x′) − ƒ(x)| < ε for all points x and x′ of the domain (other than a) for which |x − a| ≤ h and |x′ − a| ≤ h.
It is a fundamental theorem that, when this condition is satisfied, there exists a perfectly definite number b which is the limit of ƒ(x) at the point a as defined above. The limit of ƒ(x) at the point a is denoted by lim_{x→a} ƒ(x), or by Lt_{x=a} ƒ(x).
If ƒ(x) is a function of one variable x in a domain which extends to infinite values, and if, after ε has been specified, it is possible to find a number N, so that |ƒ(x′) − ƒ(x)| < ε for all values of x and x′ which are in the domain and exceed N, then there is a number b which has the property that |ƒ(x) − b| ≤ ε for all such values of x. In this case ƒ(x) has a limit b at x = ∞. In like manner ƒ(x) may have a limit at x = −∞. This statement includes the case where the domain of the argument consists exclusively of positive integers. The values of the function then form a “sequence,” u_{1}, u_{2}, ..., and this sequence can have a limit at n = ∞.
The principle common to the above definitions and theorems is called, after P. du Bois-Reymond, “the general principle of convergence to a limit.”
It must be understood that the phrase “x = ∞” does not mean that x takes some particular value which is infinite. There is no such value. The phrase always refers to a limiting process in which, as the process is carried out, the variable number x increases without limit: it may, as in the above example of a sequence, increase by taking successively the values of all the integral numbers; in other cases it may increase by taking the values that belong to any domain which “extends to infinite values.”
A very important type of limits is furnished by infinite series. When a sequence of numbers u_{1}, u_{2}, ... u_{n}, ... is given, we may form a new sequence s_{1}, s_{2}, ... s_{n}, ... from it by the rules s_{1} = u_{1}, s_{n} = s_{n−1} + u_{n}, or by the equivalent rules s_{1} = u_{1}, s_{n} = u_{1} + u_{2} + ... + u_{n}. If the new sequence has a limit at n = ∞, this limit is called the “sum of the infinite series” u_{1} + u_{2} + ..., and the series is said to be “convergent” (see Series).
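As a modern numerical sketch (an addition to the article; the geometric series is our assumed example), the rule s_{n} = s_{n−1} + u_{n} can be carried out directly and the sum of the series observed:

```python
# Sketch of the formation of partial sums s_n from terms u_n by the rule
# s_n = s_(n-1) + u_n, for the assumed example u_n = 1/2^n, whose infinite
# series has the sum 1.

def partial_sums(terms):
    """Form the sequence s_1, s_2, ... from u_1, u_2, ... by accumulation."""
    sums, total = [], 0.0
    for u in terms:
        total += u
        sums.append(total)
    return sums

u = [1.0 / 2 ** n for n in range(1, 31)]   # u_n = 1/2^n
s = partial_sums(u)
print(s[:4])                      # [0.5, 0.75, 0.875, 0.9375]
print(abs(s[-1] - 1.0) < 1e-8)    # True: the sequence s_n approaches 1
```

The sequence s_{n} has the limit 1 at n = ∞, so the series is convergent with sum 1.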
A function which has not a limit at a point a may be such that, if a certain aggregate of points is chosen out of the domain of the argument, and the points x in the neighbourhood of a are restricted to belong to this aggregate, then the function has a limit at a. For example, sin (1/x) has limit zero at 0 if x is restricted to the aggregate 1/π, 1/(2π), ... 1/(nπ), ..., but if x takes all values in the neighbourhood of 0, sin (1/x) has not a limit at 0. Again, there may be a limit at a if the points x in the neighbourhood of a are restricted by the condition that x − a is positive; then we have a “limit on the right” at a; similarly we may have a “limit on the left” at a point. Any such limit is described as a “limit for a restricted domain.” The limits on the left and on the right are denoted by ƒ(a − 0) and ƒ(a + 0).
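A short numerical sketch (a modern addition, taking sin (1/x) near 0 as the assumed concrete example) shows how the limit depends on the restricted domain chosen:

```python
import math

# Sketch: restricted to the aggregate x = 1/(n*pi) the values of sin(1/x)
# are all 0, so the limit at 0 for that restricted domain is 0; restricted
# to x = 2/((4n+1)*pi) every value is 1. Since two restricted domains give
# different limits, sin(1/x) has no limit at 0 for the full domain.

def f(x):
    return math.sin(1.0 / x)

zeros = [f(1.0 / (n * math.pi)) for n in range(1, 6)]
ones = [f(2.0 / ((4 * n + 1) * math.pi)) for n in range(1, 6)]

print(max(abs(v) for v in zeros) < 1e-12)       # True: limit 0 along 1/(n*pi)
print(max(abs(v - 1) for v in ones) < 1e-12)    # True: limit 1 along 2/((4n+1)*pi)
```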
The limit of ƒ(x) at a stands in no necessary relation to the value of ƒ(x) at a. If the point a is in the domain of the argument, the value of ƒ(x) at a is assigned by the rule of calculation, and may be different from lim_{x→a} ƒ(x). In case ƒ(a) = lim_{x→a} ƒ(x) the limit is said to be “attained.” If the point a is not in the domain of the argument, there is no value for ƒ(x) at a. In the case where ƒ(x) is defined for all points in an interval containing a, except the point a, and has a limit b at a, we may arbitrarily annex the point a to the domain of the argument and assign to ƒ(a) the value b; the function may then be said to be “extrinsically defined.” The so-called “indeterminate forms” (see Infinitesimal Calculus) are examples.
7. Superior and Inferior Limits; Infinities.—The value of a function at every point in the domain of its argument is finite, since, by definition, the value can be assigned, but this does not necessarily imply that there is a number N which exceeds all the values (or is less than all the values). It may happen that, however great a number N we take, there are among the values of the function numbers which exceed N (or are less than −N).
If a number can be found which is greater than every value of the function, then either (α) there is one value of the function which exceeds all the others, or (β) there is a number B which exceeds every value of the function but is such that, however small a positive number ε we take, there are values of the function which exceed B − ε. In the case (α) the function has a greatest value; in case (β) the function has a “superior limit” B, and then there must be a point a which has the property that there are points of the domain of the argument, in the neighbourhood of a for any h, at which the values of the function differ from B by less than ε. Thus B is the limit of the function at a, either for the domain of the argument or for some more restricted domain. If a is in the domain of the argument, and if, after omission of ƒ(a), there is a superior limit B which is in this way the limit of the function at a, and if further ƒ(a) = B, then B is the greatest value of the function; in this case the greatest value is a limit (at any rate for a restricted domain) which is attained; it may be called a “superior limit which is attained.” In like manner we may have a “smallest value” or an “inferior limit,” and a smallest value may be an “inferior limit which is attained.”
All that has been said here may be adapted to the description of greatest values, superior limits, &c., of a function in a restricted domain contained in the domain of the argument. In particular, the domain of the argument may contain an interval; and therein the function may have a superior limit, or an inferior limit, which is attained. Such a limit is a maximum value or a minimum value of the function.
Again, if, after any number N, however great, has been specified, it is possible to find points of the domain of the argument at which the value of the function exceeds N, the values of the function are said to have an “infinite superior limit,” and then there must be a point a which has the property that there are points of the domain, in the neighbourhood of a for any h, at which the value of the function exceeds N. If the point a is in the domain of the argument the function is said to “tend to become infinite” at a; it has of course a finite value at a. If the point a is not in the domain of the argument the function is said to “become infinite” at a; it has of course no value at a. In like manner we may have a (negatively) infinite inferior limit. Again, if, after any number N, however great, has been specified, a number h can be found, so that all the values of the function, at points in the neighbourhood of a for h, exceed N in absolute value, all these values may have the same sign; the function is then said to become, or to tend to become, “determinately (positively or negatively) infinite”; otherwise it is said to become, or to tend to become, “indeterminately infinite.”
All the infinities that occur in the theory of functions are of the nature of variable finite numbers, with the single exception of the infinity of an infinite aggregate. The latter is described as an “actual infinity,” the former as “improper infinities.” There is no “actual infinitely small” corresponding to the actual infinity. The only “infinitely small” is zero. All “infinite values” are of the nature of superior and inferior limits which are not attained.
8. Increasing and Decreasing Functions.—A function ƒ(x) of one variable x, defined in the interval between a and b, is “increasing throughout the interval” if, whenever x′ and x are two numbers in the interval and x′ > x, then ƒ(x′) > ƒ(x); the function “never decreases throughout the interval” if, x′ and x being as before, ƒ(x′) ≥ ƒ(x). Similarly for decreasing functions, and for functions which never increase throughout an interval. A function which either never increases or never diminishes throughout an interval is said to be “monotonous throughout” the interval. If in the above definition we take a < x < x′ < b, the definition may apply to a function under the restriction that x′ is not b and x is not a; such a function is “monotonous within” the interval. In this case we have the theorem that the function (if it never decreases) has a limit on the left at b and a limit on the right at a, and these are the superior and inferior limits of its values at all points within the interval (the ends excluded); the like holds mutatis mutandis if the function never increases. If the function is monotonous throughout the interval, ƒ(b) is the greatest (or least) value of ƒ(x) in the interval; and if ƒ(b) is the limit of ƒ(x) on the left at b, such a greatest (or least) value is an example of a superior (or inferior) limit which is attained. In these cases the function tends continually to its limit.
These theorems and definitions can be extended, with obvious modifications, to the cases of a domain which is not an interval, or extends to infinite values. By means of them we arrive at sufficient, but not necessary, criteria for the existence of a limit; and these are frequently easier to apply than the general principle of convergence to a limit (§ 6), of which principle they are particular cases. For example, the function represented by x log (1/x) continually diminishes when 1/e > x > 0 and x diminishes towards zero, and it never becomes negative. It therefore has a limit on the right at x = 0. This limit is zero. The function represented by x sin (1/x) does not continually diminish towards zero as x diminishes towards zero, but is sometimes greater than zero and sometimes less than zero in any neighbourhood of x = 0, however small. Nevertheless, the function has the limit zero at x = 0.
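The two examples just given can be checked numerically; the following sketch is a modern addition (the sample points 10^{−n} are our own choice):

```python
import math

# Numerical check of the two examples above: x*log(1/x) decreases
# monotonously towards 0 as x decreases towards 0 (for 1/e > x > 0), while
# x*sin(1/x) is not monotonous, yet also has the limit 0 at x = 0 because
# |x*sin(1/x)| <= x.

def g(x):
    return x * math.log(1.0 / x)

def k(x):
    return x * math.sin(1.0 / x)

xs = [10.0 ** (-n) for n in range(2, 8)]   # x decreasing towards 0
print([round(g(x), 6) for x in xs])
print(all(g(xs[i]) > g(xs[i + 1]) for i in range(len(xs) - 1)))  # True: monotone
print(all(abs(k(x)) <= x for x in xs))  # True: hence the limit at 0 is 0
```

The monotone-convergence criterion of § 8 applies to the first function; for the second it fails, and the limit must be established directly, e.g. from the bound |x sin (1/x)| ≤ x.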
9. Continuity of Functions.—A function ƒ(x) of one variable x is said to be continuous at a point a if (1) ƒ(x) is defined in an interval containing a; (2) ƒ(x) has a limit at a; (3) ƒ(a) is equal to this limit. The limit in question must be a limit for continuous variation, not for a restricted domain. If ƒ(x) has a limit on the left at a and ƒ(a) is equal to this limit, the function may be said to be “continuous to the left” at a; similarly the function may be “continuous to the right” at a.
A function is said to be “continuous throughout an interval” when it is continuous at every point of the interval. This implies continuity to the right at the smaller end-value and continuity to the left at the greater end-value. When these conditions at the ends are not satisfied the function is said to be continuous “within” the interval. By a “continuous function” of one variable we always mean a function which is continuous throughout an interval.
The principal properties of a continuous function are:
1. The function is practically constant throughout sufficiently small intervals. This means that, after any point a of the interval has been chosen, and any positive number ε, however small, has been specified, it is possible to find a number h, so that the difference between any two values of the function in the interval between a − h and a + h is less than ε. There is an obvious modification if a is an end-point of the interval.
2. The continuity of the function is “uniform.” This means that the number h which corresponds to any ε as in (1) may be the same at all points of the interval, or, in other words, that the numbers h which correspond to ε for different values of a have a positive inferior limit.
3. The function has a greatest value and a least value in the interval, and these are superior and inferior limits which are attained.
4. There is at least one point of the interval at which the function takes any value between its greatest and least values in the interval.
5. If the interval is unlimited towards the right (or towards the left), the function has a limit at ∞ (or at −∞).
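Property 4, the intermediate-value property, lends itself to a short computational sketch (a modern addition; the cubic used is an assumed example, and bisection is the classical constructive argument for the property):

```python
# Sketch of property 4 for the assumed example f(x) = x^3 - x - 2 on the
# interval [1, 2]: f(1) = -2 and f(2) = 4, so the continuous function must
# take the intermediate value 0 somewhere between; repeated bisection
# locates such a point.

def bisect(f, lo, hi, tol=1e-12):
    """Find x in [lo, hi] with f(x) = 0, assuming f(lo), f(hi) differ in sign."""
    assert f(lo) * f(hi) <= 0
    while hi - lo > tol:
        mid = (lo + hi) / 2.0
        if f(lo) * f(mid) <= 0:
            hi = mid          # a sign change persists in the left half
        else:
            lo = mid          # otherwise it lies in the right half
    return (lo + hi) / 2.0

f = lambda x: x ** 3 - x - 2
root = bisect(f, 1.0, 2.0)
print(round(root, 6))         # the point where the value 0 is taken
print(abs(f(root)) < 1e-9)    # True
```

The nested intervals produced by bisection are exactly the kind of limiting process that the arithmetical theory of real numbers (§ 3) makes rigorous.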
10. Discontinuity of Functions.—The discontinuities of a function of one variable, defined in an interval with the possible exception of isolated points, may be classified as follows:
(1) The function may become infinite, or tend to become infinite, at a point.
(2) The function may be undefined at a point.
(3) The function may have a limit on the left and a limit on the right at the same point; these may be different from each other, and at least one of them must be different from the value of the function at the point.
(4) The function may have no limit at a point, or no limit on the left, or no limit on the right, at a point.
In case a function ƒ(x), defined as above, has no limit at a point a, there are four limiting values which come into consideration. Whatever positive number h we take, the values of the function at points between a and a + h (a excluded) have a superior limit (or a greatest value), and an inferior limit (or a least value); further, as h decreases, the former never increases and the latter never decreases; accordingly each of them tends to a limit. We have in this way two limits on the right—the inferior limit of the superior limits in diminishing neighbourhoods, and the superior limit of the inferior limits in diminishing neighbourhoods. These are denoted by ƒ̄(a + 0) and ƒ̲(a + 0), and they are called the “limits of indefiniteness” on the right. Similar limits on the left are denoted by ƒ̄(a − 0) and ƒ̲(a − 0). Unless ƒ(x) becomes, or tends to become, infinite at a, all these must exist, any two of them may be equal, and at least one of them must be different from ƒ(a), if ƒ(a) exists. If the first two are equal there is a limit on the right denoted by ƒ(a + 0); if the second two are equal, there is a limit on the left denoted by ƒ(a − 0). In case the function becomes, or tends to become, infinite at a, one or more of these limits is infinite in the sense explained in § 7; and now it is to be noted that, e.g. the superior limit of the inferior limits in diminishing neighbourhoods on the right of a may be negatively infinite; this happens if, after any number N, however great, has been specified, it is possible to find a positive number h, so that all the values of the function in the interval between a and a + h (a excluded) are less than −N; in such a case ƒ(x) tends to become negatively infinite when x decreases towards a; other modes of tending to infinite limits may be described in similar terms.
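The limits of indefiniteness can be estimated numerically (a modern addition; sin (1/x) at a = 0 is an assumed illustrative example, and the sampled maxima and minima are only estimates of the true superior and inferior limits):

```python
import math

# Sketch of the limits of indefiniteness on the right at a = 0 for
# sin(1/x): on ever smaller intervals (0, h] the superior limit of the
# sampled values stays near +1 and the inferior limit near -1, so the two
# limits of indefiniteness on the right are +1 and -1, and no limit exists.

def sup_inf_on_right(f, a, h, samples=100000):
    """Estimate the superior and inferior limits of f on (a, a + h]."""
    xs = [a + h * (i + 1) / samples for i in range(samples)]
    vals = [f(x) for x in xs]
    return max(vals), min(vals)

f = lambda x: math.sin(1.0 / x)
for h in (1.0, 0.1, 0.01):
    hi, lo = sup_inf_on_right(f, 0.0, h)
    print(h, round(hi, 3), round(lo, 3))
# as h diminishes the estimates stay near +1 and -1: the limits of
# indefiniteness differ, so sin(1/x) has no limit on the right at 0
```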
11. Oscillation of Functions.—The difference between the greatest and least of the numbers ƒ(a), ƒ̄(a + 0), ƒ̲(a + 0), ƒ̄(a − 0), ƒ̲(a − 0), when they are all finite, is called the “oscillation” or “fluctuation” of the function ƒ(x) at the point a. This difference is the limit for h = 0 of the difference between the superior and inferior limits of the values of the function at points in the interval between a − h and a + h. The corresponding difference for points in a finite interval is called the “oscillation of the function in the interval.” When any of the four limits of indefiniteness is infinite the oscillation is infinite in the sense explained in § 7.
For the further classification of functions we divide the domain of the argument into partial intervals by means of points between the end-points. Suppose that the domain is the interval between a and b. Let intermediate points x_{1}, x_{2}, ... x_{n−1} be taken so that b > x_{n−1} > x_{n−2} > ... > x_{1} > a. We may devise a rule by which, as n increases indefinitely, all the differences b − x_{n−1}, x_{n−1} − x_{n−2}, ... x_{1} − a tend to zero as a limit. The interval is then said to be divided into “indefinitely small partial intervals.”
A function defined in an interval with the possible exception of isolated points may be such that the interval can be divided into a set of finite partial intervals within each of which the function is monotonous (§ 8). When this is the case the sum of the oscillations of the function in those partial intervals is finite, provided the function does not tend to become infinite. Further, in such a case the sum of the oscillations will remain below a fixed number for any mode of dividing the interval into indefinitely small partial intervals. A class of functions may be defined by the condition that the sum of the oscillations has this property, and such functions are said to have “restricted oscillation.” Sometimes the phrase “limited fluctuation” is used. It can be proved that any function with restricted oscillation is capable of being expressed as the sum of two monotonous functions, of which one never increases and the other never diminishes throughout the interval. Such a function has a limit on the right and a limit on the left at every point of the interval. This class of functions includes all those which have a finite number of maxima and minima in a finite interval, and some which have an infinite number. It is to be noted that the class does not include all continuous functions.
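The last remark, that not every continuous function has restricted oscillation, can be sketched numerically (a modern addition; x² and x sin (1/x) on the interval between 0 and 1 are assumed examples, and the computed sums are only lower estimates of the sums of oscillations):

```python
import math

# Sketch: estimate the sum of the oscillations over a division into n equal
# partial intervals. For the monotonous x^2 the sum stays near 1 however
# fine the division; for the continuous x*sin(1/x) the estimate grows as
# the division is refined, reflecting that its oscillation is not restricted.

def sum_of_oscillations(f, a, b, n):
    """Lower estimate: sum |f(right) - f(left)| over n equal partial intervals."""
    xs = [a + (b - a) * i / n for i in range(n + 1)]
    return sum(abs(f(xs[i + 1]) - f(xs[i])) for i in range(n))

square = lambda x: x * x
wiggly = lambda x: x * math.sin(1.0 / x) if x != 0 else 0.0

print(sum_of_oscillations(square, 0.0, 1.0, 10))      # about 1.0 for any n
print(sum_of_oscillations(wiggly, 0.0, 1.0, 100))
print(sum_of_oscillations(wiggly, 0.0, 1.0, 100000))  # larger: grows with n
```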
12. Differentiable Function.—The idea of the differentiation of a continuous function is that of a process for measuring the rate of growth; the increment of the function is compared with the increment of the variable. If ƒ(x) is defined in an interval containing the point a, and a − k and a + k are points of the interval, the expression
{ƒ(a + h) − ƒ(a)} / h   (1)
represents a function of h, which we may call φ(h), defined at all points of an interval for h between −k and k except the point h = 0. Thus the four limits φ̄(+0), φ̲(+0), φ̄(−0), φ̲(−0) exist, and two or more of them may be equal. When the first two are equal either of them is the “progressive differential coefficient” of ƒ(x) at the point a; when the last two are equal either of them is the “regressive differential coefficient” of ƒ(x) at a; when all four are equal the function is said to be “differentiable” at a, and either of them is the “differential coefficient” of ƒ(x) at a, or the “first derived function” of ƒ(x) at a. It is denoted by dƒ(x)/dx or by ƒ′(x). In this case φ(h) has a definite limit at h = 0, or is determinately infinite at h = 0 (§ 7). The four limits here in question are called, after Dini, the “four derivates” of ƒ(x) at a. In accordance with the notation for derived functions they may be denoted by
ƒ̄′₊(a), ƒ̲′₊(a), ƒ̄′₋(a), ƒ̲′₋(a).
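A simple numerical sketch (a modern addition; ƒ(x) = |x| at a = 0 is the assumed, classical example) shows the progressive and regressive differential coefficients existing but disagreeing:

```python
# Sketch of the expression (1) for the assumed example f(x) = |x| at a = 0:
# the quotient (f(a + h) - f(a))/h equals +1 for every h > 0 and -1 for
# every h < 0, so the progressive differential coefficient is +1, the
# regressive one is -1, and the function is not differentiable at 0.

def quotient(f, a, h):
    """The expression (1): the increment of f compared with the increment h."""
    return (f(a + h) - f(a)) / h

f = abs
right = [quotient(f, 0.0, h) for h in (0.1, 0.01, 0.001)]
left = [quotient(f, 0.0, -h) for h in (0.1, 0.01, 0.001)]
print(right)  # [1.0, 1.0, 1.0]   progressive differential coefficient +1
print(left)   # [-1.0, -1.0, -1.0] regressive differential coefficient -1
```

Here all four derivates are finite; two are equal to +1 and two to −1.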
A function which has a finite differential coefficient at all points of an interval is continuous throughout the interval, but if the differential coefficient becomes infinite at a point of the interval the function may or may not be continuous throughout the interval; on the other hand a function may be continuous without being differentiable. This result, comparable in importance, from the point of view of the general theory of functions, with the discovery of Fourier’s theorem, is due to G. F. B. Riemann; but the failure of an attempt made by Ampère to prove that every continuous function must be differentiable may be regarded as the first step in the theory. Examples of analytical expressions which represent continuous functions that are not differentiable have been given by Riemann, Weierstrass, Darboux and Dini (see § 24). The most important theorem in regard to differentiable functions is the “theorem of intermediate value.” (See Infinitesimal Calculus.)
13. Analytic Function.—If ƒ(x) and its first n differential coefficients, denoted by ƒ′(x), ƒ″(x), ... ƒ^{(n)}(x), are continuous in the interval between a and a + h, then
ƒ(a + h) = ƒ(a) + hƒ′(a) + (h^{2}/2!) ƒ″(a) + ... + (h^{n−1}/(n−1)!) ƒ^{(n−1)}(a) + R_{n},
where R_{n} may have various forms, some of which are given in the article Infinitesimal Calculus. This result is known as “Taylor’s theorem.”
When Taylor’s theorem leads to a representation of the function by means of an infinite series, the function is said to be “analytic” (cf. § 21).
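A numerical sketch of this (a modern addition; e^{x} at a = 0 is the assumed example, every derivative there being 1) shows the remainder R_{n} diminishing as terms are added:

```python
import math

# Sketch of Taylor's theorem for the assumed example f(x) = e^x at a = 0:
# the sum f(a) + h f'(a) + (h^2/2!) f''(a) + ... approaches f(a + h) as the
# number of terms n grows, i.e. the remainder R_n tends to zero, so e^x is
# analytic.

def taylor_exp(h, n):
    """Partial sum of n terms of the exponential series at h
    (every derivative of e^x at 0 equals 1)."""
    term, total = 1.0, 0.0
    for k in range(n):
        total += term
        term *= h / (k + 1)
    return total

h = 0.5
for n in (2, 4, 8):
    print(n, abs(math.exp(h) - taylor_exp(h, n)))   # the remainder shrinks
print(abs(math.exp(h) - taylor_exp(h, 16)) < 1e-12)  # True
```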
14. Ordinary Function.—The idea of a curve representing a continuous function in an interval is that of a line which has the following properties: (1) the co-ordinates of a point of the curve are a value x of the argument and the corresponding value y of the function; (2) at every point the curve has a definite tangent; (3) the interval can be divided into a finite number of partial intervals within each of which the function is monotonous; (4) the property of monotony within partial intervals is retained after interchange of the axes of co-ordinates x and y. According to condition (2) y is a continuous and differentiable function of x, but this condition does not include conditions (3) and (4): there are continuous partially monotonous functions which are not differentiable, there are continuous differentiable functions which are not monotonous in any interval however small; and there are continuous, differentiable and monotonous functions which do not satisfy condition (4) (cf. § 24). A function which can be represented by a curve, in the sense explained above, is said to be “ordinary,” and the curve is the graph of the function (§2). All analytic functions are ordinary, but not all ordinary functions are analytic.
15. Integrable Function.—The idea of integration is twofold. We may seek the function which has a given function as its differential coefficient, or we may generalize the question of finding the area of a curve. The first inquiry leads directly to the indefinite integral, the second directly to the definite integral. Following the second method we define “the definite integral of the function ƒ(x) through the interval between a and b” to be the limit of the sum Σ_{r=1}^{n} ƒ(x′_{r}) (x_{r} − x_{r−1})
when the interval is divided into ultimately indefinitely small partial intervals by points x_{1}, x_{2}, ... x_{n−1}. Here x′_{r} denotes any point in the rth partial interval, x_{0} is put for a, and x_{n} for b. It can be shown that the limit in question is finite and independent of the mode of division into partial intervals, and of the choice of the points such as x′_{r} , provided (1) the function is defined for all points of the interval, and does not tend to become infinite at any of them; (2) for any one mode of division of the interval into ultimately indefinitely small partial intervals, the sum of the products of the oscillation of the function in each partial interval and the difference of the end-values of that partial interval has limit zero when n is increased indefinitely. When these conditions are satisfied the function is said to be “integrable” in the interval. The numbers a and b which limit the interval are usually called the “lower and upper limits.” We shall call them the “nearer and further end-values.” The above definition of integration was introduced by Riemann in his memoir on trigonometric series (1854). A still more general definition has been given by Lebesgue. As the more general definition cannot be made intelligible without the introduction of some rather recondite notions belonging to the theory of aggregates, we shall, in what follows, adhere to Riemann’s definition.
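The limit defining the definite integral may be illustrated by computation (a modern Python sketch; the helper `riemann_sum` is an illustrative name, and the points x′_{r} are here taken at the midpoints of equal partial intervals):

```python
def riemann_sum(f, a, b, n):
    # Divide the interval between a and b into n equal partial intervals,
    # take x'_r at the midpoint of the r-th interval, and form the sum of
    # the products f(x'_r)(x_r - x_{r-1}).
    dx = (b - a) / n
    return sum(f(a + (r + 0.5) * dx) * dx for r in range(n))

# f(x) = x^2 is continuous, hence integrable; the sums tend to 1/3.
approx = riemann_sum(lambda x: x * x, 0.0, 1.0, 1000)
assert abs(approx - 1.0 / 3.0) < 1e-4
```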
We have the following theorems:—
1. Any continuous function is integrable.
2. Any function with restricted oscillation is integrable.
3. A discontinuous function is integrable if it does not tend to become infinite, and if the points at which the oscillation of the function exceeds a given number σ, however small, can be enclosed in partial intervals the sum of whose breadths can be diminished indefinitely.
These partial intervals must be a set chosen out of some complete set obtained by the process used in the definition of integration.
4. The sum or product of two integrable functions is integrable.
As regards integrable functions we have the following theorems:
1. If S and I are the superior and inferior limits (or greatest and least values) of ƒ(x) in the interval between a and b, ∫_{a}^{b} ƒ(x)dx is intermediate between S(b − a) and I(b − a).
2. The integral is a continuous function of each of the end-values.
3. If the further end-value b is variable, and if ∫_{a}^{x} ƒ(x)dx = F(x), then if ƒ(x) is continuous at b, F(x) is differentiable at b, and F′(b) = ƒ(b).
4. In case ƒ(x) is continuous throughout the interval F(x) is continuous and differentiable throughout the interval, and F′(x) = ƒ(x) throughout the interval.
5. In case ƒ′(x) is continuous throughout the interval between a and b, ∫_{a}^{b} ƒ′(x)dx = ƒ(b) − ƒ(a).
6. In case ƒ(x) is discontinuous at one or more points of the interval between a and b, in which it is integrable, the integral ∫_{a}^{x} ƒ(t)dt is a function of x, of which the four derivates at any point of the interval are equal to the limits of indefiniteness of ƒ(x) at the point.
7. It may be that there exist functions which are differentiable throughout an interval in which their differential coefficients are not integrable; if, however, F(x) is a function whose differential coefficient, F′(x), is integrable in an interval, then F(x) = F(a) + ∫_{a}^{x} F′(t)dt, where a is a fixed point, and x a variable point, of the interval. Similarly, if any one of the four derivates of a function is integrable in an interval, all are integrable, and the integral of either differs from the original function by a constant only.
The theorems (4), (6), (7) show that there is some discrepancy between the indefinite integral considered as the function which has a given function as its differential coefficient, and as a definite integral with a variable end-value.
We have also two theorems concerning the integral of the product of two integrable functions ƒ(x) and φ(x); these are known as “the first and second theorems of the mean.” The first theorem of the mean is that, if φ(x) is one-signed throughout the interval between a and b, there is a number M intermediate between the superior and inferior limits, or greatest and least values, of ƒ(x) in the interval, which has the property expressed by the equation ∫_{a}^{b} ƒ(x)φ(x)dx = M ∫_{a}^{b} φ(x)dx.
The second theorem of the mean is that, if ƒ(x) is monotonous throughout the interval, there is a number ξ between a and b which has the property expressed by the equation ∫_{a}^{b} ƒ(x)φ(x)dx = ƒ(a) ∫_{a}^{ξ} φ(x)dx + ƒ(b) ∫_{ξ}^{b} φ(x)dx.
(See Fourier’s Series.)
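The first theorem of the mean admits a simple numerical check (a Python sketch; the helper names and the particular choices ƒ(x) = cos x, φ(x) = x², with interval 0 to 1, are ours):

```python
import math

def integral(f, a, b, n=100000):
    # Midpoint approximation to the definite integral through (a, b).
    dx = (b - a) / n
    return sum(f(a + (r + 0.5) * dx) * dx for r in range(n))

a, b = 0.0, 1.0
f = math.cos               # the integrable function f(x)
phi = lambda x: x * x      # one-signed (here non-negative) in the interval

# By the first theorem of the mean, the ratio below is a number M lying
# between the least and greatest values of f in the interval.
M = integral(lambda x: f(x) * phi(x), a, b) / integral(phi, a, b)
assert math.cos(1.0) <= M <= 1.0
```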
16. Improper Definite Integrals.—We may extend the idea of integration to cases of functions which are not defined at some point, or which tend to become infinite in the neighbourhood of some point, and to cases where the domain of the argument extends to infinite values. If c is a point in the interval between a and b at which ƒ(x) is not defined, we impose a restriction on the points x′_{r} of the definition: none of them is to be the point c. This comes to the same thing as defining ∫_{a}^{b} ƒ(x)dx to be

Lt_{ε=0} ∫_{a}^{c−ε} ƒ(x)dx + Lt_{ε′=0} ∫_{c+ε′}^{b} ƒ(x)dx,   (1)
where, to fix ideas, b is taken > a, and ε and ε′ are positive. The same definition applies to the case where ƒ(x) becomes infinite, or tends to become infinite, at c, provided both the limits exist. This definition may be otherwise expressed by saying that a partial interval containing the point c is omitted from the interval of integration, and a limit taken by diminishing the breadth of this partial interval indefinitely; in this form it applies to the cases where c is a or b.
Again, when the interval of integration is unlimited to the right, or extends to positively infinite values, we have as a definition ∫_{a}^{∞} ƒ(x)dx = Lt_{h=∞} ∫_{a}^{h} ƒ(x)dx, provided this limit exists. Similar definitions apply to ∫_{−∞}^{a} ƒ(x)dx, and to ∫_{−∞}^{∞} ƒ(x)dx.
All such definite integrals as the above are said to be “improper.” For example, an integral ∫_{0}^{∞} ƒ(x)dx, in which ƒ(x) tends to become infinite at x = 0, is improper in two ways. It means

Lt_{h=∞} Lt_{ε=0} ∫_{ε}^{h} ƒ(x)dx,

in which the positive number ε is first diminished indefinitely, and the positive number h is afterwards increased indefinitely.
The “theorems of the mean” (§ 15) require modification when the integrals are improper (see Fourier’s Series).
When the improper definite integral of a function which becomes, or tends to become, infinite, exists, the integral is said to be “convergent.” If ƒ(x) tends to become infinite at a point c in the interval between a and b, and the expression (1) does not exist, then the expression ∫_{a}^{b} ƒ(x)dx, which has no value, is called a “divergent integral,” and it may happen that there is a definite value for ∫_{a}^{c−ε} ƒ(x)dx + ∫_{c+ε′}^{b} ƒ(x)dx
provided that ε and ε′ are connected by some definite relation, and both, remaining positive, tend to limit zero. The value of the above limit is then called a “principal value” of the divergent integral. Cauchy’s principal value is obtained by making ε′ = ε, i.e. by taking the omitted interval so that the infinity is at its middle point. A divergent integral which has one or more principal values is sometimes described as “semi-convergent.”
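A Python sketch of these limits (illustrative names; ƒ(x) = 1/x between −1 and 1, with the infinity at c = 0, is our chosen example) shows how the value depends on the relation between ε and ε′:

```python
import math

def midpoint(f, lo, hi, n=100000):
    # Midpoint-rule approximation to the integral of f from lo to hi.
    dx = (hi - lo) / n
    return sum(f(lo + (r + 0.5) * dx) * dx for r in range(n))

f = lambda x: 1.0 / x   # tends to become infinite at c = 0
eps = 1e-3

# Cauchy's principal value: omit (-eps, eps), the infinity at its middle point.
cauchy = midpoint(f, -1.0, -eps) + midpoint(f, eps, 1.0)
assert abs(cauchy) < 1e-6

# A different relation eps' = 2*eps between the two sides gives another value.
other = midpoint(f, -1.0, -eps) + midpoint(f, 2 * eps, 1.0)
assert abs(other + math.log(2.0)) < 1e-3
```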
17. Domain of a Set of Variables.—The numerical continuum of n dimensions (C_{n}) is the aggregate that is arrived at by attributing simultaneous values to each of n variables x_{1}, x_{2}, . . . x_{n}, these values being any real numbers. The elements of such an aggregate are called “points,” and the numbers x_{1}, x_{2} . . . x_{n} the “co-ordinates” of a point. Denoting in general the points (x_{1}, x_{2}, . . . x_{n}) and (x′_{1}, x′_{2} . . . x′_{n}) by x and x′, the sum of the differences |x_{1}−x′_{1}| + |x_{2}−x′_{2}| + . . . + |x_{n}−x′_{n}| may be denoted by |x−x′| and called the “difference of the two points.” We can in various ways choose out of the continuum an aggregate of points, which may be an infinite aggregate, and any such aggregate can be the “domain” of a “variable point.” The domain is said to “extend to an infinite distance” if, after any number N, however great, has been specified, it is possible to find in the domain points of which one or more co-ordinates exceed N in absolute value. The “neighbourhood” of a point a for a (positive) number h is the aggregate constituted of all the points x, which are such that the “difference” denoted by |x−a| <h. If an infinite aggregate of points does not extend to an infinite distance, there must be at least one point a, which has the property that the points of the aggregate which are in the neighbourhood of a for any number h, however small, themselves constitute an infinite aggregate, and then the point a is called a “limiting point” of the aggregate; it may or may not be a point of the aggregate. An aggregate of points is “perfect” when all its points are limiting points of it, and all its limiting points are points of it; it is “connected” when, after taking any two points a, b of it, and choosing any positive number ε, however small, a number m and points x′, x″, . . . x^{(m)} of the aggregate can be found so that all the differences denoted by |x′−a|,|x″−x′|, . . .|b−x^{(m)}| are less than ε. 
A perfect connected aggregate is a continuum. This is G. Cantor’s definition.
The definition of a continuum in C_{n} leaves open the question of the number of dimensions of the continuum, and a further explanation is necessary in order to define arithmetically what is meant by a “homogeneous part” H_{n} of C_{n}. Such a part would correspond to an interval in C_{1}, or to an area bounded by a simple closed contour in C_{2}; and, besides being perfect and connected, it would have the following properties: (1) There are points of C_{n}, which are not points of H_{n}; these form a complementary aggregate H′_{n}. (2) There are points “within” H_{n}; this means that for any such point there is a neighbourhood consisting exclusively of points of H_{n}. (3) The points of H_{n} which do not lie “within” H_{n} are limiting points of H′_{n}; they are not points of H′_{n}, but the neighbourhood of any such point for any number h, however small, contains points within H_{n} and points of H′_{n}: the aggregate of these points is called the “boundary” of H_{n}. (4) When any two points a, b within H_{n} are taken, it is possible to find a number ε and a corresponding number m, and to choose points x′, x″, . . . x^{(m)}, so that the neighbourhood of a for ε contains x′, and consists exclusively of points within H_{n}, and similarly for x′ and x″, x″ and x″′, . . . x^{(m)} and b. Condition (3) would exclude such an aggregate as that of the points within and upon two circles external to each other and a line joining a point on one to a point on the other, and condition (4) would exclude such an aggregate as that of the points within and upon two circles which touch externally.
18. Functions of Several Variables.—A function of several variables differs from a function of one variable in that the argument of the function consists of a set of variables, or is a variable point in a C_{n} when there are n variables. The function is definable by means of the domain of the argument and the rule of calculation. In the most important cases the domain of the argument is a homogeneous part H_{n} of C_{n} with the possible exception of isolated points, and the rule of calculation is that the value of the function in any assigned part of the domain of the argument is that value which is assumed at the point by an assigned analytical expression. The limit of a function at a point a is defined in the same way as in the case of a function of one variable.
We take a positive fraction ε and consider the neighbourhood of a for h, and from this neighbourhood we exclude the point a, and we also exclude any point which is not in the domain of the argument. Then we take x and x′ to be any two of the retained points in the neighbourhood. The function ƒ has a limit at a if for any positive ε, however small, there is a corresponding h which has the property that |ƒ(x′)−ƒ(x)| < ε, whatever points x, x′ in the neighbourhood of a for h we take (a excluded). For example, when there are two variables x_{1}, x_{2}, and both are unrestricted, the domain of the argument is represented by a plane, and the values of the function are correlated with the points of the plane. The function has a limit at a point a, if we can mark out on the plane a region containing the point a within it, and such that the difference of the values of the function which correspond to any two points of the region (neither of the points being a) can be made as small as we please in absolute value by contracting all the linear dimensions of the region sufficiently. When the domain of the argument of a function of n variables extends to an infinite distance, there is a “limit at an infinite distance” if, after any number ε, however small, has been specified, a number N can be found which is such that |ƒ(x′)−ƒ(x)| < ε, for all points x and x′ (of the domain) of which one or more co-ordinates exceed N in absolute value. In the case of functions of several variables great importance attaches to limits for a restricted domain. The definition of such a limit is verbally the same as the corresponding definition in the case of functions of one variable (§ 6). For example, a function of x_{1} and x_{2} may have a limit at (x_{1} = 0, x_{2} = 0) if we first diminish x_{1} without limit, keeping x_{2} constant, and afterwards diminish x_{2} without limit. Expressed in geometrical language, this process amounts to approaching the origin along the axis of x_{2}. 
The definitions of superior and inferior limits, and of maxima and minima, and the explanations of what is meant by saying that a function of several variables becomes infinite, or tends to become infinite, at a point, are almost identical verbally with the corresponding definitions and explanations in the case of a function of one variable (§ 7). The definition of a continuous function (§ 9) admits of immediate extension; but it is very important to observe that a function of two or more variables may be a continuous function of each of the variables, when the rest are kept constant, without being a continuous function of its argument. For example, a function of x and y may be defined by the conditions that when x = 0 it is zero whatever value y may have, and when x ≠ 0 it has the value of sin {4 tan^{−1} (y/x)}. When y has any particular value this function is a continuous function of x, and, when x has any particular value this function is a continuous function of y; but the function of x and y is discontinuous at (x = 0, y = 0).
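This example can be checked directly (a Python sketch; the name `f` is ours). Approaching the origin along different straight lines gives different limiting values, so the function has no limit at (0, 0) although it is continuous in each variable separately:

```python
import math

def f(x, y):
    # Zero when x = 0; otherwise sin{4 arctan(y/x)}, as in the text.
    return 0.0 if x == 0 else math.sin(4 * math.atan(y / x))

# Along y = x the value is sin(pi) = 0; along y = tan(pi/8) x it is
# sin(pi/2) = 1, however near the origin the point is taken.
m = math.tan(math.pi / 8)
for t in (1e-2, 1e-5, 1e-9):
    assert abs(f(t, t)) < 1e-9
    assert abs(f(t, m * t) - 1.0) < 1e-9
```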
19. Differentiation and Integration.—The definition of partial differentiation of a function of several variables presents no difficulty. The most important theorems concerning differentiable functions are the “theorem of the total differential,” the theorem of the interchangeability of the order of partial differentiations, and the extension of Taylor’s theorem (see Infinitesimal Calculus).
With a view to the establishment of the notion of integration through a domain, we must define the “extent” of the domain. Take first a domain consisting of the point a and all the points x for which |x−a| < ½h, where h is a chosen positive number; the extent of this domain is h^{n}, n being the number of variables; such a domain may be described as “square,” and the number h may be called its “breadth”; it is a homogeneous part of the numerical continuum of n dimensions, and its boundary consists of all the points for which |x−a| = ½h. Now the points of any domain, which does not extend to an infinite distance, may be assigned to a finite number m of square domains of finite breadths, so that every point of the domain is either within one of these square domains or on its boundary, and so that no point is within two of the square domains; also we may devise a rule by which, as the number m increases indefinitely, the breadths of all the square domains are diminished indefinitely. When this process is applied to a homogeneous part, H, of the numerical continuum C_{n}, then, at any stage of the process, there will be some square domains of which all the points belong to H, and there will generally be others of which some, but not all, of the points belong to H. As the number m is increased indefinitely the sums of the extents of both these categories of square domains will tend to definite limits, which cannot be negative; when the second of these limits is zero the domain H is said to be “measurable,” and the first of these limits is its “extent”; it is independent of the rule adopted for constructing the square domains and contracting their breadths. The notion thus introduced may be adapted by suitable modifications to continua of lower dimensions in C_{n}.
The integral of a function ƒ(x) through a measurable domain H, which is a homogeneous part of the numerical continuum of n dimensions, is defined in just the same way as the integral through an interval, the extent of a square domain taking the place of the difference of the end-values of a partial interval; and the condition of integrability takes the same form as in the simple case. In particular, the condition is satisfied when the function is continuous throughout the domain. The definition of an integral through a domain may be adapted to any domain of measurable extent. The extensions to “improper” definite integrals may be made in the same way as for a function of one variable; in the particular case of a function which tends to become infinite at a point in the domain of integration, the point is enclosed in a partial domain which is omitted from the integration, and a limit is taken when the extent of the omitted partial domain is diminished indefinitely; a divergent integral may have different (principal) values for different modes of contracting the extent of the omitted partial domain. In applications to mathematical physics great importance attaches to convergent integrals and to principal values of divergent integrals. For example, any component of magnetic force at a point within a magnet, and the corresponding component of magnetic induction at the same point are expressed by different principal values of the same divergent integral. Delicate questions arise as to the possibility of representing the integral of a function of n variables through a domain H_{n}, as a repeated integral, of evaluating it by successive integrations with respect to the variables one at a time and of interchanging the order of such integrations. These questions have been discussed very completely by C. Jordan, and we may quote the result that all the transformations in question are valid when the function is continuous throughout the domain.
20. Representation of Functions in General.—We have seen that the notion of a function is wider than the notion of an analytical expression, and that the same function may be “represented” by one expression in one part of the domain of the argument and by some other expression in another part of the domain (§ 5). Thus there arises the general problem of the representation of functions. The function may be given by specifying the domain of the argument and the rule of calculation, or else the function may have to be determined in accordance with certain conditions; for example, it may have to satisfy in a prescribed domain an assigned differential equation. In either case the problem is to determine, when possible, a single analytical expression which shall have the same value as the function at all points in the domain of the argument. For the representation of most functions for which the problem can be solved recourse must be had to limiting processes. Thus we may utilize infinite series, or infinite products, or definite integrals; or again we may represent a function of one variable as the limit of an expression containing two variables in a domain in which one variable remains constant and another varies. An example of this process is afforded by the expression Lt_{y=∞} xy/(x^{2}y + 1), which represents a function of x vanishing at x = 0 and at all other values of x having the value of 1/x. The method of series falls under this more general process (cf. § 6). When the terms u_{1}, u_{2}, . . . of a series are functions of a variable x, the sum s_{n} of the first n terms of the series is a function of x and n; and, when the series is convergent, its sum, which is Lt_{n=∞} s_{n}, can represent a function of x. In most cases the series converges for some values of x and not for others, and the values for which it converges form the “domain of convergence.” The sum of the series represents a function in this domain.
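The expression Lt_{y=∞} xy/(x^{2}y + 1) can be examined numerically (a Python sketch, illustrative names): for large y it is near 0 at x = 0 and near 1/x elsewhere, so the limiting function is discontinuous at x = 0:

```python
def g(x, y):
    # The expression xy / (x^2 y + 1) from the text.
    return x * y / (x * x * y + 1.0)

# As y increases without limit, the value tends to 0 at x = 0 and to 1/x
# at every other value of x.
big = 1e12
assert g(0.0, big) == 0.0
assert abs(g(2.0, big) - 0.5) < 1e-9
assert abs(g(0.1, big) - 10.0) < 1e-6
```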
The apparently more general method of representation of a function of one variable as the limit of a function of two variables has been shown by R. Baire to be identical in scope with the method of series, and it has been developed by him so as to give a very complete account of the possibility of representing functions by analytical expressions. For example, he has shown that Riemann’s totally discontinuous function, which is equal to 1 when x is rational and to 0 when x is irrational, can be represented by an analytical expression. An infinite process of a different kind has been adapted to the problem of the representation of a continuous function by T. Brodén. He begins with a function having a graph in the form of a regular polygon, and interpolates additional angular points in an ordered sequence without limit. The representation of a function by means of an infinite product falls clearly under Baire’s method, while the representation by means of a definite integral is analogous to Brodén’s method. As an example of these two latter processes we may cite the Gamma function [Γ(x)] defined for positive values of x by the definite integral
∫_{0}^{∞} e^{−t} t^{x−1} dt,
or by the infinite product
Lt_{n=∞} n^{x} / {x(1 + x)(1 + ½x) . . . (1 + x/(n − 1))}.
The second of these expressions avails for the representation of the function at all points at which x is not a negative integer.
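The infinite product may be tested against known values of the Gamma function (a Python sketch; `gamma_product` is our name for the partial product, and the convergence is slow, of order 1/n):

```python
import math

def gamma_product(x, n):
    # n^x / { x (1 + x)(1 + x/2) ... (1 + x/(n-1)) }, whose limit as
    # n -> infinity is Gamma(x) for x not a negative integer or zero.
    p = x
    for k in range(1, n):
        p *= 1.0 + x / k
    return n ** x / p

# Gamma(1/2) = sqrt(pi); the partial product with n = 200000 is close.
assert abs(gamma_product(0.5, 200000) - math.sqrt(math.pi)) < 1e-4
```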
21. Power Series.—Taylor’s theorem leads in certain cases to a representation of a function by an infinite series. We have under certain conditions (§ 13)
ƒ(x) = ƒ(a) + Σ_{r=1}^{n−1} {(x − a)^{r}/r!} ƒ^{(r)}(a) + R_{n};
and this becomes
ƒ(x) = ƒ(a) + Σ_{r=1}^{∞} {(x − a)^{r}/r!} ƒ^{(r)}(a),   (1)
provided that (α) a positive number k can be found so that at all points in the interval between a and a + k (except these points) ƒ(x) has continuous differential coefficients of all finite orders, and at a has progressive differential coefficients of all finite orders; (β) Cauchy’s form of the remainder R_{n}, viz. {(x − a)^{n}/(n − 1)!} (1 − θ)^{n−1} ƒ^{(n)} {a + θ(x − a)}, has the limit zero when n increases indefinitely, for all values of θ between 0 and 1, and for all values of x in the interval between a and a + k, except possibly a + k. When these conditions are satisfied, the series (1) represents the function at all points of the interval between a and a + k, except possibly a + k, and the function is “analytic” (§ 13) in this domain. Obvious modifications admit of extension to an interval between a and a − k, or between a − k and a + k. When a series of the form (1) represents a function it is called “the Taylor’s series for the function.”
Taylor’s series is a power series, i.e. a series of the form
Σ_{n=0}^{∞} a_{n} (x − a)^{n}.
As regards power series we have the following theorems:
1. If the power series converges at any point except a there is a number k which has the property that the series converges absolutely in the interval between a − k and a + k, with the possible exception of one or both end-points.
2. The power series represents a continuous function in its domain of convergence (the end-points may have to be excluded).
3. This function is analytic in the domain, and the power series representing it is the Taylor’s series for the function.
The theory of power series has been developed chiefly from the point of view of the theory of functions of complex variables.
22. Uniform Convergence.—We shall suppose that the domain of convergence of an infinite series of functions is an interval with the possible exception of isolated points. Let ƒ(x) be the sum of the series at any point x of the domain, and ƒ_{n}(x) the sum of the first n + 1 terms. The condition of convergence at a point a is that, after any positive number ε, however small, has been specified, it must be possible to find a number n so that |ƒ_{m}(a) − ƒ_{p}(a)| < ε for all values of m and p which exceed n. The sum, ƒ(a), is the limit of the sequence of numbers ƒ_{n}(a) as n increases indefinitely. The convergence is said to be “uniform” in an interval if, after specification of ε, the same number n suffices at all points of the interval to make |ƒ(x) − ƒ_{m}(x)| < ε for all values of m which exceed n. The numbers n corresponding to any ε, however small, are all finite, but, when ε is less than some fixed finite number, they may have an infinite superior limit (§ 7); when this is the case there must be at least one point, a, of the interval which has the property that, whatever number N we take, ε can be taken so small that, at some point in the neighbourhood of a, n must be taken > N to make |ƒ(x) − ƒ_{m}(x)| < ε when m > n; then the series does not converge uniformly in the neighbourhood of a. The distinction may be otherwise expressed thus: Choose a first and ε afterwards, then the number n is finite; choose ε first and allow a to vary, then the number n becomes a function of a, which may tend to become infinite, or may remain below a fixed number; if such a fixed number exists, however small ε may be, the convergence is uniform.
For example, the series sin x − ½ sin 2x + ⅓ sin 3x − . . . is convergent for all real values of x, and, when π > x > −π, its sum is ½x; but, when x is but a little less than π, the number of terms which must be taken in order to bring the sum at all near to the value of ½x is very large, and this number tends to increase indefinitely as x approaches π. This series does not converge uniformly in the neighbourhood of x = π. Another example is afforded by a series whose remainder after n terms is nx/(n^{2}x^{2} + 1). If we put x = 1/n, for any value of n, however great, the remainder is ½; and the number of terms required to be taken to make the remainder tend to zero depends upon the value of x when x is near to zero—it must, in fact, be large compared with 1/x. The series does not converge uniformly in the neighbourhood of x = 0.
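The second example can be verified directly (a Python sketch, illustrative names): the remainder nx/(n^{2}x^{2} + 1) tends to zero for each fixed x, yet equals ½ whenever x = 1/n:

```python
def remainder(n, x):
    # Remainder after n terms of the series discussed in the text.
    return n * x / (n * n * x * x + 1.0)

# For a fixed x the remainder tends to zero as n increases, but at x = 1/n
# it is always 1/2: no single n serves for every x near 0, so the
# convergence is not uniform in the neighbourhood of x = 0.
assert abs(remainder(10**6, 0.25)) < 1e-5
for n in (10, 1000, 10**6):
    assert abs(remainder(n, 1.0 / n) - 0.5) < 1e-12
```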
As regards series whose terms represent continuous functions we have the following theorems:
(1) If the series converges uniformly in an interval it represents a function which is continuous throughout the interval.
(2) If the series represents a function which is discontinuous in an interval it cannot converge uniformly in the interval.
(3) A series which does not converge uniformly in an interval may nevertheless represent a function which is continuous throughout the interval.
(4) A power series converges uniformly in any interval contained within its domain of convergence, the end-points being excluded.
(5) If Σƒ_{r}(x) converges uniformly in the interval between a and b, then ∫_{a}^{b} Σƒ_{r}(x)dx = Σ ∫_{a}^{b} ƒ_{r}(x)dx, or a series which converges uniformly may be integrated term by term.
(6) If Σƒ′_{r}(x) converges uniformly in an interval, then Σƒ_{r}(x) converges in the interval, and represents a continuous differentiable function, φ(x); in fact we have φ′(x) = Σƒ′_{r}(x), or a series can be differentiated term by term if the series of derived functions converges uniformly.
A series whose terms represent functions which are not continuous throughout an interval may converge uniformly in the interval. If Σƒ_{r}(x) is such a series, and if all the functions ƒ_{r}(x) have limits at a, then ƒ(x) has a limit at a, which is Lt_{x=a} Σƒ_{r}(x). A similar theorem holds for limits on the left or on the right.
23. Fourier’s Series.—An extensive class of functions admit of being represented by series of the form

½a_{0} + Σ_{n=1}^{∞} {a_{n} cos (nπx/c) + b_{n} sin (nπx/c)},   (i.)

and the rule for determining the coefficients a_{n}, b_{n} of such a series, in order that it may represent a given function ƒ(x) in the interval between −c and c, was given by Fourier, viz. we have

a_{n} = (1/c) ∫_{−c}^{c} ƒ(z) cos (nπz/c) dz,   b_{n} = (1/c) ∫_{−c}^{c} ƒ(z) sin (nπz/c) dz.
The interval between −c and c may be called the “periodic interval,” and we may replace it by any other interval, e.g. that between 0 and 1, without any restriction of generality. When this is done the sum of the series takes the form

Lt_{n=∞} ∫_{0}^{1} ƒ(z) {1 + 2 Σ_{r=1}^{n} cos 2rπ(z − x)} dz,

and this is

Lt_{n=∞} ∫_{0}^{1} ƒ(z) [sin {(2n + 1)(z − x)π} / sin {(z − x)π}] dz.   (ii.)
Fourier’s theorem is that, if the periodic interval can be divided into a finite number of partial intervals within each of which the function is ordinary (§ 14), the series represents the function within each of those partial intervals. In Fourier’s time a function of this character was regarded as completely arbitrary.
By a discussion of the integral (ii.) based on the Second Theorem of the Mean (§ 15) it can be shown that, if ƒ(x) has restricted oscillation in the interval (§ 11), the sum of the series is equal to ½{ƒ(x + 0) + ƒ(x − 0)} at any point x within the interval, and that it is equal to ½{ƒ(+0) + ƒ(1 − 0)} at each end of the interval. (See the article Fourier’s Series.) It therefore represents the function at any point of the periodic interval at which the function is continuous (except possibly the end-points), and has a definite value at each point of discontinuity. The condition of restricted oscillation includes all the functions contemplated in the statement of the theorem and some others. Further, it can be shown that, in any partial interval throughout which ƒ(x) is continuous, the series converges uniformly, and that no series of the form (i), with coefficients other than those determined by Fourier’s rule, can represent the function at all points, except points of discontinuity, in the same periodic interval. The result can be extended to a function ƒ(x) which tends to become infinite at a finite number of points a of the interval, provided (1) ƒ(x) tends to become determinately infinite at each of the points a, (2) the improper definite integral of ƒ(x) through the interval is convergent, (3) ƒ(x) has not an infinite number of discontinuities or of maxima or minima in the interval.
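As an illustration (a Python sketch; the square-wave example and the names are ours, not from the text), take the function equal to +1 for 0 < x < π and −1 for −π < x < 0, whose Fourier series is (4/π) Σ sin {(2n + 1)x}/(2n + 1). Within a partial interval of continuity the partial sums approach the function, and at the discontinuity x = 0 the sum is ½{ƒ(+0) + ƒ(−0)} = 0:

```python
import math

def square_wave_partial(x, n):
    # Partial sum of (4/pi) * sum sin((2k+1)x)/(2k+1), the Fourier series of
    # the function equal to +1 on (0, pi) and -1 on (-pi, 0).
    return (4.0 / math.pi) * sum(
        math.sin((2 * k + 1) * x) / (2 * k + 1) for k in range(n))

# At an interior point of continuity the sums tend to the function's value.
assert abs(square_wave_partial(math.pi / 2, 5000) - 1.0) < 1e-3
# At the jump x = 0, every partial sum (and hence the sum) is 0, the mean
# of the values on the two sides.
assert square_wave_partial(0.0, 5000) == 0.0
```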
24. Representation of Continuous Functions by Series.—If the series for ƒ(x) formed by Fourier’s rule converges at the point a of the periodic interval, and if ƒ(x) is continuous at a, the sum of the series is ƒ(a); but it has been proved by P. du Bois Reymond that the function may be continuous at a, and yet the series formed by Fourier’s rule may be divergent at a. Thus some continuous functions do not admit of representation by Fourier’s series. All continuous functions, however, admit of being represented with arbitrarily close approximation in either of two forms, which may be described as “terminated Fourier’s series” and “terminated power series,” according to the two following theorems:
(1) If ƒ(x) is continuous throughout the interval between 0 and 2π, and if any positive number ε however small is specified, it is possible to find an integer n, so that the difference between the value of ƒ(x) and the sum of the first n terms of the series for ƒ(x), formed by Fourier’s rule with periodic interval from 0 to 2π, shall be less than ε at all points of the interval. This result can be extended to a function which is continuous in any given interval.
(2) If ƒ(x) is continuous throughout an interval, and any positive number ε however small is specified, it is possible to find an integer n and a polynomial in x of the nth degree, so that the difference between the value of ƒ(x) and the value of the polynomial shall be less than ε at all points of the interval.
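One constructive scheme answering to theorem (2), later than this statement of it and offered here only as an illustration, uses what are now called Bernstein polynomials (a Python sketch, illustrative names; ƒ(x) = sin πx on the interval from 0 to 1 is our chosen example):

```python
from math import comb, sin, pi

def bernstein(f, n, x):
    # Bernstein polynomial of degree n for f on the interval (0, 1); as n
    # increases it approaches a continuous f uniformly on the interval.
    return sum(f(k / n) * comb(n, k) * x**k * (1 - x)**(n - k)
               for k in range(n + 1))

f = lambda x: sin(pi * x)
# The greatest difference over a grid of points is already small at n = 400.
err = max(abs(bernstein(f, 400, t / 100) - f(t / 100)) for t in range(101))
assert err < 0.01
```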
Again it can be proved that, if ƒ(x) is continuous throughout a given interval, polynomials in x of finite degrees can be found, so as to form an infinite series of polynomials whose sum is equal to ƒ(x) at all points of the interval. Methods of representation of continuous functions by infinite series of rational fractional functions have also been devised.
Particular interest attaches to continuous functions which are not differentiable. Weierstrass gave as an example the function represented by the series Σ_{n = 0}^{∞} a^{n} cos (b^{n}πx), where a is positive and less than unity, and b is an odd integer exceeding (1 + 3π/2)/a. It can be shown that this series is uniformly convergent in every interval, and that the continuous function ƒ(x) represented by it has the property that there is, in the neighbourhood of any point x_{0}, an infinite aggregate of points x′, having x_{0} as a limiting point, for which {ƒ(x′) − ƒ(x_{0})} / (x′ − x_{0}) tends to become infinite with one sign when x′ − x_{0} approaches zero through positive values, and infinite with the opposite sign when x′ − x_{0} approaches zero through negative values. Accordingly the function is not differentiable at any point. The definite integral of such a function ƒ(x) through the interval between a fixed point and a variable point x, is a continuous differentiable function F(x), for which F′(x) = ƒ(x); and, if ƒ(x) is one-signed throughout any interval, F(x) is monotonous throughout that interval, but yet F(x) cannot be represented by a curve. In any interval, however small, the tangent would have to take the same direction for infinitely many points, and yet there is no interval in which the tangent has everywhere the same direction. Further, it can be shown that all functions which are everywhere continuous and nowhere differentiable are capable of representation by series of the form Σa_{n}φ_{n}(x), where Σa_{n} is an absolutely convergent series of numbers, and φ_{n}(x) is an analytic function whose absolute value never exceeds unity.
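The uniform convergence of Weierstrass’s series is easily exhibited numerically; in the Python sketch below the values a = ½, b = 13 (which satisfy ab > 1 + 3π/2) and the points of truncation are our own choices:

```python
import math

A, B = 0.5, 13  # a < 1, b an odd integer; ab = 6.5 > 1 + 3*pi/2 (about 5.71)

def W(x, terms=40):
    # partial sum of Weierstrass's series  sum a^n cos(b^n * pi * x)
    return sum(A**n * math.cos(B**n * math.pi * x) for n in range(terms))

# difference quotients at x0 = 0 for shrinking steps h = 13^(-m); in exact
# arithmetic these grow without bound as h tends to zero
x0 = 0.0
quotients = [(W(x0 + 13.0**-m) - W(x0)) * 13.0**m for m in range(1, 6)]
```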
25. Calculations with Divergent Series.—When the series described in (1) and (2) of § 24 diverge, they may, nevertheless, be used for the approximate numerical calculation of the values of the function, provided the calculation is not carried beyond a certain number of terms. Expansions in series which have the property of representing a function approximately when the expansion is not carried too far are called “asymptotic expansions.” Sometimes they are called “semi-convergent series”; but this term is avoided in the best modern usage, because it is often used to describe series whose convergence depends upon the order of the terms, such as the series 1 − ½ + ⅓ − . . .
In general, let ƒ_{0}(x) + ƒ_{1}(x) + . . . be a series of functions which does not converge in a certain domain. It may happen that, if any number ε, however small, is first specified, a number n can afterwards be found so that, at a point a of the domain, the value ƒ(a) of a certain function ƒ(x) is connected with the sum of the first n + 1 terms of the series by the relation |ƒ(a) − Σ_{r = 0}^{n} ƒ_{r}(a)| < ε. It must also happen that, if any number N, however great, is specified, a number n′ (> n) can be found so that, for all values of m which exceed n′, |Σ_{r = 0}^{m} ƒ_{r}(a)| > N. The divergent series ƒ_{0}(x) + ƒ_{1}(x) + . . . is then an asymptotic expansion for the function ƒ(x) in the domain.
The best known example of an asymptotic expansion is Stirling’s formula for n! when n is large, viz.
n! = (2πn)^{1/2} n^{n} e^{−n} e^{θ/12n},
where θ is some number lying between 0 and 1. This formula is included in the asymptotic expansion for the Gamma function. We have in fact
log Γ(x) = (x − ½) log x − x + ½ log 2π + ω(x),
where ω(x) is the function defined by the definite integral
ω(x) = ∫_{0}^{∞} {(e^{t} − 1)^{−1} − t^{−1} + ½} t^{−1} e^{−tx} dt.
The multiplier of e^{−tx} under the sign of integration can be expanded in the power series
(B_{1}/2!) − (B_{2}/4!) t^{2} + (B_{3}/6!) t^{4} − . . .,
where B_{1}, B_{2}, . . . are “Bernoulli’s numbers” given by the formula
B_{m} = 2·(2m)! (2π)^{−2m} Σ_{r = 1}^{∞} r^{−2m}.
When the series is integrated term by term, the right-hand member of the equation for ω(x) takes the form
(B_{1}/1·2) x^{−1} − (B_{2}/3·4) x^{−3} + (B_{3}/5·6) x^{−5} − . . .
This series is divergent; but, if it is stopped at any term, the difference between the sum of the series so terminated and the value of ω(x) is less than the last of the retained terms. Stirling’s formula is obtained by retaining the first term only. Other well-known examples of asymptotic expansions are afforded by the descending series for Bessel’s functions. Methods of obtaining such expansions for the solutions of linear differential equations of the second order were investigated by G. G. Stokes (Math. and Phys. Papers, vol. ii. p. 329), and a general theory of asymptotic expansions has been developed by H. Poincaré. A still more general theory of divergent series, and of the conditions in which they can be used, as above, for the purposes of approximate calculation has been worked out by É. Borel. The great merit of asymptotic expansions is that they admit of addition, subtraction, multiplication and division, term by term, in the same way as absolutely convergent series, and they admit also of integration term by term; that is to say, the results of such operations are asymptotic expansions for the sum, difference, product, quotient, or integral, as the case may be.
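The stated property of the terminated series can be verified numerically. In the following Python sketch (function names ours) the remainder function is obtained from log Γ(x) by the library function lgamma, and the Bernoulli numbers are taken in the convention of the text, B₁ = 1/6, B₂ = 1/30, B₃ = 1/42, . . .:

```python
import math

def omega(x):
    # log Gamma(x) less the terms (x - 1/2) log x - x + (1/2) log 2*pi
    return math.lgamma(x) - ((x - 0.5) * math.log(x) - x
                             + 0.5 * math.log(2 * math.pi))

BERNOULLI = [1/6, 1/30, 1/42, 1/30, 5/66]

def terminated(x, terms):
    # B1/(1*2) x^-1 - B2/(3*4) x^-3 + B3/(5*6) x^-5 - ...
    s = 0.0
    for m in range(1, terms + 1):
        t = BERNOULLI[m - 1] / ((2 * m - 1) * (2 * m) * x ** (2 * m - 1))
        s += t if m % 2 == 1 else -t
    return s
```

Retaining the first term alone gives the remainder at x = 10 as approximately 1/120, the error being of the order of the second term.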
26. Interchange of the Order of Limiting Operations.—When we require to perform any limiting operation upon a function which is itself represented by the result of a limiting process, the question of the possibility of interchanging the order of the two processes always arises. In the more elementary problems of analysis it generally happens that such an interchange is possible; but in general it is not possible. In other words, the performance of the two processes in different orders may lead to two different results; or the performance of them in one of the two orders may lead to no result. The fact that the interchange is possible under suitable restrictions for a particular class of operations is a theorem to be proved.
Among examples of such interchanges we have the differentiation and integration of an infinite series term by term (§ 22), and the differentiation and integration of a definite integral with respect to a parameter by performing the like processes upon the subject of integration (§ 19). As a last example we may take the limit of the sum of an infinite series of functions at a point in the domain of convergence. Suppose that the series represents a function ƒ(x) in an interval containing a point a, and that each of the functions ƒ_{r}(x) has a limit at a. If we first put x = a, and then sum the series, we have the value ƒ(a); if we first sum the series for any x, and afterwards take the limit of the sum at x = a, we have the limit of ƒ(x) at a; if we first replace each function ƒ_{r}(x) by its limit at a, and then sum the series, we may arrive at a value different from either of the foregoing. If the function ƒ(x) is continuous at a, the first and second results are equal; if the functions ƒ_{r}(x) are all continuous at a, the first and third results are equal; if the series is uniformly convergent, the second and third results are equal. This last case is an example of the interchange of the order of two limiting operations, and a sufficient, though not always a necessary, condition for the validity of such an interchange will usually be found in some suitable extension of the notion of uniform convergence.
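The three results may be distinguished on a concrete series; the terms ƒ_r(x) = x^r − x^{r + 1} (a standard example of non-uniform convergence, chosen by us) give a sum equal to 1 for 0 ≤ x < 1 but to 0 at x = 1. In Python:

```python
def s(x, n):
    # sum of the first n + 1 terms x^r - x^(r+1); telescopes to 1 - x^(n+1)
    return sum(x**r - x**(r + 1) for r in range(n + 1))

# first result: put x = 1, then sum; every term vanishes
first = s(1.0, 10000)
# second result: sum first, then let x tend to 1; the sums tend to 1
second = s(0.99, 10000)
# third result: the limit of each term at x = 1 is 0, so the sum of limits is 0
```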
Authorities.—Among the more important treatises and memoirs connected with the subject are: R. Baire, Fonctions discontinues (Paris, 1905); O. Biermann, Analytische Functionen (Leipzig, 1887); É. Borel, Théorie des fonctions (Paris, 1898) (containing an introductory account of the Theory of Aggregates), and Séries divergentes (Paris, 1901), also Fonctions de variables réelles (Paris, 1905); T. J. I’A. Bromwich, Introduction to the Theory of Infinite Series (London, 1908); H. S. Carslaw, Introduction to the Theory of Fourier’s Series and Integrals (London, 1906); U. Dini, Functionen e. reellen Grösse (Leipzig, 1892), and Serie di Fourier (Pisa, 1880); A. Genocchi u. G. Peano, Diff.- u. Int.-Rechnung (Leipzig, 1899); J. Harkness and F. Morley, Introduction to the Theory of Analytic Functions (London, 1898); A. Harnack, Diff. and Int. Calculus (London, 1891); E. W. Hobson, The Theory of Functions of a real Variable and the Theory of Fourier’s Series (Cambridge, 1907); C. Jordan, Cours d’analyse (Paris, 1893–1896); L. Kronecker, Theorie d. einfachen u. vielfachen Integrale (Leipzig, 1894); H. Lebesgue, Leçons sur l’intégration (Paris, 1904); M. Pasch, Diff.- u. Int.-Rechnung (Leipzig, 1882); E. Picard, Traité d’analyse (Paris, 1891); O. Stolz, Allgemeine Arithmetik (Leipzig, 1885), and Diff.- u. Int.-Rechnung (Leipzig, 1893–1899); J. Tannery, Théorie des fonctions (Paris, 1886); W. H. and G. C. Young, The Theory of Sets of Points (Cambridge, 1906); Brodén, “Stetige Functionen e. reellen Veränderlichen,” Crelle, Bd. cxviii.; G. Cantor, A series of memoirs on the “Theory of Aggregates” and on “Trigonometric series” in Acta Math. tt. ii., vii., and Math. Ann. Bde. iv.-xxiii.; Darboux, “Fonctions discontinues,” Ann. Sci. École normale sup. (2), t. iv.; Dedekind, Was sind u. was sollen d. Zahlen? (Brunswick, 1887), and Stetigkeit u. irrationale Zahlen (Brunswick, 1872); Dirichlet, “Convergence des séries trigonométriques,” Crelle, Bd. iv.; P. 
Du Bois Reymond, Allgemeine Functionentheorie (Tübingen, 1882), and many memoirs in Crelle and in Math. Ann.; Heine, “Functionenlehre,” Crelle, Bd. lxxiv.; J. Pierpont, The Theory of Functions of a real Variable (Boston, 1905); F. Klein, “Allgemeine Functionsbegriff,” Math. Ann. Bd. xxii.; W. F. Osgood, “On Uniform Convergence,” Amer. J. of Math. vol. xix.; Pincherle, “Funzioni analitiche secondo Weierstrass,” Giorn. di mat. t. xviii.; Pringsheim, “Bedingungen d. Taylorschen Lehrsatzes,” Math. Ann. Bd. xliv.; Riemann, “Trigonometrische Reihe,” Ges. Werke (Leipzig, 1876); Schoenflies, “Entwickelung d. Lehre v. d. Punktmannigfaltigkeiten,” Jahresber. d. deutschen Math.-Vereinigung, Bd. viii.; Study, Memoir on “Functions with Restricted Oscillation,” Math. Ann. Bd. xlvii.; Weierstrass, Memoir on “Continuous Functions that are not Differentiable,” Ges. math. Werke, Bd. ii. p. 71 (Berlin, 1895), and on the “Representation of Arbitrary Functions,” ibid. Bd. iii. p. 1; W. H. Young, “On Uniform and Non-uniform Convergence,” Proc. London Math. Soc. (Ser. 2) t. 6. Further information and very full references will be found in the articles by Pringsheim, Schoenflies and Voss in the Encyclopädie der math. Wissenschaften, Bde. i., ii. (Leipzig, 1898, 1899). (A. E. H. L.)
II. Functions of Complex Variables
In the preceding section the doctrine of functionality is discussed with respect to real quantities; in this section the theory when complex or imaginary quantities are involved receives treatment. The following abstract explains the arrangement of the subject matter: (§ 1), Complex numbers, states what a complex number is; (§ 2), Plotting of simple expressions involving complex numbers, illustrates the meaning in some simple cases, introducing the notion of conformal representation and proving that an algebraic equation has complex, if not real, roots; (§ 3), Limiting operations, defines certain simple functions of a complex variable which are obtained by passing to a limit, in particular the exponential function, and the generalized logarithm, here denoted by λ(z); (§ 4), Functions of a complex variable in general, after explaining briefly what is to be understood by a region of the complex plane and by a path, and expounding a logical principle of some importance, gives the accepted definition of a function of a complex variable, establishes the existence of a complex integral, and proves Cauchy’s theorem relating thereto; (§ 5), Applications, considers the differentiation and integration of series of functions of a complex variable, proves Laurent’s theorem, and establishes the expansion of a function of a complex variable as a power series, leading, in (§ 6), Singular points, to a definition of the region of existence and singular points of a function of a complex variable, and thence, in (§ 7), Monogenic Functions, to what the writer believes to be the simplest definition of a function of a complex variable, that of Weierstrass; (§ 8), Some elementary properties of single valued functions, first discusses the meaning of a pole, proves that a single valued function with only poles is rational, gives Mittag-Leffler’s theorem, and Weierstrass’s theorem for the primary factors of an integral function, stating generalized forms for these, leading to the theorem of 
(§ 9), The construction of a monogenic function with a given region of existence, with which is connected (§ 10), Expression of a monogenic function by rational functions in a given region, of which the method is applied in (§ 11), Expression of (1 − z)^{−1} by polynomials, to a definite example, used here to obtain (§ 12), An expansion of an arbitrary function by means of a series of polynomials, over a star region, also obtained in the original manner of Mittag-Leffler; (§ 13), Application of Cauchy’s theorem to the determination of definite integrals, gives two examples of this method; (§ 14), Doubly Periodic Functions, is introduced at this stage as furnishing an excellent example of the preceding principles. The reader who wishes to approach the matter from the point of view of Integral Calculus should first consult the section (§ 20) below, dealing with Elliptic Integrals; (§ 15), Potential Functions, Conformal representation in general, gives a sketch of the connexion of the theory of potential functions with the theory of conformal representation, enunciating the Schwarz-Christoffel theorem for the representation of a polygon, with the application to the case of an equilateral triangle; (§ 16), Multiple-valued Functions, Algebraic Functions, deals for the most part with algebraic functions, proving the residue theorem, and establishing that an algebraic function has a definite Order; (§ 17), Integrals of Algebraic Functions, enunciating Abel’s theorem; (§ 18), Indeterminateness of Algebraic Integrals, deals with the periods associated with an algebraic integral, establishing that for an elliptic integral the number of these is two; (§ 19), Reversion of an algebraic integral, mentions a problem considered below in detail for an elliptic integral; (§ 20), Elliptic Integrals, considers the algebraic reduction of any elliptic integral to one of three standard forms, and proves that the function obtained by reversion is single-valued; (§ 21), Modular Functions,
gives a statement of some of the more elementary properties of some functions of great importance, with a definition of Automorphic Functions, and a hint of the connexion with the theory of linear differential equations; (§ 22), A property of integral functions, deduced from the theory of modular functions, proves that there cannot be more than one value not assumed by an integral function, and gives the basis of the well-known expression of the modulus of the elliptic functions in terms of the ratio of the periods; (§ 23), Geometrical applications of Elliptic Functions, shows that any plane curve of deficiency unity can be expressed by elliptic functions, and gives a geometrical proof of the addition theorem for the function ℘(u); (§ 24), Integrals of Algebraic Functions in connexion with the theory of plane curves, discusses the generalization to curves of any deficiency; (§ 25), Monogenic Functions of several independent variables, describes briefly the beginnings of this theory, with a mention of some fundamental theorems; (§ 26), Multiply-Periodic Functions and the Theory of Surfaces, attempts to show the nature of some problems now being actively pursued.
Beside the brevity necessarily attaching to the account here given of advanced parts of the subject, some of the more elementary results are stated only, without proof, as, for instance: the monogeneity of an algebraic function, no reference being made, moreover, to the cases of differential equations whose integrals are monogenic; that a function possessing an algebraic addition theorem is necessarily an elliptic function (or a particular case of such); that any area can be conformally represented on a half plane, a theorem requiring further much more detailed consideration of the meaning of area than we have given; while the character and properties, including the connectivity, of a Riemann surface have not been referred to. The theta functions are referred to only once, and the principles of the theory of Abelian Functions have been illustrated only by the developments given for elliptic functions.
§ 1. Complex Numbers.—Complex numbers are numbers of the form x + iy, where x, y are ordinary real numbers, and i is a symbol imagined capable of combination with itself and the ordinary real numbers, by way of addition, subtraction, multiplication and division, according to the ordinary commutative, associative and distributive laws; the symbol i is further such that i ^{2} = −1.
Taking in a plane two rectangular axes Ox, Oy, we assume that every point of the plane is definitely associated with two real numbers x, y (its co-ordinates) and conversely; thus any point of the plane is associated with a single complex number; in particular, for every point of the axis Ox, for which y = 0, the associated number is an ordinary real number; the complex numbers thus include the real numbers. The axis Ox is often called the real axis, and the axis Oy the imaginary axis. If P be the point associated with the complex variable z = x + iy, the distance OP be called r, and the positive angle less than 2π between Ox and OP be called θ, we may write z = r (cos θ + i sin θ); then r is called the modulus or absolute value of z and often denoted by |z|, and θ is called the phase or amplitude of z, and often denoted by ph (z); strictly the phase is ambiguous by additive multiples of 2π. If z′ = x′ + iy′ be represented by P′, the complex number z′ + z is represented by a point P″ obtained by drawing from P′ a line equal to and parallel to OP; the geometrical representation involves for its validity certain properties of the plane; as, for instance, the equation z′ + z = z + z′ involves the possibility of constructing a parallelogram (with OP″ as diagonal). It is important constantly to bear in mind, what is capable of easy algebraic proof (and geometrically is Euclid’s proposition III. 7), that the modulus of a sum or difference of two complex numbers is generally less than (and is never greater than) the sum of their moduli, and is greater than (or equal to) the difference of their moduli; the former statement thus holds for the sum of any number of complex numbers. We shall write E(iθ) for cos θ + i sin θ; it is at once verified that E(iα)·E(iβ) = E[i(α + β)], so that the phase of a product of complex quantities is obtained by addition of their respective phases.
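These statements are immediately verifiable with the complex arithmetic of a modern language; the Python sketch below (variable names ours; the library measures the phase in the interval from −π to π rather than from 0 to 2π) checks the modulus-and-phase form, the inequalities on moduli, and the addition of phases under multiplication:

```python
import cmath
import math

def E(t):
    # E(i*theta) of the text: cos theta + i sin theta
    return complex(math.cos(t), math.sin(t))

z = 3 + 4j
r, theta = abs(z), cmath.phase(z)  # modulus and phase of z
w = -1 + 2j
```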
§ 2. Plotting and Properties of Simple Expressions involving a Complex Number.—If we put ζ = (z − i)/(z + i), and, putting ζ = ξ + iη, take a new plane upon which ξ, η are rectangular co-ordinates, the equations ξ = (x^{2} + y^{2} − 1)/[x^{2} + (y + 1)^{2}], η = −2x/[x^{2} + (y + 1)^{2}] will determine, corresponding to any point of the first plane, a point of the second plane. There is the one exception of z = −i, that is, x = 0, y = −1, of which the corresponding point is at infinity. It can now be easily proved that as z describes the real axis in its plane the point ζ describes once a circle of radius unity, with centre at ζ = 0, and that there is a definite correspondence of point to point between points in the z-plane which are above the real axis and points of the ζ-plane which are interior to this circle; in particular z = i corresponds to ζ = 0.
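A direct numerical check of this correspondence (Python; the sample points are arbitrary):

```python
def zeta(z):
    # the map of the text: zeta = (z - i)/(z + i)
    return (z - 1j) / (z + 1j)
```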
Moreover, ζ being a rational function of z, both ξ and η are continuous differentiable functions of x and y, save when ζ is infinite; writing ζ = ƒ(x, y) = ƒ(z − iy, y), the fact that this is really independent of y leads at once to ∂f/∂x + i∂f/∂y = 0, and hence to
∂ξ/∂x = ∂η/∂y, ∂ξ/∂y = −∂η/∂x, ∂^{2}ξ/∂x^{2} + ∂^{2}ξ/∂y^{2} = 0;
so that ξ is not any arbitrary function of x, y, and when ξ is known η is determinate save for an additive constant. Also, in virtue of these equations, if ζ, ζ′ be the values of ζ corresponding to two near values of z, say z and z′, the ratio (ζ′ − ζ)/(z′ − z) has a definite limit when z′ = z, independent of the ultimate phase of z′ − z, this limit being therefore equal to ∂ζ/∂x, that is, ∂ξ/∂x + i∂η/∂x. Geometrically this fact is interpreted by saying that if two curves in the z-plane intersect at a point P, at which both the differential coefficients ∂ξ/∂x, ∂η/∂x are not zero, and P′, P″ be two points near to P on these curves respectively, and the corresponding points of the ζ-plane be Q, Q′, Q″, then (1) the ratios PP″/PP′, QQ″/QQ′ are ultimately equal, (2) the angle P′PP″ is equal to Q′QQ″, (3) the rotation from PP′ to PP″ is in the same sense as from QQ′ to QQ″, it being understood that the axes of ξ, η in the one plane are related as are the axes of x, y. Thus any diagram of the z-plane becomes a diagram of the ζ-plane with the same angles; the magnification, however, which is equal to [(∂ξ/∂x)^{2} + (∂ξ/∂y)^{2}]^{1/2}, varies from point to point. Conversely, it appears subsequently that the expression of any copy of a diagram (say, a map) which preserves angles requires the intervention of the complex variable.
As another illustration consider the case when ζ is a polynomial in z, say ζ = p_{0}z^{n} + p_{1}z^{n − 1} + . . . + p_{n}. H being an arbitrary real positive number, it can be shown that a radius R can be found such that for every |z| > R we have |ζ| > H; consider the lower limit of |ζ| for |z| < R; as ξ^{2} + η^{2} is a real continuous function of x, y for |z| < R, there is a point (x, y), say (x_{0}, y_{0}), at which |ζ| is least, say equal to ρ, and therefore within a circle in the ζ-plane whose centre is the origin, of radius ρ, there are no points ζ representing values corresponding to |z| < R. But if ζ_{0} be the value of ζ corresponding to (x_{0}, y_{0}), and the expression of ζ − ζ_{0} near z_{0} = x_{0} + iy_{0}, in terms of z − z_{0}, be A(z − z_{0})^{m} + B(z − z_{0})^{m + 1} + . . ., where A is not zero, then to two points near to (x_{0}, y_{0}), say (x_{1}, y_{1}) or z_{1}, and z_{2} = z_{0} + (z_{1} − z_{0}) (cos π/m + i sin π/m), will correspond two points near to ζ_{0}, say ζ_{1} and 2ζ_{0} − ζ_{1}, situated so that ζ_{0} is between them. One of these must be within the circle (ρ). We infer then that ρ = 0, and have proved that every polynomial in z vanishes for some value of z, and can therefore be written as a product of factors of the form z − α, where α denotes a complex number. This proposition alone suffices to suggest the importance of complex numbers.
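The argument just given, that where a polynomial does not vanish some neighbouring point yields a smaller modulus, can be turned into a crude numerical process for finding a root. The Python sketch below is our own illustration (the polynomial z^4 + z + 1, the eight trial directions and the step sizes are arbitrary choices), not a method of the text:

```python
import cmath

def poly(z):
    return z**4 + z + 1  # an arbitrary polynomial with no real root

def descend(z, steps=3000, h=0.1):
    # among eight directions from z, move to whichever most reduces
    # |poly(z)|; when no direction helps, halve the step and try again
    for _ in range(steps):
        best = z
        for k in range(8):
            w = z + h * cmath.exp(2j * cmath.pi * k / 8)
            if abs(poly(w)) < abs(poly(best)):
                best = w
        if best == z:
            h /= 2
        else:
            z = best
    return z

root = descend(0j)
```

Since |poly| can attain a minimum only where the polynomial vanishes, the descent cannot halt away from a root; the same reasoning is the substance of the proof above.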
§ 3. Limiting Operations.—In order that a complex number ζ = ξ + iη may have a limit it is necessary and sufficient that each of ξ and η has a limit. Thus an infinite series w_{0} + w_{1} + w_{2} + . . ., whose terms are complex numbers, is convergent if the real series formed by taking the real parts of its terms and that formed by the imaginary parts are both convergent. The series is also convergent if the real series formed by the moduli of its terms is convergent; in that case the series is said to be absolutely convergent, and it can be shown that its sum is unaltered by taking the terms in any other order. Generally the necessary and sufficient condition of convergence is that, for a given real positive ε, a number m exists such that for every n > m, and every positive p, the batch of terms w_{n} + w_{n + 1} + . . . + w_{n + p} is less than ε in absolute value. If the terms depend upon a complex variable z, the convergence is called uniform for a range of values of z, when the inequality holds, for the same ε and m, for all the points z of this range.
The infinite series of most importance are those of which the general term is a_{n}z^{n}, wherein a_{n} is a constant, and z is regarded as variable, n = 0, 1, 2, 3, . . . Such a series is called a power series. If a real and positive number M exists such that for z = z_{0} and every n, |a_{n}z_{0}^{n}| < M (a condition which is satisfied, for instance, if the series converges for z = z_{0}), then it is at once proved that the series converges absolutely for every z for which |z| < |z_{0}|, and converges uniformly over every range |z| < r′ for which r′ < |z_{0}|. To every power series there belongs then a circle of convergence within which it converges absolutely and uniformly; the function of z represented by it is thus continuous within the circle (this being the result of a general property of uniformly convergent series of continuous functions); the sum for an interior point z is, moreover, continuous with the sum for a point z_{0} on the circumference, as z approaches to z_{0}, provided the series converges for z = z_{0}, as can be shown without much difficulty. Within a common circle of convergence two power series Σa_{n}z^{n}, Σb_{n}z^{n} can be multiplied together according to the ordinary rule, this being a consequence of a theorem for absolutely convergent series. If r_{1} be less than the radius of convergence of a series Σa_{n}z^{n}, and for |z| = r_{1} the sum of the series be in absolute value less than a real positive quantity M, it can be shown that for |z| = r_{1} every term is also less than M in absolute value, namely, |a_{n}| < Mr_{1}^{−n}. If in every arbitrarily small neighbourhood of z = 0 there be a point for which two converging power series Σa_{n}z^{n}, Σb_{n}z^{n} agree in value, then the series are identical, or a_{n} = b_{n}; thus also if Σa_{n}z^{n} vanish at z = 0 there is a circle of finite radius about z = 0 as centre within which no other points are found for which the sum of the series is zero.
Considering a power series ƒ(z) = Σa_{n}z^{n} of radius of convergence R, if |z_{0}| < R and we put z = z_{0} + t with |t| < R − |z_{0}|, the resulting series Σa_{n}(z_{0} + t)^{n} may be regarded as a double series in z_{0} and t, which, since |z_{0}| + |t| < R, is absolutely convergent; it may then be arranged according to powers of t. Thus we may write ƒ(z_{0} + t) = ΣA_{n}t^{n}; hence A_{0} = ƒ(z_{0}), and we have [ƒ(z_{0} + t) − ƒ(z_{0})]/t = Σ_{n = 1} A_{n}t^{n − 1}, wherein the continuous series on the right reduces to A_{1} for t = 0; thus the ratio on the left has a definite limit when t = 0, equal namely to A_{1} or Σna_{n}z_{0}^{n − 1}. In other words, the original series may legitimately be differentiated at any interior point z_{0} of its circle of convergence. Repeating this process we find ƒ(z_{0} + t) = Σt^{n}ƒ^{(n)}(z_{0})/n!, where ƒ^{(n)}(z_{0}) is the nth differential coefficient. Repeating for this power series, in t, the argument applied about z = 0 for Σa_{n}z^{n}, we infer that for the series ƒ(z) every point which reduces it to zero is an isolated point, and of such points only a finite number lie within a circle which is within the circle of convergence of ƒ(z).
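The legitimacy of term-by-term differentiation may be checked on the geometric series Σz^{n} = (1 − z)^{−1}; in the Python sketch below (the truncations and the sample point are ours) the differentiated series is compared with (1 − z)^{−2} and with a difference quotient:

```python
def f(z, N=200):
    # partial sum of the geometric series  sum z^n = 1/(1 - z), |z| < 1
    return sum(z**n for n in range(N))

def f1(z, N=200):
    # the series differentiated term by term:  sum n z^(n-1)
    return sum(n * z**(n - 1) for n in range(1, N))
```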
Perhaps the simplest possible power series is e^{z} = exp (z) = 1 + z + z^{2}/2! + z^{3}/3! + . . ., of which the radius of convergence is infinite. By multiplication we have exp (z)·exp (z_{1}) = exp (z + z_{1}). In particular when x, y are real, and z = x + iy, exp (z) = exp (x) exp (iy). Now the functions
1 − cos y, y − sin y, ½y^{2} − 1 + cos y, y^{3}/3! − y + sin y, . . .
all vanish for y = 0, and the differential coefficient of any one after the first is the preceding one; as a function (of a real variable) is increasing when its differential coefficient is positive, we infer, for y positive, that each of these functions is positive; proceeding to a limit we hence infer that
cos y = 1 − y^{2}/2! + y^{4}/4! − . . . , sin y = y − y^{3}/3! + y^{5}/5! − . . .
for y positive, and hence, for all values of y. We thus have exp (iy) = cos y + i sin y, and exp (z) = exp (x)·(cos y + i sin y). In other words, the modulus of exp (z) is exp (x) and the phase is y. Hence also
exp (z + 2πi) = exp (z), which we express by saying that exp (z) has the period 2πi, and hence also the period 2kπi, where k is an arbitrary integer. From the fact that the constantly increasing function exp (x) can be equal to unity only for x = 0, we at once prove that exp (z) has no other periods.
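Both the modulus-and-phase statement and the period may be checked numerically (Python; the sample point is arbitrary):

```python
import cmath
import math

z = 0.3 + 1.2j
w = cmath.exp(z)  # the modulus should be exp(0.3), the phase 1.2
```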
Taking in the plane of z an infinite strip lying between the lines y = 0, y = 2π and plotting the function ζ = exp (z) upon a new plane, it follows at once from what has been said that every complex value of ζ arises when z takes in turn all positions in this strip, and that no value arises twice over. The equation ζ = exp (z) thus defines z, regarded as depending upon ζ, with only an additive ambiguity 2kπi, where k is an integer. We write z = λ(ζ); when ζ is real and positive this becomes the logarithm of ζ; in general λ(ζ) = log |ζ| + i ph (ζ) + 2kπi, where k is an integer; and when ζ describes a closed circuit surrounding the origin the phase of ζ increases by 2π, or k increases by unity. Differentiating the series for ζ we have dζ/dz = ζ, so that z, regarded as depending upon ζ, is also differentiable, with dz/dζ = ζ^{−1}. On the other hand, consider the series (ζ − 1) − ½(ζ − 1)^{2} + ⅓(ζ − 1)^{3} − . . .; it converges when ζ = 2 and hence converges for |ζ − 1| < 1; its differential coefficient is, however, 1 − (ζ − 1) + (ζ − 1)^{2} − . . ., that is, (1 + ζ − 1)^{−1}. Wherefore if φ(ζ) denote this series, for |ζ − 1| < 1, the difference λ(ζ) − φ(ζ), regarded as a function of ξ and η, has vanishing differential coefficients; if we take the value of λ(ζ) which vanishes when ζ = 1 we infer thence that for |ζ − 1| < 1, λ(ζ) = Σ_{n = 1}^{∞} (−1)^{n − 1}(ζ − 1)^{n}/n. It is to be remarked that it is impossible for ζ while subject to |ζ − 1| < 1 to make a circuit about the origin. For values of ζ for which |ζ − 1| ≮ 1, we can also calculate λ(ζ) with the help of infinite series, utilizing the fact that λ(ζζ′) = λ(ζ) + λ(ζ′).
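The logarithmic series of this paragraph may be compared with the library logarithm for a point within its circle of convergence (Python; the truncation at sixty terms and the sample point are ours):

```python
import cmath

def lam(z, terms=60):
    # (z - 1) - (1/2)(z - 1)^2 + (1/3)(z - 1)^3 - ..., for |z - 1| < 1
    t = z - 1
    return sum((-1) ** (n - 1) * t**n / n for n in range(1, terms + 1))
```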
The function λ(ζ) is required to define ζ^{a} when ζ and a are complex numbers; this is defined as exp [aλ(ζ)], that is as Σ_{n = 0}^{∞} a^{n}[λ(ζ)]^{n}/n!. When a is a real integer the ambiguity of λ(ζ) is immaterial here, since exp [aλ(ζ) + 2kaπi] = exp [aλ(ζ)]; when a is of the form 1/q, where q is a positive integer, there are q values possible for ζ^{1/q}, of the form exp [(1/q)λ(ζ)] exp (2kπi/q), with k = 0, 1, . . . q − 1, all other values of k leading to one of these; the qth power of any one of these values is ζ; when a = p/q, where p, q are integers without common factor, q being positive, we have ζ^{p/q} = (ζ^{1/q})^{p}. The definition of the symbol ζ^{a} is thus a generalization of the ordinary definition of a power, when the numbers are real. As an example, let it be required to find the meaning of i^{i}; the number i is of modulus unity and phase ½π; thus λ(i) = i(½π + 2kπ); thus
i^{i} = exp [iλ(i)] = exp (−½π − 2kπ),
which is always real, but has an infinite number of values.
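The values of i^{i} may be computed directly (Python; the branch index k is the k of the text):

```python
import cmath
import math

def i_to_the_i(k):
    # exp[i * lambda(i)] with lambda(i) = i(pi/2 + 2*k*pi)
    return cmath.exp(1j * (1j * (math.pi / 2 + 2 * k * math.pi)))
```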
The function exp (z) is used also to define a generalized form of the cosine and sine functions when z is complex; we write, namely, cos z = ½[exp (iz) + exp (−iz)] and sin z = −½i[exp (iz) − exp (−iz)]. It will be found that these obey the ordinary relations holding when z is real, except that their moduli are no longer bounded by unity. For example, cos i = 1 + 1/2! + 1/4! + . . . is obviously greater than unity.
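These definitions agree with the ordinary cosine on the real axis, while cos i = ½(e + e^{−1}) exceeds unity; a check in Python (function name ours):

```python
import cmath
import math

def cos_c(z):
    # cos z = (1/2)[exp(iz) + exp(-iz)]
    return 0.5 * (cmath.exp(1j * z) + cmath.exp(-1j * z))
```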
§ 4. Of Functions of a Complex Variable in General.—We have in what precedes shown how to generalize the ordinary rational, algebraic and logarithmic functions, and considered more general cases, of functions expressible by power series in z. With the suggestions furnished by these cases we can frame a general definition. So far our use of the plane upon which z is represented has been only illustrative, the results being capable of analytical statement. In what follows this representation is vital to the mode of expression we adopt; as then the properties of numbers cannot be ultimately based upon spatial intuitions, it is necessary to indicate what are the geometrical ideas requiring elucidation.
Consider a square of side a, to whose perimeter is attached a definite direction of description, which we take to be counter-clockwise; another square, also of side a, may be added to this, so that there is a side common; this common side being erased we have a composite region with a definite direction of perimeter; to this a third square of the same size may be attached, so that there is a side common to it and one of the former squares, and this common side may be erased. If this process be continued any number of times we obtain a region of the plane bounded by one or more polygonal closed lines, no two of which intersect; and at each portion of the perimeter there is a definite direction of description, which is such that the region is on the left of the describing point. Similarly we may construct a region by piecing together triangles, so that every consecutive two have a side in common, it being understood that there is assigned an upper limit for the greatest side of a triangle, and a lower limit for the smallest angle. In the former method, each square may be divided into four others by lines through its centre parallel to its sides; in the latter method each triangle may be divided into four others by lines joining the middle points of its sides; this halves the sides and preserves the angles. When we speak of a region of the plane in general, unless the contrary is stated, we shall suppose it capable of being generated in this latter way by means of a finite number of triangles, there being an upper limit to the length of a side of the triangle and a lower limit to the size of an angle of the triangle. We shall also require to speak of a path in the plane; this is to be understood as capable of arising as a limit of a polygonal path of finite length, there being a definite direction or sense of description at every point of the path, which therefore never meets itself. From this the meaning of a closed path is clear. 
The boundary points of a region form one or more closed paths, but, in general, it is only in a limiting sense that the interior points of a closed path are a region.
There is a logical principle also which must be referred to. We frequently have cases where, about every interior or boundary point z_{0} of a certain region, a circle can be put, say of radius r_{0}, such that for all points z of the region which are interior to this circle, for which, that is, |z − z_{0}| < r_{0}, a certain property holds. Assuming that r_{0} is given the value which is the upper limit, for the point z_{0}, of the possible values, we may call the points |z − z_{0}| < r_{0} the neighbourhood belonging or proper to z_{0}, and may speak of the property as the property (z, z_{0}). The value of r_{0} will in general vary with z_{0}; what is in most cases of importance is the question whether the lower limit of r_{0} for all positions is zero or greater than zero. (A) This lower limit is certainly greater than zero provided the property (z, z_{0}) is of a kind which we may call extensive; such, namely, that if it holds, for some position of z_{0} and all positions of z, within a certain region, then the property (z, z_{1}) holds within a circle of radius R about any interior point z_{1} of this region for all points z for which the circle |z − z_{1}| = R is within the region. Also in this case r_{0} varies continuously with z_{0}. (B) Whether the property is of this extensive character or not we can prove that the region can be divided into a finite number of sub-regions such that, for every one of these, the property holds, (1) for some point z_{0} within or upon the boundary of the sub-region, (2) for every point z within or upon the boundary of the sub-region.
We prove these statements (A), (B) in reverse order. To prove (B) let a region for which the property (z, z_{0}) holds for all points z and some point z_{0} of the region, be called suitable: if each of the triangles of which the region is built up be suitable, what is desired is proved; if not let an unsuitable triangle be subdivided into four, as before explained; if one of these subdivisions is unsuitable let it be again subdivided; and so on. Either the process terminates and then what is required is proved; or else we obtain an indefinitely continued sequence of unsuitable triangles, each contained in the preceding, which converge to a point, say ζ; after a certain stage all these will be interior to the proper region of ζ; this, however, is contrary to the supposition that they are all unsuitable.
We now make some applications of this result (B). Suppose a definite finite real value attached to every interior or boundary point of the region, say ƒ(x, y ). It may have a finite upper limit H for the region, so that no point (x, y ) exists for which ƒ(x, y ) > H, but points (x, y ) exist for which ƒ(x, y ) > H − ε, however small ε may be; if not we say that its upper limit is infinite. There is then at least one point of the region such that, for points of the region within a circle about this point, the upper limit of ƒ(x, y ) is H, however small the radius of the circle be taken; for if not we can put about every point of the region a circle within which the upper limit of ƒ(x, y ) is less than H; then by the result (B) above the region consists of a finite number of sub-regions within each of which the upper limit is less than H; this is inconsistent with the hypothesis that the upper limit for the whole region is H. A similar statement holds for the lower limit. A case of such a function ƒ(x, y ) is the radius r_{0} of the neighbourhood proper to any point z_{0}, spoken of above. We can hence prove the statement (A) above.
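The subdivision argument just given locates a point about which ƒ approaches its upper limit by repeatedly dividing the region. A small numerical sketch of this idea follows; the particular function, the unit-square region, and the greedy centre-sampling rule are illustrative assumptions, not part of the text, and the greedy refinement is only a heuristic analogue of the existence proof.

```python
# A numerical sketch of the subdivision idea: repeatedly quadrisect a square
# and descend into the quarter whose centre gives the largest sampled value
# of a continuous f(x, y). Function and region are assumed examples.
import math

def f(x, y):
    """An assumed continuous function on the unit square."""
    return math.sin(3 * x) * math.cos(2 * y)

def locate_supremum(x0, y0, side, depth=40):
    """Greedily quadrisect, keeping the quarter whose centre maximizes f;
    a numerical analogue of the subdivision argument, not a proof."""
    for _ in range(depth):
        side /= 2.0
        corners = [(x0, y0), (x0 + side, y0), (x0, y0 + side), (x0 + side, y0 + side)]
        x0, y0 = max(corners, key=lambda c: f(c[0] + side / 2, c[1] + side / 2))
    return x0, y0

x, y = locate_supremum(0.0, 0.0, 1.0)
# f attains values arbitrarily near its upper limit about (x, y)
```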
Suppose the property (z, z_{0}) extensive, and, if possible, that the lower limit of r_{0} is zero. Let then ζ be a point such that the lower limit of r_{0} is zero for points z_{0} within a circle about ζ however small; let r be the radius of the neighbourhood proper to ζ; take z_{0} so that |z_{0} − ζ| < ½r; the property (z, z_{0}), being extensive, holds within a circle, centre z_{0}, of radius r − |z_{0} − ζ|, which is greater than |z_{0} − ζ|, and increases to r as |z_{0} − ζ| diminishes; this being true for all points z_{0} near ζ, the lower limit of r_{0} is not zero for the neighbourhood of ζ, contrary to what was supposed. This proves (A). Also, as it is here shown that r_{0} ≥ r − |z_{0} − ζ|, it may similarly be shown that r ≥ r_{0} − |z_{0} − ζ|. Thus r_{0} differs arbitrarily little from r when |z_{0} − ζ| is sufficiently small; that is, r_{0} varies continuously with z_{0}. Next suppose the function ƒ(x, y), which has a definite finite value at every point of the region considered, to be continuous but not necessarily real, so that about every point z_{0}, within or upon the boundary of the region, η being an arbitrary real positive quantity assigned beforehand, a circle is possible, so that for all points z of the region interior to this circle, we have |ƒ(x, y) − ƒ(x_{0}, y_{0})| < ½η, and therefore, (x′, y′) being any other point interior to this circle, |ƒ(x′, y′) − ƒ(x, y)| < η. We can then apply the result (A) obtained above, taking for the neighbourhood proper to any point z_{0} the circular area within which, for any two points (x, y), (x′, y′), we have |ƒ(x′, y′) − ƒ(x, y)| < η. This is clearly an extensive property. Thus, a number r is assignable, greater than zero, such that, for any two points (x, y), (x′, y′) within a circle |z − z_{0}| = r about any point z_{0}, we have |ƒ(x′, y′) − ƒ(x, y)| < η, and, in particular, |ƒ(x, y) − ƒ(x_{0}, y_{0})| < η, where η is an arbitrary real positive quantity agreed upon beforehand.
Take now any path in the region, whose extreme points are z_{0}, z, and let z_{1}, ... z_{n−1} be intermediate points of the path, in order; denote the continuous function ƒ(x, y) by ƒ(z), and let ƒ_{r} denote any quantity such that |ƒ_{r} − ƒ(z_{r})| ≤ |ƒ(z_{r+1}) − ƒ(z_{r})|; consider the sum

(z_{1} − z_{0})ƒ_{0} + (z_{2} − z_{1})ƒ_{1} + ... + (z − z_{n−1})ƒ_{n−1},

that is, Σ(z_{r+1} − z_{r})ƒ_{r}.
By the definition of a path we can suppose, n being large enough, that the intermediate points z_{1}, ... z_{n−1} are so taken that if z_{i}, z_{i+1} be any two points intermediate, in order, to z_{r} and z_{r+1}, we have |z_{i+1} − z_{i}| < |z_{r+1} − z_{r}|; we can thus suppose |z_{1} − z_{0}|, |z_{2} − z_{1}|, ... |z − z_{n−1}| all to converge constantly to zero. This being so, we can show that the sum above has a definite limit. For this it is sufficient, as in the case of an integral of a function of one real variable, to prove this to be so when the convergence is obtained by taking new points of division intermediate to the former ones. If, however, z_{r,1}, z_{r,2}, ... z_{r,m−1} be intermediate in order to z_{r} and z_{r+1}, and |ƒ_{r,i} − ƒ(z_{r,i})| < |ƒ(z_{r,i+1}) − ƒ(z_{r,i})|, the difference between Σ(z_{r+1} − z_{r})ƒ_{r} and

Σ_{r} Σ_{i} (z_{r,i+1} − z_{r,i})ƒ_{r,i},

which is equal to

Σ_{r} Σ_{i} (z_{r,i+1} − z_{r,i})(ƒ_{r} − ƒ_{r,i}),

is, when |z_{r+1} − z_{r}| is small enough to ensure |ƒ(z_{r+1}) − ƒ(z_{r})| < η, less in absolute value than

Σ_{r} 2η |z_{r+1} − z_{r}|,

which, if S be the upper limit of the perimeter of the polygon from which the path is generated, is < 2ηS, and is therefore arbitrarily small.
The limit in question is called ∫_{z_{0}}^{z} ƒ(z)dz. In particular when ƒ(z) = 1, it is obvious from the definition that its value is z − z_{0}; when ƒ(z) = z, by taking ƒ_{r} = ½(z_{r+1} + z_{r}), it is equally clear that its value is ½(z^{2} − z_{0}^{2}); these results will be applied immediately.
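The limit defining the integral can be imitated numerically by forming the sum Σ(z_{r+1} − z_{r})ƒ(z_{r}) over a fine division of the path. A minimal sketch; the quarter-circle path and the use of the left end-point for ƒ_{r} are illustrative assumptions:

```python
# Approximate the contour integral as the limiting sum of the text:
# Σ (z_{r+1} − z_r) f(z_r) over a fine polygonal division of a path.
import cmath

def contour_sum(f, points):
    """The sum Σ (z_{r+1} − z_r) f(z_r) over consecutive points of the path."""
    return sum((z1 - z0) * f(z0) for z0, z1 in zip(points, points[1:]))

n = 20000
# a quarter circle from z0 = 1 to z = i, approximated by n chords
pts = [cmath.exp(1j * cmath.pi / 2 * k / n) for k in range(n + 1)]

i1 = contour_sum(lambda z: 1, pts)   # telescopes exactly to z − z0 = i − 1
i2 = contour_sum(lambda z: z, pts)   # should approach ½(z² − z0²) = −1
```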
Suppose now that to every interior and boundary point z_{0} of a certain region there belong two definite finite numbers ƒ(z_{0}), F(z_{0}), such that, whatever real positive quantity η may be, a real positive number ε exists for which the condition

| [ƒ(z) − ƒ(z_{0})] / (z − z_{0}) − F(z_{0}) | < η,
which we describe as the condition (z, z_{0}), is satisfied for every point z, within or upon the boundary of the region, satisfying the limitation |z − z_{0}| < ε. Then ƒ(z_{0}) is called a differentiable function of the complex variable z_{0} over this region, its differential coefficient being F(z_{0}). The function ƒ(z_{0}) is thus a continuous function of the real variables x_{0}, y_{0}, where z_{0} = x_{0} + iy_{0}, over the region; it will appear that F(z_{0}) is also continuous and in fact also a differentiable function of z_{0}.
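The condition (z, z_{0}) requires the quotient to approach the same F(z_{0}) however z approaches z_{0}. A small numerical sketch contrasting a function satisfying the condition with one that fails it; both example functions and the test point are assumptions:

```python
# Difference quotients along several directions of approach: for a
# differentiable f they agree; for f(z) = conj(z) they depend on direction.
import cmath

def quotients(f, z0, h=1e-6):
    """Difference quotients (f(z0 + h·d) − f(z0))/(h·d) for eight directions d."""
    dirs = [cmath.exp(1j * cmath.pi * k / 4) for k in range(8)]
    return [(f(z0 + h * d) - f(z0)) / (h * d) for d in dirs]

q_square = quotients(lambda z: z * z, 1 + 2j)        # all near F(z0) = 2z0 = 2 + 4j
q_conj = quotients(lambda z: z.conjugate(), 1 + 2j)  # varies with the direction
spread = max(abs(a - b) for a in q_conj for b in q_conj)
```

For conj(z) the quotient equals exp(−2iθ) along the direction of angle θ, so the values are spread over the whole unit circle and no single F(z_{0}) exists.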
Supposing η to be retained the same for all points z_{0} of the region, and σ_{0} to be the upper limit of the possible values of ε for the point z_{0}, it is to be presumed that σ_{0} will vary with z_{0}, and it is not obvious as yet that the lower limit of the values of σ_{0} as z_{0} varies over the region may not be zero. We can, however, show that the region can be divided into a finite number of sub-regions for each of which the condition (z, z_{0}), above, is satisfied for all points z, within or upon the boundary of this sub-region, for an appropriate position of z_{0}, within or upon the boundary of this sub-region. This is proved above as result (B).
Hence it can be proved that, for a differentiable function ƒ(z), the integral ∫_{z_{1}}^{z} ƒ(z)dz has the same value by whatever path within the region we pass from z_{1} to z. This we prove by showing that when taken round a closed path in the region the integral ∫ƒ(z)dz vanishes. Consider first a triangle over which the condition (z, z_{0}) holds, for some position of z_{0} and every position of z, within or upon the boundary of the triangle. Then as ƒ(z) = ƒ(z_{0}) + (z − z_{0})F(z_{0}) + ηθ(z − z_{0}), where |θ| < 1, we have

∫ƒ(z)dz = ∫[ƒ(z_{0}) + (z − z_{0})F(z_{0})]dz + η∫θ(z − z_{0})dz,

which, as the path is closed, is η∫θ(z − z_{0})dz. Now, from the theorem that the absolute value of a sum is less than the sum of the absolute values of the terms, this last is less, in absolute value, than ηap, where a is the greatest side of the triangle and p is its perimeter; if Δ be the area of the triangle, we have Δ = ½ab sin C > (α/π)ba, where α is the least angle of the triangle, and hence a(a + b + c) < 2a(b + c) < 4πΔ/α; the integral ∫ƒ(z)dz round the perimeter of the triangle is thus, in absolute value, < 4πηΔ/α. Now consider any region made up of triangles, as before explained, in each of which the condition (z, z_{0}) holds, as in the triangle just taken. The integral ∫ƒ(z)dz round the boundary of the region is equal to the sum of the values of the integral round the component triangles, and thus less in absolute value than 4πηK/α, where K is the whole area of the region, and α is the smallest angle of the component triangles. However small η be taken, such a division of the region into a finite number of component triangles has been shown possible; the integral round the perimeter of the region is thus arbitrarily small. Thus it is actually zero, which it was desired to prove. Two remarks should be added: (1) The theorem is proved only on condition that the closed path of integration belongs to the region at every point of which the conditions are satisfied. (2) The theorem, though proved only when the region consists of triangles, holds also when the boundary points of the region consist of one or more closed paths, no two of which meet.
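The vanishing of ∫ƒ(z)dz round a closed path can be checked numerically for a particular differentiable function; a sketch, assuming ƒ(z) = exp(z), a triangular path, and a trapezoid-rule discretization:

```python
# Numerical check of the theorem: the integral of a differentiable function
# round a closed polygon is (up to discretization error) zero.
import cmath

def closed_integral(f, corners, n=4000):
    """Trapezoid-rule value of the integral of f round the closed polygon."""
    total = 0j
    for a, b in zip(corners, corners[1:] + corners[:1]):
        for k in range(n):
            z0 = a + (b - a) * k / n
            z1 = a + (b - a) * (k + 1) / n
            total += (z1 - z0) * (f(z0) + f(z1)) / 2
    return total

triangle = [0j, 2 + 0j, 1 + 1j]
loop_integral = closed_integral(cmath.exp, triangle)   # ≈ 0, as the theorem asserts
```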
Hence we can deduce the remarkable result that the value of ƒ(z) at any interior point of a region is expressible in terms of the values of ƒ(z) at the boundary points. For consider in the original region the function ƒ(z)/(z − z_{0}), where z_{0} is an interior point: this satisfies the same conditions as ƒ(z) except in the immediate neighbourhood of z_{0}. Taking out then from the original region a small regular polygonal region with z_{0} as centre, the theorem holds for the remaining portion. Proceeding to the limit when the polygon becomes a circle, it appears that the integral ∫ ƒ(z)dz/(z − z_{0}) round the boundary of the original region is equal to the same integral taken counter-clockwise round a small circle having z_{0} as centre; on this circle, however, if z − z_{0} = r exp(iθ), dz/(z − z_{0}) = idθ, and ƒ(z) differs arbitrarily little from ƒ(z_{0}) if r is sufficiently small; the value of the integral round this circle is therefore, ultimately, when r vanishes, equal to 2πiƒ(z_{0}). Hence ƒ(z_{0}) = (1/2πi) ∫ ƒ(t)dt/(t − z_{0}), where this integral is round the boundary of the original region. From this it appears that
F(z_{0}) = lim [ƒ(z) − ƒ(z_{0})]/(z − z_{0}) = (1/2πi) ∫ ƒ(t)dt/(t − z_{0})^{2},

also round the boundary of the original region. This form shows, however, that F(z_{0}) is a continuous, finite, differentiable function of z_{0} over the whole interior of the original region.
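The boundary formula ƒ(z_{0}) = (1/2πi) ∫ ƒ(t)dt/(t − z_{0}) lends itself to a direct numerical check; a sketch in which ƒ = exp, a circular boundary of radius 2, and the interior point are all illustrative assumptions:

```python
# Recover an interior value of f from its boundary values alone,
# via (1/2πi) ∮ f(t) dt/(t − z0) round a circle.
import cmath

def boundary_value(f, z0, radius=2.0, n=2000):
    """Approximate (1/2πi) ∮ f(t) dt/(t − z0) round the circle |t| = radius."""
    total = 0j
    for k in range(n):
        t = radius * cmath.exp(2j * cmath.pi * k / n)
        dt = 1j * t * (2 * cmath.pi / n)   # dt = i t dθ on the circle
        total += f(t) / (t - z0) * dt
    return total / (2j * cmath.pi)

z0 = 0.5 + 0.5j                             # an assumed interior point
recovered = boundary_value(cmath.exp, z0)   # ≈ exp(z0)
```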
§ 5. Applications.—The previous results have manifold applications.
(1) If an infinite series of differentiable functions of z be uniformly convergent along a certain path lying within the region of definition of the functions, so that S(z) = u_{0}(z) + u_{1}(z) + ... + u_{n−1}(z) + R_{n}(z), where |R_{n}(z)| < ε for all points of the path, we have
∫_{z_{0}}^{z} S(z)dz = ∫_{z_{0}}^{z} u_{0}(z)dz + ∫_{z_{0}}^{z} u_{1}(z)dz + ... + ∫_{z_{0}}^{z} u_{n−1}(z)dz + ∫_{z_{0}}^{z} R_{n}(z)dz,

wherein, in absolute value, |∫_{z_{0}}^{z} R_{n}(z)dz| < εL, if L be the length of the path. Thus the series may be integrated, and the resulting series is also uniformly convergent.
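Result (1) can be illustrated with the uniformly convergent series 1/(1 − z) = Σ z^{n}: integrating term by term from 0 to z should reproduce −log(1 − z). The sample point and truncation are assumptions:

```python
# Term-by-term integration of 1/(1 − z) = Σ zⁿ from 0 to z gives
# Σ zⁿ⁺¹/(n + 1), which should agree with −log(1 − z).
import cmath

z = 0.4 + 0.3j                     # a point well inside |z| < 1
N = 200                            # truncation of the integrated series
termwise = sum(z ** (n + 1) / (n + 1) for n in range(N))
closed_form = -cmath.log(1 - z)    # the integral of 1/(1 − z) taken from 0 to z
```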
(2) If ƒ(x, y) be definite, finite and continuous at every point of a region, and over any closed path in the region ∫ƒ(x, y)dz = 0, then ψ(z) = ∫_{z_{0}}^{z} ƒ(x, y)dz, for interior points z_{0}, z, is a differentiable function of z, having for its differential coefficient the function ƒ(x, y), which is therefore also a differentiable function of z at interior points.
(3) Hence if the series u_{0}(z) + u_{1}(z) + ... to ∞ be uniformly convergent over a region, its terms being differentiable functions of z, then its sum S(z) is a differentiable function of z, whose differential coefficient, given by (1/2πi) ∫ S(t)dt/(t − z)^{2}, is obtainable by differentiating the series term by term. This theorem, unlike (1), does not hold for functions of a real variable.
(4) If the region of definition of a differentiable function ƒ(z) include the region bounded by two concentric circles of radii r, R, with centre at the origin, and z_{0} be an interior point of this region,
ƒ(z_{0}) = (1/2πi) ∫_{R} ƒ(t)dt/(t − z_{0}) − (1/2πi) ∫_{r} ƒ(t)dt/(t − z_{0}),

where the integrals are both counter-clockwise round the two circumferences respectively; putting in the first (t − z_{0})^{−1} = Σ_{n=0}^{∞} z_{0}^{n}/t^{n+1}, and in the second (t − z_{0})^{−1} = − Σ_{n=0}^{∞} t^{n}/z_{0}^{n+1}, we find ƒ(z_{0}) = Σ_{n=−∞}^{∞} A_{n}z_{0}^{n}, wherein A_{n} = (1/2πi) ∫ ƒ(t)dt/t^{n+1}, taken round any circle, centre the origin, of radius intermediate between r and R. Particular cases are: (α) when the region of definition of the function includes the whole interior of the outer circle; then we may take r = 0, the coefficients A_{n} for which n < 0 all vanish, and the function ƒ(z_{0}) is expressed for the whole interior |z_{0}| < R by a power series Σ_{n=0}^{∞} A_{n}z_{0}^{n}. In other words, about every interior point c of the region of definition a differentiable function of z is expressible by a power series in z − c; a very important result.
(β) If the region of definition, though not including the origin, extends to within arbitrary nearness of this on all sides, and at the same time the product z^{m}ƒ(z) has a finite limit when |z| diminishes to zero, all the coefficients A_{n} for which n < −m vanish, and we have

ƒ(z_{0}) = Σ_{n=−m}^{∞} A_{n}z_{0}^{n}.

Such a case occurs, for instance, when ƒ(z) = cosec z, the number m being unity.
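The coefficients can be computed numerically from the formula A_{n} = (1/2πi) ∫ ƒ(t)dt/t^{n+1}. A sketch for ƒ(z) = cosec z, the example just cited with m = 1; the circle radius (inside the nearest other poles at ±π) and the sample count are assumptions:

```python
# Compute annulus-expansion coefficients A_n = (1/2πi) ∮ f(t) dt/t^(n+1)
# for f(z) = cosec z; the expansion should begin at the power z^(−1).
import cmath

def laurent_coeff(f, n, radius=1.0, samples=4000):
    """A_n = (1/2πi) ∮ f(t) dt/t^(n+1), round a circle about the origin."""
    total = 0j
    for k in range(samples):
        t = radius * cmath.exp(2j * cmath.pi * k / samples)
        dt = 1j * t * (2 * cmath.pi / samples)
        total += f(t) / t ** (n + 1) * dt
    return total / (2j * cmath.pi)

cosec = lambda z: 1 / cmath.sin(z)
a_minus1 = laurent_coeff(cosec, -1)   # ≈ 1: the expansion begins with z^(−1)
a_minus2 = laurent_coeff(cosec, -2)   # ≈ 0: no coefficient with n < −m = −1
```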
§ 6. Singular Points.—The region of existence of a differentiable function of z is an unclosed aggregate of points, each of which is an interior point of a neighbourhood consisting wholly of points of the aggregate, at every point of which the function is definite and finite and possesses a unique finite differential coefficient. Every point of the plane, not belonging to the aggregate, which is a limiting point of points of the aggregate, such, that is, that points of the aggregate lie in every neighbourhood of this, is called a singular point of the function.
About every interior point z_{0} of the region of existence the function may be represented by a power series in z − z_{0}, and the series converges and represents the function over any circle with centre at z_{0} which contains no singular point in its interior. This has been proved above. And it can be similarly proved, putting z = 1/ζ, that if the region of existence of the function contains all points of the plane for which |z| > R, then the function is representable for all such points by a power series in z^{−1} or ζ; in such case we say that the region of existence of the function contains the point z = ∞. A series in z^{−1} has a finite limit when |z| = ∞; a series in z cannot remain finite for all points z for which |z| > R; for if, for |z| = R, the sum of a power series Σa_{n}z^{n} in z is in absolute value less than M, we have |a_{n}| < MR^{−n}, and therefore, if M remains finite for all values of R however great, a_{n} = 0. Thus the region of existence of a function, if it contains all finite points of the plane, cannot contain the point z = ∞; such is, for instance, the case of the function exp(z) = Σz^{n}/n!. This may be regarded as a particular case of a well-known result (§ 7), that the circumference of convergence of any power series representing the function contains at least one singular point. As an extreme case functions exist whose region of existence is circular, there being a singular point in every arc of the circumference, however small; for instance, this is the case for the functions represented for |z| < 1 by the series Σz^{m} where m = n^{2}, by the series Σz^{m} where m = n!, and by a third series, involving a positive integer parameter, which actually converges for every point of the circle of convergence |z| = 1.
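The behaviour described for the series with m = n^{2} can be observed numerically: approaching the circle |z| = 1 radially toward a root of unity, the partial sums grow without bound. The chosen root of unity, the radii sampled, and the truncation are illustrative assumptions:

```python
# Partial sums of the lacunary series Σ z^(n²) grow as z moves radially
# out toward a root of unity on the circle of convergence |z| = 1.
import cmath

def lacunary(z, terms=3000):
    """Partial sum of the lacunary series Σ z^(n²), n = 0, 1, 2, ..."""
    return sum(z ** (n * n) for n in range(terms))

root = cmath.exp(2j * cmath.pi / 3)        # a cube root of unity on |z| = 1
radii = [0.999 ** k for k in (8, 4, 2, 1)]
values = [abs(lacunary(r * root)) for r in radii]
# the modulus grows as the point moves outward toward the singular boundary
```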
If z_{0} be a point interior to the circle of convergence of a series representing the function, the series may be rearranged in powers of z − z_{0}; as z_{0} approaches to a singular point of the function, lying on the circle of convergence, the radii of convergence of these derived series in z − z_{0} diminish to zero; when, however, a circle can be put about z_{0}, not containing any singular point of the function, but containing points outside the circle of convergence of the original series, then the series in z − z_{0} gives the value of the function for these external points. If the function be supposed to be given only for the interior of the original circle, by the original power series, the series in z − z_{0} converging beyond the original circle gives what is known as an analytical continuation of the function. It appears from what has been proved that the value of the function at all points of its region of existence can be obtained from its value, supposed given by a series in one original circle, by a succession of such processes of analytical continuation.
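A concrete instance of such a continuation: the series Σ z^{n} represents 1/(1 − z) only for |z| < 1, but rearranged about z_{0} = −0.5 the derived series Σ (z − z_{0})^{n}/(1 − z_{0})^{n+1} converges for |z − z_{0}| < 1.5 and so reaches points outside the original circle. The numerical values are illustrative assumptions:

```python
# Analytical continuation by rearrangement: the derived series about
# z0 = −0.5 evaluates 1/(1 − z) at a point outside the original circle.
z0 = -0.5           # centre of the derived series
z = -1.2            # outside |z| < 1, yet |z − z0| = 0.7 < 1.5
N = 200
continued = sum((z - z0) ** n / (1 - z0) ** (n + 1) for n in range(N))
direct = 1 / (1 - z)    # the value the continuation should reproduce
```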
§ 7. Monogenic Functions.—This suggests an entirely different way of formulating the fundamental parts of the theory of functions of a complex variable, which appears to be preferable to that so far followed here.
Starting with a convergent power series, say in powers of z, this series can be arranged in powers of z − z_{0}, about any point z_{0} interior to its circle of convergence, and the new series converges certainly for |z − z_{0}| < r − |z_{0}|, if r be the original radius of convergence. If for every position of z_{0} this is the greatest radius of convergence of the derived series, then the original series represents a function existing only within its circle of convergence. If for some position of z_{0} the derived series converges for |z − z_{0}| < r − |z_{0}| + D, then it can be shown that for points z, interior to the original circle, lying in the annulus r − |z_{0}| < |z − z_{0}| < r − |z_{0}| + D, the value represented by the derived series agrees with that represented by the original series. If for another point z_{1} interior to the original circle the derived series converges for |z − z_{1}| < r − |z_{1}| + E, and the two circles |z − z_{0}| = r − |z_{0}| + D, |z − z_{1}| = r − |z_{1}| + E have interior points common, lying beyond |z| = r, then it can be shown that the values represented by these series at these common points agree. Either series then can be used to furnish an analytical continuation of the function as originally defined. Continuing this process of continuation as far as possible, we arrive at the conception of the function as defined by an aggregate of power series of which every one has points of convergence common with some one or more others; the whole aggregate of points of the plane which can be so reached constitutes the region of existence of the function; the limiting points of this region are the points in whose neighbourhood the derived series have radii of convergence diminishing indefinitely to zero; these are the singular points. The circle of convergence of any of the series has at least one such singular point upon its circumference. 
So regarded the function is called a monogenic function, the epithet having reference to the single origin, by one power series, of the expressions representing the function; it is also sometimes called a monogenic analytical function, or simply an analytical function; all that is necessary to define it is the value of the function and of all its differential coefficients, at some one point of the plane; in the method previously followed here it was necessary to suppose the function differentiable at every point of its region of existence. The theory of the integration of a monogenic function, and Cauchy’s theorem, that ∫ƒ(z)dz = 0 over a closed path, are at once deducible from the corresponding results applied to a single power series for the interior of its circle of convergence. There is another advantage belonging to the theory of monogenic functions: the theory as originally given here applies in the first instance only to single valued functions; a monogenic function is by no means necessarily single valued—it may quite well happen that starting from a particular power series, converging over a certain circle, and applying the process of analytical continuation over a closed path back to an interior point of this circle, the value obtained does not agree with the initial value. The notion of basing the theory of functions on the theory of power series is, after Newton, largely due to Lagrange, who has some interesting remarks in this regard at the beginning of his Théorie des fonctions analytiques. He applies the idea, however, primarily to functions of a real variable for which the expression by power series is only of very limited validity; for functions of a complex variable probably the systematization of the theory owes most to Weierstrass, whose use of the word monogenic is that adopted above. In what follows we generally suppose this point of view to be regarded as fundamental.
§ 8. Some Elementary Properties of Single Valued Functions.—A pole is a singular point of the function ƒ(z) which is not a singularity of the function 1/ƒ(z); this latter function is therefore, by the definition, capable of representation about this point, z_{0}, by a series [ƒ(z)]^{−1} = Σa_{n}(z − z_{0})^{n}. If herein a_{0} is not zero we can hence derive a representation for ƒ(z) as a power series about z_{0}, contrary to the hypothesis that z_{0} is a singular point for this function. Hence a_{0} = 0; suppose also a_{1} = 0, a_{2} = 0, ... a_{m−1} = 0, but a_{m} ≠ 0. Then [ƒ(z)]^{−1} = (z − z_{0})^{m}[a_{m} + a_{m+1}(z − z_{0}) + ...], and hence (z − z_{0})^{m}ƒ(z) = a_{m}^{−1} + Σb_{n}(z − z_{0})^{n}; namely, the expansion of ƒ(z) about z = z_{0} contains a finite number of negative powers of z − z_{0} and a (finite or) infinite number of positive powers. Thus a pole is always an isolated singularity.
The integral ∫ƒ(z)dz taken by a closed circuit about the pole not containing any other singularity is at once seen to be 2πiA_{1}, where A_{1} is the coefficient of (z − z_{0})^{−1} in the expansion of ƒ(z) at the pole; this coefficient has therefore a certain uniqueness, and it is called the residue of ƒ(z) at the pole. Considering a region in which there are no other singularities than poles, all these being interior points, the integral (1/2πi) ∫ ƒ(z)dz round the boundary of this region is equal to the sum of the residues at the included poles, a very important result. Any singular point of a function which is not a pole is called an essential singularity; if it be isolated the function is capable, in the neighbourhood of this point, of approaching arbitrarily near to any assigned value. For, the point being isolated, the function can be represented, in its neighbourhood, as we have proved, by a series Σ_{n=−∞}^{∞} a_{n}(z − z_{0})^{n}, with an infinite number of negative powers; it thus cannot remain finite in the immediate neighbourhood of the point. The point is necessarily an isolated essential singularity also of the function {ƒ(z) − A}^{−1}, for if this were expressible by a power series about the point, so would also the function ƒ(z) be; as {ƒ(z) − A}^{−1} approaches infinity, so does ƒ(z) approach the arbitrary value A. Similar remarks apply to the point z = ∞, the function being regarded as a function of ζ = z^{−1}. In the neighbourhood of an essential singularity, which is a limiting point also of poles, the function clearly becomes infinite. For an essential singularity which is not isolated the same result does not necessarily hold.
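The statement that (1/2πi) ∫ ƒ(z)dz round the boundary equals the sum of the residues at the included poles can be verified numerically; a sketch with the assumed example ƒ(z) = (3z − 1)/(z(z − 1)), whose residues at z = 0 and z = 1 are 1 and 2:

```python
# Numerical check of the residue theorem: the boundary integral, divided
# by 2πi, should equal the sum of the residues inside the contour.
import cmath

def boundary_average(f, radius=2.0, n=4000):
    """(1/2πi) ∮ f(z) dz round |z| = radius, counter-clockwise."""
    total = 0j
    for k in range(n):
        z = radius * cmath.exp(2j * cmath.pi * k / n)
        dz = 1j * z * (2 * cmath.pi / n)
        total += f(z) * dz
    return total / (2j * cmath.pi)

f = lambda z: (3 * z - 1) / (z * (z - 1))   # residue 1 at z = 0, residue 2 at z = 1
residue_sum = boundary_average(f)           # ≈ 1 + 2 = 3
```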
A single valued function is said to be an integral function when it has no singular points except z = ∞. Such is, for instance, an integral polynomial, which has z = ∞ for a pole, and the function exp(z), which has z = ∞ as an essential singularity. A function which has no singular points for finite values of z other than poles is called a meromorphic function. If it also have a pole at z = ∞ it is a rational function; for then, if a_{1}, ... a_{s} be its finite poles, of orders m_{1}, m_{2}, ... m_{s}, the product (z − a_{1})^{m_1} ... (z − a_{s})^{m_s} ƒ(z) is an integral function with a pole at infinity, capable therefore, for large values of z, of an expression (z^{−1})^{−m} Σ_{r=0}^{∞} a_{r}(z^{−1})^{r}; thus (z − a_{1})^{m_1} ... (z − a_{s})^{m_s} ƒ(z) is capable of a form Σ_{r=0}^{∞} b_{r}z^{r}, while z^{−m} Σ_{r=0}^{∞} b_{r}z^{r} remains finite for z = ∞. Therefore b_{m+1} = b_{m+2} = ... = 0, and ƒ(z) is a rational function.
If for a single valued function F(z) every singular point in the finite part of the plane is isolated there can only be a finite number of these in any finite part of the plane, and they can be taken to be a_{1}, a_{2}, a_{3}, ... with |a_{1}| ≤ |a_{2}| ≤ |a_{3}| ... and limit |a_{n}| = ∞. About a_{s} the function is expressible as Σ_{n=−∞}^{∞} A_{n}(z − a_{s})^{n}; let ƒ_{s}(z) = Σ_{n=−∞}^{−1} A_{n}(z − a_{s})^{n} be the sum of the negative powers in this expansion. Assuming z = 0 not to be a singular point, let ƒ_{s}(z) be expanded in powers of z, in the form Σ_{n=0}^{∞} C_{n}z^{n}, and μ_{s} be chosen so that F_{s}(z) = ƒ_{s}(z) − Σ_{n=0}^{μ_s−1} C_{n}z^{n} = Σ_{n=μ_s}^{∞} C_{n}z^{n} is, for |z| < r_{s} < |a_{s}|, less in absolute value than the general term ε_{s} of a fore-agreed convergent series of real positive terms. Then the series φ(z) = Σ_{s=1}^{∞} F_{s}(z) converges uniformly in any finite region of the plane, other than at the points a_{s}, and is expressible about any point by a power series, and near a_{s}, φ(z) − ƒ_{s}(z) is expressible by a power series in z − a_{s}. Thus F(z) − φ(z) is an integral function. In particular when all the finite singularities of F(z) are poles, F(z) is hereby expressed as the sum of an integral function and a series of rational functions. The condition |F_{s}(z)| < ε_{s} is imposed only to render the series ΣF_{s}(z) uniformly convergent; this condition may in particular cases be satisfied by a series ΣG_{s}(z) where G_{s}(z) = ƒ_{s}(z) − Σ_{n=0}^{ν_s−1} C_{n}z^{n} and ν_{s} < μ_{s}. An example of the theorem is the function π cot πz − z^{−1}, for which, taking at first only half the poles, ƒ_{s}(z) = 1/(z − s); in this case the series ΣF_{s}(z) where F_{s}(z) = (z − s)^{−1} + s^{−1} is uniformly convergent; thus π cot πz − z^{−1} − Σ_{s=−∞}^{∞} [(z − s)^{−1} + s^{−1}], where s = 0 is excluded from the summation, is an integral function. It can be proved that this integral function vanishes.
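The expansion of π cot πz quoted above (and the vanishing of the residual integral function) can be tested numerically by truncating the sum over the poles; the test point and truncation are illustrative assumptions:

```python
# Truncated Mittag-Leffler expansion: π cot πz ≈ 1/z + Σ' [(z−s)⁻¹ + s⁻¹],
# the primed sum over nonzero integers s, here with |s| ≤ N.
import cmath

def cot_series(z, N=20000):
    """1/z plus the sum, over s = ±1, ..., ±N, of (z − s)^(−1) + s^(−1)."""
    total = 1 / z
    for s in range(1, N + 1):
        total += 1 / (z - s) + 1 / s
        total += 1 / (z + s) - 1 / s
    return total

z = 0.3 + 0.2j
series_value = cot_series(z)
direct = cmath.pi * cmath.cos(cmath.pi * z) / cmath.sin(cmath.pi * z)
```

Pairing the terms for s and −s makes the tail fall off like 1/s², so the truncated sum converges to the direct value.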
Considering an integral function ƒ(z), if there be no finite positions of z for which this function vanishes, the function λ[ƒ(z)], the logarithm of ƒ(z), is at once seen to be an integral function, φ(z), or ƒ(z) = exp[φ(z)]; if, however great R may be, there be only a finite number of values of z for which ƒ(z) vanishes, say z = a_{1}, ... a_{m}, then it is at once seen that ƒ(z) = exp[φ(z)].(z − a_{1})^{h_1} ... (z − a_{m})^{h_m}, where φ(z) is an integral function, and h_{1}, ... h_{m} are positive integers. If, however, ƒ(z) vanish for z = a_{1}, a_{2}, ... where |a_{1}| ≤ |a_{2}| ≤ ... and limit |a_{n}| = ∞, and if for simplicity we assume that z = 0 is not a zero and all the zeros a_{1}, a_{2}, ... are of the first order, we find, by applying the preceding theorem to the function [1/ƒ(z)][dƒ(z)/dz], that ƒ(z) = exp[φ(z)] Π_{n=1}^{∞} {(1 − z/a_{n}) exp φ_{n}(z)}, where φ(z) is an integral function, and φ_{n}(z) is an integral polynomial of the form φ_{n}(z) = z/a_{n} + z^{2}/2a_{n}^{2} + ... + z^{s}/sa_{n}^{s}. The number s may be the same for all values of n, or it may increase indefinitely with n; it is sufficient in any case to take s = n. In particular for the function sin πx/πx, we have
sin πx/πx = Π {(1 − x/n) exp(x/n)},

where n = 0 is excluded from the product. Or again we have
1/Γ(x) = x e^{Cx} Π_{n=1}^{∞} {(1 + x/n) exp(−x/n)},

where C is a constant (Euler's constant), and Γ(x) is a function expressible when x is real and positive by the integral ∫_{0}^{∞} e^{−t} t^{x−1} dt.
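The product for sin πx/πx can be checked numerically: pairing the factors for n and −n gives (1 − x/n)exp(x/n) · (1 + x/n)exp(−x/n) = 1 − x²/n², so the partial products Π(1 − x²/n²) should approach the function. The sample point and truncation are assumptions:

```python
# Partial Weierstrass product for sin πx/(πx), with the n and −n factors
# paired into 1 − x²/n², compared against the function itself.
import math

def sin_product(x, N=100000):
    """Partial product of (1 − x²/n²) for n = 1, ..., N."""
    p = 1.0
    for n in range(1, N + 1):
        p *= 1 - x * x / (n * n)
    return p

x = 0.37
partial = sin_product(x)
exact = math.sin(math.pi * x) / (math.pi * x)
```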
There exist interesting investigations as to the connexion of the value of s above, the law of increase of the modulus of the integral function ƒ(z), and the law of increase of the coefficients in the series ƒ(z) = Σa_{n}z^{n} as n increases (see the bibliography below under Integral Functions). It can be shown, moreover, that an integral function actually assumes every finite complex value, save, in exceptional cases, one value at most. For instance, the function exp (z) assumes every finite value except zero (see below under § 21, Modular Functions).
The two theorems given above, the one, known as Mittag-Leffler’s theorem, relating to the expression as a sum of simpler functions of a function whose singular points have the point z = ∞ as their only limiting point, the other, Weierstrass’s factor theorem, giving the expression of an integral function as a product of factors each with only one zero in the finite part of the plane, may be respectively generalized as follows:—
I. If a_{1}, a_{2}, a_{3}, ... be an infinite series of isolated points having the points of the aggregate (c) as their limiting points, so that in any neighbourhood of a point of (c) there exists an infinite number of the points a_{1}, a_{2}, ..., and with every point a_{i} there be associated a polynomial in (z − a_{i})^{−1}, say g_{i}; then there exists a single valued function whose region of existence excludes only the points (a) and the points (c), having in a point a_{i} a pole whereat the expansion consists of the terms g_{i}, together with a power series in z − a_{i}; the function is expressible as an infinite series of terms g_{i} − γ_{i}, where γ_{i} is also a rational function.
II. With a similar aggregate (a), with limiting points (c), suppose with every point a_{i} there is associated a positive integer r_{i}. Then there exists a single valued function whose region of existence excludes only the points (c), vanishing to order r_{i} at the point a_{i}, but not elsewhere, expressible in the form

Π_{n=1}^{∞} ( 1 − (a_{n} − c_{n})/(z − c_{n}) )^{r_n} exp(g_{n}),

where with every point a_{n} is associated a proper point c_{n} of (c), and

g_{n} = r_{n} Σ_{s=1}^{μ_n} (1/s) ( (a_{n} − c_{n})/(z − c_{n}) )^{s},

μ_{n} being a properly chosen positive integer.
If it should happen that the points (c) determine a path dividing the plane into separated regions, as, for instance, if a_{n} = R(1 − n^{−1}) exp(iπn√2), when (c) consists of the points of the circle |z| = R, the product expression above denotes different monogenic functions in the different regions, not continuable into one another.
§ 9. Construction of a Monogenic Function with a given Region of Existence.—A series of isolated points interior to a given region can be constructed in infinitely many ways whose limiting points are the boundary points of the region, or are boundary points of the region of such denseness that one of them is found in the neighbourhood of every point of the boundary, however small. Then the application of the last enunciated theorem gives rise to a function having no singularities in the interior of the region, but having a singularity in a boundary point in every small neighbourhood of every boundary point; this function has the given region as region of existence.
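By way of a concrete (and purely illustrative) aggregate of this kind, the points a_{n} = R(1 − n^{−1}) exp (iπ√2·n) mentioned above accumulate on every point of the circle |z| = R, because the angles πn√2 are equidistributed modulo 2π. The sketch below merely verifies that each of a few sample points of the circle |z| = 1 has points a_{n} arbitrarily near it:

```python
import cmath, math

def a(n, R=1.0):
    # the points a_n = R(1 - 1/n) exp(i*pi*sqrt(2)*n)
    return R * (1.0 - 1.0 / n) * cmath.exp(1j * math.pi * math.sqrt(2) * n)

pts = [a(n) for n in range(2, 5001)]
# for several target points on |z| = 1, find the nearest a_n with n <= 5000
for k in range(8):
    target = cmath.exp(2j * math.pi * k / 8)
    print(min(abs(p - target) for p in pts))  # each minimum is small
```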
§ 10. Expression of a Monogenic Function by means of Rational Functions in a given Region.—Suppose that we have a region R_{0} of the plane, as previously explained, for all the interior or boundary points of which z is finite, and let its boundary points, consisting of one or more closed polygonal paths, no two of which have a point in common, be called C_{0}. Further suppose that all the points of this region, including the boundary points, are interior points of another region R, whose boundary is denoted by C. Let z be restricted to be within or upon the boundary of C_{0}; let a, b, ... be finite points upon C or outside R. Then when b is near enough to a, the fraction (a − b)/(z − b) is arbitrarily small for all positions of z; say
|(a − b)/(z − b)| < ε, for |a − b| < η;
the rational function of the complex variable t,
[1/(t − a)] [1 − ((a − b)/(t − b))^{n}],
in which n is a positive integer, is not infinite at t = a, but has a pole at t = b. By taking n large enough, the value of this function, for all positions of t belonging to R_{0}, differs as little as may be desired from (t − a)^{−1}. By taking a sum of terms such as
F = Σ A_{p} {[1/(t − a)] [1 − ((a − b)/(t − b))^{n}]}^{p},
we can thus build a rational function differing, in value, in R_{0}, as little as may be desired from a given rational function ƒ = Σ A_{p} (t − a)^{−p},
and differing, outside R or upon the boundary of R, from ƒ, in the fact that while ƒ is infinite at t = a, F is infinite only at t = b. By a succession of steps of this kind we thus have the theorem that, given a rational function of t whose poles are outside R or upon the boundary of R, and an arbitrary point c outside R or upon the boundary of R, which can be reached by a finite continuous path outside R from all the poles of the rational function, we can build another rational function differing in R_{0} arbitrarily little from the former, whose poles are all at the point c.
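The elementary step of this argument is easily checked numerically. With the hypothetical choices a = 2, b = 5/2 and t confined to the disc |t| ⋜ 9/10, the quotient (a − b)/(t − b) has modulus less than 1/3, so the function with its pole moved to b differs from (t − a)^{−1} by less than (1/3)^{n}/|t − a|:

```python
import cmath, math

def pole_at_a(t, a):
    return 1.0 / (t - a)

def pole_pushed(t, a, b, n):
    # (1/(t-a))[1 - ((a-b)/(t-b))^n]: the bracket vanishes at t = a,
    # so the function is finite there; its only pole is at t = b.
    return (1.0 - ((a - b) / (t - b)) ** n) / (t - a)

a, b, n = 2.0, 2.5, 20
for k in range(12):
    t = 0.9 * cmath.exp(2j * math.pi * k / 12)   # points on |t| = 0.9
    print(abs(pole_pushed(t, a, b, n) - pole_at_a(t, a)))  # tiny
print(abs(pole_pushed(a + 1e-6, a, b, n)))  # stays finite near t = a
```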
Now any monogenic function ƒ(t) whose region of definition includes C and the interior of R can be represented at all points z in R_{0} by
ƒ(z) = (1/2πi) ∫ [ƒ(t)/(t − z)] dt,
where the path of integration is C. This integral is the limit of a sum
S = (1/2πi) Σ [ƒ(t_{i}) (t_{i+1} − t_{i})]/(t_{i} − z),
where the points t_{i} are upon C; and the proof we have given of the existence of the limit shows that the sum S converges to ƒ(z) uniformly in regard to z, when z is in R_{0}, so that we can suppose, when the subdivision of C into intervals t_{i+1} − t_{i} has been carried sufficiently far, that
|S − ƒ(z)| < ε,
for all points z of R_{0}, where ε is arbitrary and agreed upon beforehand. The function S is, however, a rational function of z with poles upon C, that is external to R_{0}. We can thus find a rational function differing arbitrarily little from S, and therefore arbitrarily little from ƒ(z), for all points z of R_{0}, with poles at arbitrary positions outside R_{0} which can be reached by finite continuous curves lying outside R from the points of C.
In particular, to take the simplest case, if C_{0}, C be simple closed polygons, and Γ be a path to which C approximates by taking the number of sides of C continually greater, we can find a rational function differing arbitrarily little from ƒ(z) for all points of R_{0} whose poles are at one finite point c external to Γ. By a transformation of the form t − c = r^{−1}, with the appropriate change in the rational function, we can suppose this point c to be at infinity, in which case the rational function becomes a polynomial. Suppose ε_{1}, ε_{2}, ... to be an indefinitely continued sequence of real positive numbers, converging to zero, and P_{r} to be the polynomial such that, within C_{0}, |P_{r} − ƒ(z)| < ε_{r}; then the infinite series of polynomials
P_{1} + (P_{2} − P_{1}) + (P_{3} − P_{2}) + ...,
whose sum to n terms is P_{n}(z), converges for all finite values of z and represents ƒ(z) within C_{0}.
When C consists of a series of disconnected polygons, some of which may include others, and, by increasing indefinitely the number of sides of the polygons C, the points C become the boundary points Γ of a region, we can suppose the poles of the rational function, constructed to approximate to ƒ(z) within R_{0}, to be at points of Γ. A series of rational functions of the form
F_{1} + (F_{2} − F_{1}) + (F_{3} − F_{2}) + ...
then, as before, represents ƒ(z) within R_{0}. And R_{0} may be taken to coincide as nearly as desired with the interior of the region bounded by Γ.
§ 11. Expression of (1 − z)^{−1} by means of Polynomials. Applications.—We pursue the ideas just cursorily explained in some further detail.
Let c be an arbitrary real positive quantity; putting the complex variable ζ = ξ + iη, enclose the points ζ = 1, ζ = 1 + c by means of (i.) the straight lines η = ±a, from ξ = 1 to ξ = 1 + c, (ii.) a semicircle convex to ζ = 0 of equation (ξ − 1)^{2} + η^{2} = a^{2}, (iii.) a semicircle concave to ζ = 0 of equation (ξ − 1 − c)^{2} + η^{2} = a^{2}. The quantities c and a are to remain fixed. Take a positive integer r so that (1/r)(c/a) is less than unity, and put σ = (1/r)(c/a). Now take c_{s} = 1 + s(c/r), for s = 1, 2, ..., r, so that c_{0} = 1, c_{r} = 1 + c, and c_{s} − c_{s−1} = c/r;
if n_{1}, n_{2}, ... n_{r}, be positive integers, the rational function
[1/(1 − ζ)] {1 − [(c_{1} − 1)/(c_{1} − ζ)]^{n_{1}}}
is finite at ζ = 1, and has a pole of order n_{1} at ζ = c_{1}; the rational function
[1/(1 − ζ)] {1 − [(c_{1} − 1)/(c_{1} − ζ)]^{n_{1}}} {1 − [(c_{2} − c_{1})/(c_{2} − ζ)]^{n_{2}}}^{n_{1}}
is thus finite except for ζ = c_{2}, where it has a pole of order n_{1}n_{2}; finally, writing
x_{s} = [(c_{s} − c_{s−1})/(c_{s} − ζ)]^{n_{s}},
the rational function
U = (1 − ζ)^{−1} (1 − x_{1}) (1 − x_{2})^{n_{1}} (1 − x_{3})^{n_{1}n_{2}} ... (1 − x_{r})^{n_{1}n_{2} ... n_{r−1}}
has a pole only at ζ = 1 + c, of order n_{1}n_{2} ... n_{r}.
The difference (1 − ζ)^{−1} − U is of the form (1 − ζ)^{−1}P, where P, of the form
1 − (1 − ρ_{1}) (1 − ρ_{2}) ... (1 − ρ_{k}),
in which there are equalities among ρ_{1}, ρ_{2}, ... ρ_{k}, is of the form
Σρ_{1} − Σρ_{1}ρ_{2} + Σρ_{1}ρ_{2}ρ_{3} − ...;
therefore, if r_{i} = |ρ_{i}|, we have
|P| < Σr_{1} + Σr_{1}r_{2} + Σr_{1}r_{2}r_{3} + ... < (1 + r_{1}) (1 + r_{2}) ... (1 + r_{k}) − 1;
now, so long as ζ is without the closed curve above described round ζ = 1, ζ = 1 + c, we have
|1/(1 − ζ)| < 1/a,   |(c_{m} − c_{m−1})/(c_{m} − ζ)| < (c/r)/a = σ,
and hence
|(1 − ζ)^{−1} − U| < a^{−1} {(1 + σ^{n_{1}}) (1 + σ^{n_{2}})^{n_{1}} (1 + σ^{n_{3}})^{n_{1}n_{2}} ... (1 + σ^{n_{r}})^{n_{1}n_{2} ... n_{r−1}} − 1}.
Take an arbitrary real positive ε, and a positive number μ such that e^{μ} − 1 < εa; then choose n_{1} so that σ^{n_{1}} < μ/(1 + μ), whence σ^{n_{1}}/(1 − σ^{n_{1}}) < μ; and choose n_{2}, n_{3}, ..., n_{r} so that σ^{n_{2}} < σ^{2n_{1}}/n_{1}, σ^{n_{3}} < σ^{3n_{1}}/(n_{1}n_{2}), ..., σ^{n_{r}} < σ^{rn_{1}}/(n_{1}n_{2} ... n_{r−1}); then, as 1 + x < e^{x}, we have
|(1 − ζ)^{−1} − U| < a^{−1} {exp (σ^{n_{1}} + n_{1}σ^{n_{2}} + n_{1}n_{2}σ^{n_{3}} + ... + n_{1}n_{2} ... n_{r−1}σ^{n_{r}}) − 1},
and therefore less than
a^{−1} {exp (σ^{n_{1}} + σ^{2n_{1}} + σ^{3n_{1}} + ... + σ^{rn_{1}}) − 1},
which is less than
a^{−1} [exp (σ^{n_{1}}/(1 − σ^{n_{1}})) − 1],
and therefore less than ε.
The rational function U, with a pole at ζ = 1 + c, differs therefore from (1 − ζ)^{−1}, for all points outside the closed region put about ζ = 1, ζ = 1 + c, by a quantity numerically less than ε. So long as a remains the same, r and σ will remain the same, and a less value of ε will require at most an increase of the numbers n_{1}, n_{2}, ... n_{r}; but if a be taken smaller it may be necessary to increase r, and with this the complexity of the function U.
Now put
z = cζ/(c + 1 − ζ),   ζ = (c + 1)z/(c + z);
thereby the points ζ = 0, 1, 1 + c become the points z = 0, 1, ∞, the function (1 − z)^{−1} being given by (1 − z)^{−1} = c(c + 1)^{−1} (1 − ζ)^{−1} + (c + 1)^{−1}; the function U becomes a rational function of z with a pole only at z = ∞, that is, it becomes a polynomial in z, say [(c + 1)/c] H − 1/c, where H is also a polynomial in z, and
1/(1 − z) − H = [c/(c + 1)] [1/(1 − ζ) − U];
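The substitution and the accompanying identity are purely algebraic and can be verified directly; the sketch below, with an arbitrary c and a few sample points, checks that ζ = 0, 1, 1 + c go to z = 0, 1, ∞ and that the stated relation between (1 − z)^{−1} and (1 − ζ)^{−1} holds identically:

```python
c = 2.0  # an arbitrary positive constant

def z_of_zeta(zeta):
    # the substitution z = c*zeta/(c + 1 - zeta)
    return c * zeta / (c + 1.0 - zeta)

# zeta = 0 and zeta = 1 map to z = 0 and z = 1; at zeta = 1 + c the
# denominator vanishes, i.e. that point maps to z = infinity.
print(z_of_zeta(0.0), z_of_zeta(1.0))  # 0.0 1.0

# the identity (1 - z)^{-1} = [c/(c+1)](1 - zeta)^{-1} + 1/(c+1)
for zeta in [0.3 + 0.1j, -1.2 + 0.7j, 2.5 - 0.4j]:
    z = z_of_zeta(zeta)
    lhs = 1.0 / (1.0 - z)
    rhs = c / (c + 1.0) / (1.0 - zeta) + 1.0 / (c + 1.0)
    print(abs(lhs - rhs))  # zero up to rounding
```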
the lines η = ±a become the two circles expressed, if z = x + iy, by
(x + c)^{2} + y^{2} = ± [c(c + 1)/a] y,
the points (η = 0, ξ = 1 − a), (η = 0, ξ = 1 + c + a) become respectively the points (y = 0, x = c(1 − a)/(c + a)), (y = 0, x = −c(1 + c + a)/a), whose limiting positions for a = 0 are respectively (y = 0, x = 1), (y = 0, x = −∞). The circle (x + c)^{2} + y^{2} = c(c + 1)y/a can be written
y = (x + c)^{2}/(2μ) + [(x + c)^{4}/(2μ)] {μ + √[μ^{2} − (x + c)^{2}]}^{−2},
where μ = c(c + 1)/(2a); its ordinate y, for a given value of x, can therefore be made arbitrarily small by taking a sufficiently small.
We have thus proved the following result; taking in the plane of z any finite region of which every interior and boundary point is at a finite distance, however short, from the points of the real axis for which 1 ⋜ x ⋜ ∞, we can take a quantity a, and hence, with an arbitrary c, determine a number r; then corresponding to an arbitrary ε_{s}, we can determine a polynomial P_{s}, such that, for all points interior to the region, we have
|(1 − z)^{−1} − P_{s}| < ε_{s};
thus the series of polynomials
P_{1} + (P_{2} − P_{1}) + (P_{3} − P_{2}) + ...,
constructed with an arbitrary aggregate of real positive numbers ε_{1}, ε_{2}, ε_{3}, ... with zero as their limit, converges uniformly and represents (1 − z)^{−1} for the whole region considered.
§ 12. Expansion of a Monogenic Function in Polynomials, over a Star Region.—Now consider any monogenic function ƒ(z) of which the origin is not a singular point; joining the origin to any singular point by a straight line, let the part of this straight line, produced beyond the singular point, lying between the singular point and z = ∞, be regarded as a barrier in the plane, the portion of this straight line from the origin to the singular point being erased. Consider next any finite region of the plane, whose boundary points constitute a path of integration, in a sense previously explained, of which every point is at a finite distance greater than zero from each of the barriers before explained; we suppose this region to be such that any line joining the origin to a boundary point, when produced, does not meet the boundary again. For every point x in this region R we can then write
2πiƒ(x) = ∫ (dt/t) [ƒ(t)/(1 − xt^{−1})],
where ƒ(x) represents a monogenic branch of the function, in case it be not everywhere single valued, and t is on the boundary of the region. Describe now another region R_{0} lying entirely within R, and let x be restricted to be within R_{0} or upon its boundary; then for any point t on the boundary of R, the points z of the plane for which zt^{−1} is real and positive and equal to or greater than 1, being points for which |z| = |t| or |z| > |t|, are without the region R_{0}, and not infinitely near to its boundary points. Taking then an arbitrary real positive ε we can determine a polynomial in xt^{−1}, say P(xt^{−1}), such that for all points x in R_{0} we have
|(1 − xt^{−1})^{−1} − P(xt^{−1})| < ε;
the form of this polynomial may be taken the same for all points t on the boundary of R, and hence, if E be a proper variable quantity of modulus not greater than ε,
|2πiƒ(x) − ∫ (dt/t) ƒ(t)P(xt^{−1})| = |∫ (dt/t) ƒ(t)E| ⋜ εLM,
where L is the length of the path of integration, the boundary of R, and M is a real positive quantity such that upon this boundary |t^{−1} ƒ(t)| < M. If now
P(xt^{−1}) = c_{0} + c_{1}(xt^{−1}) + c_{2}(xt^{−1})^{2} + ... + c_{m}(xt^{−1})^{m},
and
(1/2πi) ∫ t^{−r−1} ƒ(t)dt = μ_{r},
this gives
|ƒ(x) − (c_{0}μ_{0} + c_{1}μ_{1}x + c_{2}μ_{2}x^{2} + ... + c_{m}μ_{m}x^{m})| ⋜ εLM/2π,
where the quantities μ_{0}, μ_{1}, μ_{2}, ... are the coefficients in the expansion of ƒ(x) about the origin.
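The coefficients μ_{r} lend themselves to direct computation: on a circle |t| = ρ the contour integral becomes a simple average over equally spaced points (the trapezoidal rule, which converges very rapidly for periodic analytic integrands). Taking, purely for illustration, ƒ(t) = (1 − t/2)^{−1}, whose coefficients about the origin are 2^{−r}:

```python
import cmath, math

def mu(f, r, rho=1.0, M=256):
    # (1/2pi*i) * integral of t^{-r-1} f(t) dt over |t| = rho; with
    # t = rho*exp(i*theta) this is the average of f(t) t^{-r} over theta
    s = 0.0 + 0.0j
    for j in range(M):
        t = rho * cmath.exp(2j * math.pi * j / M)
        s += f(t) / t ** r
    return s / M

f = lambda t: 1.0 / (1.0 - 0.5 * t)   # Taylor coefficients 2^{-r}
for r in range(6):
    print(abs(mu(f, r) - 0.5 ** r))   # essentially machine precision
```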
If then an arbitrary finite region be constructed of the kind explained, excluding the barriers joining the singular points of ƒ(x) to x = ∞, it is possible, corresponding to an arbitrary real positive number σ, to determine a number m, and a polynomial Q(x), of order m, such that for all interior points of this region
|ƒ(x) − Q(x)| < σ.
Hence, as before, within this region ƒ(x) can be represented by a series of polynomials, converging uniformly; when ƒ(x) is not a single valued function the series represents one branch of the function.
The same result can be obtained without the use of Cauchy’s integral. We explain briefly the character of the proof. If a monogenic function of t, φ(t), be capable of expression as a power series in t − x about a point x, for |t − x| ⋜ ρ, and for all points of this circle |φ(t)| < g, we know that |φ^{(n)}(x)| < gρ^{−n}(n!). Hence, taking |z| < ρ/3, and, for any assigned positive integer μ, taking m so that for n > m we have (μ + n)^{μ} < (3/2)^{n}, we have
|φ^{(μ+n)}(x)·z^{n}/n!| < [|φ^{(μ+n)}(x)|/(μ + n)!] (μ + n)^{μ} |z|^{n} < (g/ρ^{μ+n}) (3/2)^{n} (ρ/3)^{n} < g/(ρ^{μ}2^{n}),
and therefore
φ^{(μ)}(x + z) = Σ_{n=0}^{m} [φ^{(μ+n)}(x)/n!] z^{n} + ε_{μ},
where
|ε_{μ}| < (g/ρ^{μ}) Σ_{n=m+1}^{∞} 2^{−n} = g/(ρ^{μ}2^{m}).
Now draw barriers as before, directed from the origin, joining the singular points of φ(z) to z = ∞, take a finite region excluding all these barriers, let ρ be a quantity less than the radii of convergence of all the power series developments of φ(z) about interior points of this region, so chosen moreover that no circle of radius ρ with centre at an interior point of the region includes any singular point of φ(z), let g be such that |φ(z)| < g for all circles of radius ρ whose centres are interior points of the region, and, x being any interior point of the region, choose the positive integer n so that |x|/n < ρ/3; then take the points a_{1} = x/n, a_{2} = 2x/n, a_{3} = 3x/n, ... a_{n} = x; it is supposed that the region is so taken that, whatever x may be, all these are interior points of the region. Then by what has been said, replacing x, z respectively by 0 and x/n, we have
φ^{(μ)}(a_{1}) = Σ_{λ_{1}=0}^{m_{1}} [φ^{(μ+λ_{1})}(0)/λ_{1}!] (x/n)^{λ_{1}} + α_{μ},
with
|α_{μ}| < g/(ρ^{μ}2^{m_{1}}),
provided (μ + m_{1} + 1)^{μ} < (3/2)^{m_{1}+1}; in fact for μ ⋜ 2n^{2n−2} it is sufficient to take m_{1} = n^{2n}; by another application of the same inequality, replacing x, z respectively by a_{1} and x/n, we have
φ^{(μ)}(a_{2}) = Σ_{λ_{2}=0}^{m_{2}} [φ^{(μ+λ_{2})}(a_{1})/λ_{2}!] (x/n)^{λ_{2}} + β′_{μ},
where |β′_{μ}| < g/(ρ^{μ}2^{m_{2}}), provided (μ + m_{2} + 1)^{μ} < (3/2)^{m_{2}+1}; we take m_{2} = n^{2n−2}, supposing μ < 2n^{2n−4}. So long as λ_{2} ⋜ m_{2} ⋜ n^{2n−2} and μ < 2n^{2n−4} we have μ + λ_{2} < 2n^{2n−2}, and we can use the previous inequality to substitute here for φ^{(μ+λ_{2})}(a_{1}). When this is done we find
φ^{(μ)}(a_{2}) = Σ_{λ_{2}=0}^{m_{2}} Σ_{λ_{1}=0}^{m_{1}} [φ^{(μ+λ_{1}+λ_{2})}(0)/(λ_{1}! λ_{2}!)] (x/n)^{λ_{1}+λ_{2}} + β_{μ},
where |β_{μ}| < 2g/(ρ^{μ}2^{m_{2}}), the numbers m_{1}, m_{2} being respectively n^{2n} and n^{2n−2}.
Applying then the original inequality to φ(μ) (a_{3}) = φ(μ) (a_{2} + x/n), and then using the series just obtained, we find a series for φ(μ) (a_{3}). This process being continued, we finally obtain
φ(x) = Σ_{λ_{1}=0}^{m_{1}} Σ_{λ_{2}=0}^{m_{2}} ... Σ_{λ_{n}=0}^{m_{n}} [φ^{(h)}(0)/K] (x/n)^{h} + ε,
where h = λ_{1} + λ_{2} + ... + λ_{n}, K = λ_{1}! λ_{2}! ... λ_{n}!, m_{1} = n^{2n}, m_{2} = n^{2n−2}, ..., m_{n} = n^{2}, and |ε| < 2g/2^{m_{n}}.
By this formula φ(x) is represented, with any required degree of accuracy, by a polynomial, within the region in question; and thence can be expressed as before by a series of polynomials converging uniformly (and absolutely) within this region.
§ 13. Application of Cauchy’s Theorem to the Determination of Definite Integrals.—Some reference must be made to a method whereby real definite integrals may frequently be evaluated by use of the theorem of the vanishing of the integral of a function of a complex variable round a contour within which the function is single valued and non singular.
We are to evaluate an integral ∫ ba ƒ(x)dx; we form a closed contour of which the portion of the real axis from x = a to x = b forms a part, and consider the integral ∫ƒ(z)dz round this contour, supposing that the value of this integral can be determined along the curve forming the completion of the contour. The contour being supposed such that, within it, ƒ(z) is a single valued and finite function of the complex variable z save at a finite number of isolated interior points, the contour integral is equal to the sum of the values of ∫ƒ(z)dz taken round these points. Two instances will suffice to explain the method. (1) The integral ∫ ∞0 [(tan x)/x] dx is convergent if it be understood to mean the limit when ε, ζ, σ, . . . all vanish of the sum of the integrals
∫_{0}^{(1/2)π−ε} [(tan x)/x] dx,  ∫_{(1/2)π+ε}^{(3/2)π−ζ} [(tan x)/x] dx,  ∫_{(3/2)π+ζ}^{(5/2)π−σ} [(tan x)/x] dx, ...
Now draw a contour consisting in part of the whole of the positive and negative real axis from x = −nπ to x = +nπ, where n is a positive integer, broken by semicircles of small radius whose centres are the points x = ±(1/2)π, x = ±(3/2)π, ..., the contour containing also the lines x = nπ and x = −nπ for values of y between 0 and nπ tan α, where α is a small fixed angle, the contour being completed by the portion of a semicircle of radius nπ sec α which lies in the upper half of the plane and is terminated at the points x = ±nπ, y = nπ tan α. Round this contour the integral ∫ [(tan z)/z] dz has the value zero. The contributions to this contour integral arising from the semicircles of centres −(1/2)(2s − 1)π, +(1/2)(2s − 1)π, supposed of the same radius, are at once seen to have a sum which ultimately vanishes when the radius of the semicircles diminishes to zero. The part of the contour lying on the real axis gives what is meant by 2 ∫_{0}^{nπ} [(tan x)/x] dx. The contribution to the contour integral from the two straight portions at x = ±nπ is
∫_{0}^{nπ tan α} i dy [(tan iy)/(nπ + iy) − (tan iy)/(−nπ + iy)],
where i tan iy = −[exp (y) − exp (−y)]/[exp (y) + exp (−y)] is a real quantity which is numerically less than unity, so that the contribution in question is numerically less than
∫_{0}^{nπ tan α} dy [2nπ/(n^{2}π^{2} + y^{2})], that is, than 2α.
Finally, for the remaining part of the contour, for which, with R = nπ sec α, we have z = R(cos θ + i sin θ) = RE(iθ), we have
dz/z = i dθ,   i tan z = [exp (−R sin θ) E(iR cos θ) − exp (R sin θ) E(−iR cos θ)]/[exp (−R sin θ) E(iR cos θ) + exp (R sin θ) E(−iR cos θ)];
when n and therefore R is very large, the limit of this contribution to the contour integral is thus
−(π − 2α).
Making n very large the result obtained for the whole contour is
2 ∫_{0}^{∞} [(tan x)/x] dx − (π − 2α) − 2αε = 0,
where ε is numerically less than unity. Now supposing α to diminish to zero we finally obtain
∫_{0}^{∞} [(tan x)/x] dx = (1/2)π.
(2) For another case, to illustrate a different point, we may take the integral
∫ [z^{a−1}/(1 + z)] dz,
wherein a is a real quantity such that 0 < a < 1, and the contour consists of a small circle, z = rE(iθ), terminated at the points x = r cos α, y = ±r sin α, where α is small, of the two lines y = ±r sin α for r cos α ⋜ x ⋜ R cos β, where R sin β = r sin α, and finally of a large circle z = RE(iφ), terminated at the points x = R cos β, y = ±R sin β. We suppose α and β both zero, and that the phase of z is zero for r cos α ⋜ x ⋜ R cos β, y = r sin α = R sin β. Then on r cos α ⋜ x ⋜ R cos β, y = −r sin α, the phase of z will be 2π, and z^{a−1} will be equal to x^{a−1} exp [2πi(a − 1)], where x is real and positive. The two straight portions of the contour will thus together give a contribution
[1 − exp (2πia)] ∫_{r cos α}^{R cos β} [x^{a−1}/(1 + x)] dx.
It can easily be shown that if the limit of zƒ(z) for z = 0 is zero, the integral ∫ƒ(z)dz taken round an arc, of given angle, of a small circle enclosing the origin is ultimately zero when the radius of the circle diminishes to zero, and if the limit of zƒ(z) for z = ∞ is zero, the same integral taken round an arc, of given angle, of a large circle whose centre is the origin is ultimately zero when the radius of the circle increases indefinitely; in our case with ƒ(z) = z^{a−1}/(1 + z), we have zƒ(z) = z^{a}/(1 + z), which, for 0 < a < 1, diminishes to zero both for z = 0 and for z = ∞. Thus, finally, the limit of the contour integral when r = 0, R = ∞ is
[1 − exp (2πia)] ∫_{0}^{∞} [x^{a−1}/(1 + x)] dx.
Within the contour ƒ(z) is single valued, and has a pole at z = −1; at this point the phase of z is π and z^{a−1} is exp [iπ(a − 1)] or −exp (iπa); this is then the residue of ƒ(z) at z = −1; we thus have
[1 − exp (2πia)] ∫_{0}^{∞} [x^{a−1}/(1 + x)] dx = −2πi exp (iπa),
that is
∫_{0}^{∞} [x^{a−1}/(1 + x)] dx = π/sin aπ.
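The value π/sin aπ of the integral ∫_{0}^{∞} [x^{a−1}/(1 + x)] dx can also be confirmed by elementary quadrature: the substitution x = e^{s} turns it into ∫ e^{as}/(1 + e^{s}) ds over the whole real line, whose integrand decays exponentially in both directions when 0 < a < 1. A hypothetical numerical sketch:

```python
import math

def integral(a, h=0.01, L=60.0):
    # trapezoidal rule for the integral of e^{a s}/(1 + e^{s}) over [-L, L],
    # i.e. the integral of x^{a-1}/(1+x) from 0 to infinity after x = e^s
    n = int(2 * L / h)
    total = 0.0
    for j in range(n + 1):
        s = -L + j * h
        w = 0.5 if j in (0, n) else 1.0
        total += w * math.exp(a * s) / (1.0 + math.exp(s))
    return total * h

a = 0.3
print(integral(a), math.pi / math.sin(math.pi * a))  # agree closely
```

The trapezoidal rule is extraordinarily accurate here because the integrand is analytic in a strip about the real axis and decays exponentially.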
§ 14. Doubly Periodic Functions.—An excellent illustration of the preceding principles is furnished by the theory of single valued functions having in the finite part of the plane no singularities but poles, which have two periods.
Before passing to this it may be convenient to make here a few remarks as to the periodicity of (single valued) monogenic functions. To say that ƒ(z) is periodic is to say that there exists a constant ω such that for every point z of the interior of the region of existence of ƒ(z) we have ƒ(z + ω) = ƒ(z). This involves, considering all existing periods ω = ρ + iσ, that there exists a lower limit of ρ^{2} + σ^{2} other than zero; for otherwise all the differential coefficients of ƒ(z) would be zero, and ƒ(z) a constant; we can then suppose that not both ρ and σ are numerically less than ε, where ε > 0. Hence, if g be any real quantity, since the range (−g, ... g) contains only a finite number of intervals of length ε, and there cannot be two periods ω = ρ + iσ such that με ⋜ ρ < (μ + 1)ε, νε ⋜ σ < (ν + 1)ε, where μ, ν are integers, it follows that there is only a finite number of periods for which both ρ and σ are in the interval (−g ... g). Considering then all the periods of the function which are real multiples of one period ω, and in particular those periods λω wherein 0 < λ ⋜ 1, there is a lower limit for λ, greater than zero, and therefore, since there is only a finite number of such periods for which the real and imaginary parts both lie between −g and g, a least value of λ, say λ_{0}. If Ω = λ_{0}ω and λ = Mλ_{0} + λ′, where M is an integer and 0 ⋜ λ′ < λ_{0}, any period λω is of the form MΩ + λ′ω; since, however, Ω, MΩ and λω are periods, so also is λ′ω, and hence, by the construction of λ_{0}, we have λ′ = 0; thus all periods which are real multiples of ω are expressible in the form MΩ, where M is an integer and Ω a period.
If beside ω the function have a period ω′ which is not a real multiple of ω, consider all existing periods of the form μω + νω′ wherein μ, ν are real, and of these those for which 0 ⋜ μ ⋜ 1, 0 < ν ⋜ 1; as before there is a least value for ν, actually occurring in one or more periods, say in the period Ω′ = μ_{0}ω + ν_{0}ω′; now take, if μω + νω′ be a period, ν = N′ν_{0} + ν′, where N′ is an integer, and 0 ⋜ ν′ < ν_{0}; thence μω + νω′ = μω + N′(Ω′ − μ_{0}ω) + ν′ω′; take then μ − N′μ_{0} = Nλ_{0} + λ′, where N is an integer and λ_{0} is as above, and 0 ⋜ λ′ < λ_{0}; we thus have a period NΩ + N′Ω′ + λ′ω + ν′ω′, and hence a period λ′ω + ν′ω′, wherein λ′ < λ_{0}, ν′ < ν_{0}; hence ν′ = 0 and λ′ = 0. All periods of the form μω + νω′ are thus expressible in the form NΩ + N′Ω′, where Ω, Ω′ are periods and N, N′ are integers. But in fact any complex quantity, P + iQ, and in particular any other possible period of the function, is expressible, with μ, ν real, in the form μω + νω′; for if ω = ρ + iσ, ω′ = ρ′ + iσ′, this requires only P = μρ + νρ′, Q = μσ + νσ′, equations which, since ω′/ω is not real, always give finite values for μ and ν.
It thus appears that if a single valued monogenic function of z be periodic, either all its periods are real multiples of one of them, and then all are of the form MΩ, where Ω is a period and M is an integer, or else, if the function have two periods whose ratio is not real, then all its periods are expressible in the form NΩ + N′Ω′, where Ω, Ω′ are periods, and N, N′ are integers. In the former case, putting ζ = 2πiz/Ω, and the function ƒ(z) = φ(ζ), the function φ(ζ) has, like exp (ζ), the period 2πi, and if we take t = exp (ζ) or ζ = λ(t) the function is a single valued function of t. If then in particular ƒ(z) is an integral function, regarded as a function of t, it has singularities only for t = 0 and t = ∞, and may be expanded in the form Σ_{n=−∞}^{∞} a_{n} t^{n}.
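For instance (an illustrative check only), ƒ(z) = cos 2πz has the period Ω = 1, and as a function of t = exp (2πiz) it is ½(t + t^{−1}); the coefficients a_{n} are recovered by averaging ƒ(z)t^{−n} over a period:

```python
import cmath, math

def coeff(f, n, M=64):
    # a_n in f(z) = sum of a_n t^n, t = exp(2*pi*i*z), for a function of
    # period 1: a_n is the average of f(z) exp(-2*pi*i*n*z) over a period
    return sum(f(j / M) * cmath.exp(-2j * math.pi * n * j / M)
               for j in range(M)) / M

f = lambda z: cmath.cos(2 * math.pi * z)
for n in (-2, -1, 0, 1, 2):
    print(n, coeff(f, n))  # 1/2 at n = ±1, zero elsewhere
```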
Taking the case when the single valued monogenic function has two periods ω, ω′ whose ratio is not real, we can form a network of parallelograms covering the plane of z whose angular points are the points c + mω + m′ω′, wherein c is some constant and m, m′ are all possible positive and negative integers; choosing arbitrarily one of these parallelograms, and calling it the primary parallelogram, all the values of which the function is at all capable occur for points of this primary parallelogram, any point, z′, of the plane being, as it is called, congruent to a definite point, z, of the primary parallelogram, z′ − z being of the form mω + m′ω′, where m, m′ are integers. Such a function cannot be an integral function, since then, if, in the primary parallelogram |ƒ(z)| < M, it would also be the case, on a circle of centre the origin and radius R, that |ƒ(z)| < M, and therefore, if Σa_{n} z^{n} be the expansion of the function, which is valid for an integral function for all finite values of z, we should have |a_{n}| < MR^{−n}, which can be made arbitrarily small by taking R large enough. The function must then have singularities for finite values of z.
We consider only functions for which these are poles. Of these there cannot be an infinite number in the primary parallelogram, since then those of these poles which are sufficiently near to one of the necessarily existing limiting points of the poles would be arbitrarily near to one another, contrary to the character of a pole. Supposing the constant c used in naming the corners of the parallelograms so chosen that no pole falls on the perimeter of a parallelogram, it is clear that the integral 1/(2πi) ∫ƒ(z) dz round the perimeter of the primary parallelogram vanishes; for the elements of the integral corresponding to two such opposite perimeter points as z, z + ω (or as z, z + ω′) are mutually destructive. This integral is, however, equal to the sum of the residues of ƒ(z) at the poles interior to the parallelogram; this sum is therefore zero. There cannot therefore be such a function having only one pole of the first order in any parallelogram; we shall see that there can be such a function with two poles only in any parallelogram, each of the first order, with residues whose sum is zero, and that there can be such a function with one pole of the second order, having an expansion near this pole of the form (z − a)^{−2} + (power series in z − a).
Considering next the function φ(z) = [ƒ(z)]^{−1} dƒ(z)/dz, it is easily seen that an ordinary point of ƒ(z) is an ordinary point of φ(z), that a zero of order m for ƒ(z) in the neighbourhood of which ƒ(z) has a form, (z − a)^{m} multiplied by a power series, is a pole of φ(z) of residue m, and that a pole of ƒ(z) of order n is a pole of φ(z) of residue −n; manifestly φ(z) has the two periods of ƒ(z). We thus infer, since the sum of the residues of φ(z) is zero, that for the function ƒ(z), the sum of the orders of its vanishing at points belonging to one parallelogram, Σm, is equal to the sum of the orders of its poles, Σn; which is briefly expressed by saying that the number of its zeros is equal to the number of its poles. Applying this theorem to the function ƒ(z) − A, where A is an arbitrary constant, we have the result, that the function ƒ(z) assumes the value A in one of the parallelograms as many times as it becomes infinite. Thus, by what is proved above, every conceivable complex value does arise as a value for the doubly periodic function ƒ(z) in any one of its parallelograms, and in fact at least twice. The number of times it arises is called the order of the function; the result suggests a property of rational functions.
Consider further the integral ∫ z [ƒ′(z)/ƒ(z)] dz, where ƒ′(z) = dƒ(z)/dz, taken round the perimeter of the primary parallelogram; the contribution to this arising from two opposite perimeter points such as z and z + ω is of the form −ω ∫ [ƒ′(z)/ƒ(z)] dz, which, as z increases from z_{0} to z_{0} + ω′, gives, if λ denote the generalized logarithm, −ω {λ [ƒ(z_{0} + ω′)] − λ[ƒ(z_{0})]}, that is, since ƒ(z_{0} + ω′) = ƒ(z_{0}), gives 2πiNω, where N is an integer; similarly the result of the integration along the other two opposite sides is of the form 2πiN′ω′, where N′ is an integer. The integral, however, is equal to 2πi times the sum of the residues of zƒ′(z)/ƒ(z) at the poles interior to the parallelogram. For a zero, of order m, of ƒ(z) at z = a, the contribution to this sum is 2πima, for a pole of order n at z = b the contribution is −2πinb; we thus infer that Σma − Σnb = Nω + N′ω′; this we express in words by saying that the sum of the values of z where ƒ(z) = 0 within any parallelogram is equal to the sum of the values of z where ƒ(z) = ∞ save for integral multiples of the periods. By considering similarly the function ƒ(z) − A where A is an arbitrary constant, we prove that each of these sums is equal to the sum of the values of z where the function takes the value A in the parallelogram.
We pass now to the construction of a function having two arbitrary periods ω, ω′ of unreal ratio, which has a single pole of the second order in any one of its parallelograms.
For this consider first the network of parallelograms whose corners are the points Ω = mω + m′ω′, where m, m′ take all positive and negative integer values; putting a small circle about each corner of this network, let P be a point outside all these circles; this will be interior to a parallelogram whose corners in order may be denoted by z_{0}, z_{0} + ω, z_{0} + ω + ω′, z_{0} + ω′; we shall denote z_{0}, z_{0} + ω by A_{0}, B_{0}; this parallelogram Π_{0} is surrounded by eight other parallelograms, forming with Π_{0} a larger parallelogram Π_{1}, of which one side, for instance, contains the points z_{0} − ω − ω′, z_{0} − ω′, z_{0} − ω′ + ω, z_{0} − ω′ + 2ω, which we shall denote by A_{1}, B_{1}, C_{1}, D_{1}. This parallelogram Π_{1} is surrounded by sixteen of the original parallelograms, forming with Π_{1} a still larger parallelogram Π_{2} of which one side, for instance, contains the points z_{0} − 2ω − 2ω′, z_{0} − ω − 2ω′, z_{0} − 2ω′, z_{0} + ω − 2ω′, z_{0} + 2ω − 2ω′, z_{0} + 3ω − 2ω′, which we shall denote by A_{2}, B_{2}, C_{2}, D_{2}, E_{2}, F_{2}. And so on. Now consider the sum of the inverse cubes of the distances of the point P from the corners of all the original parallelograms. The sum will contain the terms
S_{0} = 1/PA_{0}^{3} + (1/PA_{1}^{3} + 1/PB_{1}^{3} + 1/PC_{1}^{3}) + (1/PA_{2}^{3} + 1/PB_{2}^{3} + ... + 1/PE_{2}^{3}) + ...
and three other sets of terms, each infinite in number, formed in a similar way. If the perpendiculars from P to the sides A_{0}B_{0}, A_{1}B_{1}C_{1}, A_{2}B_{2}C_{2}D_{2}E_{2}, and so on, be p, p + q, p + 2q and so on, the sum S_{0} is at most equal to
1/p^{3} + 3/(p + q)^{3} + 5/(p + 2q)^{3} + ... + (2n + 1)/(p + nq)^{3} + ...
of which the general term is ultimately, when n is large, in a ratio of equality with 2q^{−3} n^{−2}, so that the series S_{0} is convergent, as we know the sum Σn^{−2} to be; this assumes that p ≠ 0; if P be on A_{0}B_{0} the proof for the convergence of S_{0} − 1/PA_{0}^{3}, is the same. Taking the three other sums analogous to S_{0} we thus reach the result that the series
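The comparison series can be watched converging numerically; with the hypothetical values p = q = 1 the differences of successive partial sums shrink roughly like 1/N, as the comparison with Σn^{−2} predicts:

```python
def partial(N, p=1.0, q=1.0):
    # partial sum of 1/p^3 + 3/(p+q)^3 + 5/(p+2q)^3 + ...
    return sum((2 * n + 1) / (p + n * q) ** 3 for n in range(N + 1))

s1, s2, s3 = partial(1000), partial(10000), partial(100000)
print(s2 - s1, s3 - s2)  # successive tails shrink roughly tenfold
```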
φ(z) = −2 Σ (z − Ω)^{−3},
where Ω is mω + m′ω′, and m, m′ are to take all positive and negative integer values, and z is any point outside small circles described with the points Ω as centres, is absolutely convergent. Its sum is therefore independent of the order of its terms. By the nature of the proof, which holds for all positions of z outside the small circles spoken of, the series is also clearly uniformly convergent outside these circles. Each term of the series being a monogenic function of z, the series may therefore be differentiated and integrated outside these circles, and represents a monogenic function. It is clearly periodic with the periods ω, ω′; for φ(z + ω) is the same sum as φ(z) with the terms in a slightly different order. Thus φ(z + ω) = φ(z) and φ(z + ω′) = φ(z).
Consider now the function
ƒ(z) = 1/z^{2} + ∫_{0}^{z} { φ(z) + 2/z^{3} } dz,
where, for the subject of integration, the area of uniform convergence clearly includes the point z = 0; this gives
dƒ(z)/dz = φ(z)
and
ƒ(z) = 1/z^{2} + Σ′ { 1/(z − Ω)^{2} − 1/Ω^{2} },
wherein Σ′ is a sum excluding the term for which m = 0 and m′ = 0. Hence ƒ(z + ω) − ƒ(z) and ƒ(z + ω′) − ƒ(z) are both independent of z. Noticing, however, that, by its form, ƒ(z) is an even function of z, and putting z = −½ω, z = −½ω′ respectively, we infer that also ƒ(z) has the two periods ω and ω′. In the primary parallelogram Π_{0}, however, ƒ(z) is only infinite at z = 0, in the neighbourhood of which its expansion is of the form z^{−2} + (power series in z). Thus ƒ(z) is such a doubly periodic function as was to be constructed, having in any parallelogram of periods only one pole, of the second order.
It can be shown that any single valued meromorphic function of z with ω and ω′ as periods can be expressed rationally in terms of ƒ(z) and φ(z), and that [φ(z)]^{2} is of the form 4[ƒ(z)]^{3} + Aƒ(z) + B,
where A, B are constants. To prove the last of these results, we write, for |z| < |Ω|,
1/(z − Ω)^{2} − 1/Ω^{2} = 2z/Ω^{3} + 3z^{2}/Ω^{4} + ...,
and hence, if Σ′Ω^{−2n} = σ_{n}, since Σ′Ω^{−(2n−1)} = 0, we have, for sufficiently small z other than zero,
ƒ(z) = 1/z^{2} + 3σ_{2}z^{2} + 5σ_{3}z^{4} + ...
and
φ(z) = −2/z^{3} + 6σ_{2}z + 20σ_{3}z^{3} + ...;
using these series we find that the function
F(z) = [φ(z)]^{2} − 4[ƒ(z)]^{3} + 60σ_{2}ƒ(z) + 140σ_{3}
contains no negative powers of z, being equal to a power series in z^{2} beginning with a term in z^{2}. The function F(z) is, however, doubly periodic, with periods ω, ω′, and can only be infinite when either ƒ(z) or φ(z) is infinite; this follows from its form in ƒ(z) and φ(z); thus in one parallelogram of periods it can be infinite only when z = 0; we have proved, however, that it is not infinite, but, on the contrary, vanishes, when z = 0. Being, therefore, never infinite for finite values of z it is a constant, and therefore necessarily always zero. Putting therefore ƒ(z) = ζ and φ(z) = dζ/dz we see that
dz/dζ = (4ζ^{3} − 60σ_{2}ζ − 140σ_{3})^{−1/2}.
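The differential equation [φ(z)]^{2} = 4[ƒ(z)]^{3} − 60σ_{2}ƒ(z) − 140σ_{3} can be verified numerically from the defining series; in the sketch below the periods, the truncation size and the sample point are illustrative, and σ_{2}, σ_{3} are the lattice sums Σ′Ω^{−4}, Σ′Ω^{−6}.

```python
# f(z) is the article's doubly periodic function 1/z^2 + sum'[...] and
# phi(z) = df/dz its derivative; symmetric truncation keeps errors small.
w1, w2 = 2.0, 2.0j
N = 80

def lattice(n=N):
    for m in range(-n, n + 1):
        for mp in range(-n, n + 1):
            if m or mp:
                yield m * w1 + mp * w2

def f(z):
    return 1.0 / z**2 + sum(1.0 / (z - O)**2 - 1.0 / O**2 for O in lattice())

def phi(z):
    return -2.0 / z**3 + sum(-2.0 / (z - O)**3 for O in lattice())

sigma2 = sum(O**-4 for O in lattice())
sigma3 = sum(O**-6 for O in lattice())

z = 0.7 + 0.4j
lhs = phi(z)**2
rhs = 4 * f(z)**3 - 60 * sigma2 * f(z) - 140 * sigma3
print(abs(lhs - rhs))   # small, up to the truncation error of the sums
```

For this square lattice σ_{3} vanishes by symmetry, but the check is written for a general pair of periods.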
Historically it was in the discussion of integrals such as
z = ∫ dζ (4ζ^{3} − 60σ_{2}ζ − 140σ_{3})^{−1/2},
regarded as a branch of Integral Calculus, that the doubly periodic functions arose. As in the familiar case
z = ∫ dζ (1 − ζ^{2})^{−1/2},
where ζ = sin z, it has proved finally to be simpler to regard ζ as a function of z. We shall come to the other point of view below, under § 20, Elliptic Integrals.
To prove that any doubly periodic function F(z) with periods ω, ω′, having poles at the points z = a_{1}, ... z = a_{m} of a parallelogram, these being, for simplicity of explanation, supposed to be all of the first order, is rationally expressible in terms of φ(z) and ƒ(z), we proceed as follows:—
Consider the expression
Φ(z) = [(ζ, 1)_{m} + η(ζ, 1)_{m−2}] / [(ζ − A_{1}) (ζ − A_{2}) ... (ζ − A_{m})],
where A_{s} = ƒ(a_{s}), ζ is an abbreviation for ƒ(z) and η for φ(z), and (ζ, 1)_{m}, (ζ, 1)_{m−2} denote integral polynomials in ζ, of respective orders m and m − 2, so that there are 2m unspecified, homogeneously entering, constants in the numerator. It is supposed that no one of the points a_{1}, ... a_{m} is one of the points mω + m′ω′ where ƒ(z) = ∞. The function Φ(z) is a monogenic function of z with the periods ω, ω′, becoming infinite (and having singularities) only when (1) ζ = ∞ or (2) one of the factors ζ − A_{s} is zero. In a period parallelogram including z = 0 the first arises only for z = 0; since for ζ = ∞, η is in a finite ratio to ζ^{3/2}, the function Φ(z) for ζ = ∞ is not infinite provided the coefficient of ζ^{m} in (ζ, 1)_{m} is not zero; thus Φ(z) is regular about z = 0. When ζ − A_{s} = 0, that is ƒ(z) = ƒ(a_{s}), we have z = ±a_{s} + mω + m′ω′, and no other values of z, m and m′ being integers; suppose the unspecified coefficients in the numerator so taken that the numerator vanishes to the first order in each of the m points −a_{1}, −a_{2}, ... −a_{m}; that is, if φ(a_{s}) = B_{s}, and therefore φ(−a_{s}) = −B_{s}, so that we have the m relations
(A_{s}, 1)_{m} − B_{s} (A_{s}, 1)_{m−2} = 0;
then the function Φ(z) will only have the m poles a_{1}, ... a_{m}. Denoting further the m zeros of F(z) by a_{1}′, ... a_{m}′, putting ƒ(a_{s}′) = A_{s}′, φ(a_{s}′) = B_{s}′, suppose the coefficients of the numerator of Φ(z) to satisfy the further m − 1 conditions
(A_{s}′, 1)_{m} + B_{s}′ (A_{s}′, 1)_{m−2} = 0
for s = 1, 2, ... (m − 1). The ratios of the 2m coefficients in the numerator of Φ(z) can always be chosen so that the m + (m − 1) linear conditions are all satisfied. Consider then the ratio
Φ(z) / F(z);
it is a doubly periodic function with no singularity other than the one pole a_{m}′. It is therefore a constant, the numerator of Φ(z) vanishing spontaneously at a_{m}′. We have
F(z) = AΦ(z),
where A is a constant; by which F(z) is expressed rationally in terms of ƒ(z) and φ(z), as was desired.
When z = 0 is a pole of F(z), say of order r, the other poles, each of the first order, being a_{1}, ... am, similar reasoning can be applied to a function
[(ζ, 1)_{h} + η(ζ, 1)_{k}] / [(ζ − A_{1}) ... (ζ − A_{m})],
where h, k are such that the greater of 2h − 2m, 2k + 3 − 2m is equal to r; the case where some of the poles a_{1}, ... am are multiple is to be met by introducing corresponding multiple factors in the denominator and taking a corresponding numerator. We give a solution of the general problem below, of a different form.
One important application of the result is the theorem that the functions ƒ(z + t), φ(z + t), which are such doubly periodic functions of z as have been discussed, can each be expressed, so far as they depend on z, rationally in terms of ƒ(z) and φ(z), and therefore, so far as they depend on z and t, rationally in terms of ƒ(z), ƒ(t), φ(z) and φ(t). It can in fact be shown, by reasoning analogous to that given above, that
ƒ(z + t) + ƒ(z) + ƒ(t) = ¼ [ {φ(z) − φ(t)} / {ƒ(z) − ƒ(t)} ]^{2}.
This shows that if F(z) be any single valued monogenic function which is doubly periodic and of meromorphic character, then F(z + t) is an algebraic function of F(z) and F(t). Conversely any single valued monogenic function of meromorphic character, F(z), which is such that F(z + t) is an algebraic function of F(z) and F(t), can be shown to be a doubly periodic function, or a function obtained from such by degeneration (in virtue of special relations connecting the fundamental constants).
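The addition equation itself can be tested numerically with the same truncated lattice sums; the periods, truncation size and sample arguments in the sketch below are illustrative.

```python
# f(z) = 1/z^2 + sum'[1/(z-O)^2 - 1/O^2], phi(z) = df/dz, over a symmetric
# truncation of the period lattice; the addition theorem
#   f(z+t) + f(z) + f(t) = (1/4) [(phi(z)-phi(t)) / (f(z)-f(t))]^2
# then holds up to truncation error.
w1, w2 = 2.0, 2.0j
N = 80

def lattice(n=N):
    for m in range(-n, n + 1):
        for mp in range(-n, n + 1):
            if m or mp:
                yield m * w1 + mp * w2

def f(z):
    return 1.0 / z**2 + sum(1.0 / (z - O)**2 - 1.0 / O**2 for O in lattice())

def phi(z):
    return -2.0 / z**3 + sum(-2.0 / (z - O)**3 for O in lattice())

z, t = 0.5 + 0.31j, 0.27 - 0.16j    # chosen so that f(z) != f(t)
lhs = f(z + t) + f(z) + f(t)
rhs = 0.25 * ((phi(z) - phi(t)) / (f(z) - f(t)))**2
print(abs(lhs - rhs))   # small
```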
The functions ƒ(z), φ(z) above are usually denoted by ℜ(z), ℜ′(z); further the fundamental differential equation is usually written
[ℜ′(z)]^{2} = 4[ℜ(z)]^{3} − g_{2}ℜ(z) − g_{3}, where g_{2} = 60σ_{2}, g_{3} = 140σ_{3},
and the roots of the cubic on the right are denoted by e_{1}, e_{2}, e_{3}; for the odd function ℜ′(z) we have, for the congruent arguments −½ω and ½ω, ℜ′(½ω) = ℜ′(−½ω) = −ℜ′(½ω), and hence ℜ′(½ω) = 0; hence we can take e_{1} = ℜ(½ω), e_{2} = ℜ(½ω + ½ω′), e_{3} = ℜ(½ω′). It can then be proved that [ℜ(z) − e_{1}] [ℜ(z + ½ω) − e_{1}] = (e_{1} − e_{2}) (e_{1} − e_{3}), with similar equations for the other half periods. Consider more particularly the function ℜ(z) − e_{1}; like ℜ(z) it has a pole of the second order at z = 0, its expansion in its neighbourhood being of the form z^{−2} (1 − e_{1}z^{2} + Az^{4} + ...); having no other pole, it has therefore either two zeros, or a double zero in a period parallelogram (ω, ω′). In fact near its zero ½ω its expansion is (z − ½ω) ℜ′(½ω) + ½(z − ½ω)^{2} ℜ″(½ω) + ...; we have seen that ℜ′(½ω) = 0; thus it has a zero of the second order wherever it vanishes. Thus it appears that the square root [ℜ(z) − e_{1}]^{1/2}, if we attach a definite sign to it for some particular value of z, is a single valued function of z; for it can at most have two values, and the only small circuits in the plane which could lead to an interchange of these values are those about either a pole or a zero, neither of which, as we have seen, has this effect; the function is therefore single valued for any circuit.
Denoting the function, for a moment, by ƒ_{1}(z), we have ƒ_{1}(z + ω) = ±ƒ_{1}(z), ƒ_{1}(z + ω′) = ±ƒ_{1}(z); it can be seen by considerations of continuity that the right sign in either of these equations does not vary with z; not both these signs can be positive, since the function has only one pole, of the first order, in a parallelogram (ω, ω′); from the expansion of ƒ_{1}(z) about z = 0, namely z^{−1} (1 − ½e_{1}z^{2} + ...), it follows that ƒ_{1}(z) is an odd function, and hence ƒ_{1}(−½ω′) = −ƒ_{1}(½ω′), which is not zero since [ƒ_{1}(½ω′)]^{2} = e_{3} − e_{1}, so that we have ƒ_{1}(z + ω′) = −ƒ_{1}(z); an equation ƒ_{1}(z + ω) = −ƒ_{1}(z) would then give ƒ_{1}(z + ω + ω′) = ƒ_{1}(z), and hence ƒ_{1}(½ω + ½ω′) = ƒ_{1}(−½ω − ½ω′), of which the latter is −ƒ_{1}(½ω + ½ω′); this would give ƒ_{1}(½ω + ½ω′) = 0, while [ƒ_{1}(½ω + ½ω′)]^{2} = e_{2} − e_{1}. We thus infer that ƒ_{1}(z + ω) = ƒ_{1}(z), ƒ_{1}(z + ω′) = −ƒ_{1}(z), ƒ_{1}(z + ω + ω′) = −ƒ_{1}(z). The function ƒ_{1}(z) is thus doubly periodic with the periods ω and 2ω′; in a parallelogram of which two sides are ω and 2ω′ it has poles at z = 0, z = ω′ each of the first order, and zeros of the first order at z = ½ω, z = ½ω + ω′; it is thus a doubly periodic function of the second order with two different poles of the first order in its parallelogram (ω, 2ω′). We may similarly consider the functions ƒ_{2}(z) = [ℜ(z) − e_{2}]^{1/2}, ƒ_{3}(z) = [ℜ(z) − e_{3}]^{1/2}; they give
ƒ_{2}(z + ω + ω′) = ƒ_{2}(z), ƒ_{2}(z + ω) = −ƒ_{2}(z), ƒ_{2}(z + ω′) = −ƒ_{2}(z); ƒ_{3}(z + ω′) = ƒ_{3}(z), ƒ_{3}(z + ω) = −ƒ_{3}(z), ƒ_{3}(z + ω + ω′) = −ƒ_{3}(z).
Taking u = z (e_{1} − e_{3})^{1/2}, with a definite determination of the constant (e_{1} − e_{3})^{1/2}, it is usual, taking the preliminary signs so that for z = 0 each of zƒ_{1}(z), zƒ_{2}(z), zƒ_{3}(z) is equal to +1, to put
sn(u) = (e_{1} − e_{3})^{1/2} / ƒ_{3}(z), cn(u) = ƒ_{1}(z) / ƒ_{3}(z), dn(u) = ƒ_{2}(z) / ƒ_{3}(z),
k^{2} = (e_{2} − e_{3}) / (e_{1} − e_{3}), K = ½ω (e_{1} − e_{3})^{1/2}, iK′ = ½ω′ (e_{1} − e_{3})^{1/2};
thus sn(u) is an odd doubly periodic function of the second order with the periods 4K, 2iK′, having poles of the first order at u = iK′, u = 2K + iK′, and zeros of the first order at u = 0, u = 2K; similarly cn(u), dn(u) are even doubly periodic functions whose periods can be written down, and sn^{2}(u) + cn^{2}(u) = 1, k^{2}sn^{2}(u) + dn^{2}(u) = 1; if x = sn(u) we at once find, from the relations given here, that
du/dx = [(1 − x^{2}) (1 − k^{2}x^{2})]^{−1/2};
if we put x = sinφ we have
du/dφ = [1 − k^{2}sin^{2}φ]^{−1/2},
and if we call φ the amplitude of u, we may write φ = am(u), x = sin·am(u), which explains the origin of the notation sn(u). Similarly cn(u) is an abbreviation of cos·am(u), and dn(u) of Δam(u), where Δ(φ) means (1 − k^{2}sin^{2}φ)^{1/2}. The addition equation for each of the functions ƒ_{1}(z), ƒ_{2}(z), ƒ_{3}(z) is very simple, being
ƒ(z + t) = ½ ( ∂/∂z + ∂/∂t ) log { [ƒ(z) + ƒ(t)] / [ƒ(z) − ƒ(t)] } = [ƒ(z)ƒ′(t) − ƒ(t)ƒ′(z)] / [ƒ^{2}(z) − ƒ^{2}(t)],
where ƒ′(z) means dƒ(z)/dz, which, for ƒ = ƒ_{1}, is equal to −ƒ_{2}(z)·ƒ_{3}(z), and ƒ^{2}(z) means [ƒ(z)]^{2}. This may be verified directly by showing, if R denote the right side of the equation, that ∂R/∂z = ∂R/∂t; this will require the use of the differential equation
[ƒ′(z)]^{2} = [ƒ^{2}(z) + e_{1} − e_{2}] [ƒ^{2}(z) + e_{1} − e_{3}]
(here written for ƒ = ƒ_{1}, with analogous equations for ƒ_{2}, ƒ_{3}),
and in fact we find
( ∂^{2}/∂z^{2} − ∂^{2}/∂t^{2} ) log [ƒ(z) + ƒ(t)] = ƒ^{2}(z) − ƒ^{2}(t) = ( ∂^{2}/∂z^{2} − ∂^{2}/∂t^{2} ) log [ƒ(z) − ƒ(t)];
hence it will follow that R is a function of z + t, and R is at once seen to reduce to ƒ(z) when t = 0. From this the addition equation for each of the functions sn(u), cn(u), dn(u) can be deduced at once; if s_{1}, c_{1}, d_{1}, s_{2}, c_{2}, d_{2} denote respectively sn(u_{1}), cn(u_{1}), dn(u_{1}), sn(u_{2}), cn(u_{2}), dn(u_{2}), they can be put into the forms
sn(u_{1} + u_{2}) = (s_{1}c_{2}d_{2} + s_{2}c_{1}d_{1}) / D, cn(u_{1} + u_{2}) = (c_{1}c_{2} − s_{1}s_{2}d_{1}d_{2}) / D, dn(u_{1} + u_{2}) = (d_{1}d_{2} − k^{2}s_{1}s_{2}c_{1}c_{2}) / D,
where D = 1 − k^{2}s_{1}^{2}s_{2}^{2}.
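These addition formulas can be checked numerically; the sketch below computes sn, cn, dn by integrating the amplitude equation dφ/du = Δ(φ) with a classical Runge–Kutta step, and assumes the standard denominator D = 1 − k^{2}sn^{2}(u_{1})sn^{2}(u_{2}); the modulus and the arguments are illustrative.

```python
import math

k2 = 0.6   # k^2, an illustrative modulus, 0 < k^2 < 1

def am(u, steps=20000):
    """Amplitude am(u): integrate d(phi)/du = sqrt(1 - k^2 sin^2 phi) by RK4."""
    h = u / steps
    phi = 0.0
    g = lambda p: math.sqrt(1 - k2 * math.sin(p) ** 2)
    for _ in range(steps):
        k1 = g(phi)
        k2_ = g(phi + h * k1 / 2)
        k3 = g(phi + h * k2_ / 2)
        k4 = g(phi + h * k3)
        phi += h * (k1 + 2 * k2_ + 2 * k3 + k4) / 6
    return phi

def sn_cn_dn(u):
    p = am(u)
    return math.sin(p), math.cos(p), math.sqrt(1 - k2 * math.sin(p) ** 2)

u1, u2 = 0.7, 0.4
s1, c1, d1 = sn_cn_dn(u1)
s2, c2, d2 = sn_cn_dn(u2)
s3, c3, d3 = sn_cn_dn(u1 + u2)
D = 1 - k2 * s1**2 * s2**2                     # assumed standard denominator
print(abs(s3 - (s1*c2*d2 + s2*c1*d1) / D))     # small
print(abs(c3 - (c1*c2 - s1*s2*d1*d2) / D))     # small
print(abs(d3 - (d1*d2 - k2*s1*s2*c1*c2) / D))  # small
```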
The introduction of the function ƒ_{1}(z) is equivalent to the introduction of the function ℜ(z; ω, 2ω′) constructed from the periods ω, 2ω′ as was ℜ(z) from ω and ω′; denoting this function by ℜ_{1}(z) and its differential coefficient by ℜ′_{1}(z), we have in fact
ƒ_{1}(z) = ½ ℜ′_{1}(z) / [ℜ_{1}(ω′) − ℜ_{1}(z)],
as we see at once by considering the zeros and poles and the limit of zƒ_{1}(z) when z = 0. In terms of the function ℜ_{1}(z) the original function ℜ(z) is expressed by
ℜ(z) = ℜ_{1}(z) + ℜ_{1}(z − ω′) − ℜ_{1}(ω′),
as a consideration of the poles and expansion near z = 0 will show.
A function having ω, ω′ for periods, with poles at two arbitrary points a, b and zeros at a′, b′, where a′ + b′ = a + b save for an expression mω + m′ω′, in which m, m′ are integers, is a constant multiple of
{ℜ [z − ½(a′ + b′)] − ℜ [a′ − ½(a′ + b′)]} / {ℜ [z − ½(a + b)] − ℜ [a − ½(a + b)]};
if the expansion of this function near z = a be
the expansion near z = b is
as we see by remarking that if z′ − b = −(z − a) the function has the same value at z and z′; hence the differential equation satisfied by the function is easily calculated in terms of the coefficients in the expansions.
From the function ℜ(z) we can obtain another function, termed the Zeta-function; it is usually denoted by ζ(z), and defined by
ζ(z) − 1/z = ∫_{0}^{z} [ 1/z^{2} − ℜ(z) ] dz = Σ′ ( 1/(z − Ω) + 1/Ω + z/Ω^{2} ),
for which as before we have equations
ζ(z + ω) = ζ(z) + 2πiη, ζ(z + ω′) = ζ(z) + 2πiη′, |
where 2η, 2η′ are certain constants, which in this case do not both vanish, since else ζ(z) would be a doubly periodic function with only one pole of the first order. By considering the integral
∫ ζ(z) dz
round the perimeter of a parallelogram of sides ω, ω′ containing z = 0 in its interior, we find ηω′ − η′ω = 1, so that neither of η, η′ is zero. We have ζ′(z) =−ℜ(z). From ζ(z) by means of the equation
σ(z)/z = exp { ∫_{0}^{z} [ ζ(z) − 1/z ] dz } = Π′ [ ( 1 − z/Ω ) exp ( z/Ω + z^{2}/2Ω^{2} ) ],
we determine an integral function σ(z), termed the Sigma-function, having a zero of the first order at each of the points z = Ω; it can be seen to satisfy the equations
σ(z + ω)/σ(z) = −exp [2πiη (z + ½ω)], σ(z + ω′)/σ(z) = −exp [2πiη′ (z + ½ω′)].
By means of these equations, if a_{1} + a_{2} + ... + am = a′_{1} + a′_{2} + ... + a′m, it is readily shown that
σ(z − a′_{1}) σ(z − a′_{2}) ... σ(z − a′_{m}) / [ σ(z − a_{1}) σ(z − a_{2}) ... σ(z − a_{m}) ]
is a doubly periodic function having a_{1}, ... am as its simple poles, and a′_{1}, ... a′m as its simple zeros. Thus the function σ(z) has the important property of enabling us to write any meromorphic doubly periodic function as a product of factors each having one zero in the parallelogram of periods; these form a generalization of the simple factors, z − a, which have the same utility for rational functions of z. We have ζ(z) = σ′(z)/σ(z).
The functions ζ(z), ℜ(z) may be used to write any meromorphic doubly periodic function F(z) as a sum of terms having each only one pole; for if in the expansion of F(z) near a pole z = a the terms with negative powers of z − a be
A_{1} (z − a)^{−1} + A_{2} (z − a)^{−2} + ... + A_{m+1} (z − a)^{−(m+1)},
then the difference
F(z) − A_{1}ζ(z − a) − A_{2}ℜ(z − a) − ... + (−1)^{m} (A_{m+1}/m!) ℜ^{(m−1)}(z − a)
will not be infinite at z = a. Adding to this a sum of further terms of the same form, one for each of the poles in a parallelogram of periods, we obtain, since the sum of the residues A is zero, a doubly periodic function without poles, that is, a constant; this gives the expression of F(z) referred to. The indefinite integral ∫F(z)dz can then be expressed in terms of z, functions ℜ(z − a) and their differential coefficients, functions ζ(z − a) and functions logσ(z − a).
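The relation ηω′ − η′ω = 1 found above lends itself to a numerical check; in the sketch below η, η′ are obtained from the truncated ζ series (periods, truncation and base point are illustrative), and only the modulus of the left side is asserted, since the sign depends on the orientation of ω, ω′.

```python
import math

# zeta(z) = 1/z + sum'[1/(z - O) + 1/O + z/O^2] over a symmetric truncation;
# the article's convention is zeta(z + w) = zeta(z) + 2*pi*i*eta.
w1, w2 = 2.0, 2.0j
N = 100

def zeta(z, n=N):
    total = 1.0 / z
    for m in range(-n, n + 1):
        for mp in range(-n, n + 1):
            if m or mp:
                O = m * w1 + mp * w2
                total += 1.0 / (z - O) + 1.0 / O + z / O**2
    return total

z = 0.23 + 0.31j                                # any point off the lattice
eta  = (zeta(z + w1) - zeta(z)) / (2j * math.pi)
etap = (zeta(z + w2) - zeta(z)) / (2j * math.pi)
print(abs(eta * w2 - etap * w1))                # modulus close to 1
```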
§ 15. Potential Functions. Conformal Representation in General.—Consider a circle of radius a lying within the region of existence of a single valued monogenic function, u + iv, of the complex variable z, = x + iy, the origin z = 0 being the centre of this circle. If z = rE(iφ) = r(cosφ + i sinφ) be an internal point of this circle we have
u + iv = (1/2πi) ∫ (U + iV) dt / (t − z),
where U + iV is the value of the function at a point of the circumference and t = aE(iθ); this is the same as
u + iv = (1/2π) ∫ (U + iV) [1 − (r/a) E(iθ − iφ)] dθ / [1 + (r/a)^{2} − 2(r/a) cos (θ − φ)].
If in the above formula we replace z by the external point (a^{2}/r) E(iφ) the corresponding contour integral will vanish, so that also
0 = (1/2π) ∫ (U + iV) [(r/a)^{2} − (r/a) E(iθ − iφ)] dθ / [1 + (r/a)^{2} − 2(r/a) cos (θ − φ)];
hence by subtraction we have
u = (1/2π) ∫ U (a^{2} − r^{2}) dθ / [a^{2} + r^{2} − 2ar cos (θ − φ)],
and a corresponding formula for v in terms of V. If O be the centre of the circle, Q be the interior point z, P the point aE(iθ) of the circumference, and ω the angle which QP makes with OQ produced, this integral is at once found to be the same as
u = (1/π) ∫ U dω − (1/2π) ∫ U dθ,
of which the second part does not depend upon the position of z, and the equivalence of the integrals holds for every arc of integration.
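The Poisson formula for u may be checked on a concrete boundary value; for U(θ) = cos θ on the unit circle the harmonic extension is r cos φ, and a simple periodic quadrature recovers it (grid size and interior point are illustrative).

```python
import math

# Poisson integral  u = (1/2pi) * integral of U (a^2 - r^2) /
#                       (a^2 + r^2 - 2 a r cos(theta - phi)) dtheta,
# evaluated by the rectangle rule, which is highly accurate for smooth
# periodic integrands.
a = 1.0
r, phi0 = 0.5, 0.3      # interior point (r, phi0), r < a
n = 2000
u = 0.0
for j in range(n):
    theta = 2 * math.pi * j / n
    kern = (a*a - r*r) / (a*a + r*r - 2*a*r*math.cos(theta - phi0))
    u += math.cos(theta) * kern
u /= n                   # the factors (1/2pi) and (2pi/n) combine to 1/n
print(u, r * math.cos(phi0))   # the two values agree
```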
Conversely, let U be any continuous real function on the circumference, U_{0} being the value of it at a point P_{0} of the circumference, and describe a small circle with centre at P_{0} cutting the given circle in A and B, so that for all points P of the arc AP_{0}B we have |U − U_{0}| < ε, where ε is a given small real quantity. Describe a further circle, centre P_{0} within the former, cutting the given circle in A′ and B′, and let Q be restricted to lie in the small space bounded by the arc A′P_{0}B′ and this second circle; then for all positions of P upon the greater arc AB of the original circle QP^{2} is greater than a definite finite quantity which is not zero, say QP^{2} > D^{2}. Consider now the integral
u′ = (1/2π) ∫ U (a^{2} − r^{2}) dθ / [a^{2} + r^{2} − 2ar cos (θ − φ)] = (1/π) ∫ U dω − (1/2π) ∫ U dθ,
which we evaluate as the sum of two, respectively along the small arc AP_{0}B and the greater arc AB. It is easy to verify that, for the whole circumference,
U_{0} = (1/2π) ∫ U_{0} (a^{2} − r^{2}) dθ / [a^{2} + r^{2} − 2ar cos (θ − φ)] = (1/π) ∫ U_{0} dω − (1/2π) ∫ U_{0} dθ.
Hence we can write
u′ − U_{0} = (1/π) ∫_{AP_{0}B} (U − U_{0}) dω − (1/2π) ∫_{AP_{0}B} (U − U_{0}) dθ + (1/2π) ∫_{AB} (U − U_{0}) (a^{2} − r^{2}) dθ / QP^{2}.
If the finite angle between QA and QB be called Φ and the finite angle AOB be called Θ, the sum of the first two components is numerically less than
(ε/π) Φ + (ε/2π) Θ.
If the greatest value of |U − U_{0}| on the greater arc AB be called H, the last component is numerically less than
H (a^{2} − r^{2}) / D^{2},
of which, when the circle, of centre P_{0}, passing through A′B′ is sufficiently small, the factor a^{2} − r^{2} is arbitrarily small. Thus it appears that u′ is a function of the position of Q whose limit, when Q, interior to the original circle, approaches indefinitely near to P_{0}, is U_{0}. From the form
u′ = (1/π) ∫ U dω − (1/2π) ∫ U dθ,
since the inclination of QP to a fixed direction is, when Q varies, P remaining fixed, a solution of the differential equation
∂^{2}ψ/∂x^{2} + ∂^{2}ψ/∂y^{2} = 0,
where z, = x + iy, is the point Q, we infer that u′ is a differentiable function satisfying this equation; indeed, when r < a, we can write
(1/2π) ∫ U (a^{2} − r^{2}) dθ / [a^{2} + r^{2} − 2ar cos (θ − φ)] = (1/2π) ∫ U [ 1 + 2(r/a) cos (θ − φ) + 2(r^{2}/a^{2}) cos 2(θ − φ) + ... ] dθ
= a_{0} + r (a_{1} cos φ + b_{1} sin φ) + r^{2} (a_{2} cos 2φ + b_{2} sin 2φ) + ...,
where
a_{0} = (1/2π) ∫ U dθ, a_{1} = (1/πa) ∫ U cos θ dθ, b_{1} = (1/πa) ∫ U sin θ dθ,
a_{2} = (1/πa^{2}) ∫ U cos 2θ dθ, b_{2} = (1/πa^{2}) ∫ U sin 2θ dθ.
In this series the terms of order n are sums, with real coefficients, of the various integral polynomials of dimension n which satisfy the equation ∂^{2}ψ/∂x^{2} + ∂^{2}ψ/∂y^{2} = 0; the series is thus the real part of a power series in z, and is capable of differentiation and integration within its region of convergence.
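For a boundary value containing only the first two harmonics the series terminates, and the coefficient formulas can be compared directly with the Poisson integral; the boundary function, grid and sample point below are illustrative.

```python
import math

a = 1.0
r, phi0 = 0.6, 1.1
n = 4000
U = lambda t: math.cos(t) + 0.5 * math.sin(2 * t)   # two harmonics only

grid = [2 * math.pi * j / n for j in range(n)]
avg = lambda g: sum(g(t) for t in grid) / n          # (1/2pi) * integral dtheta

# coefficients a0, a1, b1, a2, b2 as in the text
a0 = avg(U)
a1 = 2 * avg(lambda t: U(t) * math.cos(t)) / a
b1 = 2 * avg(lambda t: U(t) * math.sin(t)) / a
a2 = 2 * avg(lambda t: U(t) * math.cos(2 * t)) / a**2
b2 = 2 * avg(lambda t: U(t) * math.sin(2 * t)) / a**2

series = (a0 + r * (a1 * math.cos(phi0) + b1 * math.sin(phi0))
          + r**2 * (a2 * math.cos(2 * phi0) + b2 * math.sin(2 * phi0)))

poisson = avg(lambda t: U(t) * (a*a - r*r)
              / (a*a + r*r - 2*a*r*math.cos(t - phi0)))
print(abs(series - poisson))   # the truncated series equals the integral
```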
Conversely we may suppose a function, P, defined for the interior of a finite region R of the plane of the real variables x, y , capable of expression about any interior point x_{0}, y_{0} of this region by a power series in x − x_{0}, y − y_{0}, with real coefficients, these various series being obtainable from one of them by continuation. For any region R_{0} interior to the region specified, the radii of convergence of these power series will then have a lower limit greater than zero, and hence a finite number of these power series suffice to specify the function for all points interior to R_{0}. Each of these series, and therefore the function, will be differentiable; suppose that at all points of R_{0} the function satisfies the equation
∂^{2}P/∂x^{2} + ∂^{2}P/∂y^{2} = 0,
we then call it a monogenic potential function. From this, save for an additive constant, there is defined another potential function by means of the equation
Q = ∫^{(x, y)} ( ∂P/∂x dy − ∂P/∂y dx ).
The functions P, Q, being given by a finite number of power series, will be single valued in R_{0}, and P + iQ will be a monogenic function of z within R_{0}. In drawing this inference it is supposed that the region R_{0} is such that every closed path drawn in it is capable of being deformed continuously to a point lying within R_{0}, that is, is simply connected.
Suppose in particular, c being any point interior to R_{0}, that P approaches continuously, as z approaches to the boundary of R, to the value log r, where r is the distance of c from the points of the perimeter of R. Then the function of z expressed by
ζ = (z − c) exp [ −(P + iQ) ]
will be developable by a power series in (z − z_{0}) about every point z_{0} interior to R_{0}, and will vanish at z = c; while on the boundary of R it will be of constant modulus unity. Thus if it be plotted upon a plane of ζ the boundary of R will become a circle of radius unity with centre at ζ=0, this latter point corresponding to z=c. A closed path within R_{0}, passing once round z=c, will lead to a closed path passing once about ζ = 0. Thus every point of the interior of R will give rise to one point of the interior of the circle. The converse is also true, but is more difficult to prove; in fact, the differential coefficient dζ/dz does not vanish for any point interior to R. This being assumed, we obtain a conformal representation of the interior of the region R upon the interior of a circle, in which the arbitrary interior point c of R corresponds to the centre of the circle, and, by utilizing the arbitrary constant arising in determining the function Q, an arbitrary point of the boundary of R corresponds to an arbitrary point of the circumference of the circle.
There thus arises the problem of the determination of a real monogenic potential function, single valued and finite within a given arbitrary region, with an assigned continuous value at all points of the boundary of the region. When the region is circular this problem is solved by the integral (1/π) ∫ U dω − (1/2π) ∫ U dθ previously given. When the region is bounded by the outermost portions of the circumferences of two overlapping circles, it can hence be proved that the problem also has a solution; more generally, consider a finite simply connected region, whose boundary we suppose to consist of a single closed path in the sense previously explained, ABCD; joining A to C by two non-intersecting paths AEC, AFC lying within the region, so that the original region may be supposed to be generated by the overlapping regions AECD, CFAB, of which the common part is AECF; suppose now the problem of determining a single valued finite monogenic potential function for the region AECD with a given continuous boundary value can be solved, and also the same problem for the region CFAB; then it can be shown that the same problem can be solved for the original area. Taking indeed the values assigned for the original perimeter ABCD, assume arbitrarily values for the path AEC, continuous with one another and with the values at A and C; then determine the potential function for the interior of AECD; this will prescribe values for the path CFA which will be continuous at A and C with the values originally proposed for ABC; we can then determine a function for the interior of CFAB with the boundary values so prescribed. This in its turn will give values for the path AEC, so that we can determine a new function for the interior of AECD. With the values which this assumes along CFA we can then again determine a new function for the interior of CFAB. And so on.
It can be shown that these functions, so alternately determined, have a limit representing such a potential function as is desired for the interior of the original region ABCD. There cannot be two functions with the given perimeter values, since their difference would be a monogenic potential function with boundary value zero, which can easily be shown to be everywhere zero. At least two other methods have been proposed for the solution of the same problem.
A particular case of the problem is that of the conformal representation of the interior of a closed polygon upon the upper half of the plane of a complex variable t. It can be shown without much difficulty that if a, b, c, ... be real values of t, and α, β, γ, ... be n real numbers, whose sum is n − 2, the integral
z = C ∫ (t − a)^{α−1} (t − b)^{β−1} (t − c)^{γ−1} ... dt, C being a constant,
as t describes the real axis, describes in the plane of z a polygon of n sides with internal angles equal to απ, βπ, ..., and, a proper sign being given to the integral, points of the upper half of the plane of t give rise to interior points of the polygon. Herein the points a, b, ... of the real axis give rise to the corners of the polygon; the condition Σα = n − 2 ensures merely that the point t = ∞ does not correspond to a corner; if this condition be not regarded, an additional corner and side is introduced in the polygon. Conversely it can be shown that the conformal representation of a polygon upon the half plane can be effected in this way; for a polygon of given position of more than three sides it is necessary for this to determine the positions of all but three of a, b, c, ...; three of them may always be supposed to be at arbitrary positions, such as t = 0, t = 1, t = ∞.
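For the angle parameters α = β = γ = ½ with a = −1, b = 0, c = 1 (so that Σα = 3/2 and the point t = ∞ supplies a fourth corner, also of angle ½π) the image is a square; as a numerical check, the sides arising from the segments (0, 1) and (1, ∞) of the real axis have equal length, the second integral being transformed to a finite range by t = 1/u². All choices here are illustrative.

```python
import math

n = 200000   # midpoint rule; the endpoint singularities are integrable

# side from (0, 1):  integral of dt / sqrt(t (1 - t) (1 + t))
L1 = sum(1.0 / math.sqrt(t * (1.0 - t) * (1.0 + t))
         for t in ((j + 0.5) / n for j in range(n))) / n

# side from (1, oo): substituting t = 1/u**2 gives  integral of 2 du / sqrt(1 - u**4)
L2 = sum(2.0 / math.sqrt(1.0 - u ** 4)
         for u in ((j + 0.5) / n for j in range(n))) / n

print(L1, L2)   # equal side lengths, as a square requires
```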
As an illustration consider in the plane of z = x + iy, the portion of the imaginary axis from the origin to z = ih, where h is positive and less than unity; let C be this point z = ih; let BA be of length unity along the positive real axis, B being the origin and A the point z = 1; let DE be of length unity along the negative real axis, D being also the origin and E the point z = −1; let EFA be a semicircle of radius unity, F being the point z = i. If we put ζ = [(z^{2} + h^{2})/(1 + h^{2}z^{2})]^{1/2}, with ζ = 1 when z = 1, the function is single valued within the semicircle, in the plane of z, which is slit along the imaginary axis from the origin to z = ih; if we plot the value of ζ upon another plane, as z describes the continuous curve ABCDE, ζ will describe the real axis from ζ = 1 to ζ = −1, the point C giving ζ = 0, and the points B, D giving the points ζ = ±h. Near z = 0 the expansion of ζ is ζ − h = z^{2}(1 − h^{4})/2h + ..., or ζ + h = −z^{2}(1 − h^{4})/2h + ...; in either case an increase of ½π in the phase of z gives an increase of π in the phase of ζ − h or ζ + h. Near z = ih the expansion of ζ is ζ = (z − ih)^{1/2} [2ih/(1 − h^{4})]^{1/2} + ..., and an increase of 2π in the phase of z − ih also leads to an increase of π in the phase of ζ. Then as z describes the semicircle EFA, ζ also describes a semicircle of radius unity, the point z = i becoming ζ = i. There is thus a conformal representation of the interior of the slit semicircle in the z-plane, upon the interior of the whole semicircle in the ζ-plane, the function
z = [ (ζ^{2} − h^{2}) / (1 − h^{2}ζ^{2}) ]^{1/2}
being single valued in the latter semicircle. By means of a transformation t = (ζ + 1)^{2} / (ζ − 1)^{2}, the semicircle in the plane of ζ can further be conformally represented upon the upper half of the whole plane of t.
As another illustration we may take the conformal representation of an equilateral triangle upon a half plane. Taking the elliptic function ℜ(u) for which ℜ′^{2}(u) = 4ℜ^{3}(u) − 4, so that, with ε = exp(⅔πi), we have e_{1} = 1, e_{2} = ε^{2}, e_{3} = ε, the half periods may be taken to be
½ω = ∫_{1}^{∞} dt / 2(t^{3} − 1)^{1/2}, ½ω′ = ∫_{e_{3}}^{∞} dt / 2(t^{3} − 1)^{1/2} = ½εω;
drawing the equilateral triangle whose vertices are O, of argument 0, A of argument ω, and B of argument ω + ω′ = −ε^{2}ω, and the equilateral triangle whose angular points are O, B and C, of argument ω′, let E, of argument ⅓(2ω + ω′), and D, of argument ⅓(ω + 2ω′), be the centroids of these triangles respectively, and let BE, OE, AE cut OA, AB, BO in K, L, H respectively, and BD, OD, CD cut OC, BC, OB in F, G, H respectively; then if u = ξ + iη be any point of the interior of the triangle OEH and v = εu_{0} = ε(ξ − iη) be any point of the interior of the triangle OHD, the points respectively of the ten triangles OEK, EKA, EAL, ELB, EBH, DHB, DBG, DGC, DCF, DFO are at once seen to be given by −εv, ω + εu, ω − ε^{2}v, ω + ω′ + ε^{2}u, ω + ω′ − v, ω + ω′ − u, ω + ω′ + εv, ω′ − εu, ω′ + ε^{2}v, −ε^{2}u. Further, when u is real, since the conjugate complex of the term −2(u + mω + m′εω)^{−3} of the infinite sum which expresses ℜ′(u), namely −2(u + mω + m′ε^{2}ω)^{−3}, itself arises in that sum, namely as −2(u + μω + μ′εω)^{−3}, where μ = m − m′, μ′ = −m′, it follows that ℜ′(u) is real; in a similar way we prove that ℜ′(u) is pure imaginary when u is pure imaginary, and that ℜ′(u) = ℜ′(εu) = ℜ′(ε^{2}u), as also that for v = εu_{0}, ℜ′(v) is the conjugate complex of ℜ′(u). Hence it follows that the variable
takes each real value once as u passes along the perimeter of the triangle ODE, being as can be shown respectively ∞, 1, 0, − 1 at O, D, H, E, and takes every complex value of imaginary part positive once in the interior of this triangle. This leads to
in accordance with the general theory.
It can be deduced that τ = t^{2} represents the triangle ODH on the upper half plane of τ, and ζ = (i − τ^{−1})^{1/2} represents similarly the triangle OBD.
§ 16. Multiple valued Functions. Algebraic Functions.—The explanations and definitions of a monogenic function hitherto given have been framed for the most part with a view to single valued functions. But starting from a power series, say in z − c, which represents a single value at all points of its circle of convergence, suppose that, by means of a derived series in z − c′, where c′ is interior to the circle of convergence, we can continue the function beyond this, and then by means of a series derived from the first derived series we can make a further continuation, and so on; it may well be that when, after a closed circuit, we again consider points in the first circle of convergence, the value represented may not agree with the original value. One example is the case z^{1/2}, for which two values exist for any value of z; another is the generalized logarithm λ(z), for which there is an infinite number of values. In such cases, as before, the region of existence of the function consists of all points which can be reached by such continuations with power series, and the singular points, which are the limiting points of the point-aggregate constituting the region of existence, are those points in whose neighbourhood the radii of convergence of derived series have zero for limit. In this description the point z = ∞ does not occupy an exceptional position, a power series in z − c being transformed to a series in 1/z when z is near enough to c by means of z − c = c(1 − cz^{−1}) [1 − (1 − cz^{−1})]^{−1}, and a series in 1/z to a series in z − c, when z is near enough to c, by means of 1/z = (1/c) [1 + (z − c)/c]^{−1}.
The commonest case of the occurrence of multiple valued functions is that in which the function s satisfies an algebraic equation ƒ(s, z) = p_{0}s^{n} + p_{1}s^{n−1} + ... + p_{n} = 0, wherein p_{0}, p_{1}, ... p_{n} are integral polynomials in z. Assuming ƒ(s, z) incapable of being written as a product of polynomials rational in s and z, and excepting values of z for which the polynomial coefficient of s^{n} vanishes, as also the values of z for which beside ƒ(s, z) = 0 we have also ∂ƒ(s, z)/∂s = 0, and also in general the point z = ∞, the roots of this equation about any point z = c are given by n power series in z − c. About a finite point z = c for which the equation ∂ƒ(s, z)/∂s = 0 is satisfied by one or more of the roots s of ƒ(s, z) = 0, the n roots break up into a certain number of cycles, the r roots of a cycle being given by a set of power series in a radical (z − c)^{1/r}, these series of the cycle being obtainable from one another by replacing (z − c)^{1/r} by ω(z − c)^{1/r}, where ω, equal to exp(2πih/r), is one of the rth roots of unity. Putting then z − c = t^{r} we may say that the r roots of a cycle are given by a single power series in t, an increase of 2π in the phase of t giving an increase of 2πr in the phase of z − c. This single series in t, giving the values of s belonging to one cycle in the neighbourhood of z = c when the phase of z − c varies through 2πr, is to be looked upon as defining a single place among the aggregate of values of z and s which satisfy ƒ(s, z) = 0; two such places may be at the same point (z = c, s = d) without coinciding, the corresponding power series for the neighbouring points being different. Thus for an ordinary value of z, z = c, there are n places for which the neighbouring values of s are given by n power series in z − c; for a value of z for which ∂ƒ(s, z)/∂s = 0 there are fewer than n places.
Similar remarks hold for the neighbourhood of z = ∞; there may be n places whose neighbourhood is given by n power series in z^{−1} or fewer, one of these being associated with a series in t, where t = (z^{−1})^{1/r}; the sum of the values of r which thus arise is always n. In general, then, we may say, with t of one of the forms (z − c), (z − c)^{1/r}, z^{−1}, (z^{−1})^{1/r}, that the neighbourhood of any place (c, d) for which ƒ(c, d) = 0 is given by a pair of expressions z = c + P(t), s = d + Q(t), where P(t) is a (particular case of a) power series vanishing for t = 0, and Q(t) is a power series vanishing for t = 0, and t vanishes at (c, d), the expression z − c being replaced by z^{−1} when c is infinite, and similarly the expression s − d by s^{−1} when d is infinite. The last case arises when we consider the finite values of z for which the polynomial coefficient of s^{n} vanishes. Of such a pair of expressions we may obtain a continuation by writing t = t_{0} + λ_{1}τ + λ_{2}τ^{2} + ..., where τ is a new variable and λ_{1} is not zero; in particular for an ordinary finite place this equation simply becomes t = t_{0} + τ. It can be shown that all the pairs of power series z = c + P(t), s = d + Q(t) which are necessary to represent all pairs of values of z, s satisfying the equation ƒ(s, z) = 0 can be obtained from one of them by this process of continuation, a fact which we express by saying that the equation ƒ(s, z) = 0 defines a monogenic algebraic construct. With less accuracy we may say that an irreducible algebraic equation ƒ(s, z) = 0 determines a single monogenic function s of z.
Any rational function of z and s, where ƒ(s, z) = 0, may be considered in the neighbourhood of any place (c, d) by substituting therein z = c + P(t), s = d + Q(t); the result is necessarily of the form t^{m}H(t), where H(t) is a power series in t not vanishing for t=0 and m is an integer. If this integer is positive, the function is said to vanish to order m at the place; if this integer is negative, = −μ, the function is infinite to order μ at the place. More generally, if A be an arbitrary constant, and, near (c, d), R(s, z) −A is of the form t^{m}H(t), where m is positive, we say that R(s, z) becomes m times equal to A at the place; if R(s, z) is infinite of order μ at the place, so also is R(s, z) − A. It can be shown that the sum of the values of m at all the places, including the places z = ∞, where R(s, z) vanishes, which we call the number of zeros of R(s, z) on the algebraic construct, is finite, and equal to the sum of the values of μ where R(s, z) is infinite, and more generally equal to the sum of the values of m where R(s, z) = A; this we express by saying that a rational function R(s, z) takes any value (including ∞) the same number of times on the algebraic construct; this number is called the order of the rational function.
That the total number of zeros of R(s, z) is finite is at once obvious, these values being obtainable by rational elimination of s between ƒ(s, z) = 0, R(s, z) = 0. That the number is equal to the total number of infinities is best deduced by means of a theorem which is also of more general utility. Let R(s, z) be any rational function of s, z, which are connected by ƒ(s, z) = 0; about any place (c, d) for which z = c + P(t), s = d + Q(t), expand the product
R(s, z) dz/dt
in powers of t and pick out the coefficient of t^{−1}. There is only a finite number of places of this kind. The theorem is that the sum of these coefficients of t^{−1} is zero. This we express by
Σ [ R(s, z) dz/dt ]_{t^{−1}} = 0.
The theorem holds for the case n=1, that is, for rational functions of one variable z; in that case, about any finite point we have z − c = t, and about z = ∞ we have z^{−1} = t, and therefore dz/dt = −t^{−2}; in that case, then, the theorem is that in any rational function of z,
Σ ( A_{1}/(z − a) + A_{2}/(z − a)^{2} + ... + A_{m}/(z − a)^{m} ) + Pz^{h} + Qz^{h−1} + ... + R,
the sum ΣA_{1} of the residues at the finite poles is equal to the coefficient of 1/z in the expansion, in ascending powers of 1/z, about z = ∞; an obvious result. In general, if for a finite place of the algebraic construct associated with ƒ(s, z) = 0, whose neighbourhood is given by z = c + t^{r}, s = d + Q(t), there be a coefficient of t^{−1} in R(s, z) dz/dt, this will be r times the coefficient of t^{−r} in R(s, z) or R[d + Q(t), c + t^{r}], namely will be the coefficient of t^{−r} in the sum of the r series obtainable from R[d + Q(t), c + t^{r}] by replacing t by ωt, where ω is an rth root of unity; thus the sum of the coefficients of t^{−1} in R(s, z) dz/dt for all the places which arise for z = c, and the corresponding values of s, is equal to the coefficient of (z − c)^{−1} in R(s_{1}, z) + R(s_{2}, z) + ... + R(s_{n}, z), where s_{1}, ... s_{n} are the n values of s for a value of z near to z = c; this latter sum Σ R(s_{i}, z) is, however, a rational function of z only. Similarly, near z = ∞, for a place given by z^{−1} = t^{r}, s = d + Q(t), or s^{−1} = Q(t), the coefficient of t^{−1} in R(s, z) dz/dt is equal to −r times the coefficient of t^{r} in R[d + Q(t), t^{−r}], that is equal to the negative coefficient of z^{−1} in the sum of the r series R[d + Q(ωt), t^{−r}], so that, as before, the sum of the coefficients of t^{−1} in R(s, z) dz/dt at the various places which arise for z = ∞ is equal to the negative coefficient of z^{−1} in the same rational function of z, Σ R(s_{i}, z). Thus, from the corresponding theorem for rational functions of one variable, the general theorem now being proved is seen to follow.
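The case n = 1 of the theorem admits a direct check: the sum of the residues of a rational function at its finite poles equals the coefficient of 1/z about z = ∞, so that the "total residue" vanishes. The sketch below (our own, with an arbitrarily chosen R(z) = 1/(z − 1) + 2/(z − 2) + 3z) recovers the residue sum 3 from a contour integral.

```python
import cmath

# Our own check of the case n = 1, with the arbitrary rational function
#     R(z) = 1/(z - 1) + 2/(z - 2) + 3z,
# whose residues at the finite poles sum to 1 + 2 = 3.  The quantity
# (1/(2*pi*i)) * (closed integral of R(z) dz over |z| = 10) gives the same
# number, which is also the coefficient of 1/z about z = infinity.

def R(z):
    return 1 / (z - 1) + 2 / (z - 2) + 3 * z

def contour_residue_sum(radius=10.0, steps=20000):
    total = 0j
    for k in range(steps):
        z0 = radius * cmath.exp(2j * cmath.pi * k / steps)
        z1 = radius * cmath.exp(2j * cmath.pi * (k + 1) / steps)
        total += (R(z0) + R(z1)) / 2 * (z1 - z0)   # trapezoid rule
    return total / (2j * cmath.pi)
```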
Apply this theorem now to the rational function of s and z,
[1/R(s, z)] dR(s, z)/dz;
at a zero of R(s, z) near which R(s, z) = t^{m}H(t), we have
[1/R(s, z)] [dR(s, z)/dz] [dz/dt] = d/dt {λ [R(s, z)] },
where λ denotes the generalized logarithmic function; since R(s, z) = t^{m}H(t) near the place, λ[R(s, z)] is mλ(t) + a power series in t, so that the coefficient of t^{−1} in the expression above is m; similarly at a place for which R(s, z) = t^{−μ}K(t) the coefficient is −μ; the theorem
Σ [ (1/R(s, z)) (dR(s, z)/dz) (dz/dt) ]_{t^{−1}} = 0
thus gives Σm = Σμ, or, in words, the total number of zeros of R(s, z) on the algebraic construct is equal to the total number of its poles. The same is therefore true of the function R(s, z) − A, where A is an arbitrary constant; thus the number in question, being equal to the number of poles of R(s, z) − A, is equal also to the number of times that R(s, z) = A on the algebraic construct.
We have seen above that all single valued doubly periodic meromorphic functions, with the same periods, are rational functions of two variables s, z connected by an equation of the form s^{2} = 4z^{3} + Az + B. Taking account of the relation connecting these variables s, z with the argument of the doubly periodic functions (which was above denoted by z), it can then easily be seen that the theorem now proved is a generalization of the theorem proved previously establishing for a doubly periodic function a definite order. There exists a generalization of another theorem also proved above for doubly periodic functions, namely, that the sum of the values of the argument in one parallelogram of periods for which a doubly periodic function takes a given value is independent of that value; this generalization, known as Abel’s Theorem, is given in § 17 below.
§ 17. Integrals of Algebraic Functions.—In treatises on Integral Calculus it is proved that if R(z) denote any rational function, an indefinite integral ∫R(z)dz can be evaluated in terms of rational and logarithmic functions, including the inverse trigonometrical functions. In generalization of this it was long ago discovered that if s^{2} = az^{2} + bz + c and R(s, z) be any rational function of s, z, any integral ∫R(s, z)dz can be evaluated in terms of rational functions of s, z and logarithms of such functions; the simplest case is ∫s^{−1}dz or ∫(az^{2} + bz + c)^{−1/2}dz. More generally if ƒ(s, z) = 0 be such a relation connecting s, z that when θ is an appropriate rational function of s and z both s and z are rationally expressible, in virtue of ƒ(s, z) = 0, in terms of θ, the integral ∫R(s, z)dz is reducible to a form ∫H(θ)dθ, where H(θ) is rational in θ, and can therefore also be evaluated by rational functions and logarithms of rational functions of s and z. It was natural to inquire whether a similar theorem holds for integrals ∫R(s, z)dz wherein s^{2} is a cubic polynomial in z. The answer is in the negative. For instance, no one of the three integrals
∫ dz/s,  ∫ z dz/s,  ∫ dz/[(z − c)s]
can be expressed by rational and logarithms of rational functions of s and z; but it can be shown that every integral ∫R(s, z)dz can be expressed by means of integrals of these three types together with rational and logarithms of rational functions of s and z (see below under § 20, Elliptic Integrals). A similar theorem is true when s^{2} = quartic polynomial in z; in fact when s^{2} = A(z − a) (z − b) (z − c) (z − d), putting y = s(z − a)^{−2}, x = (z − a)^{−1}, we obtain y^{2} = cubic polynomial in x. Much less is the theorem true when the fundamental relation ƒ(s, z) = 0 is of more general type. There exists then, however, a very general theorem, known as Abel’s Theorem, which may be enunciated as follows: Beside the rational function R(s, z) occurring in the integral ∫R(s, z)dz, consider another rational function H(s, z); let (a_{1}), ... (a_{m}) denote the places of the construct associated with the fundamental equation ƒ(s, z) = 0, for which H(s, z) is equal to one value A, each taken with its proper multiplicity, and let (b_{1}), ... (b_{m}) denote the places for which H(s, z) = B, where B is another value; then the sum of the m integrals ∫_{(a_{i})}^{(b_{i})} R(s, z)dz is equal to the sum of the coefficients of t^{−1} in the expansions of the function
R(s, z) (dz/dt) λ ( [H(s, z) − B] / [H(s, z) − A] ),
where λ denotes the generalized logarithmic function, at the various places where the expansion of R(s, z)dz/dt contains negative powers of t. This fact may be obtained at once from the equation
Σ [ R(s, z)/{H(s, z) − μ} · dz/dt ]_{t^{−1}} = 0,
wherein μ is a constant. (For illustrations see below, under § 20, Elliptic Integrals.)
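The assertion at the head of § 17, that ∫(az² + bz + c)^{−1/2}dz is expressible by logarithms of rational functions of s and z, can be illustrated numerically; the sketch below (our own) takes s² = z² + 1, for which the integral of dz/s is λ(z + s) in the notation of the text, and compares a quadrature with the closed form.

```python
import math

# A numerical sketch of our own: with s^2 = z^2 + 1 the indefinite
# integral of dz/s has the elementary value log(z + s), the logarithm
# of a rational function of s and z.  We check it on the range 0..2.

def integrand(z):
    return 1 / math.sqrt(z * z + 1)

def simpson(f, a, b, n=1000):
    # composite Simpson rule; n must be even
    h = (b - a) / n
    total = f(a) + f(b)
    for k in range(1, n):
        total += (4 if k % 2 else 2) * f(a + k * h)
    return total * h / 3

numeric = simpson(integrand, 0.0, 2.0)
closed_form = math.log(2 + math.sqrt(5))    # log(z + s) at z = 2, minus its value 0 at z = 0
```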
§ 18. Indeterminateness of Algebraic Integrals.—The theorem that the integral ∫_{a}^{z} ƒ(z)dz is independent of the path from a to z, holds only on the hypothesis that any two such paths are equivalent, that is, taken together form the complete boundary of a region of the plane within which ƒ(z) is finite and single valued, besides being differentiable. Suppose that these conditions fail only at a finite number of isolated points in the finite part of the plane. Then any path from a to z is equivalent, in the sense explained, to any other path together with closed paths beginning and ending at the arbitrary point a each enclosing one or more of the exceptional points, these closed paths being chosen, when ƒ(z) is not a single valued function, so that the final value of ƒ(z) at a is equal to its initial value. It is necessary for the statement that this condition be capable of being satisfied.
For instance, the integral ∫_{1}^{z} z^{−1}dz is liable to an additive indeterminateness equal to the value obtained by a closed path about z = 0, which is equal to 2πi; if we put u = ∫_{1}^{z} z^{−1}dz and consider z as a function of u, then we must regard this function as unaffected by the addition of 2πi to its argument u; we know in fact that z = exp (u) and is a single valued function of u, with the period 2πi. Or again the integral ∫_{0}^{z} (1 + z^{2})^{−1}dz is liable to an additive indeterminateness equal to the value obtained by a closed path about either of the points z = ±i; thus if we put u = ∫_{0}^{z} (1 + z^{2})^{−1}dz, the function z of u is periodic with period π, this being the function tan (u). Next we take the integral u = ∫_{(0)}^{(z)} (1 − z^{2})^{−1/2}dz, agreeing that the upper and lower limits refer not only to definite values of z, but to definite values of z each associated with a definite determination of the sign of the associated radical (1 − z^{2})^{−1/2}. We suppose 1 + z, 1 − z each to have phase zero for z = 0; then a single closed circuit of z = −1 will lead back to z = 0 with (1 − z^{2})^{1/2} = −1; the additive indeterminateness of the integral, obtained by a closed path which restores the initial value of the subject of integration, may be obtained by a closed circuit containing both the points ±1 in its interior; this gives, since the integral taken about a vanishing circle whose centre is either of the points z = ±1 has ultimately the value zero, the sum
∫_{0}^{−1} dz/(1 − z^{2})^{1/2} + ∫_{−1}^{0} dz/[−(1 − z^{2})^{1/2}] + ∫_{0}^{1} dz/[−(1 − z^{2})^{1/2}] + ∫_{1}^{0} dz/(1 − z^{2})^{1/2},
where, in each case, (1 − z^{2})^{1/2} is real and positive; that is, it gives
−4 ∫_{0}^{1} dz/(1 − z^{2})^{1/2},
or −2π. Thus the additive indeterminateness of the integral is of the form 2kπ, where k is an integer, and the function z of u, which is sin (u), has 2π for period. Take now the case
u = ∫_{(z_{0})}^{(z)} dz/√{ (z − a) (z − b) (z − c) (z − d) },
adopting a definite determination for the phase of each of the factors z − a, z − b, z − c, z − d at the arbitrary point z_{0}, and supposing the upper limit to refer, not only to a definite value of z, but also to a definite determination of the radical under the sign of integration. From z_{0} describe a closed loop about the point z = a, consisting, suppose, of a straight path from z_{0} to a, followed by a vanishing circle whose centre is at a, completed by the straight path from a to z_{0}. Let similar loops be imagined for each of the points b, c, d, no two of these having a point in common. Let A denote the value obtained by the positive circuit of the first loop; this will be in fact equal to twice the integral taken from z_{0} along the straight path to a; for the contribution due to the vanishing circle is ultimately zero, and the effect of the circuit of this circle is to change the sign of the subject of integration. After the circuit about a, we arrive back at z_{0} with the subject of integration changed in sign; let B, C, D denote the values of the integral taken by the loops enclosing respectively b, c and d when in each case the initial determination of the subject of integration is that adopted in calculating A. If then we take a circuit from z_{0} enclosing both a and b but not either c or d, the value obtained will be A − B, and on returning to z_{0} the subject of integration will have its initial value. It appears thus that the integral is subject to an additive indeterminateness equal to any one of the six differences such as A − B. 
Of these there are only two linearly independent; for clearly only A − B, A − C, A − D are linearly independent, and in fact, as we see by taking a closed circuit enclosing all of a, b, c, d, we have A − B + C − D = 0; for there is no other point in the plane beside a, b, c, d about which the subject of integration suffers a change of sign, and a circuit enclosing all of a, b, c, d may by putting z = 1/ζ be reduced to a circuit about ζ = 0 about which the value of the integral is zero. The general value of the integral for any position of z and the associated sign of the radical, when we start with a definite determination of the subject of integration, is thus seen to be of the form u_{0} + m(A − B) + n(A − C), where m and n are integers. The value of A − B is independent of the position of z_{0}, being obtainable by a single closed positive circuit about a and b only; it is thus equal to twice the integral taken once from a to b, with a proper initial determination of the radical under the sign of integration. Similar remarks to the above apply to any integral ∫H(z)dz, in which H(z) is an algebraic function of z; in any such case H(z) is a rational function of z and a quantity s connected therewith by an irreducible rational algebraic equation ƒ(s, z) = 0. Such an integral ∫H(z)dz is called an Abelian Integral.
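The additive indeterminateness found above for ∫(1 − z²)^{−1/2}dz can be confirmed by computation: the sketch below (our own) integrates once round a circle |z| = 2 enclosing both points ±1, carrying the branch of the radical continuously, and obtains a number of absolute value 2π.

```python
import cmath

# Our own numerical confirmation.  The branch of (1 - z^2)^{1/2} is carried
# continuously round the circle |z| = 2, which encloses both branch points
# z = 1 and z = -1, and the integral of dz/(1 - z^2)^{1/2} is accumulated
# by the trapezoid rule.

def period_of_arcsine(radius=2.0, steps=40000):
    total = 0j
    z_prev = radius + 0j
    w_prev = cmath.sqrt(1 - z_prev ** 2)      # initial determination of the radical
    for k in range(1, steps + 1):
        z = radius * cmath.exp(2j * cmath.pi * k / steps)
        w = cmath.sqrt(1 - z * z)
        if abs(w - w_prev) > abs(-w - w_prev):
            w = -w                             # keep the branch continuous
        total += (1 / w + 1 / w_prev) / 2 * (z - z_prev)
        z_prev, w_prev = z, w
    return total

# |period_of_arcsine()| equals 2*pi
```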
§ 19. Reversion of an Algebraic Integral.—In a limited number of cases the equation u = ∫_{z_{0}}^{z} H(z)dz, in which H(z) is an algebraic function of z, defines z as a single valued function of u. Several cases of this have been mentioned in the previous section; from what was previously proved under § 14, Doubly Periodic Functions, it appears that it is necessary for this that the integral should have at most two linearly independent additive constants of indeterminateness; for instance, for an integral in which the radical under the sign of integration has six branch points, there are three such constants, of the form A − B, A − C, A − D, which are not connected by any linear equation with integral coefficients, and z is not a single valued function of u.
§ 20. Elliptic Integrals.—An integral of the form ∫ R(z, s)dz, where s denotes the square root of a quartic polynomial in z, which may reduce to a cubic polynomial, and R denotes a rational function of z and s, is called an elliptic integral.
To each value of z belong two values of s, of opposite sign; starting, for some particular value of z, with a definite one of these two values, the sign to be attached to s for any other value of z will be determined by the path of integration for z. When z is in the neighbourhood of any finite value z_{0} for which the radical s is not zero, if we put z − z_{0} = t, we can find s − s_{0} = a power series in t, say s = s_{0} + Q(t); when z is in the neighbourhood of a value, a, for which s vanishes, if we put z = a + t^{2}, we shall obtain s = tQ(t), where Q(t) is a power series in t; when z is very large and s^{2} is a quartic polynomial in z, if we put z^{−1} = t, we shall find s^{−1} = t^{2}Q(t); when z is very large and s^{2} is a cubic polynomial in z, if we put z^{−1} = t^{2}, we shall find s^{−1} = t^{3}Q(t). By means of substitutions of these forms the character of the integral ∫R(z, s)dz may be investigated for any position of z; in any case it takes a form ∫[Ht^{−m} + Kt^{−m+1} + ... + Pt^{−1} + R + St + ...]dt involving only a finite number of negative powers of t in the subject of integration. Consider first the particular case ∫s^{−1}dz; it is easily seen that neither for any finite nor for infinite values of z can negative powers of t enter; the integral is everywhere finite, and is said to be of the first kind; it can, moreover, be shown without difficulty that no integral ∫R(z, s)dz, save a constant multiple of ∫s^{−1}dz, has this property.
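The local expansions just enumerated are easily tested at a branch point. In the sketch below (our own, with the hypothetical quartic ƒ(z) = (z − 1)(z − 2)(z − 3)(z − 4)) the substitution z = 1 + t² gives s = tQ(t), so that ƒ(1 + t²)/t² = Q(t)² tends to ƒ′(1).

```python
# Sketch of our own: s^2 = f(z) with the hypothetical quartic
# f(z) = (z - 1)(z - 2)(z - 3)(z - 4); z = 1 is a root of f, i.e. a
# place where s vanishes.  Putting z = 1 + t^2 gives s = t*Q(t), so
# f(1 + t^2)/t^2 = Q(t)^2 should tend to f'(1) = -6 as t -> 0.

def f(z):
    return (z - 1) * (z - 2) * (z - 3) * (z - 4)

fp1 = (1 - 2) * (1 - 3) * (1 - 4)      # f'(1), the product of the other factors
t = 1e-4
ratio = f(1 + t * t) / (t * t)         # approximates Q(0)^2 = f'(1)
```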
Consider next, s^{2} being of the form a_{0}z^{4} + 4a_{1}z^{3} + ..., wherein a_{0} may be zero, the integral ∫(a_{0}z^{2} + 2a_{1}z) s^{−1}dz; for any finite value of z this integral is easily proved to be everywhere finite; but for infinite values of z its value is of the form At^{−1} + Q(t), where Q(t) is a power series; denoting by √a_{0} a particular square root of a_{0} when a_{0} is not zero, the integral becomes infinite for z = ∞ for both signs of s, the value of A being + √a_{0} or − √a_{0} according as s is √a_{0}·z^{2} (1 + [2a_{1}/a_{0}] z^{−1} + ... ) or is the negative of this; hence the integral J_{1} = ∫ ( [a_{0}z^{2} + 2a_{1}z]/s + √a_{0} ) dz becomes infinite when z is infinite, for the former sign of s, its infinite term being 2√a_{0}·t^{−1} or 2√a_{0}·z, but does not become infinite for z infinite for the other sign of s. When a_{0} = 0 the signs of s for z = ∞ are not separated, being obtained one from the other by a circuit of z about an infinitely large circle, and the form obtained represents an integral becoming infinite as before for z = ∞, its infinite part being 2√a_{1}·t^{−1} or 2√a_{1}·√z. Similarly if z_{0} be any finite value of z which is not a root of the polynomial ƒ(z) to which s^{2} is equal, and s_{0} denotes a particular one of the determinations of s for z = z_{0}, the integral
J_{2} = ∫ { [s_{0}^{2} + ½(z − z_{0}) ƒ′(z_{0})] / [(z − z_{0})^{2} s] + s_{0}/(z − z_{0})^{2} } dz,
wherein ƒ′(z) = dƒ(z)/dz, becomes infinite for z = z_{0}, s = s_{0}, but not for z = z_{0}, s = −s_{0}, its infinite term in the former case being the negative of 2s_{0}/(z − z_{0}). For no other finite or infinite value of z is the integral infinite. If z = θ be a root of ƒ(z), in which case the corresponding value of s is zero, the integral
J_{3} = ½ƒ′(θ) ∫ dz/[(z − θ) s]
becomes infinite for z = θ, its infinite part being, if z − θ = t^{2}, equal to −[ƒ′(θ)]^{1/2} t^{−1}; and this integral is not elsewhere infinite. In each of these cases, of the integrals J_{1}, J_{2}, J_{3}, the subject of integration has been chosen so that when the integral is written near its point of infinity in the form ∫[At^{−2} + Bt^{−1} + Q(t)] dt, the coefficient B is zero, so that the infinity is of algebraic kind, and so that, when there are two signs distinguishable for the critical value of z, the integral becomes infinite for only one of these. An integral having only algebraic infinities, for finite or infinite values of z, is called an integral of the second kind, and it appears that such an integral can be formed with only one such infinity, that is, for an infinity arising only for one particular, and arbitrary, pair of values (s, z) satisfying the equation s^{2} = ƒ(z), this infinity being of the first order. A function having an algebraic infinity of the mth order (m > 1), only for one sign of s when these signs are separable, at (1) z = ∞, (2) z = z_{0}, (3) z = θ, is given respectively by (s d/dz)^{m−1} J_{1}, (s d/dz)^{m−1} J_{2}, (s d/dz)^{m−1} J_{3}, as we easily see. If then we have any elliptic integral having algebraic infinities we can, by subtraction from it of an appropriate sum of constant multiples of J_{1}, J_{2}, J_{3} and their differential coefficients just written down, obtain, as the result, an integral without algebraic infinities. But, in fact, if J, J′ denote any two of the three integrals J_{1}, J_{2}, J_{3}, there exists an equation AJ + BJ′ + C∫s^{−1}dz = rational function of s, z, where A, B, C are properly chosen constants. For the rational function
(s + s_{0})/(z − z_{0}) + z √a_{0}
is at once found to become infinite for (z_{0}, s_{0}), not for (z_{0}, −s_{0}), its infinite part for the first point being 2s_{0}/(z − z_{0}), and to become infinite for z infinitely large, and one sign of s only when these are separable, its infinite part there being 2z √a_{0} or 2 √a_{1} √z when a_{0} = 0. It does not become infinite for any other pair (z, s) satisfying the relation s^{2} = ƒ(z); this is in accordance with the easily verified equation
(s + s_{0})/(z − z_{0}) + z √a_{0} − J_{1} + J_{2} + (a_{0}z_{0}^{2} + 2a_{1}z_{0}) ∫ dz/s = 0;
and there exists the analogous equation
s/(z − θ) + z √a_{0} − J_{1} + J_{3} + (a_{0}θ^{2} + 2a_{1}θ) ∫ dz/s = 0.
Consider now the integral
P = ∫ [ (s + s_{0})/(z − z_{0}) + z √a_{0} ] dz/(2s);
this is at once found to be infinite, for finite values of z, only for (z_{0}, s_{0}), its infinite part being log (z − z_{0}), and for z = ∞, for one sign of s only when these are separable, its infinite part being −log t, that is −log z when a_{0} ≠ 0, and −log (z^{1/2}) when a_{0} = 0. And, if ƒ(θ) = 0, the integral
P_{1} = ∫ [ s/(z − θ) + z √a_{0} ] dz/(2s)
is infinite at z = θ, s = 0 with an infinite part log t, that is log (z − θ)^{1/2}, is not infinite for any other finite value of z, and is infinite like P for z = ∞. An integral possessing such logarithmic infinities is said to be of the third kind.
Hence it appears that any elliptic integral, by subtraction from it of an appropriate sum formed with constant multiples of the integral J_{3} and the rational functions of the form (s d/dz)^{m−1} J_{1} with constant multiples of integrals such as P or P_{1}, with constant multiples of the integral u = ∫s^{−1}dz, and with rational functions, can be reduced to an integral H becoming infinite only for z = ∞, for one sign of s only when these are separable, its infinite part being of the form A log t, that is, A log z or A log (z^{1/2}). Such an integral H = ∫R(z, s)dz does not exist, however, as we at once find by writing R(z, s) = P(z) + sQ(z), where P(z), Q(z) are rational functions of z, and examining the forms possible for these in order that the integral may have only the specified infinity. An analogous theorem holds for rational functions of z and s; there exists no rational function which is finite for finite values of z and is infinite only for z = ∞ for one sign of s and to the first order only; but there exists a rational function infinite in all to the first order for each of two or more pairs (z, s), however they may be situated, or infinite to the second order for an arbitrary pair (z, s); and any rational function may be formed by a sum of constant multiples of functions such as
(s + s_{0})/(z − z_{0}) + z √a_{0}   or   s/(z − θ) + z √a_{0}
and their differential coefficients.
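The pole structure of the rational function (s + s₀)/(z − z₀) + z√a₀ can be seen numerically. The sketch below (our own, with the hypothetical choice s² = z⁴ + 1, so a₀ = 1, a₁ = 0, and the place z₀ = 0, s₀ = 1) evaluates the function near the two places over z = 0 and for large z on each branch.

```python
import math

# A sketch of our own (not from the text): take s^2 = z^4 + 1, so that
# a0 = 1, a1 = 0, and choose the place z0 = 0, s0 = 1.  The rational
# function (s + s0)/(z - z0) + z*sqrt(a0) of the text becomes
# g(z, sign) below, where sign selects the branch of s.

def g(z, sign):
    s = sign * math.sqrt(z ** 4 + 1)
    return (s + 1) / z + z

# g is large near (0, +1) (like 2/z) and for the branch s ~ +z^2 at
# infinity (like 2z), but remains small near (0, -1) and for s ~ -z^2.
```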
The consideration of elliptic integrals is therefore reducible to that of the three
u = ∫ dz/s,  J = ∫ ( [a_{0}z^{2} + 2a_{1}z]/s + √a_{0} ) dz,  P = ∫ ( [s + s_{0}]/(z − z_{0}) + z √a_{0} ) dz/(2s),
respectively of the first, second and third kind. Now the equation s^{2} = a_{0}z^{4} + ... = a_{0} (z − θ) (z − φ) (z − ψ) (z − χ), by putting
x = 1/(z − θ) + ⅓ [ 1/(θ − φ) + 1/(θ − ψ) + 1/(θ − χ) ]
is at once reduced to the form y^{2} = 4x^{3} − g_{2}x − g_{3} = 4(x − e_{1}) (x − e_{2}) (x − e_{3}), say; and these equations enable us to express s and z rationally in terms of x and y. It is therefore sufficient to consider three elliptic integrals
u = ∫ dx/y,  J = ∫ x dx/y,  P = ∫ [ (y + y_{0})/(x − x_{0}) ] dx/(2y).
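That the substitution quoted above removes the square term of the cubic may be checked numerically. Writing x₀ = 1/(z − θ), the relation s² = a₀(z − θ)(z − φ)(z − ψ)(z − χ) gives s²x₀⁴ = a₀[1 + (θ − φ)x₀][1 + (θ − ψ)x₀][1 + (θ − χ)x₀], a cubic in x₀ with roots −1/(θ − φ), −1/(θ − ψ), −1/(θ − χ); the shift prescribed in the text makes the sum of the roots zero. The numbers below are hypothetical choices of our own.

```python
import numpy as np

# Hypothetical values of our own choosing.
theta, phi, psi, chi, a0 = 2.0, -1.0, 0.5, 5.0, 5.0
k = (1 / (theta - phi) + 1 / (theta - psi) + 1 / (theta - chi)) / 3

# roots of the cubic, expressed in the shifted variable x = 1/(z - theta) + k
roots = np.array([-1 / (theta - phi), -1 / (theta - psi), -1 / (theta - chi)]) + k
lead = a0 * (theta - phi) * (theta - psi) * (theta - chi)
cubic = lead * np.poly(roots)      # coefficients, highest power first

# cubic[1], the coefficient of x^2, vanishes: the shift has removed it
```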
Of these consider the first, putting
u = ∫_{(x)}^{(∞)} dx/y,
where the limits involve not only a value for x, but a definite sign for the radical y. When x is very large, if we put x^{−1} = t^{2}, y^{−1} = ½t^{3} (1 − ¼g_{2}t^{4} − ¼g_{3}t^{6})^{−1/2}, we have u = ∫_{0}^{t} (1 − ¼g_{2}t^{4} − ¼g_{3}t^{6})^{−1/2}dt = t + (1/40)g_{2}t^{5} + (1/56)g_{3}t^{7} + ...; whereby a definite power series in powers of u is found for t, and hence a definite power series for x, of the form x = u^{−2} + (1/20)g_{2}u^{2} + (1/28)g_{3}u^{4} + ....
Let this expression be valid for 0 < |u| < R, and the function defined thereby, which has a pole of the second order for u = 0, be denoted by φ(u). In the range in question it is single valued and satisfies the differential equation [φ′(u)]^{2} = 4[φ(u)]^{3} − g_{2}φ(u) − g_{3};
in terms of it we can write x = φ(u), y = − φ′(u), and, φ′(u) being an odd function, the sign attached to y in the original integral for x = ∞ is immaterial. Now for any two values u, v in the range in question consider the function
F(u, v) = ¼ [ (φ′(u) − φ′(v)) / (φ(u) − φ(v)) ]^{2} − φ(u) − φ(v);
it is at once seen, from the differential equation, to be such that ∂F/∂u = ∂F/∂v; it is therefore a function of u + v; supposing |u + v| < R we infer therefore, by putting v = 0, that
φ(u + v) = ¼ [ (φ′(u) − φ′(v)) / (φ(u) − φ(v)) ]^{2} − φ(u) − φ(v).
By repetition of this equation we infer that if u_{1}, ... u_{n} be any arguments each of which is in absolute value less than R, whose sum is also in absolute value less than R, then φ(u_{1} + ... + u_{n}) is a rational function of the 2n functions φ(u_{s}), φ′(u_{s}); and hence, if |u| < R, that
φ(u) = H [ φ(u/n), φ′(u/n) ],
where H is some rational function of the arguments φ(u/n), φ′(u/n). In fact, however, so long as |u/n| < R, each of the functions φ(u/n), φ′(u/n) is single valued and without singularity save for the pole at u = 0; and a rational function of single valued functions, each of which has no singularities other than poles in a certain region, is also a single valued function without singularities other than poles in this region. We infer, therefore, that the function of u expressed by H [φ(u/n), φ′(u/n)] is single valued and without singularities other than poles so long as |u| < nR; it agrees with φ(u) when |u| < R, and hence furnishes a continuation of this function over the extended range |u| < nR. Moreover, from the method of its derivation, it satisfies the differential equation [φ′(u)]^{2} = 4[φ(u)]^{3} − g_{2}φ(u) − g_{3}. This equation has therefore one solution which is a single valued monogenic function with no singularities other than poles for any finite part of the plane, having in particular for u = 0, a pole of the second order; and the method adopted for obtaining this near u = 0 shows that the differential equation has no other such solution. This, however, is not the only solution which is a single valued meromorphic function, the functions φ(u + α), wherein α is arbitrary, being such. Taking now any range of values of u, from u = 0, and putting for any value of u, x = φ(u), y = −φ′(u), so that y^{2} = 4x^{3} − g_{2}x − g_{3}, we clearly have
u = ∫_{(x, y)}^{(∞)} dx/y;
conversely if x_{0} = φ(u_{0}), y_{0} = −φ′(u_{0}) and ξ, η be any values satisfying η^{2} = 4ξ^{3} − g_{2}ξ − g_{3}, which are sufficiently near respectively to x_{0}, y_{0}, while v is defined by
v − u_{0} = − ∫_{(x_{0}, y_{0})}^{(ξ, η)} dξ/η,
then ξ, η are respectively φ(v) and −φ′(v); for this equation leads to an expansion for ξ − x_{0} in terms of v − u_{0} and only one such expansion, and this is obtained by the same work as would be necessary to expand φ(v) when v is near to u_{0}; the function φ(u) can therefore be continued by the help of this equation, from v = u_{0}, provided the lower limit of |ξ − x_{0}| necessary for the expansions is not zero in the neighbourhood of any value (x_{0}, y_{0}). In fact the function φ(u) can have only a finite number of poles in any finite part of the plane of u; each of these can be surrounded by a small circle, and in the portion of the finite part of the plane of u which is outside these circles, the lower limit of the radii of convergence of the expansions of φ(u) is greater than zero; the same will therefore be the case for the lower limit of the radii |ξ − x_{0}| necessary for the continuations spoken of above provided that the values of (ξ, η) considered do not lead to infinitely increasing values of v; there does not exist, however, any definite point (ξ_{0}, η_{0}) in the neighbourhood of which the integral ∫_{(x_{0}, y_{0})}^{(ξ, η)} dξ/η increases indefinitely, it is only by a path of infinite length that the integral can so increase. We infer therefore that if (ξ, η) be any point, where η^{2} = 4ξ^{3} − g_{2}ξ − g_{3}, and v be defined by
v = ∫_{(ξ, η)}^{(∞)} dx/y,
then ξ = φ(v) and η = −φ′(v). Thus this equation determines (ξ, η) without ambiguity. In particular the additive indeterminatenesses of the integral obtained by closed circuits of the point of integration are periods of the function φ(u); by considerations advanced above it appears that these periods are sums of integral multiples of two which may be taken to be
ω = 2 ∫_{e_{1}}^{∞} dx/y,  ω′ = 2 ∫_{e_{3}}^{∞} dx/y;
these quantities cannot therefore have a real ratio, for else, being periods of a monogenic function, they would, as we have previously seen, be each integral multiples of another period; there would then be a closed path for (x, y ), starting from an arbitrary point (x_{0}, y_{0}), other than one enclosing two of the points (e_{1}, 0), (e_{2}, 0), (e_{3}, 0), (∞, ∞), which leads back to the initial point (x_{0}, y_{0}), which is impossible. On the whole, therefore, it appears that the function φ(u) agrees with the function ℜ(u) previously discussed, and the discussion of the elliptic integrals can be continued in the manner given under § 14, Doubly Periodic Functions.
§ 21. Modular Functions.—One result of the previous theory is the remarkable fact that if
ω = 2 ∫_{e_{1}}^{∞} dx/y,  ω′ = 2 ∫_{e_{3}}^{∞} dx/y,
where y^{2} = 4(x − e_{1}) (x − e_{2}) (x − e_{3}), then we have
e_{1} = ℜ(½ω) = (½ω)^{−2} + Σ′ [ (½ω + mω + m′ω′)^{−2} − (mω + m′ω′)^{−2} ],
and a similar equation for e_{3}, where the summation refers to all integer values of m and m′ other than the one pair m = 0, m′ = 0. This, with similar results, has led to the consideration of functions of the complex ratio ω′/ω.
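These statements admit a direct numerical illustration. In the sketch below (our own, with the arbitrarily chosen periods ω = 2, ω′ = 2i) the function ℜ(u) is computed from its lattice series, the invariants from the standard sums g₂ = 60Σ′(mω + m′ω′)^{−4} and g₃ = 140Σ′(mω + m′ω′)^{−6}, and the half-period values e₁, e₂, e₃ are verified to be roots of 4x³ − g₂x − g₃.

```python
# Our own numerical illustration (the periods are an arbitrary choice):
# with omega = 2, omega' = 2i, compute R(u), the doubly periodic function
# of the text, by its lattice series, the invariants by the standard sums,
# and check that the half-period values are roots of 4x^3 - g2*x - g3.

N = 40
omega, omega_p = 2.0, 2.0j
lattice = [m * omega + n * omega_p
           for m in range(-N, N + 1) for n in range(-N, N + 1)
           if (m, n) != (0, 0)]

def wp(u):
    # u^{-2} + sum' [ (u + w)^{-2} - w^{-2} ] over the truncated lattice
    return u ** -2 + sum(1 / (u + w) ** 2 - 1 / w ** 2 for w in lattice)

g2 = 60 * sum(w ** -4 for w in lattice)
g3 = 140 * sum(w ** -6 for w in lattice)

e1 = wp(omega / 2)
e2 = wp((omega + omega_p) / 2)
e3 = wp(omega_p / 2)
# e1, e2, e3 satisfy 4e^3 - g2*e - g3 = 0 up to the truncation error
```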
It is easy to see that the series for ℜ(u), u^{−2} + Σ′[(u + mω + m′ω′)^{−2} − (mω + m′ω′)^{−2}], is unaffected by replacing ω, ω′ by two quantities Ω, Ω′ equal respectively to pω + qω′, p′ω + q′ω′, where p, q, p′, q′ are any integers for which pq′ − p′q = ±1; further it can be proved that all substitutions with integer coefficients Ω = pω + qω′, Ω′ = p′ω + q′ω′, wherein pq′ − p′q = 1, can be built up by repetitions of the two particular substitutions (Ω = −ω′, Ω′ = ω), (Ω = ω, Ω′ = ω + ω′). Consider the function of the ratio ω′/ω expressed by
it is at once seen from the properties of the function ℜ(u) that by the two particular substitutions referred to we obtain the corresponding substitutions for h expressed by
thus, by all the integer substitutions Ω = pω + qω′, Ω′ = p′ω + q′ω′, in which pq′ − p′q = 1, the function h can only take one of the six values h, 1/h, 1 − h, 1/(1 − h), h/(h − 1), (h − 1)/h, which are the roots of an equation in θ,
(1 − θ + θ^{2})^{3} / [θ^{2}(1 − θ)^{2}] = (1 − h + h^{2})^{3} / [h^{2}(1 − h)^{2}];
the function of τ, = ω′/ω, expressed by the right side, is thus unaltered by every one of the substitutions τ′ = (p′ + q′τ)/(p + qτ), wherein p, q, p′, q′ are integers having pq′ − p′q = 1. If the imaginary part σ, of τ, which we may write τ = ρ + iσ, is positive, the imaginary part of τ′, which is equal to σ(pq′ − p′q)/[(p + qρ)^{2} + q^{2}σ^{2}], is also positive; suppose σ to be positive; it can be shown that the upper half of the infinite plane of the complex variable τ can be divided into regions, all bounded by arcs of circles (or straight lines), no two of these regions overlapping, such that any substitution of the kind under consideration, τ′ = (p′ + q′τ)/(p + qτ), leads from an arbitrary point τ, of one of these regions, to a point τ′ of another; taking τ = ρ + iσ, one of these regions may be taken to be that for which −½ < ρ < ½, ρ^{2} + σ^{2} > 1, together with the points for which ρ is negative on the curves limiting this region; then every other region is obtained from this so-called fundamental region by one and only one of the substitutions τ′ = (p′ + q′τ)/(p + qτ), and hence by a definite combination of the substitutions τ′ = −1/τ, τ′ = 1 + τ. Upon the infinite half plane of τ, the function considered above,
z(τ) = (4/27) [ℜ^{2}(½ω) + ℜ(½ω) ℜ(½ω′) + ℜ^{2}(½ω′)]^{3} / { ℜ^{2}(½ω) ℜ^{2}(½ω′) [ℜ(½ω) + ℜ(½ω′)]^{2} }
is a single valued monogenic function, whose only essential singularities are the points τ′ = (p′ + q′τ)/(p + qτ) for which τ = ∞, namely those for which τ′ is any real rational value; the real axis is thus a line over which the function z(τ) cannot be continued, having an essential singularity in every arc of it, however short; in the fundamental region, z(τ) has thus only the single essential singularity, τ = ρ + iσ, where σ = ∞; in this fundamental region z(τ) takes any assigned complex value just once, the relation z(τ′) = z(τ) requiring, as can be shown, that τ′ is of the form (p′ + q′τ)/(p + qτ), in which p, q, p′, q′ are integers with pq′ − p′q = 1; the function z(τ) has thus a similar behaviour in every other of the regions. The division of the plane into regions is analogous to the division of the plane, in the case of doubly periodic functions, into parallelograms; in that case we considered only functions without essential singularities, and in each of the regions the function assumed every complex value twice, at least. Putting, as another function of τ, J(τ) = z(τ) [z(τ) − 1], it can be shown that J(τ) = 0 for τ = exp (⅔πi), that J(τ) = 1 for τ = i, these being values of τ on the boundary of the fundamental region; like z(τ) it has an essential singularity for τ = ρ + iσ, σ = + ∞. In the theory of linear differential equations it is important to consider the inverse function τ(J); this is infinitely many valued, having a cycle of three values for circulation of J about J = 0 (the circuit of this point leading to a linear substitution for τ of period 3, such as τ′ = −(1 + τ)^{−1}), having a cycle of two values about J = 1 (the circuit leading to a linear substitution for τ of period 2, such as τ′ = −τ^{−1}), and having a cycle of infinitely many values about J = ∞ (the circuit leading to a linear substitution for τ which is not periodic, such as τ′ = 1 + τ). These are the only singularities for the function τ(J). Each of the functions
[J(τ)]^{1/3}, [J(τ) − 1]^{1/2}, [ − (ℜ(½ω) + 2ℜ(½ω′)) / (ℜ(½ω) − ℜ(½ω′)) ]^{1/8},
beside many others (see below), is a single valued function of τ, and is expressible without ambiguity in terms of the single valued function of τ,
η(τ) = exp(iπτ/12) Π_{n=1}^{∞} [1 − exp(2iπnτ)] = exp(iπτ/12) Σ_{m=−∞}^{∞} (−1)^{m} exp[(3m^{2} + m)iπτ].
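The equality of the infinite product and the series (Euler's pentagonal-number identity, applied to q = exp(2iπτ)) is easily tested numerically. The sketch below (truncation limits and function names are mine) evaluates both truncated expressions at τ = i, where the terms decrease very rapidly.

```python
from cmath import exp, pi

def eta_product(tau, nmax=60):
    """exp(i*pi*tau/12) * prod_{n=1}^{nmax} (1 - exp(2*i*pi*n*tau))."""
    q = exp(2j * pi * tau)
    p = exp(1j * pi * tau / 12)
    for n in range(1, nmax + 1):
        p *= 1 - q ** n
    return p

def eta_sum(tau, mmax=30):
    """exp(i*pi*tau/12) * sum_{m=-mmax}^{mmax} (-1)^m exp((3m^2+m)*i*pi*tau)."""
    s = sum((-1) ** m * exp((3 * m * m + m) * 1j * pi * tau)
            for m in range(-mmax, mmax + 1))
    return exp(1j * pi * tau / 12) * s

tau = 1j  # here q = exp(-2*pi), so both truncations converge very fast
print(eta_product(tau), eta_sum(tau))  # the two expressions agree
```

At τ = i both expressions give the same real value, approximately 0.7682, consistent with the classical evaluation η(i) = Γ(¼)/(2π^{3/4}).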