
CHAPTER XIV.
ON TRUE COMPOUND INTEREST AND THE LAW OF ORGANIC GROWTH.

Let there be a quantity growing in such a way that the increment of its growth, during a given time, shall always be proportional to its own magnitude. This resembles the process of reckoning interest on money at some fixed rate; for the bigger the capital, the bigger the amount of interest on it in a given time.

Now we must distinguish clearly between two cases, in our calculation, according as the calculation is made by what the arithmetic books call “simple interest,” or by what they call “compound interest.” For in the former case the capital remains fixed, while in the latter the interest is added to the capital, which therefore increases by successive additions.

(1) At simple interest. Consider a concrete case. Let the capital at start be £${\displaystyle 100}$, and let the rate of interest be ${\displaystyle 10}$ per cent. per annum. Then the increment to the owner of the capital will be £${\displaystyle 10}$ every year. Let him go on drawing his interest every year, and hoard it by putting it by in a stocking, or locking it up in his safe. Then, if he goes on for ${\displaystyle 10}$ years, by the end of that time he will have received ${\displaystyle 10}$ increments of £${\displaystyle 10}$ each, or £${\displaystyle 100}$, making, with the original £${\displaystyle 100}$, a total of £${\displaystyle 200}$ in all. His property will have doubled itself in ${\displaystyle 10}$ years. If the rate of interest had been ${\displaystyle 5}$ per cent., he would have had to hoard for ${\displaystyle 20}$ years to double his property. If it had been only ${\displaystyle 2}$ per cent., he would have had to hoard for ${\displaystyle 50}$ years. It is easy to see that if the value of the yearly interest is ${\displaystyle {\dfrac {1}{n}}}$ of the capital, he must go on hoarding for ${\displaystyle n}$ years in order to double his property.

Or, if ${\displaystyle y}$ be the original capital, and the yearly interest is ${\displaystyle {\frac {y}{n}}}$, then, at the end of ${\displaystyle n}$ years, his property will be

${\displaystyle y+n{\dfrac {y}{n}}=2y}$.

(2) At compound interest. As before, let the owner begin with a capital of £${\displaystyle 100}$, earning interest at the rate of ${\displaystyle 10}$ per cent. per annum; but, instead of hoarding the interest, let it be added to the capital each year, so that the capital grows year by year. Then, at the end of one year, the capital will have grown to £${\displaystyle 110}$; and in the second year (still at ${\displaystyle 10\%}$) this will earn £${\displaystyle 11}$ interest. He will start the third year with £${\displaystyle 121}$, and the interest on that will be £${\displaystyle 12}$. ${\displaystyle 2}$s.; so that he starts the fourth year with £${\displaystyle 133}$. ${\displaystyle 2}$s., and so on. It is easy to work it out, and find that at the end of the ten years the total capital will have grown to £${\displaystyle 259}$. ${\displaystyle 7}$s. ${\displaystyle 6}$d. In fact, we see that at the end of each year, each pound will have earned ${\displaystyle {\tfrac {1}{10}}}$ of a pound, and therefore, if this is always added on, each year multiplies the capital by ${\displaystyle {\tfrac {11}{10}}}$; and if continued for ten years (which will multiply by this factor ten times over) will multiply the original capital by ${\displaystyle 2.59374}$. Let us put this into symbols. Put ${\displaystyle y_{0}}$ for the original capital; ${\displaystyle {\dfrac {1}{n}}}$ for the fraction added on at each of the ${\displaystyle n}$ operations; and ${\displaystyle y_{n}}$ for the value of the capital at the end of the ${\displaystyle n^{th}}$ operation. Then

${\displaystyle y_{n}=y_{0}\left(1+{\frac {1}{n}}\right)^{n}}$.
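For the modern reader this reckoning is easily verified by machine. The following short Python sketch (an editorial addition, not part of Thompson's text) re-does the year-by-year arithmetic and checks it against the closed formula:

```python
# Compound interest reckoned once a year: 100 pounds at 10 per cent.
# Each year multiplies the capital by 11/10, as in the text.
capital = 100.0
for year in range(10):
    capital *= 1 + 1 / 10

# The closed form y_n = y_0 * (1 + 1/n)**n gives the same figure.
closed_form = 100 * (1 + 1 / 10) ** 10
print(round(capital, 3))  # 259.374, i.e. roughly 259 pounds 7s. 6d.
```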

But this mode of reckoning compound interest once a year, is really not quite fair; for even during the first year the £${\displaystyle 100}$ ought to have been growing. At the end of half a year it ought to have been at least £${\displaystyle 105}$, and it certainly would have been fairer had the interest for the second half of the year been calculated on £${\displaystyle 105}$. This would be equivalent to calling it ${\displaystyle 5\%}$ per half-year; with ${\displaystyle 20}$ operations, therefore, at each of which the capital is multiplied by ${\displaystyle {\tfrac {21}{20}}}$. If reckoned this way, by the end of ten years the capital would have grown to £${\displaystyle 265}$. ${\displaystyle 6}$s. ${\displaystyle 7}$d.; for

${\displaystyle (1+{\tfrac {1}{20}})^{20}=2.653}$.

But, even so, the process is still not quite fair; for, by the end of the first month, there will be some interest earned; and a half-yearly reckoning assumes that the capital remains stationary for six months at a time. Suppose we divide the year into ${\displaystyle 10}$ parts, and reckon a one-per-cent. interest for each tenth of the year. We now have ${\displaystyle 100}$ operations lasting over the ten years; or

${\displaystyle y_{n}=}$£${\displaystyle 100\left(1+{\tfrac {1}{100}}\right)^{100}}$;

which works out to £${\displaystyle 270}$. ${\displaystyle 9}$s. ${\displaystyle 7{\tfrac {1}{2}}}$d.

Even this is not final. Let the ten years be divided into ${\displaystyle 1000}$ periods, each of ${\displaystyle {\tfrac {1}{100}}}$ of a year; the interest being ${\displaystyle {\tfrac {1}{10}}}$ per cent. for each such period; then

${\displaystyle y_{n}=}$£${\displaystyle 100\left(1+{\tfrac {1}{1000}}\right)^{1000}}$;

which works out to £${\displaystyle 271}$. ${\displaystyle 13}$s. ${\displaystyle 10}$d.

Go even more minutely, and divide the ten years into ${\displaystyle 10,000}$ parts, each ${\displaystyle {\tfrac {1}{1000}}}$ of a year, with interest at ${\displaystyle {\tfrac {1}{100}}}$ of ${\displaystyle 1}$ per cent. Then

${\displaystyle y_{n}=}$£${\displaystyle 100\left(1+{\tfrac {1}{10,000}}\right)^{10,000}}$;

which amounts to £${\displaystyle 271}$. ${\displaystyle 16}$s. ${\displaystyle 4}$d.
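These successive refinements can all be reproduced at once by machine (a modern sketch, not in the original; the helper names `grown_capital` and `to_lsd` are ours):

```python
def grown_capital(n, start=100.0):
    """Capital after n compoundings, each adding the fraction 1/n of itself."""
    return start * (1 + 1 / n) ** n

def to_lsd(pounds):
    """Decimal pounds -> (pounds, shillings, pence): 20s = 1 pound, 12d = 1s."""
    whole = int(pounds)
    shillings = (pounds - whole) * 20
    pence = round((shillings - int(shillings)) * 12)
    return whole, int(shillings), pence

# n = 10 is yearly compounding, n = 20 half-yearly, and so on.
for n in (10, 20, 100, 1000, 10000):
    print(n, to_lsd(grown_capital(n)))
```

As ${\displaystyle n}$ grows, the figures creep up toward ${\displaystyle 100\times 2.71828\ldots }$, that is, about £${\displaystyle 271}$. ${\displaystyle 16}$s. ${\displaystyle 7}$d.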

Finally, it will be seen that what we are trying to find is in reality the ultimate value of the expression ${\displaystyle \left(1+{\dfrac {1}{n}}\right)^{n}}$, which, as we see, is greater than ${\displaystyle 2}$; and which, as we take ${\displaystyle n}$ larger and larger, grows closer and closer to a particular limiting value. However big you make ${\displaystyle n}$, the value of this expression grows nearer and nearer to the figure

${\displaystyle 2.71828\ldots }$

a number never to be forgotten.

Let us take geometrical illustrations of these things. In Fig. 36, ${\displaystyle OP}$ stands for the original value. ${\displaystyle OT}$ is the whole time during which the value is growing. It is divided into ${\displaystyle 10}$ periods, in each of which there is an equal step up. Here ${\displaystyle {\dfrac {dy}{dx}}}$ is a constant; and if each step up is ${\displaystyle {\tfrac {1}{10}}}$ of the original ${\displaystyle OP}$, then, by ${\displaystyle 10}$ such steps, the height is doubled. If we had taken ${\displaystyle 20}$ steps, each of half the height shown, at the end the height would still be just doubled. Or ${\displaystyle n}$ such steps, each of ${\displaystyle {\dfrac {1}{n}}}$ of the original height ${\displaystyle OP}$, would suffice to double the height. This is the case of simple interest. Here is ${\displaystyle 1}$ growing till it becomes ${\displaystyle 2}$.

Fig. 36.

In Fig. 37, we have the corresponding illustration of the geometrical progression. Each of the successive ordinates is to be ${\displaystyle 1+{\dfrac {1}{n}}}$, that is, ${\displaystyle {\dfrac {n+1}{n}}}$ times as high as its predecessor. The steps up are not equal, because each step up is now ${\displaystyle {\dfrac {1}{n}}}$ of the ordinate at that part of the curve. If we had literally ${\displaystyle 10}$ steps, with ${\displaystyle \left(1+{\tfrac {1}{10}}\right)}$ for the multiplying factor, the final total would be ${\displaystyle (1+{\tfrac {1}{10}})^{10}}$ or ${\displaystyle 2.593}$ times the original ${\displaystyle 1}$. But if only we take ${\displaystyle n}$ sufficiently large (and the corresponding ${\displaystyle {\dfrac {1}{n}}}$ sufficiently small), then the final value ${\displaystyle \left(1+{\dfrac {1}{n}}\right)^{n}}$ to which unity will grow will be ${\displaystyle 2.71828}$.

Fig. 37.

Epsilon. To this mysterious number ${\displaystyle 2.7182818}$ etc., the mathematicians have assigned as a symbol the Greek letter ${\displaystyle \epsilon }$ (pronounced epsilon). All schoolboys know that the Greek letter ${\displaystyle \pi }$ (called pi) stands for ${\displaystyle 3.141592}$ etc.; but how many of them know that epsilon means ${\displaystyle 2.71828}$? Yet it is an even more important number than ${\displaystyle \pi }$!

What, then, is epsilon?

Suppose we were to let ${\displaystyle 1}$ grow at simple interest till it became ${\displaystyle 2}$; then, if at the same nominal rate of interest, and for the same time, we were to let ${\displaystyle 1}$ grow at true compound interest, instead of simple, it would grow to the value epsilon.

This process of growing proportionately, at every instant, to the magnitude at that instant, some people call a logarithmic rate of growing. Unit logarithmic rate of growth is that rate which in unit time will cause ${\displaystyle 1}$ to grow to ${\displaystyle 2.718281}$. It might also be called the organic rate of growing: because it is characteristic of organic growth (in certain circumstances) that the increment of the organism in a given time is proportional to the magnitude of the organism itself.

If we take ${\displaystyle 100}$ per cent. as the unit of rate, and any fixed period as the unit of time, then the result of letting ${\displaystyle 1}$ grow arithmetically at unit rate, for unit time, will be ${\displaystyle 2}$, while the result of letting ${\displaystyle 1}$ grow logarithmically at unit rate, for the same time, will be ${\displaystyle 2.71828\ldots .}$

A little more about Epsilon. We have seen that we require to know what value is reached by the expression ${\displaystyle \left(1+{\dfrac {1}{n}}\right)^{n}}$, when ${\displaystyle n}$ becomes indefinitely great. Arithmetically, here are tabulated a lot of values (which anybody can calculate out by the help of an ordinary table of logarithms) got by assuming ${\displaystyle n=2}$; ${\displaystyle n=5}$; ${\displaystyle n=10}$; and so on, up to ${\displaystyle n=10,000}$.

{\displaystyle {\begin{alignedat}{2}&(1+{\tfrac {1}{2}})^{2}&&=2.25.\\&(1+{\tfrac {1}{5}})^{5}&&=2.488.\\&(1+{\tfrac {1}{10}})^{10}&&=2.594.\\&(1+{\tfrac {1}{20}})^{20}&&=2.653.\\&(1+{\tfrac {1}{100}})^{100}&&=2.705.\\&(1+{\tfrac {1}{1000}})^{1000}&&=2.7169.\\&(1+{\tfrac {1}{10,000}})^{10,000}&&=2.7181.\end{alignedat}}}
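A reader with a computer rather than a table of logarithms can reproduce these values directly (a modern aside, not Thompson's):

```python
# Tabulate (1 + 1/n)^n for the same values of n as above
for n in (2, 5, 10, 20, 100, 1000, 10000):
    print(n, round((1 + 1 / n) ** n, 4))
```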

It is, however, worth while to find another way of calculating this immensely important figure.

Accordingly, we will avail ourselves of the binomial theorem, and expand the expression ${\displaystyle \left(1+{\dfrac {1}{n}}\right)^{n}}$ in that well-known way.

The binomial theorem gives the rule that

{\displaystyle {\begin{aligned}(a+b)^{n}&=a^{n}+n{\dfrac {a^{n-1}b}{1!}}+n(n-1){\dfrac {a^{n-2}b^{2}}{2!}}\\&{\phantom {=a^{n}\ }}+n(n-1)(n-2){\dfrac {a^{n-3}b^{3}}{3!}}+{\text{etc}}.\\\end{aligned}}}

Putting ${\displaystyle a=1}$ and ${\displaystyle b={\dfrac {1}{n}}}$, we get

{\displaystyle {\begin{aligned}\left(1+{\dfrac {1}{n}}\right)^{n}&=1+1+{\dfrac {1}{2!}}\left({\dfrac {n-1}{n}}\right)+{\dfrac {1}{3!}}{\dfrac {(n-1)(n-2)}{n^{2}}}\\&{\phantom {=1+1\ }}+{\dfrac {1}{4!}}{\dfrac {(n-1)(n-2)(n-3)}{n^{3}}}+{\text{etc}}.\end{aligned}}}

Now, if we suppose n to become indefinitely great, say a billion, or a billion billions, then ${\displaystyle n-1}$, ${\displaystyle n-2}$, and ${\displaystyle n-3}$, etc., will all be sensibly equal to ${\displaystyle n}$; and then the series becomes

${\displaystyle \epsilon =1+1+{\dfrac {1}{2!}}+{\dfrac {1}{3!}}+{\dfrac {1}{4!}}+{\text{etc}}.\ldots }$

By taking this rapidly convergent series to as many terms as we please, we can work out the sum to any desired point of accuracy. Here is the working for ten terms:

{\displaystyle {\begin{array}{lr}&1.000000\\{\text{dividing by }}1&1.000000\\{\text{dividing by }}2&0.500000\\{\text{dividing by }}3&0.166667\\{\text{dividing by }}4&0.041667\\{\text{dividing by }}5&0.008333\\{\text{dividing by }}6&0.001389\\{\text{dividing by }}7&0.000198\\{\text{dividing by }}8&0.000025\\{\text{dividing by }}9&0.000002\\\hline {\text{Total}}&2.718281\end{array}}}

${\displaystyle \epsilon }$ is incommensurable with ${\displaystyle 1}$, and resembles ${\displaystyle \pi }$ in being an interminable non-recurrent decimal.

The Exponential Series. We shall have need of yet another series.

Let us, again making use of the binomial theorem, expand the expression ${\displaystyle \left(1+{\dfrac {1}{n}}\right)^{nx}}$, which is the same as ${\displaystyle \epsilon ^{x}}$ when we make ${\displaystyle n}$ indefinitely great.

{\displaystyle {\begin{aligned}\epsilon ^{x}&=1^{nx}+nx{\frac {1^{nx-1}\left({\dfrac {1}{n}}\right)}{1!}}+nx(nx-1){\frac {1^{nx-2}\left({\dfrac {1}{n}}\right)^{2}}{2!}}\\&{\phantom {=1^{nx}\ }}+nx(nx-1)(nx-2){\frac {1^{nx-3}\left({\dfrac {1}{n}}\right)^{3}}{3!}}+{\text{etc}}.\\&=1+x+{\frac {1}{2!}}\cdot {\frac {n^{2}x^{2}-nx}{n^{2}}}+{\frac {1}{3!}}\cdot {\frac {n^{3}x^{3}-3n^{2}x^{2}+2nx}{n^{3}}}+{\text{etc}}.\\&=1+x+{\frac {x^{2}-{\dfrac {x}{n}}}{2!}}+{\frac {x^{3}-{\dfrac {3x^{2}}{n}}+{\dfrac {2x}{n^{2}}}}{3!}}+{\text{etc}}.\end{aligned}}}


But, when ${\displaystyle n}$ is made indefinitely great, this simplifies down to the following:

${\displaystyle \epsilon ^{x}=1+x+{\frac {x^{2}}{2!}}+{\frac {x^{3}}{3!}}+{\frac {x^{4}}{4!}}+{\text{etc.}}\dots }$

This series is called the exponential series.
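The exponential series lends itself to the same kind of numerical check (our sketch, not the book's; the function name `exp_series` is ours):

```python
import math

def exp_series(x, terms=40):
    """Partial sum of the exponential series 1 + x + x^2/2! + x^3/3! + ..."""
    total, term = 0.0, 1.0
    for k in range(terms):
        total += term
        term *= x / (k + 1)   # next term: multiply by x, divide by (k+1)
    return total

print(exp_series(1.0))  # ~ 2.71828..., the number epsilon itself
print(exp_series(3.0))  # ~ epsilon cubed
```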

The great reason why ${\displaystyle \epsilon }$ is regarded of importance is that ${\displaystyle \epsilon ^{x}}$ possesses a property, not possessed by any other function of ${\displaystyle x}$, that when you differentiate it its value remains unchanged; or, in other words, its differential coefficient is the same as itself. This can be instantly seen by differentiating it with respect to ${\displaystyle x}$, thus:

{\displaystyle {\begin{aligned}{\frac {d(\epsilon ^{x})}{dx}}&=0+1+{\frac {2x}{1\cdot 2}}+{\frac {3x^{2}}{1\cdot 2\cdot 3}}+{\frac {4x^{3}}{1\cdot 2\cdot 3\cdot 4}}\\&{\phantom {=0+1+{\frac {2x}{1\cdot 2}}+{\frac {3x^{2}}{1\cdot 2\cdot 3}}\ }}+{\frac {5x^{4}}{1\cdot 2\cdot 3\cdot 4\cdot 5}}+{\text{etc}}.\\{\text{or}}\quad &=1+x+{\frac {x^{2}}{1\cdot 2}}+{\frac {x^{3}}{1\cdot 2\cdot 3}}+{\frac {x^{4}}{1\cdot 2\cdot 3\cdot 4}}+{\text{etc}}.,\end{aligned}}}

which is exactly the same as the original series.
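This self-reproducing property can be tested numerically as well (an editorial addition; the step size `h` is an arbitrary small number of our choosing):

```python
import math

# Central-difference estimate of the slope of epsilon^x at an arbitrary point:
# it should equal epsilon^x itself.
x, h = 1.3, 1e-6
slope = (math.exp(x + h) - math.exp(x - h)) / (2 * h)
print(slope, math.exp(x))  # the two agree to many places
```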

Now we might have gone to work the other way, and said: Go to; let us find a function of ${\displaystyle x}$, such that its differential coefficient is the same as itself. Or, is there any expression, involving only powers of ${\displaystyle x}$, which is unchanged by differentiation? Accordingly; let us assume as a general expression that

${\displaystyle y=A+Bx+Cx^{2}+Dx^{3}+Ex^{4}+{\text{etc}}}$.,

(in which the coefficients ${\displaystyle A}$, ${\displaystyle B}$, ${\displaystyle C}$, etc. will have to be determined), and differentiate it.

${\displaystyle {\dfrac {dy}{dx}}=B+2Cx+3Dx^{2}+4Ex^{3}+{\text{etc}}}$.

Now, if this new expression is really to be the same as that from which it was derived, it is clear that ${\displaystyle A}$ must ${\displaystyle =B}$; that ${\displaystyle C={\dfrac {B}{2}}={\dfrac {A}{1\cdot 2}}}$; that ${\displaystyle D={\dfrac {C}{3}}={\dfrac {A}{1\cdot 2\cdot 3}}}$; that ${\displaystyle E={\dfrac {D}{4}}={\dfrac {A}{1\cdot 2\cdot 3\cdot 4}}}$, etc.

The law of change is therefore that

${\displaystyle y=A\left(1+{\dfrac {x}{1}}+{\dfrac {x^{2}}{1\cdot 2}}+{\dfrac {x^{3}}{1\cdot 2\cdot 3}}+{\dfrac {x^{4}}{1\cdot 2\cdot 3\cdot 4}}+{\text{etc}}.\right)}$.

If, now, we take ${\displaystyle A=1}$ for the sake of further simplicity, we have

${\displaystyle y=1+{\dfrac {x}{1}}+{\dfrac {x^{2}}{1\cdot 2}}+{\dfrac {x^{3}}{1\cdot 2\cdot 3}}+{\dfrac {x^{4}}{1\cdot 2\cdot 3\cdot 4}}+{\text{etc}}}$.

Differentiating it any number of times will give always the same series over again.

If, now, we take the particular case of ${\displaystyle A=1}$, and evaluate the series, we shall get simply

{\displaystyle {\begin{aligned}{\text{when }}x&=1,\quad &y&=2.718281{\text{ etc.}};&{\text{that is, }}y&=\epsilon ;\\{\text{when }}x&=2,\quad &y&=(2.718281{\text{ etc.}})^{2};&{\text{that is, }}y&=\epsilon ^{2};\\{\text{when }}x&=3,\quad &y&=(2.718281{\text{ etc.}})^{3};&{\text{that is, }}y&=\epsilon ^{3};\end{aligned}}}

and therefore

when ${\displaystyle x=x,\quad y=(2.718281{\text{ etc}}.)^{x}}$; that is, ${\displaystyle y=\epsilon ^{x},}$ thus finally demonstrating that

${\displaystyle \epsilon ^{x}=1+{\dfrac {x}{1}}+{\dfrac {x^{2}}{1\cdot 2}}+{\dfrac {x^{3}}{1\cdot 2\cdot 3}}+{\dfrac {x^{4}}{1\cdot 2\cdot 3\cdot 4}}+{\text{etc}}.}$

[Note.–How to read exponentials. For the benefit of those who have no tutor at hand it may be of use to state that ${\displaystyle \epsilon ^{x}}$ is read as “epsilon to the eksth power;” or some people read it “exponential eks.” So ${\displaystyle \epsilon ^{pt}}$ is read “epsilon to the pee-teeth-power” or “exponential pee tee.” Take some similar expressions:–Thus, ${\displaystyle \epsilon ^{-2}}$ is read “epsilon to the minus two power” or “exponential minus two.” ${\displaystyle \epsilon ^{-ax}}$ is read “epsilon to the minus ay-eksth” or “exponential minus ay-eks.”]

Of course it follows that ${\displaystyle \epsilon ^{y}}$ remains unchanged if differentiated with respect to ${\displaystyle y}$. Also ${\displaystyle \epsilon ^{ax}}$, which is equal to ${\displaystyle (\epsilon ^{a})^{x}}$, will, when differentiated with respect to ${\displaystyle x}$, be ${\displaystyle a\epsilon ^{ax}}$, because ${\displaystyle a}$ is a constant.

Natural or Naperian Logarithms.

Another reason why ${\displaystyle \epsilon }$ is important is because it was made by Napier, the inventor of logarithms, the basis of his system. If ${\displaystyle y}$ is the value of ${\displaystyle \epsilon ^{x}}$, then ${\displaystyle x}$ is the logarithm, to the base ${\displaystyle \epsilon }$, of ${\displaystyle y}$. Or, if

{\displaystyle {\begin{aligned}y&=\epsilon ^{x},\\{\text{then}}\;x&=\log _{\epsilon }y.\end{aligned}}}

The two curves plotted in Figs. 38 and 39 represent these equations.

The points calculated are:

For Fig. 38:

{\displaystyle {\begin{array}{c|ccccc}x&0&0.5&1&1.5&2\\\hline y&1&1.65&2.71&4.50&7.69\end{array}}}

For Fig. 39:

{\displaystyle {\begin{array}{c|ccccc}x&1&2&3&4&8\\\hline y&0&0.69&1.10&1.39&2.08\end{array}}}
 Fig. 38. Fig. 39.

It will be seen that, though the calculations yield different points for plotting, yet the result is identical. The two equations really mean the same thing.

As many persons who use ordinary logarithms, which are calculated to base ${\displaystyle 10}$ instead of base ${\displaystyle \epsilon }$, are unfamiliar with the “natural” logarithms, it may be worth while to say a word about them. The ordinary rule that adding logarithms gives the logarithm of the product still holds good; or

${\displaystyle \log _{\epsilon }a+\log _{\epsilon }b=\log _{\epsilon }ab}$.

Also the rule of powers holds good;

${\displaystyle n\times \log _{\epsilon }a=\log _{\epsilon }a^{n}}$.

But as ${\displaystyle 10}$ is no longer the basis, one cannot multiply by ${\displaystyle 100}$ or ${\displaystyle 1000}$ by merely adding ${\displaystyle 2}$ or ${\displaystyle 3}$ to the index. One can change the natural logarithm to the ordinary logarithm simply by multiplying it by ${\displaystyle 0.4343}$; or

${\displaystyle \log _{10}x=0.4343\times \log _{\epsilon }x}$,

and conversely,

${\displaystyle \log _{\epsilon }x=2.3026\times \log _{10}x}$.
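Both moduli are easily verified (a modern sketch, not part of the text; the value of ${\displaystyle x}$ chosen is arbitrary):

```python
import math

# The moduli 0.4343 and 2.3026 connect common and natural logarithms
x = 7.0
print(math.log10(x), 0.4343 * math.log(x))   # the two agree to four places
print(math.log(x), 2.3026 * math.log10(x))   # and so do these
```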

A Useful Table of “Napierian Logarithms”
(Also called Natural Logarithms or Hyperbolic Logarithms)

{\displaystyle {\begin{array}{cc|cc}{\text{Number}}&\log _{\epsilon }&{\text{Number}}&\log _{\epsilon }\\\hline 1&0.0000&6&1.7918\\1.1&0.0953&7&1.9459\\1.2&0.1823&8&2.0794\\1.5&0.4055&9&2.1972\\1.7&0.5306&10&2.3026\\2.0&0.6931&20&2.9957\\2.2&0.7885&50&3.9120\\2.5&0.9163&100&4.6052\\2.7&0.9933&200&5.2983\\2.8&1.0296&500&6.2146\\3.0&1.0986&1{,}000&6.9078\\3.5&1.2528&2{,}000&7.6010\\4.0&1.3863&5{,}000&8.5172\\4.5&1.5041&10{,}000&9.2104\\5.0&1.6094&20{,}000&9.9035\end{array}}}
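A few entries of the table can be spot-checked with the built-in natural logarithm (an editorial aside; the selection of entries is ours):

```python
import math

# Spot-check several entries of the table of Napierian logarithms
table = {1: 0.0000, 2.0: 0.6931, 2.7: 0.9933, 10: 2.3026,
         100: 4.6052, 20000: 9.9035}
for number, logged in table.items():
    assert abs(math.log(number) - logged) < 5e-4
print("all sampled entries agree to four places")
```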

Exponential and Logarithmic Equations.

Now let us try our hands at differentiating certain expressions that contain logarithms or exponentials.

Take the equation:

${\displaystyle y=\log _{\epsilon }x}$.

First transform this into

${\displaystyle \epsilon ^{y}=x}$,

whence, since the differential of ${\displaystyle \epsilon ^{y}}$ with regard to ${\displaystyle y}$ is the original function unchanged (see p. 143),

${\displaystyle {\frac {dx}{dy}}=\epsilon ^{y}}$,

and, reverting from the inverse to the original function,

${\displaystyle {\frac {dy}{dx}}={\frac {1}{\ {\dfrac {dx}{dy}}\ }}={\frac {1}{\epsilon ^{y}}}={\frac {1}{x}}}$.

Now this is a very curious result. It may be written

${\displaystyle {\frac {d(\log _{\epsilon }x)}{dx}}=x^{-1}}$.

Note that ${\displaystyle x^{-1}}$ is a result that we could never have got by the rule for differentiating powers. That rule (page 25) is to multiply by the power, and reduce the power by ${\displaystyle 1}$. Thus, differentiating ${\displaystyle x^{3}}$ gave us ${\displaystyle 3x^{2}}$; and differentiating ${\displaystyle x^{2}}$ gave ${\displaystyle 2x^{1}}$. But differentiating ${\displaystyle x^{0}}$ does not give us ${\displaystyle x^{-1}}$ or ${\displaystyle 0\times x^{-1}}$, because ${\displaystyle x^{0}}$ is itself ${\displaystyle =1}$, and is a constant. We shall have to come back to this curious fact that differentiating ${\displaystyle \log _{\epsilon }x}$ gives us ${\displaystyle {\dfrac {1}{x}}}$ when we reach the chapter on integrating.
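The curious result ${\displaystyle {\dfrac {d(\log _{\epsilon }x)}{dx}}={\dfrac {1}{x}}}$ can also be seen numerically (our sketch, not the book's; the point and step size are arbitrary):

```python
import math

# Central-difference estimate of the slope of log_e(x): it should be 1/x
x, h = 2.5, 1e-6
slope = (math.log(x + h) - math.log(x - h)) / (2 * h)
print(slope, 1 / x)  # the two agree to many places
```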

Now, try to differentiate

${\displaystyle y=\log _{\epsilon }(x+a)}$,

that is

${\displaystyle \epsilon ^{y}=x+a}$;

we have ${\displaystyle {\dfrac {d(x+a)}{dy}}=\epsilon ^{y}}$, since the differential of ${\displaystyle \epsilon ^{y}}$ remains ${\displaystyle \epsilon ^{y}}$.

This gives

${\displaystyle {\frac {dx}{dy}}=\epsilon ^{y}=x+a}$;

hence, reverting to the original function, we get

${\displaystyle {\frac {dy}{dx}}={\frac {1}{\;{\dfrac {dx}{dy}}\;}}={\frac {1}{x+a}}}$.

Next try

${\displaystyle y=\log _{10}x}$.

First change to natural logarithms by multiplying by the modulus ${\displaystyle 0.4343.}$ This gives us

${\displaystyle y=0.4343\log _{\epsilon }x}$;

whence

${\displaystyle {\frac {dy}{dx}}={\frac {0.4343}{x}}}$.

The next thing is not quite so simple. Try this:

${\displaystyle y=a^{x}}$.

Taking the logarithm of both sides, we get

{\displaystyle {\begin{aligned}\log _{\epsilon }y&=x\log _{\epsilon }a,\\{\text{ or}}\;x={\frac {\log _{\epsilon }y}{\log _{\epsilon }a}}&={\frac {1}{\log _{\epsilon }a}}\times \log _{\epsilon }y.\end{aligned}}}

Since ${\displaystyle {\dfrac {1}{\log _{\epsilon }a}}}$ is a constant, we get

${\displaystyle {\frac {dx}{dy}}={\frac {1}{\log _{\epsilon }a}}\times {\frac {1}{y}}={\frac {1}{a^{x}\times \log _{\epsilon }a}}}$;

hence, reverting to the original function.

${\displaystyle {\frac {dy}{dx}}={\frac {1}{\;{\dfrac {dx}{dy}}\;}}=a^{x}\times \log _{\epsilon }a}$.

We see that, since

${\displaystyle {\frac {dx}{dy}}\times {\frac {dy}{dx}}=1}$ and ${\displaystyle {\frac {dx}{dy}}={\frac {1}{y}}\times {\frac {1}{\log _{\epsilon }a}},\quad {\frac {1}{y}}\times {\frac {dy}{dx}}=\log _{\epsilon }a}$.

We shall find that whenever we have an expression such as ${\displaystyle \log _{\epsilon }y=}$ a function of ${\displaystyle x}$, we always have ${\displaystyle {\dfrac {1}{y}}\,{\dfrac {dy}{dx}}=}$ the differential coefficient of the function of ${\displaystyle x}$, so that we could have written at once, from ${\displaystyle \log _{\epsilon }y=x\log _{\epsilon }a}$,

${\displaystyle {\frac {1}{y}}\,{\frac {dy}{dx}}=\log _{\epsilon }a\quad {\text{and}}\quad {\frac {dy}{dx}}=a^{x}\log _{\epsilon }a.}$
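The result ${\displaystyle {\dfrac {dy}{dx}}=a^{x}\log _{\epsilon }a}$ checks out numerically too (an editorial addition; the values of ${\displaystyle a}$ and ${\displaystyle x}$ are arbitrary):

```python
import math

# Check dy/dx = a^x * log_e(a) for, say, a = 3 at x = 1.7
a, x, h = 3.0, 1.7, 1e-6
slope = (a ** (x + h) - a ** (x - h)) / (2 * h)   # central difference
print(slope, a ** x * math.log(a))
```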

Let us now attempt further examples.

Examples.

(1) ${\displaystyle y=\epsilon ^{-ax}}$. Let ${\displaystyle -ax=z}$; then ${\displaystyle y=\epsilon ^{z}}$.

${\displaystyle {\frac {dy}{dz}}=\epsilon ^{z}}$; ${\displaystyle {\frac {dz}{dx}}=-a}$; hence ${\displaystyle {\frac {dy}{dx}}=-a\epsilon ^{-ax}}$.

Or thus:

${\displaystyle \log _{\epsilon }y=-ax;\quad {\frac {1}{y}}\,{\frac {dy}{dx}}=-a;\quad {\frac {dy}{dx}}=-ay=-a\epsilon ^{-ax}}$.

(2) ${\displaystyle y=\epsilon ^{\frac {x^{2}}{3}}}$. Let ${\displaystyle {\dfrac {x^{2}}{3}}=z}$; then ${\displaystyle y=\epsilon ^{z}}$.

${\displaystyle {\frac {dy}{dz}}=\epsilon ^{z};\quad {\frac {dz}{dx}}={\frac {2x}{3}};\quad {\frac {dy}{dx}}={\frac {2x}{3}}\,\epsilon ^{\frac {x^{2}}{3}}}$.

Or thus:

${\displaystyle \log _{\epsilon }y={\frac {x^{2}}{3}};\quad {\frac {1}{y}}\,{\frac {dy}{dx}}={\frac {2x}{3}};\quad {\frac {dy}{dx}}={\frac {2x}{3}}\,\epsilon ^{\frac {x^{2}}{3}}}$.

(3) ${\displaystyle y=\epsilon ^{\frac {2x}{x+1}}}$.

{\displaystyle {\begin{aligned}\log _{\epsilon }y&={\frac {2x}{x+1}},\quad {\frac {1}{y}}\,{\frac {dy}{dx}}={\frac {2(x+1)-2x}{(x+1)^{2}}};\\{\text{hence}}\quad {\frac {dy}{dx}}&={\frac {2}{(x+1)^{2}}}\,\epsilon ^{\frac {2x}{x+1}}\end{aligned}}}.

Check by writing ${\displaystyle {\dfrac {2x}{x+1}}=z}$.

(4) ${\displaystyle y=\epsilon ^{\sqrt {x^{2}+a}}}$. ${\displaystyle \log _{\epsilon }y=(x^{2}+a)^{\frac {1}{2}}}$.

${\displaystyle {\frac {1}{y}}\,{\frac {dy}{dx}}={\frac {x}{(x^{2}+a)^{\frac {1}{2}}}}\quad {\text{and}}\quad {\frac {dy}{dx}}={\frac {x\times \epsilon ^{\sqrt {x^{2}+a}}}{(x^{2}+a)^{\frac {1}{2}}}}}$.

(For if ${\displaystyle (x^{2}+a)^{\frac {1}{2}}=u}$ and ${\displaystyle x^{2}+a=v}$, ${\displaystyle u=v^{\frac {1}{2}}}$,

${\displaystyle {\frac {du}{dv}}={\frac {1}{2v^{\frac {1}{2}}}};\quad {\frac {dv}{dx}}=2x;\quad {\frac {du}{dx}}={\frac {x}{(x^{2}+a)^{\frac {1}{2}}}})}$.

Check by writing ${\displaystyle {\sqrt {x^{2}+a}}=z}$.

(5) ${\displaystyle y=\log(a+x^{3})}$. Let ${\displaystyle (a+x^{3})=z}$; then ${\displaystyle y=\log _{\epsilon }z}$.

${\displaystyle {\frac {dy}{dz}}={\frac {1}{z}};\quad {\frac {dz}{dx}}=3x^{2};\quad {\text{hence}}\quad {\frac {dy}{dx}}={\frac {3x^{2}}{a+x^{3}}}}$.

(6) ${\displaystyle y=\log _{\epsilon }\{{3x^{2}+{\sqrt {a+x^{2}}}}\}}$. Let ${\displaystyle 3x^{2}+{\sqrt {a+x^{2}}}=z}$; then ${\displaystyle y=\log _{\epsilon }z}$.

{\displaystyle {\begin{aligned}{\frac {dy}{dz}}&={\frac {1}{z}};\quad {\frac {dz}{dx}}=6x+{\frac {x}{\sqrt {x^{2}+a}}};\\{\frac {dy}{dx}}&={\frac {6x+{\dfrac {x}{\sqrt {x^{2}+a}}}}{3x^{2}+{\sqrt {a+x^{2}}}}}={\frac {x(1+6{\sqrt {x^{2}+a}})}{(3x^{2}+{\sqrt {x^{2}+a}}){\sqrt {x^{2}+a}}}}.\end{aligned}}}.

(7) ${\displaystyle y=(x+3)^{2}{\sqrt {x-2}}}$.

{\displaystyle {\begin{aligned}\log _{\epsilon }y&=2\log _{\epsilon }(x+3)+{\tfrac {1}{2}}\log _{\epsilon }(x-2).\\{\frac {1}{y}}\,{\frac {dy}{dx}}&={\frac {2}{(x+3)}}+{\frac {1}{2(x-2)}};\\{\frac {dy}{dx}}&=(x+3)^{2}{\sqrt {x-2}}\left\{{\frac {2}{x+3}}+{\frac {1}{2(x-2)}}\right\}.\end{aligned}}}

(8) ${\displaystyle y=(x^{2}+3)^{3}(x^{3}-2)^{\frac {2}{3}}}$.

{\displaystyle {\begin{aligned}\log _{\epsilon }y&=3\log _{\epsilon }(x^{2}+3)+{\tfrac {2}{3}}\log _{\epsilon }(x^{3}-2);\\{\frac {1}{y}}\,{\frac {dy}{dx}}&=3{\frac {2x}{(x^{2}+3)}}+{\frac {2}{3}}{\frac {3x^{2}}{x^{3}-2}}={\frac {6x}{x^{2}+3}}+{\frac {2x^{2}}{x^{3}-2}}.\end{aligned}}}

(For if ${\displaystyle u=\log _{\epsilon }(x^{2}+3)}$, let ${\displaystyle x^{2}+3=z}$; then ${\displaystyle u=\log _{\epsilon }z}$.

${\displaystyle {\frac {du}{dz}}={\frac {1}{z}};\quad {\frac {dz}{dx}}=2x;\quad {\frac {du}{dx}}={\frac {2x}{x^{2}+3}}.}$

Similarly, if ${\displaystyle v=\log _{\epsilon }(x^{3}-2)}$, ${\displaystyle {\dfrac {dv}{dx}}={\dfrac {3x^{2}}{x^{3}-2}}}$) and

${\displaystyle {\frac {dy}{dx}}=(x^{2}+3)^{3}(x^{3}-2)^{\frac {2}{3}}\left\{{\frac {6x}{x^{2}+3}}+{\frac {2x^{2}}{x^{3}-2}}\right\}.}$

(9) ${\displaystyle y={\dfrac {\sqrt[{2}]{x^{2}+a}}{\sqrt[{3}]{x^{3}-a}}}}$.

{\displaystyle {\begin{aligned}\log _{\epsilon }y&={\frac {1}{2}}\log _{\epsilon }(x^{2}+a)-{\frac {1}{3}}\log _{\epsilon }(x^{3}-a).\\{\frac {1}{y}}\,{\frac {dy}{dx}}&={\frac {1}{2}}\,{\frac {2x}{x^{2}+a}}-{\frac {1}{3}}\,{\frac {3x^{2}}{x^{3}-a}}={\frac {x}{x^{2}+a}}-{\frac {x^{2}}{x^{3}-a}}\\{\text{and}}\quad {\frac {dy}{dx}}&={\frac {\sqrt[{2}]{x^{2}+a}}{\sqrt[{3}]{x^{3}-a}}}\left\{{\frac {x}{x^{2}+a}}-{\frac {x^{2}}{x^{3}-a}}\right\}.\end{aligned}}}

(10) ${\displaystyle y={\dfrac {1}{\log _{\epsilon }x}}}$.

${\displaystyle {\frac {dy}{dx}}={\frac {\log _{\epsilon }x\times 0-1\times {\dfrac {1}{x}}}{\log _{\epsilon }^{2}x}}=-{\frac {1}{x\log _{\epsilon }^{2}x}}.}$

(11) ${\displaystyle y={\sqrt[{3}]{\log _{\epsilon }x}}=(\log _{\epsilon }x)^{\frac {1}{3}}}$. Let ${\displaystyle y=z^{\frac {1}{3}}}$.

${\displaystyle {\frac {dy}{dz}}={\frac {1}{3}}z^{-{\frac {2}{3}}};\quad {\frac {dz}{dx}}={\frac {1}{x}};\quad {\frac {dy}{dx}}={\frac {1}{3x{\sqrt[{3}]{\log _{\epsilon }^{2}x}}}}.}$

(12) ${\displaystyle y=\left({\dfrac {1}{a^{x}}}\right)^{ax}}$.

{\displaystyle {\begin{aligned}\log _{\epsilon }y&=ax(\log _{\epsilon }1-\log _{\epsilon }a^{x})=-ax\log _{\epsilon }a^{x}=-ax^{2}\log _{\epsilon }a.\\{\frac {1}{y}}\,{\frac {dy}{dx}}&=-2ax\log _{\epsilon }a;\\{\text{and}}\quad {\frac {dy}{dx}}&=-2ax\log _{\epsilon }a\left({\frac {1}{a^{x}}}\right)^{ax}.\end{aligned}}}

Try now the following exercises.

Exercises XII. (See page 260 for Answers.)

(1) Differentiate ${\displaystyle y=b(\epsilon ^{ax}-\epsilon ^{-ax})}$.

(2) Find the differential coefficient with respect to ${\displaystyle t}$ of the expression ${\displaystyle u=at^{2}+2\log _{\epsilon }t}$.

(3) If ${\displaystyle y=n^{t}}$, find ${\displaystyle {\dfrac {d(\log _{\epsilon }y)}{dt}}}$.

(4) Show that if ${\displaystyle y={\dfrac {1}{b}}\cdot {\dfrac {a^{bx}}{\log _{\epsilon }a}}}$ ; ${\displaystyle {\dfrac {dy}{dx}}=a^{bx}}$.

(5) If ${\displaystyle w=pv^{n}}$, find ${\displaystyle {\dfrac {dw}{dv}}}$.

Differentiate

(6) ${\displaystyle y=\log _{\epsilon }x^{n}}$.

(7) ${\displaystyle y=3\epsilon ^{-{\frac {x}{x-1}}}}$.

(8) ${\displaystyle y=(3x^{2}+1)\epsilon ^{-5x}}$.

(9) ${\displaystyle y=\log _{\epsilon }(x^{a}+a)}$.

(10) ${\displaystyle y=(3x^{2}-1)({\sqrt {x}}+1)}$.

(11) ${\displaystyle y={\dfrac {\log _{\epsilon }(x+3)}{x+3}}}$.

(12) ${\displaystyle y=a^{x}\times x^{a}}$.

(13) It was shown by Lord Kelvin that the speed of signalling through a submarine cable depends on the value of the ratio of the external diameter of the core to the diameter of the enclosed copper wire. If this ratio is called ${\displaystyle y}$, then the number of signals ${\displaystyle s}$ that can be sent per minute can be expressed by the formula

${\displaystyle s=ay^{2}\log _{\epsilon }{\frac {1}{y}}}$;

where ${\displaystyle a}$ is a constant depending on the length and the quality of the materials. Show that if these are given, ${\displaystyle s}$ will be a maximum if ${\displaystyle y=1\div {\sqrt {\epsilon }}}$.

(15) Differentiate ${\displaystyle y=\log _{\epsilon }(ax\epsilon ^{x})}$.

(16) Differentiate ${\displaystyle y=(\log _{\epsilon }ax)^{3}}$.
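The maximum asserted in exercise (13) can be confirmed numerically. In the Python sketch below, which is ours and not part of the text, the constant ${\displaystyle a}$ is set to ${\displaystyle 1}$ (its value does not shift the position of the maximum), and the interval ${\displaystyle 0<y<1}$ is simply sampled:

```python
import math

def s(y, a=1.0):
    # Kelvin's formula for the number of signals per minute
    return a * y ** 2 * math.log(1 / y)

y_star = 1 / math.sqrt(math.e)             # claimed maximiser, about 0.6065
ys = [i / 10000 for i in range(1, 10000)]  # sample points in 0 < y < 1
y_best = max(ys, key=s)                    # the sample giving the largest s
assert abs(y_best - y_star) < 1e-3
```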

The Logarithmic Curve.

Let us return to the curve which has its successive ordinates in geometrical progression, such as that represented by the equation ${\displaystyle y=bp^{x}}$.

We can see, by putting ${\displaystyle x=0}$, that ${\displaystyle b}$ is the initial height of ${\displaystyle y}$.

Then when

${\displaystyle x=1,\;y=bp;\;x=2,y=bp^{2};\;x=3,y=bp^{3},\;}$ etc.

Also, we see that ${\displaystyle p}$ is the numerical value of the ratio between the height of any ordinate and that of the next preceding it. In Fig. 40, we have taken ${\displaystyle p}$ as ${\displaystyle {\tfrac {6}{5}}}$; each ordinate being ${\displaystyle {\tfrac {6}{5}}}$ as high as the preceding one.

 Fig. 40. Fig. 41.

If two successive ordinates are related together thus in a constant ratio, their logarithms will have a constant difference; so that, if we should plot out a new curve, Fig. 41, with values of ${\displaystyle \log _{\epsilon }y}$ as ordinates, it would be a straight line sloping up by equal steps. In fact, it follows from the equation, that

{\displaystyle {\begin{aligned}\log _{\epsilon }y&=\log _{\epsilon }b+x\cdot \log _{\epsilon }p,\\{\text{whence }}\quad \quad \quad \log _{\epsilon }y&-\log _{\epsilon }b=x\cdot \log _{\epsilon }p.\end{aligned}}}

Now, since ${\displaystyle \log _{\epsilon }p}$ is a mere number, and may be written as ${\displaystyle \log _{\epsilon }p=a}$, it follows that

${\displaystyle \log _{\epsilon }{\frac {y}{b}}=ax}$,

and the equation takes the new form

${\displaystyle y=b\epsilon ^{ax}}$.
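The statement that the logarithms rise by equal steps is easily seen in numbers. The Python sketch below is our own illustration; it takes ${\displaystyle b=1}$ and ${\displaystyle p={\tfrac {6}{5}}}$ (the ratio used in Fig. 40) and verifies that successive values of ${\displaystyle \log _{\epsilon }y}$ differ by the constant ${\displaystyle a=\log _{\epsilon }p}$:

```python
import math

b, p = 1.0, 6 / 5
# natural logarithms of the ordinates at x = 0, 1, 2, ..., 5
logs = [math.log(b * p ** x) for x in range(6)]
# differences between successive logarithms
steps = [logs[i + 1] - logs[i] for i in range(5)]
# every step equals the constant a = log_e(6/5)
a = math.log(p)
assert all(abs(step - a) < 1e-12 for step in steps)
```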

The Die-away Curve.

If we were to take ${\displaystyle p}$ as a proper fraction (less than unity), the curve would obviously tend to sink downwards, as in Fig. 42, where each successive ordinate is ${\displaystyle {\tfrac {3}{4}}}$ of the height of the preceding one.

The equation is still

${\displaystyle y=bp^{x}}$;

Fig. 42.

but since ${\displaystyle p}$ is less than one, ${\displaystyle \log _{\epsilon }p}$ will be a negative quantity, and may be written ${\displaystyle -a}$; so that ${\displaystyle p=\epsilon ^{-a}}$, and now our equation for the curve takes the form

${\displaystyle y=b\epsilon ^{-ax}}$.

The importance of this expression is that, in the case where the independent variable is time, the equation represents the course of a great many physical processes in which something is gradually dying away. Thus, the cooling of a hot body is represented (in Newton’s celebrated “law of cooling”) by the equation

${\displaystyle \theta _{t}=\theta _{0}\epsilon ^{-at}}$;

where ${\displaystyle \theta _{0}}$ is the original excess of temperature of a hot body over that of its surroundings, ${\displaystyle \theta _{t}}$ the excess of temperature at the end of time ${\displaystyle t}$, and ${\displaystyle a}$ is a constant—namely, the constant of decrement, depending on the amount of surface exposed by the body, and on its coefficients of conductivity and emissivity, etc.

A similar formula,

${\displaystyle Q_{t}=Q_{0}\epsilon ^{-at}}$,

is used to express the charge of an electrified body, originally having a charge ${\displaystyle Q_{0}}$, which is leaking away with a constant of decrement ${\displaystyle a}$; which constant depends in this case on the capacity of the body and on the resistance of the leakage-path.

Oscillations given to a flexible spring die out after a time; and the dying-out of the amplitude of the motion may be expressed in a similar way.

In fact ${\displaystyle \epsilon ^{-at}}$ serves as a die-away factor for all those phenomena in which the rate of decrease is proportional to the magnitude of that which is decreasing; or where, in our usual symbols, ${\displaystyle {\dfrac {dy}{dt}}}$ is proportional at every moment to the value that ${\displaystyle y}$ has at that moment. For we have only to inspect the curve, Fig. 42 above, to see that, at every part of it, the slope ${\displaystyle {\dfrac {dy}{dx}}}$ is proportional to the height ${\displaystyle y}$; the curve becoming flatter as ${\displaystyle y}$ grows smaller. In symbols, thus

${\displaystyle y=b\epsilon ^{-ax}}$

or ${\displaystyle \log _{\epsilon }y=\log _{\epsilon }b-ax\log _{\epsilon }\epsilon =\log _{\epsilon }b-ax}$,

and, differentiating, ${\displaystyle {\frac {1}{y}}\,{\frac {dy}{dx}}=-a}$;

hence ${\displaystyle {\frac {dy}{dx}}=b\epsilon ^{-ax}\times (-a)=-ay}$; or, in words, the slope of the curve is downward, and proportional to ${\displaystyle y}$ and to the constant ${\displaystyle a}$.

We should have got the same result if we had taken the equation in the form

{\displaystyle {\begin{aligned}y&=bp^{x};\\{\text{for then}}\quad \quad \quad {\frac {dy}{dx}}&=bp^{x}\times \log _{\epsilon }p.\\{\text{But}}\quad \quad \quad \log _{\epsilon }p&=-a;\\{\text{giving us}}\quad \quad \quad {\frac {dy}{dx}}&=y\times (-a)=-ay,\\{\text{as before.}}\quad \quad \quad \quad \;\end{aligned}}}
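The property ${\displaystyle {\dfrac {dy}{dx}}=-ay}$ can likewise be checked numerically. In this Python sketch of ours, ${\displaystyle b=5}$ and ${\displaystyle a=0.7}$ are arbitrary sample values:

```python
import math

b, a = 5.0, 0.7

def y(x):
    # the die-away curve y = b * e**(-a*x)
    return b * math.exp(-a * x)

def slope(x, h=1e-6):
    # central-difference estimate of dy/dx
    return (y(x + h) - y(x - h)) / (2 * h)

# at every sampled point the slope equals -a times the height
for x in [0.0, 1.0, 2.5, 4.0]:
    assert abs(slope(x) - (-a * y(x))) < 1e-6
```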

The Time-constant. In the expression for the “die-away factor” ${\displaystyle \epsilon ^{-at}}$, the quantity ${\displaystyle a}$ is the reciprocal of another quantity known as “the time-constant,” which we may denote by the symbol ${\displaystyle T}$. Then the die-away factor will be written ${\displaystyle \epsilon ^{-{\frac {t}{T}}}}$; and it will be seen, by making ${\displaystyle t=T}$, that the meaning of ${\displaystyle T}$ (or of ${\displaystyle {\dfrac {1}{a}}}$) is that this is the length of time which it takes for the original quantity (called ${\displaystyle \theta _{0}}$ or ${\displaystyle Q_{0}}$ in the preceding instances) to die away to the ${\displaystyle {\dfrac {1}{\epsilon }}}$th part—that is, to ${\displaystyle 0.3679}$—of its original value.
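A one-line computation confirms this: putting ${\displaystyle t=T}$ makes the die-away factor equal to ${\displaystyle {\dfrac {1}{\epsilon }}}$, or about ${\displaystyle 0.3679}$, whatever ${\displaystyle T}$ may be. In Python (the value ${\displaystyle T=20}$ below is an arbitrary sample of our own):

```python
import math

T = 20.0                 # any time-constant will do
t = T                    # wait exactly one time-constant
factor = math.exp(-t / T)
# the factor is exactly 1/e, about 0.3679, independently of T
assert abs(factor - 1 / math.e) < 1e-12
assert abs(factor - 0.3679) < 1e-4
```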

The values of ${\displaystyle \epsilon ^{x}}$ and ${\displaystyle \epsilon ^{-x}}$ are continually required in different branches of physics, and as they are given in very few sets of mathematical tables, some of the values are tabulated here for convenience.

${\displaystyle x}$ ${\displaystyle \epsilon ^{x}}$ ${\displaystyle \epsilon ^{-x}}$ ${\displaystyle 1-\epsilon ^{-x}}$
0 1.0000 1.0000 0.0000
0.10 1.1052 0.9048 0.0952
0.20 1.2214 0.8187 0.1813
0.50 1.6487 0.6065 0.3935
0.75 2.1170 0.4724 0.5276
0.90 2.4596 0.4066 0.5934
1.00 2.7183 0.3679 0.6321
1.10 3.0042 0.3329 0.6671
1.20 3.3201 0.3012 0.6988
1.25 3.4903 0.2865 0.7135
1.50 4.4817 0.2231 0.7769
1.75 5.755 0.1738 0.8262
2.00 7.389 0.1353 0.8647
2.50 12.182 0.0821 0.9179
3.00 20.086 0.0498 0.9502
3.50 33.115 0.0302 0.9698
4.00 54.598 0.0183 0.9817
4.50 90.017 0.0111 0.9889
5.00 148.41 0.0067 0.9933
5.50 244.69 0.0041 0.9959
6.00 403.43 0.00248 0.99752
7.50 1808.04 0.00055 0.99945
10.00 22026.5 0.000045 0.999955

As an example of the use of this table, suppose there is a hot body cooling, and that at the beginning of the experiment (i.e. when ${\displaystyle t=0}$) it is ${\displaystyle 72^{\circ }}$ hotter than the surrounding objects, and if the time-constant of its cooling is ${\displaystyle 20}$ minutes (that is, if it takes ${\displaystyle 20}$ minutes for its excess of temperature to fall to ${\displaystyle {\dfrac {1}{\epsilon }}}$ part of ${\displaystyle 72^{\circ }}$), then we can calculate to what it will have fallen in any given time ${\displaystyle t}$. For instance, let ${\displaystyle t}$ be ${\displaystyle 60}$ minutes. Then ${\displaystyle {\dfrac {t}{T}}=60\div 20=3}$, and we shall have to find the value of ${\displaystyle \epsilon ^{-3}}$, and then multiply the original ${\displaystyle 72^{\circ }}$ by this. The table shows that ${\displaystyle \epsilon ^{-3}}$ is ${\displaystyle 0.0498}$. So that at the end of ${\displaystyle 60}$ minutes the excess of temperature will have fallen to ${\displaystyle 72^{\circ }\times 0.0498=3.586^{\circ }}$.
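The same calculation may be done with the exponential function directly, instead of the table; the following Python lines are our own restatement of the worked figures:

```python
import math

# cooling example: initial excess 72 degrees, time-constant T = 20 minutes,
# elapsed time t = 60 minutes, so t/T = 3 and the excess falls to 72 * e**(-3)
theta_0, T, t = 72.0, 20.0, 60.0
excess = theta_0 * math.exp(-t / T)
# agrees with the table-based value 72 * 0.0498 = 3.586 to within a hundredth
assert abs(excess - 3.586) < 0.01
```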

Further Examples.

(1) The strength of an electric current in a conductor at a time ${\displaystyle t}$ secs. after the application of the electromotive force producing it is given by the expression ${\displaystyle C={\dfrac {E}{R}}\left\{1-\epsilon ^{-{\frac {Rt}{L}}}\right\}}$.

The time constant is ${\displaystyle {\dfrac {L}{R}}}$.

If ${\displaystyle E=10}$, ${\displaystyle R=1}$, ${\displaystyle L=0.01}$; then when ${\displaystyle t}$ is very large the term ${\displaystyle \epsilon ^{-{\frac {Rt}{L}}}}$ becomes ${\displaystyle 0}$, so that ${\displaystyle C={\dfrac {E}{R}}=10}$; also

${\displaystyle {\frac {L}{R}}=T=0.01}$.

Its value at any time may be written:

${\displaystyle C=10-10\epsilon ^{-{\frac {t}{0.01}}}}$,

the time-constant being ${\displaystyle 0.01}$. This means that it takes ${\displaystyle 0.01}$ sec. for the variable term to fall to ${\displaystyle {\dfrac {1}{\epsilon }}=0.3679}$ of its initial value ${\displaystyle 10\epsilon ^{-{\frac {0}{0.01}}}=10}$.

To find the value of the current when ${\displaystyle t=0.001}$ sec., say, ${\displaystyle {\dfrac {t}{T}}=0.1}$, ${\displaystyle \epsilon ^{-0.1}=0.9048}$ (from table).

It follows that, after ${\displaystyle 0.001}$ sec., the variable term is ${\displaystyle 0.9048\times 10=9.048}$, and the actual current is ${\displaystyle 10-9.048=0.952}$.

Similarly, at the end of ${\displaystyle 0.1}$ sec.,

${\displaystyle {\frac {t}{T}}=10;\quad \epsilon ^{-10}=0.000045}$;

the variable term is ${\displaystyle 10\times 0.000045=0.00045}$, the current being ${\displaystyle 9.9995}$.

(2) The intensity ${\displaystyle I}$ of a beam of light which has passed through a thickness ${\displaystyle l}$ cm. of some transparent medium is ${\displaystyle I=I_{0}\epsilon ^{-Kl}}$, where ${\displaystyle I_{0}}$ is the initial intensity of the beam and ${\displaystyle K}$ is a “constant of absorption.”

This constant is usually found by experiments. If it be found, for instance, that a beam of light has its intensity diminished by ${\displaystyle 18\%}$ in passing through 10 cms. of a certain transparent medium, this means that ${\displaystyle 82=100\times \epsilon ^{-K\times 10}}$ or ${\displaystyle \epsilon ^{-10K}=0.82}$, and from the table one sees that ${\displaystyle 10K=0.20}$ very nearly; hence ${\displaystyle K=0.02}$.

To find the thickness that will reduce the intensity to half its value, one must find the value of ${\displaystyle l}$ which satisfies the equality ${\displaystyle 50=100\times \epsilon ^{-0.02l}}$, or ${\displaystyle 0.5=\epsilon ^{-0.02l}}$. It is found by putting this equation in its logarithmic form, namely,

${\displaystyle \log 0.5=-0.02\times l\times \log \epsilon }$,

which gives

${\displaystyle l={\frac {-0.3010}{-0.02\times 0.4343}}=34.7}$ centimetres nearly.
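The whole of example (2) can be retraced in a few lines of Python (our own sketch). Note that computing ${\displaystyle K}$ exactly from ${\displaystyle \epsilon ^{-10K}=0.82}$ gives about ${\displaystyle 0.0198}$ rather than the table's rounded ${\displaystyle 0.02}$, so the half-intensity thickness comes out near ${\displaystyle 35}$ cm rather than ${\displaystyle 34.7}$:

```python
import math

# 18 per cent. absorbed in 10 cm means e**(-10K) = 0.82
K = -math.log(0.82) / 10       # about 0.0198; rounding via the table gives 0.02
# thickness reducing the intensity to one half: 0.5 = e**(-K*l)
l_half = math.log(2) / K
assert abs(K - 0.02) < 0.001
assert abs(l_half - 34.7) < 0.5  # text's value, computed with K = 0.02
```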

(3) The quantity ${\displaystyle Q}$ of a radio-active substance which has not yet undergone transformation is known to be related to the initial quantity ${\displaystyle Q_{0}}$ of the substance by the relation ${\displaystyle Q=Q_{0}\epsilon ^{-\lambda t}}$, where ${\displaystyle \lambda }$ is a constant and ${\displaystyle t}$ the time in seconds elapsed since the transformation began.

For “Radium ${\displaystyle A}$,” if time is expressed in seconds, experiment shows that ${\displaystyle \lambda =3.85\times 10^{-3}}$. Find the time required for transforming half the substance. (This time is called the “mean life” of the substance.)

We have ${\displaystyle 0.5=\epsilon ^{-0.00385t}}$.

${\displaystyle \log 0.5=-0.00385t\times \log \epsilon ;}$
and ${\displaystyle \quad \quad \quad t=3}$ minutes very nearly.
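The arithmetic of the “Radium ${\displaystyle A}$” example, done in the same style (again a sketch of ours, using natural logarithms directly):

```python
import math

# with lambda = 3.85e-3 per second, half the substance remains when
# 0.5 = e**(-lambda * t), i.e. t = log_e(2) / lambda
lam = 3.85e-3
t = math.log(2) / lam          # time in seconds, about 180
assert abs(t / 60 - 3) < 0.05  # very nearly 3 minutes
```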

Exercises XIII. (See page 260 for Answers.)

(1) Draw the curve ${\displaystyle y=b\epsilon ^{-{\frac {t}{T}}}}$; where ${\displaystyle b=12}$, ${\displaystyle T=8}$, and ${\displaystyle t}$ is given various values from ${\displaystyle 0}$ to ${\displaystyle 20}$.

(2) If a hot body cools so that in ${\displaystyle 24}$ minutes its excess of temperature has fallen to half the initial amount, deduce the time-constant, and find how long it will be in cooling down to ${\displaystyle 1}$ per cent. of the original excess.

(3) Plot the curve ${\displaystyle y=100(1-\epsilon ^{-2t})}$.

(4) The following equations give very similar curves:

{\displaystyle {\begin{aligned}{\text{(i)}}\ y&={\frac {ax}{x+b}};\\{\text{(ii)}}\ y&=a(1-\epsilon ^{-{\frac {x}{b}}});\\{\text{(iii)}}\ y&={\frac {a}{90^{\circ }}}\arctan \left({\frac {x}{b}}\right).\end{aligned}}}

Draw all three curves, taking ${\displaystyle a=100}$ millimetres; ${\displaystyle b=30}$ millimetres.

(5) Find the differential coefficient of ${\displaystyle y}$ with respect to ${\displaystyle x}$, if

(a) ${\displaystyle \;y=x^{x}}$; (b) ${\displaystyle \;y=(\epsilon ^{x})^{x}}$; (c) ${\displaystyle \;y=\epsilon ^{x^{x}}}$.

(6) For “Thorium ${\displaystyle A}$,” the value of ${\displaystyle \lambda }$ is ${\displaystyle 5}$; find the “mean life,” that is, the time taken by the transformation of a quantity ${\displaystyle Q}$ of “Thorium ${\displaystyle A}$” equal to half the initial quantity ${\displaystyle Q_{0}}$ in the expression

${\displaystyle Q=Q_{0}\epsilon ^{-\lambda t}}$;

${\displaystyle t}$ being in seconds.

(7) A condenser of capacity ${\displaystyle K=4\times 10^{-6}}$, charged to a potential ${\displaystyle V_{0}=20}$, is discharging through a resistance of ${\displaystyle 10,000}$ ohms. Find the potential ${\displaystyle V}$ after (a) ${\displaystyle 0.1}$ second; (b) ${\displaystyle 0.01}$ second; assuming that the fall of potential follows the rule ${\displaystyle V=V_{0}\epsilon ^{-{\frac {t}{KR}}}}$.

(8) The charge ${\displaystyle Q}$ of an electrified insulated metal sphere is reduced from ${\displaystyle 20}$ to ${\displaystyle 16}$ units in ${\displaystyle 10}$ minutes. Find the coefficient ${\displaystyle \mu }$ of leakage, if ${\displaystyle Q=Q_{0}\times \epsilon ^{-\mu t}}$; ${\displaystyle Q_{0}}$ being the initial charge and ${\displaystyle t}$ being in seconds. Hence find the time taken by half the charge to leak away.

(9) The damping on a telephone line can be ascertained from the relation ${\displaystyle i=i_{0}\epsilon ^{-\beta l}}$, where ${\displaystyle i}$ is the strength of a telephonic current of initial strength ${\displaystyle i_{0}}$ after it has travelled a length ${\displaystyle l}$ of the line in kilometres, and ${\displaystyle \beta }$ is a constant. For the Franco-English submarine cable laid in ${\displaystyle 1910}$, ${\displaystyle \beta =0.0114}$. Find the damping at the end of the cable (${\displaystyle 40}$ kilometres), and the length along which ${\displaystyle i}$ is still ${\displaystyle 8\%}$ of the original current (limiting value of very good audition).

(10) The pressure ${\displaystyle p}$ of the atmosphere at an altitude ${\displaystyle h}$ kilometres is given by ${\displaystyle p=p_{0}\epsilon ^{-kh}}$; ${\displaystyle p_{0}}$ being the pressure at sea-level (${\displaystyle 760}$ millimetres).

The pressures at ${\displaystyle 10}$, ${\displaystyle 20}$ and ${\displaystyle 50}$ kilometres being ${\displaystyle 199.2}$, ${\displaystyle 42.2}$, ${\displaystyle 0.32}$ respectively, find ${\displaystyle k}$ in each case. Using the mean value of ${\displaystyle k}$, find the percentage error in each case.

(11) Find the minimum or maximum of ${\displaystyle y=x^{x}}$.

(12) Find the minimum or maximum of ${\displaystyle y=x^{\frac {1}{x}}}$.

(13) Find the minimum or maximum of ${\displaystyle y=xa^{\frac {1}{x}}}$.