
Section I.—The Law of Error.

98. (1) The Normal Law of Error.—The simplest and best recognized statement of the law of error, often called the “normal law,” is the equation

z = \frac{1}{c\sqrt{\pi}}\,e^{-(x-a)^{2}/c^{2}},

more conveniently written z = \frac{1}{c\sqrt{\pi}}\,e^{-x^{2}/c^{2}} (the mean being taken as origin), where x is the magnitude of an observation or “statistic,” z is the proportional frequency of observations measuring x, a is the arithmetic mean of the group (supposed indefinitely[1] multiplied) of similar statistics, and c is a constant sometimes called the “modulus”[2] proper to the group; and the equation signifies that if any large number N of such a group is taken at random, the number of observations between x and x + ∆x is (approximately) equal to the right-hand side of the equation multiplied by N∆x. A graphical representation of the corresponding curve—sometimes called the “probability-curve”—is here given (fig. 10), showing the general shape of the curve, and how its dimensions vary with the magnitude of the modulus c. The area being constant (viz. unity), the curve is furled up when c is small, spread out when c is large. There is added a table of integrals, corresponding to areas subtended by the curve, in a form suited to calculations of probability, the variable τ being the length of the abscissa referred to (divided by) the modulus.[3] It may be noted that the points of inflexion in the figure are each at a distance from the origin of 1/√2 modulus, a distance equal to the square root of the mean square of error—often called the “standard deviation.” Another notable value of the abscissa is that which divides the area on either side of the origin into two equal parts; it is commonly called the “probable error.” The value of τ which corresponds to this point is 0.4769. . . .

Fig. 10.
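The numerical relations just stated can be verified with a short Python sketch (no such computation appears in the original article; the names are illustrative). It uses math.erf, which is exactly the probability integral I = (2/√π)∫₀^τ e^(−t²) dt tabulated below, to recover the standard deviation c/√2 and the probable error 0.4769 c.

import math

def normal_law(x, a, c):
    # Frequency z for the normal law with arithmetic mean a and modulus c.
    return math.exp(-((x - a) / c) ** 2) / (c * math.sqrt(math.pi))

c = 1.0                          # an arbitrary modulus
sigma = c / math.sqrt(2.0)       # standard deviation; also the abscissa of the points of inflexion

# The probable error is the abscissa tau*c for which the probability integral
# I(tau) = (2/sqrt(pi)) * integral_0^tau exp(-t^2) dt equals one half.
# math.erf(tau) is precisely this integral, so solve erf(tau) = 0.5 by bisection.
lo, hi = 0.0, 1.0
for _ in range(60):
    mid = 0.5 * (lo + hi)
    if math.erf(mid) < 0.5:
        lo = mid
    else:
        hi = mid

print(round(sigma, 4))             # 0.7071 = c/sqrt(2)
print(round(0.5 * (lo + hi), 4))   # 0.4769, the constant quoted in the text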

99. An a priori proof of this law was given by Herschel[4] as follows: “The probability of an error depends solely on its magnitude and not on its direction;” positive and negative errors are equally probable. “Suppose a ball dropped from a given height with the intention that it should fall on a given mark,” errors in all directions are equally probable, and errors in perpendicular directions are independent. Accordingly the required law, “which must necessarily be general and apply alike in all cases, since the causes of error are supposed alike unknown,”[5] is for one dimension of the form φ(x²), for two dimensions φ(x² + y²); and φ(x² + y²) ≡ φ(x²) × φ(y²); a functional equation of which the solution is the function above written. A reason which satisfied Herschel is entitled to attention, especially if it is endorsed by Thomson and Tait.[6] But it must be confessed that the claim to universality is not, without some strain of interpretation,[7] to be reconciled with common experience.
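The solution of the functional equation, which the text leaves to the reader, may be sketched as follows (a standard derivation, not reproduced from the article):

\[
\varphi(x^{2})\,\varphi(y^{2}) = \varphi(x^{2}+y^{2})
\;\Longrightarrow\;
\log\varphi(u)+\log\varphi(v)=\log\varphi(u+v),
\]
so that \(\log\varphi\) is additive and, for any continuous solution, \(\varphi(u)\propto e^{ku}\). For the total frequency \(\int_{-\infty}^{\infty}\varphi(x^{2})\,dx\) to be finite, \(k\) must be negative, say \(k=-1/c^{2}\); and fixing the constant factor by the condition that the total frequency be unity gives
\[
z=\frac{1}{c\sqrt{\pi}}\,e^{-x^{2}/c^{2}},
\]
the normal law written above (the constant being determined by normalization rather than by the functional equation itself).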

Table of the Values of the Integral I = \frac{2}{\sqrt{\pi}}\int_{0}^{\tau} e^{-t^{2}}\,dt.

  τ        I
 0.00   0.00000
 0.01   0.01128
 0.02   0.02256
 0.03   0.03384
 0.04   0.04511
 0.05   0.05637
 0.06   0.06762
 0.07   0.07886
 0.08   0.09008
 0.09   0.10128
 0.10   0.11246
 0.20   0.22270
 0.30   0.32863
 0.40   0.42839
 0.50   0.52050
 0.60   0.60386
 0.70   0.67780
 0.80   0.74210
 0.90   0.79691
 1.00   0.84270
 1.10   0.88020
 1.20   0.91031
 1.30   0.93401
 1.40   0.95229
 1.50   0.96611
 1.60   0.97635
 1.70   0.98379
 1.80   0.98909
 1.90   0.99279
 2.00   0.99532
 2.10   0.99702
 2.20   0.99814
 2.30   0.99886
 2.40   0.99931
 2.50   0.99959
 2.60   0.99976
 2.70   0.99986
 2.80   0.99992
 2.90   0.99996
 3.00   0.99998
  ∞     1.00000
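As a check on the transcription (not part of the original), the tabulated values may be regenerated in Python, math.erf being exactly the integral I defined above:

import math

for tau in (0.01, 0.05, 0.1, 0.5, 1.0, 2.0, 3.0):
    # math.erf(tau) = (2/sqrt(pi)) * integral from 0 to tau of exp(-t^2) dt
    print(f"{tau:4.2f}  {math.erf(tau):.5f}")
# Prints 0.01128, 0.05637, 0.11246, 0.52050, 0.84270, 0.99532, 0.99998,
# agreeing with the corresponding entries of the table.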

100. There is, however, one class of phenomena to which Herschel's reasoning applies without reservation. In a “molecular chaos,” such as the received kinetic theory of gases postulates, if a molecule be placed at rest at a given point and the distance which it travels from that point in a given time, driven hither and thither by colliding molecules, is regarded as an “error,” it may be presumed that errors in all directions are equally probable and errors in perpendicular directions are independent. It is remarkable that a similar presumption with respect to the velocities of the molecules was employed by Clerk Maxwell, in his first approach to the theory of molecular motion, to establish the law of error in that region.
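A rough simulation may make the presumption concrete (an illustrative sketch only; the step law and the parameters are arbitrary assumptions, not taken from the text): a particle receives many unit impulses in uniformly random directions, and the x-component of its displacement shows mean near zero and variance near n/2, as a normal law of "error" in each coordinate would require.

import math, random

random.seed(1)
STEPS, TRIALS = 200, 5000

xs = []
for _ in range(TRIALS):
    x = y = 0.0
    for _ in range(STEPS):
        theta = random.uniform(0.0, 2.0 * math.pi)  # all directions equally probable
        x += math.cos(theta)
        y += math.sin(theta)
    xs.append(x)

mean = sum(xs) / TRIALS
var = sum((v - mean) ** 2 for v in xs) / TRIALS
print(round(mean, 2), round(var / (STEPS / 2.0), 2))  # mean near 0, ratio near 1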

101. The Laplace-Quetelet Hypothesis.—That presumption has, indeed, not received general assent; and the law of error appears to be better rested on a proof which was originated by Laplace. According to this view, the normal law of error is a first approximation to the frequency with which different values are apt to be assumed by a variable magnitude dependent on a great number of independent variables, each of which assumes different values in random fashion over a limited range, according to a law of error, not in general the normal law, nor in general the same for each variable. The normal law prevails in nature because it often happens—in the world of atoms, in organic and in social life—that things depend on a number of independent agencies. Laplace, indeed, appears to have applied the mathematical principle on which this explanation depends only to examples (of the law of error) artificially generated by the process of taking averages. The merit of accounting for the prevalence of the law in rerum natura belongs rather to Quetelet. He, however, employed too simple a formula[8] for the action of the causes. The hypothesis seems first to have been stated in all its generality both of mathematical theory and statistical exemplification by Glaisher.[9]
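The hypothesis can be illustrated numerically (a sketch under assumed component laws, none of which is normal and none of which appears in the article): an aggregate of thirty independent elements of unlike frequency-laws is sampled repeatedly, and about half of the aggregates fall within 0.6745 of the standard deviation (0.4769 of the modulus) of the mean, as the normal law requires of the probable error.

import random, statistics

random.seed(2)
N_ELEMENTS, TRIALS = 30, 20000

def one_aggregate():
    # Thirty independent elements, each obeying its own non-normal law of frequency.
    total = 0.0
    for i in range(N_ELEMENTS):
        if i % 3 == 0:
            total += random.uniform(-1.0, 1.0)           # rectangular law
        elif i % 3 == 1:
            total += random.choice((-1.0, 1.0))          # two-valued law
        else:
            total += random.triangular(-1.0, 1.0, 0.0)   # triangular law
    return total

samples = [one_aggregate() for _ in range(TRIALS)]
mean = statistics.fmean(samples)
sd = statistics.pstdev(samples)

# For the normal law half of all observations lie within 0.6745 standard
# deviations of the mean (the "probable error" of par. 98).
within = sum(abs(s - mean) <= 0.6745 * sd for s in samples) / TRIALS
print(round(within, 3))   # close to 0.500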

102. The validity of the explanation may best be tested by first (A) deducing the law of error from the condition of numerous independent causes; and (B) showing that the law is adequately fulfilled in a variety of concrete cases, in which the condition is probably present. The condition may be supposed to be perfectly fulfilled in games of chance, or, more generally, sortitions, characterized by the circumstance that we have a knowledge prior to specific experience of the proportion of what Laplace calls favourable cases[10] to all cases—a category which includes, for instance, the distribution of digits obtained by random extracts from mathematical tables, as well as the distribution of the numbers of points on dominoes.

103. The genesis of the law of error is most clearly illustrated by the simplest sort of “game,” that in which the sortition is between two alternatives, heads or tails, hearts or not-hearts, or, generally, success or failure, the probability of a success being p and that of a failure q, where p + q = 1. The number of such successes in the course of n trials may be considered as an aggregate made up of n independently varying elements, each of which assumes the values 0 or 1 with respective frequency q and p. The frequency of each value of the
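A numerical sketch of this genesis (not from the original; n and p are chosen arbitrarily) compares the exact binomial frequencies with the normal law whose mean is np and whose modulus is c = √(2npq), the modulus corresponding to the standard deviation √(npq) of the binomial.

import math

n, p = 100, 0.3
q = 1.0 - p
c = math.sqrt(2.0 * n * p * q)   # modulus corresponding to standard deviation sqrt(npq)

def binomial(k):
    # Exact frequency of k successes in n trials.
    return math.comb(n, k) * p ** k * q ** (n - k)

def normal(k):
    # Normal law with mean np and modulus c, evaluated at k.
    return math.exp(-((k - n * p) / c) ** 2) / (c * math.sqrt(math.pi))

for k in (20, 25, 30, 35, 40):
    print(k, round(binomial(k), 5), round(normal(k), 5))
# Near the mean np = 30 the two columns agree to about three decimal places.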


  1. On this conception see below, par. 122.
  2. E.g. in the article on “Probability” in the 9th ed. of the Ency. Brit.; also by Airy and other authorities. Bravais, in his article “Sur la probabilité des erreurs . . . ,” Mémoires présentés par divers savants (1846), p. 257, takes as the “modulus or parameter” the inverse square of our c. Doubtless different parameters are suited to different purposes and contexts; c when we consult the common tables, and in connexion with the operator, as below, par. 160; k (= ½c²) when we investigate the formation of the probability-curve out of independent elements (below, par. 104); h (= 1/c²) when we are concerned with weights or precisions (below, par. 134). If one form of the coefficient must be uniformly adhered to, probably σ (= c/√2), for which Professor Pearson expresses a preference, appears the best. It is called by him the “standard deviation.”
  3. Fuller tables are to be found in many accessible treatises. Burgess's tables in the Trans. of the Edin. Roy. Soc. for 1900 are carried to a high degree of accuracy. Thorndike, in his Mental and Social Measurements, gives, among other useful tables, one referred to the standard deviation as the argument. New tables of the probability integral are given by W. F. Sheppard, Biometrika, ii. 174 seq.
  4. Edinburgh Review (1850), xcii. 19.
  5. The italics are in the original. The passage continues: “And it is on this ignorance, and not on any peculiarity in cases, that the idea of probability in the abstract is formed.” Cf. above, par. 6.
  6. Natural Philosophy, pt. i. art. 391. For other a priori proofs see Czuber, Theorie der Beobachtungsfehler, th. i.
  7. Cf. note to par. 127.
  8. He considered the effect as the sum of causes each of which obeys the simplest law of frequency, the symmetrical binomial.
  9. Memoirs of Astronomical Society (1878), p. 105. Cf. Morgan Crofton, “On the Law of Errors of Observation,” Trans. Roy. Soc. (1870), vol. clx. pt. i. p. 178.
  10. Above, par. 2.