several magnitudes, each of which has been determined by a set of observations. Suppose A and B two such magnitudes, and X their sum; to find the law of error in X = A + B. Let the functions of error for A and B be

$$c^{-1}\pi^{-\frac{1}{2}}e^{-x^2c^{-2}}dx,\qquad f^{-1}\pi^{-\frac{1}{2}}e^{-x^2f^{-2}}dx.$$

In formula (49) let m = 0, i = 0, 2h = c²; then the function for A is the law for the sum of a number of errors (37) the sum of whose mean squares is ½c²; likewise that for B is the law for the sum of a number the sum of whose mean squares is ½f². The same formula (49) shows us that the law for the sum of these two series of errors, that is, for the sum of the errors of A and B, is

$$\frac{1}{\sqrt{\pi(c^2+f^2)}}\,e^{-\frac{x^2}{c^2+f^2}}dx;$$

that is, the modulus for X or A + B is $\sqrt{c^2+f^2}$. Hence

$$\text{Probable error of }X=\cdot4769\sqrt{c^2+f^2}\qquad\ldots\ (58);$$

$$\therefore\ (\text{p.e. of }X)^2=(\text{p.e. of }A)^2+(\text{p.e. of }B)^2.$$

So likewise for the mean error. If X were the difference A − B, (58) still holds. If X be the sum of m magnitudes A, B, C . . . instead of two, its probable error is in like manner given by

$$(\text{p.e. }X)^2=(\text{p.e. }A)^2+(\text{p.e. }B)^2+\&c.;$$

and if the function of error for A, B, C . . . be the same for all,

$$(\text{p.e. }X)^2=m(\text{p.e. }A)^2.$$

Also the probable error in the mean is the mth part of the above;

$$\therefore\ \text{p.e. of }M(A)=m^{-\frac{1}{2}}(\text{p.e. of }A)\qquad\ldots\ (59).$$

Airy gives the following example. The co-latitude of a place is found by observing m times the Z.D. of a star at its upper culmination and n times its Z.D. at its lower culmination; to find the probable error. By (59)

$$\text{p.e. upper Z.D.}=m^{-\frac{1}{2}}(\text{p.e. of an upper obs.});\qquad \text{p.e. lower Z.D.}=n^{-\frac{1}{2}}(\text{p.e. of a lower obs.}).$$

Now co-latitude = ½(U.Z.D. + L.Z.D.). Hence, by (58),

$$(\text{p.e. co-lat.})^2=\tfrac{1}{4}m^{-1}(\text{p.e. up. obs.})^2+\tfrac{1}{4}n^{-1}(\text{p.e. low. obs.})^2.$$

If the upper Z.D. observations are equally good with the lower,

$$\text{p.e. co-lat.}=\tfrac{1}{2}(\text{p.e. an obs.})\sqrt{m^{-1}+n^{-1}}.$$

57. The magnitude to be found is often not observed directly, but another magnitude of which it is some function. Let A = the true but unknown value of a quantity depending on another, whose true unknown value is a, by the given function A = f(a); let an observed value for a be v, the corresponding value for A being V, then V = f(v). Let e = error of v; then the error of V is

$$V-A=f(a+e)-f(a)=ef'(v)\qquad\ldots\ (60),$$

as v is nearly equal to a. Suppose now the same magnitude A also a given function f₁(a₁) of a second magnitude a₁, which is also observed and found to be v₁; also for a third, and so on; hence, writing C = f′(v), C₁ = f₁′(v₁), &c.,

$$V-A=eC,\quad V_1-A=e_1C_1,\quad V_2-A=e_2C_2,\ \&c.\qquad\ldots\ (61);$$

and we have to judge of the best value for the unknown quantity, whose true value is called A. The arithmetical mean of V, V₁, V₂, . . . seems the simplest, but it is not here the most probable, and we shall assume it to be a different mean, viz.,

$$X=\frac{mV+m_1V_1+m_2V_2+\ldots}{m+m_1+m_2+\ldots}.$$

(As V, V₁, V₂, . . . are very nearly equal, it would be easy to show that any other way of combining them would be equivalent to this.) The factors m, m₁, m₂, . . . remain to be determined. From (61) the error of X is

$$X-A=\frac{meC+m_1e_1C_1+m_2e_2C_2+\ldots}{m+m_1+m_2+\ldots}\qquad\ldots\ (62).$$

Let the moduli of the errors e, e₁, e₂, . . . be c, c₁, c₂, . . . (see art. 56); then (see art. 49) for the modulus of the error X − A we have

$$(\text{mod.})^2=\frac{m^2C^2c^2+m_1^2C_1^2c_1^2+m_2^2C_2^2c_2^2+\ldots}{(m+m_1+m_2+\ldots)^2}\qquad\ldots\ (63).$$

If the factors m, m₁, m₂, . . . are determined so as to make this modulus the least possible, the importance of the error X − A is the least possible. Differentiate with regard to m, and we find

$$mC^2c^2(m+m_1+m_2+\ldots)=m^2C^2c^2+m_1^2C_1^2c_1^2+\ldots;$$

likewise for m₁, and so on. Hence

$$mC^2c^2=m_1C_1^2c_1^2=m_2C_2^2c_2^2=\&c.;$$

so that the most accurate mean to take is

$$X=\frac{\dfrac{V}{C^2c^2}+\dfrac{V_1}{C_1^2c_1^2}+\dfrac{V_2}{C_2^2c_2^2}+\ldots}{\dfrac{1}{C^2c^2}+\dfrac{1}{C_1^2c_1^2}+\dfrac{1}{C_2^2c_2^2}+\ldots}\qquad\ldots\ (64).$$

The modulus of error in this value is, from (63),

$$\frac{1}{(\text{mod.})^2}=\frac{1}{C^2c^2}+\frac{1}{C_1^2c_1^2}+\frac{1}{C_2^2c_2^2}+\ldots\qquad\ldots\ (65).$$
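[The following Python sketch is not part of the original article; it merely restates formulas (58), (59), (64), and (65) in modern notation so that the arithmetic, including Airy's co-latitude example, may be checked. The function names and the sample figures in the demonstration are assumptions made for illustration only.]

```python
import math

PROB_ERR_FACTOR = 0.4769  # probable error = .4769 x modulus, as in (58)


def prob_err_of_sum(prob_errors):
    """(58): for X = A + B + ..., (p.e. X)^2 = sum of (p.e. of each part)^2."""
    return math.sqrt(sum(p ** 2 for p in prob_errors))


def prob_err_of_mean(prob_err_single, m):
    """(59): p.e. of the mean of m equally good observations = m^(-1/2) p.e."""
    return prob_err_single / math.sqrt(m)


def colatitude_prob_err(prob_err_obs, m, n):
    """Airy's example: co-latitude = (U.Z.D. + L.Z.D.)/2, all observations
    equally good, so p.e. = (1/2) (p.e. of an obs.) sqrt(1/m + 1/n)."""
    upper = prob_err_of_mean(prob_err_obs, m)
    lower = prob_err_of_mean(prob_err_obs, n)
    return 0.5 * prob_err_of_sum([upper, lower])


def best_mean(V, C, c):
    """(64)-(65): combine indirect determinations V_i of A, where
    C_i = f_i'(v_i) and c_i is the modulus of the error of v_i.
    Returns the most accurate mean X and its modulus."""
    weights = [1.0 / (Ci ** 2 * ci ** 2) for Ci, ci in zip(C, c)]
    X = sum(w * Vi for w, Vi in zip(weights, V)) / sum(weights)
    modulus = math.sqrt(1.0 / sum(weights))
    return X, modulus


if __name__ == "__main__":
    print(prob_err_of_sum([0.3, 0.4]))        # 0.5
    print(colatitude_prob_err(1.0, 4, 9))     # 0.5 * sqrt(1/4 + 1/9)
    X, mod = best_mean([10.1, 9.8], [1.0, 2.0], [0.2, 0.2])
    print(X, mod, PROB_ERR_FACTOR * mod)      # probable error of the result
```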
58. The errors e, e₁, e₂, . . . are unknown; we have, as to the first, e = v − a, but a itself is unknown. Let the values of the quantities observed corresponding to the value X for that sought be x, x₁, x₂, . . ., so that

$$X=f(x)=f_1(x_1)=f_2(x_2)=\ldots;$$

then X − A = (x − a)f′(v); and, subtracting,

$$V-X=(v-x)f'(v)=(v-x)C.$$

Here V − X is the apparent error in V, and v − x the apparent error of the observation v, taking X, x as the true values. Of course we have also

$$V_1-X=(v_1-x_1)C_1,\quad V_2-X=(v_2-x_2)C_2,\ \&c.$$

If now we were to determine X so as to render the sum of squares of apparent errors of the observations, each divided by the square of its modulus, a minimum, that is,

$$\frac{(v-x)^2}{c^2}+\frac{(v_1-x_1)^2}{c_1^2}+\frac{(v_2-x_2)^2}{c_2^2}+\ldots=\frac{(V-X)^2}{C^2c^2}+\frac{(V_1-X)^2}{C_1^2c_1^2}+\frac{(V_2-X)^2}{C_2^2c_2^2}+\ldots$$

a minimum, we shall find the same value (64) for X. Of course, if the modulus is the same for all the observations, the sum of squares simply is to be made a minimum.

To take a very simple instance. An observed value of a quantity is P; an observed value of a quantity known to be the square root of the former is Q; what is the most probable value? If X be taken for the quantity, the apparent error of P is P − X; the apparent error of Q is found from

$$(Q-\epsilon)^2=X;\qquad\therefore\ \epsilon=(Q^2-X)/2Q;$$

$$\therefore\ (P-X)^2+(Q^2-X)^2/4Q^2=\text{minimum};$$

$$\therefore\ X=(4P+1)Q^2/(4Q^2+1);$$

the weight of both observations being supposed the same.

Again, suppose a circle is divided by a diameter into two semicircles; the whole circumference is measured and found to be L; also the two semicircles are found to be M and N respectively. What is the most probable value of the circumference? If X be taken as the circumference, the apparent error in L is L − X; those of M and N are M − ½X, N − ½X. Hence, if all the measurements are equally good,

$$(L-X)^2+(M-\tfrac{1}{2}X)^2+(N-\tfrac{1}{2}X)^2=\text{minimum};$$

$$\therefore\ X=\tfrac{1}{3}(2L+M+N)$$

is the most probable value. The modulus of error of this result is, by (65), found to be

$$(\text{mod.})^2=\tfrac{2}{3}(\text{mod. of measurements})^2;$$

so that

$$\text{probable error}=(\text{prob. error of a measurement})\sqrt{\tfrac{2}{3}}.$$

59. In the last article we have explained the method of least squares, as applied to determine one unknown element from more than one observation of the element itself or of others with which it is connected by known laws. If several observations of the element itself are made, it is obvious that the method of least squares gives the arithmetical mean of the observations as the best value, thus justifying what common sense seems to indicate. If the observations are not equally good, the best value will be

$$X=\frac{wV+w_1V_1+w_2V_2+\ldots}{w+w_1+w_2+\ldots},$$

calling w, w₁, w₂, . . . the weights of the different observations, i.e., $w=c^{-2}$, $w_1=c_1^{-2}$, $w_2=c_2^{-2}$, &c.

It would carry us beyond our assigned limits in this article to attempt to demonstrate and explain the method of least squares when several elements have to be determined from a number of observations exceeding the elements in number. We must therefore refer the reader to the works already named, and also to the following: Gauss, Theoria Combinationis Observationum; Gauss, Theoria Motus; Airy, Theory of Errors of Observation; Leslie
Ellis, in Camb. Phil. Trans., vol. viii.
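[Again as an illustrative aside, not part of the original article: the two instances worked in art. 58 above may be verified numerically. The Python sketch below, with assumed names and sample figures, computes the closed forms and confirms that each does minimise its sum of squared apparent errors.]

```python
def square_root_case(P, Q):
    """P observes a quantity, Q observes its square root, equal weights:
    minimising (P - X)^2 + (Q^2 - X)^2 / (4 Q^2) gives
    X = (4P + 1) Q^2 / (4 Q^2 + 1)."""
    return (4 * P + 1) * Q ** 2 / (4 * Q ** 2 + 1)


def circle_case(L, M, N):
    """Circumference measured as L, its halves as M and N, equal weights:
    minimising (L - X)^2 + (M - X/2)^2 + (N - X/2)^2 gives X = (2L + M + N)/3."""
    return (2 * L + M + N) / 3.0


if __name__ == "__main__":
    P, Q = 4.2, 1.9

    def f(X):
        return (P - X) ** 2 + (Q ** 2 - X) ** 2 / (4 * Q ** 2)

    X1 = square_root_case(P, Q)
    assert f(X1) <= min(f(X1 - 1e-3), f(X1 + 1e-3))

    L, M, N = 10.2, 5.0, 4.9

    def g(X):
        return (L - X) ** 2 + (M - X / 2) ** 2 + (N - X / 2) ** 2

    X2 = circle_case(L, M, N)
    assert g(X2) <= min(g(X2 - 1e-3), g(X2 + 1e-3))

    print(X1, X2)
```

Note that, with every C_i = 1, the combination best_mean of the preceding sketch reduces to the weighted mean of art. 59, the weights being c⁻², c₁⁻², &c.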