Supplementum theoriae combinationis observationum erroribus minimis obnoxiae

Theory of the combination of observations least subject to errors (1821–1826)
by Carl Friedrich Gauss, translated from the French by Wikisource

Based on the 1855 French translation by Joseph Bertrand.


1.

In the previous Memoir, we assumed that the quantities to be determined, with the help of imperfect observations, depended on certain unknown elements, in terms of which they could be expressed: the problem then consisted in deducing from the observations, as accurately as possible, the values of these elements.

In most cases, this is indeed how the question arises. But sometimes the problem is presented differently, and at first glance it is even doubtful whether or not it can be reduced to a problem of the required form. Indeed, frequently the quantities under observation are not given in the form of a function of certain elements, and they seem to only be reducible to such a form by difficult or even ambiguous operations; whereas the nature of the problem does provide certain conditions that the observed values must rigorously satisfy.

Upon closer inspection, however, it can be seen that this case does not differ essentially from the previous one, and that it can be reduced to it. Indeed, if $\pi$ denotes the number of observed quantities, and $\sigma$ the number of constraint equations, then nothing prevents us from choosing, among the former, $\pi - \sigma$ quantities and considering them as our only unknowns, the others being regarded as functions of these, defined by the constraint equations. By this artifice, we return to the case of the previous Memoir.

Nevertheless, although this approach often leads to the result in a quite convenient manner, one cannot deny that it is less natural, and it is therefore desirable to treat the problem in another form, which admits a very elegant solution. Moreover, since this new solution leads to faster calculations than the previous one whenever $\sigma$ is less than $\pi - \sigma$, or equivalently, whenever the multitude of elements that we denoted by $\rho$ in the previous commentary is greater than $\sigma$, it is preferable to use the new solution even when it would be easy, given the nature of the problem, to carry out the elimination unambiguously by means of the constraint equations.

2.

Let $v$, $v'$, $v''$, etc. denote the $\pi$ quantities whose values are provided by observation. Suppose an unknown $u$ depends on these quantities and is expressed by a known function $f$; let $\lambda$, $\lambda'$, $\lambda''$, etc. be the values of the differential quotients

$$\frac{\mathrm{d}u}{\mathrm{d}v},\quad \frac{\mathrm{d}u}{\mathrm{d}v'},\quad \frac{\mathrm{d}u}{\mathrm{d}v''},\ \text{etc.}$$

corresponding to the true values of $v$, $v'$, $v''$, etc. If one substituted the true values for $v$, $v'$, $v''$, etc. in the function $f$, one would also obtain the true value of $u$; but if the observations are affected by the errors $e$, $e'$, $e''$, etc., the resulting total error for $u$ will be represented by

$$\lambda e + \lambda' e' + \lambda'' e'' + \text{etc.},$$

provided (when $f$ is not linear) that one can neglect the squares and products of the errors $e$, $e'$, $e''$, etc., as we will always assume.

Although the magnitude of the errors $e$, $e'$, $e''$, etc. is uncertain, the uncertainty attached to the value found for $u$ can generally be measured by the mean error to be feared in the adopted determination. According to the principles developed in the previous commentary, this mean error is

$$\sqrt{\lambda^2 m^2 + \lambda'^2 m'^2 + \lambda''^2 m''^2 + \text{etc.}},$$

where $m$, $m'$, $m''$, etc. are the mean errors of the various observations. If all the observations are affected by the same degree of uncertainty, this expression becomes

$$m\sqrt{\lambda^2 + \lambda'^2 + \lambda''^2 + \text{etc.}},$$

and it is clear that in this calculation it is permissible to replace $\lambda$, $\lambda'$, $\lambda''$, etc. with the values taken by the differential coefficients

$$\frac{\mathrm{d}u}{\mathrm{d}v},\quad \frac{\mathrm{d}u}{\mathrm{d}v'},\quad \frac{\mathrm{d}u}{\mathrm{d}v''},\ \text{etc.}$$

when one replaces $v$, $v'$, $v''$, etc. with their observed values.
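As a modern illustration of this rule (an addition of ours, not part of the text; Python with numpy, and a function and numbers invented for the occasion), the mean error to be feared can be computed by evaluating the differential quotients numerically at the observed values:

```python
import numpy as np

# Sketch of art. 2 (illustrative function and values, not Gauss's data):
# the mean error to be feared in u = f(v, v', v'') is
#     sqrt(l^2 m^2 + l'^2 m'^2 + l''^2 m''^2),
# where l, l', l'' are the differential quotients of f taken at the
# observed values of v, v', v''.

def f(v):
    # a hypothetical function of three observed quantities
    return v[0] * np.sin(v[1]) / np.sin(v[2])

v_obs = np.array([1000.0, 0.9, 0.7])   # observed values (invented)
m = np.array([0.05, 1e-4, 1e-4])       # mean errors of the observations (invented)

# differential quotients by central finite differences
eps = 1e-6
lam = np.array([(f(v_obs + eps * e) - f(v_obs - eps * e)) / (2 * eps)
                for e in np.eye(3)])

print(np.sqrt(np.sum(lam**2 * m**2)))  # mean error to be feared in u
```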

3.

When the quantities $v$, $v'$, $v''$, etc. are independent, the unknown can be determined in only one way, and the uncertainty attached to the result can be neither avoided nor reduced. In this case, there is nothing arbitrary in the value of the unknown provided by the observations.

It is quite different when the quantities $v$, $v'$, $v''$, etc. are subject to certain relations, which we will suppose expressed by $\sigma$ conditional equations

$$X = 0,\quad Y = 0,\quad Z = 0,\ \text{etc.},$$

where $X$, $Y$, $Z$, etc. are given functions of the quantities $v$, $v'$, $v''$, etc. In this case, there are infinitely many ways to represent our unknown as a combination of the quantities $v$, $v'$, $v''$, etc., since it is clear that the function $f$ can be replaced by any other function $f^*$ such that the difference $f^* - f$ vanishes identically upon setting

$$X = 0,\quad Y = 0,\quad Z = 0,\ \text{etc.}$$

If the observations were rigorously exact, this substitution would not change the result at all; but, due to the inevitable errors, each form adopted for $f^*$ will correspond to a different result, and the error committed, instead of being

$$\lambda e + \lambda' e' + \lambda'' e'' + \text{etc.},$$

will be

$$\lambda^* e + \lambda^{*\prime} e' + \lambda^{*\prime\prime} e'' + \text{etc.},$$

where $\lambda^*$, $\lambda^{*\prime}$, $\lambda^{*\prime\prime}$, etc. denote the differential quotients

$$\frac{\mathrm{d}f^*}{\mathrm{d}v},\quad \frac{\mathrm{d}f^*}{\mathrm{d}v'},\quad \frac{\mathrm{d}f^*}{\mathrm{d}v''},\ \text{etc.}$$

Although it is impossible to assign the values of the various errors, we can nevertheless compare the mean errors to be feared in the various combinations. The most advantageous combination will be the one that gives the minimum value to the mean error. This error being

$$\sqrt{\lambda^{*2} m^2 + \lambda^{*\prime 2} m'^2 + \lambda^{*\prime\prime 2} m''^2 + \text{etc.}},$$

we must seek to make the sum

$$\lambda^{*2} m^2 + \lambda^{*\prime 2} m'^2 + \lambda^{*\prime\prime 2} m''^2 + \text{etc.}$$

as small as possible.

4.

The infinitely many functions $f^*$ by which $f$ can be replaced will differ from each other, for our purposes, only by the values they provide for $\lambda^*$, $\lambda^{*\prime}$, $\lambda^{*\prime\prime}$, etc.: we must therefore first seek the relationships that exist between the systems of values that these coefficients can take. Let

$$a,\ a',\ a'',\ \text{etc.};\qquad b,\ b',\ b'',\ \text{etc.};\qquad c,\ c',\ c'',\ \text{etc.}$$

denote the values taken by the coefficients

$$\frac{\mathrm{d}X}{\mathrm{d}v},\ \frac{\mathrm{d}X}{\mathrm{d}v'},\ \frac{\mathrm{d}X}{\mathrm{d}v''},\ \text{etc.};\qquad \frac{\mathrm{d}Y}{\mathrm{d}v},\ \frac{\mathrm{d}Y}{\mathrm{d}v'},\ \frac{\mathrm{d}Y}{\mathrm{d}v''},\ \text{etc.};\qquad \frac{\mathrm{d}Z}{\mathrm{d}v},\ \frac{\mathrm{d}Z}{\mathrm{d}v'},\ \frac{\mathrm{d}Z}{\mathrm{d}v''},\ \text{etc.}$$

when we substitute the true values of $v$, $v'$, $v''$, etc. It is clear that if increments $\mathrm{d}v$, $\mathrm{d}v'$, $\mathrm{d}v''$, etc. are given to $v$, $v'$, $v''$, etc. which do not change $X$, $Y$, $Z$, etc., and thus leave each of them with a value of zero, these increments, which satisfy the equations

$$a\,\mathrm{d}v + a'\,\mathrm{d}v' + a''\,\mathrm{d}v'' + \text{etc.} = 0,\qquad b\,\mathrm{d}v + b'\,\mathrm{d}v' + b''\,\mathrm{d}v'' + \text{etc.} = 0,\qquad c\,\mathrm{d}v + c'\,\mathrm{d}v' + c''\,\mathrm{d}v'' + \text{etc.} = 0,\ \text{etc.},$$

will not change the value of $f^*$, and consequently, we will have

$$\lambda^*\,\mathrm{d}v + \lambda^{*\prime}\,\mathrm{d}v' + \lambda^{*\prime\prime}\,\mathrm{d}v'' + \text{etc.} = 0.$$

We easily conclude that $\lambda^*$, $\lambda^{*\prime}$, $\lambda^{*\prime\prime}$, etc. must have the form

$$\lambda^* = \lambda + \alpha a + \beta b + \gamma c + \text{etc.},\qquad \lambda^{*\prime} = \lambda' + \alpha a' + \beta b' + \gamma c' + \text{etc.},\qquad \lambda^{*\prime\prime} = \lambda'' + \alpha a'' + \beta b'' + \gamma c'' + \text{etc.},\ \text{etc.},$$

where $\alpha$, $\beta$, $\gamma$, etc. are determined multipliers. Conversely, it is clear that for any values of $\alpha$, $\beta$, $\gamma$, etc., we can form a function $f^*$ for which the values $\lambda^*$, $\lambda^{*\prime}$, $\lambda^{*\prime\prime}$, etc. will be precisely those provided by these equations, and this function can, according to the above, be substituted for $f$. The simplest form we can give it is

$$f^* = f + \alpha X + \beta Y + \gamma Z + \text{etc.},$$

but the most general is

$$f^* = f + \alpha X + \beta Y + \gamma Z + \text{etc.} + W,$$

where $W$ denotes a function of $v$, $v'$, $v''$, etc. which vanishes identically when $X$, $Y$, $Z$, etc. are zero, and whose value, in the case under consideration, is a maximum or a minimum. However, this makes no difference for our purposes.

5.

It is now easy to assign values to $\alpha$, $\beta$, $\gamma$, etc. such that the sum

$$\lambda^{*2} m^2 + \lambda^{*\prime 2} m'^2 + \lambda^{*\prime\prime 2} m''^2 + \text{etc.}$$

achieves its minimum value. It is clear that to achieve this goal, the knowledge of the absolute mean errors $m$, $m'$, $m''$, etc. is not necessary; rather, it suffices to know only their ratios. Indeed, instead of these quantities, let us introduce the weights of the observations, $p$, $p'$, $p''$, etc., i.e. numbers inversely proportional to the squares $m^2$, $m'^2$, $m''^2$, etc. The quantities $\alpha$, $\beta$, $\gamma$, etc. must then be determined in such a way that the polynomial

$$\frac{(\lambda + \alpha a + \beta b + \gamma c + \text{etc.})^2}{p} + \frac{(\lambda' + \alpha a' + \beta b' + \gamma c' + \text{etc.})^2}{p'} + \frac{(\lambda'' + \alpha a'' + \beta b'' + \gamma c'' + \text{etc.})^2}{p''} + \text{etc.}$$

achieves its minimum value. Suppose that $A$, $B$, $C$, etc. are the determined values of $\alpha$, $\beta$, $\gamma$, etc. that correspond to this minimum.

We now introduce the following notation:

$$\left[\frac{aa}{p}\right] = \frac{aa}{p} + \frac{a'a'}{p'} + \frac{a''a''}{p''} + \text{etc.},\qquad \left[\frac{ab}{p}\right] = \frac{ab}{p} + \frac{a'b'}{p'} + \frac{a''b''}{p''} + \text{etc.},\qquad \left[\frac{a\lambda}{p}\right] = \frac{a\lambda}{p} + \frac{a'\lambda'}{p'} + \frac{a''\lambda''}{p''} + \text{etc.},\ \text{and so on.}$$

The minimality condition clearly requires that we have

$$0 = \left[\frac{a\lambda}{p}\right] + A\left[\frac{aa}{p}\right] + B\left[\frac{ab}{p}\right] + C\left[\frac{ac}{p}\right] + \text{etc.},\quad 0 = \left[\frac{b\lambda}{p}\right] + A\left[\frac{ab}{p}\right] + B\left[\frac{bb}{p}\right] + C\left[\frac{bc}{p}\right] + \text{etc.},\quad 0 = \left[\frac{c\lambda}{p}\right] + A\left[\frac{ac}{p}\right] + B\left[\frac{bc}{p}\right] + C\left[\frac{cc}{p}\right] + \text{etc.},\ \text{etc.}\quad (1)$$

After these equations have provided the values of $A$, $B$, $C$, etc., we set

$$\lambda + Aa + Bb + Cc + \text{etc.} = \lambda^*,\quad \lambda' + Aa' + Bb' + Cc' + \text{etc.} = \lambda^{*\prime},\quad \lambda'' + Aa'' + Bb'' + Cc'' + \text{etc.} = \lambda^{*\prime\prime},\ \text{etc.}\quad (2)$$

and the most suitable function to determine our unknown, corresponding to the minimum mean error, will be the one whose differential coefficients, for the considered values of the variables, are equal to $\lambda^*$, $\lambda^{*\prime}$, $\lambda^{*\prime\prime}$, etc. The weight of the determination thus obtained will be

$$P = \frac{1}{\frac{\lambda^{*2}}{p} + \frac{\lambda^{*\prime 2}}{p'} + \frac{\lambda^{*\prime\prime 2}}{p''} + \text{etc.}},\quad (3)$$

meaning that $\frac{1}{P}$ will precisely be the value taken by the polynomial considered above, for the values of the variables $\alpha = A$, $\beta = B$, $\gamma = C$, etc. which satisfy the equations (1).
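The minimization of this article can be restated in matrix form (a modern gloss of ours, with invented data; Python with numpy). The rows of the array G play the role of the gradients $a$, $a'$, $a''$, etc.; $b$, $b'$, $b''$, etc. of the condition functions, and the matrix N collects the bracket sums of equations (1):

```python
import numpy as np

# Sketch of arts. 4-5 with invented data: solve equations (1) for the
# multipliers A, B, form the coefficients of equations (2), and evaluate
# the weight of equation (3).

rng = np.random.default_rng(0)
G = rng.normal(size=(2, 6))                   # two condition functions, six observations
lam = rng.normal(size=6)                      # differential quotients of f
p = np.array([1.0, 2.0, 1.0, 3.0, 2.0, 1.0])  # weights of the observations

N = G @ np.diag(1 / p) @ G.T                  # bracket sums [aa/p], [ab/p], ...
AB = np.linalg.solve(N, -G @ (lam / p))       # the multipliers A, B of equations (1)

lam_star = lam + G.T @ AB                     # coefficients of equations (2)
P = 1 / np.sum(lam_star**2 / p)               # weight of the determination, eq. (3)
print(P)

# brute-force check of the minimum property: no other multipliers do better
for _ in range(1000):
    trial = lam + G.T @ rng.normal(size=2)
    assert np.sum(trial**2 / p) >= np.sum(lam_star**2 / p) - 1e-12
```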

6.

In the previous article, we showed how to determine the function $f^*$ that provides the most suitable determination of the unknown $u$. Let us now examine what value results from this. Let us denote it by  : it will be obtained by substituting, in $f^*$, the observed values of the quantities $v$, $v'$, $v''$, etc. Let   be the value that $f$ takes when the same substitutions are made, and let   be the true value of the unknown, obtained by substituting the true values of $v$, $v'$, $v''$, etc., either in $f$ or in $f^*$. We have

 

and, consequently,

 

Replacing       etc. with their values provided by (2), and setting

  (4)

we will have

  (5)

It is not possible to calculate       etc. by means of the formulas (4), as the errors       etc. appearing in them have unknown values, but it is easy to see that the quantities       etc., are none other than the values of       etc., corresponding to the observed values of       etc. In this way, the systems of equations (1), (3), (5) exhibit the complete solution to our problem. In fact, it is clear that the remark made at the end of art. 2, regarding the quantities       etc., can be applied to the calculation of       etc.       etc. etc., or in other words, the true values of       etc. can be replaced with the observed values.

7.

In place of formula (3), which represents the weight of the most probable determination, several other expressions can be given, which are worth developing.

First, observe that if the equations (2) are multiplied by       etc. respectively, and then added, the result is

 

The left hand side is   and therefore, denoting the right hand side by   we have

 

and similarly

 

Next, multiplying the equations (2) by       etc. and adding, we find that

 

and thus we obtain a second expression for the weight,

 

If, finally, we multiply the same equations (2) by       etc. and add, we obtain a third expression for the weight,

 

where, with the same notation as before,

 

One can easily derive from this a fourth expression for the weight,

 

8.

The general solution we have just outlined is particularly suited to the case where there is only one unknown to determine. However, when the most plausible values of several unknowns, all depending on the same observations, are sought, or when it is not yet known which unknowns will have to be derived from the observations, it is necessary to proceed differently, as we will now explain.

Let us consider the quantities       etc. as indeterminate, and set

  (6)

Suppose that we deduce, by elimination, that

  (7)

Let us first of all observe that the symmetrically placed coefficients are necessarily equal, i.e.

 

This follows from the theory of elimination, but we will also give a direct demonstration below.

Thus we have

  (8)

and so, by setting

  (9)

we obtain

 

and if we further set

  (10)

it follows that

  (11)

9.

Comparing equations (7) and (9) shows that the auxiliary quantities       etc., are the values taken by the indeterminates       etc., when we set       etc., and thus we have

  (12)

Multiplying equations (10) by       etc. respectively, and adding, we obtain

and similarly,


  (13)

Now, since   is the value taken by   after the observed values are substituted for       etc., it is easy to see that if we apply the corrections       etc., to each of these quantities, the value of   becomes  , and similarly     etc., vanish under this assumption. In a similar way, equation (11) also shows that   is the value taken by   after the same substitutions have been made.

We will refer to the application of the corrections       etc. as the compensation of observations. This evidently leads us to the following very important consequence: namely, the observations, compensated in the way we have described, exactly satisfy all of the conditional equations, and cause any function of the observed quantities to take the value resulting from the most suitable combination of the unaltered observations. Since the conditional equations are too few to deduce the exact values of the errors, what we have found by the above are the most plausible errors. We will henceforth refer to the quantities       etc. by this name.

10.

Since the number of observations is greater than the number of conditional equations, an infinite number of systems of corrections can be found, other than the system of the most plausible corrections, which exactly satisfy the condition equations.

It is important to examine the relationships that link these various systems. Let       etc., be such a system of corrections, different from the most plausible system. Then we have

 

Multiplying these equations by       etc., and adding, we obtain, with the help of the equations (10),

 

But combining equations (13) in the same manner gives

  (14)

From the combination of these results, one easily deduces

 
 

Therefore, the sum

 

is necessarily greater than the sum

 

which can be stated as follows:

Theorem. Among all the systems of corrections by which the observations can be reconciled with the condition equations, the system of the most plausible corrections is the one for which the sum of the squares of the corrections, each multiplied by the weight of the corresponding observation, is a minimum.

This is the very principle of least squares, from which equations (12) and (10) can be easily and immediately derived. The minimum value of the sum, which we will now denote by $S$, is equal, by equation (14), to

 

11.

The determination of the most plausible errors, being independent of $\lambda$, $\lambda'$, $\lambda''$, etc., clearly provides the most convenient preparation, regardless of how the observations are to be subsequently used. Furthermore, it is easy to see that, to achieve this goal, it is not necessary to carry the elimination out in indefinite form, i.e. to calculate     etc.; it suffices to deduce from the equations (12), by a definite elimination, the auxiliary quantities       etc., which we will call, in what follows, the correlates of the condition equations

 

and to substitute these quantities into equation (10).
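In matrix form, these two operations can be sketched as follows (a modern gloss of ours, with invented data; writing the linearized condition equations as G e = w, where e are the corrections sought and w the amounts by which the observed values fail to satisfy the conditions):

```python
import numpy as np

# Sketch of the two-step procedure of art. 11, with invented data.

rng = np.random.default_rng(1)
G = rng.normal(size=(3, 8))            # gradients of the condition functions
p = rng.uniform(1.0, 3.0, size=8)      # weights of the observations
w = rng.normal(size=3)                 # misclosures of the condition equations

# step 1: a definite elimination yields the correlates (equations (12))
N = G @ np.diag(1 / p) @ G.T
k = np.linalg.solve(N, w)

# step 2: substitute the correlates into equations (10)
e = (G.T @ k) / p                      # the most plausible corrections

print(np.allclose(G @ e, w))                # compensated observations satisfy the conditions
print(np.isclose(np.sum(p * e**2), w @ k))  # the minimum sum S equals w . k
```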

This method leaves nothing to be desired when only the most plausible values of the quantities provided by observation are required, but it is no longer sufficient when one also seeks the weight of each of the values found. Whichever of the four previous formulas one wants to use, it is essential to know       etc., or equivalently,       etc.; for this reason, it will be useful to study more closely the elimination that provides these quantities, and to obtain a more convenient method for determining the weights.

12.

The relations between the quantities we are concerned with are notably simplified by considering the indefinite second-degree function

 

which we will denote by $T$. This function is obviously equal to

  (15)

Moreover, we clearly have

  (16)

and if, with the help of the equations (7), one expresses       etc. in terms of       etc.,

 

The theory developed above provides two sets of determined values for the quantities       etc.,       etc.

The first one is

 

whose corresponding value of   is

 

as can be seen by comparing the third form of the weight   with equation (16), or by directly considering the fourth form.

The second set of values is

 

and the corresponding value of   is

 

as is evident from formulas (10) and (15), and also from formulas (14) and (16).

13.

We must now subject the function $T$ to a transformation similar to the one indicated in the Theoria Motus Corporum Coelestium, art. 182, and further developed in the Disquisitio de elementis ellipticis Palladis. To this end, let us set

  (17)

After also setting[1]

 

we will have

 

where       etc. are derived from         etc. using the following equations:

 

We can then easily derive all the formulas required for our purpose. Namely, to determine the correlates       etc., we set

  (18)

and then         etc. are obtained from the following formulas, starting with the last one:

  (19)

For the sum   we have the new formula

  (20)

and finally, the weight   which is to be assigned to the most plausible determination of the quantity   is given by the formula

  (21)

In this formula, we have

  (22)

Formulas (17)–(21), whose simplicity leaves nothing to be desired, provide the complete solution to our problem.
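Read with modern eyes, the transformation of this article amounts to a triangular factorization of the matrix of the systems (1) and (12); that identification is a gloss of ours, not a claim of the text. The following sketch, with invented data, checks formulas (20) and (21) against the direct computations of arts. 5 and 10:

```python
import numpy as np

# Sketch of art. 13 with invented data: the Cholesky factor L stands in for
# Gauss's successive transformation, and S and P are obtained by forward
# substitutions, in the spirit of formulas (18)-(21).

rng = np.random.default_rng(2)
G = rng.normal(size=(3, 8))            # gradients of the condition functions
p = rng.uniform(1.0, 3.0, size=8)      # weights
w = rng.normal(size=3)                 # misclosures of the conditions
lam = rng.normal(size=8)               # differential quotients of the function f

N = G @ np.diag(1 / p) @ G.T           # symmetric and positive definite
L = np.linalg.cholesky(N)              # N = L L^T

z = np.linalg.solve(L, w)              # solve L z = w
S = z @ z                              # cf. the new formula (20) for S

y = np.linalg.solve(L, G @ (lam / p))  # solve L y = g
P = 1 / (lam @ (lam / p) - y @ y)      # cf. formula (21) for the weight

# checks against the direct route
k = np.linalg.solve(N, w)              # correlates
print(np.isclose(S, w @ k))
lam_star = lam - G.T @ np.linalg.solve(N, G @ (lam / p))
print(np.isclose(P, 1 / (lam_star @ (lam_star / p))))
```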

14.

Having solved our primary problem, we will now address some secondary questions, which will shed more light on the subject.

We will first investigate whether the elimination that provides       etc. as functions of       etc. can become impossible in certain cases. This would obviously happen if the functions       etc. were not independent of each other. Suppose, for a moment, that this is the case, and that one of them can be expressed in terms of the others, so that we have an identity

 

where       etc. are determined numbers. Then we will have

 

and if we set

 

it follows automatically that

 

and consequently,

 

Since       etc. are, by their nature, all positive, this equation requires

 

Let us now consider the complete differentials       etc. corresponding to the values of       etc., immediately provided by the observations. By our previous results, these differentials

 

will be related to each other in such a way that when they are multiplied by       etc. respectively and added, the sum is identically zero, so that among the equations

 

at least one can be regarded as superfluous, because it will be satisfied as soon as the others are.

Upon closer examination of the question, it is seen that this conclusion is only applicable to values of the variables that differ infinitesimally from those provided by observation. In particular, there are two cases to distinguish: first, the case where one of the equations

 

is generally and absolutely contained within the others, and can therefore be eliminated; and second, the case where, for the particular values of       etc. provided by the observations, one of the functions       etc.,   for example, achieves its maximum or minimum value, or, more generally, a value where its differential vanishes, while the other equations remain satisfied. However, since we only consider variations of our variables whose squares are negligible, this second case (which in practice will rarely occur) can be assimilated into the first, and one of the constraint equations can be removed as redundant. If the remaining equations are independent in the sense we have indicated, it is certain, according to the above, that the elimination is possible. However, we reserve the right to return to this matter, which deserves to be examined as a theoretical subtlety rather than as a question of practical utility.

15.

In the previous commentary, arts. 37 sqq., we have shown how to approximate, a posteriori, the weight of a determination. If approximate values of $\pi$ quantities are provided by equally precise observations, and they are compared with the values resulting from the most plausible assumptions one can make about the $\rho$ elements on which they depend, it has been seen that one must add the squares of the differences obtained, divide the sum by $\pi - \rho$, and regard the resulting quotient as an approximate value of the square of the mean error inherent in this kind of observation. If the observations are of unequal precision, the only modification required is that the squares of the differences must be multiplied by the respective weights of the corresponding observations, and the mean error obtained in this way relates to observations whose weight is taken as unity.

In the current case, the sum of squares we are speaking of obviously coincides with the sum $S$, and the difference $\pi - \rho$ with the number $\sigma$ of the condition equations. Consequently, for the mean error of observations whose weight is $1$, we will have the expression $\sqrt{\dfrac{S}{\sigma}}$, so that the determination will be the more reliable the larger $\sigma$ is.

However, it is worthwhile to establish this result independently of the reasoning in the first investigation. To this end, it will be helpful to introduce some new notation. Suppose that, corresponding to the values

 

we have

 

so that

 

and furthermore, corresponding to the values

 

we have

 

and finally, corresponding to the values

 

we have

 

and so on.

Combining equations (4) and (9) then yields

 

and, since

 

we will clearly have

 

16.

The series of observations that provide the quantities       etc., affected by random errors       etc., can be considered as a trial that does not reveal the magnitude of each error, but instead, by means of the rules explained above, allows us to determine the quantity $S$ as a function of the errors. In such a trial, some errors may be larger and others smaller; but the greater the number of errors involved, the greater the probability that $S$ differs little from its average value. The difficulty thus boils down to finding the average value of $S$.

By the principles outlined in the first Memoir, which it is unnecessary to reproduce here, we find for this average value

 

Let $m$ denote the mean error corresponding to observations with weight 1, so that we have

 

Then the previous expression can be written as follows:

 

but we have found

 

and thus the right-hand side is unity, as can be easily recognized by comparing equations (6) and (7). Similarly, we find that

 

and so on.

Hence the average value of $S$ is $\sigma m^2$, and insofar as it is permissible to regard the value of $S$ furnished by a particular trial as equal to the average value, we conclude that $m = \sqrt{\dfrac{S}{\sigma}}$.
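This conclusion can be verified by simulation (a modern sketch of ours, assuming Gaussian errors and an invented network):

```python
import numpy as np

# Sketch of arts. 15-16: over many repetitions the average of S approaches
# sigma * m^2, which justifies the a posteriori estimate m = sqrt(S / sigma).

rng = np.random.default_rng(3)
n_obs, sigma, m = 10, 4, 0.5               # sigma condition equations
G = rng.normal(size=(sigma, n_obs))        # gradients of the conditions
N = G @ G.T                                # unit weights throughout

S_values = []
for _ in range(20000):
    e_true = rng.normal(scale=m, size=n_obs)    # the unknown true errors
    w = G @ e_true                              # the misclosures they produce
    S_values.append(w @ np.linalg.solve(N, w))  # S = w . k, as in art. 10

print(np.mean(S_values), sigma * m**2)          # the two agree closely
```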

17.

One can assess the confidence warranted by this determination by calculating the mean error to be feared, either in it or in its square. The latter is the square root of the average value of the expression   which can be developed by reasoning similar to that explained in the first commentary (arts. 39 sqq.). We omit the details for the sake of brevity and simply give the result.

The mean error to be feared in determining the square   is expressed by

 

where   is the average value of the fourth powers of the errors with weight 1, and   is the sum

 

This sum cannot generally be simplified; however, it can be shown, by a method similar to that used in art. 40 of the previous commentary, that its value lies between   and   Under the hypothesis on which we originally established the method of least squares, the term containing this sum disappears, because   and the precision that must be attributed to the determination   is therefore the same as if one had operated on   observations affected by exactly known errors, following the precepts of arts. 15 and 16 of the previous commentary.

18.

For the compensation of observations, we have seen above that there are two operations to perform: first, one must determine the correlates of the conditional equations, that is, the quantities       etc. which satisfy equations (12); secondly, these quantities must be substituted into equations (10). The compensation thus obtained can be called perfect or complete, as opposed to an imperfect or incomplete compensation, by which we mean one resulting from the same equations (10) when values of       are substituted into them which do not satisfy the equations (12), i.e. which satisfy only some of them, or none. A system of corrections not derived from equations (10) will not be considered here at all; we will not even describe it as a compensation. When equations (10) are satisfied, the systems (12) and (13) become equivalent, and the difference of which we speak can then be stated as follows: Fully compensated observations satisfy the conditional equations

 

Partially compensated observations satisfy only some of these equations, and perhaps none; a compensation for which all equations are satisfied is necessarily complete.

19.

It follows from the very definition of a compensation that a combination of two compensations is again a compensation, and it does not matter whether the rules given to obtain a perfect compensation are applied to the raw observations or to imperfectly compensated observations.

Let       etc., be a system of incomplete compensations resulting from the formulas

  (I)

Since the observations thus changed do not satisfy all of the conditional equations, let       etc., be the values taken by       etc., when the values obtained for       etc., are substituted into them. We must then find the values       etc., which satisfy the equations

  (II)

Once this is done, the complete compensation of the observations thus modified will be accomplished by the new changes       etc.;       etc., being deduced from the formulas

  (III)

Let us now examine how these corrections agree with the complete compensation of the raw observations. First of all, it is clear that we have

 

In these equations, if we replace       etc. with their values from (I), and if we replace       etc., with their values from (II), then we obtain

 

It follows that the correlates of the conditional equations (12) are

 

and then equations (10), (I), and (III) show that

 

Hence the perfect compensation produces the same value for each unknown, whether calculated directly or obtained through an incomplete compensation.

20.

When there are a large number of conditional equations, determining the correlates       etc. may require calculations so lengthy that the calculator may be deterred: in such cases, it may be advantageous to obtain a complete compensation using a series of approximations based on the theorem of the preceding article. For this purpose, the conditional equations will be divided into two or more groups, and first, a compensation will be sought that satisfies the equations of the first group. Then, the values modified by this initial calculation will be treated, and they will be corrected again, considering only the equations of the second group. This second calculation will generally yield results that no longer satisfy the equations of the first group, and if only two groups were formed, one must then return to the first group and satisfy its equations with new corrections. The observations will then undergo a fourth compensation, in which only the conditions of the second group are considered, and by alternating in this way between the two groups of equations, corrections will be formed that will necessarily become smaller and smaller. If the choice of groups has been made skillfully, one will arrive at stable values after a few iterations.

When forming more than two groups, one must proceed in the same manner, with the various groups being used successively until the last, after which one returns to the first group to repeat the process in the same order. We have only indicated this method, the success of which will depend greatly on the skill of the calculator.
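The alternating procedure can be sketched as follows (a modern gloss of ours, with invented data; each pass performs the complete compensation of one group, and the result is compared with the direct compensation of all conditions at once):

```python
import numpy as np

# Sketch of art. 20: alternating compensation over two groups of conditions.
# By the theorem of art. 19, the iterates approach the complete compensation.

rng = np.random.default_rng(4)
n_obs = 12
G1 = rng.normal(size=(3, n_obs))       # first group of condition equations
G2 = rng.normal(size=(3, n_obs))       # second group
p = rng.uniform(1.0, 2.0, size=n_obs)  # weights
v0 = rng.normal(size=n_obs)            # raw observations; the conditions demand G v = 0

def compensate(G, v):
    # complete compensation for one group: corrections e with G (v + e) = 0
    N = G @ np.diag(1 / p) @ G.T
    k = np.linalg.solve(N, -G @ v)     # correlates, equations (12)
    return v + (G.T @ k) / p           # corrected observations, equations (10)

v = v0.copy()
for _ in range(200):                   # alternate between the two groups
    v = compensate(G1, v)
    v = compensate(G2, v)

G = np.vstack([G1, G2])                # direct complete compensation, for comparison
N = G @ np.diag(1 / p) @ G.T
k = np.linalg.solve(N, -G @ v0)
print(np.allclose(v, v0 + (G.T @ k) / p, atol=1e-8))
```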

21.

We have yet to provide the proof of the lemma which was assumed in art. 8. For the sake of perspicuity, let us use notation which is better adapted to this matter.

Let       etc., be indeterminates, and suppose that the equations

 

have, through elimination, led to the following:

 

By substituting, in the first two equations of the second system, the values of       etc. provided by the first, we obtain two identities:

 

These being identities, we can substitute any quantities we wish for       etc. Let us choose in the first

 

and in the second

 

Subtracting the two identities term by term yields   This can be written more succinctly as   where     denote two arbitrarily chosen indices; from this we conclude the equalities   and, more generally,   and therefore   Moreover, since the order of the indeterminates is arbitrary, it is evident that, under the assumed hypothesis, we have   in general.
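A quick numerical illustration of the lemma (a modern sketch of ours, with an invented matrix):

```python
import numpy as np

# Illustration of art. 21: the coefficient array of equations (6) is symmetric,
# and elimination preserves this symmetry, so the symmetrically placed
# coefficients of equations (7) are equal.

rng = np.random.default_rng(5)
A = rng.normal(size=(4, 4))
M = A @ A.T                          # a symmetric array, like the bracket sums
M_inv = np.linalg.inv(M)             # the coefficients produced by elimination
print(np.allclose(M_inv, M_inv.T))   # the inverse is symmetric as well
```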

22.

Since the method outlined in this commentary is intended primarily for useful application in higher geodesy, the reader may appreciate it if we include some examples drawn from that branch of science.

The conditional equations that exist among the angles of a system of triangles can generally be divided into three categories.

I. The sum of the horizontal angles formed around the same vertex, encompassing the entire horizon, must be equal to four right angles.

II. The sum of the angles of each triangle can always be considered as known; for even when the triangle is situated on a curved surface, the excess of the sum of its angles over two right angles can be calculated with such approximation that it is permissible to consider the result as absolutely exact.

III. Finally, a third kind of relation is obtained by examining the ratios of the sides in triangles that form a closed network. If, indeed, the triangles are placed in such a way that the second triangle has a side   in common with the first, and a side   in common with the third; if the fourth triangle has two sides   and   respectively in common with the third and the fifth, and so on, until the last triangle, which has a side   in common with the previous one, and a side   in common with the first of all; then the quotients

 

can be calculated by means of the angles opposite them in the triangle to which the two compared sides belong, and since the product of these fractions is obviously unity, there results a relation between the sines of the various measured angles (each reduced by one-third of the spherical or spheroidal excess when operating on a curved surface).
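In modern notation (a gloss of ours, with angle labels that do not occur in the text): writing $\alpha_i$ for the angle opposite the numerator side and $\beta_i$ for the angle opposite the denominator side of the $i$-th quotient, the condition of the third kind reads

$$\prod_i \frac{\sin \alpha_i}{\sin \beta_i} = 1, \qquad \text{or, in logarithms,} \qquad \sum_i \left(\log \sin \alpha_i - \log \sin \beta_i\right) = 0,$$

the logarithmic form being the one employed in the examples below.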

In somewhat complicated networks, it often happens that there are too many equations of the second and third categories, some of them being contained in the rest. On the other hand, it may happen in rare cases that new equations must be added to those of the second category. This will occur, for example, when the network contains polygons not divided into triangles by the measurements; one must then introduce equations relating to figures with more than three sides. On another occasion we will return to this and provide more details on these various circumstances, but examining them now would divert us from our purpose.

However, we cannot refrain from making a remark here that is essential for those who wish to apply our theory rigorously: we always assume that the quantities denoted by       etc. have been observed directly, or deduced from observations, in such a way that their determinations are independent of each other, or at least can be regarded as such. In common practice, one observes the angles that can be considered as the elements       etc. themselves. But one must not forget that if the system contains, in addition, triangles whose angles have not been directly observed, but have been deduced from known ones by additions or subtractions, then these angles must not be counted among the quantities determined by observation, and they must enter the calculation as functions of the elements used to form them.

It will be different if one adopts Struve's method of observation (Astronomische Nachrichten II, p. 431), which consists in determining all the directions around the same vertex by relating them all to a single arbitrary direction. The angles measured in this way will then be taken for       etc., and the angles of the triangles will all appear as differences. The equations of the first category should, in this case, be suppressed as superfluous, for they will be identically satisfied. The method that I myself have followed in the triangulations carried out during the last few years differs from the two previous ones; it can, however, be reconciled, as to the result, with Struve's method, in the sense that, at each station, one must regard as       etc. the angles formed with a single arbitrarily chosen line by the directions emanating from that station.

We will give two examples: the first relates to the first mode of operation, and the second is related to observations made according to the second method.

23.

The first example comes from the work of de Krayenhof, Précis historique des opérations trigonométriques faites en Hollande. We will seek to compensate the part of the observations related to the system of triangles that are contained between Harlingen, Sneek, Oldeholtpade, Ballum, Leeuwarden, Dockum, Drachten, Oosterwolde, and Gröningen. Between these points, nine triangles numbered 121, 122, 123, 124, 125, 127, 128, 131, 132 were formed in the cited work. The observed angles are as follows:

Triangle 121.
0. Harlingen 50° 58′ 15″.238
1. Leeuwarden 82° 47′ 15″.351
2. Ballum 46° 14′ 27″.202
Triangle 122.
3. Harlingen 51° 05′ 39″.717
4. Sneek 70° 48′ 33″.445
5. Leeuwarden 58° 05′ 48″.707
Triangle 123.
6. Sneek 49° 30′ 40″.051
7. Drachten 42° 52′ 59″.382
8. Leeuwarden 87° 36′ 21″.057
Triangle 124.
9. Sneek 45° 36′ 07″.492
10. Oldeholtpade 67° 52′ 00″.048
11. Drachten 66° 31′ 56″.513
Triangle 125.
12. Drachten 53° 55′ 24″.745
13. Oldeholtpade 47° 48′ 52″.580
14. Oosterwolde 78° 15′ 42″.347
Triangle 127.
15. Leeuwarden 59° 24′ 00″.645
16. Dockum 76° 34′ 09″.021
17. Ballum 44° 01′ 51″.040
Triangle 128.
18. Leeuwarden 72° 06′ 32″.043
19. Drachten 46° 53′ 27″.163
20. Dockum 61° 00′ 04″.494
Triangle 131.
21. Dockum 57° 01′ 55″.292
22. Drachten 83° 33′ 14″.515
23. Gröningen 39° 24′ 52″.397
Triangle 132.
24. Oosterwolde 81° 54′ 17″.447
25. Gröningen 31° 52′ 49″.094
26. Drachten 66° 12′ 57″.246

The consideration of these triangles shows that the twenty-seven angles directly provided by observation have thirteen necessary relationships among them, namely: two of the first kind, nine of the second kind, and two of the third kind. However, it is not necessary to write down all these equations here in their final form, because for the calculation we only need the quantities which were denoted in the general theory by  ,  ,  , etc.,  ,  ,  , etc. etc. Therefore, we can immediately write down the equations (13), which bring out these quantities. Instead of  ,  , etc., we will simply write here       etc.

In this way, the two equations of the first kind correspond to the following:

 

Next, we find, for the spheroidal excesses of the nine triangles:                   Then we have the first conditional equation of the second kind:

 

and so on for the others, and we have the following nine equations:

 

The conditional equations of the third kind are more easily expressed using logarithms. The first one is:

 

It seems unnecessary to develop the other one in finite form. To these two equations correspond the following, in which the coefficients refer to the seventh decimal place of Briggs's logarithms:

 

No reason leads us to attribute unequal weights to the various observations, so we will assume

 

Denoting the correlates of the condition equations, in the same order as these equations were written, by

                         

we determine them by the following equations:

 

We then deduce by elimination:

   

Finally, the most plausible errors are given by the formulas

 

and we obtain the following numerical values, to which we add, for comparison, the corrections adopted by de Krayenhof:

(Table: the most plausible corrections for the twenty-seven angles, with the corrections adopted by de Krayenhof alongside for comparison.)

The sum of the squares of our corrections is 97.8845; the mean error, as indicated by the 27 observed angles, is therefore

$$\sqrt{\frac{97.8845}{13}} = 2''.744.$$

The sum of the squares of the corrections by de Krayenhof is 341.4201.

24.

The triangles whose vertices, in the triangulation of Hanover, are placed at Falkenberg, Breithorn, Hauselberg, Wulfsode, and Wilsede, will provide us with a second example.

The following directions have been observed:[2]

At the station in Falkenberg.
0. Wilsede 187° 47′ 30″.311
1. Wulfsode 225° 09′ 39″.676
2. Hauselberg 266° 13′ 56″.239
3. Breithorn 274° 14′ 43″.634
At the station in Breithorn.
4. Falkenberg 094° 33′ 40″.755
5. Hauselberg 122° 51′ 23″.054
6. Wilsede 150° 18′ 35″.100
At the station in Hauselberg.
7. Falkenberg 086° 29′ 06″.872
8. Wilsede 154° 37′ 09″.624
9. Wulfsode 189° 02′ 56″.376
10. Breithorn 302° 47′ 37″.732
At the station in Wulfsode.
11. Hauselberg 009° 05′ 36″.593
12. Falkenberg 045° 27′ 33″.556
13. Wilsede 118° 44′ 13″.159
At the station in Wilsede.
14. Falkenberg 007° 51′ 01″.027
15. Wulfsode 298° 29′ 49″.519
16. Breithorn 330° 03′ 07″.392
17. Hauselberg 334° 25′ 26″.746

These observations allow us to form seven triangles.

Triangle I.
Falkenberg 008° 00′ 47″.395
Breithorn 028° 17′ 42″.299
Hauselberg 143° 41′ 29″.140
Triangle II.
Falkenberg 086° 27′ 13″.323
Breithorn 055° 44′ 54″.345
Wilsede 037° 47′ 53″.635
Triangle III.
Falkenberg 041° 04′ 16″.563
Hauselberg 102° 33′ 49″.504
Wulfsode 036° 21′ 56″.963
Triangle IV.
Falkenberg 078° 26′ 25″.928
Hauselberg 068° 08′ 02″.752
Wilsede 033° 25′ 34″.281
Triangle V.
Falkenberg 037° 22′ 09″.365
Wulfsode 073° 16′ 39″.603
Wilsede 069° 21′ 11″.508
Triangle VI.
Breithorn 027° 27′ 12″.046
Hauselberg 148° 10′ 28″.108
Wilsede 004° 22′ 19″.354
Triangle VII.
Hauselberg 034° 25′ 46″.752
Wulfsode 109° 38′ 36″.566
Wilsede 035° 55′ 37″.227

We have here seven conditional equations of the second kind (there is obviously no need to form any of the first kind); to form them, we must first find the spheroidal excesses of the seven triangles, and for this it is essential to know the length of one side. The side connecting Wilsede to Wulfsode is 22877.94 meters long. From this we conclude that the spheroidal excesses of the various triangles are:

I...  II...  III...  IV...  V...  VI...  VII... 
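The excess of a small triangle may be reckoned as its area divided by the square of the earth's radius (a modern sketch of ours, with an assumed value for that radius; the text's own excess values are not reproduced here). Triangle VII is used, in which the Wilsede-Wulfsode side is opposite the angle at Hauselberg:

```python
import math

# Sketch of the excess computation for triangle VII (assumed earth radius).

R = 6383000.0                # assumed local radius of the earth, in meters
side_ww = 22877.94           # Wilsede-Wulfsode, opposite the angle at Hauselberg

def rad(d, m, s):
    return math.radians(d + m / 60 + s / 3600)

hauselberg = rad(34, 25, 46.752)
wulfsode = rad(109, 38, 36.566)
wilsede = rad(35, 55, 37.227)

# law of sines: the side Wulfsode-Hauselberg is opposite the angle at Wilsede
side_wh = side_ww * math.sin(wilsede) / math.sin(hauselberg)

# the two sides meeting at Wulfsode enclose the angle observed there
area = 0.5 * side_ww * side_wh * math.sin(wulfsode)
excess_arcsec = area / R**2 * (180 / math.pi) * 3600
print(round(excess_arcsec, 3))   # roughly 1.3 seconds
```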

If we denote the angles that determine the directions indicated above by         etc., then the angles of the first triangle, marked with the same indices, will be

 

and the first condition equation is therefore

 

The remaining six triangles will yield six similar equations, but a little attention shows that these equations are not independent. Indeed, the second is identical to the sum of the first, fourth, and sixth; the sum of the third and fifth is identical to that of the fourth and seventh: for this reason we will ignore the second and fifth. Instead of writing the remaining equations in finite form, we will write here the corresponding equations (13), with symbols       etc. taking the place of       etc.:

 

We can obtain eight conditional equations of the third kind from the triangles of the system, to which end one may combine three of the four triangles I, II, IV, VI, or three of the triangles III, IV, V, VII; however, a little attention shows that it suffices to consider two belonging to each of these two systems of triangles, and those will encompass all the others. Thus, for the sixth conditional equation, we have

 

and for the seventh,

 

to which correspond the equations (13),

 

If we attribute the same certainty to the various directions, setting

 ,

and if we denote the correlates of the seven conditional equations by               then their determination will depend on the following equations:

 

From this, by elimination, we deduce

 

and the most probable errors are given by the formulas

 

from which we derive the following numerical values:

   

The sum of the squares of these errors is equal to   hence, the average error resulting from the 18 observed directions is

 

25.

To give an example of the last part of our theory, let us determine the precision with which the compensated observations determine the Falkenberg-Breithorn side, using the Wilsede-Wulfsode side. The function   by which it is expressed, in this case, is:

 

Its value, deduced from the corrected observations, is

 

Differentiation of this equation provides the following, with     etc. expressed in seconds,

 

From this we find that:

 

Taking the meter as the unit of length, the methods indicated above yield,

  or  

We conclude that the mean error to be feared in the value of the Falkenberg-Breithorn side is   (where   denotes the average error to be feared in the observed directions, this error being expressed in seconds), and consequently, if we adopt the value of   announced above, this mean error to be feared is  

Moreover, inspecting the system of triangles immediately shows that one can completely disregard the Hauselberg station without breaking the network that connects the other four points. However, it would not be good practice to discard the operations related to this point,[3] as they certainly contribute to increasing the precision of the whole. To show more clearly what increase in precision results from them, we will conclude by repeating the calculation after excluding all results related to the Hauselberg point. Of the eighteen directions mentioned above, eight then cease to be used, and the most plausible errors on those that remain are:

   

The value of the Falkenberg-Breithorn side then becomes   a result little different from that obtained earlier. But calculating the weight yields

  or  

and the mean error to be feared, in meters, is

 

We see that by adding the operations related to Hauselberg, the weight of the determination of the Falkenberg-Breithorn side is increased in the ratio of   to   that is, in the ratio of unity to  

  1. In the previous calculations, it was sufficient to use three letters from each series to make the law of the formulas clear; here, it seemed necessary to include a fourth to make the algorithm more evident. We have followed the series of letters                   in the natural way, with       However, lacking the required letters, we have followed the series       with   and the series       with  .
  2. The starting points to which the individual directions are referred are here regarded as arbitrary, although in reality they coincide with the meridian lines of the stations. The observations will be published in full at a later date; meanwhile, the figure can be found in the Astronomische Nachrichten, Vol. I, p. 441.
  3. The greater part of these observations had already been made before Breithorn's point was discovered and incorporated into the system.