Error analysis

The variances of the hyperplane parameters can be found by evaluating
\begin{displaymath}
\sigma^2(a_j) = \sum_i \sum_{k=1}^{m} \sigma_{ik}^2
\left( \frac{\partial a_j}{\partial Y_{ik}} \right)^2
\end{displaymath} (19)

(This equation assumes the data $Y_{ik}$ are uncorrelated.) Because $W_i$ and $y_{ik}$ themselves depend on $a_j$, the dependence of $a_j$ on $Y_{ik}$ is not linear, contrary to what Equation (13) might suggest, and exact evaluation of this expression is very complicated. The original version of this paper contained an error in the result of this calculation, and a corrected calculation has not yet been done. To first order, however, ignoring the nonlinearity, one obtains the approximation
\begin{displaymath}
\sigma^2(a_j) \approx M_{jj}^{-1}
\end{displaymath} (20)

that is, the variances of the parameters are given simply by the diagonal elements of the inverse of the normal matrix defined in Equation (13), and the off-diagonal elements of that inverse are the covariances of the parameters. For well-behaved data, such as those used for illustration by York (1966), this approximation is good to within a few percent.
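The use of the inverse normal matrix as an approximate covariance matrix can be illustrated with a minimal numerical sketch. The example below uses an ordinary weighted straight-line fit with errors in $y$ only, which is simpler than the paper's hyperplane fit with errors in all coordinates; the data, the value of $\sigma$, and the variable names are all illustrative assumptions, not taken from the paper.

```python
import numpy as np

# Illustrative sketch only: ordinary weighted straight-line fit
# y = a0 + a1*x with errors in y alone (the paper's method also
# handles errors in the other coordinates, which this does not).
rng = np.random.default_rng(0)
x = np.linspace(0.0, 10.0, 50)
sigma = 0.5                               # assumed standard error of each y_i
y = 2.0 + 3.0 * x + rng.normal(0.0, sigma, x.size)

# Design matrix and weight matrix
A = np.column_stack([np.ones_like(x), x])
W = np.diag(np.full(x.size, 1.0 / sigma**2))

# Normal matrix (the analogue of Equation (13) for this simple case)
M = A.T @ W @ A
a = np.linalg.solve(M, A.T @ W @ y)       # fitted parameters

# Approximation (20): parameter variances are the diagonal of M^{-1};
# the off-diagonal elements are the covariances.
M_inv = np.linalg.inv(M)
var_a = np.diag(M_inv)
print("parameters:", a)
print("standard errors:", np.sqrt(var_a))
```

For this linear case the approximation is exact; in the paper's nonlinear setting it holds only to first order, as noted above.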

If the experimenter has only relative uncertainties for the measured quantities $Y_{ik}$, rather than standard errors $\sigma_{ik}$, the fit itself is unchanged, but the variances of the fitted parameters are given by expression (20) multiplied by $S/\nu$, where $\nu = n - m$ is the number of degrees of freedom of the problem. If, on the other hand, the errors $\sigma_{ik}$ are known a priori, the goodness of fit can be judged from the value of $S/\nu$, which should be close to unity for normally distributed errors. This constitutes a test of the $m$-component hypothesis as set forth in the introduction.
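The rescaling by $S/\nu$ can be sketched in the same simplified setting as above (errors in $y$ only; all data and names are illustrative assumptions). With only relative weights available, the covariance matrix $M^{-1}$ is multiplied by the reduced sum of squares:

```python
import numpy as np

# Illustrative sketch: rescaling the covariance by S/nu when only
# relative uncertainties of the data are known.
rng = np.random.default_rng(1)
x = np.linspace(0.0, 10.0, 40)
y = 1.0 - 0.5 * x + rng.normal(0.0, 0.3, x.size)

A = np.column_stack([np.ones_like(x), x])
w = np.ones(x.size)                   # relative weights only; true sigma unknown
M = A.T @ (w[:, None] * A)            # normal matrix
a = np.linalg.solve(M, A.T @ (w * y))

resid = y - A @ a
S = np.sum(w * resid**2)              # weighted sum of squared residuals
nu = x.size - A.shape[1]              # degrees of freedom nu = n - m

# Expression (20) scaled by S/nu gives the parameter covariances
cov = np.linalg.inv(M) * (S / nu)
print("S/nu =", S / nu)
print("standard errors:", np.sqrt(np.diag(cov)))
```

If the absolute errors were known instead, one would weight by $1/\sigma_{ik}^2$, skip the rescaling, and check that $S/\nu$ is near unity as the goodness-of-fit test described above.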


Robert Moniot 2002-10-20