
The exact amount of \(t\)-ness that the normal model can tolerate. (English) Zbl 0801.62035

Summary: Suppose that the normal model is used for data \(Y_1,\dots,Y_n\), but that the true distribution is a \(t\)-distribution with location and scale parameters \(\xi\) and \(\sigma\) and \(m\) degrees of freedom. The normal model corresponds to \(m = \infty\). Using a local asymptotic framework where \(m\) is allowed to increase with \(n\), two classes of estimands are identified. One small class, which in particular contains the functions of \(\xi\) alone, is affected by \(t\)-ness only to the second order, and maximum likelihood estimation in the two- or three-parameter models becomes equivalent. For all other estimands, it is shown that if \(m \geq 1.458 \sqrt{n}\), then maximum likelihood estimation using the incorrect normal model is still more precise than using the correct three-parameter model. This is further shown to be true in regression models with \(t\)-distributed residuals.
We also propose and analyze compromise estimators that in various ways interpolate between the normal and the nonnormal models. A separate section extends the \(t\)-ness results to general normal scale mixtures, in which case the tolerance radius around the normal error distribution takes the form of an upper bound \(0.3429 \sqrt{n}\) for the variance of the scale mixture distribution.
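As a simple numerical illustration of the two thresholds quoted above (a sketch for orientation only, not from the paper; the function names are ours), one can tabulate, for a few sample sizes \(n\), the smallest degrees of freedom \(m = 1.458\sqrt{n}\) for which the misspecified normal model remains more precise, and the upper bound \(0.3429\sqrt{n}\) on the scale-mixture variance:

```python
import math

def min_tolerable_df(n):
    """Smallest degrees of freedom m satisfying the summary's
    threshold m >= 1.458 * sqrt(n), below which the correct
    three-parameter t-model becomes the more precise choice."""
    return 1.458 * math.sqrt(n)

def max_mixture_variance(n):
    """Upper bound 0.3429 * sqrt(n) on the variance of the scale
    mixture distribution, per the tolerance-radius result for
    general normal scale mixtures."""
    return 0.3429 * math.sqrt(n)

# Tabulate both bounds for a few sample sizes.
for n in (100, 400, 2500):
    print(n, round(min_tolerable_df(n), 2), round(max_mixture_variance(n), 2))
```

Note that both bounds grow like \(\sqrt{n}\): with more data, the normal model tolerates less \(t\)-ness before the fully parametric alternative pays off.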
Proving our results requires somewhat nonstandard “corner asymptotics”, because behavior of estimators must be studied when the crucial parameter \(\gamma = 1/m\) is close to 0, which is not an inner point of the parameter space, and the maximum likelihood estimator of \(m\) is equal to \(\infty\) with positive probability.

MSC:

62F35 Robustness and adaptive procedures (parametric inference)
62F10 Point estimation
62J05 Linear regression; mixed models