The normal distribution is the most widely used distribution in statistics, dating back to Karl F. Gauss. It appears in many branches of statistics; testing for normality, however, is not well understood. Which deviations from theoretical normality are still acceptable for a given statistical procedure? This contribution aims at a better understanding of such questions. In particular, we study how strongly violations of the ANOVA prerequisites affect the underlying inference. Clearly, one should establish, for a given setup, a proper notion of robustness under which the statistical analysis remains reliable. We also study the influence of outliers in the dataset, with particular focus on the trade-off between power and robustness.
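As a minimal illustration of the kind of question studied here (not taken from the paper itself), one can estimate by Monte Carlo simulation how the type I error rate of one-way ANOVA behaves when the normality assumption is violated; the distributions, sample sizes, and nominal level below are arbitrary choices for the sketch.

```python
# Sketch: empirical rejection rate of one-way ANOVA under a true null,
# comparing normal errors (assumptions hold) with skewed, non-normal errors.
# If the test is robust, the rate should stay close to the nominal alpha.
import numpy as np
from scipy.stats import f_oneway

rng = np.random.default_rng(0)
alpha, n_groups, n_per_group, n_sim = 0.05, 3, 20, 5000

def rejection_rate(sampler):
    """Share of simulations in which ANOVA rejects the (true) null of equal means."""
    rejections = 0
    for _ in range(n_sim):
        groups = [sampler(n_per_group) for _ in range(n_groups)]
        _, p = f_oneway(*groups)
        rejections += p < alpha
    return rejections / n_sim

# Assumptions satisfied vs. violated (skewed errors, centered to mean zero).
print("normal errors     :", rejection_rate(lambda n: rng.normal(size=n)))
print("exponential errors:", rejection_rate(lambda n: rng.exponential(size=n) - 1.0))
```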
