Two component modified Lilliefors test for normality
Research background: Commonly known and used parametric tests, e.g. the Student, Behrens–Fisher, Snedecor, Bartlett, Cochran and Hartley tests, are applicable when there is evidence that the samples come from a normal general population. What makes things worse is that testers are not fully aware to what degree abnormality distorts the results of the parametric tests listed above and their like. So, it is no exaggeration to say that testing for normality (goodness-of-fit testing, GoFT) is a gate to proper parametric statistical reasoning. It seems that this gate opens too easily; in other words, the most popular goodness-of-fit tests are weaker than statisticians want them to be.
Purpose of the article: The main purpose of this paper is to put forward a GoFT that is, in particular circumstances, more powerful than the GoFTs used so far. The other goals are to define a similarity measure between an alternative distribution and the normal one, and to calculate the power of normality tests for a large set of alternatives. And, of course, to interest statisticians in using GoFTs in their practice.
Method: There are two ways to make a GoFT more powerful: an extensive and an intensive one. The extensive method consists in drawing large samples. The intensive method consists in extracting more information from small samples. In order to make the test method intensive, the test statistic, as distinct from all existing GoFTs, has two components. The first component (denoted by D) is the classic Kolmogorov/Lilliefors test statistic, i.e. the greatest absolute difference between the theoretical and empirical cumulative distribution functions. The second component is the order statistic (r) at which D_max^((r)) locates itself. Of course, D_max^((r)) is a conditional random variable with (r) being the condition. Large-scale Monte Carlo simulations provided data sufficient for an in-depth study of the properties of the distributions of the D_max^((r)) random variable.
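The two-component statistic described above can be sketched in a few lines: compute the Lilliefors-type distance between the empirical CDF and a normal CDF with parameters estimated from the sample, and record the order statistic at which that maximum is attained. This is an illustrative sketch, not the paper's implementation; the function names are assumptions.

```python
import math

def normal_cdf(x, mu, sigma):
    """CDF of N(mu, sigma^2), expressed via the error function."""
    return 0.5 * (1.0 + math.erf((x - mu) / (sigma * math.sqrt(2.0))))

def two_component_statistic(sample):
    """Return (D, r): the greatest |ECDF - F| over the order statistics,
    and the 1-based index r of the order statistic where it occurs."""
    n = len(sample)
    xs = sorted(sample)
    # Parameters are estimated from the sample, as in the Lilliefors test.
    mu = sum(xs) / n
    sigma = math.sqrt(sum((x - mu) ** 2 for x in xs) / (n - 1))
    best_d, best_r = -1.0, 0
    for i, x in enumerate(xs, start=1):
        f = normal_cdf(x, mu, sigma)
        # The ECDF jumps at each order statistic, so both sides of the
        # step must be checked for the largest deviation.
        d = max(i / n - f, f - (i - 1) / n)
        if d > best_d:
            best_d, best_r = d, i
    return best_d, best_r
```

Treating the pair (D, r) rather than D alone is what lets the test extract extra information from a small sample.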
Findings & value-added: The simulation study shows that the two-component modified Lilliefors test for normality is the most powerful for some types of alternatives, especially for symmetrical, unimodal and bimodal distributions with positive excess kurtosis, and for symmetrical and unimodal distributions with negative excess kurtosis and small sample sizes. Judging by the values of skewness and excess kurtosis, and by the defined similarity measure between the normal distribution and an alternative, the alternative distributions considered are close to the normal distribution. Numerous examples of real data show the usefulness of the proposed GoFT.
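The power figures reported in such simulation studies are obtained by a standard Monte Carlo scheme: draw many samples from an alternative distribution, compute the test statistic for each, and record the fraction that exceeds the critical value. A self-contained sketch of that scheme, using the classic Lilliefors statistic and an illustrative critical value (the names and the 5% critical value for n = 20 are assumptions, not the paper's figures):

```python
import math
import random
import statistics

def lilliefors_d(sample):
    """Classic Lilliefors statistic: max |ECDF - F| with estimated mu, sigma."""
    n = len(sample)
    xs = sorted(sample)
    mu = statistics.mean(xs)
    sigma = statistics.stdev(xs)
    d = 0.0
    for i, x in enumerate(xs, start=1):
        f = 0.5 * (1.0 + math.erf((x - mu) / (sigma * math.sqrt(2.0))))
        d = max(d, i / n - f, f - (i - 1) / n)
    return d

def estimate_power(draw_alternative, n, crit, reps=2000, seed=1):
    """Fraction of 'reps' samples of size n from the alternative
    whose statistic exceeds the critical value 'crit'."""
    rng = random.Random(seed)
    rejections = sum(
        lilliefors_d([draw_alternative(rng) for _ in range(n)]) > crit
        for _ in range(reps)
    )
    return rejections / reps

# Example: power against a Laplace (heavy-tailed, positive excess kurtosis)
# alternative at n = 20, with an illustrative 5% critical value.
power = estimate_power(lambda rng: rng.expovariate(1.0) * rng.choice([-1, 1]),
                       n=20, crit=0.190)
```

The same scheme, run once per alternative distribution and sample size, yields the power tables on which comparisons such as those in the paper rest.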
Ahmad, F., & Khan, R. A. (2015). A power comparison of various normality tests. Pakistan Journal of Statistics and Operation Research, 11(3), 331–345. doi: 10.18187/pjsor.v11i3.845.
Alizadeh Noughabi, H., & Arghami, N. R. (2011). Monte Carlo comparison of seven normality tests. Journal of Statistical Computation and Simulation, 81, 965–972. doi: 10.1080/00949650903580047.
Anderson, T. W., & Darling, D. A. (1952). Asymptotic theory of certain "goodness-of-fit" criteria based on stochastic processes. Annals of Mathematical Statistics, 23, 193–212. doi: 10.1214/aoms/1177729437.
Blom, G. (1958). Statistical estimates and transformed Beta variables. New York: Wiley.
Cramér, H. (1928). On the composition of elementary errors. Scandinavian Actuarial Journal, 1, 13–74. doi: 10.1080/03461238.1928.10416862.
D'Agostino, R. B., & Stephens, M. A. (1986). Goodness-of-fit techniques. New York: Marcel Dekker Inc.
Esteban, M. D., Castellanos, M. E., Morales, D., & Vajda, I. (2001). Monte Carlo comparison of four normality tests using different entropy estimates. Communications in Statistics - Simulation and Computation, 30, 761–785. doi: 10.1081/SAC-100107780.
Feltz, C. J. (2002). Customizing generalizations of the Kolmogorov-Smirnov goodness-of-fit test. Journal of Statistical Computation and Simulation, 72(2), 179–186. doi: 10.1080/00949650212143.
Filliben, J. J. (1975). The probability plot correlation coefficient test for normality. Technometrics, 17(1), 111–117. doi: 10.2307/1268008.
Gan, F. F., & Koehler, K. J. (1990). Goodness of fit tests based on P-P probability plots. Technometrics, 32, 289–303. doi: 10.2307/1269106.
Harter, H. L., Khamis, H. J., & Lamb, R. E. (1984). Modified Kolmogorov-Smirnov tests of goodness of fit. Communications in Statistics - Simulation and Computation, 13(3), 293–323. doi: 10.1080/03610918408812378.
Janssen, A. (2000). Global power functions of goodness-of-fit tests. Annals of Statistics, 28, 239–253. doi: 10.1214/aos/1016120371.
Khamis, H. J. (1990). The δ-corrected Kolmogorov-Smirnov test for goodness-of-fit. Journal of Statistical Planning and Inference, 24, 317–335. doi: 10.1016/0378-3758(90)90051-U.
Khamis, H. J. (1992). The δ-corrected Kolmogorov-Smirnov test with estimated parameters. Journal of Nonparametric Statistics, 2, 17–27. doi: 10.1080/10485259208832539.
Khamis, H. J. (1993). A comparative study of the δ-corrected Kolmogorov-Smirnov test. Journal of Applied Statistics, 20, 401–421. doi: 10.1080/02664769300000040.
Krauczi, E. (2009). A study of the quantile correlation test of normality. Test, 18(1), 156–165. doi: 10.1007/s11749-007-0074-6.
Kundu, D., & Raqab, M. Z. (2009). Estimation of R = P(Y < X) for three parameter Weibull distribution. Statistics and Probability Letters, 79(17), 1839–1846. doi: 10.1016/j.spl.2009.05.026.
Lilliefors, H. W. (1967). On the Kolmogorov-Smirnov test for normality with mean and variance unknown. Journal of the American Statistical Association, 62(318), 399–402. doi: 10.1080/01621459.1967.10482916.
Malachov, A. N. (1978). A cumulant analysis of random non-Gaussian processes and their transformations. Moscow: Soviet Radio.
Marange, C. S., & Qin, Y. (2019). A new empirical likelihood ratio goodness of fit test for normality based on moment constraints. Communications in Statistics - Simulation and Computation. Advance online publication. doi: 10.1080/03610918.2019.1586923.
Nofal, Z. M., Afify, A. Z., Yousof, H. M., & Cordeiro, G. M. (2017). The generalized transmuted-G family of distributions. Communications in Statistics - Theory and Methods, 46(8), 4119–4136. doi: 10.1080/03610926.2015.1078478.
Razali, N. M., & Wah, Y. B. (2011). Power comparisons of Shapiro-Wilk, Kolmogorov-Smirnov, Lilliefors and Anderson-Darling tests. Journal of Statistical Modeling and Analytics, 2(1), 21–33.
Romão, X., Delgado, R., & Costa, A. (2010). An empirical power comparison of univariate goodness-of-fit tests of normality. Journal of Statistical Computation and Simulation, 80, 545–591. doi: 10.1080/00949650902740824.
Shapiro, S. S., & Francia, R. S. (1972). An approximate analysis of variance test for normality. Journal of the American Statistical Association, 67, 215–216. doi: 10.1080/01621459.1972.10481232.
Shapiro, S. S., & Wilk, M. B. (1965). An analysis of variance test for normality. Biometrika, 52(3/4), 591–611. doi: 10.2307/2333709.
Smirnov, N. (1948). Table for estimating the goodness of fit of empirical distributions. Annals of Mathematical Statistics, 19, 279–281. doi: 10.1214/aoms/1177730256.
Sulewski, P. (2019a). Modification of Anderson-Darling goodness-of-fit test for normality. Afinidad, 76(588).
Sulewski, P. (2019b). Modified Lilliefors goodness-of-fit test for normality. Communications in Statistics - Simulation and Computation. Advance online publication. doi: 10.1080/03610918.2019.1664580.
Torabi, H., Montazeri, N. H., & Grané, A. (2016). A test of normality based on the empirical distribution function. SORT, 40(1), 55–88.
Yap, B. W., & Sim, C. H. (2011). Comparisons of various types of normality tests. Journal of Statistical Computation and Simulation, 81, 2141–2155. doi: 10.1080/00949655.2010.520163.
Yazici, B., & Yolacan, S. A. (2007). Comparison of various tests of normality. Journal of Statistical Computation and Simulation, 77(2), 175–183. doi: 10.1080/10629360600678310.
Copyright (c) 2021 Equilibrium. Quarterly Journal of Economics and Economic Policy
This work is licensed under a Creative Commons Attribution 4.0 International License.