
Post-hoc and multiple comparison test – An overview with SAS and R Statistical Package

Editor IJSMI


Analysis of Variance (ANOVA) is a basic but essential tool in statistics. Its simplest form is one-way ANOVA, in which the equality of the treatment means is tested. If the means are not all equal, the next step is to determine which means differ from one another; post-hoc and multiple comparison tests are used to identify the pairs of treatment means that differ. This paper begins with an overview of post-hoc and multiple comparison testing and then discusses the various post-hoc multiple comparison tests, their applicability, strengths and limitations. The paper also provides Statistical Analysis System (SAS) and R statistical package code to carry out the various post-hoc and multiple comparison tests.
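As a minimal illustration of the workflow the abstract describes (one-way ANOVA followed by pairwise comparisons), the sketch below uses base R only. The data are simulated and purely hypothetical; the paper's own SAS and R code should be consulted for the full set of procedures.

```r
# Hypothetical one-way layout: 3 treatment groups of 10 observations each.
set.seed(123)
weight <- c(rnorm(10, mean = 5), rnorm(10, mean = 6), rnorm(10, mean = 5.5))
group  <- factor(rep(c("A", "B", "C"), each = 10))

# Step 1: one-way ANOVA testing equality of the three group means.
fit <- aov(weight ~ group)
summary(fit)

# Step 2: if the overall F test rejects, examine which pairs differ.
# Tukey's HSD gives all pairwise comparisons with the family-wise
# error rate controlled at the stated confidence level.
TukeyHSD(fit, conf.level = 0.95)

# Unadjusted pairwise t-tests (Fisher's LSD style) versus a
# Bonferroni-adjusted version, for comparison.
pairwise.t.test(weight, group, p.adjust.method = "none")
pairwise.t.test(weight, group, p.adjust.method = "bonferroni")
```

With three groups, `TukeyHSD` reports the three pairwise differences (B-A, C-A, C-B) together with simultaneous confidence intervals and adjusted p-values.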


Keywords: Analysis of Variance; ANOVA; post-hoc; multiple comparison; SAS; R package




Toothaker, L. E. (1993). Multiple comparison procedures (No. 89). Sage.

Saville, D. J. (1990). Multiple comparison procedures: the practical solution. The American Statistician, 44(2), 174-180.

Kim, H. Y. (2015). Statistical notes for clinical researchers: post-hoc multiple comparisons. Restorative Dentistry & Endodontics, 40(2), 172-176.

Ruxton, G. D., & Beauchamp, G. (2008). Time for some a priori thinking about post hoc testing. Behavioral Ecology, 19(3), 690-693.

Cabral, H. J. (2008). Multiple comparisons procedures. Circulation, 117(5), 698-701.

Brown, A. M. (2005). A new software for carrying out one-way ANOVA post hoc tests. Computer Methods and Programs in Biomedicine, 79(1), 89-95.


Abdi, H., & Williams, L. J. (2010). Tukey’s honestly significant difference (HSD) test. Encyclopedia of Research Design. Thousand Oaks, CA: Sage, 1-5.

Games, P. A., & Howell, J. F. (1976). Pairwise multiple comparison procedures with unequal n’s and/or variances: a Monte Carlo study. Journal of Educational and Behavioral Statistics, 1(2), 113-125.

Abdi, H., & Williams, L. J. (2010). Newman-Keuls test and Tukey test. Encyclopedia of Research Design. Thousand Oaks, CA: Sage, 1-11.

Ryan, T. H. (1960). Significance tests for multiple comparisons of proportions, variances, and other statistics. Psychological Bulletin, 57(4), 318.

Williams, L. J., & Abdi, H. (2010). Fisher's least significant difference (LSD) test. Encyclopedia of Research Design, 1-5.

Richter, S. J., & McCann, M. H. (2012). Using the Tukey–Kramer omnibus test in the Hayter–Fisher procedure. British Journal of Mathematical and Statistical Psychology, 65(3), 499-510.

Tamhane, A. C. (1979). A comparison of procedures for multiple comparisons of means with unequal variances. Journal of the American Statistical Association, 74(366a), 471-480.

Gabriel, K. R. (1969). Simultaneous test procedures – some theory of multiple comparisons. The Annals of Mathematical Statistics, 224-250.

Scheffe, H. (1999). The analysis of variance (Vol. 72). John Wiley & Sons.

Duncan, D. B. (1955). Multiple range and multiple F tests. Biometrics, 11(1), 1-42.

SAS and all other SAS Institute Inc. product or service names are registered trademarks or trademarks of SAS Institute Inc. in the USA and other countries. ® indicates USA registration.

DOI: http://dx.doi.org/10.3000/ijsmi.v1i1.4

DOI (PDF): http://dx.doi.org/10.3000/ijsmi.v1i1.4.g5