Event

Pre-Clinical SIG Webinar: Multi-Group Comparison of Means for Single-Factor Experiment with Small Sample Size Under Normality and Heteroscedasticity

Date: Tuesday 3rd February 2026
Time: 14:00 - 15:00 GMT | 9:00 - 10:00 EST (US)
Location: Online via Zoom
Speaker: Weiliang Qiu (Sanofi)

Who is this event intended for?: Statisticians and scientists involved in, or interested in, the comparison of means across multiple groups in single-factor experiments with small sample sizes under normality and heteroscedasticity.

What is the benefit of attending?:
Attendees will be able to choose the most appropriate method for analysing heteroscedastic data.

Cost

This webinar is free to both Members of PSI and Non-Members.

Registration

To register for this event, please click here

Overview

The single-factor experimental design is commonly used to compare multiple groups in non-clinical studies, where group sizes are generally small and groups usually have different variances. The classical F test tends to inflate the type I error rate in this scenario, and many alternative tests have been proposed in the literature to handle heterogeneity of variance. Several papers have compared these alternative tests, among which is the heterogeneous-variance mixed-effects model with the Satterthwaite approximation of the degrees of freedom. The mixed-effects model approach has at least two advantages over the other tests: (1) it allows different group variance structures (e.g., homogeneous or heterogeneous variance); and (2) it supports model diagnostics via residual analysis. In this presentation, we evaluate both the type I error rates and the power of the 13 tests investigated in Pham et al. (2020), spanning ANOVA-based tests, structured means modeling (SMM), and mixed-effects models, under diverse conditions, including (un)equal variances, (non-)normal distributions, and (un)balanced designs. Two additional mixed-effects models (a homogeneous-variance mixed-effects model and an adaptive mixed-effects model) are introduced and assessed alongside the 13 tests. We also consider the Kenward-Roger approximation of the degrees of freedom for the three mixed-effects models, which generally offers more reliable type I error rates than the Satterthwaite approximation. Finally, recommendations for analysing data from single-factor experiments will be given.
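As a rough illustration of the problem the webinar addresses, the minimal sketch below (not part of the webinar material) simulates small, unequal-variance normal groups under the null hypothesis of equal means and compares the empirical type I error rate of the classical F test with that of Welch's heteroscedasticity-robust one-way test. The group sizes, standard deviations, and number of replications are arbitrary assumptions chosen only for illustration.

```python
# Minimal sketch (not from the webinar): empirical type I error of the classical
# one-way ANOVA F test vs. Welch's heteroscedasticity-robust test.
# Group sizes, standard deviations, and replication count are assumptions.
import numpy as np
from scipy import stats

rng = np.random.default_rng(2026)

# Unbalanced design: the smaller groups have the larger variances,
# which is the setting where the classical F test is known to be liberal.
n = [4, 4, 8, 8]                # small group sizes (assumed)
sd = [3.0, 3.0, 1.0, 1.0]       # unequal standard deviations (assumed)
alpha = 0.05
n_sim = 20_000

def welch_anova_pvalue(groups):
    """Welch's (1951) one-way test for equal means under unequal variances."""
    k = len(groups)
    ni = np.array([len(g) for g in groups], dtype=float)
    means = np.array([np.mean(g) for g in groups])
    variances = np.array([np.var(g, ddof=1) for g in groups])
    w = ni / variances                           # precision weights
    grand_mean = np.sum(w * means) / np.sum(w)   # weighted grand mean
    numerator = np.sum(w * (means - grand_mean) ** 2) / (k - 1)
    tmp = np.sum((1 - w / np.sum(w)) ** 2 / (ni - 1))
    denominator = 1 + 2 * (k - 2) / (k ** 2 - 1) * tmp
    f_stat = numerator / denominator
    df1 = k - 1
    df2 = (k ** 2 - 1) / (3 * tmp)
    return stats.f.sf(f_stat, df1, df2)

reject_classical, reject_welch = 0, 0
for _ in range(n_sim):
    # Null hypothesis true: all groups share the same mean; only variances differ.
    groups = [rng.normal(0.0, s, size=m) for m, s in zip(n, sd)]
    reject_classical += stats.f_oneway(*groups).pvalue < alpha
    reject_welch += welch_anova_pvalue(groups) < alpha

print(f"Classical F test empirical type I error: {reject_classical / n_sim:.3f}")
print(f"Welch's test empirical type I error:     {reject_welch / n_sim:.3f}")
```

Under these assumed settings one would expect the classical F test to reject well above the nominal 5% level while Welch's test stays much closer to it; the webinar goes considerably further, covering SMM and mixed-effects alternatives with Satterthwaite or Kenward-Roger degrees of freedom.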

Speaker details

Speaker: Weiliang Qiu, Sanofi

Biography:

Weiliang Qiu is a Non-Clinical Efficacy and Safety Statistician Expert Leader at Sanofi, passionate about leveraging statistical expertise to improve patients’ lives. He earned his Ph.D. in Statistics from the University of British Columbia in 2004 and spent 14 years at Brigham and Women’s Hospital/Harvard Medical School, contributing to impactful research.

Since joining Sanofi’s Non-Clinical Efficacy and Safety (NCES) team in 2018, Weiliang has provided statistical support for non-clinical studies across diverse therapeutic areas, including translational sciences, rare and neurological diseases, immunology and inflammation, immuno-oncology, and gene therapy. In addition to supporting these studies, he collaborates closely with teammates in NCES to design and implement innovative statistical methodologies that enhance data analysis and drive scientific insights.

