Amanda Kay Montoya
I am an Associate Professor at UCLA in the Department of Psychology - Quantitative Area. I received my PhD in Quantitative Psychology from the Ohio State University in 2018. My primary adviser was Dr. Andrew Hayes. I completed my M.A. in Psychology and my M.S. in Statistics at Ohio State in 2016. I graduated from the University of Washington with a B.S. in Psychology and a minor in Mathematics in 2013. My research interests include mediation, moderation, conditional process models, structural equation modeling, and meta-science.
UPCOMING EVENTS
Registered Reports for Simulation Studies
Modern Modeling Methods
Storrs, CT
June 24-26, 2024
Mediation and Moderation Analysis for Simple Within-Subject Designs
Statistical Horizons
Virtual
Sep 12-13, 2024
New Event Coming Soon
MY LATEST RESEARCH
Published in Frontiers in Psychology, led by QRClab graduate student Tristan Tibbe, we introduce two bias-corrected bootstrap confidence interval methods for the indirect effect, describe how each relates to the assumptions about bias made by the current bias-corrected bootstrap confidence interval, and compare their performance to existing methods used in mediation analysis.
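For readers new to these methods, here is a minimal R sketch of the standard bias-corrected bootstrap confidence interval that this work builds on (not the new methods themselves); the simulated data and variable names are illustrative only.

set.seed(1)
n <- 200
x <- rnorm(n)                       # independent variable
m <- 0.4 * x + rnorm(n)             # mediator
y <- 0.3 * m + 0.2 * x + rnorm(n)   # outcome
dat <- data.frame(x, m, y)
ab_hat <- function(d) {             # indirect effect a*b from two regressions
  a <- coef(lm(m ~ x, data = d))["x"]
  b <- coef(lm(y ~ m + x, data = d))["m"]
  unname(a * b)
}
est <- ab_hat(dat)
boot <- replicate(5000, ab_hat(dat[sample(n, replace = TRUE), ]))
z0 <- qnorm(mean(boot < est))       # bias-correction constant
quantile(boot, pnorm(2 * z0 + qnorm(c(.025, .975))))  # bias-corrected 95% CI
quantile(boot, c(.025, .975))       # percentile 95% CI, for comparison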
At Advances in Methods and Practices in Psychological Science, led by QRClab graduate student Jessica Fossum, we compare power estimates from six commonly used tests of the indirect effect in mediation analysis, concluding that power estimates from the joint significance test, Monte Carlo confidence interval, and percentile bootstrap confidence interval are similar enough that bootstrapping is not needed for power analysis.
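To illustrate why this matters for power analysis, here is a hedged R sketch of the Monte Carlo confidence interval: it needs only the coefficient estimates and standard errors from the two mediation regressions, so each simulated dataset can be tested without any bootstrap refitting. The numbers below are made up for illustration.

set.seed(2)
a_hat <- 0.40; se_a <- 0.07     # estimate and SE for a (from m ~ x)
b_hat <- 0.30; se_b <- 0.06     # estimate and SE for b (from y ~ m + x)
draws <- rnorm(10000, a_hat, se_a) * rnorm(10000, b_hat, se_b)
quantile(draws, c(.025, .975))  # 95% Monte Carlo CI for a*b
# If the interval excludes zero, the indirect effect is detected; repeating
# this across many simulated datasets yields a power estimate without bootstrapping.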
Published in Collabra: Psychology, in collaboration with Dr. William Leo Donald Krenzer (Duke University) and QRClab graduate student Jessica Fossum, we explore how registered reports have been implemented at journals, the typical time to publication of a journal's first registered report, and common barriers to adopting registered reports.
Published in Multivariate Behavioral Research, I discuss three factors researchers should consider when selecting a design for mediation analysis: validity, causality, and power. Depending on the circumstances, between-subject designs may have stronger validity than within-subject designs, and there are similar trade-offs for causality. In most cases within-subject designs have greater power to detect the indirect effect than a between-subject design with the same number of participants, but this is not true in all cases. I provide an R script for conducting power analysis in within-subject designs; a simplified sketch of that style of simulation appears below.
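As a loose companion to that script, the sketch below estimates power by simulation for a two-condition within-subject design using difference scores, testing the indirect effect with a Monte Carlo confidence interval. It simplifies the published model (for example, it omits the centered sum of the mediator measurements), and the function name power_ws and all parameter values are hypothetical.

power_ws <- function(n, a = 0.4, b = 0.3, reps = 1000, mc = 5000) {
  hits <- replicate(reps, {
    mdiff <- a + rnorm(n)                # condition effect on the mediator
    ydiff <- 0.2 + b * mdiff + rnorm(n)  # direct effect plus mediated effect
    a_hat <- mean(mdiff); se_a <- sd(mdiff) / sqrt(n)
    fit <- summary(lm(ydiff ~ mdiff))
    b_hat <- fit$coefficients["mdiff", "Estimate"]
    se_b  <- fit$coefficients["mdiff", "Std. Error"]
    ci <- quantile(rnorm(mc, a_hat, se_a) * rnorm(mc, b_hat, se_b), c(.025, .975))
    ci[1] > 0 || ci[2] < 0               # does the CI exclude zero?
  })
  mean(hits)                             # proportion of detections = power
}
set.seed(3)
power_ws(n = 50)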