ML Researcher
LLM Reliability · Uncertainty Quantification · Sequential Inference
Builder-researcher focused on reliability, calibration, and robustness in large language models. I design sequential inference systems that provide statistical guarantees on model outputs, motivated by keeping AI systems trustworthy as they scale.
Designing inference systems with statistical guarantees, early stopping, and favorable cost–accuracy tradeoffs
Producing reliable, well-calibrated model outputs through principled uncertainty estimation
Stress-testing models via perturbations, distribution shift, and adversarial inputs
Adaptive computation using KV caching and low-cost branching strategies
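The early-stopping theme above can be illustrated with a minimal sketch: sample answers one at a time and stop as soon as the leading answer has a statistically clear majority. Everything here is an assumption for illustration, not the actual system: `sample_answer` is a hypothetical stand-in for an LLM call, and the Wilson lower bound against a 0.5 threshold is one simple stopping rule among many.

```python
import math
from collections import Counter

def sequential_vote(sample_answer, max_samples=16):
    """Sample answers one at a time; stop early once the leading
    answer's Wilson lower confidence bound clears a majority (0.5)."""
    z = 1.96  # ~95% two-sided normal quantile, fixed for simplicity
    counts = Counter()
    for n in range(1, max_samples + 1):
        counts[sample_answer()] += 1          # one more (hypothetical) model call
        top, k = counts.most_common(1)[0]
        p = k / n                             # leader's empirical vote share
        denom = 1 + z * z / n
        center = p + z * z / (2 * n)
        margin = z * math.sqrt(p * (1 - p) / n + z * z / (4 * n * n))
        if (center - margin) / denom > 0.5:   # Wilson lower bound > 1/2
            return top, n                     # early stop: clear majority
    return counts.most_common(1)[0][0], max_samples
```

With a sampler that always agrees with itself, the bound clears 0.5 after four samples (n must exceed z² ≈ 3.84), so most of the 16-call budget is saved on easy inputs.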
Working Paper
Built a sequential LLM inference system that adaptively allocates compute based on confidence. Demonstrated calibrated uncertainty estimates, statistical early stopping, and abstention criteria for recognizing when the model should decline to answer. Evaluated across MMLU, ARC-Challenge, HellaSwag, and CommonsenseQA.
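"Calibrated uncertainty estimates" are commonly checked with expected calibration error (ECE), which measures how far stated confidence drifts from empirical accuracy. This is a minimal sketch of the standard equal-width-bin ECE, offered as illustration only, not the paper's exact evaluation protocol.

```python
def expected_calibration_error(confidences, correct, n_bins=10):
    """ECE: partition predictions into equal-width confidence bins and
    average |mean confidence - empirical accuracy|, weighted by bin size."""
    n = len(confidences)
    ece = 0.0
    for b in range(n_bins):
        lo, hi = b / n_bins, (b + 1) / n_bins
        # half-open bins (lo, hi]; the first bin also catches c == 0
        idx = [i for i, c in enumerate(confidences)
               if (lo < c <= hi) or (b == 0 and c == 0)]
        if not idx:
            continue
        avg_conf = sum(confidences[i] for i in idx) / len(idx)
        accuracy = sum(correct[i] for i in idx) / len(idx)
        ece += (len(idx) / n) * abs(avg_conf - accuracy)
    return ece
```

For example, two predictions at 0.95 confidence (both correct) and two at 0.55 (one correct) give an ECE of 0.05: each bin is off by 0.05 and holds half the data.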
ACL Findings (First Author)
Designed a scalable data filtering pipeline over 11T tokens, showing that data quality dominates scale. Achieved consistent multi-point gains on MMLU with smaller, cleaner corpora.
Capital One
Zhai Lab, UIUC
National Center for Supercomputing Applications (NCSA)
Data Science Specialization
University of Illinois Urbana-Champaign
Advanced NLP · Deep Learning · Computational Neuroscience
Minor in Mathematics
University of Illinois Urbana-Champaign
National Merit Finalist · James Scholar · ISUR Scholar
I'm always interested in discussing research collaborations, new opportunities, or just chatting about LLM reliability and uncertainty quantification.