Hi, I'm Eric Modesitt

ML Researcher

LLM Reliability · Uncertainty Quantification · Sequential Inference

Builder–researcher focused on reliability, calibration, and robustness in large language models. I design sequential inference systems that provide statistical guarantees on model outputs, motivated by the goal of keeping AI systems trustworthy as they scale.

📍 Arlington, Virginia 🇺🇸 US Citizen

Research Focus

🎯

Sequential Inference

Designing systems with statistical guarantees, early stopping mechanisms, and optimal cost–accuracy tradeoffs

📊

Uncertainty & Calibration

Building reliable, well-calibrated model outputs with principled uncertainty estimation

🛡️

Robustness Testing

Stress-testing models via perturbations, distribution shift, and adversarial inputs

Efficient Ensembles

Adaptive computation using KV caching and low-cost branching strategies
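The sequential-inference theme above can be illustrated with a toy early-stopping loop. This is a hedged sketch, not any of the systems described on this page: draw answers from a model one at a time and stop as soon as a Hoeffding-style bound suggests the leading answer is unlikely to be overturned by further sampling.

```python
import math
from collections import Counter

def sequential_majority(sample_fn, max_samples=20, confidence=0.95):
    """Draw answers one at a time and stop early once a Hoeffding-style
    bound suggests the leading answer is unlikely to be overturned."""
    counts = Counter()
    for n in range(1, max_samples + 1):
        counts[sample_fn()] += 1
        top, top_count = counts.most_common(1)[0]
        runner_up = max((c for a, c in counts.items() if a != top), default=0)
        margin = (top_count - runner_up) / n
        # Bound on the probability that a margin this large is noise alone
        if margin > 0 and 2 * math.exp(-n * margin ** 2 / 2) < 1 - confidence:
            return top, n  # early stop: answer plus samples used
    return counts.most_common(1)[0][0], max_samples
```

Under these defaults, a sampler that always returns the same answer triggers the stop after 8 of the 20 allotted samples; noisier samplers run longer or exhaust the budget, which is the cost–accuracy tradeoff in miniature.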

Selected Research

2025

Sequential Multi-Persona Direct Logit Inference (SMP-DLI)

Working Paper

Built a sequential LLM inference system that adaptively allocates compute based on model confidence. Demonstrated calibrated uncertainty estimates, statistically grounded early stopping, and principled abstention when confidence is insufficient. Evaluated across MMLU, ARC-Challenge, HellaSwag, and CommonsenseQA.

Sequential Inference · Uncertainty Quantification · Calibration
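"Calibration" here is measured in the standard way. A minimal sketch of expected calibration error (ECE) with illustrative inputs, not the paper's actual evaluation code:

```python
def expected_calibration_error(confidences, correct, n_bins=10):
    """Bin predictions by confidence; ECE is the sample-weighted gap
    between mean confidence and empirical accuracy within each bin."""
    n = len(confidences)
    ece = 0.0
    for b in range(n_bins):
        lo, hi = b / n_bins, (b + 1) / n_bins
        idx = [i for i, c in enumerate(confidences)
               if lo < c <= hi or (b == 0 and c == 0.0)]
        if not idx:
            continue  # empty bin contributes nothing
        mean_conf = sum(confidences[i] for i in idx) / len(idx)
        accuracy = sum(correct[i] for i in idx) / len(idx)
        ece += (len(idx) / n) * abs(mean_conf - accuracy)
    return ece
```

A model that reports 0.95 confidence but is right only half the time scores an ECE of 0.45; a perfectly calibrated model scores 0.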
2024

ORBIT: Cost-Effective Dataset Curation for Large Language Model Domain Adaptation with an Astronomy Case Study

ACL Findings (First Author)

Designed a scalable data-filtering pipeline over 11T tokens, showing that data quality dominates raw scale; smaller, cleaner corpora yielded consistent multi-point gains on MMLU.

Data Curation · Domain Adaptation · LLM Training
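ORBIT's actual filtering criteria aren't reproduced here. As a toy illustration of quality-over-scale curation, one can rank documents by a domain score and keep only the top fraction; the keyword list and scoring rule below are made up for the example:

```python
# Hypothetical astronomy keyword set; ORBIT's real scoring is more involved.
DOMAIN_TERMS = {"galaxy", "redshift", "spectroscopy", "photometry", "quasar"}

def domain_score(doc):
    """Fraction of a document's tokens that are domain keywords."""
    words = doc.lower().split()
    return sum(w.strip(".,;:") in DOMAIN_TERMS for w in words) / max(len(words), 1)

def filter_corpus(docs, score_fn=domain_score, keep_fraction=0.5):
    """Keep only the highest-scoring fraction of the corpus."""
    ranked = sorted(docs, key=score_fn, reverse=True)
    keep = max(1, int(len(ranked) * keep_fraction))
    return ranked[:keep]
```

Running this over a mixed corpus keeps the astronomy-dense documents and drops off-domain text, which is the shape of the quality-dominates-scale result.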

Research & Engineering Experience

Software Engineer — ML Infrastructure

Capital One

Aug 2025 – Present Arlington, VA
  • Designed reliability checks and validation systems for production ML pipelines
  • Built automated testing frameworks reducing manual verification by 99%
  • Experience with fault-tolerant distributed systems, observability, and CI/CD

Graduate Research Assistant — Information Retrieval & LLMs

Zhai Lab, UIUC

Jan 2025 – Aug 2025 Urbana, IL
  • Built a 135M-parameter Transformer reranker competitive with multi-billion-parameter baselines
  • Ran controlled ablations on scaling, regularization, and representation quality
  • Contributed to work awarded SIGIR 2025 Spotlight
  • First-author publications in ACL Findings and SIGIR

Student Researcher

National Center for Supercomputing Applications (NCSA)

Aug 2023 – Dec 2024 Urbana, IL
  • Trained domain-specific LLMs using selective pretraining and noise-filtered corpora
  • Worked with HPC-scale training workflows and parameter-efficient tuning

Education

2025

M.C.S. Computer Science

Data Science Specialization

University of Illinois Urbana-Champaign

Advanced NLP · Deep Learning · Computational Neuroscience

2024

B.S. Computer Science

Minor in Mathematics

University of Illinois Urbana-Champaign

National Merit Finalist · James Scholar · ISUR Scholar

Technical Skills

ML & AI

Transformers · LLMs · ViTs · PyTorch · W&B · RAG · Parameter-Efficient Tuning

Reliability

Sequential Inference · Uncertainty Estimation · Calibration · Robustness Analysis

Systems

Python · C++ · Docker · Kubernetes · AWS · Linux · Git · CI/CD · Distributed Systems

Get in Touch

I'm always interested in discussing research collaborations, new opportunities, or just chatting about LLM reliability and uncertainty quantification.