Abstract
Today, two major trends are shaping the evolution of ML systems. First,
modern AI systems are becoming increasingly complex, often integrating
components beyond the model itself. A notable example is Retrieval-Augmented
Generation (RAG), which incorporates not only multiple models but also vector
databases, leading to heterogeneity in both system components and underlying
hardware. Second, with the end of Moore's Law, efficiency gains increasingly
depend on specialized and rapidly evolving hardware, so high system efficiency
can no longer be achieved without accounting for this changing hardware
landscape.
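As a rough illustration of this heterogeneity, a RAG request typically chains an
embedding model, a vector-database lookup, and a generative model. The sketch
below is a generic retrieve-then-generate loop in which embed, vector_db, and
generate are hypothetical placeholders, not components of any specific system
discussed in this thesis.

    # Minimal retrieve-then-generate sketch (hypothetical interfaces, illustration only).
    def rag_answer(query, embed, vector_db, generate, top_k=5):
        query_vec = embed(query)                       # embedding model (GPU/accelerator)
        passages = vector_db.search(query_vec, top_k)  # vector search (CPU/FPGA, memory-bound)
        prompt = "\n".join(passages) + "\n\nQuestion: " + query
        return generate(prompt)                        # LLM inference (GPU/accelerator)

Each stage stresses different resources, which is what gives rise to the
component and hardware heterogeneity noted above.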
Building on the observations above, this thesis adopts a cross-stack approach
to improving ML system efficiency, presenting solutions that span algorithms,
systems, and hardware. First, it introduces several pioneering works on RAG
serving efficiency across the computing stack. PipeRAG focuses on
algorithm-level improvements, RAGO introduces system-level optimizations, and
Chameleon explores heterogeneous accelerator systems for RAG. Second, this
thesis investigates algorithm-hardware co-design for vector search.
Specifically, FANNS and Falcon optimize the two most popular paradigms of
retrieval algorithms: quantization-based and graph-based vector search,
respectively. Third,
this thesis addresses the serving efficiency of recommender systems, another
example of vector-centric ML systems, where the memory-intensive lookup
operations on embedding vector tables often represent a major performance
bottleneck. MicroRec and FleetRec propose solutions at the hardware and system
levels, respectively, optimizing both data movement and computation to enhance
the efficiency of large-scale recommender models.
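To make this bottleneck concrete, the embedding stage of a typical recommender
amounts to many irregular, memory-bound table lookups followed by comparatively
little arithmetic. The sketch below uses made-up table counts and sizes purely
for illustration; production embedding tables can hold hundreds of millions of
rows.

    import numpy as np

    # Toy embedding stage: one categorical feature per table, one lookup per feature.
    # The gathers are random-access and memory-bound; the downstream math is cheap.
    num_tables, rows_per_table, dim = 8, 100_000, 32          # made-up sizes
    tables = [np.random.rand(rows_per_table, dim).astype(np.float32)
              for _ in range(num_tables)]

    def embed_batch(sparse_ids):
        # sparse_ids: (batch, num_tables) integer ids, one per table
        gathered = [tables[t][sparse_ids[:, t]] for t in range(num_tables)]
        return np.concatenate(gathered, axis=1)               # (batch, num_tables * dim)

    sparse_ids = np.random.randint(0, rows_per_table, size=(64, num_tables))
    dense_input = embed_batch(sparse_ids)                     # fed to the dense layers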
Abstract
Machine learning (ML) systems are increasingly deployed in high-stakes
domains where reliability is paramount. This thesis investigates how
uncertainty estimation can enhance the safety and trustworthiness of ML,
focusing on selective prediction -- where models abstain when confidence is
low.
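For orientation, selective prediction is commonly formalized with a confidence
score g and an acceptance threshold tau; the notation below is the generic
textbook formulation, introduced here only to fix terminology, and not
necessarily the exact definitions used later in the thesis.

    % Generic selective-classification quantities for a classifier f,
    % confidence score g, and acceptance threshold \tau.
    \begin{align*}
      \text{coverage}(\tau)       &= \Pr\!\left[ g(x) \ge \tau \right], \\
      \text{selective risk}(\tau) &= \mathbb{E}\!\left[ \ell\big(f(x), y\big) \,\middle|\, g(x) \ge \tau \right].
    \end{align*}
    % Sweeping \tau traces an accuracy-coverage curve; the selective
    % classification gap measures how far this curve falls short of the
    % oracle curve in which correct predictions are accepted before
    % incorrect ones.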
We first show that a model's training trajectory contains rich uncertainty
signals that can be exploited without altering its architecture or loss. By
ensembling predictions from intermediate checkpoints, we propose a lightweight,
post-hoc abstention method that works across tasks, avoids the cost of deep
ensembles, and achieves state-of-the-art selective prediction performance.
Crucially, this approach is fully compatible with differential privacy (DP),
allowing us to study how privacy noise affects uncertainty quality. We find
that while many methods degrade under DP, our trajectory-based approach remains
robust, and we introduce a framework for isolating the privacy-uncertainty
trade-off. Next, we develop a finite-sample decomposition of the selective
classification gap -- the deviation from the oracle accuracy-coverage curve --
identifying five interpretable error sources and clarifying which interventions
can close the gap. This explains why calibration alone cannot fix ranking
errors, motivating methods that improve uncertainty ordering. Finally, we show
that uncertainty signals can be adversarially manipulated to hide errors or
deny service while maintaining high accuracy, and we design defenses combining
calibration audits with verifiable inference.
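As a minimal sketch of the checkpoint-ensembling abstention idea introduced
above, one can average softmax outputs from intermediate checkpoints and
abstain below a confidence threshold; the max-probability score, array shapes,
and threshold value here are illustrative assumptions rather than the exact
recipe evaluated in the thesis.

    import numpy as np

    def selective_predict(checkpoint_probs, threshold=0.7):
        """Average softmax outputs over training checkpoints and abstain
        when the ensembled max-probability confidence falls below `threshold`.

        checkpoint_probs: (n_checkpoints, n_examples, n_classes) softmax outputs.
        Returns (predictions, accept_mask); predictions are -1 where abstaining.
        """
        avg_probs = checkpoint_probs.mean(axis=0)      # ensemble over the trajectory
        confidence = avg_probs.max(axis=1)             # max-probability confidence
        predictions = avg_probs.argmax(axis=1)
        accept = confidence >= threshold               # abstain on low confidence
        return np.where(accept, predictions, -1), accept

    # Illustrative usage: random numbers stand in for 5 checkpoints, 4 examples, 10 classes.
    probs = np.random.dirichlet(np.ones(10), size=(5, 4))
    preds, accept_mask = selective_predict(probs)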
Together, these contributions advance reliable ML by improving, evaluating,
and safeguarding uncertainty estimation, enabling models that not only make
accurate predictions but also know when to say "I do not know".