Abstract
Ranking and selection (R&S) aims to identify the alternative with the best
mean performance among $k$ simulated alternatives. The practical value of R&S
depends on accurate simulation input modeling, which often suffers from the
curse of input uncertainty due to limited data. Distributionally robust ranking
and selection (DRR&S) addresses this challenge by modeling input uncertainty
via an ambiguity set of $m > 1$ plausible input distributions, resulting in
$km$ scenarios in total. Recent DRR&S studies suggest a key structural insight:
additivity in budget allocation is essential for efficiency. However, existing
justifications are heuristic, and fundamental properties such as consistency
and the precise allocation pattern induced by additivity remain poorly
understood. In this paper, we propose a simple additive allocation (AA)
procedure that aims to exclusively sample the $k + m - 1$ previously
hypothesized critical scenarios. Leveraging boundary-crossing arguments, we
establish a lower bound on the probability of correct selection and
characterize the procedure's budget allocation behavior. We then prove that AA
is consistent and, surprisingly, achieves additivity in the strongest sense: as
the total budget increases, only $k + m - 1$ scenarios are sampled infinitely
often. Notably, the worst-case scenarios of non-best alternatives may not be
among them, challenging prior beliefs about their criticality. These results
offer new and counterintuitive insights into the additive structure of DRR&S.
To improve practical performance while preserving this structure, we introduce
a general additive allocation (GAA) framework that flexibly incorporates
sampling rules from traditional R&S procedures in a modular fashion. Numerical
experiments support our theoretical findings and demonstrate the competitive
performance of the proposed GAA procedures.
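To make the hypothesized critical set concrete: with $k$ alternatives and $m$ distributions, the $k + m - 1$ scenarios are usually taken to be all $m$ scenarios of the distributionally robust best alternative plus the worst-case scenario of each of the $k - 1$ others. Below is a minimal sketch of that selection under an assumed maximization convention; the function name and toy data are ours, not the paper's, and, as the abstract stresses, this hypothesized set need not coincide with the scenarios AA actually samples infinitely often.

```python
import numpy as np

def critical_scenarios(means):
    """Hypothesized critical scenarios in DRR&S (maximization assumed).

    means: (k, m) array; entry (i, j) is the sample mean of
    alternative i under input distribution j.
    """
    k, m = means.shape
    worst_case = means.min(axis=1)        # each alternative's worst-case mean
    best = int(worst_case.argmax())       # distributionally robust best alternative
    crit = {(best, j) for j in range(m)}  # all m scenarios of the best alternative
    for i in range(k):
        if i != best:                     # plus each rival's worst-case scenario
            crit.add((i, int(means[i].argmin())))
    return best, crit                     # |crit| == k + m - 1

mu = np.array([[1.0, 0.8],   # k = 3 alternatives, m = 2 distributions
               [0.9, 0.5],
               [0.4, 0.7]])
best, crit = critical_scenarios(mu)
print(best, sorted(crit))    # 0 [(0, 0), (0, 1), (1, 1), (2, 0)] -> 4 = 3 + 2 - 1
```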
Abstract
The rank aggregation problem, which has many real-world applications, refers
to the process of combining multiple input rankings into a single aggregated
ranking. In dynamic settings, where new rankings arrive over time, efficiently
updating the aggregated ranking is essential. This paper develops a fast,
theoretically and practically efficient dynamic rank aggregation algorithm.
First, we develop the LR-Aggregation algorithm, built on top of the LR-tree
data structure, which is itself modeled on the LR-distance, a novel but
equivalent reformulation of the classical Spearman's footrule distance. We then
analyze the theoretical efficiency of the Pick-A-Perm algorithm and show how it
can be combined with the LR-Aggregation algorithm using another data structure that we
develop. We demonstrate through experimental evaluations that LR-Aggregation
produces close to optimal solutions in practice. We show that Pick-A-Perm has a
theoretical worst-case approximation guarantee of 2. We also show that both the
LR-Aggregation and Pick-A-Perm algorithms, as well as the methodology for
combining them, can be run in $O(n \log n)$ time. To the best of our knowledge,
this is the first fast, near-linear-time rank aggregation algorithm for the
dynamic setting that has both a theoretical approximation guarantee and
excellent practical performance (much better than the theoretical guarantee).
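For readers unfamiliar with the ingredients, here is a minimal sketch of Spearman's footrule distance and one common formulation of Pick-A-Perm (returning the input ranking with the smallest total footrule distance to all inputs). This naive version is quadratic in the number of input rankings and is only an illustration, not the paper's $O(n \log n)$ LR-tree implementation.

```python
def footrule(r1, r2):
    """Spearman's footrule: total absolute displacement of ranks."""
    return sum(abs(r1[x] - r2[x]) for x in r1)

def pick_a_perm(rankings):
    """Return the input ranking with the smallest total footrule
    distance to all inputs; by the triangle inequality this is within
    a factor 2 of the optimal aggregate ranking."""
    return min(rankings, key=lambda r: sum(footrule(r, s) for s in rankings))

inputs = [{'a': 1, 'b': 2, 'c': 3},   # item -> rank
          {'a': 1, 'b': 3, 'c': 2},
          {'a': 2, 'b': 1, 'c': 3}]
print(pick_a_perm(inputs))            # {'a': 1, 'b': 2, 'c': 3}
```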
AI Insights
- Borda count assigns points that decrease linearly with rank position (an item ranked r-th among n items earns n - r points), making it resilient to minor rank shifts; see the sketch after this list.
- Pairwise comparison aggregates by majority preference between each pair, yielding a Condorcet‑consistent ranking when it exists.
- Dynamic aggregation of streaming gene lists adapts to new experiments with negligible recomputation, as shown in Wang et al. 2022.
- Robust variants like median rank or trimmed Borda mitigate outlier influence, addressing weaknesses highlighted in the paper.
- Wang et al.'s 2024 survey offers a taxonomy of aggregation methods across domains, from social choice to bioinformatics.
- Teng et al. 2018 present a voting aggregation algorithm that optimizes social satisfaction under cardinal utilities, a useful benchmark.
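A minimal sketch of the two classical aggregators mentioned in the first two insights, using standard textbook definitions (not code from the paper):

```python
from itertools import combinations

def borda(rankings):
    """Borda count: an item at rank r among n items earns n - r points."""
    n = len(rankings[0])
    scores = {x: 0 for x in rankings[0]}
    for r in rankings:
        for item, rank in r.items():
            scores[item] += n - rank
    return sorted(scores, key=scores.get, reverse=True)

def pairwise_majority(rankings):
    """Order items by number of pairwise majority wins; a Condorcet
    winner (one that beats every rival head-to-head) comes first
    whenever it exists."""
    items = list(rankings[0])
    wins = {x: 0 for x in items}
    for a, b in combinations(items, 2):
        a_beats_b = sum(1 for r in rankings if r[a] < r[b])
        if a_beats_b * 2 > len(rankings):
            wins[a] += 1
        elif a_beats_b * 2 < len(rankings):
            wins[b] += 1
    return sorted(items, key=wins.get, reverse=True)

inputs = [{'a': 1, 'b': 2, 'c': 3},
          {'a': 1, 'b': 3, 'c': 2},
          {'a': 2, 'b': 1, 'c': 3}]
print(borda(inputs))              # ['a', 'b', 'c']
print(pairwise_majority(inputs))  # ['a', 'b', 'c']
```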