Hi!

Your personalized paper recommendations for 8–12 December 2025.
🎯 Top Personalized Recommendations
EURECOM
AI Summary
  • Upper bounds from sub-multiplicativity dominate in dense regimes; GED-based bounds are tight and informative in sparse two-hop regimes. [3]
  • The paper assumes that the system parameters are known and does not consider uncertainty or noise in the system. [3]
  • The paper builds upon previous work on distributed computation, including [1], [2], and [3]. [3]
  • The authors also mention the concept of gradient coding from [4]. [3]
  • MUDC: Matrix-Valued Uncoded Distributed Computation; GED: Graph Edit Distance; ℓ1–ℓ2 norm relations: inequalities relating the ℓ1 and ℓ2 norms of a matrix; sub-multiplicativity: the norm property ‖AB‖ ≤ ‖A‖·‖B‖. [2]
  • The paper provides a typical-case solution for real-valued MUDC with thresholded GED, an affine objective in the reach probability, explicit recall lines, compute caps with visible knees, and concentration. [1]
Abstract
We solve, in the typical-case sense, the multi-sender linearly-decomposable distributed computing problem introduced by tessellated distributed computing. We model real-valued encoders/decoders and demand matrices, and assess structural fidelity via a thresholded graph edit distance between the demand support and the two-hop support of the computed product. Our analysis yields: a closed-form second-moment (Frobenius) risk under spike-and-slab ensembles; deterministic links between thresholded GED and norm error; a Gaussian surrogate with sub-exponential tails that exposes explicit recall lines; concentration of GED and operator-norm control; and a compute-capped design with a visible knee. We map the rules to aeronautical and satellite networks.
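To make the thresholded-GED criterion concrete, here is a minimal Python sketch (not the authors' code: the matrices F, D, E and the threshold tau are illustrative stand-ins for the demand matrix and an encoder/decoder pair) that thresholds the two-hop computed product, compares supports, and counts edge insertions plus deletions as a simple graph edit distance:

import numpy as np

def thresholded_ged(F, D, E, tau=1e-3):
    # Boolean edge sets: demand support vs. thresholded support of
    # the two-hop computed product D @ E.
    demand = np.abs(F) > tau
    computed = np.abs(D @ E) > tau
    # For unlabeled edge sets, graph edit distance reduces to the size
    # of the symmetric difference (edge insertions + deletions).
    return int(np.logical_xor(demand, computed).sum())

# Spike-and-slab style demand: mostly zeros, a few Gaussian spikes.
rng = np.random.default_rng(0)
F = rng.binomial(1, 0.1, (8, 8)) * rng.normal(size=(8, 8))
D, E = rng.normal(size=(8, 4)), rng.normal(size=(4, 8))
print(thresholded_ged(F, D, E))

Under a spike-and-slab demand most entries of F are exactly zero, so the symmetric difference of the two support sets is the natural edit distance between the corresponding unlabeled graphs.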
Why we think this paper is great for you:
This paper addresses distributed computing, a key area of interest, focusing on linear decomposition, which is relevant to high-throughput systems. The work directly tackles the challenges of managing multiple users in a distributed environment.
ITMO University
Abstract
Designing and implementing distributed systems correctly can be quite challenging. Although these systems are often accompanied by formal specifications that are verified using model-checking techniques, a gap still exists between the implementation and its formal specification: there is no guarantee that the implementation is free of bugs. To bridge this gap, we can use model-based testing. Specifically, if the model of the system can be interpreted as a finite-state automaton, we can generate an exhaustive test suite for the implementation that covers all possible states and transitions. In this paper, we discuss how to efficiently generate such a test suite for distributed systems written in the actor model. Importantly, our approach does not require any modifications to the code or interfering with the distributed system execution environment. As an example, we verified an implementation of a replication algorithm based on Viewstamped Replication, which is used in a real-world system.
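As a generic illustration of the exhaustive test suite mentioned above (a sketch assuming the model is given as a plain transition table; the paper's actual tooling for actor systems is not shown), the following Python enumerates one test per transition of a finite-state automaton, reaching each transition's source state by a shortest input sequence:

from collections import deque

def transition_cover(fsm, initial):
    # fsm: dict mapping state -> {input_symbol: next_state}.
    # BFS computes a shortest input sequence reaching every state.
    prefix = {initial: []}
    queue = deque([initial])
    while queue:
        s = queue.popleft()
        for sym, t in fsm[s].items():
            if t not in prefix:
                prefix[t] = prefix[s] + [sym]
                queue.append(t)
    # One test per transition: reach its source state, then fire it.
    return [prefix[s] + [sym] for s in prefix for sym in fsm[s]]

# Toy model: normal operation vs. view change in a replication protocol.
fsm = {"normal": {"request": "normal", "timeout": "view_change"},
       "view_change": {"new_view": "normal"}}
for test in transition_cover(fsm, "normal"):
    print(test)

A transition cover like this visits every state and fires every transition at least once; model-based-testing tools then check the implementation's observed behavior against the model along each sequence.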
Why we think this paper is great for you:
The paper’s focus on testing distributed systems written in the actor model aligns with the need for reliable systems. It offers a practical, non-intrusive approach to verifying implementation correctness.
Tsinghua University
AI Summary
  • IRTest effectively reduces the surrogate-to-real gap with relatively few tests. [2]
Abstract
Testing and evaluating decision-making agents remains challenging due to unknown system architectures, limited access to internal states, and the vastness of high-dimensional scenario spaces. Existing testing approaches often rely on surrogate models of decision-making agents to generate large-scale scenario libraries; however, discrepancies between surrogate models and real decision-making agents significantly limit their generalizability and practical applicability. To address this challenge, this paper proposes intelligent resilience testing (IRTest), a unified online adaptive testing framework designed to rapidly adjust to diverse decision-making agents. IRTest initializes with an offline-trained surrogate prediction model and progressively reduces surrogate-to-real gap during testing through two complementary adaptation mechanisms: (i) online neural fine-tuning in data-rich regimes, and (ii) lightweight importance-sampling-based weighting correction in data-limited regimes. A Bayesian optimization strategy, equipped with bias-corrected acquisition functions, guides scenario generation to balance exploration and exploitation in complex testing spaces. Extensive experiments across varying levels of task complexity and system heterogeneity demonstrate that IRTest consistently improves failure-discovery efficiency, testing robustness, and cross-system generalizability. These results highlight the potential of IRTest as a practical solution for scalable, adaptive, and resilient testing of decision-making agents.
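A minimal sketch of the weighting correction in mechanism (ii) (illustrative only: the paper's actual estimator is not specified here, and the log-density inputs are assumed to be available from the surrogate and the real agent's behavior model):

import numpy as np

def corrected_failure_rate(outcomes, logp_real, logp_surrogate):
    # Likelihood-ratio weights between the real agent's scenario
    # distribution and the surrogate's, self-normalized for stability.
    w = np.exp(logp_real - logp_surrogate)
    w = w / w.sum()
    return float(np.dot(w, outcomes))

rng = np.random.default_rng(1)
outcomes = rng.binomial(1, 0.2, size=100)         # 0/1 failure flags
logp_s = rng.normal(0.0, 1.0, size=100)           # surrogate log-densities
logp_r = logp_s + rng.normal(0.0, 0.3, size=100)  # real-agent log-densities
print(corrected_failure_rate(outcomes, logp_r, logp_s))

Self-normalizing the weights keeps the estimate bounded when only a few real-agent observations are available, matching the data-limited regime the correction targets.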
Why we think this paper is great for you:
This research directly addresses resilience, a core interest, through intelligent testing techniques for decision-making agents. The use of surrogate models is particularly relevant for evaluating system robustness.
University of Pennsylvania
AI Summary
  • They are designed to assess student learning outcomes more accurately than traditional multiple-choice tests. [3]
  • The paper discusses the challenges posed by large language models (LLMs) in education, particularly in assessing student learning and academic integrity. [2]
  • The authors propose a framework for designing such assessments, which includes elements of task complexity, interactivity, and feedback. [1]
  • It highlights the need for more authentic assessments that reflect real-world problems and require critical thinking and problem-solving skills. [0]
Abstract
The rapid adoption of generative AI has undermined traditional modular assessments in computing education, creating a disconnect between academic evaluation and industry practice. This paper presents a theoretically grounded framework for designing AI-resilient assessments, supported by formal analysis and multi-year empirical validation. We make three contributions. First, we establish two theoretical results: (1) assessments composed of interconnected problems, where outputs feed into subsequent stages, are more AI-resilient than modular assessments because current language models struggle with sustained multi-step reasoning and context; and (2) semi-structured problems with deterministic success criteria provide more reliable measures of student competency than fully open-ended projects, which allow AI systems to default to familiar solution patterns. These results challenge common policy and institutional guidance that promotes open-ended assessments as the primary safeguard for academic integrity. Second, we validate these results using data from four university data science courses (N = 138). While students achieve near-perfect scores on AI-assisted modular homework, performance drops by roughly 30 percentage points on proctored exams, indicating substantial AI score inflation. Interconnected projects remain strongly correlated with modular assessments, suggesting they measure the same underlying skills while resisting AI misuse. Proctored exams show weaker alignment, implying they may assess test-taking ability rather than intended learning outcomes. Third, we translate these findings into a practical assessment design framework. The proposed approach enables educators to create assessments that promote integrative thinking, reflect real-world AI-augmented workflows, and naturally resist trivial delegation to generative AI, thereby helping restore academic integrity.
Why we think this paper is great for you:
The paper’s exploration of AI-resilient assessments is pertinent to current challenges in system design and evaluation. It provides a framework for addressing the impact of rapidly evolving technologies.
Paderborn University
AI Summary
  • FPGA: Field-Programmable Gate Array; HLS: High-Level Synthesis; HDL: Hardware Description Language; BSP: Board Support Package; SLASH: Software Layer for Accelerated HPC Systems; XRT: Xilinx Runtime; SYCL/DPC++: open standard for heterogeneous parallel programming. [3]
  • Otus uses Rocky Linux as the operating system on all CPU, GPU, and FPGA nodes. [2]
Abstract
Otus is a high-performance computing cluster that was launched in 2025 and is operated by the Paderborn Center for Parallel Computing (PC2) at Paderborn University in Germany. The system is part of the National High Performance Computing (NHR) initiative. Otus complements the previous supercomputer Noctua 2, offering approximately twice the computing power while retaining the three node types that were characteristic of Noctua 2: 1) CPU compute nodes with different memory capacities, 2) high-end GPU nodes, and 3) HPC-grade FPGA nodes. On the Top500 list, which ranks the 500 most powerful supercomputers in the world, Otus is in position 164 with the CPU partition and in position 255 with the GPU partition (June 2025). On the Green500 list, ranking the 500 most energy-efficient supercomputers in the world, Otus is in position 5 with the GPU partition (June 2025). This article provides a comprehensive overview of the system in terms of its hardware, software, system integration, and its overall integration into the data center building to ensure energy-efficient operation. The article aims to provide unique insights for scientists using the system and for other centers operating HPC clusters. The article will be continuously updated to reflect the latest system setup and measurements.
Why we think this paper is great for you:
As a high-performance computing cluster, Otus directly relates to the need for systems with high throughput. The paper’s focus on a modern supercomputer is relevant to current infrastructure needs.

Interests not found

We did not find any papers matching the interests below. Try other search terms, and consider whether such content exists on arxiv.org.
  • Low latency
You can edit or add more interests any time.