Hi j34nc4rl0+mlops,

Here are your personalized paper recommendations, sorted by relevance.
Machine Learning Infrastructure
Abstract
Automated machine learning (AutoML) has democratized the design of machine learning based systems by automating model selection, hyperparameter tuning, and feature engineering. However, the high computational cost associated with traditional search and optimization strategies, such as Random Search, Particle Swarm Optimization and Bayesian Optimization, remains a significant challenge. Moreover, AutoML systems typically explore a large search space, which can lead to overfitting. This paper introduces a metalearning method for dynamically designing search spaces for AutoML systems. The proposed method uses historical metaknowledge to select promising regions of the search space, accelerating the optimization process. According to experiments conducted for this study, the proposed method can reduce Random Search runtime by 89% and shrink the search space (to 1.8 of 13 preprocessors and 4.3 of 16 classifiers) without significantly compromising predictive performance. Moreover, the proposed method showed competitive performance when adapted to Auto-Sklearn, reducing its search space. Furthermore, this study offers insights into meta-feature selection, meta-model explainability, and the trade-offs inherent in search space reduction strategies.
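The abstract does not spell out the metalearning procedure; purely as a hedged sketch of the general idea (the meta-features, classifier names, and meta-model below are hypothetical, not the authors' implementation), a meta-model trained on historical runs could predict which classifiers are worth keeping in a reduced search space:

```python
# Illustrative sketch only: a meta-model mapping dataset meta-features to the subset of
# classifiers worth searching. Meta-features, labels, and component names are made up.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.multioutput import MultiOutputClassifier

CLASSIFIERS = ["random_forest", "svm", "knn", "naive_bayes", "mlp"]

rng = np.random.default_rng(0)
# Historical meta-knowledge: meta-features of past datasets and, per classifier,
# a binary flag indicating whether it ranked among the top performers there.
past_meta_features = rng.random((50, 4))            # e.g. n_samples, n_features, class entropy, ...
past_top_performers = (rng.random((50, 5)) > 0.6).astype(int)

meta_model = MultiOutputClassifier(RandomForestClassifier(n_estimators=100, random_state=0))
meta_model.fit(past_meta_features, past_top_performers)

# For a new dataset, keep only the classifiers the meta-model predicts as promising,
# shrinking the space that Random Search (or Auto-Sklearn) has to explore.
new_meta_features = rng.random((1, 4))
keep_mask = meta_model.predict(new_meta_features)[0]
reduced_space = [c for c, keep in zip(CLASSIFIERS, keep_mask) if keep]
print("reduced classifier search space:", reduced_space)
```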
Abstract
As the volume and diversity of available datasets continue to increase, assessing data quality has become crucial for reliable and efficient Machine Learning analytics. A modern, game-theoretic approach for evaluating data quality is the notion of Data Shapley, which quantifies the value of individual data points within a dataset. However, state-of-the-art methods for scaling the NP-hard Shapley computation face severe challenges when applied to large-scale datasets, limiting their practical use. In this work, we present Chunked Data Shapley (C-DaSh), a Data Shapley approach for identifying a dataset's high-quality data tuples. C-DaSh scalably divides the dataset into manageable chunks and estimates the contribution of each chunk using optimized subset selection and single-iteration stochastic gradient descent. This approach drastically reduces computation time while preserving high-quality results. We empirically benchmark our method on diverse real-world classification and regression tasks, demonstrating that C-DaSh outperforms existing Shapley approximations in both computational efficiency (achieving speedups between 80x and 2300x) and accuracy in detecting low-quality data regions. Our method enables practical measurement of dataset quality on large tabular datasets, supporting both classification and regression pipelines.
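As a rough sketch of the chunk-level idea described above (not the authors' C-DaSh code; the naive Monte Carlo loop below stands in for their optimized subset selection), each chunk's value can be estimated as its average marginal gain in validation accuracy when added to a subset of the other chunks, using a single SGD pass per fit:

```python
# Minimal chunk-level Shapley-style valuation sketch; data and chunking are synthetic.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import SGDClassifier
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=3000, n_features=20, random_state=0)
X_tr, X_val, y_tr, y_val = train_test_split(X, y, test_size=0.3, random_state=0)

n_chunks = 10
chunks = np.array_split(np.random.permutation(len(X_tr)), n_chunks)

def score(idx):
    if len(idx) == 0:
        return 0.5  # chance level for this balanced binary task
    model = SGDClassifier(max_iter=1, tol=None, random_state=0)  # single-iteration SGD
    model.fit(X_tr[idx], y_tr[idx])
    return model.score(X_val, y_val)

rng = np.random.default_rng(0)
values = np.zeros(n_chunks)
n_mc = 20  # Monte Carlo repetitions per chunk
for c in range(n_chunks):
    for _ in range(n_mc):
        others = [j for j in range(n_chunks) if j != c]
        subset = rng.choice(others, size=rng.integers(0, n_chunks), replace=False)
        base = np.concatenate([chunks[j] for j in subset]) if len(subset) else np.array([], dtype=int)
        values[c] += score(np.concatenate([base, chunks[c]])) - score(base)
values /= n_mc
print("estimated chunk values:", np.round(values, 3))
```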
Machine Learning Testing
Abstract
Property-based testing (PBT) validates software against an executable specification by evaluating it on randomly generated inputs. The standard way that PBT users generate test inputs is via generators that describe how to sample test inputs through random choices. To achieve a good distribution over test inputs, users must tune their generators, i.e., decide on the weights of these individual random choices. Unfortunately, it is very difficult to understand how to choose individual generator weights in order to achieve a desired distribution, so today this process is tedious and limits the distributions that can be practically achieved. In this paper, we develop techniques for the automatic and offline tuning of generators. Given a generator with undetermined symbolic weights and an objective function, our approach automatically learns values for these weights that optimize for the objective. We describe useful objective functions that allow users to (1) target desired distributions and (2) improve the diversity and validity of their test cases. We have implemented our approach in a novel discrete probabilistic programming system, Loaded Dice, that supports differentiation and parameter learning, and use it as a language for generators. We empirically demonstrate that our approach is effective at optimizing generator distributions according to the specified objective functions. We also perform a thorough evaluation on PBT benchmarks, demonstrating that, when automatically tuned for diversity and validity, the generators exhibit a 3.1-7.4x speedup in bug finding.
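To make "symbolic weights plus an objective" concrete, here is a minimal sketch with a toy binary-tree generator and one tunable weight, written in plain Python/SciPy rather than the Loaded Dice system; the generator, target, and objective are illustrative assumptions:

```python
# Toy offline generator tuning: emit a leaf with probability p, otherwise a node with two
# recursive subtrees. For this generator the expected size is E = 1/(2p - 1) for p > 0.5,
# so tuning the symbolic weight p to hit a target expected size is a 1-D optimization.
import random
from scipy.optimize import minimize_scalar

TARGET_SIZE = 25.0

def expected_size(p):
    return 1.0 / (2.0 * p - 1.0)

res = minimize_scalar(lambda p: (expected_size(p) - TARGET_SIZE) ** 2,
                      bounds=(0.51, 0.99), method="bounded")
p_star = res.x

# Sanity check by sampling from the tuned generator (depth-capped to bound recursion).
def sample_size(p, depth=0, max_depth=200):
    if depth >= max_depth or random.random() < p:
        return 1
    return 1 + sample_size(p, depth + 1) + sample_size(p, depth + 1)

sizes = [sample_size(p_star) for _ in range(10_000)]
print(f"tuned leaf weight p = {p_star:.4f}, target size = {TARGET_SIZE}, "
      f"empirical mean size = {sum(sizes) / len(sizes):.1f}")
```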
Abstract
Over the past eight years, the META method has served as a multidimensional testing skill assessment system in the National College Student Contest on Software Testing, successfully assessing over 100,000 students' testing skills. However, META is primarily limited to the objective assessment of test scripts, lacking the ability to automatically assess subjective aspects such as test cases and test reports. To address this limitation, this paper proposes RUM, a comprehensive assessment approach that combines rules and large language models (LLMs). RUM achieves a comprehensive assessment by rapidly processing objective indicators through rules while utilizing LLMs for in-depth subjective analysis of test case documents, test scripts, and test reports. The experimental results show that compared to traditional manual testing skill assessment, RUM improves assessment efficiency by 80.77% and reduces costs by 97.38%, while maintaining high accuracy and consistency of assessment. By applying RUM in the contest on software testing, we find that it not only enhances the efficiency and scalability of skill assessment in software testing education, but also provides teachers with more comprehensive and objective evidence for student ability assessment, facilitating personalized teaching and learning. This study offers new insights into the assessment of testing skills, which are expected to promote further development in test process optimization and software quality assurance.
Data Science Development Environment and Productivity
Abstract
This study investigates the impact of Generative AI on software development within the IT sector through a mixed-method approach, utilizing a survey developed based on expert interviews. The preliminary results of an ongoing survey offer early insights into how Generative AI reshapes personal productivity, organizational efficiency, adoption, business strategy and job insecurity. The findings reveal that 97% of IT workers use Generative AI tools, mainly ChatGPT. Participants report significant personal productivity gains and perceive organizational efficiency improvements that correlate positively with Generative AI adoption by their organizations (r = .470, p < .05). However, increased organizational adoption of AI strongly correlates with heightened employee job security concerns (r = .549, p < .001). Key adoption challenges include inaccurate outputs (64.2%), regulatory compliance issues (58.2%) and ethical concerns (52.2%). This research offers early empirical insights into Generative AI's economic and organizational implications.
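The reported associations are ordinary Pearson correlations; a minimal sketch with synthetic survey responses (not the study's data) shows how such an (r, p) pair is computed:

```python
# Pearson correlation with p-value on made-up Likert-scale responses (illustration only).
import numpy as np
from scipy.stats import pearsonr

rng = np.random.default_rng(1)
org_adoption = rng.integers(1, 6, size=60).astype(float)                 # 1-5 Likert responses
perceived_efficiency = np.clip(org_adoption + rng.normal(0, 1.5, 60), 1, 5)  # related by construction

r, p = pearsonr(org_adoption, perceived_efficiency)
print(f"r = {r:.3f}, p = {p:.4f}")
```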
Abstract
As data science emerges as a distinct academic discipline, introductory data science (IDS) courses have drawn attention for their role in providing foundational knowledge of data science to students. IDS courses not only help students transition to higher education but also expose students to the field, often for the first time. They are often taught by instructors without formal training in data science or pedagogy, creating a unique context for examining their pedagogical content knowledge (PCK). This study explores IDS instructors' PCK, particularly how instructors' varied backgrounds interact with their instructional practices. Employing empirical phenomenological methodology, we conducted semi-structured interviews to understand the nature of their PCK. Comparing instructors' PCK was inherently challenging due to their diverse backgrounds and teaching contexts. Prior experiences played a central role in shaping participants' instructional choices. Their perceptions regarding the goals and rationale for teaching data science reflected three distinct orientations. Instructors also acknowledged that students entering IDS courses often brought preconceived notions that shaped their learning experiences. Despite the absence of national guidelines, participants demonstrated notable overlap in foundational IDS content, though some instructors felt less confident with advanced or specialized topics. Additionally, instructors commonly employed formative and summative assessment approaches, though few explicitly labeled their practices using these terms. The findings highlight key components of PCK in IDS and offer insights into supporting instructor development through targeted training and curriculum design. This work contributes to ongoing efforts to build capacity in data science education and expand the scope of PCK research into new interdisciplinary domains.
Fault tolerance
Abstract
Complex and large industrial systems often misbehave, for instance, due to wear, misuse, or faults. To cope with these incidents, it is important to timely detect their occurrences, localize the sources of the problems, and implement the appropriate countermeasures. This paper reports our experience with a state-of-the-art failure prediction method, PREVENT, and its extension with a troubleshooting module, REACT, applied to naval systems developed by Fincantieri. Our results show how to integrate anomaly detection with troubleshooting procedures. We conclude by discussing a lesson learned, which may help deploy and extend these analyses to other industrial products.
Abstract
SIL (Safety Integrity Level) allocation plays a crucial role in defining the design requirements for Safety Functions (SFs) within high-risk industries. SIL is typically determined based on the estimated Probability of Failure on Demand (PFD), which must remain within permissible limits to manage risk effectively. Extensive research has been conducted on determining target PFD and SIL, with a stronger emphasis on preventive SFs than on mitigation SFs. In this paper, we address a rather conceptual issue: we argue that PFD is not an appropriate reliability measure for mitigation SFs to begin with, and we propose an alternative approach that leverages the Probability Density Function (PDF) and the expected degree of failure as key metrics. The principles underlying this approach are explained and supported by detailed mathematical formulations. Furthermore, the practical application of this new methodology is illustrated through case studies.
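The abstract does not give the proposed formulas; purely as an illustration of the kind of quantity involved, an expected degree of failure can be obtained by integrating a failure-severity function against a PDF, where both the distribution and the severity function below are assumptions, not the paper's definitions:

```python
# Hypothetical illustration only: expected "degree of failure" as the expectation of a
# failure-severity function d(x) in [0, 1] under an assumed probability density f(x)
# for the mitigated outcome x, computed by numerical quadrature.
import numpy as np
from scipy import integrate, stats

f = stats.lognorm(s=0.5, scale=1.0).pdf           # assumed PDF of the outcome variable
d = lambda x: np.clip((x - 1.0) / 2.0, 0.0, 1.0)  # assumed degree-of-failure function

expected_degree, _ = integrate.quad(lambda x: d(x) * f(x), 0.0, np.inf)
print(f"expected degree of failure = {expected_degree:.4f}")
```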
Machine Learning Lifecycle
Abstract
Scalability and maintainability challenges in monolithic systems have led to the adoption of microservices, which divide systems into smaller, independent services. However, migrating existing monolithic systems to microservices is a complex and resource-intensive task, which can benefit from machine learning (ML) to automate some of its phases. Choosing the right ML approach for migration remains challenging for practitioners. Previous works studied separately the objectives, artifacts, techniques, tools, and benefits and challenges of migrating monolithic systems to microservices. No work has yet systematically investigated existing ML approaches for this migration to understand the automated migration phases, inputs used, ML techniques applied, evaluation processes followed, and challenges encountered. We present a systematic literature review (SLR) that aggregates, synthesises, and discusses the approaches and results of 81 primary studies (PSs) published between 2015 and 2024. We followed the Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) statement to report our findings, and we extracted and analysed data from these PSs to answer our research questions (RQs). We synthesise the findings in the form of a classification that shows the usage of ML techniques in migrating monolithic systems to microservices. The findings reveal that some phases of the migration process, such as monitoring and service identification, are well-studied, while others, like packaging microservices, remain unexplored. Additionally, the findings highlight key challenges, including limited data availability, scalability and complexity constraints, insufficient tool support, and the absence of standardized benchmarking, emphasizing the need for more holistic solutions.
Machine Learning Resilience
Abstract
In recent years, machine learning has demonstrated impressive results in various fields, including software vulnerability detection. Nonetheless, using machine learning to identify software vulnerabilities presents new challenges, especially regarding the scale of data involved, which was not a factor in traditional methods. Consequently, in spite of the rise of new machine-learning-based approaches in that space, important shortcomings persist regarding their evaluation. First, researchers often fail to provide concrete statistics about their training datasets, such as the number of samples for each type of vulnerability. Moreover, many methods rely on training with semantically similar functions rather than directly on vulnerable programs. This leads to uncertainty about the suitability of the datasets currently used for training. Second, the choice of a model and the level of granularity at which models are trained also affect the effectiveness of such vulnerability discovery approaches. In this paper, we explore the challenges of applying machine learning to vulnerability discovery. We also share insights from our two previous research papers, Bin2vec and BinHunter, which could enhance future research in this field.
Abstract
The widespread adoption of machine learning (ML) systems has increased attention to their security and to the emergence of adversarial machine learning (AML) techniques that exploit fundamental vulnerabilities in ML systems, creating an urgent need for comprehensive risk assessment of ML-based systems. While traditional risk assessment frameworks evaluate conventional cybersecurity risks, they lack the ability to address the unique challenges posed by AML threats. Existing AML threat evaluation approaches focus primarily on technical attack robustness, overlooking crucial real-world factors like deployment environments, system dependencies, and attack feasibility. Attempts at comprehensive AML risk assessment have been limited to domain-specific solutions, preventing application across diverse systems. Addressing these limitations, we present FRAME, the first comprehensive and automated framework for assessing AML risks across diverse ML-based systems. FRAME includes a novel risk assessment method that quantifies AML risks by systematically evaluating three key dimensions: the target system's deployment environment, the characteristics of diverse AML techniques, and empirical insights from prior research. FRAME incorporates a feasibility scoring mechanism and LLM-based customization for system-specific assessments. Additionally, we developed a comprehensive structured dataset of AML attacks enabling context-aware risk assessment. From an engineering application perspective, FRAME delivers actionable results designed for direct use by system owners with only technical knowledge of their systems, without expertise in AML. We validated it across six diverse real-world applications. Our evaluation demonstrated exceptional accuracy and strong alignment with analysis by AML experts. FRAME enables organizations to prioritize AML risks, supporting secure AI deployment in real-world environments.
Data Science Development Tools
Abstract
We present Datarus-R1-14B, a 14B-parameter open-weights language model fine-tuned from Qwen 2.5-14B-Instruct to act as a virtual data analyst and graduate-level problem solver. Datarus is trained not on isolated question-answer pairs but on full analytical trajectories including reasoning steps, code execution, error traces, self-corrections, and final conclusions, all captured in a ReAct-style notebook format spanning finance, medicine, numerical analysis, and other quantitative domains. Our training pipeline combines (i) a trajectory-centric synthetic data generator that yielded 144,000 tagged notebook episodes, (ii) a dual-reward framework blending a lightweight tag-based structural signal with a Hierarchical Reward Model (HRM) that scores both single-step soundness and end-to-end coherence, and (iii) a memory-optimized implementation of Group Relative Policy Optimization (GRPO) featuring KV-cache reuse, sequential generation, and reference-model sharding. A cosine curriculum smoothly shifts emphasis from structural fidelity to semantic depth, reducing the format collapse and verbosity that often plague RL-aligned LLMs. A central design choice in Datarus is its dual reasoning interface. In agentic mode the model produces ReAct-tagged steps that invoke Python tools to execute real code; in reflection mode it outputs compact Chain-of-Thought (CoT) traces delimited by dedicated tags. On demanding postgraduate-level problems, Datarus exhibits an "AHA-moment" pattern: it sketches hypotheses, revises them once or twice, and converges, avoiding the circular, token-inflating loops common to contemporary systems. Across standard public benchmarks, Datarus surpasses similarly sized models and even reaches the level of larger reasoning models such as QwQ-32B, achieving up to 30% higher accuracy on AIME 2024/2025 and LiveCodeBench while emitting 18-49% fewer tokens per solution.
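One concrete piece mentioned above is the cosine curriculum that shifts emphasis from structural fidelity to semantic depth; a minimal sketch of such a schedule (the exact blend used in Datarus is an assumption here, not taken from the paper) looks like:

```python
# Sketch of a cosine curriculum blending two reward terms over training steps.
# The specific weighting is assumed for illustration, not the Datarus-R1 implementation.
import math

def blended_reward(structural_r, semantic_r, step, total_steps):
    w = 0.5 * (1.0 + math.cos(math.pi * step / total_steps))  # weight decays 1 -> 0 over training
    return w * structural_r + (1.0 - w) * semantic_r

for step in (0, 2500, 5000, 7500, 10000):
    print(step, round(blended_reward(structural_r=1.0, semantic_r=0.0,
                                     step=step, total_steps=10000), 3))
```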
Model Monitoring
Abstract
In this work, we describe a new software model-checking algorithm called GPS. GPS treats the task of model checking a program as a directed search of the program states, guided by a compositional, summary-based static analysis. The summaries produced by static analysis are used both to prune away infeasible paths and to drive test generation to reach new, unexplored program states. GPS can find both proofs of safety and counter-examples to safety (i.e., inputs that trigger bugs), and features a novel two-layered search strategy that renders it particularly efficient at finding bugs in programs featuring long, input-dependent error paths. To make GPS refutationally complete (in the sense that it will find an error if one exists, if it is allotted enough time), we introduce an instrumentation technique and show that it helps GPS achieve refutation-completeness without sacrificing overall performance. We benchmarked GPS on a suite of programs drawn both from the Software Verification Competition (SV-COMP) and from prior literature, and found that our implementation of GPS outperforms state-of-the-art software model checkers (including the top performers in the SV-COMP ReachSafety-Loops category), both in terms of the number of benchmarks solved and in terms of running time.
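As a greatly simplified, generic sketch of summary-guided directed search (a toy state space, heuristic, and pruning predicate, not the GPS implementation), the idea of exploring states in priority order while pruning successors a static over-approximation rules out can be written as:

```python
# Toy sketch of summary-guided directed state search (not GPS itself): explore states in
# priority order, pruning successors that a static "summary" proves cannot reach an error.
import heapq

def successors(state):              # toy transition system over integers
    return [state + 1, state * 2]

def is_error(state):
    return state == 37

def summary_may_reach_error(state):
    # Stand-in for a compositional static analysis: since both transitions only increase
    # the state, anything above the error value can be soundly pruned.
    return state <= 37

def heuristic(state):
    return abs(37 - state)          # drive the search toward the suspected error

def directed_search(initial, budget=10_000):
    frontier, seen = [(heuristic(initial), initial)], {initial}
    while frontier and budget:
        budget -= 1
        _, state = heapq.heappop(frontier)
        if is_error(state):
            return state
        for nxt in successors(state):
            if nxt not in seen and summary_may_reach_error(nxt):
                seen.add(nxt)
                heapq.heappush(frontier, (heuristic(nxt), nxt))
    return None

print("counter-example state:", directed_search(1))
```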
Abstract
Understanding the nuanced performance of machine learning models is essential for responsible deployment, especially in high-stakes domains like healthcare and finance. This paper introduces a novel framework, Conformalized Exceptional Model Mining, which combines the rigor of Conformal Prediction with the explanatory power of Exceptional Model Mining (EMM). The proposed framework identifies cohesive subgroups within data where model performance deviates exceptionally, highlighting regions of both high confidence and high uncertainty. We develop a new model class, mSMoPE (multiplex Soft Model Performance Evaluation), which quantifies uncertainty through conformal prediction's rigorous coverage guarantees. By defining a new quality measure, Relative Average Uncertainty Loss (RAUL), our framework isolates subgroups with exceptional performance patterns in multi-class classification and regression tasks. Experimental results across diverse datasets demonstrate the framework's effectiveness in uncovering interpretable subgroups that provide critical insights into model behavior. This work lays the groundwork for enhancing model interpretability and reliability, advancing the state-of-the-art in explainable AI and uncertainty quantification.
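The coverage guarantees referenced above come from conformal prediction; a minimal sketch of standard split conformal intervals for regression (a textbook construction, not the paper's mSMoPE model class or RAUL measure) is:

```python
# Standard split conformal prediction for regression, shown as background; the paper's
# subgroup-mining framework builds on coverage guarantees of this kind. Data is synthetic.
import numpy as np
from sklearn.datasets import make_regression
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import train_test_split

X, y = make_regression(n_samples=2000, n_features=10, noise=10.0, random_state=0)
X_fit, X_rest, y_fit, y_rest = train_test_split(X, y, test_size=0.5, random_state=0)
X_cal, X_test, y_cal, y_test = train_test_split(X_rest, y_rest, test_size=0.5, random_state=0)

model = RandomForestRegressor(random_state=0).fit(X_fit, y_fit)

alpha = 0.1
residuals = np.abs(y_cal - model.predict(X_cal))
q = np.quantile(residuals, np.ceil((1 - alpha) * (len(residuals) + 1)) / len(residuals))

pred = model.predict(X_test)
covered = np.mean((y_test >= pred - q) & (y_test <= pred + q))
print(f"empirical coverage at alpha={alpha}: {covered:.3f}")  # expected to be around 0.9
```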
Online inference
Abstract
The machine learning community has recently put effort into quantized or low-precision arithmetic to scale large models. This paper proposes performing probabilistic inference in the quantized, discrete parameter space created by these representations, effectively enabling us to learn a continuous distribution using discrete parameters. We consider both 2D densities and quantized neural networks, where we introduce a tractable learning approach using probabilistic circuits. This method offers a scalable solution to manage complex distributions and provides clear insights into model behavior. We validate our approach with various models, demonstrating inference efficiency without sacrificing accuracy. This work advances scalable, interpretable machine learning by utilizing discrete approximations for probabilistic computations.
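As background on what a quantized, discrete parameter space looks like (the paper's probabilistic-circuit inference over such parameters is not reproduced here), a minimal sketch of snapping continuous weights to a small discrete grid is:

```python
# Minimal illustration of a discrete (quantized) parameter space: continuous weights are
# mapped to a k-level uniform grid, so each parameter takes one of finitely many values.
import numpy as np

def quantize(weights, n_levels=16):
    lo, hi = weights.min(), weights.max()
    grid = np.linspace(lo, hi, n_levels)                  # the discrete parameter space
    idx = np.abs(weights[..., None] - grid).argmin(axis=-1)
    return grid[idx], idx

rng = np.random.default_rng(0)
w = rng.normal(size=(4, 4)).astype(np.float32)
w_q, codes = quantize(w)
print("distinct values used:", np.unique(codes).size,
      "max abs quantization error:", np.abs(w - w_q).max())
```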
Abstract
Probabilistic extensions of logic programming languages, such as ProbLog, integrate logical reasoning with probabilistic inference to evaluate probabilities of output relations; however, prior work does not account for potential statistical correlations among input facts. This paper introduces Praline, a new extension to Datalog designed for precise probabilistic inference in the presence of (partially known) input correlations. We formulate the inference task as a constrained optimization problem, where the solution yields sound and precise probability bounds for output facts. However, due to the complexity of the resulting optimization problem, this approach alone often does not scale to large programs. To address scalability, we propose a more efficient $\delta$-exact inference algorithm that leverages constraint solving, static analysis, and iterative refinement. Our empirical evaluation on challenging real-world benchmarks, including side-channel analysis, demonstrates that our method not only scales effectively but also delivers tight probability bounds.
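As a tiny illustration of the underlying idea of bounding an output probability when input correlations are unknown (this recovers the classical Fréchet bounds with a small linear program; Praline's actual encoding handles Datalog programs and partially known correlation constraints), consider two input facts with known marginals only:

```python
# Bound P(A and B) given only the marginals of A and B, leaving their correlation free.
# This recovers the Frechet bounds max(0, pA+pB-1) <= P(A and B) <= min(pA, pB).
from scipy.optimize import linprog

pA, pB = 0.7, 0.6
# Variables: probabilities of the four joint worlds (A,B), (A,~B), (~A,B), (~A,~B).
A_eq = [[1, 1, 1, 1],   # probabilities sum to 1
        [1, 1, 0, 0],   # marginal of A
        [1, 0, 1, 0]]   # marginal of B
b_eq = [1.0, pA, pB]
objective = [1, 0, 0, 0]  # P(A and B)

lower = linprog(objective, A_eq=A_eq, b_eq=b_eq, bounds=[(0, 1)] * 4)
upper = linprog([-c for c in objective], A_eq=A_eq, b_eq=b_eq, bounds=[(0, 1)] * 4)
print(f"P(A and B) in [{lower.fun:.2f}, {-upper.fun:.2f}]")
```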
Machine Learning Operations
Abstract
Emerging expert-specialized Mixture-of-Experts (MoE) architectures, such as DeepSeek-MoE, deliver strong model quality through fine-grained expert segmentation and large top-k routing. However, their scalability is limited by substantial activation memory overhead and costly all-to-all communication. Furthermore, current MoE training systems - primarily optimized for NVIDIA GPUs - perform suboptimally on non-NVIDIA platforms, leaving significant computational potential untapped. In this work, we present X-MoE, a novel MoE training system designed to deliver scalable training performance for next-generation MoE architectures. X-MoE achieves this via several novel techniques, including efficient padding-free MoE training with cross-platform kernels, redundancy-bypassing dispatch, and hybrid parallelism with sequence-sharded MoE blocks. Our evaluation on the Frontier supercomputer, powered by AMD MI250X GPUs, shows that X-MoE scales DeepSeek-style MoEs up to 545 billion parameters across 1024 GPUs - 10x larger than the largest trainable model with existing methods under the same hardware budget, while maintaining high training throughput. The source code of X-MoE is available at https://github.com/Supercomputing-System-AI-Lab/X-MoE.
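For background, here is a generic sketch of the top-k expert routing that drives the activation-memory and all-to-all costs mentioned above (illustration only; X-MoE's contribution lies in how the resulting dispatch and combine are executed at scale, not in this routing logic):

```python
# Generic fine-grained top-k MoE routing sketch (not X-MoE's padding-free kernels).
import torch

tokens, d_model, n_experts, top_k = 8, 16, 64, 6
x = torch.randn(tokens, d_model)
router = torch.nn.Linear(d_model, n_experts, bias=False)
expert_ffns = torch.nn.ModuleList(torch.nn.Linear(d_model, d_model) for _ in range(n_experts))

# Route each token to its top-k experts and combine their outputs with the gate weights.
gates = router(x).softmax(dim=-1)                       # (tokens, n_experts)
weights, experts = torch.topk(gates, k=top_k, dim=-1)   # (tokens, top_k) each
weights = weights / weights.sum(dim=-1, keepdim=True)   # renormalize over selected experts

out = torch.stack([
    sum(w * expert_ffns[int(e)](x[t]) for w, e in zip(weights[t], experts[t]))
    for t in range(tokens)
])
print(out.shape)  # torch.Size([8, 16])
```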

Interests not found

We did not find any papers matching the interests below. Try other terms, and consider whether the content exists on arxiv.org.
  • Machine Learning Deployment
  • Machine Learning Validation
  • MLOps
You can edit or add more interests any time.

Unsubscribe from these updates