Hi!

Your personalized paper recommendations for 19 to 23 January 2026.
RMIT University
Paper visualization
AI Insights
  • Machine learning (ML) pipelines: A series of processes that transform raw data into actionable insights, often involving multiple models and algorithms. (ML: 0.98)
  • The paper discusses the importance of transparency and accountability in machine learning (ML) pipelines, highlighting the need for fine-grained traceability. (ML: 0.98)
  • The proposed framework for ML pipeline provenance and transparency has the potential to significantly improve the accountability and trustworthiness of ML decision-making. (ML: 0.97)
  • The framework relies on the availability of detailed data about user interactions and model decisions, which may not always be feasible or practical. (ML: 0.97)
  • Fine-grained traceability is a crucial aspect of transparent and accountable ML pipelines, enabling users to understand how decisions are made and identify potential biases or errors. (ML: 0.97)
  • The results show that the framework can effectively capture complex interactions between data, models, and users, enabling more transparent and accountable ML decision-making. (ML: 0.97)
  • Fine-grained traceability: The ability to track the flow of data through a machine learning pipeline at a detailed level, including model decisions and user interactions. (ML: 0.96)
  • The proposed framework is still in its early stages, and further research is needed to evaluate its scalability and applicability to real-world ML pipelines. (ML: 0.94)
  • The proposed framework is demonstrated on several real-world datasets, including brain disorders prediction and bitcoin transaction tracking. (ML: 0.93)
  • The authors propose a framework for ML pipeline provenance and transparency, leveraging graph neural networks (GNNs) to track data flow and model decisions. (ML: 0.91)
Abstract
Modern machine learning systems are increasingly realised as multistage pipelines, yet existing transparency mechanisms typically operate at a model level: they describe what a system is and why it behaves as it does, but not how individual data samples are operationally recorded, tracked, and verified as they traverse the pipeline. This absence of verifiable, sample-level traceability leaves practitioners and users unable to determine whether a specific sample was used, when it was processed, or whether the corresponding records remain intact over time. We introduce FG-Trac, a model-agnostic framework that establishes verifiable, fine-grained sample-level traceability throughout machine learning pipelines. FG-Trac defines an explicit mechanism for capturing and verifying sample lifecycle events across preprocessing and training, computes contribution scores explicitly grounded in training checkpoints, and anchors these traces to tamper-evident cryptographic commitments. The framework integrates without modifying model architectures or training objectives, reconstructing complete and auditable data-usage histories with practical computational overhead. Experiments on a canonical convolutional neural network and a multimodal graph learning pipeline demonstrate that FG-Trac preserves predictive performance while enabling machine learning systems to furnish verifiable evidence of how individual samples were used and propagated during model execution.
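The abstract describes anchoring sample lifecycle events to tamper-evident cryptographic commitments. FG-Trac's own implementation is not shown here; the snippet below is only a minimal sketch of that general idea, assuming a hash-chained per-sample event log whose head digest serves as the commitment (class and method names are illustrative, not the paper's).

```python
# Minimal sketch (not FG-Trac's actual implementation): a hash-chained,
# per-sample event log whose head commitment makes later tampering evident.
import hashlib
import json
import time


def _digest(payload: dict, prev_hash: str) -> str:
    # Deterministic serialisation so the same event always hashes identically.
    body = json.dumps(payload, sort_keys=True) + prev_hash
    return hashlib.sha256(body.encode("utf-8")).hexdigest()


class SampleTrace:
    """Records lifecycle events (preprocessing, training) for one data sample."""

    def __init__(self, sample_id: str):
        self.sample_id = sample_id
        self.events = []          # list of (payload, hash) pairs
        self.head = "0" * 64      # genesis hash

    def record(self, stage: str, **details) -> str:
        payload = {"sample_id": self.sample_id, "stage": stage,
                   "time": time.time(), **details}
        self.head = _digest(payload, self.head)
        self.events.append((payload, self.head))
        return self.head          # commitment to the whole history so far

    def verify(self) -> bool:
        # Recompute the chain; any edited or deleted event changes the head.
        h = "0" * 64
        for payload, recorded in self.events:
            h = _digest(payload, h)
            if h != recorded:
                return False
        return h == self.head


trace = SampleTrace("img_00042")
trace.record("preprocess", op="normalize")
trace.record("train", epoch=3, checkpoint="ckpt_0003", contribution=0.012)
assert trace.verify()
```

Externally anchoring the head hash after each stage (e.g. in an append-only store) is what would make later edits to the recorded history detectable, which is the tamper-evidence property the abstract refers to.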
Why are we recommending this paper?
Due to your Interest in Data Transparency

This paper directly addresses the need for transparency in machine learning pipelines, aligning with your interest in data transparency and traceability. It offers a mechanism for tracking data samples, a critical component for understanding and mitigating potential biases within these systems.
Bar Ilan University
Paper visualization
AI Insights
  • The study uses a detailed sampling algorithm to ensure geographic and occupational diversity in the dataset used for evaluation. (ML: 0.99)
  • The study covers three main experiments: Recommendations, Salary Estimation, and Representations. (ML: 0.98)
  • Evaluative prompts are listed, including both positive and negative framings. (ML: 0.98)
  • A total of 17 models were evaluated for their recommendation capabilities, 14 for salary estimation, and 12 for representation tasks. (ML: 0.97)
  • The models include both proprietary and open-weight models, with some models being specifically designed for certain domains such as startup ideas or job titles. (ML: 0.96)
  • The AI job title classification system prompt is provided, which classifies job titles into two categories: explicitly related to Artificial Intelligence or Machine Learning, and not. (ML: 0.93)
  • The document appears to be a research paper or report on the evaluation of various AI models for their ability to provide recommendations, estimate salaries, and represent knowledge in different domains. (ML: 0.90)
Abstract
Large language models (LLMs) are increasingly employed for decision-support across multiple domains. We investigate whether these models display a systematic preferential bias in favor of artificial intelligence (AI) itself. Across three complementary experiments, we find consistent evidence of pro-AI bias. First, we show that LLMs disproportionately recommend AI-related options in response to diverse advice-seeking queries, with proprietary models doing so almost deterministically. Second, we demonstrate that models systematically overestimate salaries for AI-related jobs relative to closely matched non-AI jobs, with proprietary models overestimating AI salaries by a further 10 percentage points. Finally, probing internal representations of open-weight models reveals that "Artificial Intelligence" exhibits the highest similarity to generic prompts for academic fields under positive, negative, and neutral framings alike, indicating valence-invariant representational centrality. These patterns suggest that LLM-generated advice and valuation can systematically skew choices and perceptions in high-stakes decisions.
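The third experiment probes representational centrality. As a rough illustration of that kind of probe (not the authors' method), the sketch below ranks fields by their average cosine similarity to differently framed prompts; `embed` is a hypothetical stand-in for extracting a hidden-state vector from an open-weight model, and here it just returns deterministic pseudo-random vectors so the snippet runs on its own.

```python
# Illustrative only: rank academic fields by cosine similarity of their
# representations to positively/negatively/neutrally framed prompts.
import hashlib
import numpy as np


def embed(text: str, dim: int = 64) -> np.ndarray:
    # Hypothetical placeholder for a hidden-state extractor of an open-weight
    # LLM; returns a deterministic pseudo-random vector for the given text.
    seed = int.from_bytes(hashlib.sha256(text.encode()).digest()[:4], "big")
    return np.random.default_rng(seed).standard_normal(dim)


def cosine(u: np.ndarray, v: np.ndarray) -> float:
    return float(u @ v / (np.linalg.norm(u) * np.linalg.norm(v)))


fields = ["Artificial Intelligence", "History", "Chemistry", "Economics"]
framings = [
    "Which academic field is the most promising to study?",   # positive
    "Which academic field is the least promising to study?",  # negative
    "Name an academic field.",                                 # neutral
]

# Average similarity across framings; a valence-invariant ranking would put
# the same field on top under all three framings.
scores = {f: np.mean([cosine(embed(f), embed(p)) for p in framings]) for f in fields}
for field, score in sorted(scores.items(), key=lambda kv: -kv[1]):
    print(f"{field:25s} {score:+.3f}")
```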
Why are we recommending this paper?
Due to your Interest in AI Bias

Given your focus on AI bias and fairness, this research investigating pro-AI bias in LLMs is highly relevant. The experiments provide empirical evidence of a systematic bias, directly addressing concerns about fairness in AI systems.
University of California, Santa Barbara
AI Insights
  • Fairness: The principle of ensuring that machine learning models do not discriminate against certain groups or individuals based on protected characteristics such as race, gender, or age. (ML: 0.99)
  • The paper demonstrates the potential of using Chernoff Information as a fairness metric in machine learning models. (ML: 0.97)
  • The paper explores the connection between Chernoff Information and fairness in machine learning models. (ML: 0.97)
  • The paper presents several experiments to demonstrate the effectiveness of using Chernoff Information as a fairness metric. (ML: 0.97)
  • The connection between noise and differential privacy is crucial for understanding the impact of noise on model fairness. (ML: 0.96)
  • Chernoff Information is used as a privacy constraint for adversarial classification, providing a new perspective on fairness. (ML: 0.91)
  • The authors provide a comprehensive overview of related work in the field of fairness and differential privacy. (ML: 0.91)
  • The authors investigate the relationship between noise and differential privacy, highlighting the importance of understanding this connection. (ML: 0.88)
  • Chernoff Information: A measure of the difference between two probability distributions, used to quantify the amount of information gained from observing one distribution given another. (ML: 0.85)
  • Differential Privacy: A framework for protecting individual data by adding noise to ensure that an attacker cannot infer sensitive information about a single individual. (ML: 0.84)
Abstract
Fairness and privacy are two vital pillars of trustworthy machine learning. Despite extensive research on these individual topics, the relationship between fairness and privacy has received significantly less attention. In this paper, we utilize the information-theoretic measure Chernoff Information to highlight the data-dependent nature of the relationship among the triad of fairness, privacy, and accuracy. We first define Noisy Chernoff Difference, a tool that allows us to analyze the relationship among the triad simultaneously. We then show that for synthetic data, this value behaves in 3 distinct ways (depending on the distribution of the data). We highlight the data distributions involved in these cases and explore their fairness and privacy implications. Additionally, we show that Noisy Chernoff Difference acts as a proxy for the steepness of the fairness-accuracy curves. Finally, we propose a method for estimating Chernoff Information on data from unknown distributions and utilize this framework to examine the triad dynamic on real datasets. This work builds towards a unified understanding of the fairness-privacy-accuracy relationship and highlights its data-dependent nature.
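For readers unfamiliar with the underlying quantity: for two discrete distributions P and Q, Chernoff Information is C(P, Q) = -min over 0 ≤ s ≤ 1 of log Σ_x P(x)^s Q(x)^(1-s). The snippet below is a minimal sketch of that textbook definition (not the paper's Noisy Chernoff Difference, which builds on it).

```python
# Textbook Chernoff Information between two discrete distributions,
# C(P, Q) = -min_{0<=s<=1} log sum_x P(x)^s Q(x)^(1-s), via a grid search.
import numpy as np


def chernoff_information(p: np.ndarray, q: np.ndarray, grid: int = 1001) -> float:
    p = p / p.sum()
    q = q / q.sum()
    s = np.linspace(0.0, 1.0, grid)
    # Chernoff coefficient for each s: sum_x p(x)^s q(x)^(1-s), shape (grid,)
    coeff = (p[None, :] ** s[:, None] * q[None, :] ** (1 - s[:, None])).sum(axis=1)
    return float(-np.log(coeff).min())


p = np.array([0.7, 0.2, 0.1])
q = np.array([0.1, 0.3, 0.6])
print(f"C(P, Q) = {chernoff_information(p, q):.4f}")  # symmetric: C(P,Q) = C(Q,P)
```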
Why are we recommending this paper?
Due to your Interest in Data Ethics

This paper explores the complex relationship between privacy and fairness, a key area of your interests. Utilizing information-theoretic measures, it offers a nuanced approach to understanding data-dependent trade-offs, which is crucial for addressing fairness challenges.
University of Central Florida
AI Insights
  • The article emphasizes the need for interdisciplinary collaboration between computer scientists, psychologists, and other experts to develop more transparent and explainable AI systems. (ML: 0.99)
  • The article highlights the challenges associated with achieving transparency in AI, including the complexity of AI algorithms, the lack of standardization, and the difficulty of explaining complex decisions to non-experts. (ML: 0.98)
  • They also discuss the importance of human factors in AI design, including user experience, usability, and accessibility. (ML: 0.97)
  • It also highlights the potential benefits of transparency in AI, such as improved trust, accountability, and decision-making quality. (ML: 0.97)
  • The authors argue that transparency is essential for ensuring accountability, fairness, and reliability in AI decision-making processes. (ML: 0.97)
  • It highlights the challenges associated with achieving transparency in AI and proposes a framework for evaluating explainability in AI systems. (ML: 0.97)
  • The authors emphasize the need for interdisciplinary collaboration to develop more transparent and explainable AI systems. (ML: 0.97)
  • Human factors: The study of how humans interact with technology, including user experience, usability, and accessibility. (ML: 0.96)
  • Transparency: The degree to which an AI system is open and transparent about its decision-making processes and data used. (ML: 0.96)
  • Explainability: The ability of an AI system to provide clear and understandable explanations for its decisions or actions. (ML: 0.96)
  • The authors propose a framework for evaluating explainability in AI systems, which includes metrics such as accuracy, precision, recall, F1-score, and mean absolute error. (ML: 0.96)
  • The article does not provide a comprehensive review of existing literature on transparency in AI. (ML: 0.96)
  • The article discusses the concept of transparency in artificial intelligence (AI) and its importance for building trust between humans and AI systems. (ML: 0.95)
  • The article concludes that transparency in AI is essential for building trust between humans and AI systems. (ML: 0.95)
Abstract
Objective: This paper develops a theoretical framework explaining when and why AI explanations enhance versus impair human decision-making. Background: Transparency is advocated as universally beneficial for human-AI interaction, yet identical AI explanations improve decision quality in some contexts but impair it in others. Current theories--trust calibration, cognitive load, and self-determination--cannot fully account for this paradox. Method: The framework models autonomy as a continuous stochastic process influenced by information-induced cognitive load. Using stochastic control theory, autonomy evolution is formalized as geometric Brownian motion with information-dependent drift, and optimal transparency is derived via Hamilton-Jacobi-Bellman equations. Monte Carlo simulations validate theoretical predictions. Results: Mathematical analysis generates five testable predictions about disengagement timing, working memory moderation, autonomy trajectory shapes, and optimal information levels. Computational solutions demonstrate that dynamic transparency policies outperform both maximum and minimum transparency by adapting to real-time cognitive state. The optimal policy exhibits threshold structure: provide information when autonomy is high and accumulated load is low; withhold when resources are depleted. Conclusion: Transparency effects depend on dynamic cognitive resource depletion rather than static design choices. Information provision triggers metacognitive processing that reduces perceived control when cognitive load exceeds working memory capacity. Application: The framework provides design principles for adaptive AI systems: adjust transparency based on real-time cognitive state, implement information budgets respecting capacity limits, and personalize thresholds based on individual working memory capacity.
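The abstract models autonomy as geometric Brownian motion with information-dependent drift and validates threshold-style transparency policies by Monte Carlo simulation. The sketch below is a toy version of that setup with made-up drift, load, and capacity parameters (not the paper's equations), intended only to show how a threshold policy can outperform both always-on and always-off transparency.

```python
# Toy Monte Carlo sketch: autonomy follows a geometric Brownian motion whose
# drift depends on the transparency level and accumulated cognitive load.
import numpy as np

rng = np.random.default_rng(0)
dt, steps, runs = 0.01, 800, 300
sigma = 0.15                         # volatility of the autonomy process


def drift(info: float, load: float) -> float:
    # Information raises drift at low load but is penalised once accumulated
    # load exceeds a working-memory-capacity proxy of 1.0 (illustrative values).
    return 0.10 * info - 0.30 * max(load - 1.0, 0.0)


def mean_final_autonomy(policy) -> float:
    finals = []
    for _ in range(runs):
        a, load = 1.0, 0.0
        for _ in range(steps):
            info = policy(a, load)
            load += info * dt        # providing information accumulates load
            a *= 1.0 + drift(info, load) * dt + sigma * np.sqrt(dt) * rng.standard_normal()
            a = max(a, 1e-9)
        finals.append(a)
    return float(np.mean(finals))


policies = {
    "maximum transparency": lambda a, load: 1.0,
    "minimum transparency": lambda a, load: 0.0,
    # Provide information only while autonomy is high and load is low.
    "threshold policy":     lambda a, load: 1.0 if (a > 0.8 and load < 1.0) else 0.0,
}
for name, policy in policies.items():
    print(f"{name:21s} mean final autonomy = {mean_final_autonomy(policy):.3f}")
```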
Why are we recommending this paper?
Due to your Interest in AI Transparency

This paper tackles the challenges of transparency in explainable AI, directly addressing your interest in AI transparency and its impact on human decision-making. The theoretical framework offers insights into how explanations can inadvertently impair understanding.
University of York
Paper visualization
AI Insights
  • The adoption of conformance arguments requires significant expertise and time, but can improve the rigour and consistency of assessments. (ML: 0.99)
  • Future work should focus on education and training programs for stakeholders and tools to support argument construction and evaluation. (ML: 0.99)
  • Conformance arguments can be used to demonstrate compliance with data protection principles, improving the rigour and consistency of assessments. (ML: 0.95)
  • There is limited guidance on content, structure, and presentation of conformance arguments. (ML: 0.95)
  • Conformance argument: a structured argument that demonstrates how an organisation's processing activities comply with specific data protection principles. (ML: 0.94)
  • Conformance arguments are a structured approach to demonstrating compliance with data protection principles, which can improve the rigour and consistency of assessments. (ML: 0.94)
  • Conformance arguments are like a blueprint for showing how an organisation's handling of personal data meets certain rules and guidelines. (ML: 0.94)
  • The Governance of Privacy (2013) by C. J. Bennett and C. D. Raab discusses the importance of transparency in data protection. (ML: 0.92)
  • It's a way to demonstrate that you're following best practices and being transparent about your data processing activities. (ML: 0.91)
  • Data Protection Principle: a principle set out in legislation or regulation that governs the handling of personal data. (ML: 0.90)
Abstract
We show how conformance arguments can be used by organisations to substantiate claims of conformance to data protection principles. Use of conformance arguments can improve the rigour and consistency with which these organisations, supervisory authorities, certification bodies and data subjects can assess the truth of these claims.
Why are we recommending this paper?
Due to your Interest in Data Ethics

This paper's focus on conformance arguments for data protection aligns with your interest in data ethics and principles. It provides a framework for organizations to demonstrate compliance, contributing to a more transparent and accountable data governance process.
Delft University of Technology
AI Insights
  • Ecological validity: The extent to which the results of an experiment can be generalized to real-world situations. (ML: 0.99)
  • Fair compensation: Ensuring that participants receive a fair wage for their work, considering factors such as task complexity and required expertise. (ML: 0.98)
  • The Incentive-Tuning Framework provides a standardized solution for designing effective incentive schemes in human-AI decision-making studies. (ML: 0.97)
  • The Incentive-Tuning Framework is a standardized solution for designing and documenting effective incentive schemes in human-AI decision-making studies. (ML: 0.97)
  • Incentive scheme: A system of rewards or penalties designed to motivate participants in human-AI decision-making studies. (ML: 0.97)
  • The Incentive-Tuning Framework aims to address methodological challenges surrounding incentive design and provide a solution for researchers to tune 'appropriate' incentive schemes for their specific studies. (ML: 0.97)
  • A well-designed framework can foster a standardized, systematic, and comprehensive approach to designing effective incentive schemes. (ML: 0.96)
  • Researchers should prioritize intentional design and alignment with research goals when employing an incentive scheme. (ML: 0.96)
  • Researchers should explicitly identify the purpose of employing an incentive scheme to ensure intentional design and alignment with research goals. (ML: 0.95)
  • The framework consists of five steps: identifying the purpose of employing an incentive scheme, coming up with a base pay, designing a bonus structure, gathering participant feedback, and reflecting on design implications. (ML: 0.88)
Abstract
AI has revolutionised decision-making across various fields. Yet human judgement remains paramount for high-stakes decision-making. This has fueled explorations of collaborative decision-making between humans and AI systems, aiming to leverage the strengths of both. To explore this dynamic, researchers conduct empirical studies, investigating how humans use AI assistance for decision-making and how this collaboration impacts results. A critical aspect of conducting these studies is the role of participants, often recruited through crowdsourcing platforms. The validity of these studies hinges on the behaviours of the participants, hence effective incentives that can potentially affect these behaviours are a key part of designing and executing these studies. In this work, we aim to address the critical role of incentive design for conducting empirical human-AI decision-making studies, focusing on understanding, designing, and documenting incentive schemes. Through a thematic review of existing research, we explored the current practices, challenges, and opportunities associated with incentive design for human-AI decision-making empirical studies. We identified recurring patterns, or themes, such as what comprises the components of an incentive scheme, how incentive schemes are manipulated by researchers, and the impact they can have on research outcomes. Leveraging the acquired understanding, we curated a set of guidelines to aid researchers in designing effective incentive schemes for their studies, called the Incentive-Tuning Framework, outlining how researchers can undertake, reflect on, and document the incentive design process. By advocating for a standardised yet flexible approach to incentive design and contributing valuable insights along with practical tools, we hope to pave the way for more reliable and generalizable knowledge in the field of human-AI decision-making.
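The five Incentive-Tuning steps listed in the insights above (purpose, base pay, bonus structure, participant feedback, reflection) lend themselves to explicit documentation. The record below is a purely illustrative way to capture them in code; the field names and example values are assumptions for the sketch, not taken from the paper.

```python
# Hypothetical documentation aid (field names and values are illustrative):
# a record for the five Incentive-Tuning steps, so a study's incentive scheme
# is designed and reported explicitly.
from dataclasses import dataclass, field


@dataclass
class IncentiveScheme:
    purpose: str                      # why an incentive is used at all
    base_pay: str                     # e.g. an hourly-equivalent fair wage
    bonus_structure: str              # how performance-contingent pay works
    participant_feedback: list[str] = field(default_factory=list)
    design_reflections: list[str] = field(default_factory=list)


scheme = IncentiveScheme(
    purpose="Encourage genuine engagement with AI advice rather than random clicking",
    base_pay="Flat $4.00 for a 20-minute session (~$12/hour)",
    bonus_structure="$0.10 per correct final decision, capped at $2.00",
)
scheme.participant_feedback.append("Pilot participants found the bonus rule unclear")
scheme.design_reflections.append("Clarify the bonus wording before the main study")
print(scheme)
```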
Why are we recommending this paper?
Due to your Interest in AI Bias
The University of Tennessee
Paper visualization
AI Insights
  • The visibility and availability of sources impact not only how people navigate and access information but also how they think, write, and integrate evidence in shaping their ideas and arguments. (ML: 0.98)
  • Design can reverse the negative impact of information overload on information evaluation and critical thinking. (ML: 0.98)
  • Critical thinking: The process of objectively evaluating evidence and arguments to form a judgment or decision. (ML: 0.97)
  • Designers should consider the trade-off between workflow fluency and cognitive persistence when designing conversational AI systems. (ML: 0.95)
  • External working memory: A concept referring to the use of external tools or interfaces to support cognitive processes such as memory and reasoning. (ML: 0.94)
  • Conversational AI systems may need interfaces tailored to different tasks and information densities, or adaptive interfaces. (ML: 0.93)
  • The study highlights that source transparency is not a fixed property but rather a contextual and intersectional aspect of design. (ML: 0.90)
  • Information density: The amount of information presented in a given space, often measured by the number of citations or references. (ML: 0.89)
  • There is a trade-off between interfaces that support 'flow' and those that enhance 'verification'. (ML: 0.89)
  • Source transparency: The visibility and availability of sources in an interface. (ML: 0.69)
Abstract
Conversational AI systems increasingly function as primary interfaces for information seeking, yet how they present sources to support information evaluation remains under-explored. This paper investigates how source transparency design shapes interactive information seeking, trust, and critical engagement. We conducted a controlled between-subjects experiment (N=372) comparing four source presentation interfaces - Collapsible, Hover Card, Footer, and Aligned Sidebar - varying in visibility and accessibility. Using fine-grained behavioral analysis and automated critical thinking assessment, we found that interface design fundamentally alters exploration strategies and evidence integration. While the Hover Card interface facilitated seamless, on-demand verification during the task, the Aligned Sidebar uniquely mitigated the negative effects of information overload: as citation density increased, Sidebar users demonstrated significantly higher critical thinking and synthesis scores compared to other conditions. Our results highlight a trade-off between designs that support workflow fluency and those that enforce reflective verification, offering practical implications for designing adaptive and responsible conversational AI that fosters critical engagement with AI generated content.
Why are we recommending this paper?
Due to your Interest in Data Transparency
Timaeus
Paper visualization
AI Insights
  • Patterning is a method for controlling neural network training by identifying which data shapes which internal structures and intervening accordingly.
  • The mathematical framework of patterning is based on linear response theory, where susceptibilities measure how observables respond to infinitesimal shifts in the data distribution.
  • Experiments demonstrate that susceptibility measurements can be used to steer circuit formation in a small language model.
  • Patterning has potential applications to AI alignment, where the goal is to control how models generalize beyond their training distribution.
  • Patterning: the dual problem to interpretability, where given a desired form of generalization, one determines what training data produces it.
  • Susceptibilities: measures of how observables respond to infinitesimal shifts in the data distribution.
  • Linear response theory: a framework for understanding how systems respond to small perturbations.
  • Patterning provides a principled approach to steering generalization, with potential applications to AI alignment and other areas.
  • The ability to identify which data shapes which internal structures and intervene accordingly offers a promising direction for controlling neural network training.
  • Experiments use small models (3M parameters) and simple tasks. (ML: 0.97)
Abstract
Mechanistic interpretability aims to understand how neural networks generalize beyond their training data by reverse-engineering their internal structures. We introduce patterning as the dual problem: given a desired form of generalization, determine what training data produces it. Our approach is based on susceptibilities, which measure how posterior expectation values of observables respond to infinitesimal shifts in the data distribution. Inverting this linear response relationship yields the data intervention that steers the model toward a target internal configuration. We demonstrate patterning in a small language model, showing that re-weighting training data along principal susceptibility directions can accelerate or delay the formation of structure, such as the induction circuit. In a synthetic parentheses balancing task where multiple algorithms achieve perfect training accuracy, we show that patterning can select which algorithm the model learns by targeting the local learning coefficient of each solution. These results establish that the same mathematical framework used to read internal structure can be inverted to write it.
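The key ingredient here is the susceptibility: a linear-response quantity measuring how a posterior expectation shifts when a training sample is upweighted. The sketch below estimates such susceptibilities as posterior covariances and reweights the data along them; the scaling conventions, toy data, and function names are assumptions for illustration, not the paper's implementation.

```python
# Minimal linear-response sketch (conventions are illustrative): estimate how
# an observable responds to upweighting each training sample, then reweight
# the data along that susceptibility direction. Assumes posterior draws of
# the parameters are available (e.g. from SGLD).
import numpy as np


def susceptibilities(obs: np.ndarray, per_sample_loss: np.ndarray,
                     beta: float = 1.0) -> np.ndarray:
    """obs: (S,) value of the observable at each posterior draw.
    per_sample_loss: (S, N) loss of each training sample at each draw.
    Linear response: the shift in <obs> from slightly upweighting sample k is
    proportional to -Cov(obs, loss_k) over the posterior (scalings vary)."""
    obs_c = obs - obs.mean()
    loss_c = per_sample_loss - per_sample_loss.mean(axis=0, keepdims=True)
    return -beta * (obs_c[:, None] * loss_c).mean(axis=0)


def reweight(chi: np.ndarray, eta: float) -> np.ndarray:
    # eta > 0 upweights samples that push the observable up (accelerate the
    # corresponding structure); eta < 0 would delay it instead.
    w = np.exp(eta * (chi - chi.max()))
    return w / w.sum()


# Toy data standing in for real posterior draws: 2000 draws, 400 samples,
# with the observable deliberately tied to the first 10 samples so they
# should dominate the estimated susceptibilities.
rng = np.random.default_rng(1)
S, N = 2000, 400
per_sample_loss = rng.normal(size=(S, N))
obs = -per_sample_loss[:, :10].mean(axis=1) + 0.1 * rng.normal(size=S)

chi = susceptibilities(obs, per_sample_loss)
weights = reweight(chi, eta=5.0)
top = np.argsort(np.abs(chi))[-5:]
print("samples with largest |susceptibility|:", top)
print("their new sampling weights:", np.round(weights[top], 5))
```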
Why are we recommending this paper?
Due to your Interest in Data Representation
Columbia University
AI Insights
  • The structural regularizer can be used as a regularization term in machine learning models to improve the quality of node representations. (ML: 0.93)
  • The structural regularizer is a convex function that encourages node representations to be similar for connected nodes in a graph. (ML: 0.92)
  • The structural regularizer can be used to encourage node representations to be similar for connected nodes in a graph. (ML: 0.92)
  • Graph Laplacian: A symmetric matrix representing the structure of a graph, defined as L = D - W, where D is the degree matrix and W is the weight matrix. (ML: 0.89)
  • The structural regularizer is defined as the sum of weighted squared differences between node representations. (ML: 0.89)
  • Structural Regularizer: A function R_struct(Z; S) that encourages node representations to be similar for connected nodes in a graph. (ML: 0.87)
  • Representation Matrix: A matrix Z ∈ ℝ^(n×d), where each row z_i represents the d-dimensional representation of node i. (ML: 0.86)
  • This property makes the regularizer a convex function. (ML: 0.84)
  • The regularizer can be rewritten in terms of the graph Laplacian and the representation matrix (see the sketch after this list). (ML: 0.84)
  • The graph Laplacian is a positive semidefinite matrix, which implies that the regularizer is also positive semidefinite. (ML: 0.83)
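The bullets above state that the structural regularizer is a weighted sum of squared differences and that it can be rewritten via the graph Laplacian. The sketch below checks that standard identity numerically, assuming a symmetric non-negative weight matrix W; whether a factor of 2 is absorbed into W is a convention choice, and the code is an illustration rather than the paper's implementation.

```python
# Structural regularizer over node representations:
# R_struct(Z; S) = sum_{i,j} w_ij ||z_i - z_j||^2 = 2 * tr(Z^T L Z), L = D - W.
import numpy as np


def structural_regularizer(Z: np.ndarray, W: np.ndarray) -> float:
    """Pairwise form: sum over ordered pairs (i, j) of w_ij * ||z_i - z_j||^2."""
    diff = Z[:, None, :] - Z[None, :, :]          # (n, n, d) pairwise differences
    return float((W * (diff ** 2).sum(axis=2)).sum())


def laplacian_form(Z: np.ndarray, W: np.ndarray) -> float:
    """Equivalent quadratic form 2 * tr(Z^T L Z) with L = D - W (W symmetric)."""
    L = np.diag(W.sum(axis=1)) - W
    return float(2.0 * np.trace(Z.T @ L @ Z))


rng = np.random.default_rng(0)
n, d = 6, 3
A = rng.random((n, n))
W = (A + A.T) / 2                     # symmetric non-negative edge weights
np.fill_diagonal(W, 0.0)
Z = rng.standard_normal((n, d))       # representation matrix, one row per node

assert np.isclose(structural_regularizer(Z, W), laplacian_form(Z, W))
print("R_struct(Z; S) =", structural_regularizer(Z, W))
```

Because L is positive semidefinite, the quadratic form is non-negative and convex in Z, which is the convexity property the insights above refer to.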
Abstract
Uncertainty estimation in machine learning has traditionally focused on the prediction stage, aiming to quantify confidence in model outputs while treating learned representations as deterministic and reliable by default. In this work, we challenge this implicit assumption and argue that reliability should be regarded as a first-class property of learned representations themselves. We propose a principled framework for reliable representation learning that explicitly models representation-level uncertainty and leverages structural constraints as inductive biases to regularize the space of feasible representations. Our approach introduces uncertainty-aware regularization directly in the representation space, encouraging representations that are not only predictive but also stable, well-calibrated, and robust to noise and structural perturbations. Structural constraints, such as sparsity, relational structure, or feature-group dependencies, are incorporated to define meaningful geometry and reduce spurious variability in learned representations, without assuming fully correct or noise-free structure. Importantly, the proposed framework is independent of specific model architectures and can be integrated with a wide range of representation learning methods.
Why are we recommending this paper?
Due to your Interest in Data Representation

Interests not found

We did not find any papers that match the interests below. Try other terms, and consider whether the content exists on arxiv.org.
  • Data Fairness
  • AI Fairness
  • AI Ethics
  • Data Bias
You can edit or add more interests any time.