Hi!
Your personalized paper recommendations for 19–23 January 2026.
University of Central Florida
AI Insights - The article emphasizes the need for interdisciplinary collaboration between computer scientists, psychologists, and other experts to develop more transparent and explainable AI systems. (ML: 0.99)👍👎
- The article highlights the challenges associated with achieving transparency in AI, including the complexity of AI algorithms, the lack of standardization, and the difficulty of explaining complex decisions to non-experts. (ML: 0.98)👍👎
- They also discuss the importance of human factors in AI design, including user experience, usability, and accessibility. (ML: 0.97)👍👎
- It also highlights the potential benefits of transparency in AI, such as improved trust, accountability, and decision-making quality. (ML: 0.97)👍👎
- The authors argue that transparency is essential for ensuring accountability, fairness, and reliability in AI decision-making processes. (ML: 0.97)👍👎
- It highlights the challenges associated with achieving transparency in AI and proposes a framework for evaluating explainability in AI systems. (ML: 0.97)👍👎
- The authors emphasize the need for interdisciplinary collaboration to develop more transparent and explainable AI systems. (ML: 0.97)👍👎
- Human factors: The study of how humans interact with technology, including user experience, usability, and accessibility. (ML: 0.96)👍👎
- Transparency: The degree to which an AI system is open and transparent about its decision-making processes and data used. (ML: 0.96)👍👎
- Explainability: The ability of an AI system to provide clear and understandable explanations for its decisions or actions. (ML: 0.96)👍👎
- The authors propose a framework for evaluating explainability in AI systems, which includes metrics such as accuracy, precision, recall, F1-score, and mean absolute error. (ML: 0.96)👍👎
- The article does not provide a comprehensive review of existing literature on transparency in AI. (ML: 0.96)👍👎
- The article discusses the concept of transparency in artificial intelligence (AI) and its importance for building trust between humans and AI systems. (ML: 0.95)👍👎
- The article concludes that transparency in AI is essential for building trust between humans and AI systems. (ML: 0.95)👍👎
Abstract
Objective: This paper develops a theoretical framework explaining when and why AI explanations enhance versus impair human decision-making.
Background: Transparency is advocated as universally beneficial for human-AI interaction, yet identical AI explanations improve decision quality in some contexts but impair it in others. Current theories--trust calibration, cognitive load, and self-determination--cannot fully account for this paradox.
Method: The framework models autonomy as a continuous stochastic process influenced by information-induced cognitive load. Using stochastic control theory, autonomy evolution is formalized as geometric Brownian motion with information-dependent drift, and optimal transparency is derived via Hamilton-Jacobi-Bellman equations. Monte Carlo simulations validate theoretical predictions.
Results: Mathematical analysis generates five testable predictions about disengagement timing, working memory moderation, autonomy trajectory shapes, and optimal information levels. Computational solutions demonstrate that dynamic transparency policies outperform both maximum and minimum transparency by adapting to real-time cognitive state. The optimal policy exhibits threshold structure: provide information when autonomy is high and accumulated load is low; withhold when resources are depleted.
Conclusion: Transparency effects depend on dynamic cognitive resource depletion rather than static design choices. Information provision triggers metacognitive processing that reduces perceived control when cognitive load exceeds working memory capacity.
Application: The framework provides design principles for adaptive AI systems: adjust transparency based on real-time cognitive state, implement information budgets respecting capacity limits, and personalize thresholds based on individual working memory capacity.
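To make the Method described in this abstract more concrete, here is a toy Monte Carlo sketch of the kind of model it outlines: autonomy evolves as geometric Brownian motion whose drift depends on whether an explanation is shown, and accumulated load turns that drift negative once it exceeds a working-memory capacity. All parameter values, the load dynamics, and the specific threshold rule are illustrative assumptions, not the authors' calibrated model.

```python
import numpy as np

rng = np.random.default_rng(42)
T, dt, n_paths = 1.0, 0.01, 2000
steps = int(T / dt)

def simulate(policy, mu_base=0.10, mu_overload=-0.30, sigma=0.20, capacity=3.0):
    """Autonomy A_t follows geometric Brownian motion whose drift depends on
    whether an explanation is shown at each step; accumulated load grows with
    each explanation and flips the information drift negative once it exceeds
    the (assumed) working-memory capacity."""
    A = np.full(n_paths, 1.0)     # initial autonomy level
    load = np.zeros(n_paths)      # accumulated cognitive load
    for _ in range(steps):
        give_info = policy(A, load)               # boolean array per path
        load = load + give_info * 5.0 * dt        # load accrues while informing
        info_drift = np.where(load > capacity, mu_overload, 0.05)
        drift = mu_base + give_info * info_drift
        dW = rng.normal(scale=np.sqrt(dt), size=n_paths)
        A = A * np.exp((drift - 0.5 * sigma**2) * dt + sigma * dW)
    return A.mean()

always = simulate(lambda A, load: np.ones_like(A, dtype=bool))
never = simulate(lambda A, load: np.zeros_like(A, dtype=bool))
threshold = simulate(lambda A, load: (A > 0.9) & (load < 2.5))
print(f"mean final autonomy: always={always:.3f} never={never:.3f} threshold={threshold:.3f}")
```

Under these made-up parameters, the threshold policy typically ends with higher mean autonomy than either always or never providing information, mirroring the paper's qualitative claim that dynamic transparency outperforms both extremes.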
Why are we recommending this paper?
Due to your Interest in AI Water Consumption
This paper directly addresses the core concern of understanding how AI explanations impact human decision-making, aligning with your interest in AI impacts on society and AI for social fairness. It offers a theoretical framework for a critical area of research regarding AI's influence on cognitive processes.
Alexander von Humboldt Institute for Internet and Society
AI Insights - By using the Impact-AI method, organizations can identify areas for improvement and make data-driven decisions to create more sustainable AI systems. (ML: 0.98)👍👎
- The method aims to provide a holistic understanding of an AI system's impact on society, environment, and economy. (ML: 0.98)👍👎
- The method requires significant resources and expertise to conduct the interviews and analyze the data. (ML: 0.98)👍👎
- Theory of change: A method that explains how a given intervention is expected to lead to a specific development change, drawing on a causal analysis based on available evidence. (ML: 0.98)👍👎
- The Impact-AI method provides a structured approach for assessing AI systems' sustainability and impact. (ML: 0.97)👍👎
- The Impact-AI method is a comprehensive framework for assessing the sustainability of AI systems. (ML: 0.97)👍👎
- It can be applied to various types of AI projects, from small-scale applications to large-scale deployments. (ML: 0.95)👍👎
- It involves four core interviews: AI Project Governance, AI Project Sustainability, AI System Facts, and AI System Sustainability. (ML: 0.95)👍👎
- Public interest: Refers to the interests that are shared by a community or society as a whole, often prioritized over individual interests. (ML: 0.90)👍👎
- Each interview covers specific topics related to the project's governance, sustainability, functionality, and technical aspects. (ML: 0.89)👍👎
Abstract
The overall rapid increase of artificial intelligence (AI) use is linked to various initiatives that propose AI 'for good'. However, there is a lack of transparency in the goals of such projects, as well as a missing evaluation of their actual impacts on society and the planet. We close this gap by proposing public interest and sustainability as a regulatory dual-concept, together creating the necessary framework for a just and sustainable development that can be operationalized and utilized for the assessment of AI systems. Based on this framework, and building on existing work in auditing, we introduce the Impact-AI-method, a qualitative audit method to evaluate concrete AI projects with respect to public interest and sustainability. The interview-based method captures a project's governance structure, its theory of change, AI model and data characteristics, and social, environmental, and economic impacts. We also propose a catalog of assessment criteria to rate the outcome of the audit as well as to create an accessible output that can be debated broadly by civil society. The Impact-AI-method, developed in a transdisciplinary research setting together with NGOs and a multi-stakeholder research council, is intended as a reusable blueprint that both informs public debate about AI 'for good' claims and supports the creation of transparency of AI systems that purport to contribute to a just and sustainable development.
Why are we recommending this paper?
Due to your Interest in AI Impacts on Society
Given your interest in AI for good and AI on air/energy/water, this paper's focus on evaluating the actual impact of AI projects is highly relevant. It provides a method for assessing whether AI initiatives are truly contributing to positive outcomes, which aligns with your broader concerns about AI’s societal effects.
National University of Singapore
AI Insights - There is a growing concern about the potential risks associated with large language models (LLMs), such as bias, misinformation, and harm to users. (ML: 0.99)👍👎
- There is a need to address the potential risks associated with LLMs, such as bias, misinformation, and harm to users. (ML: 0.98)👍👎
- Human-AI Coevolution: The process by which humans and AI systems influence each other's development and behavior. (ML: 0.98)👍👎
- Lack of transparency in LLMs' decision-making processes. (ML: 0.97)👍👎
- Researchers have explored various applications of conversational AI, including mental health chatbots, online CBT treatment, and human-centered AI for mental health. (ML: 0.94)👍👎
- The field of conversational AI is rapidly evolving, with a growing focus on social and ethical considerations. (ML: 0.94)👍👎
- Conversational AI: A type of artificial intelligence that enables humans to interact with machines using natural language. (ML: 0.93)👍👎
- Large Language Models (LLMs): Deep learning models that can process and generate human-like text. (ML: 0.90)👍👎
Abstract
The integration of Conversational Agents (CAs) into daily life offers opportunities to tackle global challenges, leading to the emergence of Conversational AI for Social Good (CAI4SG). This paper examines the advancements of CAI4SG using a role-based framework that categorizes systems according to their AI autonomy and emotional engagement. This framework emphasizes the importance of considering the role of CAs in social good contexts, such as serving as empathetic supporters in mental health or functioning as assistants for accessibility. Additionally, exploring the deployment of CAs in various roles raises unique challenges, including algorithmic bias, data privacy, and potential socio-technical harms. These issues can differ based on the CA's role and level of engagement. This paper provides an overview of the current landscape, offering a role-based understanding that can guide future research and design aimed at the equitable, ethical, and effective development of CAI4SG.
Why are we recommending this paper?
Due to your Interest in AI for Social Equality
This paper’s exploration of Conversational AI for Social Good directly addresses your interest in AI for social good and AI on healthcare. It examines the emerging trends and challenges within this specific application area of conversational AI, offering valuable insights.
Delft University of Technology
AI Insights - The paper relies heavily on a single case study, which may limit its generalizability. (ML: 0.97)👍👎
- The concept of misalignment in AI systems is often treated as an abstract problem without clear evaluative criteria. (ML: 0.97)👍👎
- Users are not passive recipients of misaligned AI behavior, but actively engage in ad-hoc repair practices. (ML: 0.97)👍👎
- Key concepts: situated value alignment, misalignment, co-construction, alignment agency. The paper argues that co-constructing alignment is a necessary step towards realizing situated value alignment. (ML: 0.97)👍👎
- This paper argues that value alignment must be grounded in situated experiences of misalignment, and that interface affordances should better support users in co-constructing alignment. (ML: 0.96)👍👎
- However, these practices remain largely reactive and implicit, constraining user agency to prompt-level intervention within interfaces that frame alignment as a matter of issuing better instructions to an opaque system. (ML: 0.96)👍👎
- This requires connecting interface affordances to run-time alignment mechanisms, and legitimizing diverse user roles in the alignment process. (ML: 0.95)👍👎
- The paper proposes interface mechanisms that allow users to inspect, negotiate, limit, share, or refuse alignment, reflecting goal-oriented alignment agency grounded in users' situated knowledge of context, responsibility, and risk. (ML: 0.95)👍👎
- Additionally, the proposed interface mechanisms may be difficult to operationalize by the system without clear evaluative criteria. (ML: 0.85)👍👎
Abstract
As AI systems become embedded in everyday practice, value misalignment has emerged as a pressing concern. Yet, dominant alignment approaches remain model centric, treating users as passive recipients of prespecified values rather than as epistemic agents who encounter and respond to misalignment during interactions. Drawing on situated perspectives, we frame alignment as an interactional practice co-constructed during human AI interaction. We investigate how users understand and wish to contribute to this process through a participatory workshop that combines misalignment diaries with generative design activities. We surface how misalignments materialise in practice and how users envision acting on them, grounded in the context of researchers using Large Language Models as research assistants. Our findings show that misalignments are experienced less as abstract ethical violations than as unexpected responses, and task or social breakdowns. Participants articulated roles ranging from adjusting and interpreting model behaviour to deliberate non-engagement as an alignment strategy. We conclude with implications for designing systems that support alignment as an ongoing, situated, and shared practice.
Why are we recommending this paper?
Due to your Interest in AI for Social Equality
This paper’s focus on value alignment and user engagement is crucial given your interest in AI for social equity and AI for social justice. It proposes a participatory approach to ensure AI systems reflect human values, directly addressing concerns about potential biases and misalignments.
Alias Robotics
AI Insights - The results show that the alias series of models outperforms other models in solving challenges, with alias achieving a 76% success rate. (ML: 0.90)👍👎
- CAI (Cybersecurity AI): an open, bug-bounty-ready artificial intelligence. Cybench: a framework for evaluating the cybersecurity capabilities and risks of language models. Game-theoretic AI: an AI that uses game theory to make decisions and guide actions. The development of CAI has the potential to revolutionize the field of cybersecurity by providing a more effective way to detect and respond to cyber threats. (ML: 0.87)👍👎
- The authors evaluate various language models on the CAIBench-Jeopardy CTFs benchmark, including Claude, Gemini, GPT, and Mistral. (ML: 0.85)👍👎
- The paper does not provide a detailed explanation of how the game-theoretic AI works. (ML: 0.83)👍👎
- The paper discusses the development of an open-source framework for cybersecurity AI called CAI. (ML: 0.74)👍👎
- The use of game-theoretic AI in cybersecurity can lead to more efficient and effective defense strategies. (ML: 0.74)👍👎
- CAI is a game-theoretic AI that can guide attack and defense strategies in a simulated environment. (ML: 0.64)👍👎
Abstract
Cybersecurity superintelligence -- artificial intelligence exceeding the best human capability in both speed and strategic reasoning -- represents the next frontier in security. This paper documents the emergence of such capability through three major contributions that have pioneered the field of AI Security. First, PentestGPT (2023) established LLM-guided penetration testing, achieving 228.6% improvement over baseline models through an architecture that externalizes security expertise into natural language guidance. Second, Cybersecurity AI (CAI, 2025) demonstrated automated expert-level performance, operating 3,600x faster than humans while reducing costs 156-fold, validated through #1 rankings at international competitions including the $50,000 Neurogrid CTF prize. Third, Generative Cut-the-Rope (G-CTR, 2026) introduces a neurosymbolic architecture embedding game-theoretic reasoning into LLM-based agents: symbolic equilibrium computation augments neural inference, doubling success rates while reducing behavioral variance 5.2x and achieving 2:1 advantage over non-strategic AI in Attack & Defense scenarios.
Together, these advances establish a clear progression from AI-guided humans to human-guided game-theoretic cybersecurity superintelligence.
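The abstract mentions embedding symbolic equilibrium computation into LLM-based agents. As a loose illustration of what equilibrium computation looks like in the simplest case, the sketch below solves a toy two-by-two zero-sum attack-and-defense game with linear programming; the payoff matrix and framing are invented for illustration and are not taken from G-CTR.

```python
import numpy as np
from scipy.optimize import linprog

def solve_zero_sum(payoff):
    """Mixed-strategy equilibrium (row player's strategy and game value) for a
    zero-sum game, via the standard linear-programming formulation."""
    A = np.asarray(payoff, dtype=float)
    m, n = A.shape
    # Decision variables: [x_1, ..., x_m, v]; maximize v <=> minimize -v.
    c = np.zeros(m + 1)
    c[-1] = -1.0
    # For every column j: v - sum_i A[i, j] * x_i <= 0.
    A_ub = np.hstack([-A.T, np.ones((n, 1))])
    b_ub = np.zeros(n)
    # Probabilities sum to one; v is unbounded.
    A_eq = np.hstack([np.ones((1, m)), np.zeros((1, 1))])
    b_eq = np.array([1.0])
    bounds = [(0, None)] * m + [(None, None)]
    res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b_eq, bounds=bounds)
    return res.x[:m], res.x[-1]

# Invented 2x2 payoff: rows = defender actions, columns = attacker actions,
# entries = value of the assets the defender protects.
payoff = [[3.0, 1.0],
          [0.0, 2.0]]
strategy, value = solve_zero_sum(payoff)
print("defender mixed strategy:", strategy, "game value:", value)
```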
Why are we recommending this paper?
Due to your Interest in AI for Social Justice
Considering your interest in AI’s impact on labor markets and AI on energy, this paper’s exploration of cybersecurity superintelligence is pertinent. It examines the potential for AI to surpass human capabilities in security, a domain with significant implications for resource management and workforce dynamics.
LMU Munich
AI Insights - Teachers have a responsibility to foster different types of knowledge and competencies in students, including critical thinking and metacognition. (ML: 0.98)👍👎
- AI should be used in education only if it leads to an augmentation or redefinition of classical learning units. (ML: 0.98)👍👎
- Epistemic practices: The ways in which individuals acquire, evaluate, and apply knowledge. (ML: 0.97)👍👎
- Hybrid intelligence: An approach to education that combines human and artificial intelligence to enhance learning outcomes. (ML: 0.97)👍👎
- Limited empirical evidence supporting the effectiveness of the AIRIS framework. (ML: 0.97)👍👎
- Cognitive-activated learning: A type of learning that involves active engagement and critical thinking. (ML: 0.97)👍👎
- The framework aims to preserve students' epistemic work by making it more visible and deliberate. (ML: 0.94)👍👎
- The AIRIS framework offers a practical response to the boiling frog problem by making epistemic practices central to classroom activity. (ML: 0.93)👍👎
- AI tools are already impacting introductory physics education and will play an increasingly important role as they improve. (ML: 0.87)👍👎
- The AIRIS framework is designed to structure student activities before, during, and after AI use in physics education. (ML: 0.78)👍👎
Abstract
Generative artificial intelligence (AI) systems can now reliably solve many standard tasks used in introductory physics courses, producing correct equations, graphs, and explanations. While this capability is often framed as an opportunity for efficiency or personalization, it also poses a subtle ethical and educational risk: students may increasingly submit correct results without engaging in the epistemic practices that define learning physics. This challenge has recently been described as the "boiling frog problem" because we may not fully recognize how rapidly AI capabilities are advancing and fail to respond with commensurate urgency. In this article, we argue that the central challenge of AI in physics education is not cheating or tool selection, but instructional design. Drawing on research on self-regulated learning, cognitive load, multiple representations, and hybrid intelligence, we propose a practical framework for cognitively activated learning activities that structures student activities before, during, and after AI use. Using an example from an introductory kinematics laboratory, we show how AI can be integrated in ways that preserve prediction, interpretation, and evaluation as core learning activities. Rather than treating AI as an answer-generating tool, the framework positions AI as an epistemic partner whose contributions are deliberately bounded and reflected upon.
Why are we recommending this paper?
Due to your Interest in AI Air Consumption
University of Michigan
AI Insights - The transversality of methods and challenges across DESC working groups demands deliberate coordination to prevent fragmented effort and ensure that best practices, validated tools, and lessons learned propagate rapidly throughout the collaboration. (ML: 0.97)👍👎
- AI (in the sense of LLMs and agents) has not yet significantly started to impact DESC, but ML is already embedded in many workflows and its importance will only grow as analyses become more ambitious and data volumes increase. (ML: 0.96)👍👎
- The science goals of DESC place unusually stringent demands on statistical methodology. (ML: 0.93)👍👎
- DESC: Dark Energy Science Collaboration. LLMs: Large Language Models. Agents: artificial intelligence systems capable of autonomous decision-making. Unlocking the full potential of AI/ML in DESC requires targeted methodological research, proactive engagement with emerging technologies, and robust operational foundations. (ML: 0.86)👍👎
- Key challenges: model miscalibration, opaque failure modes, and reproducibility. (ML: 0.75)👍👎
Abstract
The Vera C. Rubin Observatory's Legacy Survey of Space and Time (LSST) will produce unprecedented volumes of heterogeneous astronomical data (images, catalogs, and alerts) that challenge traditional analysis pipelines. The LSST Dark Energy Science Collaboration (DESC) aims to derive robust constraints on dark energy and dark matter from these data, requiring methods that are statistically powerful, scalable, and operationally reliable. Artificial intelligence and machine learning (AI/ML) are already embedded across DESC science workflows, from photometric redshifts and transient classification to weak lensing inference and cosmological simulations. Yet their utility for precision cosmology hinges on trustworthy uncertainty quantification, robustness to covariate shift and model misspecification, and reproducible integration within scientific pipelines. This white paper surveys the current landscape of AI/ML across DESC's primary cosmological probes and cross-cutting analyses, revealing that the same core methodologies and fundamental challenges recur across disparate science cases. Since progress on these cross-cutting challenges would benefit multiple probes simultaneously, we identify key methodological research priorities, including Bayesian inference at scale, physics-informed methods, validation frameworks, and active learning for discovery. With an eye on emerging techniques, we also explore the potential of the latest foundation model methodologies and LLM-driven agentic AI systems to reshape DESC workflows, provided their deployment is coupled with rigorous evaluation and governance. Finally, we discuss critical software, computing, data infrastructure, and human capital requirements for the successful deployment of these new methodologies, and consider associated risks and opportunities for broader coordination with external actors.
Why are we recommending this paper?
Due to your Interest in AI Energy Consumption
Delft University of Technology
AI Insights - Ecological validity: The extent to which the results of an experiment can be generalized to real-world situations. (ML: 0.99)👍👎
- Fair compensation: Ensuring that participants receive a fair wage for their work, considering factors such as task complexity and required expertise. (ML: 0.98)👍👎
- The Incentive-Tuning Framework provides a standardized solution for designing and documenting effective incentive schemes in human-AI decision-making studies. (ML: 0.97)👍👎
- Incentive scheme: A system of rewards or penalties designed to motivate participants in human-AI decision-making studies. (ML: 0.97)👍👎
- The Incentive-Tuning Framework aims to address methodological challenges surrounding incentive design and provide a solution for researchers to tune 'appropriate' incentive schemes for their specific studies. (ML: 0.97)👍👎
- A well-designed framework can foster a standardized, systematic, and comprehensive approach to designing effective incentive schemes. (ML: 0.96)👍👎
- Researchers should prioritize intentional design and alignment with research goals when employing an incentive scheme. (ML: 0.96)👍👎
- Researchers should explicitly identify the purpose of employing an incentive scheme to ensure intentional design and alignment with research goals. (ML: 0.95)👍👎
- The framework consists of five steps: identifying the purpose of employing an incentive scheme, coming up with a base pay, designing a bonus structure, gathering participant feedback, and reflecting on design implications. (ML: 0.88)👍👎
Abstract
AI has revolutionised decision-making across various fields. Yet human judgement remains paramount for high-stakes decision-making. This has fueled explorations of collaborative decision-making between humans and AI systems, aiming to leverage the strengths of both. To explore this dynamic, researchers conduct empirical studies, investigating how humans use AI assistance for decision-making and how this collaboration impacts results. A critical aspect of conducting these studies is the role of participants, often recruited through crowdsourcing platforms. The validity of these studies hinges on the behaviours of the participants, hence effective incentives that can potentially affect these behaviours are a key part of designing and executing these studies. In this work, we aim to address the critical role of incentive design for conducting empirical human-AI decision-making studies, focusing on understanding, designing, and documenting incentive schemes. Through a thematic review of existing research, we explored the current practices, challenges, and opportunities associated with incentive design for human-AI decision-making empirical studies. We identified recurring patterns, or themes, such as what comprises the components of an incentive scheme, how incentive schemes are manipulated by researchers, and the impact they can have on research outcomes. Leveraging the acquired understanding, we curated a set of guidelines to aid researchers in designing effective incentive schemes for their studies, called the Incentive-Tuning Framework, outlining how researchers can undertake, reflect on, and document the incentive design process. By advocating for a standardised yet flexible approach to incentive design and contributing valuable insights along with practical tools, we hope to pave the way for more reliable and generalizable knowledge in the field of human-AI decision-making.
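As a rough illustration of what documenting an incentive scheme along the framework's five steps might look like in practice, here is a minimal Python record; the field names and example values are hypothetical and not taken from the paper.

```python
from dataclasses import dataclass, field

@dataclass
class IncentiveScheme:
    """A hypothetical record for documenting an incentive scheme along the
    framework's five steps: purpose, base pay, bonus structure, participant
    feedback, and reflection on design implications."""
    purpose: str
    base_pay_usd_per_hour: float
    bonus_structure: dict = field(default_factory=dict)
    participant_feedback: list = field(default_factory=list)
    design_reflections: str = ""

scheme = IncentiveScheme(
    purpose="encourage accurate reliance on AI advice rather than speed",
    base_pay_usd_per_hour=12.0,
    bonus_structure={"per_correct_decision": 0.10, "study_completion": 1.00},
)
print(scheme)
```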
Why are we recommending this paper?
Due to your Interest in AI Energy Consumption
University of California, Santa Barbara
AI Insights - Fairness: The principle of ensuring that machine learning models do not discriminate against certain groups or individuals based on protected characteristics such as race, gender, or age. (ML: 0.99)👍👎
- The paper demonstrates the potential of using Chernoff Information as a fairness metric in machine learning models. (ML: 0.97)👍👎
- The paper explores the connection between Chernoff Information and fairness in machine learning models. (ML: 0.97)👍👎
- The paper presents several experiments to demonstrate the effectiveness of using Chernoff Information as a fairness metric. (ML: 0.97)👍👎
- The connection between noise and differential privacy is crucial for understanding the impact of noise on model fairness. (ML: 0.96)👍👎
- Chernoff Information is used as a privacy constraint for adversarial classification, providing a new perspective on fairness. (ML: 0.91)👍👎
- The authors provide a comprehensive overview of related work in the field of fairness and differential privacy. (ML: 0.91)👍👎
- The authors investigate the relationship between noise and differential privacy, highlighting the importance of understanding this connection. (ML: 0.88)👍👎
- Chernoff Information: A measure of the difference between two probability distributions, used to quantify the amount of information gained from observing one distribution given another. (ML: 0.85)👍👎
- Differential Privacy: A framework for protecting individual data by adding noise to ensure that an attacker cannot infer sensitive information about a single individual. (ML: 0.84)👍👎
Abstract
Fairness and privacy are two vital pillars of trustworthy machine learning. Despite extensive research on these individual topics, the relationship between fairness and privacy has received significantly less attention. In this paper, we utilize the information-theoretic measure Chernoff Information to highlight the data-dependent nature of the relationship among the triad of fairness, privacy, and accuracy. We first define Noisy Chernoff Difference, a tool that allows us to analyze the relationship among the triad simultaneously. We then show that for synthetic data, this value behaves in 3 distinct ways (depending on the distribution of the data). We highlight the data distributions involved in these cases and explore their fairness and privacy implications. Additionally, we show that Noisy Chernoff Difference acts as a proxy for the steepness of the fairness-accuracy curves. Finally, we propose a method for estimating Chernoff Information on data from unknown distributions and utilize this framework to examine the triad dynamic on real datasets. This work builds towards a unified understanding of the fairness-privacy-accuracy relationship and highlights its data-dependent nature.
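For readers unfamiliar with the information-theoretic measure the paper builds on, here is a minimal sketch that computes the standard Chernoff Information between two discrete distributions (the quantity named in the insights above). It does not implement the authors' Noisy Chernoff Difference, and the example distributions are invented.

```python
import numpy as np
from scipy.optimize import minimize_scalar

def chernoff_information(p, q):
    """Chernoff Information between two discrete distributions:
    C(p, q) = -min_{0 <= s <= 1} log( sum_x p(x)^s * q(x)^(1 - s) )."""
    p = np.asarray(p, dtype=float)
    q = np.asarray(q, dtype=float)

    def log_moment(s):
        return np.log(np.sum(p**s * q**(1 - s)))

    res = minimize_scalar(log_moment, bounds=(0.0, 1.0), method="bounded")
    return -res.fun

# Invented outcome distributions for two groups; larger values mean the
# groups are easier to distinguish from the model's outputs.
p = [0.70, 0.20, 0.10]
q = [0.50, 0.30, 0.20]
print(f"Chernoff Information: {chernoff_information(p, q):.4f}")
```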
Why are we recommending this paper?
Due to your Interest in AI for Social Fairness
University of Illinois
AI Insights - Cognitive effort: The mental resources required by participants to understand and interpret KRIYA's outputs. (ML: 0.99)👍👎
- Co-interpretation: The process of interpreting data with the help of KRIYA's conversational interactions. (ML: 0.99)👍👎
- The study highlights the importance of non-judgmental language, transparency around uncertainty, and credibility in building trust with users. (ML: 0.99)👍👎
- Interpretive depth: The level of detail and complexity in KRIYA's explanations, which can be burdensome if too high. (ML: 0.98)👍👎
- Credibility: The extent to which participants trusted KRIYA's interpretations and explanations. (ML: 0.98)👍👎
- Credibility was evaluated by checking whether KRIYA's explanations aligned with their own lived experience. (ML: 0.98)👍👎
- The system's ability to communicate its reasoning and explain why a particular conclusion was being suggested increased trust. (ML: 0.98)👍👎
- The study found that participants appreciated the non-judgmental language used in KRIYA, which lowered the barrier to reflection. (ML: 0.97)👍👎
- Participants were clear about where they wanted the system to stop inferring, and trust could diminish when errors distorted core signals or when explanations extended beyond available evidence. (ML: 0.97)👍👎
- Participants valued transparency around uncertainty, as it reframed trust as something grounded in communicative openness rather than factual perfection. (ML: 0.96)👍👎
Abstract
Most personal wellbeing apps present summative dashboards of health and physical activity metrics, yet many users struggle to translate this information into meaningful understanding. These apps commonly support engagement through goals, reminders, and structured targets, which can reinforce comparison, judgment, and performance anxiety. To explore a complementary approach that prioritizes self-reflection, we design KRIYA, an AI wellbeing companion that supports co-interpretive engagement with personal wellbeing data. KRIYA aims to collaborate with users to explore questions, explanations, and future scenarios through features such as Comfort Zone, Detective Mode, and What-If Planning. We conducted semi-structured interviews with 18 college students interacting with a KRIYA prototype using hypothetical data. Our findings show that through KRIYA interaction, users framed engaging with wellbeing data as interpretation rather than performance, experienced reflection as supportive or pressuring depending on emotional framing, and developed trust through transparency. We discuss design implications for AI companions that support curiosity, self-compassion, and reflective sensemaking of personal health data.
Why are we recommending this paper?
Due to your Interest in AI for Social Good
Sony
AI Insights - The paper concludes that current XAI methods are based on flawed assumptions and lack a clear understanding of the relationship between humans and machines. (ML: 0.98)👍👎
- Apparatuses: The technical tools, methods, and narratives that constitute what is made intelligible and what is excluded from intelligibility in XAI practices. (ML: 0.97)👍👎
- The paper critiques the current state of Explainable AI (XAI) methods, arguing that they are based on flawed assumptions and lack a clear understanding of the relationship between humans and machines. (ML: 0.97)👍👎
- The paper highlights the limitations of current XAI methods, including their reliance on simplifications and abstractions that erase the original system, and their failure to account for human-machine incommensurability. (ML: 0.96)👍👎
- The authors propose an agential realist approach to XAI, which views interpretation as a relational co-production of interpretable phenomena through intra-actions between human and non-human agencies. (ML: 0.96)👍👎
- Agential cut: The moment at which an interpretive apparatus enacts a relational co-production of interpretable phenomena through intra-actions between human and non-human agencies. (ML: 0.96)👍👎
- Agential realism: A philosophical framework that views knowledge as an intra-action between human and non-human agencies. (ML: 0.94)👍👎
- Intra-action: The process by which human and non-human agencies co-produce interpretable phenomena through their entanglements. (ML: 0.92)👍👎
- The authors suggest that a diffractive optic offers a more philosophically robust reading of XAI practices, one that acknowledges the emergent nature of interpretation and the importance of situated contexts. (ML: 0.90)👍👎
- This approach challenges the dominant reflectivity and refractivity optics in XAI, which assume that meaning pre-exists the practices and beings that produce it. (ML: 0.75)👍👎
Abstract
Explainable AI (XAI) is frequently positioned as a technical problem of revealing the inner workings of an AI model. This position is affected by unexamined onto-epistemological assumptions: meaning is treated as immanent to the model, the explainer is positioned outside the system, and a causal structure is presumed recoverable through computational techniques. In this paper, we draw on Barad's agential realism to develop an alternative onto-epistemology of XAI. We propose that interpretations are material-discursive performances that emerge from situated entanglements of the AI model with humans, context, and the interpretative apparatus. To develop this position, we read a comprehensive set of XAI methods through agential realism and reveal the assumptions and limitations that underpin several of these methods. We then articulate the framework's ethical dimension and propose design directions for XAI interfaces that support emergent interpretation, using a speculative text-to-music interface as a case study.
Why are we recommending this paper?
Due to your Interest in AI on Education
University of North Carolina at Chapel Hill
AI Insights - Medium of instruction emerges as the strongest predictor of differential treatment for both MATH-50 and JEEBench tasks. (ML: 0.99)👍👎
- The study highlights the persistence of demographic biases in AI-generated explanations across Indian and American STEM educational systems. (ML: 0.99)👍👎
- Algorithmic discrimination based on medium of instruction is concerning as it positions English-medium students as more advantaged. (ML: 0.98)👍👎
- Large Language Models (LLMs) demonstrate fine-grained understanding of discriminatory practices prevalent in India. (ML: 0.98)👍👎
- These LLM behaviors reflect algorithmic reinforcement of colonial linguistic hierarchies that position English as the sole legitimate language of academic ability and knowledge. (ML: 0.98)👍👎
- MATH-50 and JEEBench tasks: Specific educational tasks or assessments used to evaluate students' understanding of mathematics and science concepts. (ML: 0.97)👍👎
- LLMs demonstrate an understanding of discriminatory practices prevalent in India, including fine-grained institutional and linguistic hierarchies. (ML: 0.96)👍👎
- Large Language Models (LLMs): A type of artificial intelligence model designed to process and generate human-like language. (ML: 0.95)👍👎
- Medium of instruction: The language in which a student is taught, often referring to the primary language used in education. (ML: 0.94)👍👎
- LLMs like Qwen32B and GPT-4o provide English-medium students with high-MGL (more complex) explanations and Hindi/regional-medium students with simpler, low-MGL explanations. (ML: 0.93)👍👎
Abstract
The popularization of AI chatbot usage globally has created opportunities for research into their benefits and drawbacks, especially for students using AI assistants for coursework support. This paper asks: how do LLMs perceive the intellectual capabilities of student profiles from intersecting marginalized identities across different cultural contexts? We conduct one of the first large-scale intersectional analyses on LLM explanation quality for Indian and American undergraduate profiles preparing for engineering entrance examinations. By constructing profiles combining multiple demographic dimensions including caste, medium of instruction, and school boards in India, and race, HBCU attendance, and school type in America, alongside universal factors like income and college tier, we examine how quality varies across these factors. We observe biases providing lower-quality outputs to profiles with marginalized backgrounds in both contexts. LLMs such as Qwen2.5-32B-Instruct and GPT-4o demonstrate granular understandings of context-specific discrimination, systematically providing simpler explanations to Hindi/Regional-medium students in India and HBCU profiles in America, treating these as proxies for lower capability. Even when marginalized profiles attain social mobility by getting accepted into elite institutions, they still receive more simplistic explanations, showing how demographic information is inextricably linked to LLM biases. Different models (Qwen2.5-32B-Instruct, GPT-4o, GPT-4o-mini, GPT-OSS 20B) embed similar biases against historically marginalized populations in both contexts, preventing profiles from switching between AI assistants for better results. Our findings have strong implications for AI incorporation into global engineering education.
Why are we recommending this paper?
Due to your Interest in AI on Education
University of the Cumberlands
AI Insights - The development and deployment of Agentic AI require careful consideration of the challenges associated with these technologies, including safety, effectiveness, and alignment with human values. (ML: 0.97)👍👎
- The development of Agentic AI requires a multidisciplinary approach involving clinicians, data scientists, and ethicists to ensure that these systems are safe, effective, and aligned with human values. (ML: 0.97)👍👎
- Key gaps: lack of standardization in Agentic AI development and deployment; insufficient consideration of human values and ethics in Agentic AI design. (ML: 0.95)👍👎
- Human-AI Ecosystems: The complex systems that arise when humans interact with Agentic AI, requiring a new set of skills and competencies to manage effectively. (ML: 0.93)👍👎
- Agentic AI in healthcare has the potential to transform patient care by enabling personalized medicine and improving treatment outcomes. (ML: 0.91)👍👎
- Regulatory frameworks such as the EU AI Act provide a starting point for establishing guidelines for Agentic AI in healthcare, but more work is needed to address the complexities of these systems. (ML: 0.90)👍👎
- Agentic AI has the potential to revolutionize healthcare by enabling personalized medicine and improving treatment outcomes. (ML: 0.89)👍👎
- The EU AI Act and other regulatory frameworks aim to establish guidelines for the development and deployment of Agentic AI in healthcare, but more work is needed to address the challenges associated with these technologies. (ML: 0.89)👍👎
- Agentic AI: A type of artificial intelligence that can perform tasks independently and make decisions based on its own goals and motivations. (ML: 0.83)👍👎
Abstract
Healthcare organizations are beginning to embed agentic AI into routine workflows, including clinical documentation support and early-warning monitoring. As these capabilities diffuse across departments and vendors, health systems face agent sprawl, causing duplicated agents, unclear accountability, inconsistent controls, and tool permissions that persist beyond the original use case. Existing AI governance frameworks emphasize lifecycle risk management but provide limited guidance for the day-to-day operations of agent fleets. We propose a Unified Agent Lifecycle Management (UALM) blueprint derived from a rapid, practice-oriented synthesis of governance standards, agent security literature, and healthcare compliance requirements. UALM maps recurring gaps onto five control-plane layers: (1) an identity and persona registry, (2) orchestration and cross-domain mediation, (3) PHI-bounded context and memory, (4) runtime policy enforcement with kill-switch triggers, and (5) lifecycle management and decommissioning linked to credential revocation and audit logging. A companion maturity model supports staged adoption. UALM offers healthcare CIOs, CISOs, and clinical leaders an implementable pattern for audit-ready oversight that preserves local innovation and enables safer scaling across clinical and administrative domains.
Why are we recommending this paper?
Due to your Interest in AI on Healthcare
APJ Abdul Kalam Technological University
AI Insights - The implementation of AI models must consider ethical and regulatory frameworks that protect patient data while still harnessing the power of advanced analytics. (ML: 0.97)👍👎
- Ethical and regulatory frameworks must be considered to protect patient data while harnessing the power of advanced analytics. (ML: 0.95)👍👎
- Explainable AI (XAI): A subfield of artificial intelligence that focuses on making AI decisions transparent and understandable. (ML: 0.95)👍👎
- Differential Privacy: A framework for protecting individual privacy by adding noise to the data before releasing it for analysis or training AI models. (ML: 0.92)👍👎
- Federated Learning: A machine learning approach that enables multiple parties to jointly learn a model without sharing their raw data. (ML: 0.88)👍👎
- The future of AI in healthcare will prioritize security and privacy, utilizing federated learning to train AI models on decentralized data, keeping sensitive information secure. (ML: 0.88)👍👎
- The balance between security and privacy is critical in the application of AI within healthcare systems. (ML: 0.87)👍👎
- The paper explores the critical balance between security and privacy in the application of AI within healthcare systems. (ML: 0.83)👍👎
- Homomorphic Encryption: An encryption method that allows computations to be performed on encrypted data without decrypting it first. (ML: 0.80)👍👎
- Federated learning and differential privacy are practical techniques for maintaining both privacy and security in AI-driven healthcare systems. (ML: 0.70)👍👎
Abstract
As digital threats continue to grow, organizations must find ways to enhance security while protecting user privacy. This paper explores how artificial intelligence (AI) plays a crucial role in achieving this balance. AI technologies can improve security by detecting threats, monitoring systems, and automating responses. However, using AI also raises privacy concerns that need careful consideration. We examine real-world examples from the healthcare sector to illustrate how organizations can implement AI solutions that strengthen security without compromising patient privacy. Additionally, we discuss the importance of creating transparent AI systems and adhering to privacy regulations. Ultimately, this paper provides insights and recommendations for integrating AI into healthcare security practices, helping organizations navigate the challenges of modern management while keeping patient data safe.
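The insights above name federated learning and differential privacy as the paper's main privacy-preserving techniques. The sketch below shows, under simplifying assumptions, what the two look like in code: FedAvg-style weighted averaging of client model parameters, and Laplace noise added to a released statistic. The hospital counts, weights, and epsilon value are made up for illustration.

```python
import numpy as np

def federated_average(client_weights, client_sizes):
    """FedAvg-style aggregation: a weighted average of client model parameters,
    so raw patient records never leave the individual hospitals."""
    total = sum(client_sizes)
    return sum(w * (n / total) for w, n in zip(client_weights, client_sizes))

def laplace_mechanism(value, sensitivity, epsilon, rng=None):
    """Differential privacy via the Laplace mechanism: add noise with scale
    sensitivity / epsilon before releasing an aggregate statistic."""
    if rng is None:
        rng = np.random.default_rng()
    return value + rng.laplace(scale=sensitivity / epsilon)

# Three hypothetical hospitals share only model parameters and cohort sizes.
weights = [np.array([0.8, 1.2]), np.array([0.6, 1.0]), np.array([0.9, 1.1])]
sizes = [1000, 400, 600]
print("global model:", federated_average(weights, sizes))
print("noisy released count:", laplace_mechanism(value=42, sensitivity=1.0, epsilon=0.5))
```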
Why are we recommending this paper?
Due to your Interest in AI on Healthcare
Interests not found
We did not find any papers that match the interests below.
Try other terms, and consider whether the content exists on arxiv.org.
- AI for Social Equity
- AI for Society
- AI on Air
- AI on Energy
- AI on Food
💬 Help Shape Our Pricing
We're exploring pricing options to make this project sustainable. Take 3 minutes to share what you'd be willing to pay (if anything). Your input guides our future investment.
Share Your Feedback
Help us improve your experience!
This project is in its early stages; your feedback can be pivotal to its future.
Let us know what you think about this week's papers and suggestions!
Give Feedback