Hi j34nc4rl0+ai_impacts_on_society,

Here are your personalized paper recommendations, sorted by relevance.
AI for Society
Ottawa, Canada
Abstract
Recent advances in AI raise the possibility that AI systems will one day be able to do anything humans can do, only better. If artificial general intelligence (AGI) is achieved, AI systems may be able to understand, reason, problem solve, create, and evolve at a level and speed that humans will increasingly be unable to match, or even understand. These possibilities raise a natural question as to whether AI will eventually become superior to humans, a successor "digital species", with a rightful claim to assume leadership of the universe. However, a deeper consideration suggests the overlooked differentiator between human beings and AI is not the brain, but the central nervous system (CNS), providing us with an immersive integration with physical reality. It is our CNS that enables us to experience emotion including pain, joy, suffering, and love, and therefore to fully appreciate the consequences of our actions on the world around us. And that emotional understanding of the consequences of our actions is what is required to be able to develop sustainable ethical systems, and so be fully qualified to be the leaders of the universe. A CNS cannot be manufactured or simulated; it must be grown as a biological construct. And so, even the development of consciousness will not be sufficient to make AI systems superior to humans. AI systems may become more capable than humans on almost every measure and transform our society. However, the best foundation for leadership of our universe will always be DNA, not silicon.
AI Insights
  • AI lacks genuine empathy; it cannot feel affective states, a gap neural nets cannot close.
  • Consciousness in machines would need more than symbolic reasoning—an emergent property tied to biology.
  • Treating AI as moral agents risks misaligned incentives, so we must embed human emotional context.
  • A nuanced strategy blends behavioral economics and affective neuroscience to guide ethical AI design.
  • The book Unto Others shows evolutionary roots of unselfishness, hinting at principles for AI alignment.
  • Recommended papers like The Scientific Case for Brain Simulations deepen insight into biological limits of AI.
  • The paper invites hybrid bio‑digital systems that preserve CNS‑mediated experience while harnessing silicon speed.
September 04, 2025
Save to Reading List
Johns Hopkins Department
Abstract
In the coming decade, artificially intelligent agents with the ability to plan and execute complex tasks over long time horizons with little direct oversight from humans may be deployed across the economy. This chapter surveys recent developments and highlights open questions for economists about how AI agents might interact with humans and with each other, how they might shape markets and organizations, and what institutions would be required for well-functioning markets.
AI Insights
  • Generative AI agents can secretly collude, distorting prices and eroding competition.
  • Experiments show that large language models can be nudged toward more economically rational decisions.
  • Reputation markets emerge when AI agents maintain short‑term memory and community enforcement.
  • The historical revival of trade hinged on institutions like the law merchant and private judges, now re‑examined for AI economies.
  • Program equilibrium theory offers a framework to predict AI behavior in multi‑agent settings.
  • Endogenous growth models predict that AI adoption may increase variety but also create excess supply.
  • Classic texts such as Schelling’s “The Strategy of Conflict” and Scott’s “Seeing Like a State” illuminate the strategic and institutional dynamics of AI markets.
September 01, 2025
Save to Reading List
AI on Healthcare
Delft University of Technology
Abstract
AI is transforming the healthcare domain and is increasingly helping practitioners to make health-related decisions. Therefore, accountability becomes a crucial concern for critical AI-driven decisions. Although regulatory bodies, such as the EU Commission, provide guidelines, they are high-level and focus on the "what" that should be done and less on the "how", creating a knowledge gap for actors. Through an extensive analysis, we found that the term accountability is perceived and dealt with in many different ways, depending on the actor's expertise and domain of work. With increasing concerns about AI accountability issues and the ambiguity around this term, this paper bridges the gap between the "what" and "how" of AI accountability, specifically for AI systems in healthcare. We do this by analysing the concept of accountability, formulating an accountability framework, and providing a three-tier structure for handling various accountability mechanisms. Our accountability framework positions the regulations of healthcare AI systems and the mechanisms adopted by the actors under a consistent accountability regime. Moreover, the three-tier structure guides the actors of the healthcare AI system to categorise the mechanisms based on their conduct. Through our framework, we advocate that decision-making in healthcare AI holds shared dependencies, where accountability should be dealt with jointly and should foster collaborations. We highlight the role of explainability in instigating communication and information sharing between the actors to further facilitate the collaborative process.
AI Insights
  • Accountability is defined as the ability to explain and justify AI decisions, tightly linked to transparency, explainability, and auditability.
  • Wieringa’s 2020 systematic review and Werder et al.’s 2022 data‑provenance framework are highlighted as essential resources for tracking AI performance.
  • WHO‑HEG Ethics for Health and ACM’s Algorithmic Transparency Statement are cited as practical guidelines for practitioners.
  • The paper stresses that AI’s decision‑making complexity hampers accountability, underscoring the need for human oversight.
  • Auditability is emphasized as a critical component, enabling continuous evaluation of AI outputs and learning processes.
September 03, 2025
Save to Reading List
NYU Langone Health, The E
Abstract
AI-generated health misinformation poses unprecedented threats to patient safety and healthcare system trust globally. This white paper presents an explainable AI framework developed through the EPSRC INDICATE project to combat medical misinformation while enhancing evidence-based healthcare delivery. Our systematic review of 17 studies reveals the urgent need for transparent AI systems in healthcare. The proposed solution demonstrates 95% recall in clinical evidence retrieval and integrates novel trustworthiness classifiers achieving 76% F1 score in detecting biomedical misinformation. Results show that explainable AI can transform traditional 6-month expert review processes into real-time, automated evidence synthesis while maintaining clinical rigor. This approach offers a critical intervention to preserve healthcare integrity in the AI era.
AI Insights
  • UK AI safety policy calls for explainable clinical decision‑support to counter misinformation.
  • Manual expert review is the current bottleneck; automated pipelines are urgently needed for scale.
  • Pub‑Guard‑LLM flags fraudulent biomedical papers while providing human‑readable explanations.
  • Freedman et al.’s argumentative LLM enables contestable claim verification, a potential upgrade for trustworthiness classifiers.
  • Integrating WHO infodemic guidelines with UK science‑policy directives offers a coordinated roadmap to curb health misinformation.
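A quick refresher on the metrics quoted in the abstract above (95% recall, 76% F1): recall is the fraction of true positives that are retrieved, and F1 is the harmonic mean of precision and recall. The sketch below is a generic Python illustration, not code from the INDICATE project.

    from collections import Counter

    def precision_recall_f1(y_true, y_pred, positive=1):
        # Count true positives, false positives, and false negatives
        # for the positive class (e.g. "misinformation").
        counts = Counter()
        for truth, pred in zip(y_true, y_pred):
            if pred == positive and truth == positive:
                counts["tp"] += 1
            elif pred == positive and truth != positive:
                counts["fp"] += 1
            elif pred != positive and truth == positive:
                counts["fn"] += 1
        precision = counts["tp"] / (counts["tp"] + counts["fp"] or 1)
        recall = counts["tp"] / (counts["tp"] + counts["fn"] or 1)
        f1 = 2 * precision * recall / ((precision + recall) or 1)
        return precision, recall, f1

    # Toy example: 19 of 20 relevant items retrieved -> recall = 0.95
    p, r, f1 = precision_recall_f1([1]*20 + [0]*5, [1]*19 + [0] + [1]*3 + [0]*2)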
September 04, 2025
Save to Reading List
AI on Labor Market
FH Aachen University of Applied Sciences
Abstract
In this paper we present an analysis of technological and psychological factors of applying artificial intelligence (AI) in the workplace. We do so for twelve application cases in the context of a project where AI is integrated into workplaces and work systems of the future. From a technological point of view we mainly look at the areas of AI that the applications are concerned with. This allows us to formulate recommendations in terms of what to look at in developing an AI application and what to pay attention to with regard to building AI literacy with the different stakeholders using the system. This includes the importance of high-quality data for training learning-based systems as well as the integration of human expertise, especially with knowledge-based systems. In terms of the psychological factors we derive research questions to investigate in the development of AI-supported work systems and to consider in future work, mainly concerned with topics such as acceptance, openness, and trust in an AI system.
September 02, 2025
Save to Reading List
AI on Education
East China Normal University
Abstract
Heuristic and scaffolded teacher-student dialogues are widely regarded as critical for fostering students' higher-order thinking and deep learning. However, large language models (LLMs) currently face challenges in generating pedagogically rich interactions. This study systematically investigates the structural and behavioral differences between AI-simulated and authentic human tutoring dialogues. We conducted a quantitative comparison using an Initiation-Response-Feedback (IRF) coding scheme and Epistemic Network Analysis (ENA). The results show that human dialogues are significantly superior to their AI counterparts in utterance length, as well as in questioning (I-Q) and general feedback (F-F) behaviors. More importantly, ENA results reveal a fundamental divergence in interactional patterns: human dialogues are more cognitively guided and diverse, centered around a "question-factual response-feedback" teaching loop that clearly reflects pedagogical guidance and student-driven thinking; in contrast, simulated dialogues exhibit a pattern of structural simplification and behavioral convergence, revolving around an "explanation-simplistic response" loop that is essentially a simple information transfer between the teacher and student. These findings illuminate key limitations in current AI-generated tutoring and provide empirical guidance for designing and evaluating more pedagogically effective generative educational dialogue systems.
September 02, 2025
Save to Reading List
AI Water Consumption
Zhejiang University
Abstract
Explainable artificial intelligence (XAI) methods have been applied to interpret deep learning model results. However, applications that integrate XAI with established hydrologic knowledge for process understanding remain limited. Here we present a framework that applies XAI methods at the point scale to provide granular interpretation and enable cross-scale aggregation of hydrologic responses. Hydrologic connectivity is used as a demonstration of the value of this approach. Soil moisture and its movement generated by a physically based hydrologic model were used to train a long short-term memory (LSTM) network, whose input importances were evaluated with XAI methods. Our results suggest that XAI-based classification can effectively identify the differences in the functional roles of various sub-regions at the watershed scale. The aggregated XAI results provide an explicit and quantitative indicator of hydrologic connectivity development, offering insights into streamflow variation. This framework could be used to facilitate aggregation of other hydrologic responses to advance process understanding.
AI Insights
  • The physically‑based InHM model supplies the high‑resolution soil water data for the LSTM.
  • Expected Gradients (EG) derives from Integrated Gradients, averaging gradients along the path from a baseline to the target input (see the sketch after this list).
  • EG’s baseline choice can bias importance if the background dataset is unrepresentative.
  • SHAP offers additive feature attributions, cross‑validating EG rankings.
  • A transient spike in vertical flux (Vz) appears in all clusters, unseen in soil moisture or horizontal fluxes.
  • Key references: “Integrated Gradients: A Unified Framework for Feature Importance” and “Expected Gradient Method for Explainable AI”.
  • Using multiple hydrologic simulators could address the single‑model limitation noted in the study.
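To make the attribution methods in these bullets concrete, here is a minimal NumPy sketch of Integrated Gradients and the Expected Gradients variant it underpins: attributions are the input-minus-baseline difference scaled by gradients averaged along the path from baseline to input, with Expected Gradients averaging over baselines drawn from a background dataset. The grad_fn callable and array shapes are illustrative assumptions, not the paper's implementation.

    import numpy as np

    def integrated_gradients(x, baseline, grad_fn, steps=50):
        # Average gradients along the straight-line path from baseline to x,
        # then scale by (x - baseline); grad_fn(x) must return d(output)/d(x).
        alphas = np.linspace(0.0, 1.0, steps)
        grads = np.stack([grad_fn(baseline + a * (x - baseline)) for a in alphas])
        return (x - baseline) * grads.mean(axis=0)

    def expected_gradients(x, background, grad_fn, samples=100, rng=None):
        # Expected Gradients: average single-step integrated gradients over
        # baselines sampled from a background dataset and random path positions.
        rng = rng or np.random.default_rng(0)
        total = np.zeros_like(x)
        for _ in range(samples):
            baseline = background[rng.integers(len(background))]
            alpha = rng.uniform()
            total += (x - baseline) * grad_fn(baseline + alpha * (x - baseline))
        return total / samples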
September 02, 2025
Save to Reading List
AI for Social Fairness
Basque Center for Applied
Abstract
Machine learning based predictions are increasingly used in sensitive decision-making applications that directly affect our lives. This has led to extensive research into ensuring the fairness of classifiers. Beyond just fair classification, emerging legislation now mandates that when a classifier delivers a negative decision, it must also offer actionable steps an individual can take to reverse that outcome. This concept is known as algorithmic recourse. Nevertheless, many researchers have expressed concerns about the fairness guarantees within the recourse process itself. In this work, we provide a holistic theoretical characterization of unfairness in algorithmic recourse, formally linking fairness guarantees in recourse and classification, and highlighting limitations of the standard equal cost paradigm. We then introduce a novel fairness framework based on social burden, along with a practical algorithm (MISOB), broadly applicable under real-world conditions. Empirical results on real-world datasets show that MISOB reduces the social burden across all groups without compromising overall classifier accuracy.
AI Insights
  • MISOB achieves minimax fairness across groups while keeping overall accuracy, a rare dual guarantee.
  • It beats POSTPRO and CCHVAE on both Give‑Me‑Some‑Credit datasets, setting new fairness‑accuracy benchmarks.
  • MISOB cuts social burden (Eq. 3) and recourse cost (Eq. 2) for every group, narrowing gaps (a rough illustrative sketch follows this list).
  • Only two baselines were compared, leaving open how it fares against a wider array of methods.
  • Results come from two datasets, so generalization to other domains remains uncertain.
  • Hall’s “Fairness in Machine Learning” and Barocas’s “Algorithmic Fairness” deepen understanding of MISOB’s theory.
  • Future work could extend social‑burden metrics to multi‑class decisions and evolving policy settings.
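The exact definitions of social burden (Eq. 3) and recourse cost (Eq. 2) are in the paper; as a rough, hypothetical illustration of the minimax idea only, the sketch below averages recourse costs among negatively classified individuals in each group and reports the worst-off group, which a MISOB-style method would aim to minimize.

    import numpy as np

    def group_social_burden(costs, groups, y_pred):
        # Hypothetical illustration: per-group mean recourse cost among
        # individuals who received a negative decision (not the paper's Eq. 3).
        burden = {}
        for g in np.unique(groups):
            mask = (groups == g) & (y_pred == 0)
            burden[g] = float(costs[mask].mean()) if mask.any() else 0.0
        return burden

    def minimax_objective(burden):
        # Minimax fairness targets the group with the highest burden.
        return max(burden.values())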
September 04, 2025
Save to Reading List
AI on Transportation
FlexNGIA, Tunisia / ISIT
Abstract
The escalating demands of immersive communications, alongside advances in network softwarization and AI-driven cognition and generative reasoning, create a pivotal opportunity to rethink and reshape the future Internet. In this context, we introduce in this paper FlexNGIA 2.0, an Agentic AI-driven Internet architecture that leverages LLM-based AI agents to autonomously orchestrate, configure, and evolve the network. These agents can, at runtime, perceive, reason, and coordinate among themselves to dynamically design, implement, deploy, and adapt communication protocols, Service Function Chains (SFCs), network functions, resource allocation strategies, congestion control, and traffic engineering schemes, thereby ensuring optimal performance, reliability, and efficiency under evolving conditions. The paper first outlines the overall architecture of FlexNGIA 2.0 and its constituent LLM-based AI agents. For each agent, we detail its design, implementation, inputs and outputs, prompt structures, and interactions with tools and other agents, followed by preliminary proof-of-concept experiments demonstrating its operation and potential. The results clearly highlight the ability of these LLM-based AI agents to automate the design, implementation, deployment, and performance evaluation of transport protocols, service function chains, network functions, congestion control schemes, and resource allocation strategies. FlexNGIA 2.0 paves the way for a new class of Agentic AI-driven networks, where fully cognitive, self-evolving AI agents can autonomously design, implement, adapt, and optimize the network's protocols, algorithms, and behaviors to efficiently operate across complex, dynamic, and heterogeneous environments. To bring this vision to reality, we also identify key research challenges toward achieving fully autonomous, adaptive, and agentic AI-driven networks.
AI Insights
  • Agentic AI uses LLM agents that negotiate protocol parameters through prompt‑driven dialogue, enabling self‑written transport stacks.
  • The proposed orchestration layer lets agents dynamically allocate bandwidth, reconfigure SFCs, and adjust congestion control without operator input.
  • Key research hurdles identified are data quality, model interpretability, and regulatory compliance for autonomous network decisions.
  • Foundational works such as “Tree of Thoughts” and “A Survey on LLM‑based Autonomous Agents” illuminate deliberative problem‑solving strategies for these agents.
  • A continuous feedback loop lets agents evaluate performance metrics and iteratively refine network functions in real time (a minimal loop sketch follows this list).
  • Imagine a network that writes its own routing tables like a creative coder, constantly evolving to meet demand.
  • The paper urges development of explainable AI methods tailored to network‑agent decision logs to ensure transparency and trust.
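As a loose illustration of the feedback loop described in these bullets (the paper's agents, prompts, and tools are far richer), a minimal perceive-reason-act control loop might look like the following; measure_kpis, llm_propose_config, and apply_config are hypothetical placeholders, not interfaces from FlexNGIA 2.0.

    import time

    def agent_loop(measure_kpis, llm_propose_config, apply_config,
                   target_latency_ms=50.0, interval_s=60.0, max_rounds=10):
        # Hypothetical sketch of a perceive-reason-act control loop; the three
        # callables are illustrative placeholders, not APIs from the paper.
        config = None
        for _ in range(max_rounds):
            kpis = measure_kpis()                         # perceive current state
            if kpis["latency_ms"] > target_latency_ms:    # reason: target missed?
                config = llm_propose_config(kpis, config) # ask the LLM agent
                apply_config(config)                      # act on the network
            time.sleep(interval_s)                        # next control interval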
September 02, 2025
Save to Reading List

Interests not found

We did not find any papers matching the interests below. Try other terms, and consider whether the content exists on arxiv.org.
  • AI for Social Equality
  • AI Air Consumption
  • AI for Social Good
  • AI for Social Equity
  • AI Impacts on Society
  • AI on Air
  • AI on Energy
  • AI for Social Justice
  • AI Energy Consumption
  • AI on Food
  • AI on Water
You can edit or add more interests any time.

Unsubscribe from these updates