Hi!

Your personalized paper recommendations for 24 to 28 November 2025.
🎯 Top Personalized Recommendations
AI Summary
  • The paper proposes a new postprocessing algorithm, DP2DP, that combines differential privacy and fairness. [3]
  • The algorithm enforces demographic parity under differential privacy and converges to its fairness objective at essentially the same rate (up to a logarithmic factor) as the best non-private methods. [3]
  • Fairness: The concept of ensuring that an algorithm's output is unbiased and does not discriminate against certain groups or individuals. [2]
  • The authors provide a theoretical analysis of the algorithm's privacy and fairness properties, together with experimental results on synthetic and real datasets. [1]
Abstract
The increasing use of machine learning in sensitive applications demands algorithms that simultaneously preserve data privacy and ensure fairness across potentially sensitive sub-populations. While privacy and fairness have each been extensively studied, their joint treatment remains poorly understood. Existing research often frames them as conflicting objectives, with multiple studies suggesting that strong privacy notions such as differential privacy inevitably compromise fairness. In this work, we challenge that perspective by showing that differential privacy can be integrated into a fairness-enhancing pipeline with minimal impact on fairness guarantees. We design a postprocessing algorithm, called DP2DP, that enforces both demographic parity and differential privacy. Our analysis reveals that our algorithm converges towards its demographic parity objective at essentially the same rate (up to a logarithmic factor) as the best non-private methods from the literature. Experiments on both synthetic and real datasets confirm our theoretical results, showing that the proposed algorithm achieves state-of-the-art accuracy/fairness/privacy trade-offs.
Why we think this paper is great for you:
This paper directly tackles the critical challenge of ensuring fairness and privacy in AI applications. It offers insights into building machine learning systems that are equitable across diverse groups.
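Below is a minimal sketch of the kind of differentially private post-processing toward demographic parity that the abstract describes. It only illustrates the general idea: the per-group threshold heuristic, the Laplace mechanism on a count query, and all parameter values are our own assumptions, not the authors' DP2DP algorithm.

```python
# Illustrative sketch only: NOT the paper's DP2DP algorithm.
# Idea: post-process classifier scores so each group's selection rate matches a
# common target, estimating the per-group cutoff with a Laplace-noised count query.
import numpy as np

def dp_parity_thresholds(scores, groups, target_rate, epsilon, rng):
    thresholds = {}
    for g in np.unique(groups):
        s = np.sort(scores[groups == g])
        n = len(s)
        # A counting query has sensitivity 1, so Laplace(1/epsilon) noise gives
        # epsilon-differential privacy for this step (standard mechanism).
        k_noisy = target_rate * n + rng.laplace(scale=1.0 / epsilon)
        k = int(np.clip(round(k_noisy), 0, n))
        # Select (roughly) the top-k scores in this group.
        thresholds[g] = s[n - k] if k > 0 else np.inf
    return thresholds

rng = np.random.default_rng(0)
scores = np.concatenate([rng.normal(0.6, 0.1, 200), rng.normal(0.4, 0.1, 200)])
groups = np.array([0] * 200 + [1] * 200)
print(dp_parity_thresholds(scores, groups, target_rate=0.3, epsilon=2.0, rng=rng))
```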
AI Summary
  • Social AI is a type of artificial intelligence that can interact with humans in a way that simulates human-like conversation and behavior. [3]
  • The development of Social AI requires the integration of multiple disciplines, including computer science, psychology, sociology, and philosophy. [3]
  • Social AI has the potential to revolutionize various industries, such as healthcare, education, and customer service, by providing personalized support and improving user experience. [3]
  • Human-AI interaction: The process by which humans interact with artificial intelligence systems, such as chatbots or virtual assistants. [3]
  • However, the development of Social AI also raises several challenges and concerns, including ensuring transparency, accountability, and ethics in AI decision-making. [2]
Abstract
As artificial intelligence systems become increasingly integrated into human social contexts, Artificial Social Intelligence (ASI) has emerged as a critical capability that enables AI to perceive, understand, and engage meaningfully in complex human social interactions. This chapter introduces a comprehensive framework for Human-Centered Artificial Social Intelligence (HC-ASI), built upon the Technology-Human Factors-Ethics (THE) Triangle, which systematically addresses both technical foundations and human-centered design principles necessary for developing socially intelligent AI systems. This chapter provides a comprehensive overview of current ASI research. This chapter begins by establishing the theoretical foundations of ASI, tracing its evolution from classical psychological theories of human social intelligence to contemporary computational models, then examines the mechanisms underlying human-AI social interaction with particular emphasis on establishing shared social understanding and appropriate role positioning. The chapter further explores ASI's practical implications for individuals and groups through comprehensive evaluation frameworks that combine technical benchmarks with human-centered experiential assessments, demonstrating real-world applications through detailed case studies spanning healthcare, companionship, education, and customer service domains. Building on the overview and the framework of HC-ASI, this chapter articulates core HC-ASI design principles and translates them into actionable methodologies and implementation guidelines that provide practical guidance for researchers and practitioners. This chapter concludes with a critical discussion of current challenges and promising directions for developing comprehensive HC-ASI ecosystems.
Why we think this paper is great for you:
This work explores how AI can meaningfully integrate into human social contexts, focusing on its ability to understand and engage in complex interactions. It is highly relevant to understanding AI's role in society.
AI Summary
  • The paper presents an Active Inference Framework (AIF)-driven agent that can act adequately in context when facing normative conflicts, similar to human agents. [3]
  • Context-dependent preferences allow AIF-driven agents to make nuanced decisions based on the current context. [3]
  • Gamma dynamics reflect the affective aspect of belief updating in human subjects, where valence and arousal emerge from precision-weighted prediction-error flow and belief updating about policies. [3]
  • Active Inference Framework (AIF): A computational framework that models how agents make decisions based on their internal state and external environment. [3]
  • Context-dependent preferences: Preferences that change depending on the current context, allowing the agent to adapt its behavior accordingly. [3]
  • Low confidence is conducive to vigilance and allows for normatively appropriate conduct in context. [2]
  • Precision: A measure of confidence in a decision or policy, with higher precision indicating greater certainty. [1]
Abstract
This paper presents a computational account of how legal norms can influence the behavior of artificial intelligence (AI) agents, grounded in the active inference framework (AIF) that is informed by principles of economic legal analysis (ELA). The ensuing model aims to capture the complexity of human decision-making under legal constraints, offering a candidate mechanism for agent governance in AI systems, that is, the (auto)regulation of AI agents themselves rather than human actors in the AI industry. We propose that lawful and norm-sensitive AI behavior can be achieved through regulation by design, where agents are endowed with intentional control systems, or behavioral safety valves, that guide real-time decisions in accordance with normative expectations. To illustrate this, we simulate an autonomous driving scenario in which an AI agent must decide when to yield the right of way by balancing competing legal and pragmatic imperatives. The model formalizes how AIF can implement context-dependent preferences to resolve such conflicts, linking this mechanism to the conception of law as a scaffold for rational decision-making under uncertainty. We conclude by discussing how context-dependent preferences could function as safety mechanisms for autonomous agents, enhancing lawful alignment and risk mitigation in AI governance.
Why we think this paper is great for you:
You will find this paper insightful as it investigates how legal frameworks can guide AI behavior and shape its governance. It provides a computational perspective on the societal and ethical implications of AI.
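To make the precision (gamma) mechanism mentioned above more tangible, here is a toy sketch of precision-weighted policy selection in an active-inference style. The two policies, the expected-free-energy values, and the gamma settings are invented for illustration; this is not the authors' driving model.

```python
# Toy illustration of precision-weighted policy selection (not the paper's model).
import numpy as np

def softmax(x):
    x = x - x.max()
    e = np.exp(x)
    return e / e.sum()

def policy_posterior(expected_free_energy, gamma):
    # q(pi) is proportional to exp(-gamma * G(pi)); higher precision gamma
    # concentrates the distribution, lower gamma keeps the agent "vigilant".
    return softmax(-gamma * np.asarray(expected_free_energy, dtype=float))

# Hypothetical scenario: policy 0 = yield right of way, policy 1 = proceed.
contexts = {
    "clear road":         np.array([1.2, 0.4]),  # proceeding is cheaper
    "pedestrian present": np.array([0.3, 2.5]),  # yielding is strongly preferred
}
for name, G in contexts.items():
    for gamma in (0.5, 4.0):  # low vs high precision
        print(f"{name}, gamma={gamma}: q(pi) = {np.round(policy_posterior(G, gamma), 2)}")
```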
Abstract
Accurate and interpretable air pollution forecasting is crucial for public health, but most models face a trade-off between performance and interpretability. This study proposes a physics-guided, interpretable-by-design spatiotemporal learning framework. The model decomposes the spatiotemporal behavior of air pollutant concentrations into two transparent, additive modules. The first is a physics-guided transport kernel with directed weights conditioned on wind and geography (advection). The second is an explainable attention mechanism that learns local responses and attributes future concentrations to specific historical lags and exogenous drivers. Evaluated on a comprehensive dataset from the Stockholm region, our model consistently outperforms state-of-the-art baselines across multiple forecasting horizons. Our model's integration of high predictive performance and spatiotemporal interpretability provides a more reliable foundation for operational air-quality management in real-world applications.
Why we think this paper is great for you:
This research presents an interpretable AI model for forecasting air pollution, which is vital for public health and environmental monitoring. It offers a practical application of AI to address critical environmental concerns.
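As a rough sketch of the additive, two-module design the abstract describes, the toy forecaster below combines a wind-conditioned transport term with attention weights over historical lags. The array shapes, the fixed lag scores, and the random wind weights are placeholders we invented; in the actual model these components are learned.

```python
# Toy, invented stand-in for the two additive modules described in the abstract.
import numpy as np

def softmax(x):
    x = x - x.max()
    e = np.exp(x)
    return e / e.sum()

def forecast(history, wind_weights, lag_scores):
    """history: (L, S) pollutant concentrations for L lags and S stations.
    wind_weights: (S, S) directed transport weights conditioned on wind/geography.
    lag_scores: (L,) relevance of each historical lag (learned in the real model).
    Returns a (S,) one-step forecast = advection term + local attention term."""
    advection = wind_weights @ history[-1]   # transport from (upwind) neighbours
    attention = softmax(lag_scores)          # interpretable attribution over lags
    local = attention @ history              # weighted sum of past values per station
    return advection + local

rng = np.random.default_rng(0)
L, S = 6, 4
history = rng.uniform(10, 40, size=(L, S))   # e.g. NO2 in ug/m3
wind_weights = 0.1 * rng.random((S, S))
lag_scores = np.linspace(-1.0, 1.0, L)       # recent lags weighted higher
print(np.round(forecast(history, wind_weights, lag_scores), 1))
```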
AI Summary
  • The five levels of teacher-AI teaming are: transactional, situational, operational, praxical, and synergistic. [3]
  • Human factors play a significant role in achieving synergy; in particular, humans outperforming the AI on the task appears to be essential for synergy to occur. [3]
  • GenAI technical capabilities and designs also play a vital role in nurturing synergy, particularly in the educational context. [3]
  • The development of pedagogically aligned GenAI systems is crucial for contributing to task-specific synergy. [3]
  • GenAI: Generative Artificial Intelligence, a type of AI that can generate human-like text, images, or other forms of content. [3]
  • Synergy: A state of interaction where the performance of both humans and AI is greater than the sum of their individual performances. [3]
  • Institutions should establish structured professional development aligned with AI literacy frameworks to ensure that teachers develop the metacognitive, critical, and pedagogical skills necessary for praxical and eventually synergistic interactions. [2]
  • Effective implementation requires human-centred design processes in which teachers participate in co-designing models, prompts, interaction protocols, and decision-support flows to ensure pedagogical alignment and local relevance. [1]
Abstract
Generative artificial intelligence (GenAI) is increasingly used in education, posing significant challenges for teachers adapting to these changes. GenAI offers unprecedented opportunities for accessibility, scalability and productivity in educational tasks. However, the automation of teaching tasks through GenAI raises concerns about reduced teacher agency, potential cognitive atrophy, and the broader deprofessionalisation of teaching. Drawing on findings from prior literature on AI in Education, refined through a recent systematic literature review, this chapter presents a conceptualisation of five levels of teacher-AI teaming: transactional, situational, operational, praxical and synergistic teaming. The framework aims to capture the nuanced dynamics of teacher-AI interactions, particularly with GenAI, that may lead to the replacement, complementarity, or augmentation of teachers' competences and professional practice. The GenAI technological affordances required to support teaming are discussed, along with empirical studies. Drawing on empirical observations, we outline a future vision that moves beyond individual teacher agency toward collaborative decision-making between teachers and AI, in which both agents engage in negotiation, constructive challenge, and co-reasoning that enhance each other's capabilities and enable outcomes neither could realise independently. Further discussion of socio-technical factors beyond teacher-AI teaming is also included to streamline the synergy of teachers and AI in education ethically and practically.
Why we think this paper is great for you:
This paper delves into the evolving role of generative AI in education, particularly its impact on teachers and learning environments. It provides valuable perspectives on fostering synergistic teacher-AI interactions.
Abstract
Educational simulations have long been recognized as powerful tools for enhancing learning outcomes, yet their creation has traditionally required substantial resources and technical expertise. This paper introduces MicroSims, a novel framework for creating lightweight, interactive educational simulations that can be rapidly generated using artificial intelligence, universally embedded across digital learning platforms, and easily customized without programming knowledge. MicroSims occupy a unique position at the intersection of three key innovations: (1) standardized design patterns that enable AI-assisted generation, (2) iframe-based architecture that provides universal embedding and sandboxed security, and (3) transparent, modifiable code that supports customization and pedagogical transparency. We present a comprehensive framework encompassing design principles, technical architecture, metadata standards, and development workflows. Drawing on empirical research from physics education studies and meta-analyses across STEM disciplines, we demonstrate that interactive simulations can improve conceptual understanding by up to 30-40% compared to traditional instruction. MicroSims extend these benefits while addressing persistent barriers of cost, technical complexity, and platform dependence. This work has significant implications for educational equity and for low-cost intelligent interactive textbooks, enabling educators worldwide to create customized, curriculum-aligned simulations on demand. We discuss implementation considerations, present evidence of effectiveness, and outline future directions for AI-powered adaptive learning systems built on the MicroSim foundation.
Why we think this paper is great for you:
You will appreciate this framework for creating AI-generated educational simulations, designed to enhance learning outcomes. It highlights innovative ways AI can support and transform educational practices.
AI Summary
  • The rise of digital ghosts and deadbots forces us to confront fundamental questions about how we remember our dead. [3]
  • Digital ghosts can become the blind spot between memory and trickery, a prolonged mourning disguised as dialogue. [3]
  • Deadbots: AI replicas that simulate the presence of deceased individuals. [3]
  • Digital ghosts: AI-generated representations of deceased people that can interact with the living. [3]
  • The era of AI 'afterlives' is here, and it falls upon us to ensure this technology is used in a way that supports memory without becoming an imposture, and helps heal without betraying the dignity of those we love and lose. [3]
  • The article cites various studies and papers on the topic of digital ghosts and deadbots, including works by authors such as Jed R. Brubaker and John Danaher. [3]
  • From an ethical standpoint, arguably intent and transparency make a substantial difference in creating or engaging with a simulacrum of a loved one. [2]
Abstract
Advances in artificial intelligence now make it possible to simulate the dead through chatbots, voice clones, and video avatars trained on a person's digital traces. These "digital ghosts" are moving from fiction to commercial reality, reshaping how people mourn and remember. This paper offers a conceptual and ethical analysis of AI-mediated digital afterlives. We define what counts as a digital ghost, trace their rise across personal, commercial, and institutional contexts, and identify core ethical tensions around grief and well-being, truthfulness and deception, consent and posthumous privacy, dignity and misrepresentation, and the commercialization of mourning. To analyze these challenges, we propose a nine-dimensional taxonomy of digital afterlife technologies and, building on it, outline the features of an ethically acceptable digital ghost: premortem intent, mutual consent, transparent and limited data use, clear disclosure, restricted purposes and access, family or estate stewardship, and minimal behavioral agency. We argue for targeted regulation and professional guidelines to ensure that digital ghosts can aid remembrance without slipping into forms of deception.
Why we think this paper is great for you:
This paper explores the ethical considerations in designing AI systems that simulate deceased individuals, prompting reflection on AI's profound impact on human experiences. It offers a unique perspective on the ethical dimensions of AI in society.
AI Air Consumption
Abstract
With the rapid rise of the Low-Altitude Economy (LAE), the demand for intelligent processing and real-time response in services such as aerial traffic, emergency communications, and environmental monitoring continues to grow. Meanwhile, the Computing Power Network (CPN) aims to integrate global computing resources and perform on-demand scheduling to efficiently handle services from diverse sources. However, it is constrained by static deployment and limited adaptability. In this paper, we analyze the complementary relationship between LAE and CPN and propose a novel air-ground collaborative intelligent service provision with an agentification paradigm. Through synergy between LAE and CPNs, computing and communication services are jointly scheduled and collaboratively optimized to enhance the execution efficiency of low-altitude services and improve the flexibility of CPNs. It also integrates LAE's strengths in aerial sensing, mobile coverage, and dynamic communication links, forming a cloud-edge-air collaborative framework. Hence, we review the characteristics and limitations of both LAE and CPN and explore how they can cooperate to overcome these limitations. Then we demonstrate the flexibility of the integrated CPN and LAE framework through a case study. Finally, we summarize the key challenges in constructing an integrated air-ground computing and communication system and discuss future research directions toward emerging technologies.
AI Summary
  • Computing Power Networks (CPN): A type of network that provides powerful computing resources for various applications and services. [3]
  • The paper proposes an integrated air-ground computing and communication architecture that combines the strengths of Low-Altitude Economy (LAE) networks and Computing Power Networks (CPN). [2]
  • The authors outline key future research directions, including digital twin-enhanced LAE-CPN integrated systems, security and privacy in LAE-CPN integrated systems, and energy-aware LAE-CPN integrated systems. [1]
AI Energy Consumption
Abstract
The Cognitive Buffer Hypothesis (CBH) posits that larger brains evolved to enhance survival in changing conditions. However, larger brains also carry higher energy demands, imposing additional metabolic burdens. Alongside brain size, brain organization plays a key role in cognitive ability and, with suitable architectures, may help mitigate energy challenges. This study evolves Artificial Neural Networks (ANNs) used by Reinforcement Learning (RL) agents to investigate how environmental variability and energy costs influence the evolution of neural complexity, defined in terms of ANN size and structure. Results indicate that under energy constraints, increasing seasonality led to smaller ANNs. This challenges CBH and supports the Expensive Brain Hypothesis (EBH), as highly seasonal environments reduced net energy intake and thereby constrained brain size. ANN structural complexity primarily emerged as a byproduct of size, where energy costs promoted the evolution of more efficient networks. These results highlight the role of energy constraints in shaping neural complexity, offering in silico support for biological theory and energy-efficient robotic design.
AI Summary
  • Evolving neural networks through artificial life techniques can provide insights into the evolution of brain structure and function. [3]
  • The results suggest that the expensive brain hypothesis may be supported in certain environments, but not universally. [3]
  • The expensive brain hypothesis suggests that larger brains are more energetically costly to maintain, so environments that reduce net energy intake constrain brain size. [2]
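A tiny numerical sketch of the trade-off the abstract describes: if evolution maximizes task reward minus a metabolic cost per connection, a higher effective energy cost (as in highly seasonal environments) pushes the selected network size down. All numbers below are invented for illustration, not taken from the study.

```python
# Invented toy numbers; only meant to show the direction of the size/energy trade-off.
import numpy as np

def fitness(task_reward, n_connections, energy_cost_per_connection):
    return task_reward - energy_cost_per_connection * n_connections

sizes = np.array([50, 200, 800])              # candidate network sizes (connections)
rewards = 10 * np.log1p(sizes / 50)           # assume diminishing returns to size

for label, cost in [("cheap energy (low seasonality)", 0.005),
                    ("costly energy (high seasonality)", 0.05)]:
    best = sizes[np.argmax(fitness(rewards, sizes, cost))]
    print(f"{label}: selected network size = {best} connections")
```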
Abstract
World models are emerging as a foundational paradigm for scalable, data-efficient embodied AI. In this work, we present GigaWorld-0, a unified world model framework designed explicitly as a data engine for Vision-Language-Action (VLA) learning. GigaWorld-0 integrates two synergistic components: GigaWorld-0-Video, which leverages large-scale video generation to produce diverse, texture-rich, and temporally coherent embodied sequences under fine-grained control of appearance, camera viewpoint, and action semantics; and GigaWorld-0-3D, which combines 3D generative modeling, 3D Gaussian Splatting reconstruction, physically differentiable system identification, and executable motion planning to ensure geometric consistency and physical realism. Their joint optimization enables the scalable synthesis of embodied interaction data that is visually compelling, spatially coherent, physically plausible, and instruction-aligned. Training at scale is made feasible through our efficient GigaTrain framework, which exploits FP8-precision and sparse attention to drastically reduce memory and compute requirements. We conduct comprehensive evaluations showing that GigaWorld-0 generates high-quality, diverse, and controllable data across multiple dimensions. Critically, VLA models (e.g., GigaBrain-0) trained on GigaWorld-0-generated data achieve strong real-world performance, significantly improving generalization and task success on physical robots without any real-world interaction during training.
AI Impacts on Society
Abstract
In AI, the existential risk denotes the hypothetical threat posed by an artificial system that would possess both the capability and the objective, either directly or indirectly, to eradicate humanity. This issue is gaining prominence in scientific debate due to recent technical advancements and increased media coverage. In parallel, AI progress has sparked speculation and studies about the potential emergence of artificial consciousness. The two questions, AI consciousness and existential risk, are sometimes conflated, as if the former entailed the latter. Here, I explain that this view stems from a common confusion between consciousness and intelligence. Yet these two properties are empirically and theoretically distinct. Arguably, while intelligence is a direct predictor of an AI system's existential threat, consciousness is not. There are, however, certain incidental scenarios in which consciousness could influence existential risk, in either direction. Consciousness could be viewed as a means towards AI alignment, thereby lowering existential risk; or, it could be a precondition for reaching certain capabilities or levels of intelligence, and thus positively related to existential risk. Recognizing these distinctions can help AI safety researchers and public policymakers focus on the most pressing issues.
AI Summary
  • The concept of artificial consciousness is being explored in various fields, including AI research, neuroscience, and philosophy. [3]
  • Artificial consciousness: The ability of a machine or computer program to possess consciousness, which is often defined as subjective experience, self-awareness, and the ability to have thoughts and feelings. [3]
  • Consciousness: A complex and multifaceted concept that refers to the state of being aware of one's surroundings, thoughts, and emotions. [3]
  • Selection-broadcast cycle structure: A hypothetical mechanism that proposes how the global workspace integrates information from various sources to generate conscious experience. [3]
  • Researchers are debating whether large language models (LLMs) can be considered conscious or intelligent. [2]
  • Some experts argue that LLMs lack the ability to truly understand and reason about their environment, while others propose that they may possess a form of consciousness. [1]
Abstract
Algorithms have been estimated to increase AI training FLOP efficiency by a factor of 22,000 between 2012 and 2023 [Ho et al., 2024]. Running small-scale ablation experiments on key innovations from this time period, we are able to account for less than 10x of these gains. Surveying the broader literature, we estimate that additional innovations not included in our ablations account for less than 10x, yielding a total under 100x. This leads us to conduct scaling experiments, which reveal that much of this efficiency gap can be explained by algorithms with scale-dependent efficiency improvements. In particular, we conduct scaling experiments between LSTMs and Transformers, finding exponent differences in their compute-optimal scaling law while finding little scaling difference for many other innovations. These experiments demonstrate that - contrary to standard assumptions - an algorithm's efficiency gains are tied to compute scale. Using experimental extrapolation and literature estimates, we account for 6,930x efficiency gains over the same time period, with the scale-dependent LSTM-to-Transformer transition accounting for the majority of gains. Our results indicate that algorithmic progress for small models has been far slower than previously assumed, and that measures of algorithmic efficiency are strongly reference-dependent.
AI Summary
  • Algorithmic progress in language models exhibits fundamentally different behavior across compute scales. [3]
  • Algorithmic progress: The improvement in training efficiency and capabilities of language models over time. [3]
  • Scale-dependent innovations: Innovations whose impact on efficiency gains varies depending on the compute scale. [3]
  • Algorithmic progress is not a single number, but rather depends on both the reference algorithm and target compute scale. [3]
  • Scale-dependent innovations are critical to understanding algorithmic progress and its implications for the future of AI. [3]
  • The study's experiments are conducted at small scales compared to more recent scaling studies. [3]
  • Scale-dependent innovations, such as the LSTM-to-Transformer transition and Chinchilla rebalancing, account for most of the efficiency gains at frontier scales. [2]
  • The concentration of progress in architectural transitions suggests that future progress may depend on discovering fundamentally new architectures rather than incremental refinements of existing ones. [1]
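A back-of-the-envelope illustration of the abstract's point that efficiency gains are reference- and scale-dependent: if two algorithms follow power-law loss curves with different exponents, the compute multiplier needed for the older algorithm to match the newer one grows with the compute budget. The exponents and prefactors below are made up for illustration, not the paper's fitted values.

```python
# Illustrative constants only; not the paper's fitted scaling laws.
def loss(C, a, alpha):
    return a * C ** (-alpha)

def efficiency_gain(C, old=(10.0, 0.050), new=(10.0, 0.055)):
    """Compute multiplier k the old algorithm needs to match the new one's loss at
    budget C, i.e. solve a_old * (k*C)**(-alpha_old) = a_new * C**(-alpha_new)."""
    a_old, alpha_old = old
    a_new, alpha_new = new
    target = loss(C, a_new, alpha_new)
    return (a_old / target) ** (1.0 / alpha_old) / C

for C in (1e18, 1e21, 1e24):   # training FLOP budgets
    print(f"C = {C:.0e}: equivalent efficiency gain = {efficiency_gain(C):.0f}x")
```

With these toy exponents the measured "gain" grows from roughly 63x to 251x as the budget increases, which is the sense in which an algorithm's efficiency advantage is tied to compute scale.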
AI Water Consumption
Abstract
Driven by the advancement of GPUs and AI, the field of Computational Fluid Dynamics (CFD) is undergoing significant transformations. This paper bridges the gap between the machine learning and CFD communities by deconstructing industrial-scale CFD simulations into their core components. Our main contribution is to propose the first scaling law that incorporates CFD inputs for both data generation and model training to outline the unique challenges of developing and deploying these next-generation AI models for complex fluid dynamics problems. Using our new scaling law, we establish quantitative estimates for the large-scale limit, distinguishing between regimes where the cost of data generation is the dominant factor in total compute versus where the cost of model training prevails. We conclude that the incorporation of high-fidelity transient data provides the optimum route to a foundation model. We constrain our theory with concrete numbers, providing the first public estimates on the computational cost and time to build a foundation model for CFD.
AI Summary
  • The article discusses the development of Large Language Models (LLMs) and their scaling laws, as well as the application of these principles to Computational Fluid Dynamics (CFD). [3]
  • They also discuss the Chinchilla scaling laws, which suggest that for a given compute budget, a model's parameter count should be roughly proportional to its training data size. [3]
  • The authors suggest that AI models for CFD should be trained using a similar scaling approach, where each subset of the simulation mesh is processed only once and input diversity is fostered. [3]
  • The authors argue that the performance of LLMs, measured by cross-entropy loss, scales predictably with three primary factors: model size, dataset size, and total training compute. [2]
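To illustrate the regime distinction the abstract draws between data-generation cost and training cost, here is a toy budget split. The per-simulation FLOP, tokens per simulation, and the 6*N*D dense-training rule of thumb are placeholder assumptions, not the paper's estimates.

```python
# Placeholder constants; only meant to show how the dominant cost regime can flip.
def total_flop(n_sims, flop_per_sim, n_params, tokens_per_sim):
    data_gen = n_sims * flop_per_sim                      # cost of running the CFD solver
    training = 6 * n_params * n_sims * tokens_per_sim     # dense-training rule of thumb (~6*N*D)
    return data_gen, training

for n_params in (1e8, 1e10, 1e12):
    d, t = total_flop(n_sims=1e4, flop_per_sim=1e17, n_params=n_params, tokens_per_sim=1e6)
    regime = "data-generation dominated" if d > t else "training dominated"
    print(f"N = {n_params:.0e} params: data {d:.1e} FLOP vs training {t:.1e} FLOP -> {regime}")
```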
Abstract
Due to rising demands for Artificial Intelligence (AI) inference, especially in higher education, novel solutions utilising existing infrastructure are emerging. The utilisation of High-Performance Computing (HPC) has become a prevalent approach for the implementation of such solutions. However, the classical operating model of HPC does not adapt well to the requirements of synchronous, user-facing dynamic AI application workloads. In this paper, we propose our solution that serves LLMs by integrating vLLM, Slurm and Kubernetes on the supercomputer RAMSES. The initial benchmark indicates that the proposed architecture scales efficiently for 100, 500 and 1000 concurrent requests, incurring an end-to-end latency overhead of only approximately 500 ms.
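A minimal sketch of the kind of concurrency benchmark the abstract reports, assuming an OpenAI-compatible vLLM completions endpoint; the URL, model name, and request payload are placeholders, and the paper's actual benchmarking setup may differ.

```python
# Toy latency probe against an assumed OpenAI-compatible vLLM endpoint.
import time, statistics
from concurrent.futures import ThreadPoolExecutor
import requests

URL = "http://localhost:8000/v1/completions"      # placeholder server address
PAYLOAD = {"model": "my-model", "prompt": "Hello", "max_tokens": 32}

def one_request(_):
    t0 = time.perf_counter()
    requests.post(URL, json=PAYLOAD, timeout=120).raise_for_status()
    return time.perf_counter() - t0               # end-to-end latency in seconds

def benchmark(concurrency):
    with ThreadPoolExecutor(max_workers=concurrency) as pool:
        latencies = list(pool.map(one_request, range(concurrency)))
    return statistics.median(latencies)

for n in (100, 500, 1000):
    print(f"{n} concurrent requests: median end-to-end latency {benchmark(n):.3f}s")
```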
AI for Social Equality
Abstract
Equality saturation is a technique for program optimization based on non-destructive rewriting and a form of program analysis called e-class analysis. The current form of e-class analysis is pessimistic and therefore ineffective at analyzing cyclic programs, such as those in SSA form. We propose an abstract interpretation algorithm that can precisely analyze cycles during equality saturation. This results in a unified algorithm for optimistic analysis and non-destructive rewriting. We instantiate this approach on a prototype abstract interpreter for SSA programs using a new semantics of SSA. Our prototype can analyze simple example programs more precisely than clang and gcc.
AI Summary
  • Abstract interpretation and equality saturation can be combined using a fixpoint algorithm that alternates between phases of abstract interpretation and equality saturation. [2]
  • Analysis results from abstract interpretation can help improve the precision of equality saturation by providing flow-insensitive abstractions that can be used in rewrite rules. [1]
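The abstract's contrast between pessimistic and optimistic analysis of cycles can be seen in a toy constant-propagation example: starting the fixpoint from bottom recovers a precise fact about a cyclic SSA definition, while starting from top does not. The flat constant lattice below is a generic illustration, not the paper's e-class analysis algorithm.

```python
# Toy flat constant lattice: BOT (no info yet) < concrete int < TOP (not a constant).
BOT, TOP = "BOT", "TOP"

def join(a, b):
    if a == BOT: return b
    if b == BOT: return a
    return a if a == b else TOP

def add(a, b):
    if BOT in (a, b): return BOT
    if TOP in (a, b): return TOP
    return a + b

def analyze(start):
    # Cyclic SSA definition:  x = phi(0, x + 0)
    x = start
    for _ in range(10):                       # iterate to a fixpoint
        x = join(0, add(x, 0))
    return x

print("optimistic  (start = BOT):", analyze(BOT))   # -> 0, the precise answer
print("pessimistic (start = TOP):", analyze(TOP))   # -> TOP, no information
```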
AI for Social Good
Abstract
AI/ML model cards can contain a benchmarked evaluation of an AI/ML model against its intended use, but a one-time assessment during model training does not capture how and where a model is actually used over its lifetime. Through Patra Model Cards embedded in the ICICLE AI Institute software ecosystem, we study model cards as dynamic objects. The study reported here assesses the benefits and trade-offs of adopting the Model Context Protocol (MCP) as an interface to the Patra Model Card server. Quantitative assessment shows the overhead of MCP compared to a REST interface. The core question, however, concerns the active sessions enabled by MCP; this is a qualitative question of fit and use in the context of dynamic model cards, which we also address.
AI Summary
  • The article discusses the Model Context Protocol (MCP) and its performance evaluation in serving model cards, comparing it to REST protocol. [2]
  • FAIR Signposting Profile: Implementation guidelines for exposing machine-actionable navigation links using standardized HTTP headers and HTML link elements. [1]
AI on Energy
Abstract
Understanding household behaviour is essential for modelling macroeconomic dynamics and designing effective policy. While heterogeneous agent models offer a more realistic alternative to representative agent frameworks, their implementation poses significant computational challenges, particularly in continuous time. The Aiyagari-Bewley-Huggett (ABH) framework, recast as a system of partial differential equations, typically relies on grid-based solvers that suffer from the curse of dimensionality, high computational cost, and numerical inaccuracies. This paper introduces the ABH-PINN solver, an approach based on Physics-Informed Neural Networks (PINNs), which embeds the Hamilton-Jacobi-Bellman and Kolmogorov Forward equations directly into the neural network training objective. By replacing grid-based approximation with mesh-free, differentiable function learning, the ABH-PINN solver benefits from the advantages of PINNs of improved scalability, smoother solutions, and computational efficiency. Preliminary results show that the PINN-based approach is able to obtain economically valid results matching the established finite-difference solvers.
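For intuition on the PINN construction the abstract describes, the sketch below shows the general shape of such an objective on a toy 1-D ODE: an equation residual evaluated at collocation points plus a boundary term. The tiny network, the finite-difference derivative (standing in for automatic differentiation), and the ODE itself are illustrative stand-ins, not the paper's HJB/Kolmogorov Forward system.

```python
# Toy stand-in for a PINN objective; not the ABH-PINN solver itself.
import numpy as np

def u(x, params):
    w1, b1, w2, b2 = params
    h = np.tanh(np.outer(x, w1) + b1)        # one hidden layer "network"
    return h @ w2 + b2

def du_dx(x, params, eps=1e-4):
    # Finite-difference derivative, standing in for autodiff in real PINNs.
    return (u(x + eps, params) - u(x - eps, params)) / (2 * eps)

def pinn_loss(params, x_colloc):
    residual = du_dx(x_colloc, params) + u(x_colloc, params)   # toy ODE: u' + u = 0
    boundary = u(np.array([0.0]), params) - 1.0                # boundary condition u(0) = 1
    return np.mean(residual ** 2) + np.mean(boundary ** 2)

rng = np.random.default_rng(0)
params = [rng.normal(size=8), rng.normal(size=8), rng.normal(size=8), 0.0]
print(pinn_loss(params, np.linspace(0.0, 1.0, 32)))
```

In the paper's setting, the residuals of the Hamilton-Jacobi-Bellman and Kolmogorov Forward equations would play the role of the toy ODE residual above, with the loss minimized by gradient-based training of the network.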
AI on Healthcare
Abstract
This second update to the 2025 International AI Safety Report assesses new developments in general-purpose AI risk management over the past year. It examines how researchers, public institutions, and AI developers are approaching risk management for general-purpose AI. In recent months, for example, three leading AI developers applied enhanced safeguards to their new models, as their internal pre-deployment testing could not rule out the possibility that these models could be misused to help create biological weapons. Beyond specific precautionary measures, there have been a range of other advances in techniques for making AI models and systems more reliable and resistant to misuse. These include new approaches in adversarial training, data curation, and monitoring systems. In parallel, institutional frameworks that operationalise and formalise these technical capabilities are starting to emerge: the number of companies publishing Frontier AI Safety Frameworks more than doubled in 2025, and governments and international organisations have established a small number of governance frameworks for general-purpose AI, focusing largely on transparency and risk assessment.
AI Summary
  • The report assesses new developments in general-purpose AI risk management, including enhanced safeguards that leading developers applied before deploying new models. [3]
  • Technical advances in adversarial training, data curation, and monitoring are being complemented by emerging institutional frameworks, such as Frontier AI Safety Frameworks and early governance frameworks from governments and international organisations. [3]
  • Researchers are working on developing more robust and secure AI systems that can mitigate these risks. [2]
  • Red teaming: A method used to test the security and robustness of AI systems by simulating attacks on them. [1]

Interests not found

We did not find any papers that match the interests below. Try other search terms, and consider whether the content exists on arxiv.org.
  • AI for Society
  • AI on Air
  • AI on Food
You can edit or add more interests any time.