Papers from 22 to 26 September 2025

Here are your personalized paper recommendations, sorted by relevance.
AI for Society
University of Buenos Aires
Abstract
This paper develops a taxonomy of expert perspectives on the risks and likely consequences of artificial intelligence, with particular focus on Artificial General Intelligence (AGI) and Artificial Superintelligence (ASI). Drawing from primary sources, we identify three predominant doctrines: (1) The dominance doctrine, which predicts that the first actor to create sufficiently advanced AI will attain overwhelming strategic superiority sufficient to cheaply neutralize its opponents' defenses; (2) The extinction doctrine, which anticipates that humanity will likely lose control of ASI, leading to the extinction of the human species or its permanent disempowerment; (3) The replacement doctrine, which forecasts that AI will automate a large share of tasks currently performed by humans, but will not be so transformative as to fundamentally reshape or bring an end to human civilization. We examine the assumptions and arguments underlying each doctrine, including expectations around the pace of AI progress and the feasibility of maintaining advanced AI under human control. While the boundaries between doctrines are sometimes porous and many experts hedge across them, this taxonomy clarifies the core axes of disagreement over the anticipated scale and nature of the consequences of AI development.
Abstract
Modern socio-economic systems are undergoing deep integration with artificial intelligence technologies. This paper constructs a heterogeneous agent-based modeling framework that incorporates both human workers and autonomous AI agents, to study the impact of AI collaboration under resource constraints on aggregate social output. We build five progressively extended models: Model 1 serves as the baseline of pure human collaboration; Model 2 introduces AI as collaborators; Model 3 incorporates network effects among agents; Model 4 treats agents as independent producers; and Model 5 integrates both network effects and independent agent production. Through theoretical derivation and simulation analysis, we find that the introduction of AI agents can significantly increase aggregate social output. When considering network effects among agents, this increase exhibits nonlinear growth far exceeding the simple sum of individual contributions. Under the same resource inputs, treating agents as independent producers provides higher long-term growth potential; introducing network effects further demonstrates strong characteristics of increasing returns to scale.
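A minimal sketch of the kind of comparison the five models make is below, assuming a toy additive production function with an optional Metcalfe-style network term among agents; the function, parameter values, and functional form are illustrative assumptions, not the paper's specification.

```python
# Toy illustration (not the paper's model): aggregate output for humans only,
# humans plus AI agents, and humans plus AI agents with a network effect.
# The additive production function and all parameter values are assumptions.

def aggregate_output(n_humans, n_agents, network=False,
                     human_prod=1.0, agent_prod=0.6, link_gain=0.002):
    """Base output is the sum of individual productivities; the optional
    network term grows with the number of agent-agent links, which is what
    produces increasing returns to scale in this toy setting."""
    output = n_humans * human_prod + n_agents * agent_prod
    if network and n_agents > 1:
        links = n_agents * (n_agents - 1) / 2   # fully connected agent network
        output += link_gain * links
    return output

for agents in (0, 50, 100, 200):
    print(f"agents={agents:3d}  "
          f"no network={aggregate_output(100, agents):7.1f}  "
          f"with network={aggregate_output(100, agents, network=True):7.1f}")
```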
AI Air Consumption
Microsoft
Abstract
As AI inference scales to billions of queries and emerging reasoning and agentic workflows increase token demand, reliable estimates of per-query energy use are increasingly important for capacity planning, emissions accounting, and efficiency prioritization. Many public estimates are inconsistent and overstate energy use, because they extrapolate from limited benchmarks and fail to reflect efficiency gains achievable at scale. In this perspective, we introduce a bottom-up methodology to estimate the per-query energy of large-scale LLM systems based on token throughput. For models running on an H100 node under realistic workloads, GPU utilization and PUE constraints, we estimate a median energy per query of 0.34 Wh (IQR: 0.18-0.67) for frontier-scale models (>200 billion parameters). These results are consistent with measurements using production-scale configurations and show that non-production estimates and assumptions can overstate energy use by 4-20x. Extending to test-time scaling scenarios with 15x more tokens per typical query, the median energy rises 13x to 4.32 Wh, indicating that targeting efficiency in this regime will deliver the largest fleet-wide savings. We quantify achievable efficiency gains at the model, serving platform, and hardware levels, finding individual median reductions of 1.5-3.5x in energy per query, while combined advances can plausibly deliver 8-20x reductions. To illustrate the system-level impact, we estimate the baseline daily energy use of a deployment serving 1 billion queries to be 0.8 GWh/day. If 10% are long queries, demand could grow to 1.8 GWh/day. With targeted efficiency interventions, it falls to 0.9 GWh/day, similar to the energy footprint of web search at that scale. This echoes how data centers historically tempered energy growth through efficiency gains during the internet and cloud build-up.
AI Insights
  • Piecewise regression links tokens‑per‑second (TPS) to input/output lengths, revealing nonlinear scaling.
  • Doubling input tokens barely raises energy per query, while extending output tokens causes a larger jump.
  • A scaling factor derived from non‑uniform usage predicts idle nodes needed to hit target utilization.
  • The study assumes a sinusoidal load between 50% and 100% (average 75%), a simplification that may not fit all data‑center patterns.
  • Node power is set at 11.3 kW peak and 2.7 kW idle (8.6 kW dynamic range); these values may vary across hardware families.
  • Tokens‑per‑second (TPS) is the rate of tokens processed per second by a node, a key energy‑budget metric.
  • “Energy‑Efficient Data Centers: A Survey of Techniques and Technologies” provides a deep dive into scaling‑aware power‑management.
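As a rough illustration of the bottom-up accounting described above, the sketch below amortizes node power over delivered token throughput and applies a PUE factor. The peak/idle power figures are taken from the insights above; the throughput, utilization, PUE, and tokens-per-query values are placeholder assumptions, so the result is a ballpark figure rather than the paper's estimate.

```python
# Rough sketch of a bottom-up per-query energy estimate (illustrative inputs only).

def energy_per_query_wh(tokens_per_query, node_tps, utilization,
                        peak_w=11300.0, idle_w=2700.0, pue=1.2):
    """Energy (Wh) attributed to one query on a shared inference node,
    using amortized accounting: average node power divided by average
    delivered throughput, scaled by tokens served and facility PUE."""
    node_seconds = tokens_per_query / (node_tps * utilization)  # share of node time
    avg_power_w = idle_w + utilization * (peak_w - idle_w)      # linear power model (assumption)
    joules = avg_power_w * node_seconds * pue                   # include facility overhead
    return joules / 3600.0

# Example: hypothetical 300-token response on a node sustaining 5,000 tokens/s
# at 75% average utilization.
print(f"{energy_per_query_wh(300, 5000, 0.75):.2f} Wh")
```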
AI for Social Good
Abstract
AI safety research has emphasized interpretability, control, and robustness, yet without an ethical substrate these approaches may remain fragile under competitive and open-ended pressures. This paper explores ethics not as an external add-on, but as a possible structural lens for alignment, introducing a \emph{moral problem space} $M$: a high-dimensional domain in which moral distinctions could, in principle, be represented in AI systems. Human moral reasoning is treated as a compressed and survival-biased projection $\tilde{M}$, clarifying why judgment is inconsistent while suggesting tentative methods -- such as sparse autoencoders, causal mediation, and cross-cultural corpora -- that might help probe for disentangled moral features. Within this framing, metaethical positions are interpreted as research directions: realism as the search for stable invariants, relativism as context-dependent distortions, constructivism as institutional shaping of persistence, and virtue ethics as dispositional safeguards under distributional shift. Evolutionary dynamics and institutional design are considered as forces that may determine whether ethical-symbiotic lineages remain competitively viable against more autarkic trajectories. Rather than offering solutions, the paper sketches a research agenda in which embedding ethics directly into representational substrates could serve to make philosophical claims more empirically approachable, positioning moral theory as a potential source of hypotheses for alignment work.
AI on Energy
European University of 1
Abstract
Artificial Intelligence is increasingly pervasive across domains, with ever more complex models delivering impressive predictive performance. This fast technological advancement, however, comes at a concerning environmental cost, with state-of-the-art models, particularly deep neural networks and large language models, requiring substantial computational resources and energy. In this work, we present the intuition behind Green AI dynamic model selection, an approach that aims to reduce the environmental footprint of AI by selecting the most sustainable model while minimizing potential accuracy loss. Specifically, our approach takes into account the inference task, the environmental sustainability of available models, and accuracy requirements to dynamically choose the most suitable model. The approach comprises two methods, namely Green AI dynamic model cascading and Green AI dynamic model routing. We demonstrate the effectiveness of our approach via a proof-of-concept empirical example based on a real-world dataset. Our results show that Green AI dynamic model selection can achieve substantial energy savings (up to ~25%) while largely retaining the accuracy of the most energy-greedy solution (up to ~95% of it). In conclusion, our preliminary findings highlight the potential of hybrid, adaptive model selection strategies to mitigate the energy demands of modern AI systems without significantly compromising accuracy requirements.
AI Insights
  • Green AI leverages pruning, distillation, and quantization to slash energy use while preserving accuracy.
  • Carbon Aware Transformers jointly optimize model and hardware to cut carbon footprints.
  • The Hidden Cost of an Image quantifies energy spikes in AI image generation, revealing hidden inefficiencies.
  • Dynamic model cascading routes inputs to the lightest suitable network, achieving ~25% energy savings.
  • Dynamic model routing adapts inference paths per task, balancing speed, accuracy, and sustainability.
  • A key challenge remains integrating Green AI into production pipelines without compromising robustness.
  • For deeper dives, explore Goodfellow et al.'s Deep Learning and Bishop's Pattern Recognition texts.
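The two methods lend themselves to a short sketch. The version below assumes scikit-learn-style models exposing `predict_proba` and a known per-inference energy cost for each model; the confidence threshold, accuracy estimates, and energy figures are stand-ins, not the paper's implementation.

```python
# Minimal sketch of dynamic model cascading and routing (illustrative only).
import numpy as np

def cascade_predict(x, models, energies, confidence_threshold=0.8):
    """Dynamic model cascading: try models from the most frugal to the most
    accurate, stopping as soon as a prediction is confident enough."""
    spent = 0.0
    for model, cost in zip(models, energies):   # models ordered cheap -> expensive
        probs = model.predict_proba([x])[0]
        spent += cost
        if probs.max() >= confidence_threshold:
            break
    return int(np.argmax(probs)), spent

def route_predict(x, models, energies, accuracy_estimates, min_accuracy=0.90):
    """Dynamic model routing: pick, up front, the cheapest model whose
    estimated accuracy meets the requirement for this task."""
    eligible = [i for i, acc in enumerate(accuracy_estimates) if acc >= min_accuracy]
    i = min(eligible, key=lambda j: energies[j]) if eligible else int(np.argmax(accuracy_estimates))
    probs = models[i].predict_proba([x])[0]
    return int(np.argmax(probs)), energies[i]
```

Cascading spends energy adaptively after seeing the cheap model's confidence on each input, while routing commits to a single model up front based on task requirements.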
AI on Food
Abstract
Global food systems must deliver nutritious and sustainable foods while sharply reducing environmental impact. Yet, food innovation remains slow, empirical, and fragmented. Artificial intelligence (AI) now offers a transformative path with the potential to link molecular composition to functional performance, bridge chemical structure to sensory outcomes, and accelerate cross-disciplinary innovation across the entire production pipeline. Here we outline AI for Food as an emerging discipline that integrates ingredient design, formulation development, fermentation and production, texture analysis, sensory properties, manufacturing, and recipe generation. Early successes demonstrate how AI can predict protein performance, map molecules to flavor, and tailor consumer experiences. But significant challenges remain: lack of standardization, scarce multimodal data, cultural and nutritional diversity, and low consumer confidence. We propose three priorities to unlock the field: treating food as a programmable biomaterial, building self-driving laboratories for automated discovery, and developing deep reasoning models that integrate sustainability and human health. By embedding AI responsibly into the food innovation cycle, we can accelerate the transition to sustainable protein systems and chart a predictive, design-driven science of food for our own health and the health of our planet.
AI for Social Equity
The University of Manch
Abstract
This paper offers a domain-mediated comparative review of 251 studies on public attitudes toward AI, published between 2011 and 2025. Drawing on a systematic literature review, we analyse how different factors, including perceived benefits and concerns (or risks), shape public acceptance of, or resistance to, artificial intelligence across domains and use cases, including healthcare, education, security, public administration, generative AI, and autonomous vehicles. The analysis highlights recurring patterns in individual, contextual, and technical factors influencing perception, while also tracing variations in institutional trust, perceived fairness, and ethical concerns. We show that public perception of AI is shaped not only by technical design or performance but also by sector-specific considerations as well as imaginaries, cultural narratives, and historical legacies. This comparative approach offers a foundation for developing more tailored and context-sensitive strategies for responsible AI governance.
AI Insights
  • AI acceptance varies by region, with East Asian publics more optimistic than many Western groups.
  • Job displacement fears dominate public discourse, outweighing enthusiasm for AI efficiency.
  • Bias and opacity in algorithms are top trust barriers, especially in healthcare and education.
  • Government AI use sparks accountability concerns, demanding transparent audit trails.
  • Age, education, and culture jointly shape AI perceptions, calling for demographic‑tailored outreach.
  • Literature gaps include systematic studies of AI’s cultural narratives and policy impact.
  • Key readings: Tegmark’s Life 3.0, Kurzweil’s Singularity Is Near, and Zhang & Dafoe’s 2019–2020 surveys.
AI for Social Equality
University of Amsterdam
Abstract
Mainstream AI ethics, with its reliance on top-down, principle-driven frameworks, fails to account for the situated realities of diverse communities affected by AI (Artificial Intelligence). Critics have argued that AI ethics frequently serves corporate interests through practices of 'ethics washing', operating more as a tool for public relations than as a means of preventing harm or advancing the common good. As a result, growing scepticism among critical scholars has cast the field as complicit in sustaining harmful systems rather than challenging or transforming them. In response, this paper adopts a Science and Technology Studies (STS) perspective to critically interrogate the field of AI ethics. It hence applies the same analytic tools STS has long directed at disciplines such as biology, medicine, and statistics to ethics. This perspective reveals a core tension between vertical (top-down, principle-based) and horizontal (risk-mitigating, implementation-oriented) approaches to ethics. By tracing how these models have shaped the discourse, we show how both fall short in addressing the complexities of AI as a socio-technical assemblage, embedded in practice and entangled with power. To move beyond these limitations, we propose a threefold reorientation of AI ethics. First, we call for a shift in foundations: from top-down abstraction to empirical grounding. Second, we advocate for pluralisation: moving beyond Western-centric frameworks toward a multiplicity of onto-epistemic perspectives. Finally, we outline strategies for reconfiguring AI ethics as a transformative force, moving from narrow paradigms of risk mitigation toward co-creating technologies of hope.
AI on Transportation
Argonne National Lab, Rut
Abstract
Coupled AI-simulation workflows are becoming major workloads for HPC facilities, and their increasing complexity necessitates new tools for performance analysis and prototyping of new in-situ workflows. We present SimAI-Bench, a tool designed to both prototype and evaluate these coupled workflows. In this paper, we use SimAI-Bench to benchmark the data transport performance of two common patterns on the Aurora supercomputer: a one-to-one workflow with co-located simulation and AI training instances, and a many-to-one workflow where a single AI model is trained from an ensemble of simulations. For the one-to-one pattern, our analysis shows that node-local and DragonHPC data staging strategies provide excellent performance compared to Redis and the Lustre file system. For the many-to-one pattern, we find that data transport becomes a dominant bottleneck as the ensemble size grows, and our evaluation reveals that the file system is the optimal solution among the tested strategies.
AI Insights
  • SimAI‑Bench is engineered for portability, scalability, and tunability, enabling faithful mini‑app replicas of complex scientific workflows.
  • The framework supports rapid prototyping in climate modeling, materials science, and quantum computing.
  • Users can extend mini‑apps, tailoring compute, I/O, and communication patterns to their needs.
  • The paper cites the HPC Challenge benchmark suite and ADIOS 2 for performance evaluation and data staging.
  • DragonHPC is highlighted as a high‑performance data staging solution, yet integration details are omitted.
  • A limitation is the lack of discussion on challenges or scalability bottlenecks when deploying SimAI‑Bench.
  • Definitions state that “workflow mini‑apps” are small programs mimicking real‑world scientific workflows, while “HPC” denotes systems for rapid, complex computations.
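A toy timing harness in the spirit of the data-transport comparison is sketched below; it is not SimAI-Bench or DragonHPC, just a file-based producer/consumer round trip whose staging directory you would point at node-local storage or a parallel file system mount. The paths, array size, and step count are arbitrary assumptions.

```python
# Toy staging benchmark: time a simulation->training handoff through a directory.
import numpy as np, os, time, tempfile

def stage_roundtrip(staging_dir, shape=(1024, 1024), steps=10):
    """Write a simulation 'snapshot' and read it back as a training consumer would;
    returns the average seconds per write+read step."""
    path = os.path.join(staging_dir, "snapshot.npy")
    data = np.random.rand(*shape).astype(np.float32)
    t0 = time.perf_counter()
    for _ in range(steps):
        np.save(path, data)        # producer (simulation) writes
        _ = np.load(path)          # consumer (AI training) reads
    os.remove(path)
    return (time.perf_counter() - t0) / steps

# Compare a node-local directory against a shared file system path
# (replace the second path with an actual Lustre mount on your system).
for label, d in [("node-local", tempfile.gettempdir()), ("shared FS", os.getcwd())]:
    print(f"{label:10s} {stage_roundtrip(d) * 1e3:8.2f} ms per step")
```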
AI on Healthcare
Abstract
This paper introduces Agentic-AI Healthcare, a privacy-aware, multilingual, and explainable research prototype developed as a single-investigator project. The system leverages the emerging Model Context Protocol (MCP) to orchestrate multiple intelligent agents for patient interaction, including symptom checking, medication suggestions, and appointment scheduling. The platform integrates a dedicated Privacy and Compliance Layer that applies role-based access control (RBAC), AES-GCM field-level encryption, and tamper-evident audit logging, aligning with major healthcare data protection standards such as HIPAA (US), PIPEDA (Canada), and PHIPA (Ontario). Example use cases demonstrate multilingual patient-doctor interaction (English, French, Arabic) and transparent diagnostic reasoning powered by large language models. As an applied AI contribution, this work highlights the feasibility of combining agentic orchestration, multilingual accessibility, and compliance-aware architecture in healthcare applications. This platform is presented as a research prototype and is not a certified medical device.
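As a hedged illustration of the field-level encryption the Privacy and Compliance Layer describes, the sketch below uses AES-GCM from the `cryptography` package; the field names, associated data, and in-memory key are hypothetical, and a real deployment would fetch keys from a key-management service and enforce RBAC before allowing decryption.

```python
# Sketch of AES-GCM field-level encryption for a patient record (illustrative schema).
import os, json
from cryptography.hazmat.primitives.ciphers.aead import AESGCM

key = AESGCM.generate_key(bit_length=256)   # in practice, retrieved from a KMS
aesgcm = AESGCM(key)

def encrypt_field(value: str, aad: bytes = b"patient-record") -> dict:
    nonce = os.urandom(12)                   # unique nonce per encryption
    ct = aesgcm.encrypt(nonce, value.encode(), aad)
    return {"nonce": nonce.hex(), "ciphertext": ct.hex()}

def decrypt_field(blob: dict, aad: bytes = b"patient-record") -> str:
    return aesgcm.decrypt(bytes.fromhex(blob["nonce"]),
                          bytes.fromhex(blob["ciphertext"]), aad).decode()

record = {"patient_id": "A-1042",            # hypothetical record
          "symptoms": encrypt_field("chest pain, shortness of breath")}
print(json.dumps(record, indent=2))
print(decrypt_field(record["symptoms"]))
```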
Mayo Clinic
Abstract
Shared decision-making (SDM) is necessary to achieve patient-centred care. Currently, no methodology exists to automatically measure SDM at scale. This study aimed to develop an automated approach to measure SDM by using language modelling and the conversational alignment (CA) score. A total of 157 video-recorded patient-doctor conversations from a randomized multi-centre trial evaluating SDM decision aids for anticoagulation in atrial fibrillation were transcribed and segmented into 42,559 sentences. Context-response pairs and negative sampling were employed to train deep learning (DL) models and to fine-tune BERT models via the next sentence prediction (NSP) task. Each top-performing model was used to calculate four types of CA scores. A random-effects analysis by clinician, adjusting for age, sex, race, and trial arm, assessed the association between CA scores and SDM outcomes: the Decisional Conflict Scale (DCS) and the Observing Patient Involvement in Decision-Making 12 (OPTION12) scores. p-values were corrected for multiple comparisons with the Benjamini-Hochberg method. Among the 157 patients (34% female; mean age 70, SD 10.8), clinicians on average spoke more words than patients (1,911 vs 773). The DL model without the stylebook strategy achieved a recall@1 of 0.227, while the fine-tuned BERT-base (110M) achieved the highest recall@1 of 0.640. The AbsMax (18.36, SE 7.74, p=0.025) and Max CA (21.02, SE 7.63, p=0.012) scores generated with the DL model without the stylebook were associated with OPTION12. The Max CA score generated with the fine-tuned BERT-base (110M) was associated with the DCS score (-27.61, SE 12.63, p=0.037). BERT model size did not have an impact on the association between CA scores and SDM. This study introduces an automated, scalable methodology to measure SDM in patient-doctor conversations through explainable CA scores, with the potential to evaluate SDM strategies at scale.
AI Insights
  • 157 video‑recorded encounters produced 42,559 utterances for analysis.
  • Fine‑tuned BERT‑base (110M) achieved recall@1 = 0.640, beating a DL model without stylebook (0.227).
  • AbsMax and Max CA from the DL model correlated with OPTION12 (p < 0.03).
  • BERT‑base Max CA inversely predicted DCS (β = ‑27.61, p = 0.037), linking alignment to lower conflict.
  • Random‑effects analysis adjusted for age, sex, race, trial arm, with Benjamini‑Hochberg correction.
  • Negative sampling and context‑response pairs fed a next‑sentence‑prediction DL pipeline, yielding explainable CA.
  • Model size did not alter CA‑SDM links, showing the framework scales for automated SDM monitoring.
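A minimal sketch of an NSP-based alignment score is below, using the stock `bert-base-uncased` checkpoint from Hugging Face Transformers rather than the paper's fine-tuned clinical model, so the numbers it produces are illustrative only; the example utterances are invented.

```python
# Sketch of a conversational-alignment-style score via BERT's NSP head.
import torch
from transformers import BertTokenizer, BertForNextSentencePrediction

tok = BertTokenizer.from_pretrained("bert-base-uncased")
model = BertForNextSentencePrediction.from_pretrained("bert-base-uncased").eval()

def nsp_prob(context: str, response: str) -> float:
    """Probability that `response` is a coherent continuation of `context`."""
    inputs = tok(context, response, return_tensors="pt", truncation=True)
    with torch.no_grad():
        logits = model(**inputs).logits          # index 0 = "is next sentence"
    return torch.softmax(logits, dim=-1)[0, 0].item()

# Max-style CA score for one patient response against preceding clinician turns
# (hypothetical utterances).
context_turns = ["Do you have any concerns about taking a blood thinner?",
                 "The main trade-off is bleeding risk versus stroke prevention."]
response = "I worry about bleeding, but I'd rather lower my stroke risk."
print(max(nsp_prob(c, response) for c in context_turns))
```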
AI for Social Fairness
Abstract
As machine learning (ML) algorithms are increasingly used in social domains to make predictions about humans, there is a growing concern that these algorithms may exhibit biases against certain social groups. Numerous notions of fairness have been proposed in the literature to measure the unfairness of ML. Among them, one class that receives the most attention is \textit{parity-based}, i.e., achieving fairness by equalizing treatment or outcomes for different social groups. However, achieving parity-based fairness often comes at the cost of lowering model accuracy and is undesirable for many high-stakes domains like healthcare. To avoid inferior accuracy, a line of research focuses on \textit{preference-based} fairness, under which any group of individuals would experience the highest accuracy and collectively prefer the ML outcomes assigned to them if they were given the choice between various sets of outcomes. However, these works assume individual demographic information is known and fully accessible during training. In this paper, we relax this requirement and propose a novel \textit{demographic-agnostic fairness without harm (DAFH)} optimization algorithm, which jointly learns a group classifier that partitions the population into multiple groups and a set of decoupled classifiers associated with these groups. Theoretically, we conduct sample complexity analysis and show that our method can outperform the baselines when demographic information is known and used to train decoupled classifiers. Experiments on both synthetic and real data validate the proposed method.
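Purely as a structural sketch (the paper's DAFH algorithm jointly optimizes the grouping and the per-group classifiers, which the code below does not do), here is the decoupled-classifier layout with an unsupervised stand-in for the demographic-agnostic group classifier; the dataset, number of groups, and model choices are assumptions.

```python
# Structural sketch: demographic-agnostic grouping + decoupled per-group classifiers.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.linear_model import LogisticRegression
from sklearn.datasets import make_classification

X, y = make_classification(n_samples=2000, n_features=10, random_state=0)

# Stand-in group classifier: partitions the population without demographic labels.
grouper = KMeans(n_clusters=3, n_init=10, random_state=0).fit(X)
groups = grouper.labels_

# One decoupled classifier per inferred group.
decoupled = {}
for g in np.unique(groups):
    idx = groups == g
    decoupled[g] = LogisticRegression(max_iter=1000).fit(X[idx], y[idx])

def predict(x_row):
    g = int(grouper.predict(x_row.reshape(1, -1))[0])   # route to the group's classifier
    return int(decoupled[g].predict(x_row.reshape(1, -1))[0])

print(predict(X[0]))
```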
University of Glasgow and
Abstract
Assessing fairness in artificial intelligence (AI) typically involves AI experts who select protected features, fairness metrics, and set fairness thresholds. However, little is known about how stakeholders, particularly those affected by AI outcomes but lacking AI expertise, assess fairness. To address this gap, we conducted a qualitative study with 30 stakeholders without AI expertise, representing potential decision subjects in a credit rating scenario, to examine how they assess fairness when placed in the role of deciding on features with priority, metrics, and thresholds. We reveal that stakeholders' fairness decisions are more complex than typical AI expert practices: they considered features far beyond legally protected features, tailored metrics for specific contexts, set diverse yet stricter fairness thresholds, and even preferred designing customized fairness. Our results extend the understanding of how stakeholders can meaningfully contribute to AI fairness governance and mitigation, underscoring the importance of incorporating stakeholders' nuanced fairness judgments.
AI Insights
  • Stakeholders added non‑protected features like income stability to the fairness assessment.
  • They created domain‑specific metrics, e.g., “affordability‑adjusted accuracy,” beyond standard indices.
  • Thresholds set were stricter than typical industry norms, indicating a lower tolerance for unfair outcomes.
  • Some advocated “fairness through unawareness,” dropping features such as Telephone or Foreign Worker.
  • The study’s narrow focus on outcome fairness left process and distributional dimensions unexplored.
  • Key readings: Hall et al.’s 2011 survey and Barocas et al.’s 2019 book on algorithmic fairness.
  • Recommended courses: Andrew Ng’s AI Fairness on Coursera and Microsoft Learn’s Fairness in AI.
AI on Air
KT
Abstract
KT developed a Responsible AI (RAI) assessment methodology and risk mitigation technologies to ensure the safety and reliability of AI services. By analyzing the implementation of the Basic Act on AI and global AI governance trends, we established a unique approach to regulatory compliance that systematically identifies and manages all potential risk factors from AI development to operation. We present a reliable assessment methodology that systematically verifies model safety and robustness based on KT's AI risk taxonomy, tailored to the domestic environment. We also provide practical tools for managing and mitigating identified AI risks. With the release of this report, we also release our proprietary guardrail, SafetyGuard, which blocks harmful responses from AI models in real time, supporting the enhancement of safety in the domestic AI development ecosystem. We believe these research outcomes provide valuable insights for organizations seeking to develop Responsible AI.
AI Insights
  • The risk taxonomy categorizes threats into data, model, deployment, and societal dimensions, each with measurable indicators.
  • A multi‑stage assessment pipeline integrates static code analysis, adversarial testing, and human‑in‑the‑loop audits to quantify robustness.
  • SafetyGuard employs a lightweight transformer‑based policy network that intercepts outputs in real time, achieving <5 ms latency on edge devices.
  • Compliance mapping aligns each risk factor with specific clauses of the Basic Act on AI, enabling automated audit reports.
  • Pilot deployments in Korean telecom and finance sectors demonstrated a 30 % reduction in policy‑violating incidents after Guardrail integration.
  • The report proposes a future research agenda on explainable mitigation strategies and cross‑border data‑sharing protocols.
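A toy version of the interception pattern is sketched below; the actual SafetyGuard policy network is proprietary, so the generator, harm classifier, threshold, and refusal message here are all placeholders you would swap for real components.

```python
# Toy guardrail wrapper: generate a response, then block it if flagged as harmful.
from typing import Callable

def guarded_generate(prompt: str,
                     generate: Callable[[str], str],
                     harm_score: Callable[[str], float],
                     threshold: float = 0.5,
                     refusal: str = "I can't help with that request.") -> str:
    """Run the model, score its output with a moderation classifier, and
    intercept the response in real time if the score exceeds the threshold."""
    response = generate(prompt)
    if harm_score(response) >= threshold:
        return refusal
    return response

# Usage with stub components (illustrative only).
print(guarded_generate(
    "How do I reset my router?",
    generate=lambda p: f"Answer to: {p}",
    harm_score=lambda text: 0.0,
))
```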
AI Energy Consumption
Hellenic Open University
Abstract
The deployment of machine learning (ML) models on microcontrollers (MCUs) is constrained by strict energy, latency, and memory requirements, particularly in battery-operated and real-time edge devices. While software-level optimizations such as quantization and pruning reduce model size and computation, hardware acceleration has emerged as a decisive enabler for efficient embedded inference. This paper evaluates the impact of Neural Processing Units (NPUs) on MCU-based ML execution, using the ARM Cortex-M55 core combined with the Ethos-U55 NPU on the Alif Semiconductor Ensemble E7 development board as a representative platform. A rigorous measurement methodology was employed, incorporating per-inference net energy accounting via GPIO-triggered high-resolution digital multimeter synchronization and idle-state subtraction, ensuring accurate attribution of energy costs. Experimental results across six representative ML models (MiniResNet, MobileNetV2, FD-MobileNet, MNIST, TinyYolo, and SSD-MobileNet) demonstrate substantial efficiency gains when inference is offloaded to the NPU. For moderate to large networks, latency improvements ranged from 7x to over 125x, with per-inference net energy reductions of up to 143x. Notably, the NPU enabled execution of models unsupported on CPU-only paths, such as SSD-MobileNet, highlighting its functional as well as efficiency advantages. These findings establish NPUs as a cornerstone of energy-aware embedded AI, enabling real-time, power-constrained ML inference at the MCU level.
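The idle-state-subtraction accounting can be shown with a short worked sketch: integrate the measured power over the GPIO-marked window, subtract the idle baseline for the same duration, and divide by the number of inferences. The sample rate, idle power, and synthetic trace below are made-up values, not measurements from the E7 board.

```python
# Worked sketch of per-inference net-energy accounting (illustrative values only).
import numpy as np

def net_energy_per_inference_mj(power_w, sample_rate_hz, idle_power_w, n_inferences):
    """Net energy (mJ) per inference: energy in the GPIO-marked window minus
    the energy the board would have drawn while idle over the same interval."""
    dt = 1.0 / sample_rate_hz
    gross_j = float(np.sum(power_w)) * dt          # integrate the power trace
    idle_j = idle_power_w * len(power_w) * dt      # idle baseline over the window
    return 1e3 * (gross_j - idle_j) / n_inferences

# Hypothetical trace: 2,000 samples at 10 kHz (a 0.2 s window) covering
# 50 back-to-back inferences on top of a 45 mW idle floor.
rng = np.random.default_rng(0)
trace_w = 0.045 + 0.030 * rng.random(2000)
print(f"{net_energy_per_inference_mj(trace_w, 10_000, 0.045, 50):.3f} mJ per inference")
```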
AI on Education
Abstract
Artificial intelligence (AI) is reshaping higher education, yet current debates often feel tangled, mixing concerns about pedagogy, operations, curriculum, and the future of work without a shared framework. This paper offers a first attempt at a taxonomy to organize the diverse narratives of AI education and to inform discipline-based curricular discussions. We place these narratives within the enduring responsibility of higher education: the mission of knowledge. This mission includes not only the preservation and advancement of disciplinary expertise, but also the cultivation of skills and wisdom, i.e., forms of meta-knowledge that encompass judgment, ethics, and social responsibility. For the purpose of this paper's discussion, AI is defined as adaptive, data-driven systems that automate analysis, modeling, and decision-making, highlighting its dual role as enabler and disruptor across disciplines. We argue that the most consequential challenges lie at the level of curriculum and disciplinary purpose, where AI accelerates inquiry but also unsettles expertise and identity. We show how disciplines evolve through the interplay of research, curriculum, pedagogy, and faculty expertise, and why curricular reform is the central lever for meaningful change. Pedagogical innovation offers a strategic and accessible entry point, providing actionable steps that help faculty and students build the expertise needed to engage in deeper curricular rethinking and disciplinary renewal. Within this framing, we suggest that meaningful reform can move forward through structured faculty journeys: from AI literacy to pedagogy, curriculum design, and research integration. The key is to align these journeys with the mission of knowledge, turning the disruptive pressures of AI into opportunities for disciplines to sustain expertise, advance inquiry, and serve society.
Abstract
Many institutions are currently grappling with teaching artificial intelligence (AI) in the face of growing demand and relevance in our world. The Computing Research Association (CRA) has conducted 32 moderated virtual roundtable discussions of 202 experts committed to improving AI education. These discussions slot into four focus areas: AI Knowledge Areas and Pedagogy, Infrastructure Challenges in AI Education, Strategies to Increase Capacity in AI Education, and AI Education for All. Roundtables were organized around institution type to consider the particular goals and resources of different AI education environments. We identified the following high-level community needs to increase capacity in AI education. A significant digital divide creates major infrastructure hurdles, especially for smaller and under-resourced institutions. These challenges manifest as a shortage of faculty with AI expertise, who also face limited time for reskilling; a lack of computational infrastructure for students and faculty to develop and test AI models; and insufficient institutional technical support. Compounding these issues is the large burden associated with updating curricula and creating new programs. To address the faculty gap, accessible and continuous professional development is crucial for faculty to learn about AI and its ethical dimensions. This support is particularly needed for under-resourced institutions and must extend to faculty both within and outside of computing programs to ensure all students have access to AI education. We have compiled and organized a list of resources that our participant experts mentioned throughout this study. These resources contribute to a frequent request heard during the roundtables: a central repository of AI education resources for institutions to freely use across higher education.

Interests not found

We did not find any papers that match the interests below. Try other terms, and also consider whether such content exists on arxiv.org.
  • AI on Labor Market
  • AI for Social Justice
  • AI Impacts on Society
  • AI Water Consumption
  • AI on Water
You can edit or add more interests any time.
