Papers from 6 to 10 October 2025

Here are your personalized paper recommendations, sorted by relevance.
LLMs for Compliance
Chalmers University of Gothenburg
Abstract
The risks associated with adopting large language model (LLM) chatbots in software organizations highlight the need for clear policies. We examine how 11 companies create these policies and the factors that influence them, aiming to help managers safely integrate chatbots into development workflows.
AI Insights
  • Team communication, knowledge sharing, and governance emerge as the three pillars shaping LLM policies.
  • Adopting LLMs spawns new roles—AI ethics officers, prompt engineers, and LLM auditors—within software teams.
  • Policy frameworks must evolve with legal shifts; court rulings on AI liability could mandate periodic updates.
  • Documenting LLM policies turns abstract guidelines into actionable playbooks, preventing decision paralysis.
  • Recommended reading: “Rethinking Software Engineering in the Era of Foundation Models” for a deep dive into trustworthy FMware.
  • Definition: LLM (Large Language Model) is a transformer-based neural network trained on billions of tokens.
Pennsylvania State University
Abstract
In this paper, we report our experience with several LLMs, assessing their ability to understand a process model in an interactive, conversational style, find syntactical and logical errors in it, and reason about it in depth through a natural language (NL) interface. Our findings show that a vanilla, untrained LLM like ChatGPT (model o3) in a zero-shot setting is effective at understanding BPMN process models from images and answering queries about them intelligently at the syntactic, logical, and semantic levels of depth. Different LLMs, however, vary in accuracy and effectiveness. Nevertheless, our empirical analysis shows that LLMs can play a valuable role as assistants for business process designers and users. We also study the LLMs' "thought process" and ability to perform deeper reasoning in the context of process analysis and optimization, and we find that the LLMs seem to exhibit anthropomorphic properties.
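To make the setup concrete, here is a minimal sketch of the kind of interaction the paper describes: sending a BPMN diagram image to a multimodal LLM and querying it at increasing depth. It assumes the OpenAI Python SDK; the diagram file, questions, and exact model configuration are illustrative placeholders, not the authors' protocol.

```python
# Sketch: query a multimodal LLM about a BPMN diagram at syntactic,
# logical, and semantic depth. File path and questions are hypothetical.
import base64
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

with open("order_process.bpmn.png", "rb") as f:  # hypothetical diagram
    image_b64 = base64.b64encode(f.read()).decode("utf-8")

questions = [
    "List every task, gateway, and event in this BPMN model.",            # syntactic
    "Are any gateways missing a merge, or any execution paths deadlocked?",  # logical
    "Could the approval step be parallelized without changing outcomes?",    # semantic
]

for q in questions:
    response = client.chat.completions.create(
        model="o3",  # the paper reports results for ChatGPT model o3
        messages=[{
            "role": "user",
            "content": [
                {"type": "text", "text": q},
                {"type": "image_url",
                 "image_url": {"url": f"data:image/png;base64,{image_b64}"}},
            ],
        }],
    )
    print(q, "->", response.choices[0].message.content)
```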
AI Governance
Université de Montréal
Abstract
Artificial intelligence systems increasingly mediate knowledge, communication, and decision making. Development and governance remain concentrated within a small set of firms and states, raising concerns that technologies may encode narrow interests and limit public agency. Capability benchmarks for language, vision, and coding are common, yet public, auditable measures of pluralistic governance are rare. We define AI pluralism as the degree to which affected stakeholders can shape objectives, data practices, safeguards, and deployment. We present the AI Pluralism Index (AIPI), a transparent, evidence-based instrument that evaluates producers and system families across four pillars: participatory governance, inclusivity and diversity, transparency, and accountability. AIPI codes verifiable practices from public artifacts and independent evaluations, explicitly handling "Unknown" evidence to report both lower-bound ("evidence") and known-only scores with coverage. We formalize the measurement model; implement a reproducible pipeline that integrates structured web and repository analysis, external assessments, and expert interviews; and assess reliability with inter-rater agreement, coverage reporting, cross-index correlations, and sensitivity analysis. The protocol, codebook, scoring scripts, and evidence graph are maintained openly with versioned releases and a public adjudication process. We report pilot provider results and situate AIPI relative to adjacent transparency, safety, and governance frameworks. The index aims to steer incentives toward pluralistic practice and to equip policymakers, procurers, and the public with comparable evidence.
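The dual-score idea with explicit "Unknown" handling can be illustrated with a small sketch. The indicator names, binary codes, and equal weighting below are invented for illustration and are not AIPI's actual codebook or measurement model.

```python
# Sketch: a lower-bound "evidence" score that treats Unknown as absent,
# a "known-only" score over adjudicated indicators, and coverage.
indicators = {  # 1 = verified practice, 0 = verified absence, None = Unknown
    "participatory_governance": 1,
    "inclusivity_and_diversity": None,
    "transparency": 1,
    "accountability": 0,
}

known = {k: v for k, v in indicators.items() if v is not None}

evidence_score = sum(known.values()) / len(indicators)  # Unknown counts against
known_only_score = sum(known.values()) / len(known) if known else 0.0
coverage = len(known) / len(indicators)  # share of indicators with evidence

print(f"evidence (lower bound): {evidence_score:.2f}")    # 0.50
print(f"known-only:             {known_only_score:.2f}")  # 0.67
print(f"coverage:               {coverage:.2f}")          # 0.75
```

Reporting both scores alongside coverage keeps a provider from benefiting when evidence is simply missing, which is the incentive the abstract describes.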
AI Insights
  • Imagine model cards closing the AI accountability gap by transparently reporting model behavior.
  • OECD AI Recommendation pushes for human‑centered, explainable, and fair AI.
  • UNESCO Ethics Recommendation embeds human values to turn AI into societal good.
  • HELM from Stanford’s CRFM holistically benchmarks language models on safety and impact.
  • NIST AI RMF offers a risk‑management cycle for responsible AI governance.
  • WCAG 2.2 ensures AI interfaces are accessible to users with disabilities.
  • Krippendorff’s content‑analysis method quantifies stakeholder participation in AI governance.
Abstract
This is a skeptical overview of the literature on AI consciousness. We will soon create AI systems that are conscious according to some influential, mainstream theories of consciousness but are not conscious according to other influential, mainstream theories of consciousness. We will not be in a position to know which theories are correct and whether we are surrounded by AI systems as richly and meaningfully conscious as human beings or instead only by systems as experientially blank as toasters. None of the standard arguments either for or against AI consciousness takes us far.
Table of Contents
  • Chapter One: Hills and Fog
  • Chapter Two: What Is Consciousness? What Is AI?
  • Chapter Three: Ten Possibly Essential Features of Consciousness
  • Chapter Four: Against Introspective and Conceptual Arguments for Essential Features
  • Chapter Five: Materialism and Functionalism
  • Chapter Six: The Turing Test and the Chinese Room
  • Chapter Seven: The Mimicry Argument Against AI Consciousness
  • Chapter Eight: Global Workspace Theories and Higher Order Theories
  • Chapter Nine: Integrated Information, Local Recurrence, Associative Learning, and Iterative Natural Kinds
  • Chapter Ten: Does Biological Substrate Matter?
  • Chapter Eleven: The Problem of Strange Intelligence
  • Chapter Twelve: The Leapfrog Hypothesis and the Social Semi-Solution
Chat Designers
Abstract
Artificial Intelligence, and especially Large Language Models (LLMs) such as ChatGPT, has revolutionized the way educators work. The results we get from LLMs depend on how we ask them for help; the process and technique behind crafting an effective input are called prompt engineering. The aim of this study is to investigate whether science educators in secondary education improve their attitude toward ChatGPT as a learning assistant after appropriate training in prompt engineering. The results of the pilot study presented in this paper show an improvement in these teachers' perceptions.
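For readers unfamiliar with the term, a toy contrast between a naive prompt and an engineered one might look like the sketch below. The template elements (role, audience, length constraint, output format) are generic prompt-engineering practice, not the study's specific training material.

```python
# Sketch: the same request, unstructured vs. engineered.
naive_prompt = "Explain photosynthesis."

engineered_prompt = """You are a secondary-school science teacher.
Explain photosynthesis to 14-year-olds in under 150 words.
Use one everyday analogy, then list the inputs and outputs as bullet points.
End with two short comprehension questions."""
```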
Tianjin University, Loughborough University
Abstract
Robotic performance emerges from the coupling of body and controller, yet it remains unclear when morphology-control co-design is necessary. We present a unified framework that embeds morphology and control parameters within a single neural network, enabling end-to-end joint optimization. Through case studies in static-obstacle-constrained reaching, we evaluate trajectory error, success rate, and collision probability. The results show that co-design provides clear benefits when morphology is poorly matched to the task, such as near obstacles or workspace boundaries, where structural adaptation simplifies control. Conversely, when the baseline morphology already affords sufficient capability, control-only optimization often matches or exceeds co-design. By clarifying when control is enough and when it is not, this work advances the understanding of embodied intelligence and offers practical guidance for embodiment-aware robot design.
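A minimal sketch of the core idea, assuming PyTorch and a planar two-link reacher: morphology parameters (here, link lengths) sit in the same computation graph as the control policy, so a single optimizer updates body and controller together. The task, dimensions, and loss are illustrative assumptions, not the paper's exact setup.

```python
# Sketch: end-to-end morphology-control co-design for a 2-link reacher.
import torch
import torch.nn as nn

class CoDesignReacher(nn.Module):
    def __init__(self):
        super().__init__()
        # Morphology: two learnable link lengths, optimized end to end.
        self.link_lengths = nn.Parameter(torch.tensor([1.0, 1.0]))
        # Control: maps a 2-D target position to two joint angles.
        self.policy = nn.Sequential(nn.Linear(2, 64), nn.Tanh(), nn.Linear(64, 2))

    def forward(self, target_xy):
        theta = self.policy(target_xy)  # joint angles, shape (batch, 2)
        l1, l2 = self.link_lengths
        # Planar forward kinematics for the end-effector position.
        x = l1 * torch.cos(theta[:, 0]) + l2 * torch.cos(theta[:, 0] + theta[:, 1])
        y = l1 * torch.sin(theta[:, 0]) + l2 * torch.sin(theta[:, 0] + theta[:, 1])
        return torch.stack([x, y], dim=1)

model = CoDesignReacher()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)  # body + controller jointly

targets = torch.rand(256, 2) * 2 - 1  # random reach targets in [-1, 1]^2
for _ in range(1000):
    opt.zero_grad()
    loss = ((model(targets) - targets) ** 2).mean()  # trajectory-error proxy
    loss.backward()
    opt.step()
```

Freezing `link_lengths` recovers the control-only baseline, which is the comparison the case studies use to decide when co-design is actually necessary.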
AI Insights
  • Co‑design only outperforms control‑only when geometry is the bottleneck—near obstacles or workspace limits.
  • The fixed neural‑network architecture limits adaptability, making the framework less suitable for highly varied tasks.
  • Training co‑design models is computationally heavy, requiring large datasets and longer convergence than pure control tuning.
  • When the baseline morphology already suits the task, control‑only optimization converges faster and matches co‑design accuracy.
  • Key literature: Pfeifer & Bongard’s “How the Body Shapes the Way We Think” and Schaff et al.’s Deep RL co‑design paper.
AI for Compliance
Abstract
The operationalization of ethics in the technical practices of artificial intelligence (AI) is facing significant challenges. To address the problem of ineffective implementation of AI ethics, we present our diagnosis, analysis, and interventional recommendations from the unique perspective of real-world implementation of AI ethics through explainable AI (XAI) techniques. We first describe the phenomenon (i.e., the "symptoms") of ineffective implementation of AI ethics in explainable AI using four empirical cases. From the "symptoms", we diagnose the root cause (i.e., the "disease") as the dysfunction and imbalance of power structures in the sociotechnical system of AI. The power structures are dominated by unjust and unchecked power that does not represent the benefits and interests of the public and the most impacted communities, and cannot be countervailed by ethical power. Based on this understanding of power mechanisms, we propose three interventional recommendations to tackle the root cause: 1) making power explicable and checked; 2) reframing the narratives and assumptions of AI and AI ethics to check unjust power and reflect the values and benefits of the public; and 3) uniting the efforts of ethical and scientific conduct of AI to encode ethical values as technical standards, norms, and methods, including conducting critical examinations and limitation analyses of AI technical practices. We hope that our diagnosis and interventional recommendations can be a useful input to the AI community and civil society's ongoing discussion and implementation of ethics in AI for ethical and responsible AI practice.
Abstract
As Artificial Intelligence (AI) technologies continue to advance, protecting human autonomy and promoting ethical decision-making are essential to fostering trust and accountability. Human agency (the capacity of individuals to make informed decisions) should be actively preserved and reinforced by AI systems. This paper examines strategies for designing AI systems that uphold fundamental rights, strengthen human agency, and embed effective human oversight mechanisms. It discusses key oversight models, including Human-in-Command (HIC), Human-in-the-Loop (HITL), and Human-on-the-Loop (HOTL), and proposes a risk-based framework to guide the implementation of these mechanisms. By linking the level of AI model risk to the appropriate form of human oversight, the paper underscores the critical role of human involvement in the responsible deployment of AI, balancing technological innovation with the protection of individual values and rights. In doing so, it aims to ensure that AI technologies are used responsibly, safeguarding individual autonomy while maximizing societal benefits.
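A minimal sketch of the risk-based pairing the paper proposes: higher model risk selects a more direct form of human oversight. The numeric thresholds and the exact tier-to-mechanism pairing below are illustrative assumptions, not the paper's prescription.

```python
# Sketch: map a normalized model-risk score to an oversight mechanism.
from enum import Enum

class Oversight(Enum):
    HOTL = "Human-on-the-Loop"  # humans monitor and can intervene
    HITL = "Human-in-the-Loop"  # humans approve individual decisions
    HIC = "Human-in-Command"    # humans retain full decision authority

def oversight_for(risk_score: float) -> Oversight:
    """Map a model risk score in [0, 1] to an oversight model (illustrative)."""
    if risk_score < 0.3:
        return Oversight.HOTL
    if risk_score < 0.7:
        return Oversight.HITL
    return Oversight.HIC

print(oversight_for(0.85))  # Oversight.HIC for a high-risk system
```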