🎯 Top Personalized Recommendations
Chalmers University of Technology
Why we think this paper is great for you:
This paper directly addresses the critical task of managing data annotation requirements for AI systems. You will find its practical insights into developing reliable AI-enabled perception systems highly valuable.
Abstract
High-quality data annotation requirements are crucial for the development of safe and reliable AI-enabled perception systems (AIePS) in autonomous driving. Although these requirements play a vital role in reducing bias and enhancing performance, their formulation and management remain underexplored, leading to inconsistencies, safety risks, and regulatory concerns. Our study investigates how annotation requirements are defined and used in practice, the challenges in ensuring their quality, practitioner-recommended improvements, and their impact on AIePS development and performance. We conducted 19 semi-structured interviews with participants from six international companies and four research organisations. Our thematic analysis reveals five key challenges: ambiguity, edge case complexity, evolving requirements, inconsistencies, and resource constraints. It also reveals three main categories of best practices: ensuring compliance with ethical standards, improving data annotation requirement guidelines, and embedding quality assurance for data annotation requirements. We also uncover critical interrelationships between annotation requirements, annotation practices, annotated data quality, and AIePS performance and development, showing how requirement flaws propagate through the AIePS development pipeline. To the best of our knowledge, this study is the first to offer empirically grounded guidance on improving annotation requirements, offering actionable insights to enhance annotation quality, regulatory compliance, and system reliability. It also contributes to the emerging fields of Software Engineering for AI (SE for AI) and Requirements Engineering for AI (RE for AI) by bridging the gap between RE and AI in a timely and much-needed manner.
AI Summary
- Annotation requirement flaws propagate through the AIePS development pipeline, directly impacting data quality, model performance, and system reliability. [2]
- Edge cases are the primary challenge in data annotation, resisting standardization and exposing limitations in current requirement definitions, leading to safety risks and rework. [2]
- Iterative feedback loops, such as the Plan-Do-Check-Act cycle, are crucial for continuously improving annotation guidelines and ensuring their maturity and alignment with evolving real-world complexities. [2]
- Cross-functional collaboration involving domain experts, data scientists, legal professionals, and annotators is essential for defining technically feasible, contextually grounded, and ethically compliant annotation requirements. [2]
- Automation should support human annotators, especially for complex or safety-critical edge cases, to enhance scalability and consistency while maintaining quality through human oversight. [2]
- Embedding ethical standards, privacy protection, and safety-centric principles early in annotation requirement definition is critical for regulatory compliance, bias mitigation, and public trust. [2]
- Resource limitations (strict budgets, limited workforce, time constraints, and inadequate tools) significantly compromise annotation quality, leading to rushed work, skipped edge cases, and increased rework costs. [2]
- AI-enabled perception systems (AIePS): Systems central to automated driving, supporting object detection, tracking, and classification for enhanced safety and efficiency. [2]
- Data annotation requirements (annotation requirements): The standards, criteria, and instructions guiding annotation efforts, directly influencing the learning outcomes of AI systems. [2]
- Edge cases: Rare or ambiguous scenarios outside typical data distributions that are difficult to annotate consistently with standard guidelines. [2]
McGill University
Why we think this paper is great for you:
This paper explores the profound organizational and management implications of Large Language Models. It offers a conceptual framework for you to understand how advanced AI reshapes organizational knowledge.
Abstract
Large Language Models (LLMs) are reshaping organizational knowing by unsettling the epistemological foundations of representational and practice-based perspectives. We conceptualize LLMs as Haraway-ian monsters, that is, hybrid, boundary-crossing entities that destabilize established categories while opening new possibilities for inquiry. Focusing on analogizing as a fundamental driver of knowledge, we examine how LLMs generate connections through large-scale statistical inference. Analyzing their operation across the dimensions of surface/deep analogies and near/far domains, we highlight both their capacity to expand organizational knowing and the epistemic risks they introduce. Building on this, we identify three challenges of living with such epistemic monsters: the transformation of inquiry, the growing need for dialogical vetting, and the redistribution of agency. By foregrounding the entangled dynamics of knowing-with-LLMs, the paper extends organizational theory beyond human-centered epistemologies and invites renewed attention to how knowledge is created, validated, and acted upon in the age of intelligent technologies.
Luleå University of Technology
Why we think this paper is great for you:
This paper demonstrates the application of Industrial AI for decision support and management across various phases. You will see how AI can transform operational and strategic decision-making in this context.
Abstract
The construction industry is undergoing a transformation driven by the adoption of digital technologies that leverage Artificial Intelligence (AI). These industrial AI solutions assist in various phases of the construction process, including planning, design, production, and management. The production phase in particular offers unique potential for such AI-based solutions, which support site managers, project engineers, coordinators, and other key roles in making final decisions. To facilitate decision-making in the production phase of construction through a human-centric AI-based solution, it is important to understand the needs and challenges of the end users who interact with these systems; without this understanding, the potential usage of these AI-based solutions may be limited. Hence, the purpose of this research study is to explore, identify, and describe the key factors crucial for developing AI solutions in the construction industry, and to identify the correlations between these factors. This was done by developing a demonstrator and collecting quantifiable feedback through a questionnaire targeting end users such as site managers and construction professionals. This study offers insights into developing and improving industrial AI solutions, focusing on Human-System Interaction aspects to enhance decision support, usability, and overall AI solution adoption.
Saarland University
Why we think this paper is great for you:
This paper delves into the evolving non-technical aspects of software engineering roles. It offers valuable insights for you into defining and identifying well-rounded engineering talent and managing tech teams.
Abstract
A well-rounded software engineer is often defined by technical prowess and the ability to deliver on complex projects. However, the narrative around the ideal Software Engineering (SE) candidate is evolving, suggesting that there is more to the story. This article explores the non-technical aspects emphasized in SE job postings, revealing the sociotechnical and organizational expectations of employers. Our Thematic Analysis of 100 job postings shows that employers seek candidates who align with their sense of purpose, fit within company culture, pursue personal and career growth, and excel in interpersonal interactions. This study contributes to ongoing discussions in the SE community about the evolving role and workplace context of software engineers beyond technical skills. By highlighting these expectations, we provide relevant insights for researchers, educators, practitioners, and recruiters. Additionally, our analysis offers a valuable snapshot of SE job postings in 2023, providing a scientific record of prevailing trends and expectations.
Portland State University
Why we think this paper is great for you:
This paper focuses on managing and enforcing access control policies in database management systems. You will find its discussion on data governance and security in complex systems highly relevant.
Abstract
The proliferation of smart technologies and evolving privacy regulations such as the GDPR and CPRA has increased the need to manage fine-grained access control (FGAC) policies in database management systems (DBMSs). Existing approaches to enforcing FGAC policies do not scale to thousands of policies, leading to degraded query performance and reduced system effectiveness. We present Sieve, a middleware for relational DBMSs that combines query rewriting and caching to optimize FGAC policy enforcement. Sieve rewrites a query with guarded expressions that group and filter policies and can efficiently use indexes in the DBMS. It also integrates a caching mechanism with an effective replacement strategy and a refresh mechanism to adapt to dynamic workloads. Experiments on two DBMSs with real and synthetic datasets show that Sieve scales to large datasets and policy corpora, maintaining low query latency and system load and improving policy evaluation performance by between 2x and 10x on workloads with 200 to 1,200 policies. The caching extension further improves query performance by between 6 and 22 percent under dynamic workloads, especially with larger cache sizes. These results highlight Sieve's applicability for real-time access control in smart environments and its support for efficient, scalable management of user preferences and privacy policies.
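The guarded-expression idea described above can be illustrated with a minimal sketch: policies that share a common guard (e.g. a context attribute) are grouped so the guard is evaluated once per group rather than once per policy, and the query's WHERE clause is rewritten accordingly. This is not Sieve's actual implementation; the function, policy representation, and SQL fragments below are illustrative assumptions.

```python
# Hypothetical sketch of guarded query rewriting for FGAC policies.
# Each policy is modeled as a (guard, predicate) pair of SQL fragments;
# policies sharing a guard are merged under one guarded expression.
from collections import defaultdict

def rewrite_with_guards(base_query, policies):
    """Rewrite a query so its WHERE clause groups policy predicates
    under shared guards: (g1 AND (p1 OR p2)) OR (g2 AND (p3)) ..."""
    groups = defaultdict(list)
    for guard, predicate in policies:
        groups[guard].append(predicate)
    guarded = [
        f"({guard} AND ({' OR '.join(preds)}))"
        for guard, preds in groups.items()
    ]
    return f"{base_query} WHERE {' OR '.join(guarded)}"

# Example: three policies, two sharing the guard room = 'lab'.
policies = [
    ("room = 'lab'", "owner = 'alice'"),
    ("room = 'lab'", "capture_time < 18"),
    ("room = 'office'", "owner = 'bob'"),
]
sql = rewrite_with_guards("SELECT * FROM sensor_data", policies)
```

A DBMS can then use an index on the guard attribute (here `room`) to skip whole policy groups at once, which is the intuition behind the reported speedups on large policy corpora.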
University of Maryland
Why we think this paper is great for you:
This paper presents an innovative AI agent system for data exploration. You will appreciate its showcase of practical applications of multi-agent systems and LLMs for complex data analysis.
Abstract
Sensorium Arc (AI reflects on climate) is a real-time multimodal interactive AI agent system that personifies the ocean as a poetic speaker and guides users through immersive explorations of complex marine data. Built on a modular multi-agent system and a retrieval-augmented large language model (LLM) framework, Sensorium enables natural spoken conversations with AI agents that embody the ocean's perspective, generating responses that blend scientific insight with ecological poetics. Through keyword detection and semantic parsing, the system dynamically triggers data visualizations and audiovisual playback based on time, location, and thematic cues drawn from the dialogue. Developed in collaboration with the Center for the Study of the Force Majeure and inspired by the eco-aesthetic philosophy of Newton Harrison, Sensorium Arc reimagines ocean data not as an abstract dataset but as a living narrative. The project demonstrates the potential of conversational AI agents to mediate affective, intuitive access to high-dimensional environmental data and proposes a new paradigm for human-machine-ecosystem interaction.