Hi J34Nc4Rl0+Product Management,

Your personalized paper recommendations for 03 to 07 November 2025.

Dear user, this week we added the possibility to further personalize your results by providing a description of yourself.

Log in to our website and head to the Profile tab. There you can provide any details you like, such as your profession, age, or background. These details are then taken into account by the language models to generate recommendations tailored to you.
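
For illustration only (the function below and its prompt fields are a hypothetical sketch, not the service's actual code), the profile description is essentially extra context placed into the prompt a language model receives together with your stated interests and each paper's abstract:

# Hypothetical sketch: combining a free-text profile with interests and a
# paper abstract into one recommendation prompt for a language model.
def build_recommendation_prompt(profile: str, interests: list[str], abstract: str) -> str:
    """Assemble a single prompt asking for a personalized recommendation note."""
    interest_list = ", ".join(interests) if interests else "not specified"
    return (
        "You are writing a short note explaining why a paper may interest this reader.\n"
        f"Reader profile: {profile}\n"
        f"Stated interests: {interest_list}\n"
        f"Paper abstract: {abstract}\n"
        "Write two to three sentences linking the paper to the reader's profile."
    )

prompt = build_recommendation_prompt(
    profile="Product manager with a background in AI-assisted developer tools",
    interests=["Generative AI", "Product Strategy"],
    abstract="Generative AI (GenAI) has recently emerged as a groundbreaking force in Software Engineering...",
)
print(prompt)

The richer the profile text, the more specific the "Why we think this paper is great for you" notes can become.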

🎯 Top Personalized Recommendations
ifak e.V.
Why we think this paper is great for you:
This paper directly addresses the future vision of Generative AI in software engineering, offering crucial insights for setting strategic direction and integrating AI into your product management practices. It provides a forward-looking perspective highly relevant to your interests in AI and vision setting.
Abstract
Generative AI (GenAI) has recently emerged as a groundbreaking force in Software Engineering, capable of generating code, suggesting fixes, and supporting quality assurance. While its use in coding tasks shows considerable promise, applying GenAI across the entire Software Development Life Cycle (SDLC) has not yet been fully explored. Critical uncertainties in areas such as reliability, accountability, security, and data privacy demand deeper investigation and coordinated action. The GENIUS project, comprising over 30 European industrial and academic partners, aims to address these challenges by advancing AI integration across all SDLC phases. It focuses on GenAI's potential, the development of innovative tools, and emerging research challenges, actively shaping the future of software engineering. This vision paper presents a shared perspective on the future of GenAI-based software engineering, grounded in cross-sector dialogue and experience within the GENIUS consortium, supported by an exploratory literature review. The paper explores four central elements: (1) a structured overview of current challenges in GenAI adoption across the SDLC; (2) a forward-looking vision outlining key technological and methodological advances expected over the next five years; (3) anticipated shifts in the roles and required skill sets of software professionals; and (4) the contribution of GENIUS in realizing this transformation through practical tools and industrial validation. By aligning technical innovation with business relevance, this paper aims to inform both research agendas and industrial strategies, providing a foundation for reliable, scalable, and industry-ready GenAI solutions for software engineering teams.
AI Summary
  • GenAI's current application in SE is largely confined to coding, with significant challenges in extending its utility across the entire SDLC due to limitations in context awareness, reliability, and structured output generation. [3]
  • The future of GenAI in SE envisions a shift towards increased autonomy through agentic teams and "self-*" systems capable of end-to-end software development, requiring proactive, dynamic workflows and robust data management. [3]
  • The shift towards GenAI-driven development poses a critical challenge to the competence acquisition pipeline for junior developers, as AI increasingly handles tasks traditionally performed by entry-level staff, necessitating new strategies for skill development. [3]
  • Retrieval-Augmented Generation (RAG): A method to enhance GenAI models' context awareness by retrieving relevant information from external knowledge bases to inform generation (a minimal sketch follows this list). [3]
  • Human roles in GenAI-driven SE will evolve from manual generation to critical verification, validation, and orchestration of AI efforts, demanding new competencies in prompt engineering, AI oversight, and debugging AI-generated artifacts. [2]
  • Addressing GenAI's inherent risks (security vulnerabilities, data privacy, biases, environmental impact) necessitates greater transparency in training data, improved benchmarks, and the embedding of sustainability as a core functional requirement throughout the SDLC. [2]
  • Effective integration of GenAI requires legislative frameworks that keep pace with technological advancements, particularly concerning accountability, liability, and the scoped access of autonomous agents in high-risk domains. [2]
  • Hallucinations: Confident but incorrect or unverifiable outputs from LLMs, often due to training on inconsistent or outdated data. [2]
  • Grammar-Constrained Decoding: Guiding an LLM's output using predefined grammatical rules (e.g., Context-Free Grammar) to ensure syntactically correct structured outputs. [2]
  • The evolution of programming languages is anticipated, moving towards higher-level, natural language-centric descriptions of systems and visual coding, abstracting away architectural design decisions to autonomous AI. [1]
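
The RAG bullet above boils down to a retrieve-then-generate loop. The sketch below is a generic, minimal illustration of that idea; the bag-of-words similarity and the call_language_model stub are placeholders, not anything taken from the paper.

# Minimal retrieval-augmented generation (RAG) sketch: fetch the snippets most
# relevant to a query from a small knowledge base, then prepend them to the
# prompt handed to a language model.
import math
from collections import Counter

def cosine_similarity(a: str, b: str) -> float:
    """Bag-of-words cosine similarity between two texts."""
    va, vb = Counter(a.lower().split()), Counter(b.lower().split())
    dot = sum(va[t] * vb[t] for t in va)
    norm = math.sqrt(sum(v * v for v in va.values())) * math.sqrt(sum(v * v for v in vb.values()))
    return dot / norm if norm else 0.0

def retrieve(query: str, knowledge_base: list[str], k: int = 2) -> list[str]:
    """Return the k snippets most similar to the query."""
    return sorted(knowledge_base, key=lambda doc: cosine_similarity(query, doc), reverse=True)[:k]

def call_language_model(prompt: str) -> str:
    """Stand-in for a real LLM call."""
    return f"[model output for a prompt of {len(prompt)} characters]"

knowledge_base = [
    "Coding guideline: all public APIs must ship with unit tests.",
    "Architecture note: the billing service communicates over gRPC.",
    "Release policy: security fixes are backported to the last two versions.",
]
query = "What testing is required before releasing a new API?"
context = "\n".join(retrieve(query, knowledge_base))
print(call_language_model("Context:\n" + context + "\n\nQuestion: " + query))

In production systems the retrieval step would typically use dense embeddings and a vector index rather than word-count overlap, but the overall shape of the loop is the same.
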
Chalmers University of Technology
Why we think this paper is great for you:
This paper explores the adoption of AI in requirements engineering, providing valuable insights for integrating AI into your product management processes and understanding practitioner perspectives. It directly connects AI with a fundamental aspect of product definition and roadmap development.
Abstract
The integration of AI for Requirements Engineering (RE) presents significant benefits but also poses real challenges. Although RE is fundamental to software engineering, limited research has examined AI adoption in RE. We surveyed 55 software practitioners to map AI usage across four RE phases: Elicitation, Analysis, Specification, and Validation, and four approaches for decision making: human-only decisions, AI validation, Human AI Collaboration (HAIC), and full AI automation. Participants also shared their perceptions, challenges, and opportunities when applying AI for RE tasks. Our data show that 58.2% of respondents already use AI in RE, and 69.1% view its impact as positive or very positive. HAIC dominates practice, accounting for 54.4% of all RE techniques, while full AI automation remains minimal at 5.4%. Passive AI validation (4.4 to 6.2%) lags even further behind, indicating that practitioners value AI's active support over passive oversight. These findings suggest that AI is most effective when positioned as a collaborative partner rather than a replacement for human expertise. It also highlights the need for RE-specific HAIC frameworks along with robust and responsible AI governance as AI adoption in RE grows.
University of Toronto
Why we think this paper is great for you:
This paper on assurance case development for evolving software product lines provides a formal approach directly applicable to managing product quality and strategic evolution within your product portfolio. It aligns well with your focus on product strategy and roadmap.
Abstract
In critical software engineering, structured assurance cases (ACs) are used to demonstrate how key system properties are supported by evidence (e.g., test results, proofs). Creating rigorous ACs is particularly challenging in the context of software product lines (SPLs), i.e., sets of software products with overlapping but distinct features and behaviours. Since SPLs can encompass very large numbers of products, developing a rigorous AC for each product individually is infeasible. Moreover, if the SPL evolves, e.g., by the modification or introduction of features, it can be infeasible to assess the impact of this change. Instead, the development and maintenance of ACs ought to be lifted such that a single AC can be developed for the entire SPL simultaneously, and be analyzed for regression in a variability-aware fashion. In this article, we describe a formal approach to lifted AC development and regression analysis. We formalize a language of variability-aware ACs for SPLs and study the lifting of template-based AC development. We also define a regression analysis to determine the effects of SPL evolutions on variability-aware ACs. We describe a model-based assurance management tool which implements these techniques, and illustrate our contributions by developing an AC for a product line of medical devices.
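
One way to picture the "lifted" assurance cases described above (an illustrative reading only, not the authors' formal language) is a claim tree in which every claim carries a presence condition over the product line's features, so that a single AC can be projected onto any individual product:

# Illustrative sketch: an assurance-case claim annotated with a presence
# condition, i.e. a predicate over feature configurations stating for which
# products of the line the claim (and its evidence) applies.
from dataclasses import dataclass, field
from typing import Callable

Config = dict[str, bool]                      # feature name -> selected?
PresenceCondition = Callable[[Config], bool]  # predicate over configurations

@dataclass
class Claim:
    text: str
    presence: PresenceCondition
    evidence: list[str] = field(default_factory=list)
    children: list["Claim"] = field(default_factory=list)

    def project(self, config: Config) -> "Claim | None":
        """Derive the single-product assurance case for one configuration."""
        if not self.presence(config):
            return None
        kept = [c for c in (child.project(config) for child in self.children) if c]
        return Claim(self.text, self.presence, list(self.evidence), kept)

# A sub-claim that only applies to products with the 'wireless' feature.
root = Claim(
    text="The infusion pump delivers doses within tolerance",
    presence=lambda cfg: True,
    children=[Claim(text="Remote dose commands are authenticated",
                    presence=lambda cfg: cfg.get("wireless", False),
                    evidence=["penetration-test report"])],
)
print(len(root.project({"wireless": True}).children))   # 1 -> sub-claim retained
print(len(root.project({"wireless": False}).children))  # 0 -> sub-claim pruned

The paper's regression analysis then asks, after an evolution of the SPL, which parts of such a variability-annotated tree are affected; the sketch does not attempt that formal machinery.
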
University of South Florida
Why we think this paper is great for you:
While focused on robot vision, this paper might offer tangential insights into how advanced technological capabilities are communicated and perceived, which could be relevant to understanding technology adoption. It touches upon the broader theme of vision systems.
Abstract
Research indicates that humans can mistakenly assume that robots and humans have the same field of view (FoV), possessing an inaccurate mental model of robots. This misperception may lead to failures during human-robot collaboration tasks where robots might be asked to complete impossible tasks about out-of-view objects. The issue is more severe when robots do not have a chance to scan the scene to update their world model while focusing on assigned tasks. To help align humans' mental models of robots' vision capabilities, we propose four FoV indicators in augmented reality (AR) and conducted a user human-subjects experiment (N=41) to evaluate them in terms of accuracy, confidence, task efficiency, and workload. These indicators span a spectrum from egocentric (robot's eye and head space) to allocentric (task space). Results showed that the allocentric blocks at the task space had the highest accuracy with a delay in interpreting the robot's FoV. The egocentric indicator of deeper eye sockets, possible for physical alteration, also increased accuracy. In all indicators, participants' confidence was high while cognitive load remained low. Finally, we contribute six guidelines for practitioners to apply our AR indicators or physical alterations to align humans' mental models with robots' vision capabilities.
University of Belgrade
Why we think this paper is great for you:
This paper on eye movement analysis in driving scenarios is less directly aligned with your core interests, but it could offer general insights into human perception and interaction research methods. It provides a scientific approach to understanding user behavior.
Abstract
This study investigates eye movement behaviour during three conditions: Baseline, Ride (simulated drive under normal visibility), and Fog (simulated drive under reduced visibility). Eye tracking data are analyzed using 31 parameters, organized into three groups: (1) saccade features, (2) Bivariate Contour Ellipse Area (BCEA), and (3) blinking features. Specifically, the analysis includes 13 saccade, 13 BCEA, and 5 blinking variables. Across all feature groups, numerous statistically significant differences emerge between Baseline and the driving conditions, particularly between Baseline and Ride or Fog. Between Ride and Fog, saccade features show minimal changes (one out of 13), whereas BCEA (9 of 13) and blink features (four of 5) exhibit pronounced differences, highlighting the strong impact of reduced visibility on gaze stability and blinking behaviour. In addition to conventional measures such as Mean Squared Error (MSE) and entropy metrics, a new parameter, Guzik's Index (GI), is introduced to quantify fixation asymmetry along the major axis of the BCEA. This index utilizes eye tracking data to enhance the understanding of eye movement dynamics during driving conditions. Separately from GI, other parameters elicit the largest deviations compared to Ride (e.g., number of saccades: Cliff's $\delta$ = 0.96, BCEA: Cohen's $\textit{d}$ = 0.89, and standard deviation of blink duration: Cliff's $\delta$ = 0.80), underscoring the influence of reduced visibility on visual attention. Overall, these findings demonstrate that combining BCEA with saccade and blink parameters provides a comprehensive understanding of visual attention and gaze stability, while GI offers additional insights into fixation asymmetry under varying visibility conditions.
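
For readers unfamiliar with BCEA: under a bivariate-normal assumption it is the area of the ellipse expected to contain a chosen proportion P of the gaze samples, commonly computed as -2*ln(1-P) * pi * sigma_x * sigma_y * sqrt(1 - rho^2). The snippet below follows that standard definition only; the paper's exact choice of P and the definition of Guzik's Index are not reproduced here.

# Standard BCEA computation from raw gaze samples (bivariate-normal assumption).
# Requires Python 3.10+ for statistics.correlation.
import math
import statistics

def bcea(x: list[float], y: list[float], p: float = 0.682) -> float:
    """Area of the ellipse expected to contain proportion p of the gaze samples."""
    sx, sy = statistics.stdev(x), statistics.stdev(y)
    rho = statistics.correlation(x, y)
    chi2 = -2.0 * math.log(1.0 - p)  # chi-square quantile with 2 degrees of freedom
    return chi2 * math.pi * sx * sy * math.sqrt(1.0 - rho ** 2)

# Toy gaze trace (degrees of visual angle); a larger BCEA means less stable gaze.
x = [0.10, 0.12, 0.08, 0.15, 0.11, 0.09, 0.13]
y = [0.05, 0.07, 0.04, 0.06, 0.08, 0.05, 0.07]
print(f"BCEA = {bcea(x, y):.4f} deg^2")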

Interests not found

We did not find any papers matching the interests below. Try other terms, and consider whether such content exists on arxiv.org.
  • Product Strategy
  • Product Roadmap
You can edit or add more interests any time.