Papers from 15 to 19 September, 2025
Here are the personalized paper recommendations, sorted by relevance.
AI for Product Management
Fraunhofer Institute for
Abstract
The benefits of adopting artificial intelligence (AI) in manufacturing are
undeniable. However, operationalizing AI beyond the prototype, especially when
it involves cyber-physical production systems (CPPS), remains a significant
challenge due to technical system complexity, a lack of implementation
standards, and fragmented organizational processes. To this end, this paper
proposes a new process model for the lifecycle management of AI assets designed
to address challenges in manufacturing and facilitate effective
operationalization throughout the entire AI lifecycle. The process model, as a
theoretical contribution, builds on machine learning operations (MLOps)
principles and refines three aspects to address the domain-specific
requirements from the CPPS context. As a result, the proposed process model
aims to support organizations in practice in systematically developing,
deploying, and managing AI assets across their full lifecycle while aligning
with CPPS-specific constraints and regulatory demands.
AI Insights - The AIM4M model is tool‑agnostic, enabling any ML framework to plug into CPPS workflows.
- It embeds audit‑ready traceability, so every model change is logged for regulatory compliance.
- Rollouts become predictable across heterogeneous factories thanks to a standardized deployment pipeline.
- SMEs can adopt bundled roles, while large plants can scale to fine‑grained governance without redesign.
- Future iterations will be driven by real‑world customer projects, iteratively tightening the process logic.
- The project is backed by EU “AI Matters” funding and Baden‑Württemberg state support, ensuring cross‑border collaboration.
- For deeper dives, consult “Machine Learning in Manufacturing” and the MLOps maturity model paper cited in the study.
Abstract
This study investigates the impact of artificial intelligence (AI) adoption
on job loss rates using the Global AI Content Impact Dataset (2020–2025). The
panel comprises 200 industry-country-year observations across Australia, China,
France, Japan, and the United Kingdom in ten industries. A three-stage ordinary
least squares (OLS) framework is applied. First, a full-sample regression finds
no significant linear association between AI adoption rate and job loss rate
($\beta \approx -0.0026$, $p = 0.949$). Second, industry-specific regressions
identify the marketing and retail sectors as closest to significance. Third,
interaction-term models quantify marginal effects in those two sectors,
revealing a significant retail interaction effect ($-0.138$, $p < 0.05$),
showing that higher AI adoption is linked to lower job loss in retail. These
findings extend empirical evidence on AI's labor market impact, emphasize AI's
productivity-enhancing role in retail, and support targeted policy measures
such as intelligent replenishment systems and cashierless checkout
implementations.
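To make the third stage concrete, here is a minimal sketch of an interaction-term OLS in the spirit of the one described above, written with statsmodels; the file name and column labels are hypothetical placeholders rather than the actual dataset schema.
```python
# Minimal sketch of the third stage: an OLS model with an AI-adoption x retail
# interaction term. The CSV path and column names are hypothetical placeholders.
import pandas as pd
import statsmodels.formula.api as smf

panel = pd.read_csv("global_ai_content_impact.csv")  # hypothetical file name

# Indicator for the retail industry (industry labels are assumed).
panel["retail"] = (panel["industry"] == "Retail").astype(int)

# Job loss rate regressed on AI adoption, a retail dummy, and their interaction;
# year and country fixed effects absorb common shocks.
model = smf.ols(
    "job_loss_rate ~ ai_adoption_rate * retail + C(year) + C(country)",
    data=panel,
).fit(cov_type="HC1")  # heteroskedasticity-robust standard errors

# The coefficient on the interaction term captures the marginal effect of
# AI adoption in retail relative to the other industries.
print(model.summary())
```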
Product Strategy
Emory University
Abstract
We study how a platform should design early exposure and rewards when
creators strategically choose quality before release. A short testing window
with a pass/fail bar induces a pass probability, the slope of which is the key
sufficient statistic for incentives. We derive three main results. First, a
closed-form "implementability bounty" can perfectly align creator and
platform objectives, correcting for incomplete revenue sharing. Second,
front-loading guaranteed impressions is the most effective way to strengthen
incentives for a given attention budget. Third, when impression and cash
budgets are constrained, the optimal policy follows an equal-marginal-value
rule based on the prize spread and certain exposure. We map realistic ranking
engines (e.g., Thompson sampling) into the model's parameters and provide
telemetry-based estimators. The framework is simple to operationalize and
offers a direct, managerially interpretable solution for platforms to solve the
creator cold-start problem and cultivate high-quality supply.
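As background for the mapping the authors mention, the sketch below shows a generic Beta-Bernoulli Thompson sampling ranker of the kind such platforms run; it is a plain illustration of the ranking engine, not the paper's telemetry-based estimator or its optimal policy.
```python
# Generic Beta-Bernoulli Thompson sampling over candidate items, shown only to
# illustrate the kind of ranking engine the paper maps into its model.
import random

class ThompsonRanker:
    def __init__(self, n_items):
        # One Beta(1, 1) prior per item on its engagement rate.
        self.successes = [1] * n_items
        self.failures = [1] * n_items

    def pick(self):
        # Sample a plausible engagement rate per item and show the argmax.
        samples = [random.betavariate(s, f)
                   for s, f in zip(self.successes, self.failures)]
        return max(range(len(samples)), key=samples.__getitem__)

    def update(self, item, engaged):
        # Fold the observed outcome back into that item's posterior.
        if engaged:
            self.successes[item] += 1
        else:
            self.failures[item] += 1

ranker = ThompsonRanker(n_items=5)
for _ in range(1000):
    shown = ranker.pick()
    # Simulated feedback with item-dependent engagement rates.
    ranker.update(shown, engaged=random.random() < 0.1 * (shown + 1))
print(ranker.successes, ranker.failures)
```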
Mohammed V University in
Abstract
This study introduces a mathematical framework to investigate the viability
and reachability of production systems under constraints. We develop a model
that incorporates key decision variables, such as pricing policy, quality
investment, and advertising, to analyze short-term tactical decisions and
long-term strategic outcomes. In the short term, we constructed a capture basin
that defined the initial conditions under which production viability
constraints were satisfied within the target zone. In the long term, we explore
the dynamics of product quality and market demand to achieve and sustain the
desired target. The Hamilton-Jacobi-Bellman (HJB) theory characterizes the
capture basin and viability kernel using viscosity solutions of the HJB
equation. This approach, which avoids controllability assumptions, is well
suited to viability problems with specified targets. It provides managers with
insights into maintaining production and inventory levels within viable ranges
while considering product quality and evolving market demand. We numerically
study the HJB equation to design and test computational methods that validate
the theoretical insights. Simulations offer practical tools for decision-makers
to address operational challenges while aligning with long-term
sustainability goals. This study enhances production system performance and
resilience by linking rigorous mathematics with actionable solutions.
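For readers unfamiliar with the level-set approach the abstract alludes to, the following generic statement sketches a finite-horizon capture basin for an abstract controlled system; the paper's specific dynamics, constraint set, and target are not reproduced here, so the symbols $f$, $U$, $K$, $C$, $g$, and $h$ are purely illustrative.
\[
  \dot{x}(t) = f\bigl(x(t), u(t)\bigr), \qquad u(t) \in U,
\]
\[
  \mathrm{Capt}_K^T(C) = \bigl\{\, x_0 \in K : \exists\, u(\cdot),\ \exists\, \tau \le T
  \text{ with } x(\tau) \in C \text{ and } x(s) \in K \text{ for all } s \in [0, \tau] \,\bigr\}.
\]
With level-set functions $g$ and $h$ such that $C = \{x : g(x) \le 0\}$ and $K = \{x : h(x) \le 0\}$, the value function
\[
  v(t, x) = \inf_{u(\cdot)} \; \min_{\tau \in [t, T]}
  \max\Bigl( g\bigl(x(\tau)\bigr),\ \max_{s \in [t, \tau]} h\bigl(x(s)\bigr) \Bigr)
\]
is characterized as a viscosity solution of an HJB-type variational inequality, and the capture basin is recovered as $\mathrm{Capt}_K^T(C) = \{x_0 : v(0, x_0) \le 0\}$, without any controllability assumption on the dynamics.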
Vision Setting for Tech Teams
UC San Diego, MIT, NVIDIA
Abstract
We present the Spatial Region 3D (SR-3D) aware vision-language model, which
connects single-view 2D images and multi-view 3D data through a shared visual
token space. SR-3D supports flexible region prompting, allowing users to
annotate regions with bounding boxes, segmentation masks on any frame, or
directly in 3D, without the need for exhaustive multi-frame labeling. We
achieve this by enriching 2D visual features with 3D positional embeddings,
which allows the 3D model to draw upon strong 2D priors for more accurate
spatial reasoning across frames, even when objects of interest do not co-occur
within the same view. Extensive experiments on both general 2D vision language
and specialized 3D spatial benchmarks demonstrate that SR-3D achieves
state-of-the-art performance, underscoring its effectiveness in unifying the 2D
and 3D representation spaces for scene understanding. Moreover, we observe
applicability to in-the-wild videos without 3D sensor inputs or ground-truth
3D annotations, where SR-3D accurately infers spatial relationships and metric
measurements.
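The core mechanism, enriching 2D visual tokens with 3D positional embeddings so both modalities share one token space, can be illustrated with a small PyTorch-style sketch; the shapes, module names, and sinusoidal encoding below are assumptions for illustration, not SR-3D's actual architecture.
```python
# Illustrative sketch: add projected 3D positional embeddings to 2D patch
# features so 2D and 3D inputs live in one visual token space. All dimensions
# and names are assumptions, not SR-3D's released design.
import torch
import torch.nn as nn

class Region3DTokens(nn.Module):
    def __init__(self, feat_dim=1024, pos_dim=96):
        super().__init__()
        # Project a 3D positional code into the visual feature space.
        self.pos_proj = nn.Linear(pos_dim, feat_dim)
        self.pos_dim = pos_dim

    def encode_xyz(self, xyz):
        # Simple sinusoidal encoding of per-token 3D coordinates (N, 3).
        freqs = 2.0 ** torch.arange(self.pos_dim // 6, dtype=xyz.dtype)
        angles = xyz.unsqueeze(-1) * freqs           # (N, 3, pos_dim // 6)
        enc = torch.cat([angles.sin(), angles.cos()], dim=-1)
        return enc.flatten(start_dim=1)              # (N, pos_dim)

    def forward(self, patch_feats, xyz):
        # patch_feats: (N, feat_dim) 2D features; xyz: (N, 3) 3D positions.
        return patch_feats + self.pos_proj(self.encode_xyz(xyz))

tokens = Region3DTokens()
feats = torch.randn(16, 1024)   # 2D visual tokens from some 2D backbone
coords = torch.rand(16, 3)      # back-projected 3D positions per token
print(tokens(feats, coords).shape)  # torch.Size([16, 1024])
```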
Abstract
MLLMs (Multimodal Large Language Models) have showcased remarkable
capabilities, but their performance in high-stakes, domain-specific scenarios,
such as surgical settings, remains largely under-explored. To address this gap, we
develop EyePCR, a large-scale benchmark for ophthalmic surgery
analysis, grounded in structured clinical knowledge to evaluate cognition
across Perception, Comprehension, and Reasoning.
EyePCR offers a richly annotated corpus with more than 210k VQAs, which cover
1048 fine-grained attributes for multi-view perception, a medical knowledge graph
of more than 25k triplets for comprehension, and four clinically grounded
reasoning tasks. The rich annotations facilitate in-depth cognitive analysis,
simulating how surgeons perceive visual cues and combine them with domain
knowledge to make decisions, thus greatly improving models' cognitive ability.
In particular, EyePCR-MLLM, a domain-adapted variant of Qwen2.5-VL-7B,
achieves the highest accuracy on MCQs for Perception among compared
models and outperforms open-source models in Comprehension and
Reasoning, rivalling commercial models like GPT-4.1. EyePCR reveals
the limitations of existing MLLMs in surgical cognition and lays the foundation
for benchmarking and enhancing clinical reliability of surgical video
understanding models.
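For orientation, the snippet below sketches how multiple-choice VQA items of this kind are typically scored per cognitive level; the JSONL layout, field names, and the answer_question stub are assumptions, since the abstract does not specify EyePCR's released schema or evaluation harness.
```python
# Hypothetical sketch of scoring multiple-choice VQA items per cognitive level,
# in the spirit of how benchmarks like EyePCR report MCQ accuracy. Field names,
# the JSONL layout, and answer_question() are assumptions.
import json
from collections import defaultdict

def answer_question(image_path, question, options):
    """Placeholder for a call into an MLLM; should return one option letter."""
    raise NotImplementedError

def evaluate(jsonl_path):
    correct = defaultdict(int)
    total = defaultdict(int)
    with open(jsonl_path) as f:
        for line in f:
            item = json.loads(line)   # one MCQ per line (assumed layout)
            level = item["level"]     # e.g. perception / comprehension / reasoning
            pred = answer_question(item["image"], item["question"], item["options"])
            total[level] += 1
            correct[level] += int(pred == item["answer"])
    # Accuracy per cognitive level.
    return {lvl: correct[lvl] / total[lvl] for lvl in total}
```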