Hi!

Your personalized paper recommendations for 17 to 21 November 2025.
🎯 Top Personalized Recommendations
CAS
Why we think this paper is great for you:
This paper directly addresses knowledge graph reasoning, which is central to your interest in Knowledge Graphs and Knowledge Management. It explores advanced methods for inferring new knowledge from these structures.
Rate paper: 👍 👎 ♥ Save
Abstract
Knowledge graph reasoning (KGR) is the task of inferring new knowledge by performing logical deductions on knowledge graphs. Recently, large language models (LLMs) have demonstrated remarkable performance in complex reasoning tasks. Despite promising success, current LLM-based KGR methods still face two critical limitations. First, existing methods often extract reasoning paths indiscriminately, without assessing their different importance, which may introduce irrelevant noise that misleads LLMs. Second, while many methods leverage LLMs to dynamically explore potential reasoning paths, they require high retrieval demands and frequent LLM calls. To address these limitations, we propose PathMind, a novel framework designed to enhance faithful and interpretable reasoning by selectively guiding LLMs with important reasoning paths. Specifically, PathMind follows a "Retrieve-Prioritize-Reason" paradigm. First, it retrieves a query subgraph from KG through the retrieval module. Next, it introduces a path prioritization mechanism that identifies important reasoning paths using a semantic-aware path priority function, which simultaneously considers the accumulative cost and the estimated future cost for reaching the target. Finally, PathMind generates accurate and logically consistent responses via a dual-phase training strategy, including task-specific instruction tuning and path-wise preference alignment. Extensive experiments on benchmark datasets demonstrate that PathMind consistently outperforms competitive baselines, particularly on complex reasoning tasks with fewer input tokens, by identifying essential reasoning paths.
AI Summary
  • PathMind significantly improves KGR performance, particularly on complex multi-hop questions, by selectively guiding LLMs with prioritized reasoning paths rather than indiscriminately retrieved information. [3]
  • Adopt PathMind's approach to balance performance and efficiency in KGR by reducing the number of input tokens and LLM calls compared to synergy-augmented methods, making it more practical for real-world applications. [3]
  • Knowledge Graph Reasoning (KGR): The task of inferring new knowledge or answering complex queries by performing logical deductions on knowledge graphs (KGs). [3]
  • Reasoning Paths: Sequences of consecutive triples, π = e0 -> r1 -> e1 -> ... -> rl -> el, that reveal connections among entities. [3]
  • Employ a dual-phase training strategy, including task-specific instruction tuning and path-wise preference alignment, to enhance LLMs' ability to generate accurate and logically consistent responses. [2]
  • Optimize the number of nodes selected per iteration in path prioritization (e.g., K=3) to balance retrieving sufficient information with avoiding the introduction of misleading, irrelevant entities. [2]
  • Implement a semantic-aware path prioritization mechanism that combines accumulative cost (from query to current entity) and estimated future cost (to target answer) to effectively identify crucial reasoning paths. [1]
  • Prioritize important reasoning paths over random or shortest paths, as this strategy demonstrably reduces irrelevant noise and significantly improves reasoning accuracy, especially for complex queries. [1]
  • Leverage the framework's generalizability across various LLM backbones, as PathMind consistently achieves superior performance, indicating its adaptability and potential for integration with future advanced models. [1]
  • Reasoning Paths: Sequences of consecutive triples in KGs, defined as π = e0 -> r1 -> e1 -> ... -> rl -> el. [1]
  • Important Reasoning Paths: A subset of reasoning paths that are particularly necessary and relevant for accurate knowledge reasoning, distinguishing them from irrelevant or misleading paths. [1]
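The path priority function described in the summary above is A*-like: each candidate path is scored by the cost accumulated from the query plus an estimated cost to the target, and only the top K paths survive each iteration. A minimal sketch of that selection step, assuming a simple additive priority (the function names, toy costs, and entities below are illustrative, not from the paper):

```python
import heapq

def path_priority(accumulated_cost, estimated_future_cost):
    # A*-style priority: lower is better. PathMind combines an
    # accumulative cost (query -> current entity) with an estimated
    # future cost (current entity -> target); the exact semantic-aware
    # forms are not reproduced here.
    return accumulated_cost + estimated_future_cost

def top_k_paths(frontier, k=3):
    # Keep only the K most promising paths per iteration
    # (K=3 is the setting mentioned in the summary above).
    return heapq.nsmallest(k, frontier, key=lambda p: path_priority(p[0], p[1]))

# frontier entries: (accumulated_cost, estimated_future_cost, path)
frontier = [
    (0.2, 0.5, ["Q", "r1", "e1"]),
    (0.1, 0.9, ["Q", "r2", "e2"]),
    (0.4, 0.1, ["Q", "r3", "e3"]),
    (0.3, 0.8, ["Q", "r4", "e4"]),
]
best = top_k_paths(frontier, k=3)
```

Here a lower priority means a more promising path; the pruned frontier is what gets handed to the LLM, which is how the paper reduces input tokens.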
Why we think this paper is great for you:
This paper offers a comparative analysis of Executable Ontologies, which is highly relevant to your interest in Ontology for Products and Knowledge Management. It provides insights into modeling complex domains using declarative approaches.
Rate paper: 👍 👎 ♥ Save
Abstract
This paper compares two distinct approaches to modeling robotic behavior: imperative Behavior Trees (BTs) and declarative Executable Ontologies (EO), implemented through the boldsea framework. BTs structure behavior hierarchically using control-flow, whereas EO represents the domain as a temporal, event-based semantic graph driven by dataflow rules. We demonstrate that EO achieves comparable reactivity and modularity to BTs through a fundamentally different architecture: replacing polling-based tick execution with event-driven state propagation. We propose that EO offers an alternative framework, moving from procedural programming to semantic domain modeling, to address the semantic-process gap in traditional robotic control. EO supports runtime model modification, full temporal traceability, and a unified representation of data, logic, and interface - features that are difficult or sometimes impossible to achieve with BTs, although BTs excel in established, predictable scenarios. The comparison is grounded in a practical mobile manipulation task. This comparison highlights the respective operational strengths of each approach in dynamic, evolving robotic systems.
University of Melbourne
Why we think this paper is great for you:
This paper focuses on building a Knowledge Graph for complex collections, directly aligning with your interest in Knowledge Graphs and managing diverse information. The challenges it addresses are analogous to those in product categorization.
Rate paper: 👍 👎 ♥ Save
Abstract
Digital transformation in the cultural heritage sector has produced vast yet fragmented collections of artefact data. Existing frameworks for museum information systems struggle to integrate heterogeneous metadata, unstructured documents, and multimodal artefacts into a coherent and queryable form. We present MuseKG, an end-to-end knowledge-graph framework that unifies structured and unstructured museum data through symbolic-neural integration. MuseKG constructs a typed property graph linking objects, people, organisations, and visual or textual labels, and supports natural language queries. Evaluations on real museum collections demonstrate robust performance across queries over attributes, relations, and related entities, surpassing large-language-model zero-shot, few-shot and SPARQL prompt baselines. The results highlight the importance of symbolic grounding for interpretable and scalable cultural heritage reasoning, and pave the way for web-scale integration of digital heritage knowledge.
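The typed property graph at MuseKG's core can be pictured as nodes carrying a type plus properties, linked by typed edges. A toy sketch, assuming a dict-based representation (the node types, relation names, and query helper are invented for illustration; MuseKG's actual schema and natural-language query machinery are not reproduced here):

```python
# A toy typed property graph: typed nodes with properties, typed edges.
nodes = {
    "obj1": {"type": "Object", "title": "Ceremonial Mask"},
    "p1":   {"type": "Person", "name": "A. Curator"},
    "org1": {"type": "Organisation", "name": "City Museum"},
}
edges = [
    ("obj1", "DONATED_BY", "p1"),
    ("obj1", "HELD_BY", "org1"),
]

def related(node_id, relation):
    # Return the targets of a typed relation from a given node,
    # the kind of lookup a relation query would bottom out in.
    return [dst for src, rel, dst in edges if src == node_id and rel == relation]

holders = related("obj1", "HELD_BY")
```

Queries over attributes, relations, and related entities (as evaluated in the abstract) all reduce to traversals of this kind, which is what gives the symbolic grounding its interpretability.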
Beihang University
Why we think this paper is great for you:
This paper delves into continual learning, which is directly applicable to your interest in Continual Generalized Category Discovery. It explores how models can adapt and acquire new knowledge without forgetting existing information.
Rate paper: 👍 👎 ♥ Save
Paper visualization
Rate image: 👍 👎
Abstract
Domain-specific post-training often causes catastrophic forgetting, making foundation models lose their general reasoning ability and limiting their adaptability to dynamic real-world environments. Preserving general capabilities while acquiring downstream domain knowledge is a central challenge for large language and multimodal models. Traditional continual learning methods, such as regularization, replay and architectural isolation, suffer from poor downstream performance, reliance on inaccessible historical data, or additional parameter overhead. While recent parameter-efficient tuning (PET) methods can alleviate forgetting, their effectiveness strongly depends on the choice of parameters and update strategies. In this paper, we introduce PIECE, a Parameter Importance Estimation-based Continual Enhancement method that preserves general ability while efficiently learning domain knowledge without accessing prior training data or increasing model parameters. PIECE selectively updates only 0.1% of core parameters most relevant to new tasks, guided by two importance estimators: PIECE-F based on Fisher Information, and PIECE-S based on a second-order normalization that combines gradient and curvature information. Experiments across three language models and two multimodal models show that PIECE maintains general capabilities and achieves state-of-the-art continual learning performance across diverse downstream tasks. Our results highlight a practical path to scalable, domain-adaptive foundation models without catastrophic forgetting.
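The PIECE-F estimator described above ranks parameters by Fisher information, which is commonly approximated by the mean squared per-example gradient. A hedged sketch of that selection step, assuming the diagonal empirical-Fisher approximation (the 0.1% budget comes from the abstract; the function names, shapes, and toy gradients are illustrative):

```python
import numpy as np

def fisher_importance(grads):
    # Diagonal empirical Fisher information: mean squared gradient
    # over a batch of per-example gradients (shape: [batch, n_params]).
    return np.mean(grads ** 2, axis=0)

def core_parameter_mask(grads, fraction=0.001):
    # Select the top `fraction` (0.1% in the abstract) most important
    # parameters; only these would be updated during continual tuning,
    # leaving the rest frozen to preserve general capabilities.
    importance = fisher_importance(grads)
    k = max(1, int(len(importance) * fraction))
    top = np.argpartition(importance, -k)[-k:]
    mask = np.zeros_like(importance, dtype=bool)
    mask[top] = True
    return mask

rng = np.random.default_rng(0)
grads = rng.normal(size=(8, 10_000))   # toy per-example gradients
mask = core_parameter_mask(grads, fraction=0.001)
```

The PIECE-S variant additionally folds in curvature information via a second-order normalization; the abstract does not give its exact form, so it is not sketched here.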
University of Michigan
Why we think this paper is great for you:
This paper explores classification trees, a fundamental technique for building interpretable models, which is highly relevant to your interests in Taxonomy of Products and Product Categorization. It offers insights into performing valid inference for such models.
Rate paper: 👍 👎 ♥ Save
Paper visualization
Rate image: 👍 👎
Abstract
Decision trees are widely used for non-linear modeling, as they capture interactions between predictors while producing inherently interpretable models. Despite their popularity, performing inference on the non-linear fit remains largely unaddressed. This paper focuses on classification trees and makes two key contributions. First, we introduce a novel tree-fitting method that replaces the greedy splitting of the predictor space in standard tree algorithms with a probabilistic approach. Each split in our approach is selected according to sampling probabilities defined by an exponential mechanism, with a temperature parameter controlling its deviation from the deterministic choice given data. Second, while our approach can fit a tree that, with high probability, approximates the fit produced by standard tree algorithms at high temperatures, it is not merely predictive; unlike standard algorithms, it enables valid inference by taking into account the highly adaptive tree structure. Our method produces pivots directly from the sampling probabilities in the exponential mechanism. In theory, our pivots allow asymptotically valid inference on the parameters in the predictive fit, and in practice, our method delivers powerful inference without sacrificing predictive accuracy, in contrast to data splitting methods.
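The exponential mechanism described above draws each split with probability proportional to an exponential of its score scaled by the temperature, so that at high temperature the sampling concentrates on the greedy (deterministic) choice. A minimal sketch, assuming impurity-reduction scores and this scaling convention (both are illustrative; the paper defines the exact score and parameterization):

```python
import numpy as np

def sample_split(scores, temperature, rng):
    # Exponential mechanism: P(split j) is proportional to
    # exp(temperature * score_j). As temperature grows, mass
    # concentrates on the best-scoring split, recovering the
    # greedy choice in the limit; the sign/scaling convention
    # here is illustrative, not the paper's exact definition.
    logits = temperature * np.asarray(scores, dtype=float)
    logits -= logits.max()            # numerical stability
    probs = np.exp(logits)
    probs /= probs.sum()
    return rng.choice(len(scores), p=probs), probs

rng = np.random.default_rng(1)
scores = [0.10, 0.40, 0.35]           # toy impurity-reduction scores
j, probs = sample_split(scores, temperature=1000.0, rng=rng)
```

Because the split is drawn from known probabilities rather than chosen deterministically, those same probabilities can serve as the pivots the abstract mentions, which is what makes post-selection inference tractable.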
Monash University
Why we think this paper is great for you:
This paper investigates string graphs and their product structure, which directly aligns with your interest in Graphs for Products. It explores how complex graphs can be understood through simpler building blocks.
Rate paper: 👍 👎 ♥ Save
Abstract
We investigate string graphs through the lens of graph product structure theory, which describes complicated graphs as subgraphs of strong products of simpler building blocks. A graph $G$ is called a string graph if its vertices can be represented by a collection $\mathcal{C}$ of continuous curves (called a string representation of $G$) in a surface so that two vertices are adjacent in $G$ if and only if the corresponding curves in $\mathcal{C}$ cross. We prove that every string graph with bounded maximum degree in a fixed surface is isomorphic to a subgraph of the strong product of a graph with bounded treewidth and a path. This extends recent product structure theorems for string graphs. Applications of this result are presented. This product structure theorem ceases to be true if the `bounded maximum degree' assumption is relaxed to `bounded degeneracy'. For string graphs in the plane, we give an alternative proof of this result. Specifically, we show that every string graph in the plane has a `localised' string representation where the number of crossing points on the curve representing a vertex $u$ is bounded by a function of the degree of $u$. Our proof of the product structure theorem also leads to a result about the treewidth of outerstring graphs, which qualitatively extends a result of Fox and Pach [Eur. J. Comb. 2012] about outerstring graphs with bounded maximum degree. We extend our result to outerstring graphs defined in arbitrary surfaces.
McGill University
Why we think this paper is great for you:
This paper discusses how new technologies are reshaping organizational knowing and challenging established categories, which touches upon your interests in Knowledge Management and Product Categorization. It offers a perspective on the evolving nature of classification.
Rate paper: 👍 👎 ♥ Save
Abstract
Large Language Models (LLMs) are reshaping organizational knowing by unsettling the epistemological foundations of representational and practice-based perspectives. We conceptualize LLMs as Haraway-ian monsters, that is, hybrid, boundary-crossing entities that destabilize established categories while opening new possibilities for inquiry. Focusing on analogizing as a fundamental driver of knowledge, we examine how LLMs generate connections through large-scale statistical inference. Analyzing their operation across the dimensions of surface/deep analogies and near/far domains, we highlight both their capacity to expand organizational knowing and the epistemic risks they introduce. Building on this, we identify three challenges of living with such epistemic monsters: the transformation of inquiry, the growing need for dialogical vetting, and the redistribution of agency. By foregrounding the entangled dynamics of knowing-with-LLMs, the paper extends organizational theory beyond human-centered epistemologies and invites renewed attention to how knowledge is created, validated, and acted upon in the age of intelligent technologies.
Product Categorization
University College London
Rate paper: 👍 👎 ♥ Save
Abstract
We provide a unified treatment of several commuting tensor products considered in the literature, including the tensor product of enriched categories and the Boardman-Vogt tensor product of operads and symmetric multicategories, subsuming work of Elmendorf and Mandell. We then show how a commuting tensor product extends to bimodules, generalising results of Dwyer and Hess. In particular, we construct a double category of symmetric multicategories, symmetric multifunctors and bimodules and show that it admits a symmetric oplax monoidal structure. These applications are obtained as instances of a general construction of commuting tensor products on double categories of monads, monad morphisms and bimodules.
Graphs for Products
University of
Rate paper: 👍 👎 ♥ Save
Abstract
We survey the known group properties that a sequence of finite groups or group actions needs to satisfy to admit subsets of bounded cardinality producing expander Cayley or Schreier graphs. We prove that an infinite amenable group and solvable groups of bounded derived length do not produce expander Schreier graphs, generalizing with easier proofs results of Lubotzky and Weiss for Cayley graphs. In particular, the poor expansion properties of a group action cannot in general be detected by looking at the abelian sections or at the representations above the stabilizer of a point.

Interests not found

We did not find any papers that match the interests below. Try other terms, and also consider whether the content exists on arxiv.org.
  • Taxonomy of Products
  • MECE (Mutually Exclusive, Collectively Exhaustive)
You can edit or add more interests any time.