Hi!

Your personalized paper recommendations for 19 to 23 January 2026.
University of Edinburgh
AI Insights
  • The paper discusses the concept of partial linearity in categories, which is a generalization of linear categories. (ML: 0.89)👍👎
  • A morphism f:X→Y is central iff ⊏⊕1Y and f⊏1X. (ML: 0.86)👍👎
  • It is shown that the central morphisms between two objects have the structure of a monoid (Z(X,Y),+,z) where composition distributes over +. (ML: 0.84)👍👎
  • The paper shows that between any two ⊕⊗-words of the same length there is a unique isomorphism that is canonical with respect to ⊕ and ⊗. (ML: 0.84)👍👎
  • Conversely, it is shown that if, for each pair of objects X,Y in a prelinear category C, the set of central morphisms from X to Y forms a monoid (Z(X,Y),+,z) over which composition distributes, then C is linear. (ML: 0.81)👍👎
  • The paper defines addition of morphisms in a partially linear category as Lawvere and Schanuel do in [4]. (ML: 0.80)👍👎
  • A lineariser i: ⊕ → ⊗ is an isomorphism if it has both a left and right inverse. (ML: 0.80)👍👎
  • A prelinear category is called partially linear if its lineariser i: ⊕ → ⊗ is an isomorphism. (ML: 0.79)👍👎
  • A prelinear category is a category equipped with a lineariser i: ⊕ → ⊗. (ML: 0.76)👍👎
  • A prelinear category is defined as a category equipped with a lineariser i: ⊕ → ⊗, where ⊕ and ⊗ are monoidal structures on the category. (ML: 0.74)👍👎
Abstract
In this paper we study partial linearity in a category by replacing the isomorphism between coproducts and products in a linear category with an isomorphism between suitable monoidal structures on the category. The main results are a coherence theorem and a generalization of the theory of central morphisms from unital categories to our context of partial linearity.
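The monoid structure highlighted in the insights above can be written out. A sketch: the central morphisms between two objects form a monoid (Z(X,Y),+,z), with composition distributing over the addition. The behaviour of the unit z under composition is stated here by analogy with zero morphisms in linear categories; the summary bullets do not spell it out, so treat that law as an assumption.

```latex
% Central morphisms Z(X,Y) form a monoid (Z(X,Y), +, z),
% and composition distributes over +:
\[
  g \circ (f_1 + f_2) = g \circ f_1 + g \circ f_2, \qquad
  (f_1 + f_2) \circ h = f_1 \circ h + f_2 \circ h, \qquad
  g \circ z = z = z \circ h,
\]
% for all central f_1, f_2 \in Z(X,Y) and all composable g, h.
```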
Why we are recommending this paper
Due to your Interest in Product Categorization

This paper explores a fundamental concept – partial linearity – which aligns with your interest in category theory and taxonomy development. The theoretical framework presented could be valuable for structuring and understanding complex relationships within your product knowledge domains.
SUPSI
AI Insights
  • The authors emphasize the need for interpretable models, which can provide transparent and explainable results. (ML: 0.99)👍👎
  • Interpretability: The ability of a model to provide transparent and explainable results, allowing users to understand the reasoning behind its predictions. (ML: 0.98)👍👎
  • They also discuss the importance of interpretability in LLMs, highlighting the need for transparent and explainable models. (ML: 0.98)👍👎
  • Prompt engineering is a crucial aspect of LLM development, as it can significantly impact the model's performance and accuracy. (ML: 0.96)👍👎
  • Prompt engineering: The process of designing input prompts to elicit specific responses from large language models (LLMs). (ML: 0.95)👍👎
  • The paper discusses the concept of prompt engineering, which involves designing input prompts to elicit specific responses from large language models (LLMs). (ML: 0.94)👍👎
  • Multi-agent framework: A system where multiple agents work together to design and refine prompts for LLMs. (ML: 0.93)👍👎
  • The paper highlights the importance of prompt engineering in LLM development and proposes a multi-agent framework for optimizing prompts. (ML: 0.93)👍👎
  • The authors propose a multi-agent framework for prompt optimization, where multiple agents work together to design and refine prompts. (ML: 0.89)👍👎
  • The paper reviews various techniques for optimizing LLMs, including gradient descent, beam search, and meta-learning. (ML: 0.83)👍👎
Abstract
Feature extraction from unstructured text is a critical step in many downstream classification pipelines, yet current approaches largely rely on hand-crafted prompts or fixed feature schemas. We formulate feature discovery as a dataset-level prompt optimization problem: given a labelled text corpus, the goal is to induce a global set of interpretable and discriminative feature definitions whose realizations optimize a downstream supervised learning objective. To this end, we propose a multi-agent prompt optimization framework in which language-model agents jointly propose feature definitions, extract feature values, and evaluate feature quality using dataset-level performance and interpretability feedback. Instruction prompts are iteratively refined based on this structured feedback, enabling optimization over prompts that induce shared feature sets rather than per-example predictions. This formulation departs from prior prompt optimization methods that rely on per-sample supervision and provides a principled mechanism for automatic feature discovery from unstructured text.
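The loop the abstract describes (propose feature definitions, extract feature values, evaluate at the dataset level, refine the prompt) can be sketched in miniature. The agent roles below are keyword-matching stand-ins for language-model agents, and every function name, the toy corpus, and the refinement rule are invented for illustration; this is not the paper's implementation.

```python
# Hypothetical sketch of dataset-level prompt optimization: proposer,
# extractor, and evaluator "agents" cooperating to refine a prompt.

def propose_features(prompt, corpus):
    # Proposer agent: induce candidate feature definitions.
    # Stand-in: one boolean feature per word of the current prompt.
    return [f"contains_{w}" for w in prompt.split()]

def extract_values(features, text):
    # Extractor agent: realize each feature on one document.
    return {f: f.removeprefix("contains_") in text for f in features}

def evaluate(features, corpus, labels):
    # Evaluator agent: score each feature by agreement with the labels,
    # a stand-in for the downstream supervised objective.
    rows = [extract_values(features, t) for t in corpus]
    def acc(f):
        return sum(r[f] == y for r, y in zip(rows, labels)) / len(labels)
    return {f: acc(f) for f in features}

def optimize(prompt, corpus, labels, rounds=3):
    best_prompt, best_score = prompt, 0.0
    for _ in range(rounds):
        feats = propose_features(best_prompt, corpus)
        scores = evaluate(feats, corpus, labels)
        top = max(scores, key=scores.get)
        if scores[top] > best_score:
            best_score = scores[top]
            # Refinement: feed the best-performing feature back in.
            best_prompt = top.removeprefix("contains_")
    return best_prompt, best_score

corpus = ["cheap usb cable", "usb hub", "leather wallet", "wallet strap"]
labels = [True, True, False, False]
prompt, score = optimize("usb wallet cable", corpus, labels)
```

The key design point the abstract makes survives even at this scale: the feedback is computed over the whole corpus, so the loop optimizes a shared feature set rather than per-example predictions.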
Why we are recommending this paper
Due to your Interest in Continual Generalized Category Discovery

Given your focus on product categorization and knowledge graphs, this work on automated feature discovery is highly relevant. The approach of optimizing prompts for dataset-level insights directly addresses your need for efficient knowledge extraction.
University of Chinese Academy of Sciences
Paper visualization
Rate image: 👍 👎
Abstract
The rapid development of knowledge graph research has provided a strong driving force for its application in many areas, including the medicine and healthcare domain. However, we have found that the application of some major information processing techniques to knowledge graphs still lags behind. This gap includes the failure to make sufficient use of advanced logical reasoning, advanced artificial intelligence techniques, special-purpose programming languages, and modern probabilistic and statistical theories in knowledge graph development and application. In particular, techniques for cooperation and competition among multiple knowledge graphs have not received enough attention from researchers. This paper develops a systematic theory, technique, and application of the concept of a 'knowledge graph network' and its application in the medical and healthcare domain. Our research covers its definition, development, reasoning, computing, and application under different conditions such as unsharp, uncertain, multi-modal, vectorized, distributed, and federated. In almost every case we provide (real-data) examples and experimental results. Finally, a summary of the innovations is provided.
Why we are recommending this paper
Due to your Interest in Knowledge Graphs

This paper’s application of logic programming to knowledge graphs, particularly in the medical domain, aligns with your interest in knowledge management and structured knowledge representation. It offers a potential approach to leveraging knowledge graphs for complex reasoning.
UCLouvain
AI Insights
  • Limited discrepancy search (LDS): A search strategy that explores a limited number of possible solutions to find an optimal solution. (ML: 0.96)👍👎
  • It integrates limited discrepancy search and carefully controls the exploration of both feature and split discrepancies to discover high-quality solutions early and improve upon them. (ML: 0.94)👍👎
  • Optimal decision trees: Trees that are optimal in terms of accuracy or other criteria, such as minimizing error or maximizing information gain. (ML: 0.94)👍👎
  • Branch-and-bound search: A search strategy that uses a tree-like structure to explore the solution space and prune branches that are not promising. (ML: 0.90)👍👎
  • CA-ConTree is a powerful algorithm for learning optimal decision trees on continuous features with strong anytime performance. (ML: 0.89)👍👎
  • CA-ConTree is a novel algorithm for learning optimal decision trees on continuous features with strong anytime performance. (ML: 0.88)👍👎
  • Anytime algorithm: An algorithm that can return an approximate solution whenever it is interrupted and improves that solution as more time is given, without sacrificing eventual optimality. (ML: 0.87)👍👎
  • Experimental results on a diverse set of benchmark datasets demonstrate that CA-ConTree consistently outperforms existing exact methods in time-limited settings, particularly on medium-sized datasets and for tree depths where standard approaches struggle. (ML: 0.82)👍👎
  • It has been shown to outperform existing exact methods in time-limited settings, particularly on medium-sized datasets and for tree depths where standard approaches struggle. (ML: 0.81)👍👎
  • CA-ConTree achieves substantially better anytime behavior without sacrificing generalization performance and often surpasses both greedy baselines and exact solvers within the same computational budget. (ML: 0.76)👍👎
Abstract
In recent years, significant progress has been made on algorithms for learning optimal decision trees, primarily in the context of binary features. Extending these methods to continuous features remains substantially more challenging due to the large number of potential splits for each feature. Recently, an elegant exact algorithm was proposed for learning optimal decision trees with continuous features; however, the rapidly increasing computational time limits its practical applicability to shallow depths (typically 3 or 4). It relies on a depth-first search optimization strategy that fully optimizes the left subtree of each split before exploring the corresponding right subtree. While effective in finding optimal solutions given sufficient time, this strategy can lead to poor anytime behavior: when interrupted early, the best-found tree is often highly unbalanced and suboptimal. In such cases, purely greedy methods such as C4.5 may, paradoxically, yield better solutions. To address this limitation, we propose an anytime, yet complete approach leveraging limited discrepancy search, distributing the computational effort more evenly across the entire tree structure, and thus ensuring that a high-quality decision tree is available at any interruption point. Experimental results show that our approach outperforms the existing one in terms of anytime performance.
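The limited discrepancy search idea at the heart of the abstract can be illustrated abstractly: explore choice sequences in order of how often they deviate ("discrepancies") from the heuristic's preferred branch, so a complete candidate is available from the first step and the incumbent only improves. The search space and objective below are toys, not CA-ConTree's tree-learning problem.

```python
from itertools import combinations

def lds(depth, score, max_disc):
    """Enumerate binary choice sequences in order of increasing discrepancy
    count (number of deviations from the heuristic's preferred choice 0),
    yielding the incumbent best after every candidate."""
    best = None
    for k in range(max_disc + 1):                   # discrepancy budget
        for ones in combinations(range(depth), k):  # positions that deviate
            choices = tuple(1 if i in ones else 0 for i in range(depth))
            s = score(choices)
            if best is None or s > best[1]:
                best = (choices, s)
            yield best  # a complete solution is usable at any interruption

# Toy objective: deviating from the heuristic pays off at positions 1 and 3.
score = lambda c: 2 * (c[1] + c[3]) - sum(c)
incumbents = list(lds(depth=4, score=score, max_disc=4))
```

The anytime property shows up in the trace: the very first incumbent is the pure heuristic solution, and interrupting later can only return something at least as good, which is exactly the contrast the abstract draws with depth-first subtree optimization.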
Why we are recommending this paper
Due to your Interest in Continual Generalized Category Discovery

The exploration of decision trees, especially with continuous features, connects to your interest in product categorization and MECE methodologies. This work could provide techniques for structuring and classifying products based on continuous data.
Southern University of Science and Technology
Paper visualization
Rate image: 👍 👎
AI Insights
  • The authors also propose an adaptive weighting strategy based on the teacher's true class probability (TCP) that dynamically adjusts reference loss strength for each sample, providing flexible balance between knowledge retention and integration. (ML: 0.96)👍👎
  • The use of a reference model supervision mechanism helps prevent catastrophic forgetting during stage transitions, enabling effective knowledge acquisition. (ML: 0.96)👍👎
  • Extensive experiments demonstrate that SMSKD consistently improves student accuracy across diverse teacher-student architectures and KD method combinations and achieves superior performance compared to existing methods. (ML: 0.96)👍👎
  • The adaptive weighting strategy based on TCP allows for flexible balance between knowledge retention and integration, further improving student accuracy. (ML: 0.96)👍👎
  • The authors propose a new framework for knowledge distillation called Sequential Multi-Stage Knowledge Distillation (SMSKD), which sequentially integrates heterogeneous KD methods to achieve superior student performance. (ML: 0.95)👍👎
  • Teacher-Student Architecture: The architecture of the teacher and student models used for knowledge distillation. (ML: 0.95)👍👎
  • The teacher is typically a larger, more complex model that has been pre-trained on a large dataset, while the student is a smaller, simpler model that is being trained to mimic the teacher's behavior. (ML: 0.94)👍👎
  • Knowledge Distillation (KD): A technique used in deep learning to transfer knowledge from a pre-trained teacher model to a smaller student model, with the goal of improving the student's accuracy on specific tasks. (ML: 0.94)👍👎
  • To address catastrophic forgetting during stage transitions, the authors introduce a reference model supervision mechanism that anchors the student to its previous-stage state, preventing excessive deviation while enabling effective knowledge acquisition. (ML: 0.94)👍👎
  • The proposed SMSKD framework provides a simple yet effective way to integrate multiple KD methods and achieve superior student performance. (ML: 0.93)👍👎
  • Unlike prior approaches combining multiple KD objectives simultaneously, SMSKD adopts a progressive training strategy that allows the student to absorb complementary knowledge from different distillation methods in successive stages. (ML: 0.92)👍👎
Abstract
Knowledge distillation (KD) transfers knowledge from large teacher models to compact student models, enabling efficient deployment on resource-constrained devices. While diverse KD methods, including response-based, feature-based, and relation-based approaches, capture different aspects of teacher knowledge, integrating multiple methods or knowledge sources is promising but often hampered by complex implementation, inflexible combinations, and catastrophic forgetting, which limits practical effectiveness. This work proposes SMSKD (Sequential Multi-Stage Knowledge Distillation), a flexible framework that sequentially integrates heterogeneous KD methods. At each stage, the student is trained with a specific distillation method, while a frozen reference model from the previous stage anchors learned knowledge to mitigate forgetting. In addition, we introduce an adaptive weighting mechanism based on the teacher's true class probability (TCP) that dynamically adjusts the reference loss per sample to balance knowledge retention and integration. By design, SMSKD supports arbitrary method combinations and stage counts with negligible computational overhead. Extensive experiments show that SMSKD consistently improves student accuracy across diverse teacher-student architectures and method combinations, outperforming existing baselines. Ablation studies confirm that stage-wise distillation and reference model supervision are the primary contributors to performance gains, with TCP-based adaptive weighting providing complementary benefits. Overall, SMSKD is a practical and resource-efficient solution for integrating heterogeneous KD methods.
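The stage structure the abstract describes can be reduced to a numeric toy: a scalar "student", one KD target per stage (standing in for each distillation method's optimum), and a frozen previous-stage reference. The quadratic losses and the 1 − TCP weighting form below are illustrative assumptions, not the paper's actual objectives.

```python
# Schematic SMSKD loop: sequential stages, each anchored to a frozen
# snapshot of the previous stage, with a TCP-driven reference weight.

def smskd(student, stage_targets, tcp, lr=0.1, steps=200):
    for target in stage_targets:       # one distillation method per stage
        reference = student            # frozen snapshot of the last stage
        w = 1.0 - tcp                  # lean on the reference more when the
                                       # teacher's true-class prob. is low
        for _ in range(steps):
            # gradient of (student - target)^2 + w * (student - reference)^2
            grad = 2 * (student - target) + 2 * w * (student - reference)
            student -= lr * grad
    return student

confident = smskd(0.0, [1.0], tcp=1.0)  # teacher sure: adopt the new target
cautious = smskd(0.0, [1.0], tcp=0.0)   # teacher unsure: stay anchored
```

With tcp = 1 the reference term vanishes and the student fully adopts the new stage's target; with tcp = 0 it settles halfway between target and reference, retaining prior-stage knowledge, which is the retention-vs-integration balance the adaptive weighting is meant to control.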
Why we are recommending this paper
Due to your Interest in Knowledge Management

With your interest in knowledge graphs and efficient knowledge management, this paper on knowledge distillation is a strong fit. The framework could be applied to optimize knowledge representation and retrieval within your knowledge graph systems.
Austin Peay State University
Paper visualization
Rate image: 👍 👎
AI Insights
  • Prime Labeling: A labeling of a graph is said to be prime if every pair of adjacent vertices has relatively prime labels. (ML: 0.92)👍👎
  • These findings provide new insights into the structure of these graphs and have potential applications in fields such as network analysis and coding theory. (ML: 0.92)👍👎
  • These results demonstrate the complexity of the problem and highlight the need for further research in this area. (ML: 0.91)👍👎
  • Total Prime Graph: A graph is considered total prime if it admits a labeling that satisfies the relatively prime condition for both vertex and edge labels. (ML: 0.91)👍👎
  • This means that any two adjacent vertices must have relatively prime labels, and at each vertex of degree at least two, the greatest common divisor of the incident edge labels must equal 1. (ML: 0.91)👍👎
  • The authors' results provide new insights into the structure of these graphs and have potential applications in fields such as network analysis and coding theory. (ML: 0.90)👍👎
  • In other words, for any two adjacent vertices u and v, the label of u and the label of v must not share any common factors greater than 1. (ML: 0.89)👍👎
  • While the authors provide several counterexamples to the conjecture that all graphs are total prime, there may be other classes of graphs that are not yet understood. (ML: 0.85)👍👎
  • The problem of determining whether a graph is total prime or not has been studied extensively in graph theory. (ML: 0.84)👍👎
  • The authors' investigation into various classes of graphs reveals that certain classes are total prime, while others are not. (ML: 0.84)👍👎
  • The study of total prime graphs has implications for various areas of mathematics, including combinatorics, algebraic geometry, and computer science. (ML: 0.84)👍👎
  • The authors investigate various classes of graphs to determine whether they are total prime or not. (ML: 0.83)👍👎
  • They find that certain classes of trees, such as spider graphs, caterpillars, and complete binary trees, are total prime. (ML: 0.82)👍👎
  • The study of total prime graphs is a complex problem that requires careful consideration of many different factors. (ML: 0.82)👍👎
  • The authors investigate various classes of graphs to determine whether they are total prime or not, providing new insights into the structure of these graphs and highlighting the need for further research in this area. (ML: 0.82)👍👎
  • The authors provide several counterexamples to the conjecture that all graphs are total prime. (ML: 0.81)👍👎
  • The study of total prime graphs is an active area of research with many results and counterexamples having been discovered over the years. (ML: 0.81)👍👎
  • However, they also show that other classes of graphs, including unions of cycles with at least two odd cycles, and the union of a graph with sufficiently many 3-cycles, are not total prime. (ML: 0.81)👍👎
  • The study of total prime graphs has been an active area of research for many years, with many results and counterexamples having been discovered. (ML: 0.80)👍👎
  • They show that the union of cycles with at least two odd cycles is not total prime, and that the union of a graph with sufficiently many 3-cycles is also not total prime. (ML: 0.77)👍👎
  • Further research is needed to fully understand the properties of these graphs and to develop new methods for determining whether they are total prime or not. (ML: 0.76)👍👎
Abstract
A total prime labeling of a graph of order $n$ is an extension of a prime labeling in which we distinctly label the vertices and edges. The goal of the labeling is for adjacent vertex labels to be relatively prime, and for each vertex of degree at least two, the greatest common divisor of the labels on its incident edges is equal to 1. In this paper, we construct total prime labelings by extending known prime and minimum coprime labelings and by developing new constructions for various classes of graphs. In particular, we show that snakes, books, prisms, prime trees, certain families of windmills, and other families of graphs are total prime.
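The definition in the abstract is easy to mechanize. A small checker, under the assumption that the labels used are exactly the distinct integers 1..(n+m) split over vertices and edges (the abstract says "distinctly label" without fixing the label set, so that range is an assumption):

```python
from math import gcd
from functools import reduce

def is_total_prime(vlabel, elabel):
    """vlabel: {vertex: label}; elabel: {(u, v): label}.
    Checks the two conditions of a total prime labeling."""
    labels = list(vlabel.values()) + list(elabel.values())
    n = len(labels)
    if sorted(labels) != list(range(1, n + 1)):
        return False                      # labels must be 1..n+m, distinct
    for u, v in elabel:
        if gcd(vlabel[u], vlabel[v]) != 1:
            return False                  # adjacent vertex labels coprime
    for w in vlabel:
        incident = [l for (u, v), l in elabel.items() if w in (u, v)]
        if len(incident) >= 2 and reduce(gcd, incident) != 1:
            return False                  # incident edge labels have gcd 1
    return True

# Path a-b-c with vertex labels 1, 2, 3 and edge labels 4, 5.
ok = is_total_prime({"a": 1, "b": 2, "c": 3},
                    {("a", "b"): 4, ("b", "c"): 5})
```

Note the second condition only constrains the overall gcd of the incident edge labels, which is weaker than requiring them pairwise coprime.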
Why we are recommending this paper
Due to your Interest in Graphs for Products
Georgia Southern University
AI Insights
  • However, the complexity of deciding P(d,3) or the more general problem P(d,r) remains unknown. (ML: 0.92)👍👎
  • Bipartite realization: a graph whose vertices can be partitioned into two disjoint sets such that every edge connects a vertex from one set to a vertex from the other set. (ML: 0.91)👍👎
  • The generated graphs can be further tested to determine if they are bipartite to generate all labeled bipartite realizations. (ML: 0.90)👍👎
  • Graphical sequence: a sequence of integers representing the degrees of the vertices in a graph. (ML: 0.89)👍👎
  • Lexicographical order: an ordering of sequences analogous to dictionary order, in which sequences are compared element by element from the first position. (ML: 0.89)👍👎
  • The algorithm can be used to solve the decision problem P(d,3) of whether an input graphical sequence has any triangle-free realizations. (ML: 0.87)👍👎
  • Triangle-free realization: a graph that does not contain any triangles (i.e., cycles of length 3). (ML: 0.81)👍👎
  • The algorithm often finishes quickly, much faster than first generating all labeled realizations as in [21]. (ML: 0.81)👍👎
  • For example, on the sequence (7^1, 5^9, 3^1, 2^1, 1^1) of length 13, it can finish in about 5 seconds to find all 27720 labeled triangle-free realizations and all 17640 labeled bipartite realizations. (ML: 0.80)👍👎
  • The paper presents an algorithm to generate all labeled triangle-free realizations of a given graphical sequence in lexicographical order. (ML: 0.80)👍👎
Abstract
We extend our previous algorithm that generates all labeled graphs with a given graphical degree sequence to generate all labeled triangle-free graphs with a given graphical degree sequence. The algorithm uses various pruning techniques to avoid having to first generate all labeled realizations of the input sequence and then testing whether each labeled realization is triangle-free. It can be further extended to generate all labeled bipartite graphs with a given graphical degree sequence by adding a simple test whether each generated triangle-free realization is a bipartite graph. All output graphs are generated in the lexicographical ordering as in the original algorithm. The algorithms can also be easily parallelized.
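The objects the abstract's algorithm enumerates can be pinned down with a brute-force version: generate every labeled realization of a degree sequence, then test it for triangles. The paper's contribution is precisely the pruning that avoids this generate-then-test pattern; the sketch below only illustrates what is being counted, on sequences small enough for exhaustion.

```python
from itertools import combinations

def realizations(degrees):
    """Yield the edge sets of all labeled graphs with this degree sequence."""
    n = len(degrees)
    m = sum(degrees) // 2
    for edges in combinations(combinations(range(n), 2), m):
        deg = [0] * n
        for u, v in edges:
            deg[u] += 1
            deg[v] += 1
        if deg == list(degrees):
            yield edges

def triangle_free(edges, n):
    """True if the edge set contains no 3-cycle."""
    es = set(edges)
    return not any({(a, b), (b, c), (a, c)} <= es
                   for a, b, c in combinations(range(n), 3))

# Degree sequence (1,1,1,1): its labeled realizations are the three
# perfect matchings on four labeled vertices, all triangle-free.
matchings = [e for e in realizations((1, 1, 1, 1)) if triangle_free(e, 4)]
```

On (2,2,2) the only labeled realization is the triangle itself, so the triangle-free count is zero; a bipartite test on each surviving realization would extend this to the bipartite case, mirroring the extension described in the abstract.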
Why we are recommending this paper
Due to your Interest in Graphs for Products
Harvard Medical School
AI Insights
  • The paper assumes that the knowledge graph is well-structured and accurate, which may not always be the case in real-world scenarios. (ML: 0.97)👍👎
  • Knowledge Graph (KG): A structured representation of knowledge that consists of entities, relationships between them, and attributes or properties. (ML: 0.96)👍👎
  • Future work may focus on adapting GRAG to other NLP tasks, exploring different graph traversal strategies, and investigating the impact of KG quality on model performance. (ML: 0.94)👍👎
  • The proposed method relies on the availability of a large-scale knowledge graph, which can be a limitation for smaller datasets or applications with limited resources. (ML: 0.94)👍👎
  • The proposed GRAG method has shown promising results in improving question answering performance, especially when combined with large language models. (ML: 0.89)👍👎
  • Retrieval-Augmented Generation (RAG): A paradigm that combines the strengths of retrieval-based models and generative models to improve performance on various NLP tasks. (ML: 0.88)👍👎
  • GRAG is evaluated on several benchmark datasets, demonstrating state-of-the-art results in terms of accuracy and efficiency. (ML: 0.88)👍👎
  • The paper presents a novel approach to knowledge graph-based retrieval-augmented generation, which leverages the strengths of both large language models and knowledge graphs. (ML: 0.86)👍👎
  • Recent works have shown that retrieval-augmented generation can improve performance on various NLP tasks, especially when combined with large language models. (ML: 0.86)👍👎
  • The proposed method, called GRAG (Graph Retrieval-Augmented Generation), combines the benefits of dense passage retrieval and graph traversal to improve question answering performance. (ML: 0.84)👍👎
Abstract
Retrieving evidence for language model queries from knowledge graphs requires balancing broad search across the graph with multi-hop traversal to follow relational links. Similarity-based retrievers provide coverage but remain shallow, whereas traversal-based methods rely on selecting seed nodes to start exploration, which can fail when queries span multiple entities and relations. We introduce ARK: Adaptive Retriever of Knowledge, an agentic KG retriever that gives a language model control over this breadth-depth tradeoff using a two-operation toolset: global lexical search over node descriptors and one-hop neighborhood exploration that composes into multi-hop traversal. ARK alternates between breadth-oriented discovery and depth-oriented expansion without depending on a fragile seed selection, a pre-set hop depth, or requiring retrieval training. ARK adapts tool use to queries, using global search for language-heavy queries and neighborhood exploration for relation-heavy queries. On STaRK, ARK reaches 59.1% average Hit@1 and 67.4 average MRR, improving average Hit@1 by up to 31.4% and average MRR by up to 28.0% over retrieval-based and agentic training-free methods. Finally, we distill ARK's tool-use trajectories from a large teacher into an 8B model via label-free imitation, improving Hit@1 by +7.0, +26.6, and +13.5 absolute points over the base 8B model on AMAZON, MAG, and PRIME datasets, respectively, while retaining up to 98.5% of the teacher's Hit@1 rate.
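The two-operation toolset the abstract describes, global lexical search for breadth and one-hop neighborhood exploration for depth, can be shown on a toy graph. The knowledge graph, the token-overlap scoring, and the query below are all invented for illustration; ARK's actual tools operate over node descriptors at benchmark scale with a language model choosing when to call each.

```python
# Toy KG: node descriptors plus adjacency. Two tools compose into
# multi-hop retrieval without seed-node selection or a preset hop depth.

KG = {
    "aspirin":       {"text": "aspirin, an anti-inflammatory drug",
                      "nbrs": ["cox-1", "pain"]},
    "cox-1":         {"text": "COX-1 enzyme inhibited by aspirin",
                      "nbrs": ["aspirin", "prostaglandin"]},
    "pain":          {"text": "pain, a symptom",
                      "nbrs": ["aspirin"]},
    "prostaglandin": {"text": "prostaglandin synthesis",
                      "nbrs": ["cox-1"]},
}

def global_search(query):
    """Breadth: rank nodes by lexical overlap with the query."""
    qs = set(query.lower().split())
    scored = [(len(qs & set(v["text"].lower().replace(",", " ").split())), k)
              for k, v in KG.items()]
    return [k for s, k in sorted(scored, reverse=True) if s > 0]

def explore(node):
    """Depth: one-hop neighborhood; repeated calls give multi-hop traversal."""
    return KG[node]["nbrs"]

# Alternate breadth and depth: search seeds the entry point, then one hop
# plus a filter finds the relation-linked answer.
seeds = global_search("aspirin drug target")
answer = [n for n in explore(seeds[0]) if "enzyme" in KG[n]["text"]]
```

The breadth-depth tradeoff the abstract mentions is visible even here: a language-heavy query is served by `global_search` alone, while a relation-heavy one needs the `explore` hop from whatever the search surfaces.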
Why we are recommending this paper
Due to your Interest in Knowledge Graphs
University of Bremen
AI Insights
  • Reasoning: The process of drawing conclusions or making decisions based on the knowledge represented in an ontology. (ML: 0.98)👍👎
  • Knowledge representation and reasoning (KRR): The process of representing knowledge in a way that can be used by machines to reason about it. (ML: 0.98)👍👎
  • Ontology: A formal representation of knowledge that provides a common understanding of the meaning of terms and concepts. (ML: 0.97)👍👎
  • They also discuss the importance of explainability and adaptability in KRR for robotics, as well as the need for more research in this area. (ML: 0.95)👍👎
  • It highlights the potential benefits of using an ontology-based framework for representing and reasoning about knowledge in robotics. (ML: 0.94)👍👎
  • The paper discusses the challenges of integrating knowledge representation and reasoning (KRR) into robotic systems. (ML: 0.93)👍👎
  • The paper concludes by emphasizing the need for more research in KRR for robotics, particularly in areas such as explainability and adaptability. (ML: 0.89)👍👎
  • The authors propose an ontology-based framework for representing and reasoning about knowledge in robotics, which can be used to integrate various components of a robotic system. (ML: 0.87)👍👎
  • It highlights the need for a more flexible and open approach to KRR in robotics, rather than relying on proprietary or closed systems. (ML: 0.83)👍👎
  • The authors also discuss the importance of developing more flexible and open approaches to KRR in robotics. (ML: 0.82)👍👎
Abstract
This paper introduces KRROOD, a framework designed to bridge the integration gap between modern software engineering and Knowledge Representation & Reasoning (KR&R) systems. While Object-Oriented Programming (OOP) is the standard for developing complex applications, existing KR&R frameworks often rely on external ontologies and specialized languages that are difficult to integrate with imperative code. KRROOD addresses this by treating knowledge as a first-class programming abstraction using native class structures, bridging the gap between the logic programming and OOP paradigms. We evaluate the system on the OWL2Bench benchmark and a human-robot task learning scenario. Experimental results show that KRROOD achieves strong performance while supporting the expressive reasoning required for real-world autonomous systems.
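The abstract's core idea, treating knowledge as a first-class programming abstraction using native class structures, can be shrunk to a few lines: the class hierarchy doubles as an ontology's subclass-of relation, so subsumption reasoning becomes ordinary OOP introspection. This is the idea in miniature only, not the KRROOD API; all names are invented.

```python
# A tiny ontology expressed as native classes, with subsumption
# reasoning done by the language itself instead of an external reasoner.

class Thing:
    pass

class Container(Thing):
    pass

class Cup(Container):
    pass

def is_a(instance, concept):
    """Subsumption check: instance-of via the native class hierarchy."""
    return isinstance(instance, concept)

def subsumers(instance):
    """All concepts the instance falls under, most specific first."""
    return [c.__name__ for c in type(instance).__mro__ if c is not object]

mug = Cup()
facts = subsumers(mug)
```

The appeal, as the abstract argues, is integration: the same objects flow through imperative robot-control code and through the reasoner, with no external ontology language in between.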
Why we are recommending this paper
Due to your Interest in Knowledge Management

Interests not found

We did not find any papers that match the interests below. Try other terms, and also consider whether the content exists on arxiv.org.
  • Ontology for Products
  • MECE (Mutually Exclusive, Collectively Exhaustive)
  • Taxonomy of Products
You can edit or add more interests any time.