Hi j34nc4rl0+categorization,

Here are your personalized paper recommendations, sorted by relevance.
Graphs for Products
Fort Hays State University
Abstract
A \emph{Fibonacci cordial labeling} of a graph \( G \) is an injective function \( f: V(G) \rightarrow \{F_0, F_1, \dots, F_n\} \), where \( F_i \) denotes the \( i^{\text{th}} \) Fibonacci number, such that the induced edge labeling \( f^*: E(G) \rightarrow \{0,1\} \), given by \( f^*(uv) = (f(u) + f(v)) \bmod 2 \), satisfies the balance condition \( |e_f(0) - e_f(1)| \le 1 \). Here, \( e_f(0) \) and \( e_f(1) \) represent the number of edges labeled 0 and 1, respectively. A graph that admits such a labeling is termed a \emph{Fibonacci cordial graph}. In this paper, we investigate the existence and construction of Fibonacci cordial labelings for several families of graphs, including \emph{Generalized Petersen graphs}, \emph{open and closed helm graphs}, \emph{joint sum graphs}, and \emph{circulant graphs of small order}. New results and examples are presented, contributing to the growing body of knowledge on graph labelings inspired by numerical sequences.
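As a concrete illustration of the definition above, here is a minimal Python sketch that checks whether a given vertex labeling satisfies the Fibonacci cordial conditions; the graph representation and function names are illustrative, not taken from the paper.

    def fibonacci_numbers(n):
        """Return [F_0, F_1, ..., F_n] with F_0 = 0 and F_1 = 1."""
        fibs = [0, 1]
        while len(fibs) <= n:
            fibs.append(fibs[-1] + fibs[-2])
        return fibs[:n + 1]

    def is_fibonacci_cordial(vertices, edges, labeling):
        """Check injectivity of f and the balance condition |e_f(0) - e_f(1)| <= 1."""
        if len(set(labeling[v] for v in vertices)) != len(vertices):
            return False  # f must be injective
        e0 = sum(1 for u, v in edges if (labeling[u] + labeling[v]) % 2 == 0)
        e1 = len(edges) - e0
        return abs(e0 - e1) <= 1

    # Example: the cycle C_4 with labels drawn from {F_0, ..., F_4}
    vertices = [0, 1, 2, 3]
    edges = [(0, 1), (1, 2), (2, 3), (3, 0)]
    fibs = fibonacci_numbers(4)                                   # [0, 1, 1, 2, 3]
    labeling = {0: fibs[0], 1: fibs[1], 2: fibs[4], 3: fibs[3]}   # 0, 1, 3, 2
    print(is_fibonacci_cordial(vertices, edges, labeling))        # True

Here the induced edge labels are 1, 0, 1, 0, so e_f(0) = e_f(1) = 2 and the balance condition holds with equality.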
September 01, 2025
Save to Reading List
School of Mathematics,Mon
Abstract
A graph is $k$-gap-planar if it has a drawing in the plane such that every crossing can be charged to one of the two edges involved so that at most $k$ crossings are charged to each edge. We show this class of graphs has linear expansion. In particular, every $r$-shallow minor of a $k$-gap-planar graph has density $O(rk)$. Several extensions of this result are proved: for topological minors, for $k$-cover-planar graphs, for $k$-gap-cover-planar graphs, and for drawings on any surface. Applications to graph colouring are presented.
September 03, 2025
Save to Reading List
Ontology for Products
Ashoka University
Abstract
Formalizing cooking procedures remains a challenging task due to their inherent complexity and ambiguity. We introduce an extensible domain-specific language for representing recipes as directed action graphs, capturing processes, transfers, environments, concurrency, and compositional structure. Our approach enables precise, modular modeling of complex culinary workflows. Initial manual evaluation on a full English breakfast recipe demonstrates the DSL's expressiveness and suitability for future automated recipe analysis and execution. This work represents initial steps towards an action-centric ontology for cooking, using temporal graphs to enable structured machine understanding, precise interpretation, and scalable automation of culinary processes - both in home kitchens and professional culinary settings.
AI Insights
  • The DSL decomposes recipes into Process, Transfer, and Plate nodes, enabling fine‑grained action tracking.
  • Implicit state tracking (PPCs) automatically records ingredient states without explicit annotations.
  • Explicit environment modeling lets the graph encode kitchen zones, appliances, and their interactions.
  • First‑class concurrency support captures simultaneous actions like simmering while chopping.
  • The three‑stage pipeline transforms natural‑language text into machine‑interpretable ontologies and temporal graphs.
  • This formalism facilitates automated recipe execution, from smart ovens to robotic chefs.
  • By integrating graph‑based representation with procedural semantics, the DSL bridges the gap left by prior NLP‑centric studies.
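To make the action-graph idea concrete, the following minimal Python sketch models a few steps of a breakfast recipe as typed nodes in a directed graph, loosely mirroring the Process/Transfer/Plate decomposition listed above; all class and field names are assumptions for illustration, not the paper's actual DSL.

    from dataclasses import dataclass, field

    @dataclass
    class Node:
        node_id: str
        inputs: list = field(default_factory=list)   # ingredient names or upstream node ids
        environment: str = "counter"                 # e.g. "stove", "oven"

    @dataclass
    class Process(Node):
        action: str = ""                             # e.g. "whisk", "fry"
        duration_min: float = 0.0

    @dataclass
    class Transfer(Node):
        destination: str = ""                        # e.g. "pan", "plate"

    @dataclass
    class Plate(Node):
        presentation: str = ""

    # A recipe becomes a directed graph: each node consumes the outputs of the
    # nodes it lists as inputs, so independent branches can run concurrently.
    crack = Process("crack_eggs", inputs=["eggs"], action="crack")
    whisk = Process("whisk", inputs=["crack_eggs", "milk"], action="whisk")
    fry   = Process("fry", inputs=["whisk"], action="fry", environment="stove", duration_min=3)
    serve = Plate("serve", inputs=["fry"], presentation="on a warmed plate")
    recipe_graph = {n.node_id: n for n in [crack, whisk, fry, serve]}

Even this toy graph exposes the concurrency mentioned in the insights above: any two nodes with no directed path between them can be executed in parallel.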
September 04, 2025
Save to Reading List
arXiv:2509.03780v1 [math
Abstract
Suppose two Bayesian agents each learn a generative model of the same environment. We will assume the two have converged on the predictive distribution, i.e. the distribution over some observables in the environment, but may have different generative models containing different latent variables. Under what conditions can one agent guarantee that their latents are a function of the other agent's latents? We give simple conditions under which such translation is guaranteed to be possible: the natural latent conditions. We also show that, absent further constraints, these are the most general conditions under which translatability is guaranteed. Crucially for practical application, our theorems are robust to approximation error in the natural latent conditions.
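For readers unfamiliar with the term, the natural latent conditions can be stated in the two-observable case (paraphrased from the natural-latents literature rather than quoted from this paper) roughly as: a latent \( \Lambda \) is natural over observables \( X_1, X_2 \) when it both mediates and is redundant,
\[
P(X_1, X_2 \mid \Lambda) = P(X_1 \mid \Lambda)\, P(X_2 \mid \Lambda) \quad \text{(mediation)},
\]
\[
H(\Lambda \mid X_1) \approx 0 \quad \text{and} \quad H(\Lambda \mid X_2) \approx 0 \quad \text{(redundancy)},
\]
with the approximate versions of these conditions controlling the translation error the abstract refers to.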
AI Insights
  • Empirically equivalent systems are defined as distinct generative models that yield identical predictive distributions yet differ in latent structure.
  • The Mediator Determines Redundancy theorem links a mediator’s ability to resolve redundancy with the factorization of the entire system.
  • A Dangly Bit Lemma simplifies entropy calculations by treating dangling variables as independent contributions.
  • The authors provide a ready‑to‑run Python routine that computes expected entropy from a system’s graph representation.
  • Their framework leverages information‑theoretic divergences and graph‑theoretic motifs to compare and merge fine‑tuned language models.
  • The analysis assumes full knowledge of each model’s internal graph, a limitation that invites future work on inference‑based reconstruction.
  • Despite these constraints, the results hint at scalable methods for aligning heterogeneous AI systems across domains.
September 04, 2025
Save to Reading List
Product Categorization
Macquarie University
Abstract
Linearly distributive categories (LDCs) were introduced by Cockett and Seely to provide an alternative categorical semantics for multiplicative linear logic. In contrast to Barr's $\ast$-autonomous categories, LDCs take multiplicative conjunction and disjunction as primitive notions. Thus, an LDC is a category with two monoidal products that interact via linear distributors. A cartesian linearly distributive category (CLDC) is an LDC whose two monoidal products coincide with categorical products and coproducts. Initially, it was believed that CLDCs and distributive categories would coincide, but this was later found not to be the case. Consequently, the study of CLDCs was not pursued further at the time. With recent developments for and applications of LDCs, there has been renewed interest in CLDCs. This paper revisits CLDCs, demonstrating strong structural properties they all satisfy and investigating two key classes of examples: posetal distributive categories and semi-additive categories. Additionally, we re-examine a previously assumed class of CLDCs, the Kleisli categories of exception monads of distributive categories, and show that they are not, in fact, CLDCs.
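For orientation, the linear distributors mentioned above are, in the standard Cockett-Seely formulation, natural transformations of roughly the following shape (writing \( \otimes \) for the multiplicative conjunction and \( \oplus \) for the disjunction; this is background context, not a claim taken from the paper itself):
\[
\delta^{L}_{A,B,C} : A \otimes (B \oplus C) \to (A \otimes B) \oplus C,
\qquad
\delta^{R}_{A,B,C} : (A \oplus B) \otimes C \to A \oplus (B \otimes C).
\]
In a CLDC these become maps relating finite products and coproducts, which is what makes the comparison with ordinary distributive categories natural.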
September 04, 2025
Save to Reading List
Continual Generalized Category Discovery
Research Institute for S
Abstract
Biases in machine learning pose significant challenges, particularly when models amplify disparities that affect disadvantaged groups. Traditional bias mitigation techniques often lead to a {\itshape leveling-down effect}, whereby improving outcomes of disadvantaged groups comes at the expense of reduced performance for advantaged groups. This study introduces Bias Mitigation through Continual Learning (BM-CL), a novel framework that leverages the principles of continual learning to address this trade-off. We postulate that mitigating bias is conceptually similar to domain-incremental continual learning, where the model must adjust to changing fairness conditions, improving outcomes for disadvantaged groups without forgetting the knowledge that benefits advantaged groups. Drawing inspiration from techniques such as Learning without Forgetting and Elastic Weight Consolidation, we reinterpret bias mitigation as a continual learning problem. This perspective allows models to incrementally balance fairness objectives, enhancing outcomes for disadvantaged groups while preserving performance for advantaged groups. Experiments on synthetic and real-world image datasets, characterized by diverse sources of bias, demonstrate that the proposed framework mitigates biases while minimizing the loss of original knowledge. Our approach bridges the fields of fairness and continual learning, offering a promising pathway for developing machine learning systems that are both equitable and effective.
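BM-CL itself is not spelled out in the abstract, but the continual-learning machinery it draws on is standard. As a minimal sketch, assuming an EWC-style quadratic penalty (the authors' exact objective may differ), the regularized loss that discourages forgetting looks like this:

    import numpy as np

    def ewc_regularized_loss(task_loss, params, anchor_params, fisher_diag, lam=1.0):
        """Elastic-Weight-Consolidation-style objective (illustrative sketch, not BM-CL):
        new-task loss plus a quadratic penalty that resists moving parameters that were
        important (high Fisher information) for the previously learned behaviour."""
        penalty = sum(
            np.sum(f * (p - p_old) ** 2)
            for p, p_old, f in zip(params, anchor_params, fisher_diag)
        )
        return task_loss + 0.5 * lam * penalty

    # Toy usage: two parameter tensors, anchors taken from the earlier model.
    params        = [np.array([0.9, 1.2]), np.array([[0.1]])]
    anchor_params = [np.array([1.0, 1.0]), np.array([[0.0]])]
    fisher_diag   = [np.array([2.0, 0.1]), np.array([[1.0]])]
    print(ewc_regularized_loss(0.4, params, anchor_params, fisher_diag))

In the fairness reading proposed by the abstract, the anchored parameters stand in for the knowledge that benefits the advantaged group, while the new-task loss drives improvement for the disadvantaged group.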
AI Insights
  • Just Train Twice (JTT) attains state‑of‑the‑art group robustness without group labels.
  • JTT trains twice: a first ERM pass identifies misclassified examples, which are then upweighted in a second training run.
  • Upweighting this error set balances group performance, boosting worst-group robustness without extra group annotations.
  • JTT doubles training time and memory, limiting scalability to very large datasets.
  • It shows promise in medical imaging and computer vision, where group shifts are common.
  • More work is needed to assess residual biases and extend JTT to complex architectures.
  • See Liu et al. “Just Train Twice” and Sagawa et al. “Distributionally Robust Neural Networks” for deeper insight.
September 01, 2025
Save to Reading List
Ludwig-Maximilian Univers
Abstract
Catastrophic forgetting is a significant challenge in continual learning, in which a model loses prior knowledge when it is fine-tuned on new tasks. This problem is particularly critical for large language models (LLMs) undergoing continual learning, as retaining performance across diverse domains is important for their general utility. In this paper, we explore model growth, a promising strategy that leverages smaller models to expedite and structure the training of larger ones for mitigating the catastrophic forgetting problem. Although growth-based pretraining, particularly via transformer stacking, has shown promise in accelerating convergence, its impact on forgetting remains under-explored. Therefore, we evaluate whether growth-based models can retain previously learned capabilities more effectively across a sequence of fine-tuning tasks involving domain knowledge, reasoning, reading comprehension, and bias. Our findings show that both models -- one trained with growth (Stack LLM) and one without (LLM) -- exhibit improvements in domain knowledge. However, reasoning and reading comprehension degrade over time, indicating signs of catastrophic forgetting. Stack LLM consistently shows less degradation, especially in reading comprehension, suggesting enhanced retention capabilities. Interestingly, in bias evaluation, the baseline LLM becomes progressively more neutral with continued fine-tuning, while Stack LLM maintains a steady bias ratio around 60--61\%. These results indicate that growth-based pretraining may deliver modest improvements in resisting catastrophic forgetting, though trade-offs remain in handling social biases.
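The growth-by-stacking strategy evaluated here can be pictured with a small framework-free sketch (the layer representation and the 2x growth factor are illustrative assumptions, not the paper's training setup): a deeper model is initialized by repeating the trained layer stack of a shallower one.

    import copy

    def grow_by_stacking(layers, growth_factor=2):
        """Depth-growth sketch: initialize a deeper model by repeating the trained
        layer stack of a smaller model; copies are independent so they can diverge
        during continued pretraining or fine-tuning."""
        grown = []
        for _ in range(growth_factor):
            grown.extend(copy.deepcopy(layer) for layer in layers)
        return grown

    # Toy "layers": dicts of parameters standing in for transformer blocks.
    small_model = [{"name": f"block_{i}", "weights": [0.1 * i]} for i in range(4)]
    large_model = grow_by_stacking(small_model)   # 8 blocks initialized from 4
    print([layer["name"] for layer in large_model])

The question the paper tests is whether a model initialized this way (Stack LLM) retains earlier capabilities across sequential fine-tuning better than an identically sized model trained without growth.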
AI Insights
  • CrowS‑pairs is used to track gender and racial bias shifts during continual fine‑tuning.
  • A hybrid of knowledge distillation from a frozen teacher and task‑specific fine‑tuning mitigates forgetting.
  • Benchmarks on GLUE and other NLP tasks show growth‑based models beat vanilla baselines on domain accuracy.
  • The paper cites BERT, RoBERTa, and XLNet to frame its pre‑training strategy.
  • Limitations: narrow task set, unreported compute budgets, and training‑from‑scratch assumption.
  • Future work: test other distillation objectives and multi‑task curricula for stronger retention.
  • Catastrophic Forgetting: loss of prior knowledge when fine‑tuning; Knowledge Distillation: student mimics teacher outputs.
September 01, 2025
Save to Reading List
Knowledge Graphs
University of Hull, Hull
Abstract
Generative AI, such as Large Language Models (LLMs), has achieved impressive progress but still produces hallucinations and unverifiable claims, limiting reliability in sensitive domains. Retrieval-Augmented Generation (RAG) improves accuracy by grounding outputs in external knowledge, especially in domains like healthcare, where precision is vital. However, RAG remains opaque and essentially a black box, heavily dependent on data quality. We developed a method-agnostic, perturbation-based framework that provides token- and component-level interpretability for Graph RAG using SMILE, which we name Knowledge-Graph SMILE (KG-SMILE). By applying controlled perturbations, computing similarities, and training weighted linear surrogates, KG-SMILE identifies the graph entities and relations most influential to generated outputs, thereby making RAG more transparent. We evaluate KG-SMILE using comprehensive attribution metrics, including fidelity, faithfulness, consistency, stability, and accuracy. Our findings show that KG-SMILE produces stable, human-aligned explanations, demonstrating its capacity to balance model effectiveness with interpretability and thereby fostering greater transparency and trust in machine learning technologies.
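The perturb, measure similarity, fit a weighted linear surrogate recipe described above follows the general LIME/SMILE pattern; a minimal sketch of that pattern over graph components (the kernel choice, component names, and toy scorer are assumptions, not KG-SMILE's implementation) is:

    import numpy as np

    def surrogate_attributions(components, model_score, n_samples=500, sigma=0.25, seed=0):
        """Perturbation-based attribution sketch: randomly mask graph components
        (entities/relations), score each perturbed input with the black-box RAG
        pipeline, weight samples by similarity to the unperturbed input, and fit a
        weighted linear surrogate whose coefficients rank component influence."""
        rng = np.random.default_rng(seed)
        d = len(components)
        masks = rng.integers(0, 2, size=(n_samples, d))               # 1 = component kept
        scores = np.array([model_score(m) for m in masks])
        similarity = np.exp(-np.sum(1 - masks, axis=1) / (sigma * d)) # kernel weight
        sw = np.sqrt(similarity)
        X = np.hstack([masks, np.ones((n_samples, 1))])               # intercept column
        coef = np.linalg.lstsq(X * sw[:, None], scores * sw, rcond=None)[0]
        return dict(zip(components, coef[:-1]))

    # Toy black box: output quality depends mostly on the first two components.
    components = ["entity:aspirin", "relation:treats", "entity:headache", "entity:noise"]
    toy_score = lambda m: 0.6 * m[0] + 0.3 * m[1] + 0.05 * m[2] + 0.01 * m[3]
    print(surrogate_attributions(components, toy_score))

The recovered coefficients serve as attribution scores; in KG-SMILE they are assessed with the fidelity, faithfulness, consistency, and stability metrics listed in the abstract.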
AI Insights
  • KG‑SMILE’s perturbation framework is truly model‑agnostic, fitting any graph‑based RAG pipeline.
  • Weighted linear surrogates pinpoint which graph entities and relations drive token‑level outputs.
  • The paper stresses that stability estimation is a key frontier in XAI, and KG‑SMILE’s consistency metric tackles it.
  • For a solid grounding in XAI metrics, “Machine Learning Interpretability: A Survey on Methods and Metrics” is essential.
  • The NeurIPS 2022 Workshop on Explainable AI for NLP showcases the latest graph‑RAG interpretability advances.
  • KG‑SMILE shows that transparency can coexist with high performance, encouraging trust in sensitive domains.
September 03, 2025
Save to Reading List
LASIGE, Faculty of Scienc
Abstract
Knowledge graphs (KGs) are powerful tools for modelling complex, multi-relational data and supporting hypothesis generation, particularly in applications like drug repurposing. However, for predictive methods to gain acceptance as credible scientific tools, they must ensure not only accuracy but also the capacity to offer meaningful scientific explanations. This paper presents REx, a novel approach for generating scientific explanations based on link prediction in knowledge graphs. It employs reward and policy mechanisms that consider desirable properties of scientific explanation to guide a reinforcement learning agent in the identification of explanatory paths within a KG. The approach further enriches explanatory paths with domain-specific ontologies, ensuring that the explanations are both insightful and grounded in established biomedical knowledge. We evaluate our approach in drug repurposing using three popular knowledge graph benchmarks. The results clearly demonstrate its ability to generate explanations that validate predictive insights against biomedical knowledge and that outperform state-of-the-art approaches in predictive performance, establishing REx as a relevant contribution to advancing AI-driven scientific discovery.
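The abstract does not give the reward itself, so the following is purely a hypothetical sketch of how a path-level reward for an RL explanation agent might combine prediction support, ontology grounding, and brevity (all weights and helper arguments are invented for illustration):

    def path_reward(path, reaches_target, ontology_grounded_fraction,
                    max_len=4, w_support=1.0, w_grounding=0.5, w_brevity=0.2):
        """Hypothetical reward for a candidate explanatory path in a KG: favour
        paths that support the predicted link, are grounded in domain ontologies,
        and stay short enough to read as an explanation."""
        support = w_support if reaches_target else 0.0
        grounding = w_grounding * ontology_grounded_fraction
        brevity = w_brevity * max(0.0, (max_len - len(path)) / max_len)
        return support + grounding + brevity

    # Toy path: drug --inhibits--> protein --associated_with--> disease
    path = [("drugA", "inhibits", "proteinX"), ("proteinX", "associated_with", "diseaseY")]
    print(path_reward(path, reaches_target=True, ontology_grounded_fraction=0.5))

A policy trained against a reward of this general shape would be pushed toward the short, ontology-grounded explanatory paths the paper argues are needed for credible drug-repurposing predictions.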
September 02, 2025
Save to Reading List

Interests not found

We did not find any papers that match the interests below. Try other terms, and consider whether the content exists on arxiv.org.
  • Taxonomy of Products
  • MECE (Mutually Exclusive, Collectively Exhaustive)
  • Knowledge Management
You can edit or add more interests any time.

Unsubscribe from these updates