Hi j34nc4rl0+categorization,

Here are our personalized paper recommendations for you, sorted by relevance.
Product Categorization
Nova School of Business and Economics
Abstract
This study addresses critical industrial challenges in e-commerce product categorization, namely platform heterogeneity and the structural limitations of existing taxonomies, by developing and deploying a multimodal hierarchical classification framework. Using a dataset of 271,700 products from 40 international fashion e-commerce platforms, we integrate textual features (RoBERTa), visual features (ViT), and joint vision-language representations (CLIP). We investigate fusion strategies, including early, late, and attention-based fusion within a hierarchical architecture enhanced by dynamic masking to ensure taxonomic consistency. Results show that CLIP embeddings combined via an MLP-based late-fusion strategy achieve the highest hierarchical F1 (98.59%), outperforming unimodal baselines. To address shallow or inconsistent categories, we further introduce a self-supervised "product recategorization" pipeline using SimCLR, UMAP, and cascade clustering, which discovered new, fine-grained categories (e.g., subtypes of "Shoes") with cluster purities above 86%. Cross-platform experiments reveal a deployment-relevant trade-off: complex late-fusion methods maximize accuracy with diverse training data, while simpler early-fusion methods generalize more effectively to unseen platforms. Finally, we demonstrate the framework's industrial scalability through deployment in EURWEB's commercial transaction intelligence platform via a two-stage inference pipeline, combining a lightweight RoBERTa stage with a GPU-accelerated multimodal stage to balance cost and accuracy.
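For readers who want a concrete picture of the late-fusion idea above, here is a minimal PyTorch sketch of an MLP fusion head with parent-conditioned ("dynamic") masking over a two-level taxonomy. All dimensions, names, and the child_mask lookup are illustrative assumptions, not the paper's implementation.

    import torch
    import torch.nn as nn

    class LateFusionHierarchicalHead(nn.Module):
        # Hypothetical late-fusion head: concatenate CLIP text and image embeddings,
        # pass them through an MLP, and mask child logits by the predicted parent.
        def __init__(self, emb_dim=512, n_parents=10, n_children=50, child_mask=None):
            super().__init__()
            self.fuse = nn.Sequential(nn.Linear(2 * emb_dim, 512), nn.ReLU(), nn.Dropout(0.2))
            self.parent_head = nn.Linear(512, n_parents)
            self.child_head = nn.Linear(512, n_children)
            # child_mask[p, c] = 1 if child category c is a valid subcategory of parent p
            if child_mask is None:
                child_mask = torch.ones(n_parents, n_children)
            self.register_buffer("child_mask", child_mask)

        def forward(self, text_emb, image_emb):
            h = self.fuse(torch.cat([text_emb, image_emb], dim=-1))
            parent_logits = self.parent_head(h)
            parent_pred = parent_logits.argmax(dim=-1)
            # dynamic masking: keep only children consistent with the predicted parent
            mask = self.child_mask[parent_pred]
            child_logits = self.child_head(h).masked_fill(mask == 0, float("-inf"))
            return parent_logits, child_logits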
Abstract
The demand for text classification is growing significantly in web search, data mining, web ranking, recommendation systems, and many other areas of information technology. This paper illustrates the text classification process on different datasets using several standard supervised machine learning techniques. Text documents can be classified with various kinds of classifiers; in supervised classification, labeled text documents are used to train them. This paper applies these classifiers to different kinds of labeled documents and measures their accuracy. An Artificial Neural Network (ANN) model using a Back Propagation Network (BPN) is used alongside several other models to create an independent platform for the labeled, supervised text classification process. An existing benchmark approach is used to analyze classification performance on labeled documents. Experimental analysis on real data reveals which model performs best in terms of classification accuracy.
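As a generic illustration of the kind of supervised text-classification pipeline the abstract describes (bag-of-words features feeding a backpropagation-trained neural network), a minimal scikit-learn sketch might look as follows; the dataset and model choices are placeholders, not the paper's setup.

    # Illustrative pipeline only: TF-IDF features feeding a backpropagation-trained
    # MLP classifier on a public dataset; not the paper's datasets or models.
    from sklearn.datasets import fetch_20newsgroups
    from sklearn.feature_extraction.text import TfidfVectorizer
    from sklearn.metrics import accuracy_score
    from sklearn.neural_network import MLPClassifier
    from sklearn.pipeline import make_pipeline

    cats = ["sci.space", "rec.autos"]
    train = fetch_20newsgroups(subset="train", categories=cats)
    test = fetch_20newsgroups(subset="test", categories=cats)

    clf = make_pipeline(TfidfVectorizer(max_features=20000),
                        MLPClassifier(hidden_layer_sizes=(128,), max_iter=50))
    clf.fit(train.data, train.target)
    print("accuracy:", accuracy_score(test.target, clf.predict(test.data)))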
Continual Generalized Category Discovery
Sichuan University
Abstract
Balancing sensitivity to new tasks and stability for retaining past knowledge is crucial in continual learning (CL). Recently, sharpness-aware minimization has proven effective in transfer learning and has also been adopted in CL to improve memory retention and learning efficiency. However, relying on zeroth-order sharpness alone may favor sharper minima over flatter ones in certain settings, leading to less robust and potentially suboptimal solutions. In this paper, we propose Continual Flatness (C-Flat), a method that promotes flatter loss landscapes tailored for CL. C-Flat offers plug-and-play compatibility, enabling easy integration with minimal modifications to the code pipeline. Besides, we present a general framework that integrates C-Flat into all major CL paradigms and conduct comprehensive comparisons with loss-minima optimizers and flat-minima-based CL methods. Our results show that C-Flat consistently improves performance across a wide range of settings. In addition, we introduce C-Flat++, an efficient yet effective framework that leverages selective flatness-driven promotion, significantly reducing the update cost required by C-Flat. Extensive experiments across multiple CL methods, datasets, and scenarios demonstrate the effectiveness and efficiency of our proposed approaches. Code is available at https://github.com/WanNaa/C-Flat.
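The abstract builds on sharpness-aware minimization; the following is a generic SAM-style update step, shown only to illustrate the flat-minima idea, not the C-Flat algorithm itself.

    import torch

    def sam_step(model, loss_fn, batch, base_opt, rho=0.05):
        # Generic sharpness-aware minimization step (illustrative, not C-Flat).
        x, y = batch
        loss_fn(model(x), y).backward()              # 1) gradient at the current weights
        with torch.no_grad():
            grad_norm = torch.norm(torch.stack(
                [p.grad.norm() for p in model.parameters() if p.grad is not None]))
            eps = []
            for p in model.parameters():
                if p.grad is None:
                    eps.append(None)
                    continue
                e = rho * p.grad / (grad_norm + 1e-12)   # 2) climb toward the sharpest direction
                p.add_(e)
                eps.append(e)
        model.zero_grad()
        loss_fn(model(x), y).backward()              # 3) gradient at the perturbed point
        with torch.no_grad():
            for p, e in zip(model.parameters(), eps):
                if e is not None:
                    p.sub_(e)                        # 4) restore the original weights
        base_opt.step()                              # 5) descend using the flat-aware gradient
        base_opt.zero_grad()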
Center for Nanophase Materials Sciences, Oak Ridge National Laboratory
Abstract
Autonomous experiments (AEs) are transforming how scientific research is conducted by integrating artificial intelligence with automated experimental platforms. Current AEs primarily focus on the optimization of a predefined target; while accelerating this goal, such an approach limits the discovery of unexpected or unknown physical phenomena. Here, we introduce a novel framework, INS2ANE (Integrated Novelty Score-Strategic Autonomous Non-Smooth Exploration), to enhance the discovery of novel phenomena in autonomous experimentation. Our method integrates two key components: (1) a novelty scoring system that evaluates the uniqueness of experimental results, and (2) a strategic sampling mechanism that promotes exploration of under-sampled regions even if they appear less promising by conventional criteria. We validate this approach on a pre-acquired dataset with a known ground truth comprising image-spectral pairs. We further implement the process on autonomous scanning probe microscopy experiments. INS2ANE significantly increases the diversity of explored phenomena in comparison to conventional optimization routines, enhancing the likelihood of discovering previously unobserved phenomena. These results demonstrate the potential for AEs to enhance the depth of scientific discovery; in combination with the efficiency AEs already provide, this approach promises to accelerate scientific research by simultaneously navigating complex experimental spaces to uncover new phenomena.
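A toy sketch of the two ingredients described above, a novelty score plus an exploration bonus for under-sampled regions, could look like the following; all names and weights are illustrative assumptions, not INS2ANE code.

    import numpy as np

    def select_next(candidates_xy, predicted_spectra, measured_xy, measured_spectra,
                    novelty_weight=1.0, exploration_weight=0.5):
        # novelty: distance of each candidate's predicted spectrum to anything measured so far
        d_spec = np.linalg.norm(predicted_spectra[:, None, :] - measured_spectra[None, :, :], axis=-1)
        novelty = d_spec.min(axis=1)
        # exploration bonus: distance of each candidate location to the nearest measured location
        d_xy = np.linalg.norm(candidates_xy[:, None, :] - measured_xy[None, :, :], axis=-1)
        sparsity = d_xy.min(axis=1)
        score = novelty_weight * novelty + exploration_weight * sparsity
        return int(np.argmax(score))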
MECE (Mutually Exclusive, Collectively Exhaustive) / Knowledge Management
Jagiellonian University
Abstract
While transfer learning is an advantageous strategy, it overlooks the opportunity to leverage knowledge from numerous available models online. Addressing this multi-source transfer learning problem is a promising path to boost adaptability and cut re-training costs. However, existing approaches are inherently coarse-grained, lacking the necessary precision for granular knowledge extraction and the aggregation efficiency required to fuse knowledge from either a large number of source models or those with high parameter counts. We address these limitations by leveraging Singular Value Decomposition (SVD) to first decompose each source model into its elementary, rank-one components. A subsequent aggregation stage then selects only the most salient components from all sources, thereby overcoming the previous efficiency and precision limitations. To best preserve and leverage the synthesized knowledge base, our method adapts to the target task by fine-tuning only the principal singular values of the merged matrix. In essence, this process only recalibrates the importance of top SVD components. The proposed framework allows for efficient transfer learning, is robust to perturbations both at the input level and in the parameter space (e.g., noisy or pruned sources), and scales well computationally.
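A minimal sketch of the recipe described above, under the simplifying assumption that each source is a single weight matrix of the same shape: decompose every source with SVD, pool the rank-one components, keep the globally most salient ones, and expose only the singular values as trainable parameters. Illustrative only, not the authors' implementation.

    import torch

    def merge_sources(weight_mats, k=32):
        # Decompose each source weight matrix into rank-one SVD components,
        # pool them across sources, and keep the k globally most salient ones.
        comps = []
        for W in weight_mats:
            U, S, Vh = torch.linalg.svd(W, full_matrices=False)
            comps.extend((S[i], U[:, i], Vh[i, :]) for i in range(S.shape[0]))
        comps.sort(key=lambda t: float(t[0]), reverse=True)
        comps = comps[:k]
        U = torch.stack([u for _, u, _ in comps], dim=1)     # (d_out, k)
        V = torch.stack([v for _, _, v in comps], dim=0)     # (k, d_in)
        # only the singular values are fine-tuned on the target task
        sigma = torch.nn.Parameter(torch.stack([s for s, _, _ in comps]).detach().clone())
        return U, sigma, V   # merged weight = U @ torch.diag(sigma) @ V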
State Key Laboratory of AI Safety, Institute of Computing Technology, Chinese Academy of Sciences
Abstract
Large vision-language models (LVLMs) demonstrate strong visual question answering (VQA) capabilities but are shown to hallucinate. A reliable model should perceive its knowledge boundaries: knowing what it knows and what it does not. This paper investigates LVLMs' perception of their knowledge boundaries by evaluating three types of confidence signals: probabilistic confidence, answer consistency-based confidence, and verbalized confidence. Experiments on three LVLMs across three VQA datasets show that, although LVLMs possess a reasonable perception level, there is substantial room for improvement. Among the three signals, probabilistic and consistency-based confidence are more reliable indicators, while verbalized confidence often leads to overconfidence. To enhance LVLMs' perception, we adapt several established confidence calibration methods from Large Language Models (LLMs) and propose three effective methods. Additionally, we compare LVLMs with their LLM counterparts, finding that jointly processing visual and textual inputs decreases question-answering performance but reduces confidence, resulting in an improved perception level compared to LLMs.
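Two of the confidence signals mentioned above can be sketched against generic inputs (token log-probabilities and sampled answers) rather than any particular LVLM API; the functions below are illustrative only.

    import math
    from collections import Counter

    def probabilistic_confidence(token_logprobs):
        # average token probability of the greedy answer
        return math.exp(sum(token_logprobs) / len(token_logprobs))

    def consistency_confidence(sampled_answers):
        # fraction of sampled answers that agree with the most common answer
        counts = Counter(a.strip().lower() for a in sampled_answers)
        return counts.most_common(1)[0][1] / len(sampled_answers)

    print(probabilistic_confidence([-0.1, -0.3, -0.2]))          # ~0.82
    print(consistency_confidence(["Paris", "paris", "Lyon"]))    # ~0.67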
Ontology for Products
Department of Artificial Intelligence in Biomedical Engineering
Abstract
Retrieval-augmented learning based on radiology reports has emerged as a promising direction to improve performance on long-tail medical imaging tasks, such as rare disease detection in chest X-rays. Most existing methods rely on comparing high-dimensional text embeddings from models like CLIP or CXR-BERT, which are often difficult to interpret, computationally expensive, and not well-aligned with the structured nature of medical knowledge. We propose a novel, ontology-driven alternative for comparing radiology report texts based on clinically grounded concepts from the Unified Medical Language System (UMLS). Our method extracts standardised medical entities from free-text reports using an enhanced pipeline built on RadGraph-XL and SapBERT. These entities are linked to UMLS concepts (CUIs), enabling a transparent, interpretable set-based representation of each report. We then define a task-adaptive similarity measure based on a modified and weighted version of the Tversky Index that accounts for synonymy, negation, and hierarchical relationships between medical entities. This allows efficient and semantically meaningful similarity comparisons between reports. We demonstrate that our approach outperforms state-of-the-art embedding-based retrieval methods in a radiograph classification task on MIMIC-CXR, particularly in long-tail settings. Additionally, we use our pipeline to generate ontology-backed disease labels for MIMIC-CXR, offering a valuable new resource for downstream learning tasks. Our work provides more explainable, reliable, and task-specific retrieval strategies in clinical AI systems, especially when interpretability and domain knowledge integration are essential. Our code is available at https://github.com/Felix-012/ontology-concept-distillation
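A rough sketch of a weighted Tversky-style similarity between two reports represented as sets of (CUI, negated) pairs follows; the weights, alpha/beta values, and the way negation is encoded are assumptions for illustration, not the paper's exact formulation (synonymy and hierarchy handling are omitted).

    def weighted_tversky(concepts_a, concepts_b, weights, alpha=0.5, beta=0.5):
        # concepts are sets of (CUI, negated) pairs; higher weight = more important concept
        a, b = set(concepts_a), set(concepts_b)
        w = lambda s: sum(weights.get(cui, 1.0) for cui, _ in s)
        common, only_a, only_b = w(a & b), w(a - b), w(b - a)
        denom = common + alpha * only_a + beta * only_b
        return common / denom if denom > 0 else 0.0

    # Example with placeholder CUIs: the first concept is shared, the second is negated.
    report1 = {("CUI_0001", False), ("CUI_0002", True)}
    report2 = {("CUI_0001", False)}
    print(weighted_tversky(report1, report2, weights={"CUI_0001": 2.0}))   # 0.8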
Abstract
Medical ontology graphs map external knowledge to medical codes in electronic health records via structured relationships. By leveraging domain-approved connections (e.g., parent-child), predictive models can generate richer medical concept representations by incorporating contextual information from related concepts. However, existing literature primarily focuses on incorporating domain knowledge from a single ontology system, or from multiple ontology systems (e.g., diseases, drugs, and procedures) in isolation, without integrating them into a unified learning structure. Consequently, concept representation learning often remains limited to intra-ontology relationships, overlooking cross-ontology connections. In this paper, we propose LINKO, a large language model (LLM)-augmented integrative ontology learning framework that leverages multiple ontology graphs simultaneously by enabling dual-axis knowledge propagation both within and across heterogeneous ontology systems to enhance medical concept representation learning. Specifically, LINKO first employs LLMs to provide a graph-retrieval-augmented initialization for ontology concept embedding, through an engineered prompt that includes concept descriptions, and is further augmented with ontology context. Second, our method jointly learns the medical concepts in diverse ontology graphs by performing knowledge propagation in two axes: (1) intra-ontology vertical propagation across hierarchical ontology levels and (2) inter-ontology horizontal propagation within every level in parallel. Last, through extensive experiments on two public datasets, we validate the superior performance of LINKO over state-of-the-art baselines. As a plug-in encoder compatible with existing EHR predictive models, LINKO further demonstrates enhanced robustness in scenarios involving limited data availability and rare disease prediction.
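A very rough sketch of the dual-axis idea, with each concept embedding updated from intra-ontology neighbours (parents/children) and from cross-ontology neighbours at the same level; purely illustrative, not LINKO's architecture.

    import torch

    def dual_axis_step(emb, intra_adj, inter_adj, alpha=0.5):
        # emb: (n_concepts, d) embeddings; adjacency matrices are row-normalised (n, n)
        vertical = intra_adj @ emb     # propagate within each ontology hierarchy
        horizontal = inter_adj @ emb   # propagate across ontologies at the same level
        return emb + alpha * (vertical + horizontal) / 2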
Graphs for Products
Department of Chemical Engineering and Applied Chemistry, University of Toronto
Abstract
Graphs are central to the chemical sciences, providing a natural language to describe molecules, proteins, reactions, and industrial processes. They capture interactions and structures that underpin materials, biology, and medicine. This primer, Graph Data Modeling: Molecules, Proteins, & Chemical Processes, introduces graphs as mathematical objects in chemistry and shows how learning algorithms (particularly graph neural networks) can operate on them. We outline the foundations of graph design, key prediction tasks, representative examples across chemical sciences, and the role of machine learning in graph-based modeling. Together, these concepts prepare readers to apply graph methods to the next generation of chemical discovery.
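As a toy illustration of "molecule as graph", the snippet below hand-builds ethanol's heavy atoms as nodes and its bonds as edges, then runs one round of neighbourhood aggregation over one-hot atom features; a real pipeline would use a cheminformatics toolkit instead.

    import numpy as np

    atoms = ["C", "C", "O"]            # ethanol heavy atoms
    bonds = [(0, 1), (1, 2)]           # C-C and C-O bonds
    n = len(atoms)

    A = np.zeros((n, n))
    for i, j in bonds:
        A[i, j] = A[j, i] = 1.0        # undirected adjacency matrix

    elements = {"C": 0, "O": 1}
    X = np.eye(len(elements))[[elements[a] for a in atoms]]   # one-hot atom features

    # one message-passing step: each atom aggregates its own and its neighbours' features
    H = (A + np.eye(n)) @ X
    print(H)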
Department of Computer Science, KU Leuven Campus Kulak-Kortrijk, 8500 Kortrijk, Belgium
Abstract
Computers and algorithms play an ever-increasing role in obtaining new results in graph theory. In this survey, we present a broad range of techniques used in computer-assisted graph theory, including the exhaustive generation of all pairwise non-isomorphic graphs within a given class, the use of searchable databases containing graphs and invariants as well as other established and emerging algorithmic paradigms. We cover approaches based on mixed integer linear programming, semidefinite programming, dynamic programming, SAT solving, metaheuristics and machine learning. The techniques are illustrated with numerous detailed results covering several important subareas of graph theory such as extremal graph theory, graph coloring, structural graph theory, spectral graph theory, regular graphs, topological graph theory, special sets in graphs, algebraic graph theory and chemical graph theory. We also present some smaller new results that demonstrate how readily a computer-assisted graph theory approach can be applied once the appropriate tools have been developed.
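A small demonstration of the exhaustive-generation idea mentioned in the survey: enumerate all graphs on four labelled vertices with networkx and keep one representative per isomorphism class (the classical count is 11). This brute-force filter is for illustration; dedicated generators are used in practice.

    from itertools import combinations
    import networkx as nx

    nodes = range(4)
    all_edges = list(combinations(nodes, 2))
    reps = []                               # one representative per isomorphism class
    for r in range(len(all_edges) + 1):
        for edges in combinations(all_edges, r):
            G = nx.Graph()
            G.add_nodes_from(nodes)
            G.add_edges_from(edges)
            if not any(nx.is_isomorphic(G, H) for H in reps):
                reps.append(G)
    print(len(reps), "pairwise non-isomorphic graphs on 4 vertices")   # prints 11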
Knowledge Graphs
Abstract
Community detection in citation networks offers a powerful approach to understanding knowledge flow and identifying core research areas within academic disciplines. This study focuses on knowledge source discovery in statistics by analyzing a weighted bipartite journal citation network constructed from 16,119 articles published in eight core journals from 2001 to 2023. To capture the inherent asymmetry of citation behavior, we explicitly preserve the bipartite structure of the network, distinguishing between citing and cited journals. For this task, we propose Bi-SCORE (Bipartite Spectral Clustering on Ratios-of-Eigenvectors), a computationally efficient and initialization-free spectral method designed for community detection in weighted bipartite networks with degree heterogeneity. We establish rigorous theoretical guarantees for the performance of Bi-SCORE under the weighted bipartite degree-corrected stochastic block model. Furthermore, simulation studies demonstrate its robustness across varying levels of sparsity and degree heterogeneity, where it outperforms existing methods. When applied to the real-world citation network, Bi-SCORE uncovers a six-community structure corresponding to key research areas in statistics, including applied statistics, methodology, theory, computation, and econometrics. These findings provide valuable insights into the intricate citation patterns and knowledge flow among statistical journals.
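A generic SCORE-style spectral step for a weighted bipartite network, shown only to illustrate the ratios-of-singular-vectors idea; it is not the authors' Bi-SCORE implementation and omits their theoretical refinements.

    import numpy as np
    from sklearn.cluster import KMeans

    def bipartite_score(W, k):
        # W: (n_citing, n_cited) weighted citation matrix; k: number of communities
        U, S, Vt = np.linalg.svd(W, full_matrices=False)
        # entrywise ratios against the leading singular vector damp degree heterogeneity
        row_ratios = U[:, 1:k] / U[:, [0]]
        col_ratios = Vt[1:k, :].T / Vt[[0], :].T
        row_labels = KMeans(n_clusters=k, n_init=10).fit_predict(row_ratios)
        col_labels = KMeans(n_clusters=k, n_init=10).fit_predict(col_ratios)
        return row_labels, col_labels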
Abstract
Biomedical knowledge graphs (KGs) are widely used across research and translational settings, yet their design decisions and implementation are often opaque. Unlike ontologies that more frequently adhere to established creation principles, biomedical KGs lack consistent practices for construction, documentation, and dissemination. To address this gap, we introduce a set of evaluation criteria grounded in widely accepted data standards and principles from related fields. We apply these criteria to 16 biomedical KGs, revealing that even those that appear to align with best practices often obscure essential information required for external reuse. Moreover, biomedical KGs, despite pursuing similar goals and ingesting the same sources in some cases, display substantial variation in models, source integration, and terminology for node types. Reaping the potential benefits of knowledge graphs for biomedical research while reducing wasted effort requires community-wide adoption of shared criteria and maturation of standards such as BioLink and KGX. Such improvements in transparency and standardization are essential for creating long-term reusability, improving comparability across resources, and enhancing the overall utility of KGs within biomedicine.

Interests not found

We did not find any papers matching the interests below. Try other terms, and consider whether the content exists on arxiv.org.
  • Taxonomy of Products
You can edit or add more interests any time.

Unsubscribe from these updates