Papers from 6 to 10 October 2025

Here are your personalized paper recommendations, sorted by relevance
Functional Programming
University of Bath
Abstract
The Functional Machine Calculus (Heijltjes 2022) is a new approach to unifying the imperative and functional programming paradigms. It extends the lambda-calculus, preserving the key features of confluent reduction and typed termination, to embed computational effects, evaluation strategies, and control flow operations. The first instalment modelled sequential higher-order computation with global store, input/output, probabilities, and non-determinism, and embedded both the call-by-name and call-by-value lambda-calculus, as well as Moggi's computational metalanguage and Levy's call-by-push-value. The present paper extends the calculus from sequential to branching and looping control flow. This allows the faithful embedding of a minimal but complete imperative language, including conditionals, exception handling, and iteration, as well as constants and algebraic data types. The calculus is defined through a simple operational semantics, extending the (simplified) Krivine machine for the lambda-calculus with multiple operand stacks to model effects and a continuation stack to model sequential, branching, and looping computation. It features a confluent reduction relation and a system of simple types that guarantees termination of the machine and strong normalization of reduction (in the absence of iteration). These properties carry over to the embedded imperative language, providing a unified functional-imperative model of computation that supports simple types, a direct and intuitive operational semantics, and a confluent reduction semantics.
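For readers new to the machine this builds on, below is a minimal Python sketch of the simplified Krivine machine for the pure lambda-calculus, the sequential core that the FMC extends with multiple operand stacks (for effects) and a continuation stack (for branching and looping). The term encoding and names are illustrative, not the paper's.

```python
from dataclasses import dataclass

# Illustrative term syntax for the pure lambda-calculus.
@dataclass
class Var:
    name: str

@dataclass
class Lam:
    param: str
    body: object

@dataclass
class App:
    fun: object
    arg: object

def krivine(term, env=None, stack=None):
    """Simplified Krivine machine: states are (term, environment, operand stack)."""
    env = env or {}
    stack = stack or []
    while True:
        if isinstance(term, App):              # push: the operand becomes a closure
            stack.append((term.arg, env))
            term = term.fun
        elif isinstance(term, Lam) and stack:  # pop: bind the top closure
            env = {**env, term.param: stack.pop()}
            term = term.body
        elif isinstance(term, Var):            # enter the closure bound to the name
            term, env = env[term.name]
        else:                                  # abstraction, empty stack: finished
            return term

# (\x. x) (\y. y) reduces to \y. y
print(krivine(App(Lam("x", Var("x")), Lam("y", Var("y")))))
```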
Hangzhou Institute for
Abstract
Constraint programming (CP) is a crucial technology for solving real-world constraint optimization problems (COPs), with the advantages of rich modeling semantics and high solving efficiency. Using large language models (LLMs) to generate formal modeling automatically for COPs is becoming a promising approach, which aims to build trustworthy neuro-symbolic AI with the help of symbolic solvers. However, CP has received less attention compared to works based on operations research (OR) models. We introduce ConstraintLLM, the first LLM specifically designed for CP modeling, which is trained on an open-source LLM with multi-instruction supervised fine-tuning. We propose the Constraint-Aware Retrieval Module (CARM) to increase the in-context learning capabilities, which is integrated in a Tree-of-Thoughts (ToT) framework with guided self-correction mechanism. Moreover, we construct and release IndusCP, the first industrial-level benchmark for CP modeling, which contains 140 challenging tasks from various domains. Our experiments demonstrate that ConstraintLLM achieves state-of-the-art solving accuracy across multiple benchmarks and outperforms the baselines by 2x on the new IndusCP benchmark. Code and data are available at: https://github.com/william4s/ConstraintLLM.
AI Insights
  • IndusCP has 140 industrial tasks in logistics, scheduling, and resource allocation, each a node‑to‑ring assignment with capacity limits.
  • A connections list forces certain nodes to one ring while the objective is minimize(Sum(x)).
  • pycsp3 implements variables x[i][j] with constraints Sum(x[i]) ≤ r and LexIncreasing(x); a sketch follows this list.
  • Tree‑of‑Thoughts adds a tree search to ConstraintLLM, boosting quality but raising worst‑case runtime.
  • Guided self‑correction cuts error propagation, yet ToT still inflates computational cost.
  • Resources: Czarnecki & van Beek’s “Constraint Programming” and sites pycsp.org, csplib.org.
  • Limitation: rigid input format assumption limits portability beyond IndusCP.
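A minimal pycsp3 sketch of the node-to-ring pattern described above; the instance sizes, the connections list, and the exact reading of the capacity constraint are illustrative assumptions, not IndusCP data.

```python
from pycsp3 import *

n_nodes, n_rings, r = 8, 3, 4      # illustrative sizes, not benchmark data
connections = [(0, 1), (5, 2)]     # hypothetical: pin node 0 to ring 1, node 5 to ring 2

# x[i][j] == 1 iff node i is assigned to ring j
x = VarArray(size=[n_nodes, n_rings], dom={0, 1})

satisfy(
    # the capacity pattern quoted above: Sum(x[i]) <= r
    [Sum(x[i]) <= r for i in range(n_nodes)],
    # the connections list forces certain nodes onto a given ring
    [x[i][j] == 1 for (i, j) in connections],
    # symmetry breaking over the assignment matrix
    LexIncreasing(x),
)

minimize(Sum(x))
```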
Object Oriented Programming
Abstract
We develop foundations for oriented category theory, an extension of $(\infty,\infty)$-category theory obtained by systematic usage of the Gray tensor product, in order to study lax phenomena in higher category theory. As categorical dimension increases, classical category-theoretic concepts generally prove too rigid or fully break down and must be replaced by oriented versions, which allow more flexible notions of naturality and coherence. Oriented category theory provides a framework to address these issues. The main objects of study are oriented, and their conjugate antioriented, categories, which are deformations of $(\infty,\infty)$-categories where the various compositions only commute up to a coherent (anti)oriented interchange law. We give a geometric description of (anti)oriented categories as sheaves on a deformation of the simplex category $\Delta$ in which the linear graphs are weighted by (anti)oriented cubes. To demonstrate the utility of our theory, we show that the categorical analogues of fundamental constructions in homotopy theory, such as cylinder and path objects, join and slice, and suspension and loops, are not functors of $(\infty, \infty)$-categories, but only of (anti)oriented categories, generalizing work of Ara, Guetta, and Maltsiniotis in the strict setting. As a main result we construct an embedding of the theory of $(\infty,\infty)$-categories into the theory of (anti)oriented categories and characterize the image, which we call (anti)oriented spaces. We provide an algebraic description of (anti)oriented spaces as (anti)oriented categories satisfying a strict (anti)oriented interchange law and a geometric description as sheaves on suitable categories of (anti)oriented polytopes, generalizing Grothendieck's philosophy of test categories to higher categorical dimension and refining Campion's work on lax cubes and suitable sites.
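To make the "coherent oriented interchange" concrete at low dimension: in a lax Gray setting, the two ways of composing 2-cells α: f ⇒ f′ and β: g ⇒ g′ horizontally no longer agree on the nose but are related by a comparison 3-cell, and which way that cell points is what separates the oriented from the antioriented deformation. The display below (writing gα and βf for whiskerings) is a standard low-dimensional instance given for orientation, not the paper's general definition.

```latex
% Both sides are 2-cells  g f \Rightarrow g' f'; the comparison 3-cell
% replaces the strict interchange equality.
(\beta f') \circ (g \alpha) \;\Rrightarrow\; (g' \alpha) \circ (\beta f)
```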
Squirrel Ai Learning
Abstract
Large Reasoning Models (LRMs) often suffer from the "over-thinking" problem, generating unnecessarily long reasoning on simple tasks. Some strategies have been proposed to mitigate this issue, such as length penalties or routing mechanisms, but they are typically heuristic and task-specific, lacking a general framework for adaptive reasoning. In this paper, we present ARM2, a unified model that adaptively balances reasoning performance and efficiency across multiple formats through a reinforcement learning framework augmented with length-aware optimization. Beyond conventional natural language inference, ARM2 integrates vision understanding, extending its applicability to multimodal settings. Moreover, ARM2 integrates executable code into reasoning, enabling substantial reductions in token cost while preserving task performance compared to long CoT. Experiments demonstrate that ARM2 achieves performance on par with traditional reasoning models trained with GRPO, while reducing token usage by over 70% on average. We further conduct extensive analyses to validate the effectiveness of ARM2 and the soundness of its design.
AI Insights
  • ARM2 uses a lightweight RL policy to choose the best reasoning format (CoT, code, or visual inference) based on input complexity; a toy reward sketch follows this entry.
  • It beats state‑of‑the‑art on CSQA, GSM8K, GEO3K while cutting token usage by over 70%.
  • The paper surveys multimodal reasoning, placing ARM2 among vision‑language and code‑augmented models.
  • Authors admit very complex reasoning still strains ARM2, calling for more robust, adaptable architectures.
  • ARM2 uses LoRA for efficient tuning and the VERL framework for RL training.
  • Recommended reading: “Multimodal Reasoning: A Survey” plus VERL, LoRA, CSQA, GSM8K, GEO3K papers.
  • The adaptive policy adds computational overhead, a trade‑off the authors suggest can be cut with smarter RL.
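As a toy illustration of the length-aware optimization the abstract mentions, here is one plausible reward shape; the functional form, coefficient, and token budget are our assumptions, not ARM2's actual objective.

```python
# Hypothetical length-aware reward: correctness minus a token-cost term.
def length_aware_reward(correct: bool, n_tokens: int,
                        lam: float = 1e-4, budget: int = 2048) -> float:
    """Reward correct answers, but discount tokens spent up to a budget."""
    accuracy = 1.0 if correct else 0.0
    return accuracy - lam * min(n_tokens, budget)

# A short correct answer scores higher than a long correct one:
print(length_aware_reward(True, 120))    # 0.988
print(length_aware_reward(True, 1800))   # 0.82
```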
Programming Language Design
Abstract
All widely used and useful programming languages share a common problem: they restrict entry on the basis of knowledge of the English language. The lack of knowledge of English poses a major hurdle to many newcomers who do not have the resources, in terms of time and money, to learn English. Studies show that people learn better in their own language. Therefore, we propose a language transpiler built on top of the Python programming language, called UniversalPython, which allows programmers to write Python in their own human language. We demonstrate the ability to create an "Urdu Python" with this transpiler. In the future, we aim to scale the language to encapsulate more human languages and increase the availability of programming. The source code for this transpiler is open source and available at https://github.com/universalpython/universalpython
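To make the transpiler idea concrete, here is a minimal keyword-remapping pass over Python's own token stream; the Urdu spellings and the mapping table are illustrative assumptions, and the real mappings live in the linked repository.

```python
import io
import tokenize

# Hypothetical keyword map; the actual UniversalPython tables are in the repo.
URDU_TO_PYTHON = {
    "agar": "if",
    "warna": "else",
    "chhapo": "print",
}

def transpile(source: str) -> str:
    """Retokenize the source, mapping localized names onto Python keywords."""
    out = []
    for tok in tokenize.generate_tokens(io.StringIO(source).readline):
        text = tok.string
        if tok.type == tokenize.NAME:
            text = URDU_TO_PYTHON.get(text, text)
        out.append((tok.type, text))   # 2-tuples let untokenize re-space safely
    return tokenize.untokenize(out)

code = 'agar True:\n    chhapo("salaam")\n'
exec(transpile(code))  # prints: salaam
```

Working on the token stream rather than raw text keeps string literals, and identifiers that merely contain a mapped word, untouched.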
Notre Dame University, S
Abstract
We present a system that uses LLMs as a tool in the development of Constructed Languages. The system is modular in that one first creates a target phonology for the language using an agentic approach that refines its output at each step with commentary feedback on its previous attempt. Next, a set of sentences is 'translated' from their English original into a morphosyntactic markup that reflects the word order and morphosyntactic feature specifications of the desired target language, with affixes represented as morphosyntactic feature bundles. From this translated corpus, a lexicon is constructed using the phonological model and the set of morphemes (stems and affixes) extracted from the 'translated' sentences. The system is then instructed to provide an orthography for the language, using an existing script such as Latin or Cyrillic. Finally, the system writes a brief grammatical handbook of the language. The system can also translate further sentences into the target language. Our goal is twofold. First, we hope that these tools will be fun to use for creating artificially constructed languages. Second, we are interested in exploring what LLMs 'know' about language: not what they know about any particular language or linguistic phenomenon, but how much they know about and understand language and linguistic concepts. As we shall see, there is a fairly wide gulf in capabilities both among different LLMs and among different linguistic specifications, with systems handling common patterns notably more easily than rarer ones. An additional avenue that we explore is the application of our approach to translating from high-resource into low-resource languages. While the results so far are mostly negative, we provide some evidence that an improved version of the present system could afford some real gains in such tasks. https://github.com/SakanaAI/IASC
AI Insights
  • The configuration tables span typologies from Arabic‑like fusional to Vietnamese‑like VSO, letting users probe LLMs on diverse word‑order patterns.
  • Arabic‑like entries compress verbal features into single affixes, testing LLMs’ ability to parse multi‑function morphemes.
  • Fijian‑ and French‑like tables share SVO order but differ in adjective‑noun placement, showing how subtle syntactic shifts are encoded.
  • The Hixkaryana‑like OSV profile demonstrates the system’s support for rare typologies often missing from training data.
  • Automatically generated hard‑definition pairs (e.g., “Fusional” vs. “Agglutinative”) provide ready glosses for linguistic annotation.
  • The recommended literature, including Bybee’s work on morphology, offers a theoretical backdrop for the system’s morphosyntactic modeling.
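To picture the morphosyntactic markup stage, here is a toy rendering of stems plus feature bundles arranged in a target word order; the feature inventory, the SOV example, and the glossing format are our illustrative assumptions.

```python
from dataclasses import dataclass, field

@dataclass
class Morpheme:
    """A stem plus a morphosyntactic feature bundle, as in the markup stage."""
    stem: str
    features: dict = field(default_factory=dict)

    def gloss(self) -> str:
        feats = ".".join(str(v) for v in self.features.values())
        return f"{self.stem}-{feats}" if feats else self.stem

# "The dog saw the cat", re-ordered to SOV with an affixal tense bundle:
sentence = [
    Morpheme("dog", {"case": "NOM", "num": "SG"}),
    Morpheme("cat", {"case": "ACC", "num": "SG"}),
    Morpheme("see", {"tense": "PST", "pers": "3", "num": "SG"}),
]
print(" ".join(m.gloss() for m in sentence))
# dog-NOM.SG cat-ACC.SG see-PST.3.SG
```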
Design Patterns
Abstract
Ferrofluids, colloidal dispersions of magnetic nanoparticles, are renowned for pattern formation like few other materials. The Rosensweig instability of a horizontal ferrofluid-air interface in a perpendicular magnetic field is especially well known: classically, this instability sets the air-ferrofluid interface into an array of spikes that correspond to a new free-energy minimum of the system. However, once the pattern is formed, it does not exhibit any notable thermal or non-equilibrium fluctuations, i.e., it is passive. In this work, we present an active version of the Rosensweig patterns. We realize them experimentally by driving a dispersion of magnetic nanoparticles with an electric field into a non-equilibrium gradient state and by inducing the instability using a magnetic field. The coupling of electric and magnetic forcing leads to patterns that can be adjusted from quiescent, classic Rosensweig-like behavior (at low activity) to highly dynamic ones displaying peak and defect dynamics, as well as tunability of structure periodicities beyond what is possible in the classic systems (at high activity). We analyze the results using an active agent-based approach as well as a continuum perspective. We construct a simple equilibrium-like effective Rosensweig model to describe the onset of the patterns and propose a minimal Swift-Hohenberg type model capturing the essential active pattern dynamics. Our results suggest that classic continuum systems exhibiting pattern formation can be activated to display life-inspired non-equilibrium phenomena.
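The abstract does not spell out its minimal Swift-Hohenberg type model; for orientation, the classic Swift-Hohenberg equation reads as below, where the quadratic term is what favors hexagonal, spike-like arrays. An active variant would add non-equilibrium driving terms, and the paper's exact form may differ.

```latex
% Classic Swift-Hohenberg equation; \varepsilon is the distance from onset
% and the quadratic term g u^2 selects hexagonal (peak) patterns.
\partial_t u \;=\; \varepsilon\, u \;-\; \bigl(1 + \nabla^{2}\bigr)^{2} u \;+\; g\, u^{2} \;-\; u^{3}
```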
Technical University of
Abstract
Structured variational quantum algorithms such as the Quantum Approximate Optimisation Algorithm (QAOA) have emerged as leading candidates for exploiting advantages of near-term quantum hardware. They interlace classical computation, in particular optimisation of variational parameters, with quantum-specific routines, and combine problem-specific advantages, sometimes even provable ones, with adaptability to the constraints of noisy, intermediate-scale quantum (NISQ) devices. While circuit depth can be parametrically increased and is known to improve performance in an ideal (noiseless) setting, on realistic hardware greater depth exacerbates noise: the overall quality of results depends critically on both variational parameters and circuit depth. Although identifying optimal parameters is NP-hard, prior work has suggested that they may exhibit regular, predictable patterns for increasingly deep circuits, depending on the studied class of problems. In this work, we systematically investigate the role of classical parameters in QAOA performance through extensive numerical simulations and suggest a simple yet effective heuristic scheme to find good parameters for low-depth circuits. Our results demonstrate that: (i) optimal parameters often deviate substantially from expected patterns; (ii) QAOA performance becomes progressively less sensitive to specific parameter choices as depth increases; and (iii) iterative component-wise fixing performs on par with, and at shallow depth may even outperform, several established parameter-selection strategies. We identify conditions under which structured parameter patterns emerge, and when deviations from the patterns warrant further consideration. These insights for low-depth circuits may inform more robust pathways to harnessing QAOA in realistic quantum compute scenarios.
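A sketch of the iterative component-wise fixing heuristic the abstract evaluates: optimize one layer's (gamma, beta) pair on a coarse grid, freeze it, then proceed to the next layer. The grid ranges and resolution are assumptions, and `expectation` stands in for a circuit simulator or hardware call.

```python
import numpy as np

def fix_componentwise(expectation, depth: int, resolution: int = 32):
    """Greedily fix one QAOA layer at a time by coarse grid search.

    `expectation(params)` should return the cost expectation for a list
    of (gamma, beta) pairs; lower is better. Requires depth >= 1.
    """
    gammas = np.linspace(0.0, np.pi, resolution)
    betas = np.linspace(0.0, np.pi / 2, resolution)
    fixed = []                                # layers frozen so far
    best_val = np.inf
    for _ in range(depth):
        best = None
        best_val = np.inf
        for g in gammas:
            for b in betas:
                val = expectation(fixed + [(g, b)])
                if val < best_val:
                    best, best_val = (g, b), val
        fixed.append(best)                    # freeze this layer's parameters
    return fixed, best_val
```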

Interests not found

We did not find any papers that match the interests below. Try other terms, and consider whether the content exists on arxiv.org.
  • Programming Paradigms
