Top Personalized Recommendations
University of Ljubljana
Why we think this paper is great for you:
This paper delves into fundamental economic theories of productivity, offering you a deep dive into the conceptual underpinnings of how value is created.
Abstract
Neoclassical economic theory presents marginal productivity (MP) theory using the scalar notion of marginal products, and takes pains, implicitly or explicitly, to show that competitive equilibrium satisfies the supposedly ethical principle: "To each what he and the instruments he owns produces." This paper shows that MP theory can also be formulated in a mathematically equivalent way using vectorial marginal products, a formulation that conflicts with the above-mentioned "distributive shares" picture. Vectorial MP theory also facilitates the presentation of a modern treatment of the labor theory of property, which on the descriptive side is based on the fact that, contrary to the distributive shares picture, one legal party gets the production vector consisting of 100 percent of the liabilities for the used-up inputs and 100 percent of the produced outputs in a productive opportunity. On the normative side, the labor theory of property is simply the application of the usual juridical norm of imputation to the question of property appropriation.
Keywords: marginal productivity theory, property theory, imputation of responsibility, vectorial marginal products
JEL Classification: D2, D3, D63, P14
AI Summary
- Vectorial marginal products, which account for simultaneous changes in all inputs and outputs along a least-cost expansion path, provide a more plausible and mathematically sound representation of production. [3]
- Conventional scalar Marginal Productivity (MP) theory, which posits factors 'produce' their marginal product, is mathematically equivalent to a vectorial formulation but ideologically misleading. [2]
- The 'predistributive question' ('Who is to be the firm?', i.e., who appropriates the entire production vector of assets and liabilities) is more fundamental than the conventional 'distributive shares' question. [2]
- Orthodox MP theory suffers from 'the metaphor' (treating non-responsible things as responsible agents), 'the mistake' (ignoring that one party gets the whole production vector), and 'the miracle' (factors producing output without using other inputs). [2]
- The juridical principle of imputation dictates that only persons can be de facto responsible for production outcomes, implying that workers should appropriate the 'whole product of labor' (outputs minus non-labor inputs). [2]
- The employment system, by separating labor's de facto responsibility for the whole product from its legal appropriation, constitutes an 'institutional robbery' where the firm (capital owners) appropriates the profits. [2]
- The persistence of scalar MP theory, despite its factual implausibility, is attributed to its ideological 'advantages' in justifying the existing system of renting persons and the distribution of profits. [2]
- Vectorial Marginal Products: A mathematically equivalent formulation of MP theory where the marginal effect of increasing one input is represented as a vector, accounting for changes in both outputs and other inputs along a least-cost expansion path. [2]
- Production Vector (Whole Product): A vector representing a productive opportunity, where negative components are used-up inputs (liabilities) and positive components are produced outputs (assets). [2]
- Predistributive Question: The fundamental question of 'Who is to be the firm?' or 'Who is to appropriate the production vector?' prior to any consideration of distributive shares. [2]
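To fix ideas, here is a minimal formal sketch of the scalar, whole-product, and vectorial pictures the summary contrasts. The notation (f, x_i, Q) is ours, not the paper's, and the single-output setup is an assumption for illustration.

```latex
% A minimal sketch, assuming a single-output production function
% Q = f(x_1, ..., x_n); the notation is ours, not the paper's.
\documentclass{article}
\usepackage{amsmath}
\begin{document}

Scalar picture: the marginal product of factor $i$ holds every other
input fixed,
\[
  MP_i = \frac{\partial f}{\partial x_i}.
\]

Whole-product picture: a productive opportunity is a vector with the
produced outputs as assets and the used-up inputs as liabilities,
\[
  P = (Q;\, -x_1, \dots, -x_n),
\]
and one legal party appropriates all of $P$.

Vectorial picture: along a least-cost expansion path, increasing
factor $i$ changes the whole vector at once,
\[
  \Delta P_i = \left( \Delta Q;\, -\Delta x_1, \dots, -\Delta x_n \right),
\]
so no single factor can be credited with ``producing'' the output by
itself, contrary to the distributive-shares reading of $MP_i$.

\end{document}
```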
USC
Why we think this paper is great for you:
This research addresses making large language models dependable for crucial tasks, directly enhancing their utility in your productivity workflows.
Abstract
Current large language models (LLMs) excel in verifiable domains where outputs can be checked before action but prove less reliable for high-stakes strategic decisions with uncertain outcomes. This gap, driven by mutually reinforcing cognitive biases in both humans and artificial intelligence (AI) systems, threatens the defensibility of valuations and sustainability of investments in the sector.
This report describes a framework emerging from systematic qualitative assessment across 7 frontier-grade LLMs and 3 market-facing venture vignettes under time pressure. Detailed prompting that specified decision partnership and explicitly instructed avoidance of sycophancy, confabulation, solution drift, and nihilism achieved an initial partnership state but failed to maintain it under operational pressure. Sustaining a protective partnership state required an emergent 7-stage calibration sequence, built upon a 4-stage initialization process, within a 5-layer protection architecture enabling bias self-monitoring, human-AI adversarial challenge, partnership state verification, performance degradation detection, and stakeholder protection.
Three discoveries resulted: partnership state is achievable through ordered calibration but requires emergent maintenance protocols; reliability degrades when architectural drift and context exhaustion align; and dissolution discipline prevents costly pursuit of fundamentally wrong directions. Cross-model validation revealed systematic performance differences across LLM architectures.
This approach demonstrates that human-AI teams can achieve cognitive partnership capable of preventing avoidable regret in high-stakes decisions, addressing return-on-investment expectations that depend on AI systems supporting consequential decision-making without introducing preventable cognitive traps when verification arrives too late.
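The 5-layer architecture in the second paragraph is easiest to picture as an ordered pipeline of checks that must all pass for the partnership state to hold. The sketch below is a speculative reading of that description; every name and interface in it is our assumption, not code from the report.

```python
# A speculative sketch of the abstract's 5-layer protection
# architecture as an ordered pipeline of checks. All names and the
# pass/fail interface are our assumptions, not the paper's code.
from dataclasses import dataclass
from typing import Callable

@dataclass
class Layer:
    name: str
    check: Callable[[dict], bool]  # True = state still protected

# The five layers named in the abstract, in order.
LAYERS = [
    Layer("bias self-monitoring",           lambda s: not s["bias_flags"]),
    Layer("human-AI adversarial challenge", lambda s: s["challenge_passed"]),
    Layer("partnership state verification", lambda s: s["partnership_ok"]),
    Layer("degradation detection",          lambda s: not s["context_exhausted"]),
    Layer("stakeholder protection",         lambda s: s["stakeholders_safe"]),
]

def protected(state: dict) -> bool:
    """Partnership state holds only while every layer's check passes;
    the first failing layer triggers recalibration (or dissolution)."""
    for layer in LAYERS:
        if not layer.check(state):
            print(f"recalibrate: {layer.name} failed")
            return False
    return True

# Example: the failure mode the abstract flags, where architectural
# drift and context exhaustion align and reliability degrades.
state = {"bias_flags": [], "challenge_passed": True,
         "partnership_ok": False, "context_exhausted": True,
         "stakeholders_safe": True}
print(protected(state))  # prints the failing layer, then False
```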
Why we think this paper is great for you:
You'll find this paper explores the practical trade-offs between large language model performance and resource consumption, essential for understanding their economic impact.
Abstract
The rapid advancement of AI technologies and their accelerated adoption in software development necessitates a systematic evaluation of their environmental impact alongside functional correctness. While prior studies have examined sustainability in large language models, existing approaches lack systematic frameworks for evaluating accuracy-energy trade-offs in Code Language Models (CLMs). In this paper, we present a framework, BRACE, to benchmark CLMs on a unified scale of energy efficiency and functional correctness (referred to as accuracy). We benchmark 22 state-of-the-art models on code generation and summarization tasks, proposing two rating methods: Concentric Incremental Rating Circles (CIRC) and Observation to Expectation Rating (OTER). CIRC provides deterministic Euclidean-based rankings with static trade-offs that are robust to outliers, while OTER offers trend-aware evaluation with dynamic trade-offs that capture the complex correlation between energy and accuracy; each offers a distinct perspective and addresses the problem in its own way. These rating methods enable us to rate models on a 1-5 scale reflecting their combined energy efficiency and functional correctness. Our analysis reveals that models generally perform better on code summarization tasks, since they are not required to produce grammar-constrained, syntactically correct output. We also find that model size does not have a significant impact on ratings, indicating that models that use their parameters efficiently can rank higher on these scales. The proposed BRACE framework empowers practitioners to make evidence-based model selections that balance sustainability with task requirements, guiding the choice of rating method (CIRC for deterministic comparisons, OTER for trend-aware evaluation) based on deployment priorities.
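As described, the CIRC rating reduces to bucketing each model's Euclidean distance from an ideal point (accuracy 1, normalized energy 0) into five concentric bands. Below is a hedged reconstruction of that step with invented thresholds and toy models; it is not the BRACE implementation.

```python
# A hedged reconstruction of a CIRC-style rating: score each model by
# Euclidean distance from the ideal point (accuracy=1, energy=0) and
# bucket the distance into five concentric circles -> a 1-5 rating.
# Thresholds and model values are our assumptions, not BRACE's.
import math

def circ_rating(accuracy: float, energy_norm: float) -> int:
    """accuracy and energy_norm are both scaled to [0, 1];
    lower energy_norm means more energy-efficient."""
    dist = math.hypot(1.0 - accuracy, energy_norm)   # in [0, sqrt(2)]
    bands = [0.28, 0.57, 0.85, 1.13]                 # 4 cuts -> 5 circles
    rating = 5
    for cut in bands:
        if dist <= cut:
            break
        rating -= 1
    return rating

# Toy (accuracy, normalized energy) pairs for two hypothetical models.
models = {"model-a": (0.91, 0.12), "model-b": (0.65, 0.40)}
for name, (acc, energy) in models.items():
    print(name, circ_rating(acc, energy))  # model-a -> 5, model-b -> 4
```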
YUX Design
Why we think this paper is great for you:
This paper highlights the often-overlooked effort required to make AI systems truly functional and productive, a key consideration for adopting new tools.
Abstract
Frontier LLMs are optimised around high-resource assumptions about language, knowledge, devices, and connectivity. Whilst widely accessible, they often misfit conditions in the Global South. As a result, users must often perform additional work to make these systems usable. We term this alignment debt: the user-side burden that arises when AI systems fail to align with cultural, linguistic, infrastructural, or epistemic contexts. We develop and validate a four-part taxonomy of alignment debt through a survey of 411 AI users in Kenya and Nigeria. Among respondents measurable on this taxonomy (n = 385), prevalence is: Cultural and Linguistic (51.9%), Infrastructural (43.1%), Epistemic (33.8%), and Interaction (14.0%). Country comparisons show a divergence in Infrastructural and Interaction debt, challenging one-size-fits-Africa assumptions. Alignment debt is associated with compensatory labour, but responses vary by debt type: users facing Epistemic challenges verify outputs at significantly higher rates (91.5% vs. 80.8%; p = 0.037), and verification intensity correlates with cumulative debt burden (Spearman's rho = 0.147, p = 0.004). In contrast, Infrastructural and Interaction debts show weak or null associations with verification, indicating that some forms of misalignment cannot be resolved through verification alone. These findings show that fairness must be judged not only by model metrics but also by the burden imposed on users at the margins, and they call for context-aware safeguards that alleviate alignment debt in Global South settings. The alignment debt framework provides an empirically grounded way to measure user burden, informing both design practice and emerging African AI governance efforts.
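The headline correlation (Spearman's rho = 0.147 between cumulative debt burden and verification intensity) is a standard rank correlation. A minimal sketch of that computation follows; the data are invented toy values, and the 0-3 verification scale is our assumption, as the paper's survey data are not reproduced here.

```python
# A minimal sketch of the reported rank correlation between cumulative
# alignment-debt burden and verification intensity, via
# scipy.stats.spearmanr. The values below are invented toy data.
from scipy.stats import spearmanr

# Per-respondent count of debt types experienced (0-4) and a
# hypothetical verification-intensity score (0-3).
debt_burden  = [0, 1, 1, 2, 2, 3, 3, 4, 0, 2, 1, 3]
verification = [0, 1, 0, 2, 1, 2, 3, 3, 1, 2, 1, 2]

rho, p = spearmanr(debt_burden, verification)
print(f"Spearman's rho = {rho:.3f}, p = {p:.3f}")
```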
University of Warwick
Why we think this paper is great for you:
This work focuses on using AI to optimize team dynamics and workload distribution, directly improving group efficiency and fairness.
Abstract
The equitable assessment of individual contribution in teams remains a persistent challenge: conflict and disparity in workload can result in unfair performance evaluation, often requiring manual intervention, a costly and challenging process. We survey existing tool features and identify a gap in conflict resolution methods and AI integration. To address this, we propose a framework and implementation design for a novel AI-enhanced tool that assists in dispute investigation. The framework organises heterogeneous artefacts (submissions such as code, text, and media; communications such as chat and email; coordination records such as meeting logs and tasks; peer assessments; and contextual information) into three dimensions with nine benchmarks: Contribution, Interaction, and Role. Objective measures are normalised, aggregated per dimension, and paired with inequality measures (Gini index) to surface conflict markers. A Large Language Model (LLM) architecture performs validated and contextual analysis over these measures to generate interpretable and transparent advisory judgments. We argue for feasibility under current statutory and institutional policy, and outline practical analytics (sentiment, task fidelity, word/line count, etc.), bias safeguards, limitations, and practical challenges.
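Since the framework pairs aggregated contribution measures with a Gini index to surface conflict markers, a short sketch of that step may be useful. The 0.4 flag threshold and the team scores below are illustrative assumptions, not values from the paper.

```python
# A sketch of the inequality step described in the abstract: compute a
# Gini index over normalised per-member contribution scores and flag
# the team when it crosses a threshold. Threshold and scores are
# illustrative assumptions, not values from the paper.

def gini(scores: list[float]) -> float:
    """Gini coefficient via mean absolute difference; 0 = perfectly
    equal contributions, values near 1 = one member did everything."""
    n = len(scores)
    mean = sum(scores) / n
    mad = sum(abs(a - b) for a in scores for b in scores) / (n * n)
    return mad / (2 * mean) if mean else 0.0

team = {"alice": 0.45, "bob": 0.30, "carol": 0.20, "dan": 0.05}
g = gini(list(team.values()))
print(f"Gini = {g:.2f}", "-> conflict marker" if g > 0.4 else "-> no flag")
```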
University of Mississippi
Why we think this paper is great for you:
This paper provides an economic perspective on advanced AI systems, offering you insights into the broader economic implications of future AI development.
Abstract
Conventional wisdom holds that a misaligned artificial superintelligence (ASI) will destroy humanity. But the problem of constraining a powerful agent is not new. I apply classic economic logic of interjurisdictional competition, all-encompassing interest, and trading on credit to the threat of misaligned ASI. Using a simple model, I show that an acquisitive ASI refrains from full predation under surprisingly weak conditions. When humans can flee to rivals, inter-ASI competition creates a market that tempers predation. When trapped by a monopolist ASI, its "encompassing interest" in humanity's output makes it a rational autocrat rather than a ravager. And when the ASI has no long-term stake, our ability to withhold future output incentivizes it to trade on credit rather than steal. In each extension, humanity's welfare progressively worsens. But each case suggests that catastrophe is not a foregone conclusion. The dismal science, ironically, offers an optimistic take on our superintelligent future.
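The "encompassing interest" step invokes classic stationary-bandit logic, and a one-line worked version shows why a monopolist with a stake in output stops short of full predation. The functional form below is our choice for illustration, not necessarily the paper's model.

```latex
% A worked sketch of the encompassing-interest argument in the classic
% Olson stationary-bandit form; the functional form O(t) = 1 - t^2 is
% our assumption, not the paper's model.
\documentclass{article}
\usepackage{amsmath}
\begin{document}
Let the ASI take a share $t \in [0,1]$ of human output, and suppose
output shrinks with predation, say $O(t) = 1 - t^2$. The ASI's take is
\[
  R(t) = t\,O(t) = t - t^3,
  \qquad
  R'(t) = 1 - 3t^2 = 0 \;\Rightarrow\; t^* = \tfrac{1}{\sqrt{3}} \approx 0.58.
\]
Because $t^* < 1$, full predation is never revenue-maximising: the
monopolist's encompassing interest in future output makes it a rational
autocrat rather than a ravager.
\end{document}
```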