🎯 Top Personalized Recommendations
Dartmouth College
Why we think this paper is great for you:
This paper directly addresses how generative AI impacts labor markets and job application processes, which is highly relevant to your interest in labor market changes and job displacement. It explores the economic implications of AI's influence on traditional signaling mechanisms.
Abstract
Large language models (LLMs) like ChatGPT have significantly lowered the cost of producing written content. This paper studies how LLMs, through lowering writing costs, disrupt markets that traditionally relied on writing as a costly signal of quality (e.g., job applications, college essays). Using data from Freelancer.com, a major digital labor platform, we explore the effects of LLMs' disruption of labor market signaling on equilibrium market outcomes. We develop a novel LLM-based measure to quantify the extent to which an application is tailored to a given job posting. Taking the measure to the data, we find that employers have a high willingness to pay for workers with more customized applications in the period before LLMs are introduced, but not after. To isolate and quantify the effect of LLMs' disruption of signaling on equilibrium outcomes, we develop and estimate a structural model of labor market signaling, in which workers invest costly effort to produce noisy signals that predict their ability in equilibrium. We use the estimated model to simulate a counterfactual equilibrium in which LLMs render written applications useless in signaling workers' ability. Without costly signaling, employers are less able to identify high-ability workers, causing the market to become significantly less meritocratic: compared to the pre-LLM equilibrium, workers in the top quintile of the ability distribution are hired 19% less often, workers in the bottom quintile are hired 14% more often.
AI Summary
- The disruption of signaling leads to a 5% reduction in average wages, a 1.5% reduction in overall hiring rate, and a 4% reduction in worker surplus, while employer surplus remains largely unaffected. [3]
- Total market surplus decreases by 1% in the no-signaling counterfactual, indicating a loss in overall efficiency due to LLMs' impact on signaling. [3]
- The positive correlation between worker ability and their cost of undertaking a job (0.19) is crucial, as it implies high-ability workers struggle to compete on wages alone when signaling is eliminated. [3]
- Signaling Effort (Bid Time): Measured as the time between a worker's first click on a job post and their application submission, serving as a proxy for cognitive effort expended. [3]
- LLMs significantly lowered the cost of producing written content, thereby disrupting markets that traditionally relied on writing as a costly signal of quality, such as job applications. [2]
- Before the mass adoption of LLMs, employers on Freelancer.com had a high willingness to pay for workers with more customized applications, with a one standard deviation higher signal being equivalent to a $26 lower bid. [2]
- Pre-LLM, customized application signals were predictive of worker effort and successful job completion, but these patterns significantly weakened or disappeared post-LLM, especially with the use of AI-writing tools. [2]
- A counterfactual simulation, where LLMs render written applications useless for signaling, results in a significantly less meritocratic market, with top-quintile ability workers hired 19% less often and bottom-quintile workers 14% more often. [2]
- LLM-based measure for application customization/relevance: A novel method using Meta's Llama 4 Maverick 17B model to quantify how tailored a worker's proposal is to a specific job post, based on nine criteria (five "custom" and four "generic"). [2]
- Copy-Pasting Correction: An adjustment to signal scores based on the normalized minimum Levenshtein distance between a proposal and other proposals by the same worker, mitigating false positives for customization. [2]
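Since the copy-pasting correction is described only at a high level, here is a minimal Python sketch of the idea (not the authors' code): it computes the normalized minimum Levenshtein distance between a proposal and the same worker's other proposals, where the max-length normalization and the reading of low values as likely copy-pasting are assumptions made here, since the report does not spell out the exact formula.

    def levenshtein(a: str, b: str) -> int:
        # Standard dynamic-programming edit distance.
        prev = list(range(len(b) + 1))
        for i, ca in enumerate(a, 1):
            curr = [i]
            for j, cb in enumerate(b, 1):
                curr.append(min(prev[j] + 1,                  # deletion
                                curr[j - 1] + 1,              # insertion
                                prev[j - 1] + (ca != cb)))    # substitution
            prev = curr
        return prev[-1]

    def copy_paste_score(proposal: str, other_proposals: list[str]) -> float:
        # Normalized minimum Levenshtein distance in [0, 1]: values near 0
        # suggest the text was largely reused from the worker's other
        # applications; values near 1 suggest freshly written text.
        if not other_proposals:
            return 1.0
        return min(levenshtein(proposal, o) / max(len(proposal), len(o), 1)
                   for o in other_proposals)

A proposal scored as highly customized by the LLM measure but with a value near 0 here would presumably be adjusted downward, since its apparent tailoring is likely recycled text.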
Why we think this paper is great for you:
This technical report details an LLM-based system featuring agentic search and a supervisor agent, offering insights into practical AGI applications and development methodologies. It showcases advancements in building intelligent forecasting systems.
Abstract
This technical report describes the AIA Forecaster, a Large Language Model (LLM)-based system for judgmental forecasting using unstructured data. The AIA Forecaster approach combines three core elements: agentic search over high-quality news sources, a supervisor agent that reconciles disparate forecasts for the same event, and a set of statistical calibration techniques to counter behavioral biases in large language models. On the ForecastBench benchmark (Karger et al., 2024), the AIA Forecaster achieves performance equal to human superforecasters, surpassing prior LLM baselines. In addition to reporting on ForecastBench, we also introduce a more challenging forecasting benchmark sourced from liquid prediction markets. While the AIA Forecaster underperforms market consensus on this benchmark, an ensemble combining AIA Forecaster with market consensus outperforms consensus alone, demonstrating that our forecaster provides additive information. Our work establishes a new state of the art in AI forecasting and provides practical, transferable recommendations for future research. To the best of our knowledge, this is the first work that verifiably achieves expert-level forecasting at scale.
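The ensembling result can be illustrated with a small sketch. The Python snippet below is not the paper's procedure: it pools the forecaster's probability with the market consensus via a weighted average in log-odds space, where both the weight w and the pooling rule are illustrative assumptions.

    import math

    def logit(p: float) -> float:
        p = min(max(p, 1e-6), 1 - 1e-6)   # clip to avoid infinite log-odds
        return math.log(p / (1 - p))

    def ensemble(p_model: float, p_market: float, w: float = 0.3) -> float:
        # Weighted log-odds pooling: w is the weight placed on the model.
        z = w * logit(p_model) + (1 - w) * logit(p_market)
        return 1 / (1 + math.exp(-z))

    # Example: forecaster says 0.70, market consensus says 0.55.
    print(round(ensemble(0.70, 0.55), 3))   # -> 0.597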
University of Ljubljana
Why we think this paper is great for you:
This paper delves into fundamental economic theories of labor, which could provide a foundational understanding for analyzing broader shifts in the labor market. It offers a theoretical perspective on how labor's value is conceptualized.
Abstract
Neoclassical economic theory presents marginal productivity (MP) theory using the scalar notion of marginal products, and takes pains, implicitly or explicitly, to show that competitive equilibrium satisfies the supposedly ethical principle: ``To each what he and the instruments he owns produces.'' This paper shows that MP theory can also be formulated in a mathematically equivalent way using vectorial marginal products--which however conflicts with the above-mentioned ``distributive shares'' picture. Vectorial MP theory also facilitates the presentation of modern treatment of the labor theory of property which on the descriptive side is based on the fact that, contrary to the distributive shares picture, one legal party gets the production vector consisting of 100 percent of the liabilities for the used-up inputs and 100 percent of the produced outputs in a productive opportunity. On the normative side, the labor theory of property is just the application of the usual juridical norm of imputation to the question of property appropriation.
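To make the contrast concrete, here is a minimal formalization with notation introduced purely for illustration (not taken from the paper): let $Q = f(x_1,\dots,x_n)$ be output produced from inputs $x_1,\dots,x_n$. The scalar marginal product of input $i$ is $MP_i = \partial f/\partial x_i$, which underlies the distributive-shares reading that each factor receives "what it produces." The vectorial reading instead tracks the whole production vector appropriated by a single legal party,
\[
  \bigl(f(x_1,\dots,x_n);\,-x_1,\dots,-x_n\bigr),
\]
i.e., 100 percent of the outputs together with 100 percent of the liabilities for the used-up inputs, so that a one-unit increase in input $i$ changes this vector by approximately $(\partial f/\partial x_i;\,0,\dots,-1,\dots,0)$ rather than assigning a separate income share to each factor.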
Keywords: marginal productivity theory, property theory, imputation of responsibility, vectorial marginal products
JEL Classification: D2, D3, D63, P14
USP
Why we think this paper is great for you:
While not directly related to your core interests, this paper explores advanced detection methods in astrophysics, which might appeal to a broader scientific curiosity. It details the capabilities of next-generation observatories.
Abstract
The Cherenkov Telescope Array Observatory (CTAO) will enable detailed studies of Active Galactic Nuclei (AGN) in the very-high-energy (VHE) regime, as the next-generation ground-based gamma-ray observatory, designed to enhance sensitivity and energy coverage (20 GeV -- 300 TeV) over current Imaging Atmospheric Cherenkov Telescopes (IACTs). In the context of the CTAO Science Collaboration, within the AGN Population working group, we developed a variability-based strategy to improve predictions of AGNs detectable by CTAO, using Fermi-LAT data and normalized excess variance (NXS) as a tracer of flux variability. By extrapolating from 30-day to 3-day timescales, we expanded the sample of sources with short-timescale variability estimates from 87 to 407. This approach allows us to identify flaring and distant AGNs that are promising CTAO targets. The results are being used to support the CTAO extragalactic science program and will be included in an upcoming Consortium publication for the AGN Population collaboration.
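For reference, the normalized excess variance used here as the variability tracer is conventionally defined (in the standard form used across the AGN variability literature; the report may adopt a variant) as
\[
  \sigma^2_{\mathrm{NXS}} \;=\; \frac{1}{N\,\bar{x}^2}\sum_{i=1}^{N}\Bigl[(x_i-\bar{x})^2-\sigma^2_{\mathrm{err},i}\Bigr],
\]
where the $x_i$ are the $N$ flux measurements in a light curve, $\sigma_{\mathrm{err},i}$ their measurement uncertainties, and $\bar{x}$ the mean flux; values above zero indicate intrinsic variability in excess of measurement noise.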
Peking University
Why we think this paper is great for you:
This study investigates the variability of active galactic nuclei using astronomical data, offering insights into cosmic phenomena. It demonstrates systematic analysis techniques in observational astronomy.
Abstract
Changing-look active galactic nuclei (CLAGNs) are a unique population of AGNs that exhibit the appearance (turn-on) or disappearance (turn-off) of broad emission lines. This study aims to explore the intrinsic mechanisms of CLAGNs by investigating their photometric variability using data from the Zwicky Transient Facility (ZTF), which has provided high-cadence observations over the past five years. By visual inspections, we construct a sample of 152 CLAGNs from the literature, all of which show spectral transitions and large optical variability in their ZTF light curves. By analyzing 90 of these CLAGNs and the control samples of Type 1 AGNs, Type 2 AGNs, and extremely variable quasars (EVQs), matched in redshift ($0.2
Interests not found
We did not find any papers matching the interests below.
Try other terms, and consider whether the content exists on arxiv.org.
- AGI
- Job Displacement
- AGI Development
Help us improve your experience!
This project is in its early stages; your feedback can be pivotal to the future of the project.
Let us know what you think about this week's papers and suggestions!
Give Feedback