Hi j34nc4rl0+ai_impacts_on_society,

Here are your personalized paper recommendations, sorted by relevance.
AI for Society
Abstract
As AI becomes more "agentic," it faces technical and socio-legal issues it must address if it is to fulfill its promise of increased economic productivity and efficiency. This paper uses technical and legal perspectives to explain how things change when AI systems begin directly executing tasks on behalf of a user. We show how technical conceptions of agents track some, but not all, socio-legal conceptions of agency. That is, both computer science and the law recognize the problem of under-specification for an agent, and both disciplines have robust conceptions of how to ensure an agent does what the programmer (or, in the law, the principal) desires and no more. However, to date, computer science has under-theorized questions of loyalty and of third parties that interact with an agent, both of which are central to the law of agency. First, we examine the correlations between implied authority in agency law and the principle of value alignment in AI, wherein AI systems must operate under imperfect objective specification. Second, we reveal gaps in the current computer science view of agents pertaining to the legal concepts of disclosure and loyalty, and show how failure to account for them can produce unintended effects in AI e-commerce agents. In surfacing these gaps, we show a path forward for responsible AI agent development and deployment.
Abstract
This chapter starts with a sketch of how we got to "generative AI" (GenAI) and a brief summary of the various impacts it had so far. It then discusses some of the opportunities of GenAI, followed by the challenges and dangers, including dystopian outcomes resulting from using uncontrolled machine learning and our failures to understand the results. It concludes with some suggestions for how to control GenAI and address its dangers.
AI on Healthcare
Abstract
The integration of Artificial Intelligence (AI) into healthcare systems in low-resource settings, such as Nepal and Ghana, presents transformative opportunities to improve personalized patient care, optimize resources, and address medical professional shortages. This paper presents a survey-based evaluation and insights from Nepal and Ghana, highlighting major obstacles such as data privacy, reliability, and trust issues. Quantitative and qualitative field studies reveal critical metrics, including 85% of respondents identifying ethical oversight as a key concern, and 72% emphasizing the need for localized governance structures. Building on these findings, we propose a draft Responsible AI (RAI) Framework tailored to resource-constrained environments in these countries. Key elements of the framework include ethical guidelines, regulatory compliance mechanisms, and contextual validation approaches to mitigate bias and ensure equitable healthcare outcomes.
Abstract
Rural healthcare faces persistent challenges, including inadequate infrastructure, workforce shortages, and socioeconomic disparities that hinder access to essential services. This study investigates the transformative potential of artificial intelligence (AI) in addressing these issues in underserved rural areas. We systematically reviewed 109 studies published between 2019 and 2024 from PubMed, Embase, Web of Science, IEEE Xplore, and Scopus. Articles were screened using PRISMA guidelines and Covidence software. A thematic analysis was conducted to identify key patterns and insights regarding AI implementation in rural healthcare delivery. The findings reveal significant promise for AI applications, such as predictive analytics, telemedicine platforms, and automated diagnostic tools, in improving healthcare accessibility, quality, and efficiency. Among these, advanced AI systems, including Multimodal Foundation Models (MFMs) and Large Language Models (LLMs), offer particularly transformative potential. MFMs integrate diverse data sources, such as imaging, clinical records, and biosignals, to support comprehensive decision-making, while LLMs facilitate clinical documentation, patient triage, translation, and virtual assistance. Together, these technologies can revolutionize rural healthcare by augmenting human capacity, reducing diagnostic delays, and democratizing access to expertise. However, barriers remain, including infrastructural limitations, data quality concerns, and ethical considerations. Addressing these challenges requires interdisciplinary collaboration, investment in digital infrastructure, and the development of regulatory frameworks. This review offers actionable recommendations and highlights areas for future research to ensure equitable and sustainable integration of AI in rural healthcare systems.
AI on Labor Market
Abstract
While there is excitement about the potential for algorithms to optimize individual decision-making, changes in individual behavior will, almost inevitably, impact markets. Yet little is known about such effects. In this paper, I study how the availability of algorithmic prediction changes entry, allocation, and prices in the US single-family housing market, a key driver of household wealth. I identify a market-level natural experiment that generates variation in the cost of using algorithms to value houses: digitization, the transition from physical to digital housing records. I show that digitization leads to entry by investors using algorithms, but does not push out investors using human judgment. Instead, human investors shift toward houses that are difficult to predict algorithmically. Algorithmic investors predominantly purchase minority-owned homes, a segment of the market where humans may be biased. Digitization increases the average sale price of minority-owned homes by 5% and reduces racial disparities in home prices by 45%. Algorithmic investors, via competition, affect the prices paid by owner-occupiers and human investors for minority homes; such changes drive the majority of the reduction in racial disparities. The decrease in racial inequality underscores the potential for algorithms to mitigate human biases at the market level.
AI on Education
Abstract
This paper explores integrating microlearning strategies into university curricula, particularly in computer science education, to counteract the decline in class attendance and engagement in US universities after COVID. As students increasingly opt for remote learning and recorded lectures, traditional educational approaches struggle to maintain engagement and effectiveness. Microlearning, which breaks complex subjects into manageable units, is proposed to address shorter attention spans and enhance educational outcomes. It uses interactive formats such as videos, quizzes, flashcards, and scenario-based exercises, which are especially beneficial for topics like algorithms and programming logic requiring deep understanding and ongoing practice. Adoption of microlearning is often limited by the effort needed to create such materials. This paper proposes leveraging AI tools, specifically ChatGPT, to reduce the workload for educators by automating the creation of supplementary materials. While AI can automate certain tasks, educators remain essential in guiding and shaping the learning process. This AI-enhanced approach ensures course content is kept current with the latest research and technology, with educators providing context and insights. By examining AI capabilities in microlearning, this study shows the potential to transform educational practices and outcomes in computer science, offering a practical model for combining advanced technology with established teaching methods.
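To illustrate the proposed workflow, here is a minimal Python sketch of how an educator might automate quiz generation for one micro-unit. It assumes the official OpenAI Python client; the model name, prompt wording, and helper function are illustrative assumptions, not the paper's implementation.

    # Minimal sketch: generating a microlearning quiz with an LLM.
    # Assumes the official OpenAI Python client (pip install openai);
    # model name and prompt wording are illustrative placeholders.
    from openai import OpenAI

    client = OpenAI()  # reads OPENAI_API_KEY from the environment

    def generate_quiz(topic: str, n_questions: int = 5) -> str:
        """Ask the model for a short multiple-choice quiz on one micro-unit."""
        response = client.chat.completions.create(
            model="gpt-4o-mini",  # placeholder model choice
            messages=[
                {"role": "system",
                 "content": "You write concise multiple-choice quizzes for CS courses."},
                {"role": "user",
                 "content": f"Write {n_questions} questions on {topic}, with four "
                            "options each and an answer key at the end."},
            ],
        )
        return response.choices[0].message.content

    print(generate_quiz("binary search invariants"))

As the abstract stresses, the educator still reviews and contextualizes whatever the model produces.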
Abstract
The ability of generative artificial intelligence (AI) to produce real-time, sophisticated responses across diverse contexts holds great promise for physics education, particularly in providing customized feedback. In this study, we investigate around 1,200 introductory students' preferences regarding AI feedback generated from three distinct prompt types: (a) self-crafted, (b) incorporating foundational prompt-engineering techniques, and (c) incorporating foundational prompt-engineering techniques along with principles of effective feedback. The results show that an overwhelming fraction of students prefer feedback generated using structured prompts, with prompts combining prompt-engineering and effective-feedback features favored most. However, the popular choice also elicited stronger reactions, with students either liking or disliking the feedback. Students ranked feedback generated from their self-crafted prompts as the least preferred. We also discuss students' second preferences given their first choice, and implications of the results, such as the need to incorporate prompt engineering in introductory courses.
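To make the three prompt conditions concrete, here is a Python sketch of what the templates might look like; the wording is an assumption for illustration, not the prompts used in the study.

    # Illustrative prompt templates mirroring the study's three conditions.
    # The actual prompts from the paper are not reproduced here.

    self_crafted = "Is my answer to this physics problem right? {student_answer}"

    prompt_engineered = (
        "You are a physics tutor. Task: evaluate the student's answer below.\n"
        "Problem: {problem}\nStudent answer: {student_answer}\n"
        "Respond in under 150 words."
    )

    engineered_plus_feedback = (
        prompt_engineered
        + "\nFeedback principles: state what is correct, identify the specific "
          "misconception, and suggest one concrete next step. Do not reveal "
          "the full solution."
    )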
AI for Social Equality
Abstract
Uncertainty in artificial intelligence (AI) predictions poses urgent legal and ethical challenges for AI-assisted decision-making. We examine two algorithmic interventions that act as guardrails for human-AI collaboration: selective abstention, which withholds high-uncertainty predictions from human decision-makers, and selective friction, which delivers those predictions together with salient warnings or disclosures that slow the decision process. Research has shown that selective abstention based on uncertainty can inadvertently exacerbate disparities and disadvantage under-represented groups that disproportionately receive uncertain predictions. In this paper, we provide the first integrated socio-technical and legal analysis of uncertainty-based algorithmic interventions. Through two case studies, AI-assisted consumer credit decisions and AI-assisted content moderation, we demonstrate how the seemingly neutral use of uncertainty thresholds can trigger discriminatory impacts. We argue that, although both interventions pose risks of unlawful discrimination under UK law, selective frictions offer a promising pathway toward fairer and more accountable AI-assisted decision-making by preserving transparency and encouraging more cautious human judgment.
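To make the contrast between the two interventions concrete, here is a minimal Python sketch; the uncertainty threshold, data structures, and warning text are illustrative assumptions, not the paper's implementation.

    # Sketch of the two uncertainty-based guardrails discussed above.
    from dataclasses import dataclass
    from typing import Optional

    THRESHOLD = 0.3  # illustrative maximum tolerated predictive uncertainty

    @dataclass
    class Decision:
        prediction: Optional[float]  # None means the prediction was withheld
        warning: Optional[str]       # friction message shown to the human

    def selective_abstention(pred: float, uncertainty: float) -> Decision:
        # Withhold high-uncertainty predictions from the decision-maker.
        if uncertainty > THRESHOLD:
            return Decision(prediction=None, warning=None)
        return Decision(prediction=pred, warning=None)

    def selective_friction(pred: float, uncertainty: float) -> Decision:
        # Always deliver the prediction, but attach a salient disclosure
        # that slows down the human decision process.
        warning = ("High model uncertainty: review the underlying evidence."
                   if uncertainty > THRESHOLD else None)
        return Decision(prediction=pred, warning=warning)

The paper's point is that the abstention branch silently removes algorithmic support for exactly the groups that tend to receive uncertain predictions, while the friction branch keeps the information visible.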
Abstract
Generative AI chatbots like OpenAI's ChatGPT and Google's Gemini routinely make things up. They "hallucinate" historical events and figures, legal cases, academic papers, non-existent tech products and features, biographies, and news articles. Recently, some have argued that these hallucinations are better understood as bullshit. Chatbots produce rich streams of text that look truth-apt without any concern for the truthfulness of what this text says. But can they also gossip? We argue that they can. After some definitions and scene-setting, we focus on a recent example to clarify what AI gossip looks like before considering some distinct harms -- what we call "technosocial harms" -- that follow from it.
AI Air Consumption
Abstract
Large-scale deep learning workloads increasingly suffer from I/O bottlenecks as datasets grow beyond local storage capacities and GPU compute outpaces network and disk latencies. While recent systems optimize data-loading time, they overlook the energy cost of I/O - a critical factor at large scale. We introduce EMLIO, an Efficient Machine Learning I/O service that jointly minimizes end-to-end data-loading latency T and I/O energy consumption E across variable-latency networked storage. EMLIO deploys a lightweight data-serving daemon on storage nodes that serializes and batches raw samples, streams them over TCP with out-of-order prefetching, and integrates seamlessly with GPU-accelerated (NVIDIA DALI) preprocessing on the client side. In exhaustive evaluations over local disk, LAN (0.05 ms & 10 ms RTT), and WAN (30 ms RTT) environments, EMLIO delivers up to 8.6X faster I/O and 10.9X lower energy use compared to state-of-the-art loaders, while maintaining constant performance and energy profiles irrespective of network distance. EMLIO's service-based architecture offers a scalable blueprint for energy-aware I/O in next-generation AI clouds.
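For orientation, here is a minimal Python sketch of the storage-side serving pattern the abstract describes (serialize, batch, stream over TCP); prefetching, out-of-order delivery, and the DALI client integration are omitted, and all names are assumptions rather than EMLIO's actual code.

    # Minimal sketch of a batching data-serving daemon over TCP.
    import pickle
    import socket
    import struct

    def serve_batches(samples, host="0.0.0.0", port=5000, batch_size=64):
        with socket.create_server((host, port)) as srv:
            conn, _ = srv.accept()
            with conn:
                for i in range(0, len(samples), batch_size):
                    payload = pickle.dumps(samples[i:i + batch_size])
                    # Length-prefix each batch so the client can frame the stream.
                    conn.sendall(struct.pack("!Q", len(payload)) + payload)
                conn.sendall(struct.pack("!Q", 0))  # zero length = end of stream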
AI for Social Good
Abstract
Current AI systems minimize risk by enforcing ideological neutrality, yet this may introduce automation bias by suppressing cognitive engagement in human decision-making. We conducted randomized trials with 2,500 participants to test whether culturally biased AI enhances human decision-making. Participants interacted with politically diverse GPT-4o variants on information evaluation tasks. Partisan AI assistants enhanced human performance, increased engagement, and reduced evaluative bias compared to non-biased counterparts, with amplified benefits when participants encountered opposing views. These gains carried a trust penalty: participants underappreciated biased AI and overcredited neutral systems. Exposing participants to two AIs whose biases flanked human perspectives closed the perception-performance gap. These findings complicate conventional wisdom about AI neutrality, suggesting that strategic integration of diverse cultural biases may foster improved and resilient human decision-making.
AI for Social Equity
Abstract
Public-sector bureaucracies seek to reap the benefits of artificial intelligence (AI), but face important concerns about accountability and transparency when using AI systems. In particular, perception or actuality of AI agency might create ethics sinks - constructs that facilitate dissipation of responsibility when AI systems of disputed moral status interface with bureaucratic structures. Here, we reject the notion that ethics sinks are a necessary consequence of introducing AI systems into bureaucracies. Rather, where they appear, they are the product of structural design decisions across both the technology and the institution deploying it. We support this claim via a systematic application of conceptions of moral agency in AI ethics to Weberian bureaucracy. We establish that it is both desirable and feasible to render AI systems as tools for the generation of organizational transparency and legibility, which continue the processes of Weberian rationalization initiated by previous waves of digitalization. We present a three-point Moral Agency Framework for legitimate integration of AI in bureaucratic structures: (a) maintain clear and just human lines of accountability, (b) ensure humans whose work is augmented by AI systems can verify the systems are functioning correctly, and (c) introduce AI only where it does not inhibit bureaucracies' capacity to pursue either of their twin aims of legitimacy and stewardship. We suggest that AI introduced within this framework can not only improve efficiency and productivity while avoiding ethics sinks, but also improve the transparency and even the legitimacy of a bureaucracy.
AI Impacts on Society
Abstract
This research paper examines, from a multidimensional perspective (cognitive, social, ethical, and philosophical), how AI is transforming human thought. It highlights a cognitive offloading effect: the externalization of mental functions to AI can reduce intellectual engagement and weaken critical thinking. On the social level, algorithmic personalization creates filter bubbles that limit the diversity of opinions and can lead to the homogenization of thought and polarization. This research also describes the mechanisms of algorithmic manipulation (exploitation of cognitive biases, automated disinformation, etc.) that amplify AI's power of influence. Finally, the question of potential artificial consciousness is discussed, along with its ethical implications. The report as a whole underscores the risks that AI poses to human intellectual autonomy and creativity, while proposing avenues (education, transparency, governance) to align AI development with the interests of humanity.
AI on Transportation
Abstract
The setting for the online transportation problem is a metric space $M$, populated by $m$ parking garages of varying capacities. Over time cars arrive in $M$, and must be irrevocably assigned to a parking garage upon arrival in a way that respects the garage capacities. The objective is to minimize the aggregate distance traveled by the cars. In 1998, Kalyanasundaram and Pruhs conjectured that there is a $(2m-1)$-competitive deterministic algorithm for the online transportation problem, matching the optimal competitive ratio for the simpler online metric matching problem. Recently, Harada and Itoh presented the first $O(m)$-competitive deterministic algorithm for the online transportation problem. Our contribution is an alternative algorithm design and analysis that we believe is simpler.
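For readers new to the setting, here is a short Python sketch of the online assignment constraint; the greedy nearest-garage rule below is only an expository baseline, not the paper's $O(m)$-competitive algorithm.

    # Online transportation: each arriving car is irrevocably assigned to a
    # garage with spare capacity. Greedy baseline for exposition only.

    def assign_online(cars, garages, capacities, dist):
        remaining = list(capacities)
        total, assignment = 0.0, []
        for car in cars:
            # Nearest garage that still has capacity (assumes total capacity
            # is at least the number of cars).
            j = min((g for g in range(len(garages)) if remaining[g] > 0),
                    key=lambda g: dist(car, garages[g]))
            remaining[j] -= 1
            assignment.append(j)
            total += dist(car, garages[j])
        return assignment, total

    # Example on the real line: garages at 0 and 10 with capacities 2 and 1.
    dist = lambda a, b: abs(a - b)
    print(assign_online([1.0, 2.0, 9.0], [0.0, 10.0], [2, 1], dist))

Greedy can be far from optimal in the worst case, which is why competitive analysis against the offline optimum is the interesting question.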
AI Energy Consumption
Abstract
The increasing computational demands of training neural networks lead to concerning growth in energy consumption. While parallelization has enabled scaling up model and dataset sizes and has accelerated training, its impact on energy consumption is often overlooked. To close this research gap, we conducted scaling experiments for data-parallel training of two models, ResNet50 and FourCastNet, and evaluated the impact of parallelization parameters, i.e., GPU count, global batch size, and local batch size, on predictive performance, training time, and energy consumption. We show that energy consumption scales approximately linearly with the consumed resources, i.e., GPU hours; however, the respective scaling factor differs substantially between distinct model trainings and hardware, and is systematically influenced by the number of samples and gradient updates per GPU hour. Our results shed light on the complex interplay of scaling up neural network training and can inform future developments towards more sustainable AI research.
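The reported linear relationship lends itself to a back-of-the-envelope estimate, sketched below in Python; the scaling factor k is hardware- and model-specific and must be measured, so the value used here is a made-up placeholder, not a number from the paper.

    # E ~= k * GPU-hours, with k measured per model/hardware combination.
    K_WH_PER_GPU_HOUR = 250.0  # placeholder, NOT a value from the paper

    def estimated_energy_wh(num_gpus: int, wall_clock_hours: float) -> float:
        gpu_hours = num_gpus * wall_clock_hours
        return K_WH_PER_GPU_HOUR * gpu_hours

    # Example: 8 GPUs for 12 hours -> 96 GPU-hours -> 24 kWh under this k.
    print(estimated_energy_wh(8, 12.0) / 1000, "kWh")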
AI on Food
Abstract
We present MM-Food-100K, a public 100,000-sample multimodal food intelligence dataset with verifiable provenance. It is a curated subset (approximately 10%) of an original 1.2-million-sample, quality-accepted corpus of food images annotated with a wide range of information (such as dish name and region of creation). The corpus was collected over six weeks from over 87,000 contributors using the Codatta contribution model, which combines community sourcing with configurable AI-assisted quality checks; each submission is linked to a wallet address in a secure off-chain ledger for traceability, with a full on-chain protocol on the roadmap. We describe the schema, pipeline, and QA process, and validate utility by fine-tuning large vision-language models (ChatGPT 5, ChatGPT OSS, Qwen-Max) on image-based nutrition prediction. Fine-tuning yields consistent gains over out-of-the-box baselines across standard metrics; we report results primarily on the MM-Food-100K subset. We release MM-Food-100K for free public access and retain approximately 90% of the corpus for potential commercial access with revenue sharing to contributors.
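For orientation, a hypothetical shape of one record is sketched below in Python; the actual field names and schema are defined by the dataset release, so treat everything here as an assumption.

    # Hypothetical MM-Food-100K record, for orientation only.
    record = {
        "image_path": "images/000042.jpg",
        "dish_name": "momo",
        "region_of_creation": "Nepal",
        "nutrition": {"kcal": 310, "protein_g": 14},  # assumed fields
        "contributor_wallet": "0x...",  # off-chain ledger link for provenance
    }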

Interests not found

We did not find any papers matching the interests below. Try other terms, and consider whether the content exists on arxiv.org.
  • AI Water Consumption
  • AI for Social Fairness
  • AI on Air
  • AI on Energy
  • AI for Social Justice
  • AI on Water
You can edit or add interests at any time.

Unsubscribe from these updates