Institute for Futures Studies
Abstract
As artificial intelligence rapidly transforms society, developers and
policymakers struggle to anticipate which applications will face public moral
resistance. We propose that these judgments are not idiosyncratic but
systematic and predictable. In a large, preregistered study (N = 587, U.S.
representative sample), we used a comprehensive taxonomy of 100 AI applications
spanning personal and organizational contexts, including both functional uses
and the moral treatment of AI itself. In participants' collective judgment,
applications ranged from highly unacceptable to fully acceptable. We found this
variation was strongly predictable: five core moral qualities (perceived risk,
benefit, dishonesty, unnaturalness, and reduced accountability) collectively
explained over 90% of the variance in acceptability ratings. The framework
demonstrated strong predictive power across all domains and successfully
predicted individual-level judgments for held-out applications. These findings
reveal that a structured moral psychology underlies public evaluation of new
technologies, offering a powerful tool for anticipating public resistance and
guiding responsible innovation in AI.
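To make the headline result concrete: treating each of the 100 applications as an observation, the reported finding corresponds to an application-level linear model of roughly the form below. This is a sketch; the exact specification, scaling, and coefficient signs are our assumptions and are not stated in the abstract.

\[ \text{Acceptability}_i = \beta_0 + \beta_1\,\text{Risk}_i + \beta_2\,\text{Benefit}_i + \beta_3\,\text{Dishonesty}_i + \beta_4\,\text{Unnaturalness}_i + \beta_5\,\text{ReducedAccountability}_i + \varepsilon_i, \qquad R^2 > 0.90 \]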
Abstract
This study provides an in-depth analysis of the ethics and trustworthiness
challenges emerging alongside the rapid advancement of generative artificial
intelligence (AI) technologies and proposes a comprehensive framework for their
systematic evaluation. While generative AI, such as ChatGPT, demonstrates
remarkable innovative potential, it simultaneously raises ethical and social
concerns, including bias, harmfulness, copyright infringement, privacy
violations, and hallucination. Current AI evaluation methodologies, which
mainly focus on performance and accuracy, are insufficient to address these
multifaceted issues. Thus, this study emphasizes the need for new
human-centered criteria that also reflect social impact. To this end, it
identifies key dimensions for evaluating the ethics and trustworthiness of
generative AI: fairness, transparency, accountability, safety, privacy,
accuracy, consistency, robustness, explainability, copyright and intellectual
property protection, and source traceability. It then develops detailed indicators
and assessment methodologies for each. Moreover, it provides a comparative
analysis of AI ethics policies and guidelines in South Korea, the United
States, the European Union, and China, deriving key approaches and implications
from each. The proposed framework applies across the AI lifecycle and
integrates technical assessments with multidisciplinary perspectives, thereby
offering practical means to identify and manage ethical risks in real-world
contexts. Ultimately, the study establishes an academic foundation for the
responsible advancement of generative AI and delivers actionable insights for
policymakers, developers, users, and other stakeholders, supporting the
positive societal contributions of AI technologies.
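As an illustration only, the dimensions named above could be encoded as a machine-readable checklist along the following lines; the structure and all names are hypothetical, and the empty indicator lists are placeholders rather than the paper's actual indicators.

# Hypothetical sketch: the framework's evaluation dimensions as a checklist.
# Only the dimension names come from the abstract; everything else is illustrative.
from dataclasses import dataclass, field

@dataclass
class Dimension:
    name: str
    indicators: list[str] = field(default_factory=list)  # detailed indicators, to be filled per dimension

FRAMEWORK = [
    Dimension(n) for n in [
        "fairness", "transparency", "accountability", "safety", "privacy",
        "accuracy", "consistency", "robustness", "explainability",
        "copyright and intellectual property protection", "source traceability",
    ]
]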