Abstract
The conceptual framework proposed in this paper centers on the development of
a deliberative moral reasoning system: one designed to process complex moral
situations by generating, filtering, and weighing normative arguments drawn
from diverse ethical perspectives. While the framework is rooted in Machine
Ethics, it also makes a substantive contribution to Value Alignment by
outlining a system architecture that links structured moral reasoning to action
under time constraints. Grounded in normative moral pluralism, the system is
designed not to imitate behavior but to deliberate over structured moral
content in a reason-sensitive, transparent, and principled manner. Beyond its
deliberative role, it also serves as the conceptual foundation for a novel
two-level architecture, in which it acts as a moral reasoning teacher that
trains faster models to respond in real time without reproducing the full
structure of deliberative reasoning. Together, the deliberative and intuitive
components are designed to
enable both deep reflection and responsive action. A key design feature is the
dual-hybrid structure: a universal layer that defines a moral threshold through
top-down and bottom-up learning, and a local layer that learns to weigh
competing considerations in context while integrating culturally specific
normative content, so long as that content remains within the universal
threshold. By
extending the notion of moral complexity to include not only conflicting
beliefs but also multifactorial dilemmas, multiple stakeholders, and the
integration of non-moral considerations, the framework aims to support morally
grounded decision-making in realistic, high-stakes contexts.
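
To make the two-level architecture concrete, the following is a minimal Python
sketch under illustrative assumptions of our own: the class names, the numeric
threshold, and the scoring rule are hypothetical placeholders, not the paper's
specification. It shows a deliberative teacher that rules out candidate actions
falling below a universal moral threshold and ranks the remainder by locally
learned argument weights, with the faster intuitive model trained on the
teacher's verdicts.

```python
# Illustrative sketch only: names, scores, and the threshold value are
# hypothetical assumptions, not the paper's specification.
from dataclasses import dataclass, field

@dataclass
class Argument:
    claim: str        # a normative argument for or against an action
    perspective: str  # the ethical tradition it draws on
    weight: float     # context-sensitive weight learned by the local layer

@dataclass
class Action:
    name: str
    arguments: list[Argument] = field(default_factory=list)
    universal_score: float = 0.0  # from the universal (top-down + bottom-up) layer

UNIVERSAL_THRESHOLD = 0.5  # actions scoring below this are ruled out outright

def deliberate(candidates: list[Action]) -> Action | None:
    """Deliberative (teacher) level: discard actions that fall below the
    universal moral threshold, then rank the rest by the locally weighted
    balance of normative arguments."""
    admissible = [a for a in candidates if a.universal_score >= UNIVERSAL_THRESHOLD]
    return max(
        admissible,
        key=lambda a: sum(arg.weight for arg in a.arguments),
        default=None,
    )

# The intuitive (student) level would be trained on the teacher's verdicts,
# e.g. pairs of (situation features, deliberate(...) outcome), so that it can
# act in real time without re-running the full deliberation.
if __name__ == "__main__":
    actions = [
        Action("disclose the error",
               [Argument("honesty owed to users", "deontological", 0.8)], 0.9),
        Action("conceal the error",
               [Argument("avoids short-term harm", "consequentialist", 0.6)], 0.2),
    ]
    choice = deliberate(actions)
    print(choice.name if choice else "no admissible action")
```
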
Abstract
As AI systems become increasingly embedded in organizational workflows and
consumer applications, ethical principles such as fairness, transparency, and
robustness have been widely endorsed in policy and industry guidelines.
However, empirical evidence remains scarce on whether these principles are
recognized, valued, or consequential from the users' perspective. This study
investigates the link between ethical AI and user satisfaction by analyzing
over 100,000 user reviews of AI products from G2. Using transformer-based
language models, we measure sentiment across seven ethical dimensions defined
by the EU Ethics Guidelines for Trustworthy AI. Our findings show that all
seven dimensions are positively associated with user satisfaction. Yet, this
relationship varies systematically across user and product types. Technical
users and reviewers of AI development platforms more frequently discuss
system-level concerns (e.g., transparency, data governance), while
non-technical users and reviewers of end-user applications emphasize
human-centric dimensions (e.g., human agency, societal well-being). Moreover,
the association between ethical AI and user satisfaction is significantly
stronger for non-technical users and end-user applications across all
dimensions. Our results highlight the importance of ethical AI design from
users' perspectives and underscore the need to account for contextual
differences across user roles and product types.
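
As a rough illustration of the measurement step, the sketch below uses
off-the-shelf Hugging Face pipelines to tag a review sentence with the seven
requirements from the EU Ethics Guidelines for Trustworthy AI and to score its
sentiment. The model choices, the relevance cutoff, and the two-stage design
are assumptions made for illustration, not the study's actual pipeline.

```python
# Illustrative sketch (not the authors' pipeline): tag which ethical
# dimension a review sentence discusses, then score its sentiment.
from transformers import pipeline

# The seven requirements from the EU Ethics Guidelines for Trustworthy AI.
EU_DIMENSIONS = [
    "human agency and oversight",
    "technical robustness and safety",
    "privacy and data governance",
    "transparency",
    "diversity, non-discrimination and fairness",
    "societal and environmental well-being",
    "accountability",
]

# Zero-shot classification assigns relevance scores per dimension ...
classifier = pipeline("zero-shot-classification", model="facebook/bart-large-mnli")
# ... and a generic sentiment model scores how positively it is discussed.
sentiment = pipeline("sentiment-analysis")

review = "The dashboard makes it clear how the model reached each decision."
tags = classifier(review, candidate_labels=EU_DIMENSIONS, multi_label=True)
tone = sentiment(review)[0]

for label, score in zip(tags["labels"], tags["scores"]):
    if score > 0.5:  # assumed relevance cutoff, chosen for illustration
        print(f"{label}: {tone['label']} ({tone['score']:.2f})")
```
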