CyberAgent, The University
Abstract
The rapid development of artificial intelligence (AI) has fundamentally
transformed creative work practices in the design industry. Existing studies
have identified both opportunities and challenges for creative practitioners in
their collaboration with generative AI and explored ways to facilitate
effective human-AI co-creation. However, there is still limited understanding of how designers collaborate with AI systems that support creative processes in ways distinct from generative AI. To address this gap, this study focuses on designers' collaboration with decision-making AI, which supports the convergent process in the creative workflow, as opposed to the divergent process supported by generative AI. Specifically, we conducted a case study at
an online advertising design company to explore how professional graphic
designers at the company perceive the impact of decision-making AI on their
creative work practices. The case company incorporated an AI system that
predicts the effectiveness of advertising design into the design workflow as a
decision-making support tool. Interviews with 12 designers revealed how they trust and rely on the AI, its perceived benefits and challenges, and their strategies for navigating those challenges. Based on the
findings, we discuss design recommendations for integrating decision-making AI
into the creative design workflow.
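To make the divergent/convergent distinction concrete, the following is a minimal sketch, not the case company's actual system: a hypothetical predictor (predict_effectiveness) scores candidate ad designs and surfaces a shortlist, supporting the convergent step of narrowing options rather than the divergent step of generating new ones. All features, weights, and names are illustrative assumptions.

```python
# Minimal sketch (not the case company's system): a decision-support step
# that ranks candidate ad designs by a predicted effectiveness score,
# helping designers converge on a shortlist. Features and weights are
# hypothetical placeholders.

from dataclasses import dataclass

@dataclass
class AdCandidate:
    name: str
    contrast: float      # hypothetical visual feature, 0..1
    text_density: float  # hypothetical layout feature, 0..1
    brand_fit: float     # hypothetical brand-consistency feature, 0..1

def predict_effectiveness(ad: AdCandidate) -> float:
    """Stand-in for a learned model that predicts ad effectiveness (e.g., CTR)."""
    weights = {"contrast": 0.4, "text_density": -0.3, "brand_fit": 0.5}
    return (weights["contrast"] * ad.contrast
            + weights["text_density"] * ad.text_density
            + weights["brand_fit"] * ad.brand_fit)

def shortlist(candidates, k=2):
    """Convergence support: surface the top-k candidates for designer review."""
    return sorted(candidates, key=predict_effectiveness, reverse=True)[:k]

if __name__ == "__main__":
    drafts = [
        AdCandidate("draft_a", contrast=0.8, text_density=0.6, brand_fit=0.7),
        AdCandidate("draft_b", contrast=0.5, text_density=0.2, brand_fit=0.9),
        AdCandidate("draft_c", contrast=0.3, text_density=0.8, brand_fit=0.4),
    ]
    for ad in shortlist(drafts):
        print(f"{ad.name}: predicted score {predict_effectiveness(ad):.2f}")
```

The point of the design is that the model never produces designs; it only orders designer-made candidates, leaving the final selection to the designer.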
Abstract
Selecting a college major is a difficult decision for many incoming freshmen.
Traditional academic advising is often hindered by long wait times,
intimidating environments, and limited personalization. AI chatbots present an opportunity to address these challenges. However, AI systems also have the potential to generate biased responses, including prejudices related to race, gender, socioeconomic status, and disability. These biases risk turning away prospective students and undermining the reliability of AI systems. This study aims to develop a program-specific AI chatbot for the University of Maryland (UMD) A. James Clark School of Engineering. Our research team analyzed and mitigated potential
biases in its responses. We tested the chatbot on diverse student queries and scored its responses on accuracy, relevance, personalization, and the presence of bias. The results demonstrate that with careful
prompt engineering and bias mitigation strategies, AI chatbots can provide
high-quality, unbiased academic advising support, achieving mean scores of 9.76
for accuracy, 9.56 for relevance, and 9.60 for personalization with no
stereotypical biases found in the sample data. However, given the small sample size and limited timeframe, our AI model may not fully capture the nuances of student queries in engineering academic advising. Nevertheless, these findings
will inform best practices for building ethical AI systems in higher education,
offering tools to complement traditional advising and address the inequities
faced by many underrepresented and first-generation college students.
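The abstract does not specify the scoring procedure, so below is a minimal sketch under assumptions: a hypothetical evaluation harness that aggregates per-response rubric scores (accuracy, relevance, personalization) and counts flagged biased responses, the kind of pipeline that could produce the reported means. All queries, scores, and names are fabricated for illustration.

```python
# Hypothetical evaluation harness (not the study's actual instrument):
# each response is scored on the abstract's metrics, and means are
# aggregated across queries. In practice the scores would come from
# human raters or an automated rubric; the data below is illustrative.

from statistics import mean

# (query, {metric: score}) pairs; illustrative placeholders only.
scored_responses = [
    ("What major fits my interest in robotics?",
     {"accuracy": 10, "relevance": 9, "personalization": 10, "bias_present": False}),
    ("Can a first-generation student get advising support?",
     {"accuracy": 9, "relevance": 10, "personalization": 9, "bias_present": False}),
]

def aggregate(results):
    """Mean score per numeric metric, plus a count of flagged biased responses."""
    metrics = ("accuracy", "relevance", "personalization")
    summary = {m: mean(r[m] for _, r in results) for m in metrics}
    summary["biased_responses"] = sum(r["bias_present"] for _, r in results)
    return summary

if __name__ == "__main__":
    print(aggregate(scored_responses))
    # e.g. {'accuracy': 9.5, 'relevance': 9.5, 'personalization': 9.5, 'biased_responses': 0}
```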