Abstract
AI technologies are increasingly deployed in high-stakes domains such as
education, healthcare, law, and agriculture to address complex challenges in
non-Western contexts. This paper examines eight real-world deployments spanning
seven countries and 18 languages, combining 17 interviews with AI developers
and domain experts with secondary research. Our findings identify six
cross-cutting factors (Language, Domain, Demography, Institution, Task, and
Safety) that structured how systems were designed and deployed. These factors
were shaped by sociocultural (diversity, practices), institutional (resources,
policies), and technological (capabilities, limits) influences. We find that
building AI systems required extensive collaboration between AI developers and
domain experts. Notably, human resources proved more critical to achieving safe
and effective systems in high-stakes domains than technological expertise
alone. We present an analytical framework that synthesizes these dynamics and
conclude with recommendations for designing AI for social good systems that are
culturally grounded, equitable, and responsive to the needs of non-Western
contexts.