Abstract
Climate data science faces persistent barriers stemming from the fragmented
nature of data sources, heterogeneous formats, and the steep technical
expertise required to identify, acquire, and process datasets. These challenges
limit participation, slow discovery, and reduce the reproducibility of
scientific workflows. In this paper, we present a proof of concept for
addressing these barriers through the integration of a curated knowledge graph
(KG) with AI agents designed for cloud-native scientific workflows. The KG
provides a unifying layer that organizes datasets, tools, and workflows, while
AI agents -- powered by generative AI services -- enable natural language
interaction, automated data access, and streamlined analysis. Together, these
components drastically lower the technical threshold for engaging in climate
data science, enabling non-specialist users to identify and analyze relevant
datasets. By leveraging existing cloud-ready API data portals, we demonstrate
that "a knowledge graph is all you need" to unlock scalable and agentic
workflows for scientific inquiry. The open-source design of our system further
supports community contributions, ensuring that the KG and associated tools can
evolve as a shared commons. Our results illustrate a pathway toward
democratizing access to climate data and establishing a reproducible,
extensible framework for human--AI collaboration in scientific research.
KT
Abstract
KT developed a Responsible AI (RAI) assessment methodology and risk
mitigation technologies to ensure the safety and reliability of AI services. By
analyzing the implementation of the Basic Act on AI and global AI governance
trends, we established a distinctive approach to regulatory compliance that
systematically identifies and manages potential risk factors from AI
development through operation. We present a reliable assessment methodology
that verifies model safety and robustness based on KT's AI risk taxonomy,
tailored to the domestic (Korean) environment, along with practical tools for
managing and mitigating the identified risks. Alongside this report, we release
our proprietary guardrail, SafetyGuard, which blocks harmful responses from AI
models in real time, supporting safer AI development across the domestic
ecosystem. We believe these research outcomes provide valuable insights for
organizations seeking to develop Responsible AI.
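As a rough illustration of how a risk taxonomy with measurable indicators and clause mappings could be encoded to drive automated audit reports, consider the minimal sketch below. The dimension names follow the insights listed after it, while the field names, thresholds, and example entry are hypothetical rather than KT's actual schema.

```python
# Minimal sketch: a risk taxonomy where each risk carries measurable
# indicators and a mapping to regulatory clauses, enabling automated audit
# reports. Dimension names follow the report's taxonomy; everything else
# (field names, thresholds, the example entry) is hypothetical.
from dataclasses import dataclass, field
from enum import Enum

class Dimension(Enum):
    DATA = "data"
    MODEL = "model"
    DEPLOYMENT = "deployment"
    SOCIETAL = "societal"

@dataclass
class Risk:
    name: str
    dimension: Dimension
    indicators: dict[str, float]  # measurable indicator -> allowed threshold
    legal_clauses: list[str] = field(default_factory=list)  # e.g. Basic Act on AI articles

    def violations(self, measured: dict[str, float]) -> list[str]:
        """Indicators whose measured value exceeds the allowed threshold."""
        return [k for k, limit in self.indicators.items()
                if measured.get(k, 0.0) > limit]

# Hypothetical example entry:
toxicity = Risk(
    name="harmful-response rate",
    dimension=Dimension.MODEL,
    indicators={"toxicity_rate": 0.01},
    legal_clauses=["Basic Act on AI, Art. X (placeholder)"],
)
print(toxicity.violations({"toxicity_rate": 0.03}))  # -> ['toxicity_rate']
```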
AI Insights
- The risk taxonomy categorizes threats into data, model, deployment, and societal dimensions, each with measurable indicators.
- A multi-stage assessment pipeline integrates static code analysis, adversarial testing, and human-in-the-loop audits to quantify robustness.
- SafetyGuard employs a lightweight transformer-based policy network that intercepts outputs in real time, achieving <5 ms latency on edge devices (a minimal sketch of this interception pattern follows the list).
- Compliance mapping aligns each risk factor with specific clauses of the Basic Act on AI, enabling automated audit reports.
- Pilot deployments in Korean telecom and finance sectors demonstrated a 30% reduction in policy-violating incidents after Guardrail integration.
- The report proposes a future research agenda on explainable mitigation strategies and cross-border data-sharing protocols.
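As referenced above, here is a minimal sketch of the real-time interception pattern a guardrail like SafetyGuard implements: a lightweight classifier scores each candidate response and substitutes a refusal when the score crosses a policy threshold. The classifier (unitary/toxic-bert), the threshold, and the refusal text are illustrative assumptions; KT's actual policy network is proprietary and not described at this level in the summary.

```python
# Minimal sketch of a real-time output guardrail. Any lightweight toxicity
# classifier can back it; "unitary/toxic-bert" is a public example used here
# purely for illustration -- it is not KT's SafetyGuard policy network.
from transformers import pipeline

_scorer = pipeline("text-classification", model="unitary/toxic-bert")
POLICY_THRESHOLD = 0.5                        # hypothetical policy cut-off
REFUSAL = "I can't help with that request."   # hypothetical refusal message

def guard(response: str) -> str:
    """Return the response unchanged, or a refusal if it scores as harmful."""
    result = _scorer(response, truncation=True)[0]  # {"label": ..., "score": ...}
    if "toxic" in result["label"].lower() and result["score"] >= POLICY_THRESHOLD:
        return REFUSAL
    return response

def chat(llm, prompt: str) -> str:
    """Intercept the model's output before it reaches the user."""
    return guard(llm(prompt))
```

Keeping the scorer small is what makes the low-latency, on-device interception reported in the insights plausible: the filter adds one forward pass of a compact classifier per response rather than a second large-model call.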