Abstract
Personal service robots are increasingly used in domestic settings to assist
older adults and people requiring support. Effective operation involves not
only physical interaction but also the ability to interpret dynamic
environments, understand tasks, and choose appropriate actions based on
context. This requires integrating both hardware components (e.g. sensors,
actuators) and software systems capable of reasoning about tasks, environments,
and robot capabilities. Frameworks such as the Robot Operating System (ROS)
provide open-source tools that help connect low-level hardware with
higher-level functionalities. However, real-world deployments remain tightly
coupled to specific platforms. As a result, solutions are often isolated and
hard-coded, limiting interoperability, reusability, and knowledge sharing.
Ontologies and knowledge graphs offer a structured way to represent tasks,
environments, and robot capabilities. Existing ontologies, such as the
Socio-physical Model of Activities (SOMA) and the Descriptive Ontology for
Linguistic and Cognitive Engineering (DOLCE), provide models for activities,
spatial relationships, and reasoning structures. However, they often focus on
specific domains and do not fully capture the connections among environments,
actions, robot capabilities, and system-level integration. In this work, we
propose the Ontology for roBOts and acTions (OntoBOT), which extends existing
ontologies to provide a unified representation of tasks, actions, environments,
and capabilities. Our contributions are twofold: (1) we unify these aspects
into a cohesive ontology to support formal reasoning about task execution, and
(2) we demonstrate its generalizability by evaluating competency questions
across four embodied agents (TIAGo, HSR, UR3, and Stretch), showing how
OntoBOT enables context-aware reasoning, task-oriented execution, and knowledge
sharing in service robotics.