Executable Ontologies in Robotics: Event-Driven Semantic Control of Autonomous Systems

Robotic systems traditionally rely on layered control architectures where perception, planning, and execution are separated and coordinated through imperative code and state machines. While effective in constrained environments, this approach becomes fragile when robots must operate in dynamic, uncertain, or evolving contexts.

In this paper, Alexander Boldachev explores how executable ontologies can be used as a unifying semantic foundation for robotics. Instead of encoding robot behavior as hard-coded control logic, the proposed approach represents actions, states, constraints, and goals as executable semantic models driven by event-based causality.

The core idea is to treat robotic activity as a continuous flow of semantically typed events produced by agents (robots, sensors, controllers) interacting with their environment. These events are interpreted by a semantic execution engine that evaluates conditions, triggers actions, and updates the system state without relying on rigid procedural control flows.
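The mechanism described here can be sketched in a few lines. This is a minimal illustration, not the paper's implementation: the `Event`, `Rule`, and `SemanticEngine` names, and the obstacle-avoidance rule, are hypothetical and chosen only to show condition evaluation, action triggering, and state updates driven by typed events rather than procedural control flow.

```python
from dataclasses import dataclass
from typing import Callable

@dataclass
class Event:
    """A semantically typed event emitted by an agent (robot, sensor, controller)."""
    type: str
    payload: dict

@dataclass
class Rule:
    """A condition/action pair evaluated against incoming events and current state."""
    condition: Callable[[Event, dict], bool]
    action: Callable[[Event, dict], None]

class SemanticEngine:
    """Interprets the event stream: evaluates conditions, triggers actions, updates state."""
    def __init__(self):
        self.state = {}
        self.rules = []

    def add_rule(self, rule):
        # Rules can be registered (or removed) while the system is running.
        self.rules.append(rule)

    def dispatch(self, event):
        # No fixed control flow: every rule whose condition matches fires.
        for rule in self.rules:
            if rule.condition(event, self.state):
                rule.action(event, self.state)

engine = SemanticEngine()
# Hypothetical rule: an obstacle closer than 0.5 m switches the robot into "avoid" mode.
engine.add_rule(Rule(
    condition=lambda e, s: e.type == "obstacle_detected" and e.payload["distance"] < 0.5,
    action=lambda e, s: s.update(mode="avoid"),
))
engine.dispatch(Event("obstacle_detected", {"distance": 0.3}))
print(engine.state)  # {'mode': 'avoid'}
```

Note that the engine itself is generic: all robot-specific behavior lives in the declaratively registered rules, which is what makes the control model inspectable and modifiable at runtime.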

Key topics covered in the paper include:

  • Limitations of traditional control pipelines and finite-state machines in robotics

  • Event-driven semantic modeling of robot actions, perceptions, and goals

  • Executable ontologies as a coordination layer between perception, planning, and execution

  • Runtime adaptability and modification of robot behavior without code rewrites

  • Implications for multi-robot systems, human–robot interaction, and AI-driven autonomy
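The "runtime adaptability without code rewrites" point above can be made concrete with a small sketch. The rule schema (`on`, `field`, `below`, `set`) is a hypothetical illustration, not the paper's format: the idea is that a behavior change arrives as declarative data (e.g., over a message bus) and is compiled into an executable condition/action pair while the robot is running.

```python
import json

def compile_rule(spec):
    """Turn a declarative rule description (data, not code) into (condition, action)."""
    def condition(event):
        return (event["type"] == spec["on"]
                and event["payload"].get(spec["field"], float("inf")) < spec["below"])
    def action(state):
        state.update(spec["set"])
    return condition, action

rules = []
state = {"mode": "navigate"}

# A new behavior arrives as JSON at runtime -- no control code is rewritten or redeployed.
spec = json.loads(
    '{"on": "battery_status", "field": "charge", "below": 0.2,'
    ' "set": {"mode": "return_to_dock"}}'
)
rules.append(compile_rule(spec))

def dispatch(event):
    for cond, act in rules:
        if cond(event):
            act(state)

dispatch({"type": "battery_status", "payload": {"charge": 0.15}})
print(state["mode"])  # return_to_dock
```

Contrast this with a finite-state machine, where the same change would require editing transition code and restarting the controller.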

By embedding semantics, causality, and temporal structure directly into the control model, this approach enables robotic systems that are more adaptive, explainable, and composable. It also provides a natural bridge between robotics, knowledge graphs, and AI agents, making it especially relevant for autonomous fleets, service robots, and complex cyber-physical systems.

Read the full paper on arXiv: https://arxiv.org/abs/2511.15274