University of Oxford, UK
Abstract
Probabilistic model checking is an approach to the formal modelling and
analysis of stochastic systems. Over the past twenty-five years, the number of
different formalisms and techniques developed in this field has grown
considerably, as has the range of problems to which it has been applied. In
this paper, we identify the main application domains in which probabilistic
model checking has proved valuable and discuss how these have evolved over
time. We summarise the key strands of the underlying theory and technologies
that have contributed to these advances, and highlight examples which
illustrate the benefits that probabilistic model checking can bring. The aim is
to inform potential users of these techniques and to guide future developments
in the field.
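As a concrete illustration of the kind of analysis the paper surveys, the sketch below computes a reachability probability on a small discrete-time Markov chain by value iteration, which is one of the core computations behind probabilistic model checking. The chain, its state names, and the transition probabilities are invented for illustration and are not taken from the paper.

```python
# Minimal sketch: probability of eventually reaching a "goal" state in a
# small discrete-time Markov chain, computed by value iteration.
# The chain and its transition probabilities are illustrative only.

P = {
    "init": {"init": 0.2, "try": 0.8},
    "try":  {"goal": 0.7, "fail": 0.2, "try": 0.1},
    "goal": {"goal": 1.0},
    "fail": {"fail": 1.0},
}

def reachability_probability(transitions, goal, iterations=1000):
    """Iteratively solves x[s] = sum_t P(s, t) * x[t] with x[goal] fixed to 1,
    which converges to the probability of eventually reaching `goal`."""
    x = {s: 1.0 if s == goal else 0.0 for s in transitions}
    for _ in range(iterations):
        for s in transitions:
            if s != goal:
                x[s] = sum(p * x[t] for t, p in transitions[s].items())
    return x

if __name__ == "__main__":
    probs = reachability_probability(P, "goal")
    print({s: round(v, 4) for s, v in probs.items()})
    # From "try", P(reach goal) = 0.7 / (0.7 + 0.2) ~= 0.7778,
    # and "init" inherits the same value since it eventually moves to "try".
```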
AI Insights
- PRISM automates verification for Markov decision processes and continuous-time Markov chains.
- Acceptance sampling for discrete-event systems uses probabilistic verification to bound failure rates (see the sketch after this list).
- State-space explosion forces symbolic and compositional techniques to keep runtimes tractable.
- Effective use requires a strong grasp of probability theory, stochastic processes, and formal modeling.
- Foundational books include Probabilistic Model Checking: Principles and Applications and Model Checking for Probabilistic Systems.
- ICCAD and FMSE workshops drive cross-disciplinary advances between hardware design and formal methods.
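To make the acceptance-sampling bullet concrete, here is a minimal Python sketch of statistical model checking: the number of simulation runs is chosen with the standard Chernoff-Hoeffding (Okamoto) bound, and the system is accepted only if the estimated failure rate plus the error margin stays below a threshold. The simulator stub, the threshold, and all function names are illustrative assumptions, not PRISM's API.

```python
import math
import random

def required_samples(eps: float, delta: float) -> int:
    """Chernoff-Hoeffding (Okamoto) bound: with N >= ln(2/delta) / (2*eps^2)
    runs, the empirical failure rate deviates from the true one by more than
    eps with probability at most delta."""
    return math.ceil(math.log(2.0 / delta) / (2.0 * eps ** 2))

def simulate_trace(failure_prob: float, rng: random.Random) -> bool:
    """Placeholder for one discrete-event simulation run; returns True if the
    run violates the property. A Bernoulli draw stands in for a real simulator."""
    return rng.random() < failure_prob

def acceptance_sample(threshold: float, eps: float, delta: float,
                      true_failure_prob: float = 0.01,
                      seed: int = 0) -> tuple[bool, float]:
    """Accepts the system if the estimated failure rate plus the error margin
    stays below the given threshold."""
    rng = random.Random(seed)
    n = required_samples(eps, delta)
    failures = sum(simulate_trace(true_failure_prob, rng) for _ in range(n))
    p_hat = failures / n
    return p_hat + eps <= threshold, p_hat

if __name__ == "__main__":
    accepted, p_hat = acceptance_sample(threshold=0.05, eps=0.01, delta=0.01)
    print(f"runs={required_samples(0.01, 0.01)}, p_hat={p_hat:.4f}, accepted={accepted}")
```

Tightening eps or delta increases the number of required runs quadratically or logarithmically, respectively, which is the usual cost/confidence trade-off in this setting.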
Abstract
Industrial monitoring systems, especially when deployed in Industry 4.0
environments, are experiencing a shift in paradigm from traditional rule-based
architectures to data-driven approaches leveraging machine learning and
artificial intelligence. This study presents a comparison between these two
methodologies, analyzing their respective strengths, limitations, and
application scenarios, and proposes a basic framework to evaluate their key
properties. Rule-based systems offer high interpretability, deterministic
behavior, and ease of implementation in stable environments, making them ideal
for regulated industries and safety-critical applications. However, they face
challenges with scalability, adaptability, and performance in complex or
evolving contexts. Conversely, data-driven systems excel in detecting hidden
anomalies, enabling predictive maintenance and dynamic adaptation to new
conditions. Despite their high accuracy, these models face challenges related
to data availability, explainability, and integration complexity. The paper
suggests hybrid solutions as a promising direction, combining the
transparency of rule-based logic with the analytical power of machine learning.
Our hypothesis is that the future of industrial monitoring lies in intelligent,
synergistic systems that leverage both expert knowledge and data-driven insights.
This dual approach enhances resilience, operational efficiency, and trust,
paving the way for smarter and more flexible industrial environments.
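To make the proposed hybrid direction concrete, the following is a minimal sketch, assuming a single numeric sensor stream: a deterministic rule (a hard engineering limit) is combined with a simple data-driven check (a rolling z-score over recent readings). The class, thresholds, and sample readings are hypothetical and only illustrate how the two layers can coexist; a production system would use richer models and multiple signals.

```python
from collections import deque
from dataclasses import dataclass
from statistics import mean, stdev

@dataclass
class Alert:
    source: str   # "rule" or "model"
    reason: str

class HybridMonitor:
    """Combines a fixed rule (absolute safety limit) with a simple
    data-driven check (rolling z-score) over one sensor stream."""

    def __init__(self, hard_limit: float, window: int = 50, z_threshold: float = 3.0):
        self.hard_limit = hard_limit        # rule-based: deterministic, auditable limit
        self.window = deque(maxlen=window)  # data-driven: recent history of readings
        self.z_threshold = z_threshold

    def update(self, value: float) -> list[Alert]:
        alerts = []
        # Rule-based check: interpretable and predictable.
        if value > self.hard_limit:
            alerts.append(Alert("rule", f"value {value:.2f} exceeds limit {self.hard_limit:.2f}"))
        # Data-driven check: flags deviations from recent behaviour even
        # below the hard limit (e.g. a drift preceding a failure).
        if len(self.window) >= 10:
            mu, sigma = mean(self.window), stdev(self.window)
            if sigma > 0 and abs(value - mu) / sigma > self.z_threshold:
                alerts.append(Alert("model", f"z-score anomaly at value {value:.2f}"))
        self.window.append(value)
        return alerts

if __name__ == "__main__":
    monitor = HybridMonitor(hard_limit=90.0)
    readings = [70 + 0.1 * i for i in range(60)] + [82.0, 95.0]
    for r in readings:
        for alert in monitor.update(r):
            print(alert)
```

In this toy run the 82.0 reading triggers only the data-driven check (it is unusual for the recent window but below the hard limit), while 95.0 also triggers the rule, illustrating how the two layers cover complementary failure modes.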