Active automata learning is a technique for querying black-box systems and modelling their behaviour. In this paper, we aim to apply active learning in parts. We formalise the conditions on systems---with a decomposable set of actions---that make learning in parts possible. Such systems can be decomposed along non-intersecting subsets of actions, and learning the resulting subsystems/components requires less time and fewer resources. We prove that the technique works both for two components and for an arbitrary number of components. We illustrate the usefulness of this technique through a classical example and through a real example from industry.
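The key idea of learning in parts can be illustrated by trace projection: when the system's alphabet splits into non-intersecting action subsets handled by independent components, a trace of the whole system projects onto a trace of each component, which can then be learned separately. The following is a minimal sketch with illustrative action names, not the paper's formalisation:

```python
# Hypothetical sketch: project a system trace onto the disjoint
# action alphabets of its independent components.

def project(trace, alphabet):
    """Keep only the actions belonging to one component's alphabet."""
    return [a for a in trace if a in alphabet]

# Two components with non-intersecting action sets.
coffee_actions = {"grind", "brew"}
milk_actions = {"steam", "pour"}

# A trace of the composed system interleaves both components.
system_trace = ["grind", "steam", "brew", "pour"]

# Each component can be learned from its own projection alone.
print(project(system_trace, coffee_actions))  # ['grind', 'brew']
print(project(system_trace, milk_actions))    # ['steam', 'pour']
```

Each projected trace set is what a learner for the corresponding component would observe, which is why disjointness of the action subsets matters.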
Isabelle/SACM is a tool for automated construction of model-based assurance cases with integrated formal methods, based on the Isabelle proof assistant. Assurance cases show how a system is safe to operate, through a human-comprehensible argument demonstrating that the requirements are satisfied, using evidence of various provenances. They are usually required for certification of critical systems, often with evidence that originates from formal methods. Automating assurance cases increases rigour, and helps with maintenance and evolution. In this paper we apply Isabelle/SACM to a fragment of the assurance case for an autonomous underwater vehicle demonstrator. We encode the International System of Units (SI) in Isabelle, to allow modelling requirements and state spaces using physical units. We develop a behavioural model in the graphical RoboChart state machine language, embed the artifacts into Isabelle/SACM, and use it to demonstrate satisfaction of the requirements.
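The benefit of encoding SI units is that dimensionally inconsistent requirements are rejected mechanically. A minimal sketch of the underlying idea (a quantity paired with a vector of base-unit exponents), in Python rather than the paper's Isabelle encoding, with illustrative names:

```python
# Hypothetical sketch of dimension-checked quantities: a value plus
# exponents of the SI base units (metre, kilogram, second).
from dataclasses import dataclass

@dataclass(frozen=True)
class Quantity:
    value: float
    dims: tuple  # exponents of (metre, kilogram, second)

    def __mul__(self, other):
        # Multiplication adds the base-unit exponents.
        return Quantity(self.value * other.value,
                        tuple(a + b for a, b in zip(self.dims, other.dims)))

    def __add__(self, other):
        # Addition is only meaningful between quantities of one dimension.
        if self.dims != other.dims:
            raise TypeError("cannot add quantities with different dimensions")
        return Quantity(self.value + other.value, self.dims)

depth = Quantity(50.0, (1, 0, 0))   # 50 m
speed = Quantity(2.0, (1, 0, -1))   # 2 m/s
time = Quantity(5.0, (0, 0, 1))     # 5 s

travelled = speed * time            # 10 m, dims (1, 0, 0)
total = depth + travelled           # legal: both are lengths
# depth + speed would raise TypeError: the dimensions differ.
```

In the proof-assistant setting the same mismatch is a type error caught at specification time rather than a runtime exception.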
Formal methods used to validate software designs, like Alloy, OCL, and B, are powerful tools to analyze complex structures (e.g., architectures, object-relational mappings) captured as sets of relational constraints. However, their applicability is limited when software is subject to uncertainty (derived, e.g., from lack of control over third-party components, interaction with physical elements). In contrast, quantitative verification has emerged as a powerful way of providing quantitative guarantees about the performance, cost, and reliability of systems operating under uncertainty. However, quantitative verification methods do not retain the flexibility of relational modeling in describing structures, forcing engineers to trade structural exploration for analytic capabilities that concern probabilistic and other quantitative guarantees. This paper contributes a method (HaiQ) that enhances structural modeling/synthesis with quantitative guarantees in the style provided by quantitative verification. It includes a language for describing structure and (stochastic) behavior of systems, and a temporal logic that allows checking probability and reward-based properties over sets of feasible design alternatives implicitly described by the relational constraints in a HaiQ model. We report the results of applying a prototype tool in two domains, showing the feasibility of synthesizing structural designs that optimize probabilistic and other quantitative guarantees.
Cyber-Physical Systems (CPSs) are integrations of distributed computing systems with physical processes that monitor and control entities in a physical environment. Although the range of their applications includes several critical domains, the current trend is to verify CPSs with simulation-test systems rather than formal methodologies. In this paper, we test the effectiveness of statistical model checking, within the Modest Toolset, when analyzing the security of a non-trivial quadruple-tank water system equipped with an ad-hoc intrusion detection system (IDS) capable of mitigating attacks. Our goal is to evaluate the impact of three carefully chosen cyber-physical attacks, i.e., attacks targeting sensors and/or actuators of the system with potential consequences on the safety of the inner physical process. Our security analysis estimates both the physical impact of the attacks and the performance of the proposed IDS.
Many software applications contain confidential information, which has to be prevented from leaking through unauthorized access. To enforce confidentiality, there are language-based security mechanisms that rely on information flow control. Typically, these mechanisms work post-hoc by checking whether confidential data is accessed without authorization after the complete program is written. The disadvantage is that incomplete programs cannot be interpreted properly and information flow properties cannot be built in constructively. In this work, we present a methodology to construct programs incrementally using refinement rules to follow a lattice-based information flow policy. In every refinement step, confidentiality and functional correctness of the program are guaranteed, such that insecure programs are prohibited by construction. Our contribution is fourfold. We formalize refinement rules for the constructive information flow control methodology, prove soundness of the refinement rules, show that our approach is at least as expressive as standard language-based mechanisms for information flow, and implement it in a graphical editor called CorC. Our methodology is also usable for integrity properties, which are dual to confidentiality.
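The core check behind a lattice-based policy can be sketched in a few lines: information may only flow upwards in the lattice, so an assignment is secure exactly when every label occurring in the assigned expression flows to the label of the target. The following two-point lattice (low ⊑ high) is a generic illustration, not CorC's API:

```python
# Hypothetical sketch of a two-point security lattice and the flow
# check used by language-based information flow control.

# The "flows to" relation low ⊑ low, low ⊑ high, high ⊑ high.
LATTICE = {("low", "low"), ("low", "high"), ("high", "high")}

def flows_to(src, dst):
    return (src, dst) in LATTICE

def check_assignment(target_label, expr_labels):
    """x := e is secure iff every label in e flows to x's label."""
    return all(flows_to(l, target_label) for l in expr_labels)

# low := high leaks a secret and is rejected; high := low is fine.
print(check_assignment("low", ["high"]))   # False
print(check_assignment("high", ["low"]))   # True
```

In a refinement-based construction, a check of this kind is performed at every refinement step, so a program that would fail it can never be derived in the first place.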
In the context of robotic software, the selection of an appropriate planner is one of the most crucial software engineering decisions. Robot planners aim at computing plans (i.e., blueprints of actions) to accomplish a complex mission. While many planners have been proposed in the robotics literature, they are usually evaluated on showcase examples, making it hard to understand whether they can be effectively (re)used for realising complex missions, with heterogeneous robots, and in real-world scenarios.
In this paper we propose ENFORCE, a framework which allows wrapping FM-based planners into comprehensive software engineering tools, and considers complex robotic missions. ENFORCE relies on (i) realistic maps (e.g., fire escape maps) that describe the environment in which the robots are deployed; (ii) temporal logic for mission specification; and (iii) the Uppaal model checker to compute plans that satisfy mission specifications. We evaluated ENFORCE by analyzing how it supports computing plans in real-world scenarios, and by evaluating the generated plans in simulated and real environments. The results show that while ENFORCE is adequate for handling single-robot applications, state explosion still represents a major barrier for reusing existing planners in multi-robot applications.
A challenge that has gathered much attention in recent years is automated synthesis of correct-by-construction software systems from declarative specifications. The specification language is typically a subset of linear temporal logic called generalized reactivity of rank 1, for which there exists an efficient synthesis algorithm. Specifications in this language model the system as the interaction between an environment and a controller, the former satisfying a set of assumptions and the latter a set of guarantees. In order for a solution to exist, a sufficient set of assumptions implying the guarantees must be provided. The assumptions must be as general as possible and small enough to be intelligible to engineers who need to assess their consistency with the true environment where the synthesized controller will operate.
The search for such assumptions is generally a refinement approach driven by counterstrategies, characterizations of undesirable environment behaviors that force the violation of the guarantees; assumptions are progressively refined in order to exclude such behaviors. In this work we provide a heuristic to drive this counterstrategy-guided search towards smaller refinements. We define a concept of minimality of refinements with respect to counterstrategies and provide an algorithm that provably finds minimal refinements with little time overhead. We show experimentally that it consistently produces one or more shorter solutions than the state of the art for a set of popular case studies. We also demonstrate that in a popular case study (the AMBA-AHB protocol) our heuristic finds a close-to-optimal solution that cannot be found by previous fully automated approaches.
A wide range of interesting program properties are relational, i.e., they describe a relation between two program runs. Two prominent relational properties are regression verification (proving conditional program equivalence) and non-interference (proving the absence of information flow).
The verification of relational properties is hardly accessible to engineers due to the lack of appropriate specification languages for relational properties. In previous work, we introduced the concept of generalized test tables: a table-based specification language, which allows the tight temporal specification of functional (nonrelational) properties for reactive systems.
We introduce relational test tables, an extension of generalized test tables for the specification of relational properties. Relational test tables support the specification of k-safety properties (a superset of relational properties) between k ≥ 2 program runs. We show the applicability of relational test tables by specifying and verifying change scenarios and information flow of reactive systems. We provide an implementation of the verification pipeline for programs following the IEC 61131-3 coding standard at http://github.com/VerifAPS/verifaps-lib.
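Non-interference is the standard example of a 2-safety property: it relates two runs that agree on public (low) inputs but may differ on secret (high) inputs, and requires their public outputs to agree. A minimal illustration (identifiers are ours, not the verifaps API):

```python
# Hypothetical sketch of non-interference as a 2-safety property,
# checked over two runs of the same program.

def secure_program(low, high):
    # The output depends only on the low input.
    return low * 2

def leaky_program(low, high):
    # The high input influences the output: a forbidden flow.
    return low + high

def noninterferent(prog, low, high1, high2):
    """Two runs with equal low inputs must yield equal outputs."""
    return prog(low, high1) == prog(low, high2)

print(noninterferent(secure_program, 3, 10, 99))  # True
print(noninterferent(leaky_program, 3, 10, 99))   # False
```

A relational test table generalizes this pattern: instead of a single pair of inputs, it constrains the temporal behaviour of k runs of a reactive system relative to each other.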
We present a transformation-system-based technique in the framework of string solving, by reformulating a classical result from combinatorics on words, Levi's Lemma. We further enrich the induced rules by simplification steps based on results from the combinatorial theory of word equations, as well as by the addition of linear length constraints. This transformation-system approach cannot solve all equations efficiently by itself. To improve the efficiency of our transformation-system approach we integrate existing successful string solvers, which are called based on several heuristics. The experimental evaluation we performed shows that integrating our technique as an inprocessing step generally improves the performance of existing solvers.
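In its concrete form, Levi's Lemma states that if xu = yv, then the longer of x and y has the shorter as a prefix, so the common prefix can be peeled off and the equation reduced. A small sketch of this rewriting step (function names are illustrative, not the paper's rule set):

```python
# Sketch of a single Levi-style transformation step on a concrete
# word equation x·u = y·v.

def levi_step(x, u, y, v):
    """Given xu = yv, peel the shorter of x, y and return the reduced equation."""
    assert x + u == y + v, "not a valid equation"
    if len(x) >= len(y):
        assert x.startswith(y)   # Levi's Lemma: y is a prefix of x
        t = x[len(y):]           # x = y·t
        return (t + u, v)        # remaining equation: t·u = v
    else:
        assert y.startswith(x)   # symmetric case: x is a prefix of y
        t = y[len(x):]
        return (u, t + v)

lhs, rhs = levi_step("abc", "d", "ab", "cd")
print(lhs, rhs)  # cd cd
```

In a string solver the same step is applied symbolically, with variables in place of concrete words, which is where the case splits and the accompanying length constraints come from.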
Industrial control systems are moving from isolated to distributed and cloud-connected architectures. While the operational benefits of this migration form the driving force for this trend, the necessary security assurance is often difficult to achieve. Formal methods, including model checking, provide capable technologies to deal with this challenge. However, when formal verification must account for the complexity of modern control systems, the state space being explored grows drastically as more details are included in the analysis. This may eventually cause a state space explosion, which makes formal verification infeasible. To address this, we propose a method for decomposing cloud-connected control systems into modules representing the different parts of the system (clients, the cloud, the control network, etc.). Based on the decomposed version of the system, we use UPPAAL to model several well-known cyber attacks and formally verify the system's behavior under these attacks. To determine the viability of our approach, we first use statistical model checking (SMC) to assess the probabilities of success for selected attacks. Based on SMC outcomes, we use symbolic model checking to individually analyse the sub-system affected by each attack. The results obtained from this analysis are then used to demonstrate the feasibility of our approach. We demonstrate our method using an actual control system architecture provided by our industrial partner.
Software smells have negative impacts on the reliability and modifiability of software systems. Smells in the architecture design can cascade down to the implementation level and cause issues that require much effort to fix. Therefore, early detection of architecture smells can benefit the overall quality of the software system. This paper presents an integration of methods that formally define software architecture designs for architecture smell detection. Our approach serves as a framework that allows the architectural structures and behaviours to be formally analysed based on a coherent technique. We evaluated the accuracy and performance of our approach with models generated from open source projects. The results show that our approach is effective and functions well.
Adoption of new digital technologies is impacting all aspects of society. While these new technologies are accepted rapidly within the consumer segment, in the area of industrial control systems the pace of change in computing is slower. This is often due to the criticality and security constraints of such systems, since degraded or hijacked control could lead to injuries or competitive disadvantages. Nowadays a critical component of control systems is the key management protocol for protecting communication. This is specifically important as more and more devices become part of industrial control networks. The key management system must be reliable and robust in order to ensure stable operation of the system with minimum downtime. This often means that the system needs to be autonomous and dynamic, capable of periodically changing the keys automatically and authenticating the system components. Different techniques have been used to examine the reliability and robustness of key management systems; one promising approach is the use of formal methods. In this paper we present a formally verified key management system for use within distributed industrial control systems. We demonstrate that the key management system can reliably handle authentication/communication operations in real-time as well as joining/leaving of control units within the system. We use UPPAAL to analyse several security properties, showing that our models satisfy a collection of requirements defined by our industrial partner and are viable for dynamic key management applications.
UML models are standard artifacts used by software engineers for designing software. As software is designed, different UML diagram types (e.g., class diagrams and sequence diagrams) are produced by software designers. Since the various UML diagram types describe different aspects of a software system, they are not independent but strongly depend on each other, hence they must be consistent. Inconsistencies cause faults in the final software systems. It is, therefore, paramount that they get detected, analyzed, and fixed. Consistency rules are a useful tool proposed in the literature to detect inconsistencies. They capture constraints that, when violated, indicate inconsistencies. This case study aims at collecting and analyzing UML models with OCL consistency rules proposed in the literature and at promoting the development of a reference benchmark that can be reused by the (FM-)research community. We collected 33 UML consistency rules and 206 different UML diagrams contained in 34 open-source UML models presented in the literature. We propose an FM-based encoding of the consistency rules in OCL. This encoding allows analyzing whether the consistency rules are satisfied or violated within the 34 UML models. To assess the proposed benchmark, we analyzed how the UML models, consistency rules, and diagram types contained in the benchmark help in assessing the consistency of UML models, and the consistency of diagrams across the different software development phases. Our results show that the considered UML models and consistency rules allowed identifying 2731 inconsistencies and that those inconsistencies refer to different software development phases. We concluded that the considered UML models and consistency rules could be considered as an initial benchmark that can be further extended by the research community.
In a distributed scenario it is possible to find systems consisting of independent parties that collaboratively execute a business process, but cannot disclose a subset of the data used in this process to each other. Such systems can be modelled using the PE-BPMN notation: a privacy-enhanced extension of the BPMN process modeling notation. Given a PE-BPMN model, we address the problem of verifying that the content of certain data objects is not leaked to unauthorized parties. To this end, we formalise the semantics of PE-BPMN collaboration diagrams via a translation into process algebraic specifications. This formalisation enables us to apply model checking to detect unintended data leakages in a PE-BPMN model. We specifically consider data leakages in the context of secret sharing technology. The approach has been implemented on top of the mCRL2 toolset, and integrated into the Pleak toolset supporting privacy analysis of business processes. The proposal has been evaluated using real scenarios.