Verifying the consistency of model merging is an important step towards supporting team collaboration in software modeling and evolution. Since merging conflicts are inevitable, conflict management has been intensively researched across different domains. Despite these efforts, techniques for high-level conflict representation have hardly been investigated. In this paper, we propose an approach to specifying model merging conflicts. The approach includes the Conflict Pattern Language (CPL), a formalism for specifying conflicts in different modeling languages. CPL is based on the OCL grammar and is supported by an editor and a parser. CPL eases the slow and error-prone task of specifying model merging conflicts and can be applied to any EMF-based model. We evaluated our approach with a case study covering five different conflict cases. The results are promising with respect to using CPL to specify both syntactic and semantic conflicts.
In highly dynamic environments, systems are expected to make decisions on the fly based on observations that are bound to be partial. As a result, the reasons for a system's runtime behaviour may be difficult to understand. In these cases, accountability is crucial, and the system's decisions need to be traceable. Logging is essential to support explanations of behaviour, but it poses challenges. Concerns about analysing massive logs have motivated the introduction of structured logging; however, knowing what to log and which details to include remains a challenge. Structured logs still do not necessarily relate events to each other or indicate time intervals. We argue that logging changes to a runtime model in a provenance graph can mitigate some of these problems. The runtime model keeps only relevant details, thereby reducing the volume of the logs, while the provenance graph records causal connections between the changes and the activities of the system's agents that introduced them. In this paper, we present a first version of a reusable infrastructure for the automated construction of such a provenance graph. We apply it to a multithreaded traffic simulation case study with multiple concurrent agents managing different parts of the simulation. We show how the provenance graphs can support validating the system behaviour, and how a seeded fault is reflected in them.
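The idea of recording runtime-model changes as a provenance graph can be sketched in a few lines. The following Python fragment is illustrative only: the class, labels, and relation names are our own, loosely following PROV-style used/wasGeneratedBy/wasAssociatedWith relations, and do not reflect the paper's actual infrastructure. It records an agent's change to a runtime model and walks the causal edges back from a model version to the activity and agent that produced it:

```python
# Illustrative sketch of a PROV-style provenance graph of runtime-model
# changes; names and relations are hypothetical, not the paper's API.
class ProvenanceGraph:
    def __init__(self):
        self.nodes = []           # list of (kind, label)
        self.edges = []           # list of (relation, src_index, dst_index)

    def add(self, kind, label):
        self.nodes.append((kind, label))
        return len(self.nodes) - 1

    def relate(self, relation, src, dst):
        self.edges.append((relation, src, dst))

    def caused_by(self, node):
        """Follow causal edges backwards to find everything that
        contributed to the given node (root-cause style query)."""
        frontier, seen = [node], set()
        while frontier:
            n = frontier.pop()
            if n in seen:
                continue
            seen.add(n)
            for _, src, dst in self.edges:
                if src == n:
                    frontier.append(dst)
        return seen

g = ProvenanceGraph()
agent = g.add("agent", "junction_controller_3")      # hypothetical agent
act = g.add("activity", "set_phase(green)")          # the change-making activity
v1 = g.add("entity", "model@t1")                     # model version before
v2 = g.add("entity", "model@t2")                     # model version after
g.relate("wasAssociatedWith", act, agent)
g.relate("used", act, v1)
g.relate("wasGeneratedBy", v2, act)
history = g.caused_by(v2)   # v2 traces back to the activity, agent, and v1
```

A query like `caused_by` is what turns a flat log into an explanation: it connects a model state to the agent and activity that introduced it.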
In this work, we propose a bounded verification approach for state machine (SM) models that is independent of any model checking tool. This independence is achieved by encoding the execution semantics of SM models as Satisfiability Modulo Theories (SMT) formulas, which reduces the verification of an SM to the satisfiability problem for its corresponding formula. More specifically, our approach takes as input an SM model, a depth bound, and the system properties (as invariants), and then automatically verifies system models in a three-phase process: (1) it generates all possible execution paths of the model up to the specified bound and encodes each execution path as an SMT formula; (2) it augments the SMT formulas with the negation of the given invariants; and (3) it uses an SMT solver to check the satisfiability of the instrumented formula. We have applied our approach in the context of UML-RT (the UML profile for modeling real-time embedded systems) and assessed its applicability, performance, and scalability using several case studies.
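To make the three-phase process concrete, here is a deliberately tiny Python stand-in: brute-force path enumeration replaces the SMT encoding and solver, and the state machine is an invented toy example rather than a UML-RT model. Phase 1 enumerates paths up to the bound; phases 2 and 3 collapse into a search for a state satisfying the negated invariant:

```python
# Toy state machine (hypothetical, not from the paper): state -> transitions.
TRANSITIONS = {
    "Idle":    [("start", "Running")],
    "Running": [("pause", "Paused"), ("stop", "Idle")],
    "Paused":  [("resume", "Running"), ("stop", "Idle")],
}

def paths(state, bound):
    """Phase 1: enumerate all execution paths up to the depth bound."""
    yield [state]
    if bound == 0:
        return
    for _, nxt in TRANSITIONS.get(state, []):
        for tail in paths(nxt, bound - 1):
            yield [state] + tail

def check(invariant, initial="Idle", bound=3):
    """Phases 2-3: search for a state satisfying the negated invariant.
    Returns a counterexample path, or None if the invariant holds
    within the bound (brute force stands in for the SMT solver)."""
    for p in paths(initial, bound):
        for s in p:
            if not invariant(s):
                return p
    return None

# Invariant under test: the machine never reaches "Paused".
cex = check(lambda s: s != "Paused")
```

With the invariant "never Paused", the sketch finds the counterexample path Idle, Running, Paused within the bound; a vacuously true invariant yields no counterexample.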
Smart contracts are immutable digital programs deployed onto blockchain platforms to codify agreements. They enable blockchain technology to play a vital role in many fields, such as finance, health care, and energy. An important aspect of modeling and deploying smart contracts is defining the business process and rules that govern the agreements under which the corresponding actions are executed. Unfortunately, these models use a mix of technical and business-centric terminologies that differ depending on the underlying blockchain platform the smart contract targets. To address this issue, we followed a feature-oriented domain analysis approach to identify the commonalities and variations among three common blockchain platforms used to deploy smart contracts, namely IBM Hyperledger Composer, Azure Blockchain Workbench, and Ethereum. Accordingly, we propose a reference model for smart contracts. The reference model is then realized as a modeling framework that enables developers to model and generate the structural code required to deploy a smart contract onto multiple blockchain platforms. The coverage of the proposed reference model is shown by mapping its concepts to their corresponding constructs within each blockchain platform. Moreover, we provide three use cases showing how the proposed framework can empower developers to generate the structural code of smart contracts for the target platform through model transformation.
Models of complex systems and systems of systems are described with NAF (NATO Architecture Framework) or DoDAF (DoD Architecture Framework). Business Process Model and Notation (BPMN) is part of NAF and allows describing the behaviour of the different participants in a model. This notation is used by the French Army as well as its main suppliers to describe the interactions between participants involved in a mission. It is therefore important that the models are correct. The VeriMoB project was a research project financed by the DGA (Direction Générale de l'Armement) that aimed at developing a tool to help users validate their BPMN models with the stakeholders. The tool had to cover three main aspects: static verification, interactive execution, and automatic exploration of the possible scenarios. This paper presents the work done to convert BPMN to SDL in order to fulfil this objective with an existing toolset.
The ScOSA project (Scalable On-board Computing for Space Avionics) of the German Aerospace Center aims at combining radiation-hardened space hardware with unreliable but high-performance COTS (commercial off-the-shelf) components as the processing nodes of a heterogeneous on-board network, in order to provide future space missions with the necessary processing capabilities. However, such a system needs to cope with node failures. Our approach is to use a static reconfiguration graph that controls how software tasks are mapped to the processing nodes and how this mapping should change in response to possible node failures.
In this paper, we present a model-based approach and a tool for the automatic generation of reconfiguration graphs. Based on the software and hardware models, we traverse the graph of all possible failure situations. For every node of this graph, we solve the combinatorial optimization problem of mapping tasks to processing nodes with either an SMT solver or a genetic algorithm. The resulting reconfiguration graph can then be translated into the configuration files deployed on the target system, eliminating the need for tedious and error-prone manual configuration design.
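A miniature version of this generation scheme can be sketched in Python. Exhaustive search stands in for the SMT solver or genetic algorithm, and the node capacities and tasks are invented for illustration; the real tool naturally handles far richer models:

```python
from itertools import combinations, product

# Hypothetical platform: node -> how many tasks it can host.
NODES = {"rad_hard": 1, "cots_a": 2, "cots_b": 2}
TASKS = ["nav", "imaging", "downlink"]

def best_mapping(alive):
    """Assign every task to an alive node without exceeding capacities.
    Brute-force search here; the paper uses an SMT solver or a GA."""
    for assign in product(alive, repeat=len(TASKS)):
        load = {n: assign.count(n) for n in alive}
        if all(load[n] <= NODES[n] for n in alive):
            return dict(zip(TASKS, assign))
    return None  # no feasible mapping in this failure situation

def reconfiguration_graph():
    """One entry per failure situation (set of failed nodes), holding
    the task-to-node mapping to switch to, or None if infeasible."""
    graph = {}
    for k in range(len(NODES)):               # 0, 1, or 2 failed nodes
        for failed in combinations(NODES, k):
            alive = [n for n in NODES if n not in failed]
            graph[failed] = best_mapping(alive)
    return graph

g = reconfiguration_graph()
```

Each graph entry corresponds to one configuration file on the target system: when the listed nodes fail, the system switches to the precomputed mapping instead of solving the optimization problem on board.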
Automating process enactment, along with traceability management, using model-driven engineering methods could significantly benefit the Network Functions Virtualization (NFV) paradigm in view of its move towards zero-touch automation of the orchestration and management of network services (NS). Earlier, we proposed MAPLE-T, an integrated process modelling and enactment environment with traceability support for NS management. In this paper, we extend MAPLE-T with the notion of intents. We propose using intents at both the process model (PM) and model-transformation levels as part of our traceability information, defining intents as information representing the objective of PM actions/activities and their implementations. We extend MAPLE-T with traceability visualization support to visualize trace links relating models at different levels through the captured intents. The intent-enriched traceability information and the enhanced visualization enable semantically richer traceability analysis. We apply our traceability generation and analysis approach to the NS design process to show the benefits of intents not only for the process itself, but also for NS lifecycle management operations as a whole.
One of the main activities of the Interfaces and Architecture (IFA) working group within the Network Function Virtualisation (NFV) Industry Specification Group (ISG) at the European Telecommunications Standards Institute (ETSI) is the specification of requirements and information modelling of NFV descriptors and artefacts. The information elements are spread across 11 different NFV-IFA specifications covering the scope of functionality of the individual functional blocks and reference points.
Part of the work of the NFV-IFA working group is dedicated to producing a unified Information Model (IM) for NFV providing a consolidated view based on the different specifications. The IM has been helpful for the identification of gaps and inconsistencies in the specifications and in implementations of the specifications.
With the growing size of the IM and the corresponding specifications and the rapid release cycles resulting from the intense work within the ISG, the IM plays an important role in helping to ensure that the specifications are consistent.
Previous work outlined the foundations for facilitating the co-evolution of models and standards based on the NFV IM and models extracted from the related standardised specifications. It defined a methodology for consistency checking and alignment of the IM and the related specifications focusing on structural aspects.
This article refines the methodology to address semantic descriptions as well as further aspects related to the co-evolution of standards and models. We also report on our experiences applying a prototype implementation of the methodology during the continued alignment and maintenance of the information model and the related standardised specifications within the NFV-IFA working group, and on how the working group's feedback provided insights for refining the methodology even further.
Computer simulations are useful tools to predict and test accumulated knowledge about the behavior of complex systems. Process-based simulators have historically been built with object-oriented languages and require stackful coroutines for their execution. This approach delivers excellent performance if the compiler has a built-in coroutine concept that is translated into machine code directly. Languages without this feature implement coroutines in libraries, which typically rely on OS-dependent calls or non-portable assembly and perform worse than the direct approach.
The recent emergence of compiler-based rewrite systems for stackless coroutines in major languages such as C++, Go, and Rust, combined with the decline of domain-specific languages for simulator construction, opens up new avenues that require re-evaluating the trade-offs between stackful and stackless coroutine-based simulation libraries. This paper shows that stackless coroutines obtained through compiler-assisted rewrites can be used to imitate stackful ones. This allows modellers to express their implementation at a high level of abstraction without losing portability or efficiency compared to the best hand-written solution. To that end, we develop the minimal core of a simulator framework in Rust by taking advantage of Rust's implementation of stackless coroutines. The paper shows the technical details as well as the practical implications of that transformation by comparison with the simulation language SLX.
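The core mechanism, resuming suspended coroutines from a time-ordered event queue, can be illustrated with Python generators, which are themselves compiler-rewritten stackless coroutines. The paper's framework is in Rust; this sketch only conveys the process-based simulation pattern, with invented process and class names:

```python
import heapq

class Simulator:
    """Minimal process-based simulator core: each process is a generator
    that yields the delay until it wants to be resumed."""
    def __init__(self):
        self.now = 0.0
        self._queue = []   # min-heap of (time, seq, process)
        self._seq = 0      # tie-breaker so processes are never compared

    def schedule(self, delay, process):
        heapq.heappush(self._queue, (self.now + delay, self._seq, process))
        self._seq += 1

    def run(self):
        while self._queue:
            self.now, _, proc = heapq.heappop(self._queue)
            try:
                delay = next(proc)            # resume the coroutine
                self.schedule(delay, proc)    # 'yield d' = advance d units
            except StopIteration:
                pass                          # process finished

def customer(name, log):
    """A hypothetical simulation process written as a generator."""
    log.append(name + " arrives")
    yield 2.0   # service takes 2 time units
    log.append(name + " leaves")

sim = Simulator()
log = []
sim.schedule(0.0, customer("a", log))
sim.schedule(1.0, customer("b", log))
sim.run()
```

The modeller writes `customer` as straight-line code with waits, while the compiler's rewrite of the generator into a state machine provides the suspension points, which is precisely the stackful-style expressiveness on a stackless substrate that the paper examines.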
Model-driven engineering advocates the use of different modelling languages and multiple views to describe the characteristics of a complex system. This makes it possible to express each system characteristic with the most appropriate modelling language. However, establishing the conceptual relationships between elements from different languages, and then consistently maintaining the links between model elements, are non-trivial tasks. In this paper, we propose Action-Driven Consistency (ADC) for maintaining the links between model elements from different languages defined with the Perspectives for Multi-Language Systems (PML) framework. PML aims to promote modularity in language reuse, inter-language consistency, and the combination of languages. A perspective groups different languages, each playing a role for a common modelling purpose. PML defines perspective actions based on existing language actions to maintain consistent models. In this work, we present generic templates from which perspective actions can be generated given relationships between language metaclasses. This allows the perspective designer to focus on these key relationships and frees them from the error-prone implementation of perspective actions. We illustrate our approach with a perspective that combines the class diagram and use case diagram languages for the purpose of requirements elicitation and apply it to a bank application.
The advent of modeling in software engineering, as in other engineering fields, has revolutionized the formalism and pace of software development. However, software applications are not built from scratch; instead, existing software artifacts are reused and combined with new ones. This notion of software reuse has existed for decades. When structural models such as class diagrams are reused, the reusing and reused models often need to be merged and the result visualized for the modeler. However, layout mechanisms such as GraphViz, JGraphX, and related layout tools do not retain the original layout and instead lay out the merged models arbitrarily. Important information that corresponds to the modeler's mental map and is conveyed by the specific layout is therefore lost. This paper aims to establish a robust layout algorithm, called rpGraph, that retains the general layout of the reusing and reused models after merging. rpGraph uses the relative positioning of model elements to inform the positioning of merged model elements. Our approach is evaluated with 20 example model reuses from a library of reusable software model artifacts. A comparison of the merged layouts produced by rpGraph, GraphViz, and JGraphX shows that rpGraph performs better at retaining the original layouts.
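One simple instance of layout merging by relative positioning can be sketched as follows. This Python fragment is our own illustration, not rpGraph's actual algorithm: it translates the reused layout so that a shared element keeps its position in the reusing model, preserving each model's internal relative offsets:

```python
# Illustrative layout merge: layouts map element -> (x, y) coordinates.
def merge_layouts(reusing, reused, shared):
    """Translate the reused layout so that the shared element coincides
    with its position in the reusing layout, then union the two."""
    ax, ay = reusing[shared]
    bx, by = reused[shared]
    dx, dy = ax - bx, ay - by            # translation to apply
    merged = dict(reusing)               # reusing model keeps its layout
    for elem, (x, y) in reused.items():
        if elem not in merged:
            merged[elem] = (x + dx, y + dy)
    return merged

# Hypothetical class-diagram layouts sharing the "Account" class.
reusing = {"Account": (0, 0), "Bank": (100, 0)}
reused = {"Account": (50, 50), "Customer": (50, 150)}
layout = merge_layouts(reusing, reused, "Account")
```

Both mental maps survive: "Customer" remains directly below "Account" at its original distance, and the reusing model's elements do not move at all.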
Emerging socio-cyber-physical systems integrate social concerns, often captured with goal models, with complex systems, where structure and behavior are often captured in SysML. Traceability between these two types of models is important to reason about consistency, completeness, and the impact of modifications. However, managing traceability during the co-evolution of these two views is not well supported as SysML does not provide sophisticated goal-modeling capabilities out of the box. This paper proposes an approach where the Goal-oriented Requirement Language (GRL) is used to capture and analyze social concerns as a supplement to SysML models, and where traceability is handled via a third-party requirements management system, namely IBM Rational DOORS. The approach is supported with tools automating the import in DOORS of relevant parts of the GRL and SysML models from their respective modeling environments (jUCMNav and No Magic's Cameo Systems Modeler). A traceability information model is proposed to connect elements from GRL and SysML models in a way that enables automating important completeness and consistency checks, even as the models evolve. The approach is illustrated and evaluated with a Smart Home example, with a discussion of benefits and limitations.
One of the key requirements for designing safety-critical cyber-physical systems (CPS) is to ensure resiliency. Typically, the cyber sub-system in a CPS is equipped with protection devices that quickly detect and isolate faulty components to avoid failures. However, these protection devices can themselves have internal faults that cause cascading failures, leading to system collapse. Thus, to guarantee the resiliency of the system, it is necessary to identify the root cause(s) of a given system disturbance in order to take appropriate control actions. Correct failure diagnosis in such systems depends on an integrated fault model that captures the effect of faults in the CPS as well as the nominal and faulty operation of protection devices, sensors, and actuators.
In this paper, we propose a novel graph-based qualitative fault modeling formalism for CPS, called Temporal Causal Diagrams (TCDs), that allows system designers to effectively represent faults and their effects in both the physical and cyber sub-systems. The paper also discusses in detail the fault propagation and execution semantics of a TCD model by translating it to timed automata, thus providing an efficient means to quickly analyze, validate, and verify the fault model. Finally, we show the efficacy of the modeling approach with the help of a case study from the energy domain.
Goal modelling is one of the most important early activities in requirements engineering. Here, we describe a vision for a conceptual basis for the systematic identification and treatment of uncertainty in goal modelling. We aim to characterize the wide variety of uncertainty in goal modelling and to provide a theoretical framework for systematic uncertainty analysis. We thus adopt Walker's taxonomy, which distinguishes three dimensions of uncertainty: location, level, and nature. In addition, we propose to adapt Walker's uncertainty matrix as a heuristic tool for categorizing the various dimensions of uncertainty in goal modelling, serving as a conceptual framework for improving the comprehension and communication of uncertainty between modellers and stakeholders and among modellers themselves. Understanding the various dimensions of uncertainty is a vital step towards its sufficient recognition and treatment in goal modelling activities. This, in turn, will help identify and prioritize the critical uncertainties that affect the goal modelling process in its entirety. We thus propose a long-term research agenda and urge community contributions in this research direction.
The digitalization of a phenomenon allows us to understand, discuss, and predict its behavior. Environmental areas, and specifically a pandemic situation, are no exception. The language selected for this conceptualization must aid understanding of the phenomenon and must be executable, if possible automatically, to simplify the verification process. The language must also be an agreement between the different parties involved in the model definition; all the specialists must feel confident with the language in order to collaborate in a transdisciplinary approach to the model definition. In this paper, we present a conceptualization of the COVID-19 pandemic situation. The model, represented in the Specification and Description Language (SDL), allows a detailed parametrization of the pandemic situation and an expansion of an initial model, defined using the System Dynamics approach, to a Cellular Automaton approach. This expansion of the conceptualization in SDL simplifies the validation of the different models obtained. The use of SDL also simplifies the integration of real-time data into the model to perform system validations. We detail the SDL conceptualization of the Cellular Automaton model.
On the one hand, there has been growing interest in the application of AI-based learning and evolutionary programming for self-adaptation under uncertainty. On the other hand, self-explanation is one of the self-* properties that has been neglected. This is paradoxical, as self-explanation is inevitably needed when using such techniques. In this paper, we argue that a self-adaptive autonomous system (SAS) needs an infrastructure and capabilities to look at its own history in order to explain and reason about why it has reached its current state. The infrastructure and capabilities need to be built on the right conceptual models, such that the system's history can be stored and queried for use by the decision-making algorithms.
The explanation capabilities are framed in four incremental levels, from forensic self-explanation to automated history-aware (HA) systems. Incremental means that the capabilities at Level n must be available to support those at Level n + 1. We demonstrate our current, reassuring results for Levels 1 and 2, using temporal graph-based models. Specifically, we explain how Level 1 supports forensic accounting after the system's execution, and how the capabilities provided by Level 2 enable online historical analyses while the self-adaptive system is running. We present an architecture that records temporal data which can be queried to explain behaviour, discuss the overheads that live analysis would impose, and envision future research opportunities.
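The kind of history query underpinning Levels 1 and 2 can be illustrated with a minimal temporal property: every write is kept with its timestamp, so past states remain queryable after (or during) execution. This Python sketch is our own illustration, with invented names, and not the paper's temporal graph infrastructure:

```python
from bisect import bisect_right

class TemporalProperty:
    """A single runtime-model attribute whose full change history is kept,
    so that past values can be queried for explanation."""
    def __init__(self):
        self._times = []    # change timestamps, strictly increasing
        self._values = []   # value written at each timestamp

    def set(self, time, value):
        self._times.append(time)
        self._values.append(value)

    def at(self, time):
        """Value the property held at a given instant
        (forensic, Level-1-style query)."""
        i = bisect_right(self._times, time)
        return self._values[i - 1] if i else None

    def history(self, start, end):
        """All (time, value) changes within an interval
        (online, Level-2-style query)."""
        return [(t, v) for t, v in zip(self._times, self._values)
                if start <= t <= end]

# Hypothetical adaptation-mode attribute of a running system.
mode = TemporalProperty()
mode.set(0, "NORMAL")
mode.set(5, "DEGRADED")
mode.set(9, "NORMAL")
```

Asking `mode.at(7)` explains that the system was degraded at that instant, and `mode.history(4, 9)` returns the changes an analyst (or the decision-making algorithm itself) would inspect to reason about why.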