MODELS '18: Proceedings of the 21st ACM/IEEE International Conference on Model Driven Engineering Languages and Systems

SESSION: Foundations

Exploring Potency

The original notion of potency -- one of the core features underpinning many forms of multi-level modeling -- has come under pressure in several ways: First, since its inception, new modeling challenges have come to the fore that raise serious questions about classic potency. Second, classic potency was developed in the context of constructive modeling and does not accommodate exploratory modeling, thus representing a major hindrance to the unification of constructive and exploratory modeling in a multi-level modeling context. Third, as the discipline of multi-level modeling has evolved, a number of alternative interpretations of potency have emerged. In part, these are based on different underlying principles, yet an explicit recognition of the respective differences at a foundational level and an explicit discussion of the tradeoffs involved have been missing from the literature to date. In this paper, I identify limitations of classic potency, propose to evolve it into a potency notion based on a new foundation which -- along with further novel proposals -- addresses the aforementioned challenges, and finally compare it to three alternative definitions of potency.
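
For readers new to the concept: classic potency -- the notion whose limitations the paper examines -- can be sketched in a few lines. An element carries a non-negative potency, instantiation decrements it, and potency 0 forbids further instantiation. The following Python sketch uses hypothetical names and is only an illustration of the classic semantics, not of the paper's proposed replacement:

```python
class Clabject:
    """An element at some level of a multi-level model."""
    def __init__(self, name, potency):
        assert potency >= 0
        self.name, self.potency = name, potency

    def instantiate(self, name):
        # Classic potency: an element with potency 0 is a plain instance
        # and cannot be instantiated further; otherwise the instance's
        # potency is the parent's potency minus one.
        if self.potency == 0:
            raise ValueError(f"{self.name} cannot be instantiated (potency 0)")
        return Clabject(name, self.potency - 1)

product_type = Clabject("ProductType", 2)   # spans two instantiation levels
book = product_type.instantiate("Book")     # potency 1
copy = book.instantiate("MobyDickCopy")     # potency 0: instantiation ends here
```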

From (Imperfect) Object Diagrams to (Imperfect) Class Diagrams: New Ideas and Vision Paper

Effective support for software development requires a smooth transition between an informal, provisional mode of tool operation, which is conducive to design exploration, and the formal, mechanistic mode required for computer-based design capture. This contribution proposes such a transition for designing class models starting from informal, sketchy object models. We propose a lenient development approach and discuss the possibilities and problems of a transformation from object diagrams to class diagrams. While classes describe abstract concepts, objects represent what can be seen in the real world, so it may be easier to start modeling with objects instead of classes. An object diagram cannot, however, describe a whole system; it serves only as the first step of an iterative process towards a complete model. During this process, our object and class diagrams provide a notation for highlighting missing or conflicting parts. Based on these imperfect object diagrams, educated guesses can be made about the resulting, imperfect class diagrams, which can then be refined into a complete, formal description of the modeled system.
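
A minimal sketch of the kind of educated guess meant here, assuming objects are plain attribute/value records (hypothetical names, not the paper's notation): a candidate class collects the attributes observed across objects and flags type conflicts and possibly-optional attributes.

```python
def guess_class(objects):
    # objects: list of attribute/value dicts sketching instances of one concept
    attrs = {}
    for obj in objects:
        for name, value in obj.items():
            info = attrs.setdefault(name, {"types": set(), "count": 0})
            info["types"].add(type(value).__name__)
            info["count"] += 1
    n = len(objects)
    return {name: {"types": sorted(info["types"]),      # >1 entry => conflict
                   "optional": info["count"] < n}       # missing somewhere
            for name, info in attrs.items()}

books = [{"title": "Moby Dick", "pages": 635},
         {"title": "Ulysses", "pages": "732"}]          # conflicting types for 'pages'
print(guess_class(books))
# {'title': {'types': ['str'], 'optional': False},
#  'pages': {'types': ['int', 'str'], 'optional': False}}
```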

On the Quest for Flexible Modelling

Modelling is a fundamental activity in Software Engineering, and central to model-based engineering approaches. It is used for different purposes, and so its nature can range from informal (e.g., as a casual mechanism for problem discussion and understanding) to fully formal (e.g., to enable the automated processing of models by model transformations). However, existing modelling tools only serve one of these two extreme purposes: either to create informal drawings or diagrams, or to build models fully conformant to their modelling language. This lack of reconciliation is hampering the adoption of model-based techniques in practice, as they are deemed too imprecise in the former case, and too rigid in the latter.

In this new ideas paper, we claim that modelling tools need further flexibility covering different stages, purposes and approaches to modelling. We detail requirements for such a new generation of modelling tools, describe our first steps towards their realization in the Kite metamodelling tool, and showcase application scenarios.

Mathematical Programming for Anomaly Analysis of Clafer Models

Clafer combines UML-like class- and meta-modeling with feature-oriented variability modeling and first-order logic constraints. The considerable expressiveness of Clafer mainly stems from its built-in variability constructs, multiplicity annotations and recursive model structures, which yield a potentially unbounded number of valid model instances. As a result, automated reasoning about semantic properties like model consistency (i.e., the existence of valid model instances) and anomalies (e.g., false cardinality bounds) is very challenging. Recent analysis techniques are inherently incomplete as they impose an a priori finite search space with either manually or heuristically adjusted bounds. In this paper, we present a novel approach for automated search-space restriction for a considerably rich, yet decidable fragment of the Clafer language that guarantees sound and complete detection results for a wide range of semantic anomalies. Our approach employs principles from mathematical programming by encoding Clafer models as Mixed Integer Linear Programs (MILP). Our experimental evaluation shows remarkable improvements in runtime efficiency as well as in the effectiveness of anomaly detection compared to existing techniques.
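
As a rough illustration of the encoding idea (our simplified example; the paper's actual MILP encoding of Clafer models is considerably richer), a clafer c with multiplicity [m..n] nested under a parent p can be reflected by linear constraints over non-negative integer instance-count variables:

```latex
% Instance-count variables x_c, x_p range over non-negative integers.
% A clafer c with multiplicity [m..n] under parent p contributes:
\[
  m \cdot x_p \;\le\; x_c \;\le\; n \cdot x_p ,
  \qquad x_c,\, x_p \in \mathbb{Z}_{\ge 0}.
\]
% An upper cardinality bound n on c is then "false" (an anomaly) if the
% maximum of x_c subject to all constraints stays strictly below n.
```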

SESSION: Transformations

From Single- to Multi-Variant Model Transformations: Trace-Based Propagation of Variability Annotations

In annotative approaches to model-driven product line engineering (MDPLE), model elements are decorated with variability annotations defining the product variants in which they are included. A multi-variant model transformation (MVMT) has to propagate these annotations from source to target models. We propose trace-based propagation as a grey-box solution to this problem: after executing a variability-ignorant single-variant model transformation (SVMT), annotations are propagated a posteriori based on the trace produced by the SVMT. Trace-based propagation makes it possible to reuse SVMTs and can be implemented in a generic way, independently of SVMT languages and tools, making it suitable for use in a heterogeneous MDPLE environment. A formal proof demonstrates that trace-based propagation achieves commutativity of filters and transformations, obviating the need to manually edit target model annotations.
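
A minimal sketch of the idea, under the simplifying assumptions that presence conditions are propositional formulas rendered as strings and that the trace maps each target element to the source elements it was derived from (all names below are hypothetical):

```python
def propagate(trace, source_annotations):
    # Each target element receives the conjunction of the annotations
    # (presence conditions) of the source elements in its trace entry.
    target_annotations = {}
    for target, sources in trace.items():
        conds = {source_annotations.get(s, "true") for s in sources} - {"true"}
        target_annotations[target] = " and ".join(sorted(conds)) or "true"
    return target_annotations

trace = {"Table_Person": ["Class_Person"],
         "Column_salary": ["Class_Person", "Attr_salary"]}
annotations = {"Class_Person": "true", "Attr_salary": "hasPayroll"}
print(propagate(trace, annotations))
# {'Table_Person': 'true', 'Column_salary': 'hasPayroll'}
```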

Expressing Confidence in Models and in Model Transformation Elements

The expression and management of uncertainty, both in information and in the operations that manipulate it, is a critical issue in systems that work with physical environments. Measurement uncertainty can be due to several factors, such as unreliable data sources, tolerance in the measurements, or the inability to determine whether a certain event has actually happened or not. In particular, this contribution focuses on the expression of one kind of uncertainty, namely the confidence in model elements, i.e., the degree of belief that we have in their occurrence, and on how such uncertainty can be managed and propagated through model transformations, whose rules can also be subject to uncertainty.
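
For intuition only: one simple propagation scheme (our assumption; the paper develops propagation on models and transformation rules in full generality) multiplies the degrees of belief involved in a rule application:

```python
from math import prod

def target_confidence(source_confidences, rule_confidence):
    # Assumed propagation rule: degrees of belief in [0, 1] multiply, so a
    # produced element is no more trusted than its sources or its rule.
    return rule_confidence * prod(source_confidences)

# A rule trusted at 0.95 matching two elements believed with 0.9 and 0.8:
print(round(target_confidence([0.9, 0.8], 0.95), 3))   # 0.684
```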

Model Transformation Product Lines

Model transformations enable automation in Model-Driven Engineering (MDE) and are key to its success. The emphasis of MDE on using domain-specific languages has caused a proliferation of meta-models, many of them capturing variants of base languages. In this scenario, developing a transformation for a new meta-model is usually performed manually with no reuse, even if comparable transformations for similar meta-models exist. This is a suboptimal process that precludes a wider adoption of MDE in industry.

To improve this situation, we propose applying ideas from software product lines to transformation engineering. Our proposal enables the definition of meta-model product lines to capture the variability within a domain, on top of which transformations can be defined in a modular way. We call this construction a transformation product line (TPL) and propose mechanisms for the construction, extension and analysis of TPLs. TPLs are supported by a tool, Merlin, which is agnostic to the transformation language and lifts analyses based on model finding to the TPL level. Finally, we report on an evaluation showing the benefits of building and analysing TPLs compared to building and analysing each individual transformation.

Expressive and Efficient Model Transformation with an Internal DSL of Xtend

Model transformation (MT) of very large models (VLMs), with millions of elements, is a key challenge for applying Model-Driven Engineering (MDE) technology in industry. Recent research efforts that tackle this problem have been directed at distributing MT on the Cloud, either directly, by managing clusters explicitly, or indirectly, via external NoSQL data stores. In this paper, we draw attention back to improving the efficiency of model transformations that use EMF natively and run in non-distributed environments, showing that substantial performance gains can still be reaped on that ground.

We present Yet Another Model Transformation Language (YAMTL), a new internal domain-specific language (DSL) of Xtend for defining declarative MT, and its execution engine. The part of the DSL for defining MT is similar to ATL in terms of expressiveness, including support for advanced modelling constructs, such as multiple rule inheritance and module composition. In addition, YAMTL provides support for specifying execution control strategies. We experimentally demonstrate that the presented transformation engine outperforms other representative MT engines using the batch transformation component of the VIATRA CPS benchmark. The improvement is at least one order of magnitude over the hitherto fastest solution in all of the assessed scenarios. The software artefacts accompanying this work have been approved by the artefact evaluation committee and are available at http://remodd.org/node/585.

SESSION: Verification, Validation and Planning

Evolutionary Algorithm for Bug Localization in the Reconfigurations of Models at Runtime

Systems with models at runtime are becoming increasingly complex, and this growth is accompanied by more software bugs. In this paper, we focus on bugs that appear as the result of dynamic reconfigurations of the system due to context changes. We materialize our approach for bug localization in reconfigurations as an evolutionary algorithm, guided by a fitness function that measures the similarity to the description in the bug report. The result is a ranked list of reconfiguration sequences, intended to identify the reconfiguration rules that are relevant to the bug. We evaluated our approach on BSH and CAF, two real-world industrial case studies, measuring the results in terms of recall, precision, F-measure and Matthews Correlation Coefficient (MCC). In our evaluation, we compare our approach with two others: a baseline used by our industrial partners for bug localization, and random search as a sanity check. Our study shows that our approach, which takes advantage of the reconfigurations of models at runtime, outperforms the other two. We also performed a statistical analysis to provide evidence of the significance of the results.
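
As a hedged sketch of the guidance idea (the paper's actual fitness function is not reproduced here), a candidate reconfiguration sequence can be scored by how much of the bug report's vocabulary its description covers:

```python
def fitness(candidate_description, bug_report):
    # Fraction of the bug report's terms that the candidate's
    # description covers (1.0 = full lexical overlap).
    report = set(bug_report.lower().split())
    candidate = set(candidate_description.lower().split())
    return len(report & candidate) / len(report) if report else 0.0

bug = "light turns off when presence detected in kitchen"
seq = "rule turn off kitchen light on low luminosity"
print(round(fitness(seq, bug), 2))   # 0.38 -- 3 of 8 report terms covered
```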

Integrating the Designer in-the-loop for Metamodel/Model Co-Evolution via Interactive Computational Search

Metamodels evolve even more frequently than programming languages. This evolution process may leave a large number of instance models no longer conforming to the revised metamodel. On the one hand, the manual adaptation of models after metamodel evolution can be tedious, error-prone, and time-consuming. On the other hand, the automated co-evolution of metamodels/models is challenging, especially when new semantics is introduced to the metamodels. In this paper, we propose an interactive multi-objective approach that dynamically adapts and interactively suggests edit operations to developers and takes their feedback into consideration. Our approach uses NSGA-II to find a set of good edit operation sequences that minimize the number of conformance errors, maximize the similarity with the initial model (reducing the loss of information) and minimize the number of proposed edit operations. The designer can approve, modify, or reject each of the recommended edit operations, and this feedback is then used to update the rankings of recommended edit operations. We evaluated our approach on a set of metamodel/model co-evolution case studies and compared it to fully automated co-evolution techniques.
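
Schematically (our notation, derived from the three objectives just listed), the search minimizes an objective vector over candidate edit sequences s applied to the non-conforming model M:

```latex
% Candidate edit sequence s applied to the non-conforming model M:
\[
  \min_{s}\ \Bigl(\ \mathit{errors}\bigl(s(M)\bigr),\;
                    1 - \mathit{sim}\bigl(s(M),\, M\bigr),\;
                    |s|\ \Bigr)
\]
% i.e., fewer conformance errors, higher similarity to the initial model
% (less information loss), and fewer edit operations.
```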

Unified LTL Verification and Embedded Execution of UML Models

The increasing complexity of embedded systems leads to uncertain behaviors, security flaws, and design mistakes. With model-based engineering, early diagnosis of such issues is made possible by verification tools working on design models. However, three severe drawbacks remain to be fixed. First, transforming design models into executable code creates a semantic gap between models and code. Furthermore, for formal verification, a second transformation (towards a formal language) is generally required, which complicates the diagnosis process. Finally, an equivalence relation between verified formal models and deployed code has to be built, proven, and maintained. To tackle these issues, we introduce a UML interpreter that fulfills multiple purposes: simulation, formal verification, and execution on both a desktop computer and a bare-metal embedded target. Using a single interpreter for all these activities ensures operational semantics consistency. We illustrate our approach on a level crossing example, showing verification of LTL properties on a desktop computer, as well as execution on an STM32 embedded target.
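
For concreteness, properties of the kind verified for such a level crossing might read as follows in LTL (illustrative formulas of our own, not necessarily those of the paper):

```latex
% Liveness: an approaching train eventually finds the barrier down.
% Safety: the barrier is never up while a train is in the crossing.
\[
  \mathbf{G}\,\bigl(\mathit{approaching} \rightarrow \mathbf{F}\,\mathit{barrierDown}\bigr)
  \qquad
  \mathbf{G}\,\bigl(\mathit{inCrossing} \rightarrow \lnot\,\mathit{barrierUp}\bigr)
\]
```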

A Model-Driven Solution to Support Smart Mobility Planning

Multimodal journey planners have been introduced with the goal of providing travellers with itineraries involving two or more means of transportation to go from one location to another within a city. Most of them take into account user preferences and habits and are able to notify travellers of real-time traffic information, delays, schedule updates, etc. To make urban mobility more sustainable, the journey planners of the future must include: (1) techniques to generate journey alternatives that take into account not only user preferences and needs but also specific city challenges and local mobility operators' resources; (2) agile development approaches that make updating the models and information used by the journey planners a self-adaptive task; (3) techniques for continuous journey monitoring that detect when a current journey is no longer valid and propose alternatives. In this paper we present the experience gained during the development of a complete solution for mobility planning based on model-driven engineering techniques. Mobility challenges, resources and remarks are modelled by corresponding languages, which in turn support the automated derivation of a smart journey planner. By means of the introduced automation, it has been possible to reduce the complexity of encoding journey planning policies and to make journey planners more flexible and responsive with respect to adaptation needs.

SESSION: Selected Papers for Industry Day

Lessons Learned from Model-Based Safety Assessment with SysML and Component Fault Trees

Mastering the complexity of safety assurance for modern, software-intensive systems is challenging in several domains, such as automotive, robotics, and avionics. Model-based safety analysis techniques show promising results in handling this challenge by automating the generation of the artifacts required for an assurance case. In this work, we adapt prominent approaches and propose augmenting SysML models with component fault trees (CFTs) to support fault tree analysis (FTA). While most existing approaches based on CFTs target only the system topology, e.g., UML Class Diagrams, we propose an integration of CFTs with SysML Internal Block Diagrams as well as SysML Activity Diagrams. We conclude with best practices and lessons learned that emerged from applying our approach to automotive use cases.

Digital Behavioral Twins for Safe Connected Cars

Driving is a social activity that involves endless interactions with other agents on the road. Failing to locate these agents and predict their possible future actions may result in serious safety hazards. Traditionally, the responsibility for avoiding these safety hazards rests solely on the drivers. With improved sensor quantity and quality, modern ADAS are able to accurately perceive the location and speed of other nearby vehicles and warn the driver about potential safety hazards. However, accurately predicting the behavior of a driver remains a challenging problem. In this paper, we propose a framework in which behavioral models of drivers (Digital Behavioral Twins) are shared among connected cars to predict potential future actions of neighboring vehicles, thereby improving driving safety. We provide mathematical formulations of models of driver behavior and the environment, and discuss challenging problems in model construction and risk analysis. We also demonstrate that our digital twins framework can accurately predict driver behaviors and effectively prevent collisions using a case study in a virtual driving simulation environment.

SESSION: Model Analysis and Testing

Engineering Software Diversity: a Model-Based Approach to Systematically Diversify Communications

Automated diversity is a promising means of increasing the security of software systems. However, current automated diversity techniques operate at the bottom of the software stack (operating system and compiler), yielding a limited amount of diversity. We present a novel Model-Driven Engineering approach to the diversification of communicating systems, building on abstraction, model transformations and code generation. This approach generates significant amounts of diversity with a low overhead, and addresses a large number of communicating systems, including small communicating devices.

Extending Complex Event Processing to Graph-structured Information

Complex Event Processing (CEP) is a powerful technology in real-time distributed environments for analyzing fast and distributed streams of data and deriving conclusions from them. CEP permits defining complex events based on the events produced by the incoming sources, in order to identify meaningful complex circumstances and respond to them as quickly as possible. However, in many situations the information that needs to be analyzed is not structured as a mere sequence of events, but as graphs of interconnected data that evolve over time. This paper proposes an extension of CEP systems that permits dealing with graph-structured information. Two case studies are used to validate the proposal and to compare its performance with traditional CEP systems. We discuss the benefits and limitations of the presented CEP extensions.

Enabling Model Testing of Cyber-Physical Systems

Applying traditional testing techniques to Cyber-Physical Systems (CPS) is challenging due to the deep intertwining of software and hardware and the complex, continuous interactions between the system and its environment. To alleviate these challenges, we propose to conduct testing at early stages and over executable models of the system and its environment. Model testing of CPSs is, however, not without difficulties. The complexity and heterogeneity of CPSs makes it necessary to combine different modeling formalisms to build faithful models of their different components. The execution of CPS models thus requires an execution framework supporting the cosimulation of different types of models, including models of the software (e.g., SysML), hardware (e.g., SysML or Simulink), and physical environment (e.g., Simulink). Furthermore, to enable testing in realistic conditions, the cosimulation process must be (1) fast, so that thousands of simulations can be conducted in practical time, (2) controllable, to precisely emulate the expected runtime behavior of the system, and (3) observable, by producing simulation data enabling the detection of failures. To tackle these challenges, we propose a SysML-based modeling methodology for model testing of CPSs, and an efficient SysML-Simulink cosimulation framework. Our approach was validated on a case study from the satellite domain.

Towards Testing from Finite State Machines with Symbolic Inputs and Outputs

SESSION: Experience Report

Measures to report the Location Problem of Model Fragment Location

Model Fragment Location (MFL) aims at identifying the model elements that are relevant to a requirement, feature, or bug. Many MFL approaches have been introduced in the last few years to address the identification of the model elements that correspond to a specific functionality. However, reports of location problems typically lack detail about both the search space (the models) and the solution to be found (the model fragment); generally, the only reported measure is model size. In this paper, we propose using five measurements (size, volume, density, multiplicity, and dispersion) to report location problems. These measurements are the result of analyzing 1,308 MFL problems in a family of industrial models over the last four years. Using two MFL approaches, we highlight the importance of these measurements for comparing results. Our work not only proposes improving the reporting of the location problem, but also provides real measurements of location problems that are useful to other researchers in the design of synthetic location problems.

Improving the Developer Experience with a Low-Code Process Modelling Language

Context: The OutSystems Platform is a development environment composed of several DSLs, used to specify, quickly build, and validate web and mobile applications. The DSLs allow users to model different perspectives such as interfaces and data models, define custom business logic, and construct process models. Problem: The DSL for process modelling, Business Process Technology (BPT), has a low adoption rate and is perceived as having usability problems hampering its adoption. This is problematic given the language maintenance costs. Method: We used a combination of interviews, a critical review of BPT using the "Physics of Notations", and empirical evaluations of BPT using the System Usability Scale (SUS) and the NASA Task Load indeX (TLX) to develop a new version of BPT, taking these inputs and the culture of OutSystems' engineers into account. Results: Evaluations conducted with 25 professional software engineers showed an increase in semantic transparency in the new version, from 31% to 69%, an increase in the correctness of responses, from 51% to 89%, an increase in the SUS score, from 42.25 to 64.78, and a decrease in the TLX score, from 36.50 to 20.78. These differences were statistically significant. Conclusions: These results suggest the new version of BPT significantly improved the developer experience over the previous version. The end users' background with OutSystems had a relevant impact on the final concrete-syntax choices and the achieved usability indicators.

A Feature-based Survey of Model View Approaches

SESSION: Empirical Studies

Model-Based Software Engineering: A Multiple-Case Study on Challenges and Development Efforts

A recurring theme in discussions about the adoption of Model-Based Engineering (MBE) is its effectiveness, because there is a lack of empirical assessment of the processes and (tool) use of MBE in practice. We conducted a multiple-case study observing two two-month MBE projects in which software for a Mars rover was developed. We focused on assessing the distribution of the total software development effort over different development activities. Moreover, we observed and collected the challenges reported by the developers during the execution of the projects. We found that the majority of the effort is spent on collaboration and communication activities. Furthermore, our inquiry into challenges showed that tool-related challenges are the most frequently encountered.

An Empirical Investigation to Understand the Difficulties and Challenges of Software Modellers When Using Modelling Tools

Software modelling is a challenging and error-prone task. Existing Model-Driven Engineering (MDE) tools provide modellers with little aid, partly because tool providers have not investigated users' difficulties through empirical investigations such as field studies. This paper presents the results of a two-phase user study to identify the most prominent difficulties that users might face when developing UML Class and State-Machine diagrams using UML modelling tools. In the first phase, we identified the preliminary modelling challenges by analysing 30 Class and State-Machine models that were previously developed by students as a course assignment. The result of the first phase helped us design the second phase of our user study where we empirically investigated different aspects of using modelling tools: the tools' effectiveness, users' efficiency, users' satisfaction, the gap between users' expectation and experience, and users' cognitive difficulties. Our results suggest that users' greatest difficulties are in (1) remembering contextual information and (2) identifying and fixing errors and inconsistencies.

Effort Used to Create Domain-Specific Modeling Languages

Domain-specific modeling languages and generators have been shown to significantly improve the productivity and quality of system and software development. These benefits are typically reported without explaining the size of the initial investment in creating the languages, generators and related tooling. We compare the investment needed across ten cases, in two different ways, focusing on the effort to develop a complete modeling solution for a particular domain with the MetaEdit+ tool. Firstly, we use a case study research method to obtain detailed data on the development effort of implementing two realistically-sized domain-specific modeling solutions. Secondly, we review eight publicly available cases from various companies to obtain data from industry experiences with the same tool, and compare them with the results from our case studies. Both the case studies and the industry reports indicate that, for this tool, the investment required to create domain-specific modeling support is modest: ranging from 3 to 15 man-days with an average of 10 days.

A Decade of Software Design and Modeling: A Survey to Uncover Trends of the Practice

We present the results of a survey of 228 software practitioners conducted in two phases ten years apart. The goal of the study is to uncover trends in the practice of software design and the adoption patterns of modeling languages such as UML. The first phase was conducted in April-December 2007 and included 113 responses. The second phase was conducted in March-November 2017 and included 115 responses. Both surveys were conducted online, employed identical solicitation mechanisms, and included the same set of questions. The survey results are analyzed within each phase and across phases. We present the results and an analysis of the data, identifying upward and downward trends in design and modeling practices. The results suggest some increase in formal and informal modeling and identify key challenges with modeling platforms and tools. The results can help researchers, practitioners, and educators focus efforts on issues of relevance and significance to the profession.

SESSION: Patterns, Refactoring and Refinement

Recommending Model Refactoring Rules from Refactoring Examples

Models, like other first-class artifacts such as source code, are maintained and may be refactored to improve their quality and, consequently, that of the derived artifacts. Considering the size of the manipulated models, automatic support is necessary for refactoring tasks. When the refactoring rules are known, such support is simply the implementation of these rules in editors. However, for less popular and proprietary modeling languages, refactoring rules are generally difficult to define. Nevertheless, knowledge of them is often embedded in practical examples. In this paper, we propose an approach to recommend refactoring rules that we learn automatically from refactoring examples. The evaluation of our approach on three modeling languages shows that, in general, the learned rules are accurate.

Refactoring Architecture Models for Compliance with Custom Requirements

In the process of software-intensive systems engineering, architectures need to be designed that comply with the requirements. For this, architects need to examine those requirements with regard to their architectural impact. Accessing and interpreting the requirements is, however, not always possible, for instance if custom requirements are still unknown at the time the architecture is modeled. Ideally, architectural knowledge derived from custom requirements could be imposed upon architecture models. This paper proposes a novel concept for the automated refactoring of architecture models to meet such requirements, formalizing architectural knowledge using model verification and model transformations. Industrial application within a telecommunications service provider is demonstrated in the domain of cloud application orchestration: service providers are enabled to autonomously customize solutions predefined by vendors according to their own internal requirements.

Model-Driven Trace Diagnostics for Pattern-based Temporal Specifications

Offline trace checking tools check whether a specification holds on a log of events recorded at run time; they yield a verification verdict (typically a Boolean value) when the checking process ends. When the verdict is false, a software engineer needs to diagnose the property violations found in the trace in order to understand their cause and, if needed, decide on corrective actions to be performed on the system. However, a Boolean verdict may not be informative enough to perform trace diagnostics, since it does not provide any useful information about the cause of the violation and because a property can be violated for multiple reasons.

The goal of this paper is to provide a practical and scalable solution to the trace diagnostics problem, in the setting of model-driven trace checking of temporal properties expressed in TemPsy, a pattern-based specification language. The main contributions of the paper are: a model-driven approach for trace diagnostics of pattern-based temporal properties expressed in TemPsy, which relies on the evaluation of OCL queries on an instance of a trace metamodel; the implementation of this trace diagnostics procedure in the TemPsy-Report tool; and an evaluation of the scalability of TemPsy-Report when used for the diagnostics of violations of real properties derived from a case study of our industrial partner. The results show that TemPsy-Report is able to collect diagnostic information from large traces (with one million events) in less than ten seconds; TemPsy-Report scales linearly with respect to the length of the trace and maintains approximately constant performance as the number of violations increases.

On Computing Instructions to Repair Failed Model Refinements

A model refinement step is the process of removing underspecification from a model by applying syntactic changes such that the transformed model's semantics is subsumed by the semantics of the original model. Performing a refinement step is error-prone and thus needs automated and meaningful support for repair in case an intended refinement step yields an incorrect result. This paper introduces sufficient conditions on a modeling language that enable the fully automatic calculation of syntactic changes that transform one model into a refinement of another model. In contrast to previous work, this paper's approach is independent of a concrete modeling language, computes shortest syntactic changes to preserve the developer's intention behind the model as much as possible, and does not assume the availability of powerful model composition operators. The method relies on partitioning the syntactic change operations applicable to each model into equivalence classes and on excluding syntactic changes that are not part of shortest changes leading to a refining model. This paper contains formal proofs for the modeling-language-independent results and shows the method's applicability and usefulness by instantiating the framework with three modeling languages. The results provide a language-independent and fully automated method to repair refinement steps under intuitive assumptions, as well as language-independent foundational insights concerning the relation between syntactic changes and the impact of their application on a model's semantics.
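
In symbols (our notation, following the abstract's definition), writing sem(.) for the semantics of a model:

```latex
% m' refines m iff every meaning of m' is already a meaning of m:
\[
  m' \sqsubseteq m \;:\Longleftrightarrow\; \mathit{sem}(m') \subseteq \mathit{sem}(m)
\]
% Repair: given m and a failed candidate m', compute a shortest sequence
% of syntactic change operations \delta with sem(\delta(m')) \subseteq sem(m).
```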

SESSION: Model Management

Towards sound, optimal, and flexible building from megamodels

The model-driven development of systems involves multiple models, metamodels and transformations. Transformations -- which may be bidirectional -- specify, and provide means to enforce, desired "consistency" relationships between models. We can describe the whole configuration using a megamodel. As development proceeds, and various models are modified, we need to be able to restore consistency in the megamodel, so that the consequences of decisions first recorded in one model are appropriately reflected in the others. At the same time, we need to minimise the amount of recomputation needed; in particular, we would like to avoid reapplying a transformation when no relevant changes have occurred in the models it relates. In general, however, different results are obtained depending on which models are allowed to be modified and on the order and direction of transformation application. In this paper we propose using an orientation model to make important choices explicit. We explain the relationship between software build systems and the megamodel consistency problem. We show how to extend the formalised build system pluto to provide a means of restoring consistency in a megamodel that is, in appropriate senses, flexible, sound and optimal.

Robust Hashing for Models

The increased adoption of model-driven engineering (MDE) in complex industrial environments highlights the value of a company's modeling artefacts. As such, any MDE ecosystem must provide mechanisms to both protect and take full advantage of these valuable assets.

In this sense, we explore the adaptation of robust hashing techniques to the MDE domain. Indeed, robust hashing algorithms (i.e., hashing algorithms that generate similar outputs from similar input data) have proved useful as a key building block in intellectual property protection, authenticity assessment, and fast comparison and retrieval solutions in different application domains. We present a novel robust hashing mechanism for models based on the use of model fragmentation and locality-sensitive hashing. We discuss the usefulness of this technique in a number of scenarios and demonstrate its feasibility by providing a prototype implementation and a corresponding experimental evaluation.
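
A minimal sketch combining the two named ingredients, model fragmentation and locality-sensitive hashing, using MinHash over fragment strings (our illustration, not the paper's exact algorithm):

```python
import hashlib

def minhash_signature(fragments, num_hashes=16):
    # One "hash function" per index i, simulated by salting SHA-256 with i;
    # each signature position keeps the minimum hash over all fragments, so
    # similar fragment sets agree on most positions (a robust model hash).
    return [min(int.from_bytes(hashlib.sha256(f"{i}:{frag}".encode()).digest()[:8], "big")
                for frag in fragments)
            for i in range(num_hashes)]

m1 = {"Class:Person", "Attr:Person.name", "Ref:Person.address"}
m2 = {"Class:Person", "Attr:Person.name", "Ref:Person.addr"}   # one small edit
s1, s2 = minhash_signature(m1), minhash_signature(m2)
print(sum(a == b for a, b in zip(s1, s2)), "of", len(s1), "positions agree")
```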

Incremental View Model Synchronization Using Partial Models

View models are abstractions of a set of source models derived by unidirectional model transformations. In this paper, we propose a view model transformation approach which provides a fully compositional transformation language built on an existing graph query language to declaratively compose source and target patterns into transformation rules. Moreover, we provide a reactive, incremental, validating and inconsistency-tolerant transformation engine that reacts to changes of the source model and maintains an intermediate partial model by merging the results of composable view transformations followed by incremental updates of the target view. An initial scalability evaluation of an open source prototype tool built on top of an open source model transformation tool is carried out in the context of the open Train Benchmark framework.

Towards Scalable Model Views on Heterogeneous Model Resources

When engineering complex systems, models are used to represent various system aspects. These models are often heterogeneous in terms of modeling language, provenance, number, or scale, and they can be managed by different persistence frameworks adapted to their nature. As a result, the information relevant to engineers is usually split into several interrelated models. To be useful in practice, these models need to be integrated together to provide global views over the system under study. Model view approaches have been proposed to tackle this issue. They provide a unification mechanism to combine and query heterogeneous models in a transparent way. These views usually target specific engineering tasks such as system design, monitoring, evolution, etc. In our present context, the MegaM@Rt2 industrially-supported European initiative defines a set of large-scale use cases where model views can be beneficial for tracing runtime and design-time data. However, existing model view solutions mostly rely on in-memory constructs and low-level modeling APIs that have not been designed to scale in the context of large models stored in different kinds of sources. This paper presents the current status of our work towards a general solution to efficiently support scalable model views on heterogeneous model resources. It describes our integration approach between model view and model persistence frameworks. This notably implies refining the view framework for the construction of large views from multiple model storage solutions. It also requires studying how parts of queries can be computed on the contributing models rather than on the view. Our solution has been benchmarked on a practical large-scale use case from the MegaM@Rt2 project, implementing a runtime--design-time feedback loop. The corresponding EMF-based tooling support and modeling resources are fully available online.

SESSION: Hardware Embedded Systems and CPS

Slicing UML-based Models of Real-time Embedded Systems

Models of Real-time Embedded (RTE) systems may encompass many components with many different kinds of dependencies describing, e.g., structural relationships or the flow of data, control, or messages. Understanding and properly accounting for these dependencies during development, testing, and debugging can be challenging. This paper presents an approach for slicing models of RTE systems to facilitate model understanding and other model-level activities. A key novelty of our approach is the support for models with composite components (with multiple hierarchical levels) and the capture of dependencies that involve structural and behavioural model elements, including combinations of the two. Moreover, we describe an implementation of our approach in the context of Papyrus-RT, an open source Model Driven Engineering (MDE) tool based on the modeling language UML-RT. We conclude the paper with the results of applying our slicer to a set of UML-RT models to validate the approach and to demonstrate its use in facilitating model-level analysis tasks such as testing and debugging.
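
For intuition, the core of dependency-based slicing can be sketched as a reachability computation over a dependency graph (the paper's slicer additionally handles hierarchical composite components and combined structural/behavioural dependencies; the element names below are hypothetical):

```python
from collections import deque

def backward_slice(dependencies, criterion):
    # dependencies: element -> set of elements it directly depends on.
    # The slice is everything reachable from the criterion over dependencies.
    sliced, queue = {criterion}, deque([criterion])
    while queue:
        for dep in dependencies.get(queue.popleft(), ()):
            if dep not in sliced:
                sliced.add(dep)
                queue.append(dep)
    return sliced

deps = {"Controller": {"Sensor", "Timer"}, "Sensor": {"Bus"}, "Logger": {"Bus"}}
print(backward_slice(deps, "Controller"))  # Controller, Sensor, Timer, Bus
```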

HITECS: A UML Profile and Analysis Framework for Hardware-in-the-Loop Testing of Cyber Physical Systems

Hardware-in-the-loop (HiL) testing is an important step in the development of cyber physical systems (CPS). CPS HiL test cases manipulate hardware components, are time-consuming and their behaviors are impacted by the uncertainties in the CPS environment. To mitigate the risks associated with HiL testing, engineers have to ensure that (1) HiL test cases are well-behaved, i.e., they implement valid test scenarios and do not accidentally damage hardware, and (2) HiL test cases can execute within the time budget allotted to HiL testing. This paper proposes an approach to help engineers systematically specify and analyze CPS HiL test cases. Leveraging the UML profile mechanism, we develop an executable domain-specific language, HITECS, for HiL test case specification. HITECS builds on the UML Testing Profile (UTP) and the UML action language (Alf). Using HITECS, we provide analysis methods to check whether HiL test cases are well-behaved, and to estimate the execution times of these test cases before the actual HiL testing stage. We apply HITECS to an industrial case study from the satellite domain. Our results show that: (1) HITECS is feasible to use in practice; (2) HITECS helps engineers define more complete and effective well-behavedness assertions for HiL test cases, compared to when these assertions are defined without systematic guidance; (3) HITECS verifies in practical time that HiL test cases are well-behaved; and (4) HITECS accurately estimates HiL test case execution times.

Hybrid Co-simulation: It's About Time

SESSION: Graphical Modelling and Modelling Applications

Towards a Language Server Protocol Infrastructure for Graphical Modeling

The development of modern IDEs is still a challenging and time-consuming task, which requires implementing support for language-specific features such as syntax highlighting or validation. When the IDE targets a graphical language, its development becomes even more complex due to the rendering and manipulation of the graphical notation symbols. To simplify the development of IDEs, the Language Server Protocol (LSP) proposes a decoupled approach based on language-agnostic clients and language-specific servers. LSP clients communicate changes to LSP servers, which validate and store language instances. However, LSP only addresses textual languages (i.e., with characters as atomic units) and neglects support for graphical ones (i.e., with nodes/edges as atomic units). In this paper, we present our vision for decoupling graphical language IDEs and discuss alternatives for integrating LSP's ideas into their development. Moreover, we propose a novel LSP infrastructure to simplify the development of new graphical modeling tools, in which Web technologies may be used for editor front-ends while existing modeling frameworks are leveraged to build language servers.

Visualizations of Evolving Graphical Models in the Context of Model Review

Code reviewing is well recognized as a valuable software engineering practice for improving software quality. Today, a large variety of tools exist that support code reviewing and are widely adopted in open source and commercial software projects. They commonly support developers in manually inspecting code changes, providing feedback on and discussing these code changes, as well as tracking the review history. As source code is usually text-based, code reviewing tools also support only text-based artifacts. Hence, code changes are visualized textually and review comments are attached to text passages. This renders them unsuitable for reviewing graphical models, which are visualized in diagrams instead of textually and hence require graphical change visualizations as well as annotation capabilities at the diagram level. Consequently, developers currently have to switch back and forth between code reviewing tools and comparison tools for graphical models to relate reviewer comments to model changes. Furthermore, adding and discussing reviewer comments at the diagram level is simply not possible. To improve this situation, we propose a set of coordinated visualizations of reviewing-relevant information for graphical models, including model changes, diagram changes, review comments, and review history. The proposed visualizations have been implemented in a prototype tool called Mervin, which supports the reviewing of graphical UML models developed with Eclipse Papyrus. Using this prototype, the proposed visualizations have been evaluated for effectiveness in a user study. The evaluation results show that the proposed visualizations can improve the review process of graphical models in terms of issue detection.

Dissimilarity Measures for Clustering Space Mission Architectures

The application of model transformations to the process of design space exploration and multi-objective optimization allows for comprehensive exploration of an architectural trade space. For many applications, such as the design of missions involving multiple spacecraft, the resulting set of Pareto-optimal solution models can be too large to be consumed directly, requiring additional analyses in order to gain meaningful insights. In this paper, we investigate the use of automated clustering techniques for grouping similar solution models, and introduce and study a number of both generic and domain-specific methods for measuring the similarity of the solution models. We report results from applying our approach to the exploration of the design space of a spacecraft-based interferometry array in a lunar orbit. For purposes of evaluation and validation, results from the application to the case study are correlated with the results from a study in which solution models were clustered manually by groups of domain experts. The results show tradeoffs in the granularity and extensibility of applying clustering approaches to spacecraft mission architecture models. Also, what humans consider to be relevant in assessing architectural similarity varies and is often biased by their background and expertise. We conclude that providing the subjects with a range of clustering tools has the potential to strongly enhance the ability to explore the complex design space of multi-spacecraft missions, and gain deep insights into the trade space.
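
One of the simplest generic measures of this kind (our example; the paper introduces and evaluates several generic and domain-specific ones) treats each architecture as a set of design choices and computes their Jaccard dissimilarity:

```python
def jaccard_dissimilarity(arch_a, arch_b):
    # 0.0 = identical sets of design choices, 1.0 = completely disjoint.
    union = arch_a | arch_b
    return 1 - len(arch_a & arch_b) / len(union) if union else 0.0

# Hypothetical mission architectures as sets of design choices:
a = {"orbit:lunar", "spacecraft:4", "baseline:10km"}
b = {"orbit:lunar", "spacecraft:6", "baseline:10km"}
print(jaccard_dissimilarity(a, b))   # 0.5
```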

SESSION: Transformations 2

Change Propagation-based and Composition-based Co-evolution of Transformations with Evolving Metamodels

Transformations constitute key components of an automated model-driven engineering solution. As metamodels evolve, model transformations may need to be co-evolved accordingly. An experiment we conducted on transformation co-evolution highlighted a gap in the literature: only a few co-evolution scenarios are covered, without support for the alternatives that occur in practice. To make matters worse, when a developer needs to drift away from the proposed co-evolution, no automatic support is provided. This paper first proposes a change propagation-based co-evolution of transformations. The premise is that knowledge of the metamodel evolution can be propagated, by means of resolutions, to drive the transformation co-evolution. To deal with particular cases where developers must drift from the proposed resolutions, we introduce a composition-based mechanism that allows developers to compose resolutions meeting their needs. Our work is evaluated on 14 case studies consisting of original and evolved metamodels and Epsilon Transformation Language (ETL) transformations. A comparison of our co-evolved transformations with the 14 versioned ones showed the usefulness of our approach, which reached an average of 96% correct co-evolutions. On three other case studies, our composition-based co-evolution proved useful to eight developers in selecting resolutions that best meet their needs. Among the applied resolutions, four developers applied six resolutions that were the direct result of a composition.

Integration of Visual Contracts and Model Transformation for Enhanced MDE Development

Model transformations are an important aspect of Model-Driven Engineering, as models throughout the software development process are transformed and refined until, finally, application code is generated. However, model transformations are complex to build, maintain, and verify for correctness. We propose combining visual contracts, an implementation-independent approach for specifying correctness requirements for verification purposes, with operator-based model transformation execution, integrating both the specification of transformation requirements and the transformations themselves within the same framework. The graphical operator-based notation is used to define both the constraints of a contract and the transformation definition. This allows reuse of operators between the two and maintains implementation independence, as the operators can be directly executed or compiled to other model transformation languages. To illustrate the concept, we report on a prototype integration of visual contracts with our existing operator-based model transformation framework and its application in the law-enforcement context to transform relational data sources into Elasticsearch for multi-source analytics.

Assurance via model transformations and their hierarchical refinement

Assurance is a demonstration that a complex system (such as a car or a communication network) possesses an important property, such as safety or security, with a high level of confidence. In contrast to currently dominant approaches to building assurance cases, which are focused on goal structuring and/or logical inference, we propose considering assurance as a model transformation (MT) enterprise: saying that a system possesses an assured property amounts to saying that a particular assurance view of the system, comprising the assurance data, satisfies acceptance criteria posed as assurance constraints. While the MT realizing this view is very complex, we show that it can be decomposed into elementary MTs via a hierarchy of refinement steps. The transformations at the bottom level are ordinary MTs that can be executed for data specifying the system, thus providing the assurance data to be checked against the assurance constraints. In this way, assurance amounts to traversing the hierarchy from top to bottom and assuring the correctness of each MT in the path. Our approach has a precise mathematical foundation (rooted in process algebra and category theory) --- a necessity if we are to model precisely and then analyze our assurance cases. We discuss the practical applicability of the approach and argue that it has several advantages over existing approaches.

SESSION: Synthesis & Simulation

From Deployment to Platform Exploration: Automatic Synthesis of Distributed Automotive Hardware Architectures

In order to cope with the rising complexity of today's systems, model-based development of software-intensive embedded systems has become a de facto standard in recent years. In previous work, we demonstrated how such a model-based approach can enable the automation of certain development steps, namely the deployment of logical (platform-independent) system models onto technical (platform-specific) system models. Together with Continental, we especially focused on industrial applicability.

In this work, we demonstrate how we extended, again in cooperation with Continental, the previous approach to enable the synthesis of the topology of technical platforms (E/E architectures) together with a deployment. We furthermore introduced variability concepts to model variants of technical platforms, an industrial requirement. Our approach is thus capable of calculating a platform architecture and topology that is optimized with respect to the deployment of logical system models, constraints, and optimization objectives, and of choosing the optimal variant among all technical models.

Highly-Optimizing and Multi-Target Compiler for Embedded System Models: C++ Compiler Toolchain for the Component and Connector Language EmbeddedMontiArc

Component and Connector (C&C) models, with their corresponding code generators, are widely used by large automotive manufacturers to develop new software functions for embedded systems interacting with their environment; example C&C applications are engine control, remote parking pilots, and traffic sign assistance. This paper presents a complete toolchain to design and compile C&C models to highly optimized code running on multiple targets including x86/x64, ARM, and WebAssembly. One of our contributions is a set of algebraic and threading optimizations that increase execution speed for computationally expensive tasks. A further contribution is an extensive case study with over 50 experiments comparing the runtime speed of the generated code using different compilers and mathematical libraries. These experiments showed that programs produced by our compiler are at least two times faster than the ones compiled by MATLAB/Simulink for machine learning applications such as image clustering for object detection. Additionally, our compiler toolchain provides a complete model-based testing framework and plug-in points for middleware integration. We make all materials, including models and toolchains, electronically available for inspection and further research.

Efficient use of local energy: An activity oriented modeling to guide Demand Side Management

Self-consumption of renewable energies is defined as electricity that is produced from renewable energy sources, not injected into the distribution or transmission grid or instantaneously withdrawn from the grid, and consumed by the owner of the power production unit or by associates directly contracted to the producer. Designing solutions in favor of self-consumption for small industries or city districts is challenging: it consists in designing an energy production system made of solar panels, wind turbines, and batteries that fits the annual weather forecast and the industrial or human activity. In this context, this paper reports on this business domain and its challenges, and on the application of modeling that led to a solution. Through this article, we highlight the essentials of a domain-specific modeling language designed to let domain experts run their own simulations, we compare it with the practices that exist in such a company, and we discuss the benefits and the limits of the use of modeling in such a context.