MODELS '20: Proceedings of the 23rd ACM/IEEE International Conference on Model Driven Engineering Languages and Systems: Companion Proceedings

SESSION: Tools and demos

Gentleman: a light-weight web-based projectional editor generator

In the activity of software development and modeling, users should benefit from as much freedom as possible to express themselves, and this characteristic also extends to the tools they use. In recent years, projectional editors have proven to be a valid approach to obtain such capabilities by enabling language extension and composition as well as various notations. However, current solutions are heavyweight, platform-specific, and suffer from poor usability. To better support this paradigm and minimize the risk of arbitrary and accidental constraints on expressivity, we introduce Gentleman, a lightweight web-based projectional editor generator. Gentleman allows the user to define a model and projections for its concepts, and to use the generated editor to create model instances. We demonstrate how to define a projectional editor for Mindmap modeling, covering model definition, text and table projection, multi-projection, and styling to showcase its main features.

TyphonML: a modeling environment to develop hybrid polystores

Designing and deploying a hybrid data persistence architecture that involves a combination of relational and NoSQL databases is a complex, technically challenging, and error-prone task. In this tool paper, we propose TyphonML, a modeling language and supporting environment that permits modelers to specify data that need to be persisted in hybrid architectures, abstracting over the specificities of the underlying technologies. The language enables the specification of both conceptual entities and available data layer technologies, as well as how the modeled entities are mapped to the available database systems. TyphonML models are used to generate microservice-based infrastructures, which permit users to interact with the designed hybrid polystores at the conceptual level. We show the different components of the TyphonML environment at work through a demonstration scenario.

A profiler for the matching process of Henshin

Model transformations are essential operations in Model-Driven Software Engineering (MDSE). Due to the increasing size and complexity of software systems developed with the help of MDSE, the input models for transformations are also getting bigger. In order to still be able to use model transformations efficiently, more attention should be paid to their performance. Especially with declarative languages, the execution of a transformation remains hidden from the developer, so there is no way to understand the reasons for a long execution time.

We present our profiler for the declarative transformation language Henshin, which provides information about the execution at the level of transformations. In particular, the goal of our profiler is to provide information about the matching process to find an isomorphic subgraph. With the help of our monitoring approach, we collect information about the transformation execution at runtime, which is then aggregated and presented to the developer in various visualizations.

In our tool demo, we present the two ways to invoke our profiler and use an example to explain what information the profiler provides. In addition, we interpret the displayed information in the context of our example and show how a possible performance improvement can be achieved. A screencast of the demo is available at https://youtu.be/TVq6MN8drJM

Automated video game world map synthesis by model-based techniques

World maps contribute a significant part of the interactivity and entertainment to modern video games. While large-scale industrial world map generation tools exist, their use usually implies a substantial learning curve, and the cost of licences restricts the accessibility of these tools to individual game developers.

In this paper, we introduce a world map generator for Unity-based games that exploits model-based techniques. After the game-specific concepts of the world map are captured and turned into a metamodel, the world map generation problem is first formulated as a consistent graph generation problem solved by a state-of-the-art model generator. This graph model is subsequently refined into a concrete world within the Unity game engine by (1) mapping the abstract graph elements into Unity game objects and (2) creating a height map based on user-defined properties with the Perlin Noise technique. Demonstration video: https://youtu.be/03BbD61EKpk
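
To make the height-map step more concrete, here is a minimal Python sketch of the general Perlin-noise idea. It is only an illustration under assumptions: the authors' generator runs inside the Unity engine and maps generated graph elements to game objects, which is not shown here, and all function and parameter names below are invented.

    import math
    import random

    def make_perlin_2d(seed=42):
        """Build a classic 2D Perlin-style noise function from a shuffled permutation table."""
        rng = random.Random(seed)
        perm = list(range(256))
        rng.shuffle(perm)
        perm += perm  # duplicated so indices never wrap out of range

        def fade(t):  # smooth interpolation curve used by Perlin noise
            return t * t * t * (t * (t * 6 - 15) + 10)

        def grad(h, x, y):  # one of four diagonal gradients, chosen by hash
            return (x if h & 1 else -x) + (y if h & 2 else -y)

        def noise(x, y):
            xi, yi = int(math.floor(x)) & 255, int(math.floor(y)) & 255
            xf, yf = x - math.floor(x), y - math.floor(y)
            u, v = fade(xf), fade(yf)
            aa = perm[perm[xi] + yi]
            ab = perm[perm[xi] + yi + 1]
            ba = perm[perm[xi + 1] + yi]
            bb = perm[perm[xi + 1] + yi + 1]
            bottom = (1 - u) * grad(aa, xf, yf) + u * grad(ba, xf - 1, yf)
            top = (1 - u) * grad(ab, xf, yf - 1) + u * grad(bb, xf - 1, yf - 1)
            return (1 - v) * bottom + v * top  # roughly in [-1, 1]

        return noise

    def height_map(width, height, scale=0.05, max_height=10.0, seed=42):
        """Sample the noise on a grid to obtain terrain heights in [0, max_height]."""
        noise = make_perlin_2d(seed)
        return [[max_height * (noise(x * scale, y * scale) + 1.0) / 2.0
                 for x in range(width)]
                for y in range(height)]

    if __name__ == "__main__":
        hm = height_map(64, 64)
        print(f"sample height at (10, 20): {hm[20][10]:.2f}")

In the paper's pipeline, such a height value would be combined with the user-defined properties attached to the abstract graph elements before being handed to the game engine.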

Papyrus for gamers, let's play modeling

Gamification refers to the exploitation of gaming mechanisms for serious purposes, like learning hard-to-train skills such as modeling.

We present a gamified version of Papyrus, the well-known open source modeling tool. Instructors can use it to easily create new modeling games (including the tasks, solutions, levels, rewards...) to help students learn any specific modeling aspect.

The evaluation of the game components is delegated to the GDF gamification framework, which communicates bidirectionally with the Papyrus core via API calls. Our gamified Papyrus also includes a game dashboard component implemented with HTML/CSS/JavaScript and displayed through a web browser embedded in an Eclipse view.

MReplayer: a trace replayer of distributed UML-RT models

In this paper, we present MReplayer, which supports ordering and replaying of execution traces of distributed systems developed using communicating state machine models. Unlike existing solutions, which require detailed traces annotated with timestamps (logical or physical), MReplayer requires only a minimal amount of trace data without timestamps. Instead, it uses model analysis techniques to order and replay the traces. MReplayer is composed of a set of engines that support an end-to-end solution for trace ordering and replay of distributed systems in three steps: first, a model of a distributed system is instrumented using model transformations to generate execution traces and broadcast them either over a TCP connection or to a log file. Second, static analysis of the state machine models is performed to extract run-to-completion steps (rc-steps) from them. Third, using the information collected in the previous steps (execution traces and rc-steps), a lightweight centralized version of the distributed system is created and presented to users in a web-based application. We have implemented our approach using UML for Real-Time (UML-RT), a language specifically designed for real-time embedded systems with soft real-time constraints. Finally, we have evaluated MReplayer against several use cases of various complexities. The results show that MReplayer can reduce the size of the trace information collected by more than half while incurring similar runtime overhead.

A video that demonstrates the tool: https://youtu.be/WG5ggqPoJHg

User-centred tooling for modelling of big data applications

We outline the key requirements for a Big Data modelling recommender tool. Our web-based tool is suitable for capturing system requirements in big data analytics applications involving diverse stakeholders. It promotes awareness of the datasets and algorithm implementations that are available to leverage in the design of the solution. We implement these ideas in BiDaML-web, a proof of concept recommender system for Big Data applications, and evaluate the tool using an empirical study with a group of 16 target end-users. Participants found the integrated recommender and technique suggestion tools helpful and highly rated the overall BiDaML web-based modelling experience. BiDaML-web is available at https://bidaml.web.app/ and the source code can be accessed at https://github.com/tarunverma23/bidaml.

CyprIoT project: an open source toolset to model and generate a network of things

The IoT lacks a consistent software engineering approach to meet its requirements. MDE offers many techniques that can contribute in this respect. It can help automate many repetitive software engineering tasks thanks to code generation. The literature contains suitable tools to design and generate the internal behavior of things. However, it lacks a dedicated tool for networking. This paper's contribution is a toolset consisting of a DSL, based on Xtext and EMF, to create a model of a network of things, and an extensible code generator, based on ATL and Acceleo, to generate the network artifacts from this model.

ModelMine: a tool to facilitate mining models from open source repositories

Mining Software Repositories (MSR) has opened up new pathways and rich sources of data for research and practical purposes. This research discipline facilitates mining data from open source repositories and analyzing software defects, development activities, processes, patterns, and more. Contemporary mining tools are geared towards data extraction and analysis primarily from textual artifacts, and have limitations in representation, ranking, and availability. This paper presents ModelMine, a novel mining tool that focuses on mining model-based artifacts and designs from open source repositories. ModelMine is designed particularly to mine software repositories, artifacts, and commit history to uncover information about software designs and practices in open-source communities. ModelMine supports features that include the identification and ranking of open source repositories based on the extent to which they contain model-based artifacts, and the querying of repositories to extract models and design artifacts based on customizable criteria. It supports phase-by-phase caching of intermediate results to speed up processing and enable efficient mining of data. We compare ModelMine against a state-of-the-art tool named PyDriller in terms of performance and usability. The results show that ModelMine has the potential to become instrumental for cross-disciplinary research that combines modeling and design with repository mining and artifact extraction. URL: https://www.smreza.com/projects/modelmine/
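
As a rough illustration of the ranking idea (not ModelMine's actual implementation or API), the sketch below scores already-cloned repositories by the number of model-based artifacts they contain; the file extensions and repository paths are assumptions chosen for the example.

    import os

    # Extensions assumed here to indicate model-based artifacts.
    MODEL_EXTENSIONS = {".ecore", ".uml", ".xmi", ".genmodel", ".bpmn", ".sysml"}

    def count_model_artifacts(repo_path):
        """Walk a cloned repository and count files that look like modeling artifacts."""
        count = 0
        for root, _dirs, files in os.walk(repo_path):
            if ".git" in root.split(os.sep):
                continue  # skip git internals
            count += sum(1 for f in files
                         if os.path.splitext(f)[1].lower() in MODEL_EXTENSIONS)
        return count

    def rank_repositories(repo_paths):
        """Rank repositories by the extent of model-based artifacts they contain."""
        scores = {path: count_model_artifacts(path) for path in repo_paths}
        return sorted(scores.items(), key=lambda item: item[1], reverse=True)

    if __name__ == "__main__":
        # Hypothetical local clones; replace with real paths.
        for repo, score in rank_repositories(["./clones/repo-a", "./clones/repo-b"]):
            print(f"{repo}: {score} model artifacts")

ModelMine additionally caches intermediate results and queries commit history, which a simple filesystem walk like this does not attempt.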

Insights collaboration space: a team collaboration app for the design of data-driven services

Next to technical expertise, the ability to effectively communicate and collaborate with a large number of different stakeholders with varying technical backgrounds has always been one of the vital skills of good software architects. So far, software architects have had to mainly interact with domain experts, UX designers, managers, testers, operators, and developers, with developers being one of the most important stakeholder groups for them. In the future, we believe, there will be a new group of stakeholders equally important to developers: data scientists. So the ability to effectively communicate and collaborate with data scientists will become a paramount skill of software architects. Data-driven services will be a large driver for innovation and new business models in the upcoming years. However, the design of those services in a quality that is acceptable to users will require much closer collaboration of the different disciplines than in traditional software engineering projects. In terms of software engineering, it is crucial to facilitate this challenging cross-disciplinary collaboration in the best possible way. As a result of a research cooperation between John Deere and Fraunhofer IESE, we contribute the ICSpace app, an insights collaboration space for cross-disciplinary teams jointly working on the design of data-driven services.

Concrete syntax-based find for graphical DSLs

Services such as copy, paste, cut, find, and replace are available in most software tools we have become used to. However, the state of the art is not as good for graphical language tools. Even many commercial modelling tools have limited support for the find feature. We propose to add find as a service of graphical DSL tool development frameworks. This way, find is available in any DSL built using the DSL tool development framework. The concrete syntax-based find has been implemented as a service of the DSL tool development framework ajoo. Two graph-based languages, UML Activity diagrams and Deterministic Finite Automata (DFA) transition diagrams, are used to demonstrate usage of the concrete syntax-based find.

Using Benji to systematically evaluate model comparison algorithms

Model comparison is a critical task in model-driven engineering. Its correctness enables effective management of model evolution and synchronisation, as well as other tasks such as model transformation testing. The literature is rich in comparison algorithms and approaches; however, the same cannot be said for their systematic evaluation. In this paper we present Benji, a tool for the generation of model comparison benchmarks. In particular, Benji provides domain-specific languages to design experiments in terms of input models and possible manipulations, and based on those generates corresponding benchmark cases. In this way, the experiment specification can be exploited as a systematic way to evaluate available comparison algorithms against the problem under study.

Strengthening validation of model behavior through filmstrip templates in the tool USE

This contribution focuses on testing behavioral aspects of UML and OCL models. In our approach, a so-called model validator can automatically generate test cases (object models) by using manually written configurations for object models and additional OCL invariants. However, describing configurations can be a challenging task, especially for novice or part-time modelers. This paper presents an extension of the tool USE (UML-based Specification Environment) with valuable options for (a) filmstrip model configuration and (b) filmstrip templates in the model validation process. Developers specify the configuration for (application) model elements and, accordingly, a filmstrip model configuration and a filmstrip template are automatically generated. A filmstrip template identifies recurring model parts, which can reduce model validation time. The newly added functionalities strengthen the underlying testing technique by making it more developer-friendly.

Enhancing development and consistency of UML models and model executions with USE studio

The UML and OCL tool USE (UML-based Specification Environment) has been developed over more than a decade, offering domain-specific languages for describing (1) UML class and statechart models, (2) OCL constraints for invariants (on classes and states) and pre- and postconditions (on operations and transitions), and (3) SOIL (Simple OCL-like Imperative Language) command sequences for (3a) operation implementations and (3b) executions of model test cases. The three languages were originally developed as independent textual languages intended for conventional editing. This contribution introduces a new integrated development environment for the three languages that gives the developer projectional editing features. We discuss a number of advantages for model development in a developer interface called USE Studio: (1) completion mechanisms for language syntax elements and already defined developer model elements, (2) structured, focused views on related language elements (e.g., one common view on all model associations), (3) consistency guarantees between the underlying model and model executions, and (4) basic common refactorings for the model and model executions.

MMINT-A 2.0: tool support for the lifecycle of model-based safety artifacts

In recent years, the complexity of safety-critical systems such as automotive systems has been rapidly increasing. The need to address safety concerns in such systems led to the development of industry-specific safety standards. The standards mandate activities that generate model-based safety artifacts (e.g., safety cases and fault trees). Given the importance of these safety models, tool support is needed to facilitate manipulating them throughout their lifecycle while maintaining their connection to system models.

In this paper, we report on MMINT-A 2.0, an extended version of our tool MMINT-A, aimed at facilitating the creation, analysis, and evolution of safety models. We demonstrate the tool-supported methodology of MMINT-A 2.0 on an automotive example.

POSTER SESSION: Posters

Metamodel specialization based DSL for DL lifecycle data management

A new Domain-Specific Language (DSL) based approach to Deep Learning (DL) lifecycle data management (LDM) is presented: a very simple but universal DL LDM tool that is still usable in practice (called the Core tool), and an advanced extension mechanism that converts the Core tool into a DSL tool-building framework for DL LDM tasks. The method used is based on the metamodel specialisation approach for DSL modeling tools previously introduced by the authors.

Enabling language engineering for the masses

While language workbenches---tools to define software languages together with their IDEs---are yet to become ubiquitous in industry, a noticeable amount of domain modeling is still done using word processors and spreadsheets. We suggest an approach that uses a word processor to define (modeling) languages in an example-driven way: a language is defined by giving examples of code written in it, which are then annotated to specify abstract syntax, formatting rules, dynamic semantics, and so on. Such a definition can be used to validate similar documents and to generate an API for processing models, or it can serve as a front-end and later be transformed to an (equivalent) definition in a language workbench. We discuss how such an approach to language definition can be implemented in the form of a language workbench.

What's the grade of your diagram?: towards a streamlined approach for grading UML diagrams

After 25 years, university courses still teach UML modeling to some extent. Also, UML is still the go-to language when practitioners need to model software systems. Evaluating UML models created by students remains a challenge faced by instructors. Some challenges are: (1) assessment criteria are not clear, making it difficult to justify test scores and produce qualitative feedback to enhance the learning process; (2) evaluating UML models is a time-consuming task, limiting the broader development of models by students; and (3) it is difficult to give feedback on the modeling in a timely manner. While recognizing this problem, the current literature does not explore it to its full extent. This article sheds light on these issues by introducing UMLGrade, an initial proposal to streamline the process of grading UML diagrams. UMLGrade seeks to enhance the learning of UML models through assessment reports considering semantic and syntactic aspects, design rules, readability, and object-oriented principles. An initial process is introduced which can serve as a starting point for new initiatives.

From things' modeling language (ThingML) to things' machine learning (ThingML2)

In this paper, we illustrate how to enhance an existing state-of-the-art modeling language and tool for the Internet of Things (IoT), called ThingML, to support machine learning at the modeling level. To this aim, we extend the Domain-Specific Language (DSL) of ThingML, as well as its code generation framework. Our DSL allows one to define things that are in charge of carrying out data analytics. Further, our code generators can automatically produce the complete implementation in Java and Python. The generated Python code is responsible for data analytics and employs APIs of machine learning libraries, such as Keras, TensorFlow, and Scikit-learn. Our prototype is available as open source software on GitHub.
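
To give a flavour of the kind of Python data-analytics code such a generator could emit, the following is a hand-written Keras sketch, not actual ThingML2 output; the toy dataset, layer sizes, and training settings are all invented for illustration.

    import numpy as np
    from tensorflow import keras

    # Toy sensor readings and labels standing in for data a "thing" would collect.
    features = np.random.rand(200, 4).astype("float32")
    labels = (features.sum(axis=1) > 2.0).astype("float32")

    # Small feed-forward network, the kind of model a generated analytics thing might train.
    model = keras.Sequential([
        keras.layers.Dense(16, activation="relu", input_shape=(4,)),
        keras.layers.Dense(8, activation="relu"),
        keras.layers.Dense(1, activation="sigmoid"),
    ])
    model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
    model.fit(features, labels, epochs=5, batch_size=16, verbose=0)

    # Prediction on a fresh reading, e.g. to drive a state-machine transition in the thing.
    print(model.predict(np.array([[0.9, 0.8, 0.7, 0.6]], dtype="float32")))

In ThingML2, the modeler declares the data-analytics behaviour at the model level and the generators take care of wiring such library calls into the thing's implementation.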

SESSION: Educators symposium

Automatic assessment of students' software models using a simple heuristic and machine learning

Software models are increasingly popular. To educate the next generation of software engineers, it is important that they learn how to model software systems well, so that they can design them effectively in industry. It is also important that instructors have tools that can help them assess students' models more effectively. In this paper, we investigate how a tool that combines a simple heuristic with machine learning techniques can be used to help assess student submissions in model-driven engineering courses. We apply our proposed technique first to identify submissions of high quality and second to predict approximate letter grades. The results are comparable to human grading and to a complex rule-based technique for the former, and surprisingly accurate for the latter.
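
A minimal sketch of how a simple heuristic can be combined with a learned classifier for grading is shown below; it is illustrative only, and the feature set, the heuristic, the threshold, and the training data are assumptions rather than the paper's actual technique.

    from sklearn.ensemble import RandomForestClassifier

    def heuristic_score(submission_elements, reference_elements):
        """Simple heuristic: fraction of reference model elements present in the submission."""
        matched = len(set(submission_elements) & set(reference_elements))
        return matched / len(reference_elements) if reference_elements else 0.0

    # Hypothetical training data: per-submission feature vectors
    # [heuristic score, #classes, #associations, #attributes] and instructor letter grades.
    train_features = [
        [0.95, 12, 10, 30], [0.80, 10, 8, 24], [0.55, 7, 4, 15], [0.30, 5, 2, 8],
    ]
    train_grades = ["A", "B", "C", "D"]

    classifier = RandomForestClassifier(n_estimators=50, random_state=0)
    classifier.fit(train_features, train_grades)

    # Grade a new submission: flag high quality via the heuristic, then predict a letter grade.
    new_features = [0.85, 11, 9, 26]
    is_high_quality = new_features[0] >= 0.8
    print("high quality:", is_high_quality,
          "predicted grade:", classifier.predict([new_features])[0])

The two stages mirror the abstract's split between identifying high-quality submissions and predicting approximate letter grades.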

Towards a better understanding of interactions with a domain modeling assistant

The enrolment of software engineering students has increased rapidly in the past few years following industry demand. At the same time, model-driven engineering (MDE) continues to become relevant to more domains like embedded systems and machine learning. It is therefore important to teach students MDE skills in an effective manner to prepare them for future careers in academia and industry. The use of interactive online tools can help instructors deliver course material to more students in a more efficient manner, allowing them to offload repetitive or tedious tasks to these systems and focus on other teaching activities that cannot be easily automated. Interactive online tools can provide students with a more engaging learning experience than static resources like books or written exercises. Domain modeling with class diagrams is a fundamental modeling activity in MDE. While there exist multiple modeling tools that allow students to build a domain model, none of them offer an interactive learning experience. In this paper, we explore the interactions between a student modeler and an interactive domain modeling assistant with the aim of better understanding the required interaction. We illustrate desired interactions with three examples and then formalize them in a metamodel. Based on the metamodel, we explain how to form a corpus of learning material that supports the assistant interactions.

From classic to agile: experiences from more than a decade of project-based modeling education

Modeling is one of the key activities in software and systems engineering, as models can serve as placeholders for an existing or planned system and make it possible to simplify its treatment by abstracting to problem-relevant aspects. Since 2007, we have taught the course "Modeling II" at the Hasso Plattner Institute at the University of Potsdam on modeling complex IT systems with UML, considering modeling paradigms such as object-oriented and component-based modeling and service-oriented architectures. We report in this paper on experiences from 13 years of project-based education in the course, describe in particular the transition from a classic to a more agile project setting, and compare both approaches.

On teaching descriptive and prescriptive modeling

Models may be used for purposes relating (a) to understanding, predicting, and communicating model aspects, and (b) to implementing the model and capturing the design intent. Models that are primarily used for understanding, predicting, and communicating are referred to as descriptive models, whereas models mainly used for implementation are called prescriptive models. This contribution focuses on teaching both the common and the distinguishing aspects of the two model categories. We start with an example of a general descriptive and prescriptive model, independent of particular software modeling languages, and continue to discuss an example demonstrating how UML and OCL can be applied for specifying both a descriptive and a prescriptive model. Finally, we discuss essentials to be learned from this teaching venture.

SESSION: Doctoral symposium

Automated generation of test scenario models for the system-level safety assurance of autonomous vehicles

Autonomous vehicles controlled by advanced machine learning techniques are significantly gaining in popularity. However, the safety engineering practices currently used in such vehicles are not capable of justifying that AI techniques would prevent unsafe situations with a designated level of confidence and reliability. One related challenge is the perpetually changing environment that autonomous vehicles must interact with, which must be taken into consideration when deriving test suites for their safety assurance. As a result, a common approach for testing autonomous vehicles involves subjecting them to test scenarios and evaluating their system-level quality of service. As it stands, such system-level testing approaches do exist but only at a prototypical and conceptual level: these approaches cannot handle complex system-level traffic scenarios and related coverage criteria. I plan to address this challenge through my PhD studies by (1) defining situation coverage as an abstract coverage criterion for autonomous vehicle testing, (2) evaluating the situation coverage of existing test suites obtained by off-the-shelf simulation tools, and (3) proposing a test suite generation approach that provides test scenarios with increasing situation coverage as output.

A feature-oriented approach: from usage scenarios to automated system of systems validation in the automotive domain

New mobility solutions can be characterized as a System of Systems (SoS). SoS characteristics such as emergent system behavior and the operational and managerial independence of the constituent systems pose particular challenges for requirements analysis and system validation. In this paper, we formulate research goals for an automated and model-based analysis of behavior requirements in an automotive SoS context. Based on a problem statement, we propose an integrative requirements analysis and system validation approach. This approach realizes a closed chain from stakeholder requirements to the validation of system behavior, whereby new information is continuously provided via automated checks and short feedback loops. This allows an early and iterative refinement of system objectives and requirements and reduces uncertainty during SoS development. The current research status is summarized and future work is outlined.

Artificial intelligence empowered domain modelling bot

With the increasing adoption of Model-Based Software Engineering (MBSE) to handle the complexity of modern software systems in industry and the inclusion of modelling topics in academic curricula, it is no longer a question of whether to use MBSE but how to use it. Acquiring the modelling skills to properly build and use models with the help of modelling formalisms is a non-trivial learning objective, which novice modellers struggle to achieve for several reasons. For example, it is difficult for novice modellers to learn to use their abstraction abilities. Also, due to high student-teacher ratios in a typical classroom setting, novice modellers may not receive personalized and timely feedback on their modelling decisions. These issues hinder novice modellers in improving their modelling skills. Furthermore, a lack of modelling skills among modellers inhibits the adoption and practice of modelling in industry. Therefore, an automated and intelligent solution is required to help modellers and other practitioners improve their modelling skills. This doctoral research builds an automated and intelligent solution for one modelling formalism, domain models, in the form of a domain modelling bot. The bot automatically extracts domain models from problem descriptions written in natural language and generates intelligent recommendations, particularly for teaching modelling literacy to novice modellers. For this domain modelling bot, we leverage the capabilities of various Artificial Intelligence techniques such as Natural Language Processing and Machine Learning.

A model-driven approach for cobotic cells based on Petri nets

The advance in the development of industrial robots has accelerated significantly in recent years. One of the driving forces behind this is collaboration between humans and robots in shared working areas, resulting in increased productivity and thus reduced cost. Unfortunately, the development of software for collaborating robots is a complex, time-intensive, and demanding task. This is because robotic technology changes fast and knowledge is hard to make available for reuse. Hence, a model-driven approach is necessary to facilitate the development of software for collaborating robots. This work will introduce such a model-driven approach based on hybrid Petri nets, as a formal technique for modeling the various aspects of robotic software. Therefore, specific challenges of employing Petri net based models are discussed, focusing on the management, usage and adaptation of the different models required for applications of collaborating robots. Building on that, a model-driven architecture is proposed, which solves the identified challenges of application development for collaborating robots.

A concern-oriented software engineering methodology for micro-service architectures

Component-Based Systems (CBS) allow for the construction of modular, highly scalable software. Decomposing a system into individually maintainable and deployable components enables a targeted replication of performance bottlenecks and promotes code modularity. Over the last years, the Micro-Service Architecture (MSA) style has become a popular approach to maximize the benefits of CBS. However, MSA introduces new challenges by imposing a conceptual and technological stack on adopting projects, which requires new critical design choices. Throughout my PhD I want to investigate to what extent a systematic reuse of MSA solutions of various granularity can streamline MSA application development by guiding design decisions.

Multi-language systems based on perspectives to promote modularity, reusability, and consistency

Model-driven engineering advocates the use of different modelling languages and multiple views to describe the characteristics of a complex system. This makes it possible to express a specific system characteristic with the most appropriate modelling language. However, establishing the conceptual relationships between elements from different languages and then consistently maintaining links between model elements are non-trivial tasks. In this research, we present a framework for the specification and development of multi-language systems based on perspectives to promote modularity in language reuse, inter-language consistency, and the combination of languages. A perspective groups different languages for a modelling purpose and defines the role of each participating language. A perspective defines composite actions for building a consistent multi-model system and maintaining the links between different model elements. The aim of this framework is that the perspective designer only needs to specify relationships between different languages. A generative approach then ensures model consistency, hence freeing the perspective designer from the error-prone implementation of the consistency mechanism and simplifying the modular combination of different languages.

Enhancing collaborative modeling

Various studies conducted in the context of model-driven engineering (MDE) have identified insufficient collaboration support in modeling tools. In this paper, we present a research agenda to improve collaborative graphical modeling with a focus on users and change history. In contrast to other approaches, user-level edit operations should be persisted instead of calculating differences of model versions a posteriori. We expect a detailed series of edit operations to be more self-descriptive than the differences between two versions of a model, which likely represent only the result of multiple edit operations. The expected advantages based on the more detailed history include a more understandable change history of an evolving model and new possibilities in collaboration, such as micro cherry-picking of a set of specific edit operations instead of a whole commit. In addition, branching of only sub-elements of a model as a way to explore alternatives during concurrent modeling is a target of our research. Compared to existing approaches, which mostly support either synchronous or asynchronous collaboration, the proposed research aims to support both through the use of Event Sourcing. A generic sketch of the event-sourcing idea is given below.
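
The following Python sketch is a generic illustration of event sourcing applied to modeling, not the proposed tool: every user-level edit operation is persisted as an event, the current model is obtained by replaying the log, and replaying only a selected subset of events corresponds to the cherry-picking mentioned above. All class, operation, and model-element names are invented.

    from dataclasses import dataclass, field

    @dataclass
    class EditOperation:
        """One user-level edit, persisted as an event instead of a model-version diff."""
        user: str
        kind: str            # e.g. "add_class", "add_attribute", "rename"
        target: str
        payload: dict = field(default_factory=dict)

    class ModelEventLog:
        def __init__(self):
            self.events = []                  # the append-only event store

        def record(self, op):
            self.events.append(op)

        def replay(self, keep=lambda op: True):
            """Rebuild a model state by replaying (a subset of) the recorded operations."""
            model = {}
            for op in self.events:
                if not keep(op):
                    continue                  # skipping events amounts to cherry-picking
                if op.kind == "add_class":
                    model[op.target] = {"attributes": []}
                elif op.kind == "add_attribute":
                    model[op.target]["attributes"].append(op.payload["name"])
                elif op.kind == "rename":
                    model[op.payload["new"]] = model.pop(op.target)
            return model

    log = ModelEventLog()
    log.record(EditOperation("alice", "add_class", "Car"))
    log.record(EditOperation("bob", "add_attribute", "Car", {"name": "vin"}))
    log.record(EditOperation("alice", "rename", "Car", {"new": "Vehicle"}))

    print(log.replay())                                     # full history
    print(log.replay(keep=lambda op: op.user == "alice"))   # only Alice's edits

Branching sub-elements of a model could then be expressed as replaying the log with a predicate over the affected elements rather than over users.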

Creating an accessible and understandable modelling language for cell-based simulations

The study of morphogenesis has increasingly entailed the use of computer simulations to predict intricate behaviours of these systems, which has led to the development of tools for computational biologists to build their own simulations. However, uptake of these tools by experimental biologists has been slow, and concerns remain over the assumptions underlying such tools, which are not readily explicable to a domain expert without prior programming knowledge. To demonstrate how these concerns might be addressed, we propose the creation of a domain-specific language (DSL) for one such simulation, the MemAgent-Spring model (MSM). By designing this DSL around regimented biological concepts identified by observing discussions between experimentalists and modellers, we hope to better understand how the usability and reproducibility of the MSM might be improved, thereby potentially increasing its usage by experimentalists.

Mining of DSLs and generator templates from reference applications

Domain-Specific Languages (DSLs) have found application in many different domains. The development of Model-Driven Development (MDD) components is facilitated by a wealth of frameworks like EMF, Xtext, and Xtend. However, the development of the necessary IDE components can still take several weeks or even months before they can be used in a production environment. The first step during the development of such an MDD infrastructure is to analyse a set of reference applications to deduce the DSL used by the domain experts and the templates used in the generator. The analysis requires technical expertise and is usually performed by MDD infrastructure developers, who have to maintain close communication with domain experts and are exposed to high cognitive load and time-consuming tasks.

The objective of this PhD project is to reduce the initial effort during the creation of new MDD infrastructure facilities for either a new domain or newly discovered platforms within a known domain. This should be made possible by the (semi-)automatic analysis of multiple codebases using Code Clone Detection (CCD) tools in a defined process flow. Code clones represent schematically redundant and generic code fragments found in the provided codebase. In the process, the key steps include (i) choosing appropriate reference applications, (ii) partitioning the codebase by clustering the files, (iii) reviewing the quality of the clusters, (iv) analysing the clusters with tailored CCD, and (v) transforming the code clones, depending on the code clone type, to extract a DSL and the corresponding generator templates.

WORKSHOP SESSION: 3rd workshop on modeling in automotive system and software engineering: MASE 2020

Automatically learning formal models: an industrial case from autonomous driving development

The correctness of autonomous driving software is of utmost importance, as incorrect behaviour may have catastrophic consequences. Though formal model-based engineering techniques can help guarantee correctness, challenges exist in widespread industrial adoption. One of them is the model construction problem. Manual construction of formal models is expensive, error-prone, and intractable for large systems. Automating model construction would be a great enabler for the use of formal methods to guarantee software correctness and thereby for the safe deployment of autonomous vehicles. Such automated techniques can be beneficial in software design, re-engineering, and reverse engineering. In this industrial case study, we apply active learning techniques to obtain formal models from existing autonomous driving software (in development) implemented in MATLAB. We demonstrate the feasibility of active automata learning algorithms for automotive industrial use. Furthermore, we discuss the practical challenges in applying automata learning and possible directions for integrating automata learning into the automotive software development workflow.

Risk-based compatibility analysis in automotive systems engineering

Software is the new leading factor for innovation in the automotive industry. With the increase of software in road vehicles, new business models, such as after-sale updates (i.e., Function-on-Demand) and over-the-air updates, come into the focus of manufacturers. When updating a road vehicle in the field, it is required to ensure functional safety. An update shall not influence existing functionality and break its safety. Hence, it must be compatible with the existing software. The compatibility of an update is ensured by testing. However, testing all variants of a highly configurable system, such as a modern car's software, is infeasible due to the combinatorial explosion. To address this problem, in this paper, we propose a risk-based change-impact analysis to identify system variants relevant for retesting after an update. We combine existing concepts from product sampling, risk-based testing, and configuration prioritization and apply them to automotive architectures. For validating our concept, we use the Body Comfort System case study from the automotive industry. Our evaluation reveals that the concept, backed by tool support, may reduce testing effort by identifying and prioritizing incompatible variants with respect to a system update.

Scenarios in the loop: integrated requirements analysis and automotive system validation

The development of safety-relevant systems in the automotive industry requires the definition of high-quality requirements and tests for the coordination and monitoring of development activities in an agile development environment. In this paper we describe the Scenarios in the Loop (SCIL) approach. SCIL combines (1) natural language requirements specification based on Behavior-Driven Development (BDD) with (2) formal and test-driven requirements modeling and analysis, and (3) integrates discipline-specific tools for software and system validation during development. A central element of SCIL is a flexible and executable scenario-based modeling language, the Scenario Modeling Language for Kotlin (SMLK). SMLK allows for an intuitive formalization of requirements and supports engineers in moving iteratively, continuously aided by automated checks, from stakeholder requirements to the validation of the implemented system. We evaluated the approach using a real example from the field of e-mobility.

WORKSHOP SESSION: 1st international workshop on open model based engineering environment: OpenMBEE 2020

Assisted authoring of model-based systems engineering documents

In systems engineering practice, system design and analysis have historically been performed using a document-centric approach where stakeholders produce a number of documents that represent their views on a system under development. Given the ad-hoc, disparate, and informal nature of natural language documents, these views quickly become inconsistent. Rigor in engineering work is also lost in the transition from model-based engineering design and analysis to engineering documents. Once the documents are delivered, the engineering portion of the work is disconnected. In the Open Model Based Engineering Environment (OpenMBEE), Cross-References (aka transclusions) synthesize relevant engineering information: model elements are not simply hyperlinked, but de-referenced in place in a document, upgrading a document-based process with model-based engineering technology. These Cross-References are currently created partly by hand, putting a burden on the engineer who is authoring the document. This paper presents an approach which can assist the engineer by providing machine-generated suggestions for Cross-References using language processing, graph analysis, and clustering technologies on model data managed by the OpenMBEE infrastructure.

Model checking as a service: towards pragmatic hidden formal methods

Executable models can be used to support all engineering activities in Model-Based Systems Engineering. Testing and simulation of such models can provide early feedback about design choices. However, in today's complex systems, failures could arise due to subtle errors that are hard to find without checking all possible execution paths. Formal methods, and especially model checking, can uncover such subtle errors, yet their usage in practice is limited due to the specialized expertise and high computing power required. Therefore, we created an automated, cloud-based environment that can verify complex reachability properties on SysML State Machines using hidden model checkers. The approach and the prototype are illustrated using an example from the aerospace domain.

Breesse: bridging EMF, simulink and stateflow for model-based design of safety-critical systems

Both the Eclipse platform and MathWorks have successfully provided entire ecosystems and tooling for Model-Driven Engineering (MDE). On the one hand, the Eclipse community has built a rich set of open source tools and applications to address different MDE needs. Several of these tools and applications are actively used for developing academic and industrial systems. On the other hand, MathWorks, with its Simulink and Stateflow technologies, has focused on design modelling, simulation, and code generation to deliver one of the most widely used modelling frameworks for developing embedded and safety-critical systems. One would expect these two MDE ecosystems to be leveraged in the form of an integrated environment for embedded and safety-critical system development. Nonetheless, these two ecosystems rarely interact due to MathWorks' closed nature and proprietary file formats.

This paper presents Breesse, a live bridge between the Eclipse Modeling Framework ecosystem and the MathWorks Simulink and Stateflow ecosystem. Breesse is an open source tool that was built in response to the needs of two industry partners who develop avionics systems. It was realized with EMF technologies and the MATLAB Engine API for Java. Breesse is able to import the contents of Simulink and Stateflow design models and libraries into EMF-based Simulink and Stateflow representations. These EMF-based representations enable the manipulation of the design models in other existing EMF-based tools for MDE. Evaluation of the tool was carried out through its use in three avionics system designs.

Towards boosting the OpenMBEE platform with model-code consistency

Eventual consistency between design and implementation is imperative for the quality and maintainability of software systems. Towards achieving this consistency, engineers can analyze the gaps between models and corresponding code to gain insights into differences between design and implementation. Due to the different levels of abstraction of the involved artifacts, this analysis is a complex task to automate. We study an industrial MBSE setting where we aim to provide model-code gap analysis between SysML system models and corresponding C/C++ code through structural consistency checks. To this end, we propose an extension of the OpenMBEE platform, to include code as one of the synchronized development artifacts in addition to models and documentation. In this paper, we outline our initial research idea to include code as a view in this platform and we propose to explicitly link the code to generated documentation, and thereby to the model.

WORKSHOP SESSION: 2nd modelling language engineering and execution workshop: MLE 2020

Runtime modeling and analysis of IoT systems

Internet-of-things systems are difficult to understand and debug due to their distributed nature and weak connectivity. We address this problem by using relational reference attribute grammars to model and analyze IoT systems with unreachable parts. A transitive device-dependency analysis is given as an example.
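
As a plain-Python illustration of the kind of transitive device-dependency query mentioned above (the paper expresses this with relational reference attribute grammars rather than hand-written traversal code, and the device names below are invented):

    # Direct dependencies between IoT devices (invented example topology).
    depends_on = {
        "dashboard": ["gateway"],
        "gateway": ["temperature-sensor", "humidity-sensor"],
        "temperature-sensor": [],
        "humidity-sensor": ["battery-monitor"],
        "battery-monitor": [],
    }

    def transitive_dependencies(device, graph):
        """Collect all devices reachable through the dependency relation."""
        seen, stack = set(), list(graph.get(device, []))
        while stack:
            current = stack.pop()
            if current not in seen:
                seen.add(current)
                stack.extend(graph.get(current, []))
        return seen

    # Even when parts of the system are unreachable at runtime, the model still
    # answers which devices the dashboard transitively relies on.
    print(transitive_dependencies("dashboard", depends_on))
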

A composition algorithm for reusable workflow models

The use of model composition algorithms is becoming more widespread. For example, model composition can be applied in the context of Software Product Lines when integrating optional features, or when reusing models in multiple contexts. While the composition of structural models is relatively straightforward, behavioural composition is more challenging. In this paper, we propose a composition algorithm for workflow-oriented (modelling) languages, and show how the same algorithm can be reused to compose models expressed with two different requirement modelling languages: Use Case Maps (UCM) and Use Case Specifications (UCS). Despite UCM being a graphical language and UCS being a textual language, both modelling languages model software/system functionality by describing a series of responsibilities or steps. We discuss the effectiveness of our algorithm using an example and a case study.

Time semantics of executable activity diagrams for relativized conformance testing

The executable subset of UML provides the ability to execute and simulate design models prior to implementation. In this paper, we introduce time semantics and a relativized timed input/output conformance (rtioco) relation for executable UML activity diagrams. The aforementioned features are essential to perform online conformance testing. We present tool support by extending the fUML execution engine, and demonstrate the approach on an example.

CouchEdit: a relaxed conformance editing approach

Graphical models have long become an integral part of many system development activities. However, tool support for such graphical notations is still considered suboptimal. We present CouchEdit, a novel approach for a diagram editor framework that realizes the relaxed conformance editing concept. Similar to textual editors, where conformance to a formal language is checked in the background, this concept offers the modeler the freedom to draw unrestricted diagrams that are analyzed and (partially) translated into a metamodel instance. This paper presents the concepts and the general architecture of this framework and introduces core mechanisms that can ease the integration of new graphical languages into the implemented prototype.

A survey on service composition languages

In recent years, service-oriented architecture (SOA) has been adopted by industry in developing enterprise systems. Web service composition has been one of the challenging topics in SOA. Numerous approaches have been proposed to tackle this problem. In industry, big companies such as Amazon, Netflix, and Uber have developed their own web service composition languages and tools. In academia, on the other hand, there have also been attempts to resolve some of the complexities in web service composition. In this survey we identify and evaluate current prominent service composition languages, and discuss our key findings. After a scan of dozens of service composition systems, 14 systems that used a language-based approach were included in this study. We believe that our findings will help people from industry and academia to learn about some of the major active composition languages and get an overall idea of their commonalities and differences.

WORKSHOP SESSION: 2nd workshop on artificial intelligence and model-driven engineering: MDE intelligence 2020

DoMoBOT: a bot for automated and interactive domain modelling

Domain modelling transforms domain problem descriptions written in natural language (NL) into analyzable and concise domain models (class diagrams) during requirements analysis or the early stages of design in software development. Since the practice of domain modelling requires time in addition to modelling skills and experience, several approaches have been proposed to automate or semi-automate the construction of domain models from problem descriptions expressed in NL. Despite the existing work on domain model extraction, some significant challenges remain unaddressed: (i) the extracted domain models are not accurate enough to be used directly or with minor modifications in software development, (ii) existing approaches do not facilitate tracing the rationale behind the modelling decisions taken by the model extractor, and (iii) existing approaches do not provide interactive interfaces to update the extracted domain models. Therefore, in this paper, we introduce a domain modelling bot called DoMoBOT, explain its architecture, and implement it in the form of a web-based prototype tool. The bot automatically extracts a domain model from a problem description written in NL with an accuracy higher than existing approaches. Furthermore, the bot enables modellers to update a part of the extracted domain model and, in response, the bot proactively re-configures the other parts of the domain model. To improve the accuracy of extracted domain models, we combine the techniques of Natural Language Processing and Machine Learning. Finally, we evaluate the accuracy of the extracted domain models.
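
A heavily simplified first step of such an extraction pipeline can be sketched with spaCy; this is not DoMoBOT's actual pipeline, and the example description and extraction rules are assumptions that only illustrate how noun phrases in a problem description become candidate classes and relationships.

    import spacy

    nlp = spacy.load("en_core_web_sm")  # assumes the small English model is installed

    description = (
        "A library manages books and members. "
        "Each member can borrow several books, and every book has a title and an ISBN."
    )
    doc = nlp(description)

    # Candidate domain classes: lemmatized head nouns of the noun chunks.
    candidate_classes = {chunk.root.lemma_.capitalize() for chunk in doc.noun_chunks
                         if chunk.root.pos_ == "NOUN"}

    # Candidate relationships: verbs with a nominal subject and object, e.g. "member borrow book".
    candidate_relations = []
    for token in doc:
        if token.pos_ == "VERB":
            subjects = [c for c in token.children if c.dep_ == "nsubj"]
            objects = [c for c in token.children if c.dep_ in ("dobj", "obj")]
            for s in subjects:
                for o in objects:
                    candidate_relations.append((s.lemma_, token.lemma_, o.lemma_))

    print("classes:", candidate_classes)
    print("relations:", candidate_relations)

DoMoBOT goes well beyond such rules by combining NLP with machine learning and by reconfiguring the model interactively when the modeller edits part of it.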

Enhancing model transformation synthesis using natural language processing

In this paper we examine how model transformation specifications can be derived from requirements and examples, using a combination of natural language processing (NLP), machine learning (ML) and inductive logic programming (ILP) techniques, together with search-based software engineering (SBSE) for metamodel matching. The AI techniques are employed in order to improve the performance and accuracy of the base SBSE approach, and enable this to be used for a wider range of transformation cases. We propose a specific approach for the co-use of the techniques, and evaluate this on a range of transformation examples from different sources.

A comparative study of reinforcement learning techniques to repair models

In model-driven software engineering, models are used in all phases of the development process. These models may get broken due to various edits during the modeling process. To repair broken models, we have developed PARMOREL, an extensible framework that uses reinforcement learning techniques. So far, we have used our own version of the Markov Decision Process (MDP) adapted to the model repair problem, together with the Q-learning algorithm. In this paper, we revisit our MDP definition, addressing its weaknesses, and propose a new one. After comparing the results of both MDPs using Q-learning to repair a sample model, we proceed to compare the performance of Q-learning with other reinforcement learning algorithms using the new MDP. We compare Q-learning with four algorithms: Q(λ), Monte Carlo, SARSA, and SARSA(λ), and perform a comparative study by repairing a set of broken models. Our results indicate that the new MDP definition and the Q(λ) algorithm can repair models with faster performance.
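
For readers unfamiliar with the underlying technique, a minimal tabular Q-learning loop over a toy repair MDP is sketched below. It is a generic illustration, not PARMOREL's MDP definition; the states, actions, rewards, and hyperparameters are invented.

    import random

    # Toy MDP standing in for a broken model: the state is the set of remaining errors,
    # and each repair action fixes exactly one kind of error.
    ACTIONS = ["fix_dangling_reference", "fix_missing_type"]

    def step(state, action):
        """Apply a repair action; reward useful repairs, penalize wasted ones."""
        error = "dangling_reference" if action == "fix_dangling_reference" else "missing_type"
        if error in state:
            next_state, reward = state - {error}, 1.0
        else:
            next_state, reward = state, -0.1
        return next_state, reward, len(next_state) == 0

    alpha, gamma, epsilon = 0.5, 0.9, 0.2   # learning rate, discount, exploration rate
    q_table = {}

    def q(state, action):
        return q_table.get((state, action), 0.0)

    random.seed(0)
    for _ in range(200):                     # training episodes
        state, done = frozenset({"dangling_reference", "missing_type"}), False
        while not done:
            if random.random() < epsilon:
                action = random.choice(ACTIONS)          # explore
            else:
                action = max(ACTIONS, key=lambda a: q(state, a))  # exploit
            next_state, reward, done = step(state, action)
            best_next = max(q(next_state, a) for a in ACTIONS)
            # Standard Q-learning update rule.
            q_table[(state, action)] = q(state, action) + alpha * (
                reward + gamma * best_next - q(state, action))
            state = next_state

    start = frozenset({"dangling_reference", "missing_type"})
    print({a: round(q(start, a), 2) for a in ACTIONS})

Variants such as Q(λ), SARSA, and SARSA(λ) differ mainly in how the update rule propagates rewards along the visited states, which is what the paper's comparison measures.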

Towards an assessment grid for intelligent modeling assistance

The ever-growing complexity of systems, the growing number of stakeholders, and the corresponding continuous emergence of new domain-specific modeling abstractions have led to a significantly higher cognitive load on modelers. There is an urgent need to provide modelers with better, more Intelligent Modeling Assistants (IMAs). An important factor to consider is the ability to assess and compare IMAs, to learn from existing ones and inform future ones, while potentially combining them. Recently, a conceptual Reference Framework for Intelligent Modeling Assistance (RF-IMA) was proposed. RF-IMA defines the main required components and high-level properties of IMAs. In this paper, we present a detailed, level-wise definition of the properties of RF-IMA to enable a better understanding, comparison, and selection of existing and future IMAs. The proposed levels are a first step towards a comprehensive assessment grid for intelligent modeling assistance. For an initial validation of the proposed levels, we assess the existing landscape of intelligent modeling assistance and three future scenarios of intelligent modeling assistance against these levels.

WORKSHOP SESSION: 2nd international workshop on analytics and mining of model repositories: AMMoRe 2020

Metamodel deprecation to manage technical debt in model co-evolution

Model-Driven Engineering helps formalize problem domains by using metamodels. Modeling ecosystems consisting of purposely designed editors, transformations, and code generators are defined on top of the metamodels. Analogously to other forms of software, metamodels can evolve---consequently, the validity of existing artifacts might be compromised. Coupled evolution provides techniques for restoring artifacts' validity in response to metamodel evolution. In this paper, we propose using deprecation in metamodeling to mitigate the difficulties in performing a class of adaptations that must be performed manually. Technical debt in co-evolution can be regarded as the outcome of procrastinating the migration of artifacts and, thus, must be reduced if not eliminated. Tool support for the adoption of deprecation and technical debt is used to demonstrate the feasibility of the methods.

An extensible tool-chain for analyzing datasets of metamodels

Metamodels play a crucial role in any modeling environment as they formalize the modeling constructs underpinning the definition of conforming artifacts, including models, model transformations, code generators, and editors. Understanding the structural characteristics and the quality of the metamodels available in public repositories before reusing them is a critical task that demands different tools, which might not be easy to adopt. Even the selection of metamodels to be used for experimenting with new tools is not straightforward, as it involves exploring various sources of information and digging into each metamodel to check its appropriateness for the evaluation of the tool under development. In this paper, we present a dataset of metamodels, which has been collected for experimenting with different approaches conceived by the authors. The dataset has been automatically curated using a toolchain, which has been re-designed after the definition of the proposed approaches to foster its future reuse.

WORKSHOP SESSION: 14th workshop on models and evolution: ME 2020

Edelta 2.0: supporting live metamodel evolutions

Evolving metamodels is a delicate task, both from the programming effort's point of view and, more importantly, from the correctness point of view: the evolved version of a metamodel must be correct and must not contain invalid elements (e.g., dangling references). In this paper we present the new version of Edelta, which provides EMF modelers with linguistic constructs for specifying both basic and complex refactorings. Edelta 2.0 is supported by an Eclipse-based IDE, which in this new version provides a "live" development environment for evolving metamodels. The modelers receive immediate feedback on the evolved versions of the metamodels in the IDE. Moreover, Edelta performs many static checks, also by means of an interpreter that keeps track of the evolved metamodel on the fly, enforcing the correctness of the evolution right in the IDE, based on the flow of execution of the refactoring operations specified by the user. Finally, Edelta 2.0 allows users to easily introduce additional validation checks in their own Edelta programs, which are taken into consideration by the Edelta compiler and the IDE.

LUV is not the answer: continuous delivery of a model driven development platform

The OutSystems Platform is a visual model-driven development and delivery platform that allows developers to create enterprise-grade cross platform web and mobile applications.

The platform consists of several inter-dependent components, most notably Service Studio, the Platform Server, and LifeTime. Service Studio is an integrated development environment used to create applications that are then compiled by the Platform Server. LifeTime is used to stage applications between different environments (e.g., development, testing, production).

Our meta-model is versioned using a version number that we call Last Upgrade Version (LUV). Service Studio, the Platform Server, and the models they create/process are associated with a particular LUV. As a general rule, a platform component is only able to process models with the same LUV as the component itself.

This approach is not very flexible: a change to the meta-model requires releasing a new set of platform components that our customers then need to install. Although there's low resistance to installing new versions of Service Studio, the same is not true for the Platform Server. Thus, for all practical purposes LUV changes are tied to releases of major versions of the OutSystems Platform.

In this paper we share the techniques that allowed us to transition to a Continuous Delivery process in which our meta-model can evolve freely with no impact on our installed base.

Automated provenance graphs for models@run.time

Software systems are increasingly making decisions autonomously by incorporating AI and machine learning capabilities. These systems are known as self-adaptive and autonomous systems (SAS). Some of these decisions can have a life-changing impact on the people involved and therefore need to be appropriately tracked and justified: the system should not be treated as a black box. It must be possible to retain knowledge about past events and a record of the history of the decision making. However, tracking everything that was going on in the system at the time a decision was made may be infeasible, due to resource constraints and complexity. In this paper, we propose an approach that combines the abstraction and reasoning support offered by models used at runtime with provenance graphs that capture the key decisions made by a system throughout its execution. Provenance graphs relate the entities, actors and activities that take place in the system over time, allowing the reasons why the system reached its current state to be traced. We introduce activity scopes, which highlight the high-level activities taking place for each decision, and reduce the cost of instrumenting a system to automatically produce provenance graphs of these decisions. We demonstrate a proof-of-concept implementation of our proposal across two case studies, and present a roadmap towards a reusable provenance layer based on the experiments.
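The following is a minimal, hypothetical sketch of the general idea of recording a decision inside an activity scope as a small provenance graph; the node kinds and relation names loosely follow W3C PROV terminology and are not the paper's actual API.

```java
import java.time.Instant;
import java.util.ArrayList;
import java.util.List;

/** Minimal, hypothetical sketch of an activity scope that records provenance for one decision. */
public class ProvenanceSketch {

    record Node(String kind, String id) {}                    // entity, activity, or agent
    record Edge(String relation, Node from, Node to) {}       // e.g. "used", "wasGeneratedBy"

    static final List<Node> NODES = new ArrayList<>();
    static final List<Edge> EDGES = new ArrayList<>();

    /** Opens a scope for a high-level activity; everything recorded inside is attached to it. */
    static Node beginActivity(String name) {
        Node activity = new Node("activity", name + "@" + Instant.now());
        NODES.add(activity);
        return activity;
    }

    static void used(Node activity, String entityId) {
        Node entity = new Node("entity", entityId);
        NODES.add(entity);
        EDGES.add(new Edge("used", activity, entity));
    }

    static void generated(Node activity, String entityId) {
        Node entity = new Node("entity", entityId);
        NODES.add(entity);
        EDGES.add(new Edge("wasGeneratedBy", entity, activity));
    }

    public static void main(String[] args) {
        // One adaptation decision of a self-adaptive system, traced end to end.
        Node decide = beginActivity("selectFailoverPlan");
        used(decide, "runtimeModel:networkLoad");   // knowledge the decision was based on
        generated(decide, "decision:switchToPlanB");
        EDGES.forEach(e -> System.out.println(e.from().id() + " --" + e.relation() + "--> " + e.to().id()));
    }
}
```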

WORKSHOP SESSION: 2nd international workshop on security for and by model-driven engineering: SecureMDE 2020

Towards model-based development of decentralised peer-to-peer data vaults

Using centralised data storage systems has been the standard practice followed by online service providers when managing the personal data of their users. This method requires users to trust these providers and, to some extent, users are not in full control over their data. The development of applications around decentralised data vaults, i.e., encrypted storage systems located in user-managed devices, can give this control back to the users as sole owners of the data. However, the development of such applications is not effort-free, and it requires developers to have specialised knowledge, such as how to deploy secure and peer-to-peer communication systems. We present Vaultage, a model-based framework that can simplify the development of data vault applications. We demonstrate its core features through a social network application case study and include some initial evaluation results, showing Vaultage's code generation capabilities and some profiling analysis of the generated network components.

Operational design for advanced persistent threats

Advanced Persistent Threats (APTs) are sophisticated and well-resourced attacks targeting valuable assets. For APTs, both the attack and the defense require advanced planning and strategies similar to military operations. Existing cyber-security-aware methodologies achieve valuable results for regular cyber-threats; however, they fail to adequately address APTs. The armed forces around the world use the Operational Design methodology to plan actionable strategies for achieving their military objectives. However, this conceptual methodology lacks the tools and the automation needed to scale to the complexity of today's advanced persistent cyber-attacks. In this paper we propose a tool-supported, Operational Design-based methodology for cyberspace mission planning. Our approach relies on a structural modeling language, used by the French armed forces, which is extended with behavioral specifications for modeling the operational situation. The APT objectives are captured through temporal logic specifications. The expert is assisted by model-checking tools to perform the typical capacity-based operation design. The approach is illustrated by studying a mission on a water pumping station. After capturing its partial understanding of the system, the attacker formalizes the mission objectives and explores the design space defined around its five operational capabilities.

Detecting human vulnerability in socio-technical systems: a naval case study

The increasing number of cyberattacks requires security concerns to be incorporated all along the system development life-cycle. In this context, detecting and evaluating vulnerabilities early in system modelling helps fix security issues and improves the resilience of systems. Nowadays, due to the increasing complexity of modern systems, the level of responsibility assigned to human operators has grown. This is particularly visible in Socio-Technical Systems (STS), where humans are considered as subsystems. Thus, to improve the resilience of the overall system, it is necessary to manage the vulnerability of humans. We developed a language called HoS-ML and a specific tool allowing a system architect to evaluate human vulnerability in STS during the early stages of system design. In this paper we present an industrial STS case study using our approach. We briefly present the language and its metamodel, and then model a real industrial case study to illustrate our approach.

Towards a model driven formal approach for merging data, access control and business processes

Information Systems (IS) are mostly built on three major concerns: data, security and business processes. Dealing with the correctness of these concerns makes the IS more efficient, which is of great importance for organizations, given the impact their IS can have on their productivity. However, these concerns are often defined by different stakeholders, leading to several inconsistencies between the resulting artifacts. This paper outlines our model-driven approach to align and formally analyze, in a decoupled way, the Unified Modeling Language (UML) for data modeling, Role-Based Access Control (RBAC) for the definition of security policies, and the Business Process Model and Notation (BPMN) for process modeling. Several works have addressed the integration of BPMN and UML, or BPMN and security, but none of them has tried to bridge the three concerns in a unified framework in order to formally establish their consistency. In this ongoing work we propose two kinds of formal analysis: a static analysis that deals with the structural properties of the three concerns, and a dynamic analysis, which applies the B method and its underlying formal reasoning tools in order to animate processes and formally check their feasibility with respect to data and security.

Method and framework for security risks analysis guided by safety criteria

As previously discussed [19], the challenges to achieve a consistent intertwining between safety and security are rather diverse and complex. Recent advances in safety and security suggest that risk analyses provide guidance for achieving a comprehensive alignment. However, for many domains, such as aeronautics, security is a rather recent concern, whereas aircraft development has been guided mostly by safety criteria for several decades. This disparity, along with the fact that security is, in many respects, a discipline still in evolution, imposes restrictions on specifying and applying methods to conduct safety and security co-engineering as a unified process. In this paper, we present the progress in the development of a model-based method, a framework and a tool useful to conduct a security risk analysis guided by safety criteria and goals. Among others, the approach relies on know-how found in the state of the art, in standards like ED202 and ED203 (EUROCAE), as well as in open knowledge bases like CAPEC and CWE (MITRE). These sources are integrated, which allows the instantiation of patterns of attacks, vulnerabilities, and architectures, which are crucial elements to semi-automate the analysis. A rule-based algorithm for exploring potential attack paths across an architecture is proposed and implemented. The approach is finally demonstrated by analyzing a combined attack-failure path in a Flight Control System which can undermine the safety of a modern aircraft. The framework and tool support seek safety-security by design and aim to facilitate the reuse of case studies and to settle a basis for repeatability and comparison of results.

Evaluating tool support for embedded operating system security: an experience feedback

Embedded systems are more and more connected to a variety of networks, which increases their attack surface. At the same time, more and more objects are augmented with embedded systems, which increases the potential impact of attacks. Cybersecurity must therefore be taken into account while designing and developing embedded software systems. While there are multiple complementary facets to the security of such systems, we focus on embedded operating system security, which is critical to building secure applications. In order to evaluate the applicability of the many available cybersecurity techniques and tools, we need to define a relevant case study. Given that a system's inputs are especially vulnerable, we have specified a fictitious device driver, which we have both modeled in UML and implemented in C. We report here on the initial application of a couple of techniques to analyse the security of this device driver at the model and code levels.

WORKSHOP SESSION: 1st LowCode workshop: LowCode 2020

An empirical study on visual programming docker compose configurations

Infrastructure-as-Code (IaC) tools, such as Docker and Docker Compose, play a crucial role in the development and orchestration of cloud-native and at-scale software. However, as IaC relies mostly on the development of text-only specifications, these are prone to misconfigurations and hard to debug. Several works suggest the use of models as a way to abstract their complexity, and some point to the use of visual metaphors. Yet, few empirical studies exist in this domain. We propose a visual programming notation and environment for specifying Docker Compose configurations and proceed to empirically validate its merits when compared with the standard text-only specification. The goal of this work is to produce evidence of the impact that visual approaches may have on the development of IaC. We observe that the use of our solution reduced development time and error proneness, primarily for configuration definition activities. We also observed a preference for the approach in terms of ease of use, a positive sentiment regarding its usefulness, and the intention to use it.

Closing the gap between designers and developers in a low code ecosystem

Nowadays, going digital is a must for a company to thrive and remain competitive. The digital transformation allows companies to react timely and adequately to the constantly evolving markets. This transformation is not without challenges. Among these is the growing demand for skilled software developers. Low-code platforms have risen to mitigate this pressure point by allowing people with non-programming backgrounds to craft digital systems capable of solving business relevant problems.

Professional development teams are composed of many different profiles: product owners, analysts, UX and UI designers, front-end and back-end developers, among others. Market competition puts unprecedented demands on the collaboration of these professionals. Current methodologies provide tools and approaches for many of these types of collaboration. However, the reality of established industry practices for UX and UI designers collaborating with front-end developers still leaves a lot to be improved in terms of effectiveness and efficiency.

This work developed an innovative approach using model transformation and meta-modelling techniques that drastically improves the efficiency of transforming UX/UI design artefacts into low-code web technology. The approach has been applied to a recognized and established enterprise-grade low-code platform and evaluated in practice by a team of professional designers and front-end developers. Preliminary practical results show savings of between 20% and 75%, depending on project complexity, in the effort invested by development teams in the above-mentioned process.

Towards a low-code solution for monitoring machine learning model performance

As the use of machine learning techniques by organisations has become more common, the need for software tools that provide the robustness required in a production environment has become apparent. In this paper, we review relevant literature and outline a research agenda for the development of a low-code solution for monitoring the performance of a deployed machine learning model on a continuous basis.

Understanding the role of model transformation compositions in low-code development platforms

Low-code development platforms (LCDPs) permit developers who do not have strong programming experience to produce complex software systems. Visual environments allow users to specify workflows consisting of sequential or parallel executions of services that are directly available in the considered LCDP or are provided by external entities. Specifying workflows involving different LCDPs and services can be a difficult task. In this paper, we propose the adoption of concepts and tools related to the composition of model transformations to support the specification of complex workflows in LCDPs. We elaborate on how LCDP services can be considered as model transformations and, thus, how workflows of services can be considered as model transformation compositions. The architecture of the environment supporting the proposed solution is presented.

Intelligent run-time partitioning of low-code system models

Over the last two decades, several dedicated languages have been proposed to support model management activities such as model validation, transformation, and code generation. As software systems become more complex, underlying system models grow proportionally in both size and complexity. To keep up, model management languages and their execution engines need to provide increasingly sophisticated mechanisms for making the most efficient use of the available system resources. Efficiency is particularly important when model-driven technologies are used in the context of low-code platforms, where all model processing happens in pay-per-use cloud resources. In this paper, we present our vision for an approach that leverages sophisticated static program analysis of model management programs to identify, load, process and transparently discard relevant model partitions, instead of naively loading entire models into memory and keeping them loaded for the duration of the execution of the program. In this way, model management programs will be able to process system models faster with a reduced memory footprint, and resources will be freed that will allow them to accommodate even larger models.
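The sketch below illustrates the general idea under strong simplifying assumptions: a toy "static analysis" that merely scans a script for the element types it touches, so that only the matching partitions need to be loaded. The script syntax, type names, and partition list are hypothetical.

```java
import java.util.LinkedHashSet;
import java.util.List;
import java.util.Set;
import java.util.regex.Matcher;
import java.util.regex.Pattern;

/** Simplified sketch: find which element types a model-management script touches,
 *  so that only matching partitions of the model need to be loaded. Names are hypothetical. */
public class PartitionAnalysis {

    // Naive "static analysis": collect type names used in allInstances-style calls.
    static Set<String> referencedTypes(String script) {
        Set<String> types = new LinkedHashSet<>();
        Matcher m = Pattern.compile("(\\w+)\\.all\\(\\)").matcher(script);
        while (m.find()) {
            types.add(m.group(1));
        }
        return types;
    }

    public static void main(String[] args) {
        String validationScript = String.join("\n",
            "for (t in Task.all()) {",
            "  assert t.assignee != null;",
            "}",
            "for (p in Person.all()) { assert p.name.length() > 0; }");

        Set<String> needed = referencedTypes(validationScript);
        List<String> allPartitions = List.of("Task", "Person", "Invoice", "AuditLog");

        // Only the partitions actually referenced are loaded; the rest stay on disk.
        allPartitions.forEach(p ->
            System.out.println(p + (needed.contains(p) ? " -> load" : " -> skip")));
    }
}
```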

Towards the next generation of reactive model transformations on low-code platforms: three research lines

Low-Code Development Platforms have emerged as the next-generation, cloud-enabled collaborative platforms. These platforms adopt the principles of Model-Driven Engineering, where models are used as first-class citizens to build complex systems, and model transformations are employed to keep the different aspects of those systems consistent. Due to the online nature of low-code platforms, users expect them to be responsive and to complete complex operations in a short time. To support such complex collaboration scenarios, the next generation of low-code platforms must (i) offer a multi-tenant environment to manage the collaborative work of engineers, (ii) provide a model processing paradigm that scales up to hundreds of millions of elements, and (iii) provide engineers with a set of selection criteria to choose the right model transformation engine in multi-tenant execution environments. In this paper, we outline three research lines to improve the performance of reactive model transformations on low-code platforms, motivating our research with a case study from the systems engineering domain.

Towards automating the construction of recommender systems for low-code development platforms

Low-code development platforms allow users with a low technical background to build complete software solutions, typically by means of graphical user interfaces, diagrams or declarative languages. In these platforms, recommender systems play an important role as they can provide users with relevant, personalised suggestions generated according to previously developed software solutions. However, developing recommender systems requires a high investment of time as it implies the selection and implementation of a suitable recommendation method, its configuration for the problem and domain at hand, and its evaluation to assess the accuracy of its recommendations.

To alleviate these problems, in this paper, we present the first steps towards a generic model-driven framework capable of generating ad-hoc, task-oriented recommender systems for their integration on low-code platforms. As a proof of concept, we present some preliminary results obtained from an offline evaluation of our framework on three datasets of class diagrams. The results show that the proposed framework is capable of providing relevant recommendations in the given context.

Towards access control for collaborative modelling apps

Domain-specific languages (DSLs) are small languages tailored to narrow domains. Their purpose is to cope with the needs of domain experts, who might not have a software engineering background. In previous work, we proposed the novel notion of Active DSLs, which are graphical DSLs extended to benefit from mobility using geolocation and interactions with external services and devices. Active DSLs are the central component of a mobile collaborative app called DSL-comet.

Modelling using DSLs can be done collaboratively by a group of stakeholders, and the levels of required confidentiality and integrity may vary across modelling artefacts. While preventing access to protected data has been tackled for DSLs used in static environments like laptops and desktop computers, it has not been envisioned for modelling on mobile devices. The latter poses further challenges, as access permissions may depend not just on user profiles but also on conditions that only make sense in mobility, such as geolocation or information retrieved from nearby sensors.

Embracing the approach of Active DSLs, we propose an annotation meta-model to provide fine-grained role-based access control to any domain meta-model, hence enabling model element protection when collaborating in mobility. The paper describes our current implementation and our envisioned low-code solution, which includes a cloud-based textual editor to define role hierarchies and permissions for the domain meta-models.
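A minimal, hypothetical sketch of such a fine-grained permission, combining a role requirement with a geolocation condition, might look as follows (the element names, roles, and coordinates are invented for illustration and do not reflect the proposed annotation meta-model):

```java
import java.util.Map;
import java.util.Set;

/** Hypothetical sketch of fine-grained, role-based access control on model elements,
 *  where a permission may also depend on a mobility condition such as geolocation. */
public class ModelAccessControl {

    record Permission(Set<String> roles, double siteLat, double siteLon, double maxKm) {
        boolean allows(String role, double lat, double lon) {
            return roles.contains(role) && distanceKm(lat, lon, siteLat, siteLon) <= maxKm;
        }
    }

    // Rough great-circle distance, good enough for an illustration.
    static double distanceKm(double lat1, double lon1, double lat2, double lon2) {
        double dLat = Math.toRadians(lat2 - lat1), dLon = Math.toRadians(lon2 - lon1);
        double a = Math.pow(Math.sin(dLat / 2), 2)
                 + Math.cos(Math.toRadians(lat1)) * Math.cos(Math.toRadians(lat2))
                 * Math.pow(Math.sin(dLon / 2), 2);
        return 6371 * 2 * Math.asin(Math.sqrt(a));
    }

    public static void main(String[] args) {
        // "editAttribute" on element "Pipeline.pressure" is only allowed for engineers who are on site.
        Map<String, Permission> policy = Map.of(
            "Pipeline.pressure", new Permission(Set.of("engineer"), 40.4168, -3.7038, 1.0));

        Permission p = policy.get("Pipeline.pressure");
        System.out.println("engineer on site: " + p.allows("engineer", 40.4170, -3.7040));
        System.out.println("engineer remote:  " + p.allows("engineer", 48.8566, 2.3522));
        System.out.println("guest on site:    " + p.allows("guest", 40.4170, -3.7040));
    }
}
```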

Democratizing the development of recommender systems by means of low-code platforms

In recent years, recommender systems have gained an increasingly crucial role in software engineering. Such systems allow developers to exploit a plethora of reusable artifacts, including source code and documentation, which can support development activities. However, recommender systems are complex tools that are difficult to personalize or fine-tune when developers want to improve them to increase the relevance of the retrieved recommendations.

In this paper, we propose a low-code development approach to engineering recommender systems. Low-code platforms enable the creation and deployment of fully functional applications by mainly using visual abstractions and interfaces and requiring little or no procedural code. Thus, we aim to foster a low-code way of building recommender systems by means of a metamodel that represents their characteristic components. Dedicated supporting tools are also proposed to help developers easily model and build their custom recommender systems. Preliminary evaluations of the approach have been conducted by reimplementing real recommender systems, confirming the feasibility of developing them in a low-code manner.

DevOpsML: towards modeling DevOps processes and platforms

DevOps and Model-Driven Engineering (MDE) provide differently skilled IT stakeholders with methodologies and tools for organizing and automating continuous software engineering activities (from development to operations) and for using models as key engineering artifacts, respectively. Both DevOps and MDE aim at shortening the development life-cycle, dealing with complexity, and improving software process and product quality.

The integration of DevOps and MDE principles and practices in low-code engineering platforms (LCEPs) is gaining attention from the research community. At the same time, however, new requirements are emerging for DevOps and MDE, as LCEPs are often used by non-technical users to deliver fully functional software. This is particularly challenging for current DevOps processes, which are mostly considered at the technological level and thus exclude most current LCEP users. The systematic use of models and modeling to lower the learning curve of DevOps processes and platforms seems beneficial to make them accessible to non-technical users as well.

In this paper, we introduce DevOpsML, a conceptual framework for modeling and combining DevOps processes and platforms. Tools along with their interfaces and capabilities are the building blocks of DevOps platform configurations, which can be mapped to software engineering processes of arbitrary complexity. We show our initial endeavors on DevOpsML and present a research roadmap on how to employ the resulting DevOpsML framework for different use cases.

Challenges & opportunities in low-code testing

Low-code is a growing development approach supported by many platforms. It fills the gap between business and IT by supporting the active involvement of non-technical domain experts, known as Citizen Developers, in the application development lifecycle.

Low-code introduces new concepts and characteristics. However, academic research has not yet investigated the challenges and opportunities that arise when testing low-code software. This shortage of resources motivates this research, which provides an explicit definition of the area we call Low-Code Testing.

In this paper, we initially conduct an analysis of the testing components of five commercial Low-Code Development Platforms (LCDPs) to present low-code testing advancements from a business point of view. Based on the low-code principles as well as the results of our analysis, we propose a feature list for low-code testing along with possible values for each feature. This feature list can be used as a baseline for comparing low-code testing components and as a guideline for building new ones. Accordingly, we specify the status of the testing components of the investigated LCDPs based on the proposed features. Finally, the challenges of low-code testing are introduced considering three concerns: the role of citizen developers in testing, the need for high-level test automation, and cloud testing. We provide references to the state of the art to specify the difficulties and opportunities from an academic perspective. The results of this research can be used as a starting point for future research in the low-code testing area.

Automated migration of EuGENia graphical editors to the web

Domain-specific languages (DSLs) are languages tailored for particular domains. Many frameworks and tools have been proposed to develop editors for DSLs, especially for desktop IDEs, like Eclipse.

We are witnessing the advent of low-code development platforms, which are cloud-based environments supporting rapid application development by using graphical languages and forms. While this approach is very promising, the creation of new low-code platforms may require the migration of existing desktop-based editors to the web. However, this is a technically challenging task.

To fill this gap, we present ROCCO, a tool that migrates Eclipse-based graphical modelling editors to the web, to facilitate their integration with low-code platforms. The tool reads a meta-model annotated with EuGENia annotations and generates a web editor based on the DPG web framework used by the UGROUND company. In this paper, we present the approach, including tool support and an evaluation based on migrating nine editors created by third parties, which shows the usefulness of the tool.

Towards transparent combination of model management execution strategies for low-code development platforms

Low-code development platforms are taking an important place in the model-driven engineering ecosystem, raising new challenges, among which are transparent efficiency and scalability. Indeed, the increasing size of models leads to difficulties in interacting with them efficiently. To tackle this scalability issue, some tools are built upon specific computational strategies exploiting reactivity or parallelism. However, their performance may vary depending on the specific nature of their usage. Choosing the most suitable computational strategy for a given usage is a difficult task that should be automated. Besides, the most efficient solutions may be obtained by using several strategies at the same time. This paper motivates the need for a transparent multi-strategy execution mode for model-management operations. We present an overview of the different computational strategies used in the model-driven engineering ecosystem, and use a running example to introduce the benefits of mixing strategies for performing a single computation. This example helps us present our design ideas for a multi-strategy model-management system. The code-related and DevOps challenges that emerged from this analysis are also presented.

Efficiently querying large-scale heterogeneous models

With the increase in the complexity of software systems, the size and the complexity of the underlying models also increase proportionally. In a low-code system, models can be stored in different backend technologies and can be represented in various formats. Tailored high-level query languages are used to query such heterogeneous models, but typically this has a significant impact on performance. Our main aim is to propose optimization strategies that can help to query large models in various formats efficiently. In this paper, we present an approach based on compile-time static analysis and specific query optimizers/translators to improve the performance of complex queries over large-scale heterogeneous models. The proposed approach aims to improve query execution time and memory footprint compared to naive query execution on low-code platforms.
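As a rough illustration of what a query translator might do, the hypothetical sketch below shows the same conceptual query either pushed down as SQL to a relational backend or evaluated in memory over loaded objects; the types and the query itself are invented.

```java
import java.util.List;
import java.util.function.Predicate;

/** Hypothetical sketch: the same conceptual query either pushed down to a relational
 *  backend as SQL, or evaluated in memory for a model held as plain objects. */
public class HeterogeneousQuery {

    record Task(String name, int priority) {}

    // Conceptual query: all tasks with priority above a threshold, delegated to the database.
    static String toSql(int threshold) {
        return "SELECT name FROM Task WHERE priority > " + threshold;
    }

    // Same query, evaluated on objects already loaded into memory.
    static List<Task> inMemory(List<Task> tasks, int threshold) {
        Predicate<Task> filter = t -> t.priority() > threshold;
        return tasks.stream().filter(filter).toList();
    }

    public static void main(String[] args) {
        System.out.println(toSql(5));
        System.out.println(inMemory(List.of(new Task("deploy", 8), new Task("doc", 2)), 5));
    }
}
```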

Low-code engineering for internet of things: a state of research

Developing Internet of Things (IoT) systems involves coping with several challenges, mainly because of the heterogeneity of the involved sub-systems and components. With the aim of conceiving languages and tools supporting the development of IoT systems, this paper presents the results of a study conducted to understand the current state of the art of existing platforms, and in particular low-code ones, for developing IoT systems. By analyzing sixteen platforms, a corresponding set of features has been identified to represent the functionalities and the services that each analyzed platform can support. We also identify the limitations of existing approaches and discuss possible ways to improve and address them in the future.

Test mocks for low-code applications built with OutSystems

Unit testing is a core component of continuous integration and delivery, which in turn is key to faster and more frequent delivery of solutions to customers. Testing at the unit level allows program components to be tested in complete isolation; therefore, these tests can be carried out more quickly, reducing troubleshooting time. But to test at this level, dependencies between application components (e.g., a web service connection) need to be removed. There have been advances in mocking and stubbing techniques that remove these dependencies. However, these advances have been made for high-level programming languages, while low-code development technology has yet to take full advantage of these techniques. This paper presents a mocking solution prototype for the OutSystems low-code development platform. The proposed mocking mechanism removes dependencies on components that the developer wants to abstract a test from, such as web services or other pieces of application logic.
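Although the paper targets the OutSystems platform rather than a general-purpose language, the underlying idea of replacing a dependency with a deterministic test double can be sketched in plain Java as follows (the service, logic, and values are hypothetical):

```java
/** General idea behind mocking a dependency for unit testing, shown in plain Java
 *  (the paper itself targets the OutSystems platform, not Java). */
public class MockExample {

    interface ExchangeRateService {               // dependency a unit test wants to cut off
        double usdToEur(double amount);
    }

    static class InvoiceLogic {
        private final ExchangeRateService rates;
        InvoiceLogic(ExchangeRateService rates) { this.rates = rates; }
        double totalInEur(double usdTotal) { return rates.usdToEur(usdTotal) * 1.21; } // adds VAT
    }

    public static void main(String[] args) {
        // The real web service is replaced by a deterministic stub, so the test is fast and isolated.
        ExchangeRateService mock = amount -> amount * 0.5;
        InvoiceLogic logic = new InvoiceLogic(mock);

        double result = logic.totalInEur(100.0);
        System.out.println(Math.abs(result - 60.5) < 1e-9 ? "test passed" : "test failed: " + result);
    }
}
```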

Positioning of the low-code movement within the field of model-driven engineering

Low-code is being promoted as the key infrastructure for the digital transformation of our society. But is there something fundamentally new behind the low-code movement? How does it relate to other concepts like Model-Driven Engineering or Model-Driven Development? And what are the implications for researchers in the modeling community? This position paper tries to shed some light on these issues.

WORKSHOP SESSION: 17th workshop on model driven engineering, verification and validation: MoDeVVa 2020

TL: an abstract specification language for bidirectional transformations

Model transformation verification has been hindered by the complex language mechanisms and semantics of mainstream transformation languages. In this paper we describe an abstract formalism, TL, for the definition of bidirectional and unidirectional transformations in a purely declarative manner. In contrast to model transformation languages such as ATL or QVT-R, there is no implicit or explicit sequencing of rules in TL specifications. Reasoning about TL specifications is therefore facilitated. We show that semantics-preserving translations can be defined from TL to subsets of the mainstream transformation languages.

A language agnostic approach to modeling requirements: specification and verification

Modeling is a complex and error-prone activity which can result in ambiguous models containing omissions and inconsistencies. Many works have addressed the problem of checking models' consistency. However, most of these works express consistency requirements for a specific modeling language. In contrast, we argue that in some contexts those requirements should be expressed independently of the modeling language of the models to be checked. We identify a set of modeling requirements in the context of embedded systems design that are expressed independently of any modeling language's concrete syntax. We propose a dedicated semantic domain to support them and give a formal characterization of those requirements that is modeling-language agnostic.

SysML models: studying safety and security measures impact on performance using graph tainting

Designing safe, secure and efficient embedded systems implies understanding the interdependences between safety, security and performance requirements and mechanisms. In this paper, we introduce a new technique for analyzing the performance impact of safety/security measures implemented as hardware and software mechanisms and described in SysML models. Our analysis approach extracts a dependency graph from a SysML model. The SysML model is then simulated to obtain a list of simulation transactions. Then, to study the latency between two events of interest, we progressively taint the dependency graph according to the simulation transactions and to the dependencies between all software and hardware components. The simulation transactions are finally classified according to the vertex taint to which they correspond, and are displayed according to their timing and related hardware device. Thus, a designer can easily spot which components need to be re-modeled in order to meet the performance requirement. A Rail Carriage use case studied in the scope of the H2020 AQUAS project illustrates our approach, in particular how tainting can handle multiple occurrences of the same event.
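The taint-propagation step can be illustrated with a small, hypothetical sketch: starting from a vertex associated with the first event of interest, the taint is spread along dependency edges, and only transactions touching tainted vertices are retained for latency analysis. The graph and transaction names below are invented and unrelated to the actual use case.

```java
import java.util.ArrayDeque;
import java.util.Deque;
import java.util.HashSet;
import java.util.List;
import java.util.Map;
import java.util.Set;

/** Simplified sketch: propagate a taint from a starting vertex along dependency edges,
 *  so that only transactions touching tainted vertices are kept for latency analysis. */
public class GraphTaint {

    static Set<String> taint(Map<String, List<String>> deps, String start) {
        Set<String> tainted = new HashSet<>();
        Deque<String> todo = new ArrayDeque<>(List.of(start));
        while (!todo.isEmpty()) {
            String v = todo.pop();
            if (tainted.add(v)) {
                todo.addAll(deps.getOrDefault(v, List.of()));
            }
        }
        return tainted;
    }

    public static void main(String[] args) {
        // sensor -> encryptTask -> bus -> cpu ; logger is not on the path of interest
        Map<String, List<String>> deps = Map.of(
            "sensor", List.of("encryptTask"),
            "encryptTask", List.of("bus"),
            "bus", List.of("cpu"),
            "logger", List.of());

        Set<String> tainted = taint(deps, "sensor");
        List<String> transactions = List.of("encryptTask", "logger", "bus");
        transactions.forEach(t ->
            System.out.println(t + (tainted.contains(t) ? " contributes to latency" : " ignored")));
    }
}
```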

Validity frame concept as effort-cutting technique within the verification and validation of complex cyber-physical systems

The increasing performance demands and certification needs of complex cyber-physical systems (CPS) raise the complexity of the engineering process, not only within the development phase, but also in the Verification and Validation (V&V) phase. A proven technique to handle the complexity of CPSs is Model-Based Design (MBD). Nevertheless, the verification and validation of complex CPSs is still a laborious process, and the usability of the models to front-load V&V activities heavily depends on the knowledge of the models and the correctness of the conducted virtual experiments. In this paper, we explore how the effort (and cost) of the V&V phase of the engineering process of complex CPSs can be reduced by enhancing the knowledge about the system components and explicitly capturing it within their corresponding validity frames. This effort reduction originates from exploiting the captured system knowledge to generate efficient V&V processes and from automating activities at different model life stages, such as the setup and execution of boundary-value or fault-injection tests. This will be discussed in the context of a complex CPS: a safety-critical adaptive cruise control system.

Modular deployment of UML models for V&V activities and embedded execution

To design embedded systems, multiple models of their environments are typically required for different purposes such as simulation, verification, and actual execution. Some of these models abstract the actual physical environment to facilitate Verification and Validation (V&V) activities. Others capture the connection to hardware peripherals, necessary to deploy the systems on actual embedded boards. However, mapping a system to different environment models for different purposes remains a complex task, for two main reasons. First, the environment is often tightly coupled with the system and with the board used for its execution. Second, formal properties verified during the design phase must be preserved at runtime. To tackle these issues, we propose an approach for designing UML models in a modular way and deploying them for V&V activities or embedded execution. This approach uses UML modularity mechanisms to specify the system in a generic way and to connect it to a given (abstract or real) environment. This technique has been applied to several UML models of embedded systems to analyze their behaviors by simulation and LTL model-checking before deploying them on embedded STM32 boards.

Metrics for OCL expressions: development, realization, and applications for validation

UML and OCL descriptions may be regarded as one fundamental way of formulating models in software engineering. Here, an approach for determining the complexity of OCL expressions based on metrics is studied. Well-chosen metrics, in general, support developers in ordering, classifying and focusing on model elements according to their importance when taking decisions during the development process. For OCL, fine-grained metrics are known. We develop and validate a new metric for OCL expressions and show how to realize both the new and the known metrics. The development was accompanied by a comparative study with modeling experts that gave crucial feedback and influenced overall decisions. We also show how the model validation and verification process can be enhanced by considering OCL constraints with high metric complexity values.
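As a simple illustration of the style of such measurements (not the metric proposed in the paper), the sketch below walks a toy OCL-like expression tree and sums weights for iterators and operation calls:

```java
import java.util.List;

/** Toy illustration of a metric over an OCL-like expression tree: count every operation call
 *  and iterator, weighting collection iterators more heavily. Weights and structure are invented. */
public class OclMetricSketch {

    record Expr(String kind, String name, List<Expr> children) {}   // kind: "iterator", "op", "prop", "lit"

    static int complexity(Expr e) {
        int self = switch (e.kind()) {
            case "iterator" -> 3;   // forAll, exists, select, ...
            case "op" -> 1;         // comparison, arithmetic, boolean operators
            default -> 0;           // property access, literal
        };
        return self + e.children().stream().mapToInt(OclMetricSketch::complexity).sum();
    }

    public static void main(String[] args) {
        // self.employees->forAll(e | e.age >= 18)
        Expr age = new Expr("prop", "age", List.of());
        Expr lit = new Expr("lit", "18", List.of());
        Expr cmp = new Expr("op", ">=", List.of(age, lit));
        Expr forAll = new Expr("iterator", "forAll",
                List.of(new Expr("prop", "employees", List.of()), cmp));
        System.out.println("complexity = " + complexity(forAll));   // 3 (forAll) + 1 (>=) = 4
    }
}
```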

WORKSHOP SESSION: 7th international workshop on multi-level modelling: MULTI 2020

Deductive reconstruction of MLT* for multi-level modeling

In the last two decades, about a dozen proposals were made to extend object-oriented modeling by multiple abstraction levels. One group of proposals designates explicit levels to objects and classes. The second group uses the powertype pattern to implicitly establish levels. From this group, we consider two proposals, DeepTelos and MLT*. Both have been defined via axioms and both give a central role to the powertype pattern. In this paper, we reconstruct MLT* with the deductive axiomatization style used for DeepTelos. The resulting specification is executed in a deductive database to check MLT* multi-level models for errors and complete them with derived facts that do not have to be explicitly asserted by modelers. This leverages the rich rules of MLT* with the deductive approach underlying DeepTelos. The effort also allows us to clearly establish the relation between DeepTelos and MLT*, in an attempt to clarify the relations between approaches in this research domain. As a byproduct, we supply MLT-Telos as a fully operational deductive implementation of MLT* to the research community.

Join potency: a way of combining separate multi-level models

Multi-level modeling has become a mature modeling paradigm, both theoretically and in terms of technical means. It has proven itself when a single domain has to be modeled without accidental complexity. However, when several interconnected domains are to be handled, multi-level modeling is still not as capable as legacy metamodeling. Our position paper aims to narrow this gap by introducing a novel technique that can statically combine several multi-level models from different domains. Besides its theoretical proposal, the solution is also defined in our multi-layer modeling framework (Dynamic Multi-Layer Algebra) and is demonstrated by an illustrative example.

Meaningful metrics for multi-level modelling

One of the key enablers of further growth of multi-level modeling will be the development of objective ways to compare multi-level modeling approaches to one another and to two-level modeling approaches. While significant strides have been made regarding qualitative comparisons, there is currently no adequate way to quantitatively assess to what extent a multi-level model may be preferable to another model with respect to high-level qualities such as understandability, maintainability, and control capacity. In this paper, we propose deep metrics as an approach to quantitatively measure high-level model concerns of multi-level models that are of interest to certain stakeholders. Beyond the stated goals, we see deep metrics as also supporting the comparison of modeling styles and aiding modelers in making individual design decisions. We discuss what makes a metric "depth-aware" so that it can appropriately capture multi-level model properties, and present two concrete proposals for metrics that measure high-level multi-level model qualities.

Contingent level classes: motivation, conceptualization, modeling guidelines, and implications for model management

It has been known for some time that the level of a class may vary with the context it is used in. There are a few approaches that enable modelers to deal with corresponding requirements. However, they usually provide workarounds to avoid the problem of one class being on different levels at the same time. In this paper, the need for those classes, which are called contingent level classes, is motivated from a conceptual perspective. A conceptualization of contingent level classes is presented that addresses principal integrity issues and accounts for resulting constraints on class properties and relationships. Based on that conceptualization, the paper provides an analysis of specific challenges related to change operations on models that include contingent level classes. Subsequently, a set of patterns for coping with certain kinds of change operations is presented.

Implicit requirements for ontological multi-level types in the UNICLASS classification

In the multi-level type modeling community, claims that most enterprise application systems use ontologically multi-level types are ubiquitous. To empirically verify this claim, one needs to be able to expose the (often underlying) ontological structure and show that it does, indeed, make a commitment to multi-level types. We have not been able to find any published data showing this being done. From a top-level ontology requirements perspective, checking this multi-level type claim is worthwhile. If the datasets for which the top-level ontology is required are ontologically committed to multi-level types, then this is a requirement for the top-level ontology. In this paper, we both present some empirical evidence that this ubiquitous claim is correct and describe the process we used to expose the underlying ontological commitments and examine them. We describe how we use the bCLEARer process to analyse the UNICLASS classifications, making their implicit ontological commitments explicit. We show how this reveals the requirements for two general ontological commitments: higher-order types and first-class relations. This establishes a requirement that a top-level ontology encompassing the UNICLASS classification must be able to accommodate these commitments. From a multi-level type perspective, we have established that the bCLEARer entification process can identify underlying ontological commitments to multi-level types that do not exist in the surface linguistic structure. Thus, we have a process that we can reuse on other datasets and application systems to help empirically verify the claim that ontological multi-level types are ubiquitous.

WORKSHOP SESSION: 2nd international workshop on multi-paradigm modelling for cyber-physical systems: MPM4CPS 2020

Connecting conceptual models using relational reference attribute grammars

Model-driven engineering can be used to create problem-specific, conceptual models that abstract away unwanted details. Models at runtime take this principle to the time a system is running. Connecting and synchronizing multiple models creates several problems. Usually, models used at runtime must communicate with other systems over the network, they are often based on different paradigms, and in most settings fast and reactive behaviour is required. We aim for a structured way to define and organize such connections in order to minimize development cost, network usage and computation effort while maximizing interoperability. To achieve these goals, we present an extension of the paradigm of models based on reference attribute grammars, by creating a dedicated problem-specific language for those connections. We show how to connect several runtime models to a robotic system in order to control the robot and to provide guarantees for safe coexistence with nearby humans. We show that, using our approach, connections can be specified more concisely while maintaining the same efficiency as hand-written code.

Towards a digital twin for cyber-physical production systems: a multi-paradigm modeling approach in the postal industry

This paper presents our early-stage research on a Multi-Paradigm Modeling (MPM) approach as an initial step towards the definition of a Digital Twin (DT) for Cyber-Physical Production Systems (CPPSs). This work takes place in the context of the digitalization of the mail sorting process at La Poste, the French national postal service company. Indeed, La Poste is currently investing in robotics modules for automatically loading mail containers. The main objective is to reduce the painful work for human operators while optimizing robot usage. We already worked on targeting such a balance in a past effort that resulted in the production of different kinds of models of the La Poste CPPS. However, these models were defined separately and, in particular, are not directly related to the underlying business process. Thus, we propose an MPM approach starting from this business process, now modeled explicitly in BPMN. Then, we refine the high-level business activities into finer-grained activities represented in a UML Activity model. From the latter, we derive the specification of a Multi-Agent System (MAS) developed with the JADE framework and emulating the behavior of the La Poste CPPS. Our longer-term objective is to pave the way for supporting the definition of a DT for this CPPS, and potentially for other CPPSs in different contexts in the future.

Towards employing ABM and MAS integrated with MBSE for the lifecycle of sCPSoS

Cyber-Physical Systems (CPSs) are natural evolutions of embedded systems, consisting of embedded computing devices and networks interacting with physical processes and possibly with humans. With the introduction of the Internet of Things (IoT) and Industry 4.0, CPSs are used in an interconnected way, where each CPS may belong to a different stakeholder, building a complex System of Systems (SoS) called a CPSoS. These systems need to employ techniques to transform the collected data into knowledge with which the system can make better decisions. These capabilities can create smart CPSoS, or sCPSoS.

However, these systems are highly complex from both structural and behavioural points of view. Naturally, there is a need for multiple abstraction levels using different paradigms to model these systems, an approach called Multi-Paradigm Modeling (MPM).

In this paper, the challenges and opportunities of using agent technologies, including intelligent agents, Agent-based modeling and Simulation (ABM), and Multi-Agent Systems (MAS) (or Agent-oriented Software Engineering (AOSE)) in an integrated way with Model-based System Engineering (MBSE) techniques are discussed to cover the whole lifecycle of sCPSoS, from simulation and analysis, to development, operation and monitoring.

Toward client-agnostic hybrid model editor tools as a service

General-purpose languages (GPLs) have reached a point where they can be easily learned with little background in computing. This has allowed them to cultivate users rapidly, and incentivized the creation of IDEs (Integrated Development Environments) and IDE tools for these languages.

Step-wise refinement in multi-paradigm modeling

Multi-Paradigm Modeling (MPM) refers to the combined use of multiple modeling paradigms, mixing different levels of abstraction and views, each expressed in appropriate modeling formalisms. Recently, our research group has started investigating the capabilities of our multi-layer framework, the Dynamic Multi-Layer Algebra (DMLA), in the context of multi-paradigm modeling. Multi-layer modeling is a new modeling paradigm that originated from multi-level modeling, offering highly flexible abstraction management through its advanced deep instantiation and domain linking formalism. Following the mantra of "modeling everything explicitly, at the right level of abstraction(s), using the most appropriate formalism(s)", we have to solve both a horizontal (across domains) and a vertical (across abstraction levels) issue to address the heterogeneity problem of models.

For the vertical issue, DMLA provides step-wise refinement of features. In DMLA, a domain concept may contain several components, some of which are concrete, while others are more abstract, i.e. not yet specified completely. Instantiation works as a refinement between two entities: the meta entity defines the abstract structural and behavioral rules that instances must obey by concretizing those abstractions. The number of instantiation steps is not limited: one can use as many instantiation steps as needed in order to derive concrete instance objects from abstract concepts. In fact, as we descend the instantiation hierarchy, more and more constraints are attached, and the constraints become stricter in order to further concretize the intermediate entities until the most concrete objects, with all of their fields set to concrete values, have been reached. We believe that this behavior closely reflects the vertical aspect of MPM.

For the horizontal issue, we propose a solution which is capable of validating consistency among separate domain models. In the multi-layer setup of DMLA, the modular design of independent technical domains can be used to weave different paradigm models via explicitly modeled cross-domain constraint links, similarly to the global relationships in megamodels. The rigid and strict validation mechanism of DMLA enforces that cross-domain constraints must be satisfied in order to create valid connections between the heterogeneous modeling concepts. Unlike OCL, cross-domain constraints in DMLA can also be refined gradually according to practical needs, thus one can combine separate domain models along diverse abstraction levels. The solution proposed here addresses only the problem of maintaining consistency of already existing domain models.

This talk proposes a methodology by elaborating our novel MPM abstraction implemented in our multi-layer modeling framework (DMLA) using step-wise refinement and cross-domain constraint links. The feasibility of the approach is also demonstrated by a walkthrough of the concrete model management steps of a simplified scenario taken from the domain of home automation.

Towards adaptive abstraction for continuous time models with dynamic structure

Humans often switch between multiple levels of abstraction when reasoning about salient properties of complex systems. These changes in perspective may be leveraged at runtime to improve both performance and explainability, while still producing identical answers to questions about the properties of interest. This technique, which switches between multiple abstractions based on changing conditions in the modelled system, is also known as adaptive abstraction.

The Modelica language represents systems as acausal continuous equations, which makes it appropriate for the modelling of physical systems. However, adaptive abstraction requires dynamic-structure modelling. This raises many technical challenges in Modelica, since it has poor support for modifying connections during simulation. Its equation-based nature means that all equations need to be well-formed at all times, which may not hold when switching between levels of abstraction. The initialization of models upon switching must also be carefully managed, as information will be lost or must be created when switching abstractions [1].

One way to allow adaptive abstraction is to represent the system as a multi-mode hybrid Modelica model, a mode being an abstraction that can be switched to based on relevant criteria. Another way is to employ a co-simulation [2] approach, where modes are exported as "black boxes" and orchestrated by a central algorithm that implements adaptivity techniques to dynamically replace components when a switching condition occurs.
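The orchestration idea behind the second option can be sketched, under heavy simplification, as a loop that picks one of two stand-in abstractions per step based on a switching condition; in a real setup the two modes would be exported Modelica units rather than the invented formulas and threshold used here.

```java
import java.util.function.DoubleUnaryOperator;

/** Minimal sketch of the orchestration idea: two abstractions of the same transmission line,
 *  and a switching rule that picks one of them per step based on the current signal.
 *  Models and thresholds are hypothetical; a real setup would wrap exported Modelica units. */
public class AdaptiveAbstraction {

    // Higher-fidelity mode: amplitude-dependent attenuation (still a stand-in formula).
    static final DoubleUnaryOperator DETAILED = amplitude -> amplitude * Math.exp(-0.05 * amplitude);
    // Low-fidelity mode: simple constant attenuation, cheaper to evaluate.
    static final DoubleUnaryOperator LUMPED = amplitude -> amplitude * 0.95;

    public static void main(String[] args) {
        double[] inputSignal = {0.2, 0.4, 3.0, 5.0, 0.3};
        for (double in : inputSignal) {
            // Switching condition: use the detailed mode only for large signals.
            boolean detailed = in > 1.0;
            double out = (detailed ? DETAILED : LUMPED).applyAsDouble(in);
            System.out.printf("in=%.2f  mode=%s  out=%.3f%n", in, detailed ? "detailed" : "lumped", out);
        }
    }
}
```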

This talk will discuss the benefits of adaptive abstraction using Modelica, and the conceptual and technical challenges towards its implementation. As a stand-in for a complex cyber-physical system, an electrical transmission line case study is proposed, where attenuation is studied across two abstractions of varying fidelity depending on the signal. Our initial results, as well as our explorations towards employing Modelica models in a co-simulation context using the DEVS formalism [4], are discussed. A Modelica-only solution allows complexity to be tackled via decomposition, but does not improve performance, as all modes are represented as a single set of equations. The co-simulation approach might offer better performance [3], but complicates the workflow.

TwinOps - DevOps meets model-based engineering and digital twins for the engineering of CPS

The engineering of Cyber-Physical Systems (CPS) requires a large set of expertise to capture the system requirements and to derive a correct solution. Model-based Engineering and DevOps aim to efficiently deliver software with increased quality. Model-based Engineering relies on models as first-class artifacts to analyze, simulate, and ultimately generate parts of a system. DevOps focuses on software engineering activities, from early development to integration, and then improvement through the monitoring of the system at run-time. We claim these can be efficiently combined to improve the engineering process of CPS.

In this paper, we present TwinOps, a process that unifies Model-based Engineering, Digital Twins, and DevOps practices in a uniform workflow. TwinOps illustrates how to leverage several best practices in MBE and DevOps for the engineering of Cyber-Physical Systems. We demonstrate the benefits of TwinOps using a Digital Twin case study that combines AADL and Modelica models and an IoT platform.