The Variability Testing of Software Product Lines (VTSPL) concerns the selection of the most representative products to be tested according to specific goals. Works in the literature use a great variety of objectives and distinct algorithms; however, they neither address all the objectives at the same time nor offer an automatic tool to support this task. To this end, this work introduces Nautilus/VTSPL, a tool that addresses the VTSPL problem, created by instantiating the Nautilus Framework. Nautilus/VTSPL allows the tester to experiment with and configure different objectives and categories of many-objective algorithms. The tool also supports visualization of the generated solutions, easing the decision-making process.
Features are typically used to describe the functionalities of software systems. They help in understanding systems as well as in planning their evolution and managing them; especially agile methods foster their use. However, to use features, their locations need to be known. When not documented, locations are easily forgotten and then need to be recovered, which is costly. While automated feature-location techniques exist, their inaccuracies make them hard to use in practice. We take a different route and advocate recording locations early using a lightweight annotation system, where feature information is embedded in software assets. However, given the potential design space of annotations, a unified notation and tool support are needed. Extending our prior work, we present a unified, concise notation for embedded annotations, which we implemented in FAXE, a library for parsing and retrieving such annotations, usable in third-party tooling. We demonstrate its use, especially for an advanced use case of feature-oriented isolated development by automating partial commits.
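The core of such an annotation system can be sketched as a small parser that maps features to the source lines they cover. The `//&begin[...]`, `//&end[...]`, and `//&line[...]` markers below are assumed for illustration and may differ from FAXE's actual notation; the resulting feature-to-lines mapping is the kind of information a partial-commit tool would consume.

```python
import re
from collections import defaultdict

# Hypothetical embedded-annotation markers (illustrative only; FAXE's
# concrete syntax may differ).
BEGIN = re.compile(r"//\s*&begin\[(\w+)\]")
END = re.compile(r"//\s*&end\[(\w+)\]")
LINE = re.compile(r"//\s*&line\[(\w+)\]")

def feature_lines(source: str) -> dict:
    """Map each annotated feature to the set of line numbers it covers.
    Assumes balanced &begin/&end pairs; a real tool would report errors."""
    covered = defaultdict(set)
    open_blocks = {}  # feature -> line number where its &begin was seen
    for no, text in enumerate(source.splitlines(), start=1):
        if m := BEGIN.search(text):
            open_blocks[m.group(1)] = no
        if m := END.search(text):
            start = open_blocks.pop(m.group(1))
            covered[m.group(1)].update(range(start, no + 1))
        if m := LINE.search(text):
            covered[m.group(1)].add(no)
    return dict(covered)

demo = """\
int pay(int amount) { //&begin[Payment]
    log(amount);      //&line[Logging]
    return charge(amount);
}                     //&end[Payment]
"""
print(feature_lines(demo))
```

A partial commit for one feature would then stage exactly the changed lines whose numbers fall into that feature's set.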
Variability is present in most modern object-oriented software-intensive systems, even though they commonly do not follow a product line approach. In these systems, variability is implicit and hardly documented, as it is implemented by different traditional mechanisms, namely inheritance, overloading, or design patterns. This hampers variability management, as the automatic identification of variation points (vp-s) with variants is very difficult. symfinder is a symmetry-based tooled approach that enables the automatic identification of potential vp-s with variants in such systems. It then visualizes them relying on their density in code assets. Building on the Java-only version presented at SPLC 2019, we present here several notable improvements. They concern added support for C++ systems, the identification of vp-s implemented by Decorator and Template pattern instances, an enhanced visualization (e.g., to display all variants, and package coloring), as well as the automation of the mapping of potential vp-s to domain features.
Companies considering the adoption of a product line engineering approach should ideally analyze the pros and cons to determine sound reasons for this decision. To support this analysis, in previous work we proposed the APPLIES evaluation framework, which provides information to evaluate the convenience of adopting a product line engineering approach.
This paper presents an empirical evaluation of APPLIES. The evaluation involved 18 practitioners who used the framework to assess the convenience of adopting product line engineering in 19 different companies. The collected evidence was used to evaluate the perceived usefulness, intention to use, and ease of use of APPLIES. The results increase confidence that APPLIES is a useful tool, but also identify some possibilities for improvement. In addition, four categories for classifying potential adopters of product line engineering emerged during the analysis of the results: unprepared adopter, potential adopter, ready adopter, and unmotivated adopter. These categories could be useful for classifying companies that are considering adopting product line engineering.
Product Line Engineering (PLE) enables strategic reuse within an organisation, thus reducing development costs, decreasing time to market, and increasing product quality. As a core activity in PLE, variability management supports the modelling of commonality and variability throughout the engineering life cycle. Given the increased complexity of modern software-intensive systems, variability management is becoming increasingly important. Transitioning to PLE is a challenging task, however, as potential benefits must be carefully weighed against the costs such approaches introduce. This paper presents a collaborative approach for reverse-engineering variability and configuration knowledge with minimal domain expert involvement and provides insights into the experience we gained from our industrial collaboration.
Software Product Lines (SPLs) are commonly adopted with an extractive approach, by performing a reengineering process on legacy systems when dealing with variability and reuse becomes challenging. As a starting activity of the process, the legacy systems are analyzed to retrieve, categorize, and group their features in terms of commonality and variability. Due to the importance of this feature retrieval, we proposed the Prepare, Assemble, and Execute framework for SPL reengineering (PAxSPL), which guides users in customizing the feature retrieval for their scenario. In an initial evaluation of PAxSPL in a real-world scenario, we observed the need to include scoping activities and to implement a tool to make the framework more adoptable in practice. In this paper, we describe how we performed these improvements. We evolved PAxSPL by including SPL scoping concepts and activities in our framework as well as by developing a supporting tool. We also conducted a pilot study to evaluate how PAxSPL allows instantiating a scenario where SPL reengineering was conducted. The results show that all artifacts, activities, and techniques from the scenario could be properly represented. However, we also identified a potential limitation during the assembly of techniques regarding parallel activities. The main contribution is PAxSPL_v2, which makes the framework more adherent to industries performing the reengineering of legacy systems into SPLs.
Product line engineering is often conducted in an incremental way, in which the variability artifacts evolve in the space, time, and asset dimensions. To cope with the evolution of variability, the VITAL approach and tool have been developed and used in different industrial settings to analyze variability realizations relying on the C preprocessor. Over the last decade, further promising analysis approaches and tools have been developed. To understand if and how they could enhance the VITAL approach, we have conducted an analysis of promising technologies.
In this paper, we share some of our findings from our comparative study of variability code analysis technologies. As we conducted the study in the light of the intended VITAL enhancement, the study does not claim completeness. Nevertheless, we believe that the findings can help researchers and industrial practitioners gain an overview and find entry points for their own investigations.
A plethora of variability modeling approaches has been developed in the last 30 years, e.g., feature modeling, decision modeling, Orthogonal Variability Modeling (OVM), and UML-based variability modeling. While feature modeling approaches are probably the most common and well-known group of variability modeling approaches, even within that group multiple variants have been developed, i.e., there is not just one type of feature model. Many variability modeling approaches have been demonstrated as useful for a certain purpose, e.g., domain analysis or configuration of products derived from a software product line. Nevertheless, industry frequently develops its own custom solutions to manage variability. The (still growing) number of modeling approaches simply makes it difficult to find, understand, and eventually pick an approach for a specific (set of) systems or context. In this paper, we discuss usage scenarios, required capabilities, and challenges for an approach for (semi-)automatically transforming variability models. Such an approach would support researchers and practitioners experimenting with and comparing different variability models and switching from one modeling approach to another. We present the key components of our envisioned approach and conclude with a research agenda.
Feature modeling is the de facto standard to describe the common and variant parts of software product lines. Different tools, approaches, and operations for the automated analysis of feature models (AAFM) have been proposed in the last 20 years. The increasing popularity of languages such as Python means that using AAFM techniques requires substantial integration effort with existing Java-based tools. In this paper, we present a design for a Python-based framework to analyze feature models. This framework implements the most common operations while enabling support for multiple solvers and backends.
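To illustrate a typical AAFM operation, the sketch below counts the valid products of a toy feature model by exhaustive enumeration. The feature names and the encoding of constraints as predicates are invented here for illustration and are not the framework's actual API; a real analysis would delegate to a SAT, CSP, or BDD backend rather than brute force.

```python
from itertools import product

# Toy e-shop feature model (hypothetical example). A configuration is a
# dict mapping feature names to booleans; each constraint is a predicate.
FEATURES = ["Shop", "Catalog", "Payment", "CreditCard", "PayPal"]
CONSTRAINTS = [
    lambda c: c["Shop"],                            # root is always selected
    lambda c: c["Catalog"] == c["Shop"],            # mandatory child
    lambda c: c["Payment"] == c["Shop"],            # mandatory child
    lambda c: not c["CreditCard"] or c["Payment"],  # child implies parent
    lambda c: not c["PayPal"] or c["Payment"],      # child implies parent
    lambda c: not c["Payment"] or c["CreditCard"] or c["PayPal"],  # or-group
]

def valid_configurations():
    """Enumerate all valid products (exponential; fine for toy models only)."""
    for bits in product([False, True], repeat=len(FEATURES)):
        config = dict(zip(FEATURES, bits))
        if all(rule(config) for rule in CONSTRAINTS):
            yield config

products = list(valid_configurations())
print(len(products))  # the classic "number of products" analysis operation
```

Other common AAFM operations, such as void-model detection or dead-feature detection, reduce to similar satisfiability queries over the same constraint set.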
A product line is an approach for systematically managing the configuration options of customizable systems, usually by means of features. Products are generated from configurations consisting of selected features. Product-line evolution can lead to unintended changes in product behavior. We illustrate that updating configurations after product-line evolution requires decisions from both the domain engineers responsible for product-line evolution and the application engineers responsible for configurations. The challenge is that domain and application engineers might not be able to talk to each other. We propose a formal foundation and a methodology that enable domain engineers to guide application engineers through configuration evolution by sharing knowledge on product-line evolution and by defining configuration update operations. As an effect, we enable knowledge transfer between these engineers without the need for them to talk to each other. We evaluate our method by providing formal proofs showing that the product behavior of configurations can be preserved for typical evolution scenarios.
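The idea of a configuration update operation can be sketched as follows. The operation name, the split scenario, and the set-based encoding below are invented for illustration and are not the paper's formalization; the point is that the domain engineer records the evolution step once, and application engineers replay it mechanically on their own configurations.

```python
# Hypothetical sketch of a guided configuration update. A configuration is
# modeled as a set of selected feature names.

def split_feature(old, replacements):
    """Update operation: feature `old` was split into `replacements`.
    A configuration that selected `old` now selects all replacements."""
    def update(config):
        config = set(config)  # work on a copy
        if old in config:
            config.discard(old)
            config.update(replacements)
        return config
    return update

# Domain engineer: records that 'Payment' was split into concrete features.
op = split_feature("Payment", {"CreditCard", "PayPal"})

# Application engineer: replays the operation on a local configuration,
# without ever talking to the domain engineer.
print(op({"Shop", "Payment"}))
```

Composing several such operations in evolution order would migrate a configuration across multiple product-line versions.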
Many variability modeling approaches have been proposed to explicitly represent the commonalities and variability in (software) product lines. Unfortunately, practitioners in industry still develop custom solutions to manage the variability of various artifacts, like requirements documents or design spreadsheets. These custom-developed variability representations often miss important variability information, e.g., information required to assemble production goods. In this paper, we introduce the Variability Evolution Roundtrip Transformation (VERT) process. The process enables practitioners from the Cyber-Physical Production Systems domain to transform custom-developed engineering variability artifacts into a feature model, evolve and optimize the model, and transform it back to the original engineering artifacts. We build on an existing transformation approach for variability models and show the feasibility of the process using a real-world use case from an industry partner. We report on an initial feasibility study conducted with our industry partner's domain experts and on lessons learned regarding the variability transformation of engineering variability artifacts.
Highly-Configurable Software Systems (HCSSs) support the systematic evolution of systems in space, i.e., the inclusion of new features, which allows users to configure software products according to their needs. However, HCSSs also change over time, e.g., when adapting existing features to new hardware or platforms. In practice, HCSSs are thus developed using both version control systems (VCSs) and preprocessor directives (#ifdefs). However, the use of a preprocessor as a variability mechanism has been criticized regarding separation of concerns and code obfuscation, which complicates the analysis of HCSS evolution in VCSs. For instance, a single commit may contain changes to totally unrelated features, which may be scattered over many variation points (#ifdefs), making the evolution history hard to understand. This complexity often leads to error-prone changes and high costs for maintenance and evolution. In this paper, we propose an automated approach to mine HCSS features taking into account evolution in space and time. Our approach uses constraint satisfaction problem solving to mine newly introduced, removed, and changed features. It finds a configuration containing the feature revisions that are needed to activate a specific program location, and it increments the revision number of each changed feature. Thus, our approach enables analyzing when and which features change over time, as well as their interactions, for every single commit of an HCSS. Our approach can contribute to future research on understanding the characteristics of HCSSs and supporting developers during maintenance and evolution tasks.
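The core constraint query can be illustrated in miniature: given the presence condition guarding a program location, find a configuration that activates it. The feature names and the brute-force search below are illustrative only; the approach described above uses an actual constraint solver and additionally tracks feature revisions.

```python
from itertools import product

# Illustrative feature set (invented names, not from the paper's subjects).
FEATURES = ["LOGGING", "SSL", "DEBUG"]

def presence_condition(c):
    # e.g., a location guarded by: #if defined(LOGGING) && !defined(DEBUG)
    return c["LOGGING"] and not c["DEBUG"]

def activating_configuration(condition, features):
    """Return one configuration satisfying the presence condition, or None.
    Exhaustive search stands in for the CSP solver of the real approach."""
    for bits in product([False, True], repeat=len(features)):
        config = dict(zip(features, bits))
        if condition(config):
            return config
    return None

print(activating_configuration(presence_condition, FEATURES))
```

Running such a query per commit and per changed location is what allows relating each change to the feature (revisions) it belongs to.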
The proliferation of cyber-physical systems has encouraged the emergence of new technologies and paradigms to improve the performance of IoT-based applications. Edge Computing proposes using nearby devices at the frontier/edge of the access network for deploying application tasks. However, the functionality of cyber-physical systems, which is usually distributed across several devices and computers, imposes specific requirements on the infrastructure to run properly. The evolution of an application to meet new user requirements and the high diversity of hardware and software technologies at the edge can complicate the deployment of evolved applications.
The aim of our approach is to apply Multi Layer Feature Models, which capture the variability of applications and the infrastructure, to support the deployment of cyber-physical systems applications in edge-based environments. This separation can support the evolution of both application and infrastructure. Considering that IoT/Edge/Cloud infrastructures are usually shared by many applications, the SPL deployment process has to ensure that there will be enough resources for all of them, informing developers about the deployment alternatives. Prior to deployment, and relying on the infrastructure feature models, the developer can compute the configuration of the minimal set of devices that supports the requirements of the evolved application. In addition, the developer can find the application configuration that can be hosted on the current evolved infrastructure.
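The minimal-device-set computation can be sketched as a small search over device subsets. The device names, resource figures, and demand below are invented placeholders; a real planner would reason over the infrastructure feature model and its constraints rather than raw resource sums.

```python
from itertools import combinations

# Hypothetical edge infrastructure (illustrative data only).
DEVICES = {
    "gateway":  {"cpu": 2, "ram": 1024},
    "rpi":      {"cpu": 4, "ram": 2048},
    "edge-box": {"cpu": 8, "ram": 8192},
}
DEMAND = {"cpu": 10, "ram": 4096}  # requirements of the evolved application

def minimal_device_set(devices, demand):
    """Smallest subset of devices whose combined resources cover the demand.
    Brute force over subset sizes, smallest first."""
    names = sorted(devices)
    for size in range(1, len(names) + 1):
        for subset in combinations(names, size):
            total = {k: sum(devices[d][k] for d in subset) for k in demand}
            if all(total[k] >= demand[k] for k in demand):
                return set(subset)
    return None  # infrastructure cannot host the application

print(minimal_device_set(DEVICES, DEMAND))
```

Returning `None` corresponds to the case where developers must be informed that the current evolved infrastructure cannot host the application configuration.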
Software ecosystems (SECOs) involve a number of actors that work together for a shared market. The software products within a software ecosystem typically share a common technological platform, and the ecosystem consists of a keystone player at its center with niche players addressing market segments the keystone player would otherwise not have access to.
Stakeholder identification is critical to the financial and functional success of software development projects; however, identifying all stakeholders in a SECO is often not possible due to the high volume of stakeholders and open interfaces. The identification of key stakeholders should ensure that the most relevant requirements are elicited during a software planning cycle.
The objective of this research is to examine how key stakeholders can be identified in complex SECOs. This research takes a design science approach, and its main component is the design of an artifact in the form of a reference process model that is applied in a real-world environment to develop a business process model. Consequently, this research aims to facilitate academia and industry in SECO optimization, especially from a requirements management perspective.
The Industry 4.0 initiative envisions the flexible and optimized production of customized products on Cyber-Physical Production Systems (CPPSs) that consist of subsystems coordinated to conduct complex production processes. Hence, accurate CPPS modeling requires integrating the modeling of variability for Product-Process-Resource (PPR) aspects. Yet, current variability modeling approaches treat structural and behavioral variability separately, leading to inaccurate CPPS production models that impede CPPS engineering and optimization. This paper proposes a PhD project for integrated variability modeling of PPR aspects to improve the accuracy of production models with variability for CPPS engineers and production optimizers. The research project follows the Design Science approach aiming for the iterative design and evaluation of (a) a framework to categorize currently incomplete and scattered models and methods for PPR variability modeling as a foundation for an integrated model; and (b) a modeling approach for more accurate integrated PPR variability modeling. The planned research will provide the Software Product Line (SPL) and CPPS engineering research communities with (a) novel models, methods, and insights on integrated PPR variability modeling, (b) open data from CPPS engineering use cases for common modeling, and (c) empirical data from field studies for shared analysis and evaluation.
Managing the evolution of system families in space and time, i.e., system variants and their revisions, is still an open challenge. The software product line (SPL) approach can support the management of product variants in space by reusing a common set of features. However, feature changes over time are often necessary due to adaptations and/or bug fixes, leading to different product versions. Such changes are commonly tracked in version control systems (VCSs). However, VCSs only deal with the change history of source code, and, even though their branching mechanisms allow developing features in isolation, they do not allow propagating changes across variants. Variation control systems have been developed to support more fine-grained management of variants and to allow tracking changes at the level of files or features. However, these systems are also limited regarding the types and granularity of artifacts, and they become cognitively very demanding with increasing numbers of revisions and variants. Furthermore, propagating specific changes over variants of a system is still a complex task that also depends on variability-aware change impacts. Based on these limitations, the goal of this doctoral work is to investigate and define a flexible and unified approach that allows an easy and scalable evolution of SPLs in space and time. The expected contributions will aid the management of SPL products and support engineers in reasoning about the potential impact of changes during SPL evolution. To evaluate the approach, we plan to conduct case studies with real-world SPLs.
Vulnerabilities in software systems result from faults, which occur at different stages in a software's life cycle, for example, in the design (e.g., undesired feature interactions), the development (e.g., buffer overflows), or the operation (e.g., configuration errors). Various databases provide detailed information about vulnerabilities in software systems or ways to exploit them, but face severe limitations. The information is scattered across these databases, fluctuates in quality and granularity, and provides insight into only a single vulnerability per entry. Even for a single software system, it is challenging for any security-related stakeholder to determine the threat level, which comprises all vulnerabilities of the software system and its environment (e.g., the operating system). Manual vulnerability management is feasible only to a limited extent if we want to identify all configurations that are affected by vulnerabilities, or determine a system's threat level and the resulting risk we have to deal with. For variant-rich systems, we also have to deal with variability, allowing different stakeholders to understand the threats to their particular setup. To deal with this variability, we propose vulnerability feature models, which offer a homogeneous view on all vulnerabilities of a software system. These models and the resulting analyses offer advantages in many disciplines of the vulnerability management process. In this paper, we report the research plan for our project, in which we focus on the model-based evaluation of vulnerabilities. This includes research objectives that take into account the design of vulnerability feature models, their application in the vulnerability management process, and the impact of evolution, discovery, and verification of vulnerabilities.
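The configuration-aware view on vulnerabilities can be sketched minimally: each vulnerability carries a presence condition over configuration options, and a stakeholder's setup is checked against all of them at once. The option names and vulnerability identifiers below are invented placeholders, not real advisories, and the encoding is not the paper's proposed model.

```python
# Hypothetical vulnerability feature model: one presence condition per
# vulnerability, expressed over boolean configuration options.
VULNERABILITIES = {
    "VULN-A": lambda c: c["TLS"] and c["LEGACY_CIPHERS"],
    "VULN-B": lambda c: c["DEBUG_SHELL"],
}

def affected_by(config):
    """Vulnerabilities whose presence condition holds in this configuration."""
    return {v for v, cond in VULNERABILITIES.items() if cond(config)}

# A particular stakeholder's setup (illustrative).
setup = {"TLS": True, "LEGACY_CIPHERS": False, "DEBUG_SHELL": True}
print(affected_by(setup))
```

Aggregating such per-configuration results over all valid configurations is what would allow reasoning about a system's overall threat level rather than single database entries.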
The impact of the Internet of Things (IoT) on modern industrial and commercial systems is hard to overestimate. Almost every domain benefits from what IoT brings, and healthcare is no exception. This is also clearly demonstrated by the widespread adoption of eHealth systems that often arise from software product lines. Nevertheless, the benefits that IoT brings come together with new threats and risks.
An eHealth system that processes many types of sensitive data sets the context for this thesis. Security and privacy are of crucial importance for the successful operation and broad user acceptance of the system because of the properties of the data flows that it initiates and operates. However, due to the large number of feature combinations that originate from the software product line nature of the eHealth system in question, a combinatorial explosion of relevant configurations makes reaching security and privacy goals more difficult. Furthermore, another combinatorial explosion of threats and corresponding mitigation strategies for every configuration complicates the situation even further. Nonetheless, configurations that meet specific risk budgets need to be in place.
Within this thesis, a new threat and risk management (TRM) framework will be provided. It is based on the STRIDE and LINDDUN methodologies, and it will overcome existing limitations by employing components for feature-space modelling, risk-driven scoring, configuration decision support, and regulatory compliance. Research outcomes reached so far show promising developments of the framework's vital components.