An intuitive method for gauging release-over-release change across a product's sequence of releases is needed to achieve buy-in from all sectors of Engineering. Customers also need to know whether there are extant releases more reliable than the ones they already rely on in their networks. A new Release-Over-Release (RoR) metric can enable customers to clearly understand the reliability risk of migrating to other available releases, and enable Engineering to understand whether their software engineering efforts are actually improving release reliability.
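The abstract does not define how the RoR metric is computed. Purely as an illustrative sketch, assuming a release-over-release comparison is expressed as the relative change in some per-release reliability measure (e.g., a field defect rate), such a comparison could look like the following; the function name, sign convention, and defect-rate measure are all assumptions, not the paper's actual metric:

```python
# Illustrative only: the paper's actual RoR metric is not defined in the abstract.
# Assumption: compare a reliability measure (e.g., defects per install-month)
# between the release a customer currently runs and a candidate release.

def release_over_release(current_defect_rate, candidate_defect_rate):
    """Relative change in defect rate; a negative value means the candidate
    release is more reliable than the current one (assumed convention)."""
    if current_defect_rate <= 0:
        raise ValueError("current defect rate must be positive")
    return (candidate_defect_rate - current_defect_rate) / current_defect_rate

# A candidate with a lower defect rate yields a negative (favorable) RoR value.
print(release_over_release(4.0, 3.0))  # -0.25: candidate ~25% lower defect rate
```

Under this sketched convention, a customer could rank available releases by this value to see at a glance which migrations reduce reliability risk.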
Background: A grand challenge for Requirements Engineering (RE) research is to help practitioners understand which RE methods work in which contexts and why. RE researchers recognize that for an RE method to be adopted in industry, RE practitioners should be able to evaluate the relevance of empirical studies to their practice. One possible approach to relevance evaluation is the set of perspective-based checklists proposed by Kitchenham et al. Specifically, the checklist from the practitioner's perspective seems to be a good candidate for evaluating the relevance of RE studies to RE practice. However, little is known about the applicability of this checklist to the RE field, and its reliability also requires deeper analysis. Aim: We propose to the RE community a perspective-based checklist for evaluating the relevance of experimental studies in RE from the practitioner's/consultant's viewpoint. Method: We followed an iterative, design-science-based approach in which we first analyzed the problems with a previously published checklist and then developed an operationalized proposal for a new checklist to counter these problems. We evaluated the reliability of the new checklist by having two practitioners apply it to 24 papers that report experimental results on the comprehensibility of software requirements specifications. Results: We report practitioners' first-hand experiences of evaluating the relevance of primary studies in RE using a perspective-based checklist. With respect to the reliability of the adjusted checklist, 9 out of 19 questions show an acceptable proportion of agreement between the two practitioners. Conclusions: Based on our experience, contextualizing and operationalizing a perspective-based checklist makes it more useful for practitioners. However, increasing the reliability of the checklist requires more reviewers and more discussion cycles.
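The reliability evaluation above rests on the proportion of agreement between two raters, a standard inter-rater statistic: the fraction of items on which both raters gave the same answer. A minimal sketch of that computation follows; the variable names, yes/no answer format, and example data are illustrative assumptions, not the study's actual ratings:

```python
# Minimal sketch of per-question proportion of agreement between two raters.
# The example ratings below are invented for illustration.

def proportion_agreement(ratings_a, ratings_b):
    """Fraction of items on which two raters gave the same answer."""
    if not ratings_a or len(ratings_a) != len(ratings_b):
        raise ValueError("rating lists must be non-empty and of equal length")
    matches = sum(a == b for a, b in zip(ratings_a, ratings_b))
    return matches / len(ratings_a)

# One checklist question answered for six papers by two practitioners:
rater1 = ["yes", "no", "yes", "yes", "no", "yes"]
rater2 = ["yes", "no", "no", "yes", "no", "yes"]
print(round(proportion_agreement(rater1, rater2), 2))  # 0.83: 5 of 6 answers match
```

Computing this per checklist question (here, across the 24 rated papers) and comparing it against a chosen acceptability threshold is one straightforward way to arrive at a "9 out of 19 questions acceptable" style of result; note that proportion of agreement, unlike Cohen's kappa, does not correct for chance agreement.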
Context: Case studies are a useful approach for conducting empirical studies of software engineering, in part because they allow a phenomenon to be studied in its real-world context. However, given that there are several kinds of case studies, each with its own strengths and weaknesses, researchers need to know how to choose which kind to employ for a specific research study.
Aim: The objective of this research is to compare two case study approaches: embedded, longitudinal case studies, and multi-case studies.
Approach: We compared two actual software engineering case studies: a multi-case study involving interviews with 46 practitioners at 9 international companies engaged in offshoring and outsourcing, and a single-case, participant-observation embedded case study lasting 13 months at a mid-sized Irish software company. Both case studies explored similar problems of understanding the activities performed by members of Scrum development teams.
Results: We found that both multi-case and embedded case studies are suitable for exploratory research (hypothesis development), but that embedded case studies may also be more suitable for explanatory research (hypothesis testing). We also found that longitudinal case studies offer better confirmability, while multi-case studies offer better transferability.
Conclusion: We propose a set of illustrative research questions to assist with the selection of the appropriate case study method.
Software Inspection is an important approach to finding defects in Software Engineering (SE) artifacts. While there has been extensive research on traditional software inspection with pen-and-paper materials, modern SE poses new environments, methods, and tools for the cooperation of software engineers. Technologies such as Human Computation (HC) provide tool support for distributed, tool-mediated work processes. However, there is little empirical experience with leveraging HC for software inspection. In this vision paper, we present the context for a research program on this topic and introduce the preliminary concept of a theory-based experiment line, which facilitates designing families of experiments that fit together to answer larger questions than individual experiments can. We present an example feature model for an experiment line for Software Inspection with Human Computation and discuss its expected benefits for the research program, including the coordination of research, the reuse of designs and materials, and aggregation facilities.
To assess the benefits of introducing Agile practices, it is important to have a clear understanding of the baseline situation, i.e., the situation before their introduction. Without a clear baseline, we cannot properly assess the extent of the impacts, both positive and negative, of introducing Agile practices. This paper provides a preliminary guideline to help researchers capture and report baseline situations. The guideline was developed through a study of the literature and interviews with industry practitioners, and validated by experts in academia.
Conducting empirical research in the software engineering industry is a process, and as such, it should be generalizable. The aim of this paper is to discuss how academic researchers may address some of the challenges they encounter while conducting empirical research in the software industry by means of a systematic and structured approach. The protocol developed in this paper should serve as a practical guide for researchers and help them conduct empirical research in this complex environment.