Scientist developers have not yet routinely adopted systematic testing techniques to assure software quality. A key challenge is the oracle problem, a situation in which no appropriate mechanism is available for checking whether the code produces the expected output when executed on a set of test cases (TCs). Metamorphic testing alleviates the oracle problem by specifying relationships that a source TC and its follow-up TC must satisfy. Such relationships, called metamorphic relations (MRs), are necessary properties of the intended program functionality. Existing approaches handle MRs in a flat manner. This paper introduces a novel approach that allows a hierarchy of MRs to be developed incrementally. We illustrate the approach by testing the U.S. EPA's Storm Water Management Model (SWMM). The results offer concrete insights into developing effective MRs for systematically testing scientific software.
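To make the idea concrete, the following is a minimal sketch of an MR-based test in Python. The run_simulation function is a hypothetical stand-in for a model such as SWMM (not its actual API), and the MR used here, that scaling all rainfall inputs upward should not decrease total runoff, is only an illustrative example of a necessary property.

```python
# Minimal sketch of a metamorphic relation (MR) test.
# run_simulation is a hypothetical stand-in for the system under test.

def run_simulation(rainfall):
    # Placeholder model: returns a total runoff volume for a rainfall series.
    return sum(r * 0.8 for r in rainfall)

def follow_up(rainfall, factor=1.5):
    # Construct the follow-up test case from the source test case.
    return [r * factor for r in rainfall]

def check_mr(source_rainfall):
    source_output = run_simulation(source_rainfall)
    followup_output = run_simulation(follow_up(source_rainfall))
    # The MR acts as a partial oracle: no exact expected output is needed,
    # only the relation between source and follow-up outputs.
    assert followup_output >= source_output, "MR violated: runoff decreased"

if __name__ == "__main__":
    check_mr([0.0, 2.5, 10.0, 4.2])
    print("MR holds for the source test case")
```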
Computational science and engineering communities develop complex application software with multiple mathematical models that need to interact with one another. Partly due to the complexity of verifying scientific software, and partly because of the way incentives work in science, there has been insufficient testing of these codes. With a spotlight on the results produced with scientific software, and with increasing awareness of software testing and verification as a critical contributor to the reliability of these results, testing is gaining more attention from development teams. However, many science teams struggle to find a good solution due either to a lack of training or a lack of resources within the team. In this experience paper we describe test development methodologies used in two different scenarios: the first is a methodology for building granular tests where none existed before, while the second is a methodology for selecting test cases that build confidence in the software through a process similar to scaffolding. The common insight from both experiences is that testing should be part of software design from the beginning, for the sake of both software quality and scientific productivity.
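As an illustration of what a granular test can look like, here is a minimal sketch using Python's unittest framework. The trapezoid_integrate kernel is a hypothetical example of a small, isolated numerical routine checked against analytically known results; it is not code from either project described in the paper.

```python
import math
import unittest

def trapezoid_integrate(f, a, b, n=1000):
    # Hypothetical small numerical kernel, tested in isolation rather than
    # only through an end-to-end simulation run.
    h = (b - a) / n
    total = 0.5 * (f(a) + f(b))
    total += sum(f(a + i * h) for i in range(1, n))
    return total * h

class TestTrapezoidIntegrate(unittest.TestCase):
    def test_constant_function(self):
        # Granular test against an analytically known result: integral of 2 over [0, 3] is 6.
        self.assertAlmostEqual(trapezoid_integrate(lambda x: 2.0, 0.0, 3.0), 6.0, places=6)

    def test_sine_over_full_period(self):
        # Integral of sin over a full period is 0.
        self.assertAlmostEqual(trapezoid_integrate(math.sin, 0.0, 2.0 * math.pi), 0.0, places=6)

if __name__ == "__main__":
    unittest.main()
```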
Software is a vital part of modern research. The competence to develop sustainable software is becoming increasingly important for research organizations. The DLR - a large research organization in Germany - has set up a software engineering initiative to address typical obstacles in this regard, such as missing long-term funding, a lack of incentives, or missing knowledge about essential software development practices. In this paper, we describe the concept and activities of the initiative and discuss the impact of these activities on the identified obstacles.
The number of scientific publications is increasing each year, particularly in the field of computer science. In order to condense existing knowledge, evidence-based software engineering is concerned with systematic literature reviews, surveys, and other kinds of literature analysis. These methods summarize the evidence from empirical studies - or approaches in general - and identify gaps that open up new research opportunities. However, executing systematic review processes requires a considerable amount of time and effort. Consequently, researchers have proposed several semi-automated approaches to support and facilitate individual steps of such methods. With our current research, we aim to help researchers execute two of these steps, the search for and selection of primary studies, both efficiently and effectively. In this paper, we report several issues we identified during our research that threaten any kind of literature analysis and hamper suitable tool support. We also recommend solutions to mitigate these threats. Overall, our goal is to raise researchers' and publishers' awareness of these potential threats to literature analysis, to support software engineers in designing suitable tools for research, and to encourage the research community to address these threats.
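As a rough sketch of the kind of semi-automation meant here, the Python snippet below ranks candidate records by how many search-string terms appear in their title or abstract and keeps only those above a threshold for manual screening. The field names, terms, and example records are assumptions for illustration and are not drawn from any specific review tool.

```python
# Toy sketch of semi-automated study selection: rank candidate papers by how
# many search-string terms appear in title or abstract. The final inclusion
# decision remains with the human reviewer.

SEARCH_TERMS = {"metamorphic", "testing", "scientific", "software"}

def term_score(record):
    text = (record["title"] + " " + record["abstract"]).lower()
    return sum(1 for term in SEARCH_TERMS if term in text)

def select_candidates(records, threshold=2):
    # Keep records matching at least `threshold` terms, highest scores first.
    scored = [(term_score(r), r) for r in records]
    scored.sort(key=lambda pair: pair[0], reverse=True)
    return [r for score, r in scored if score >= threshold]

if __name__ == "__main__":
    papers = [
        {"title": "Metamorphic testing of scientific software", "abstract": "..."},
        {"title": "A survey of code review practices", "abstract": "..."},
    ]
    for paper in select_candidates(papers):
        print(paper["title"])
```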
It is common for science and engineering courses to include one computing unit, usually in the first year. In a newly developed first-year unit, we have combined Python coding, science and engineering applications, and research-oriented skills to help students understand how coding may be applied in their studies and research. Student responses have been positive, and the unit continues to evolve in response to student and faculty feedback. With increasing uptake of the unit, it is hoped that a wave of computational literacy will foster greater use of computational techniques by undergraduate and postgraduate students.
About 20 years ago, the need for scientists and engineers to have basic knowledge of software development skills and tools became apparent. Without these so-called software carpentry skills, developers were wasting time and compromising the quality of their work. Since that time, great progress has been made with software carpentry, as evidenced by the growing understanding of the importance of tools and by the growth of the namesake Software Carpentry foundation and other similar projects. With scientific software developers now prepared to move forward, we should turn our attention to the next logical step after carpentry: Software Engineering (SE) applied to Scientific Computing Software (SCS). Past attempts at SE for SCS have not always been successful; therefore, this paper proposes a vision for future success, in which SE specifically adapts its ideas to SCS, the SCS community recognizes the value of software artifacts other than the code, and all parties place greater emphasis on empirical evidence and the quality of replicability. Several ideas are proposed for turning this vision into a reality, including promoting requirements documentation for replicability, building assurance cases for correctness (and other qualities), and automatically generating all documentation and code.