The SafeDNN project at NASA Ames explores new techniques and tools to ensure that systems that use Deep Neural Networks (DNNs) are safe, robust, and interpretable. Research directions we are pursuing in this project include symbolic execution for DNN analysis, label-guided clustering to automatically identify input regions that are robust, parallel and compositional approaches to improve formal SMT-based verification, property inference and automated program repair for DNNs, adversarial training and detection, and probabilistic reasoning for DNNs. In this talk I will highlight some of the research advances from SafeDNN.
With the rise of AI-based systems, such as self-driving cars, Google search, and automated decision-making systems, new challenges have emerged for the testing community. Verifying such software systems is becoming an extremely difficult and expensive task, often accounting for up to 90% of software costs. Software in a self-driving car, for example, must operate safely in an effectively infinite number of scenarios, which makes it extremely hard to find bugs in such systems. In this talk, I will explore some of these challenges and introduce our work, which aims at improving the bug-detection capabilities of automated software testing. First, I will talk about a framework that maps the effectiveness of automated software testing techniques by identifying software features that impact the ability of these techniques to achieve high code coverage. Next, I will introduce our latest work, which incorporates defect-prediction information to improve the efficiency of search-based software testing at detecting software bugs.
To enable automated software testing, the ability to automatically navigate to a state of interest and to explore all, or at least a sufficient number of, instances of such a state is fundamental. When testing a computer game the problem has an extra dimension, namely the virtual world in which the game is played. This world often plays a dominant role in constraining which logical states are reachable, and how to reach them. So, any automated testing algorithm for computer games will inevitably need a layer that deals with navigation in a virtual world. Unlike navigating through the GUI of a typical web-based application, for example, navigating a virtual world is much more challenging. This paper discusses how concepts from geometry and graph-based pathfinding can be applied in the context of game testing to solve the problem of automated navigation and exploration. As a proof of concept, the paper also briefly discusses the implementation of the proposed approach.
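The core of the graph-based pathfinding the abstract refers to can be illustrated with a standard A* search over a waypoint graph. This is a generic sketch, not the paper's implementation; the toy world layout, node names, and zero heuristic (which reduces A* to Dijkstra's algorithm) are all assumptions made for illustration.

```python
import heapq

def a_star(graph, start, goal, heuristic):
    """Find a cheapest path from start to goal in a weighted graph.

    graph: dict mapping node -> list of (neighbor, edge_cost) pairs
    heuristic: admissible estimate of remaining cost to goal
    """
    # Each frontier entry: (estimated total cost, cost so far, node, path taken)
    frontier = [(heuristic(start), 0, start, [start])]
    best_cost = {start: 0}
    while frontier:
        _, cost, node, path = heapq.heappop(frontier)
        if node == goal:
            return path
        for neighbor, step in graph[node]:
            new_cost = cost + step
            if new_cost < best_cost.get(neighbor, float("inf")):
                best_cost[neighbor] = new_cost
                heapq.heappush(frontier,
                               (new_cost + heuristic(neighbor), new_cost,
                                neighbor, path + [neighbor]))
    return None  # goal unreachable from start

# Hypothetical game world: rooms as nodes, traversal costs on the edges.
world = {
    "spawn":  [("hall", 1)],
    "hall":   [("spawn", 1), ("bridge", 2), ("cellar", 4)],
    "bridge": [("hall", 2), ("tower", 1)],
    "cellar": [("hall", 4), ("tower", 1)],
    "tower":  [],
}
path = a_star(world, "spawn", "tower", heuristic=lambda n: 0)
print(path)  # -> ['spawn', 'hall', 'bridge', 'tower']
```

In a real game the graph would be derived from the level geometry (e.g. a navigation mesh), and the heuristic from straight-line distance between positions; an automated tester would call such a routine to steer the agent toward states it still needs to cover.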
Research has demonstrated that faults seeded using mutation operators can be representative of faults in real systems. In this paper, we study the relationship between the different operators used to insert mutants into the fault domain of the system under test and the effectiveness of different state machine test suites at killing those mutants. We are particularly interested in the effectiveness of two interrelated state machine testing strategies at finding different types of faults: the round-trip paths strategy and the transition tree strategy. Using empirical evaluation, we compare the effectiveness of more than two thousand unique test suites at killing mutants seeded using eight different mutation operators. We perform experiments on four experimental objects and provide a qualitative analysis of the results. We conclude that neither of the two studied strategies is consistently more effective than the other at killing a particular type of mutant. However, the structure of the finite state machine and the nature of the system under test affect the types of faults detected by the different testing strategies.
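To make the idea of mutation operators on state machines concrete, the sketch below mutates one transition of a small finite state machine and checks whether a test sequence "kills" the mutant. This is an illustrative toy, not the paper's setup: the machine, the single operator shown (redirecting a transition's target), and the use of state traces as the observable behavior are all assumptions.

```python
# A finite state machine as a transition table: (state, event) -> next state.
# Hypothetical player-control machine, for illustration only.
ORIGINAL = {
    ("idle", "start"):    "running",
    ("running", "pause"): "paused",
    ("paused", "start"):  "running",
    ("running", "stop"):  "idle",
}

def run(fsm, events, initial="idle"):
    """Execute an event sequence; return the visited states,
    or None if some event has no transition from the current state."""
    state, trace = initial, [initial]
    for ev in events:
        if (state, ev) not in fsm:
            return None
        state = fsm[(state, ev)]
        trace.append(state)
    return trace

def change_target(fsm, key, new_target):
    """One mutation operator: redirect a transition to a wrong target state."""
    mutant = dict(fsm)
    mutant[key] = new_target
    return mutant

# Seed a mutant: 'pause' in 'running' wrongly returns to 'idle', not 'paused'.
mutant = change_target(ORIGINAL, ("running", "pause"), "idle")

# A round-trip-style test case: a path that returns to the initial state.
test_case = ["start", "pause", "start", "stop"]
killed = run(ORIGINAL, test_case) != run(mutant, test_case)
print("mutant killed:", killed)  # -> mutant killed: True
```

A test suite is a set of such event sequences, and its mutation score is the fraction of seeded mutants for which at least one sequence produces behavior that differs from the original machine.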
In this tool demonstration paper, we propose a tool named Fuzz4B (Fuzzing for Beginner), a front-end to the representative fuzzer AFL aimed at developers who are inexperienced in fuzz testing. Fuzz4B is not only a front-end: it also allows developers to reproduce a crash and to minimize the fuzzing input that causes the crash. As a usage example, we demonstrate the use of Fuzz4B to perform fuzz testing and discover a failure in the open-source library librope. Fuzz4B and its demonstration video are available at: <a>https://github.com/Ryu-Miyaki/Fuzz4B</a>.
Robotic Process Automation (RPA) is a technology that has grown tremendously in recent years, due to its applicability in the area of process automation. Quality assurance is an essential part of any software development process, so testing is equally important for RPA processes. However, classical software testing techniques are not always suitable for RPA software robots, due to the mix of the graphical description of the robots and their implementations. In this short paper, we describe the state of the practice in testing software robots and propose some ideas for test automation using model-based testing.