Modern computer games typically have huge interaction spaces and non-deterministic environments. Test automation can provide a vital boost to development and further improves the software's overall reliability and efficiency. Moreover, the layout and game logic may change regularly during development or between consecutive releases, which makes testing difficult because the usage of the system changes continuously; to cope with this, tests also need to be robust. Unfortunately, existing game testing approaches are not capable of maintaining test robustness. To address these challenges, this paper presents an agent-based approach to robust automated testing, based on AI agents capable of reasoning.
Modern web applications require extensive quality assurance to be appreciated by users. Test automation reduces delivery times, but it requires the development of effective and maintainable test scripts, otherwise its advantages are lost.
The Page Object (PO) pattern has proven to be very effective in GUI testing. However, the manual development of Page Objects, a kind of web-page facade exposing methods to the test scripts, requires considerable effort, which is often repaid only during evolution.
In this paper, we describe a novel, almost fully automated approach that leverages the features offered by Selenium IDE to generate more maintainable Selenium WebDriver test scripts and Page Objects for web applications. The only manual step required of the tester/developer is to add comments, through a plugin, to the Selenese code produced by Selenium IDE during recordings. The first evaluation we conducted of our tool-based approach appears promising.
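To illustrate the Page Object pattern the abstract above builds on, the following is a minimal sketch. A hypothetical stub driver stands in for a real Selenium WebDriver so the example is self-contained; the page and locator names are invented for illustration and are not taken from the paper's tool.

```python
class StubDriver:
    """Stand-in for a Selenium WebDriver: stores typed values in a dict.
    With real Selenium this would be webdriver.Chrome() and the methods
    would resolve By locators against the live DOM."""
    def __init__(self):
        self.fields = {}
        self.current_url = "http://example.test/login"

    def type(self, locator, text):
        self.fields[locator] = text

    def click(self, locator):
        # Simulate a successful login navigation.
        if locator == "login-button" and self.fields.get("user") == "alice":
            self.current_url = "http://example.test/home"

class LoginPage:
    """Page Object: exposes intent-level methods and hides locators,
    so test scripts survive changes to the page layout."""
    USER_FIELD = "user"
    PASS_FIELD = "pass"
    LOGIN_BUTTON = "login-button"

    def __init__(self, driver):
        self.driver = driver

    def login_as(self, username, password):
        self.driver.type(self.USER_FIELD, username)
        self.driver.type(self.PASS_FIELD, password)
        self.driver.click(self.LOGIN_BUTTON)
        return HomePage(self.driver)

class HomePage:
    def __init__(self, driver):
        self.driver = driver

    def is_displayed(self):
        return self.driver.current_url.endswith("/home")

# The test script stays readable: no locators, only page methods.
driver = StubDriver()
home = LoginPage(driver).login_as("alice", "secret")
print(home.is_displayed())  # True
```

When a locator changes, only the Page Object is updated, not every test script that uses it, which is why the up-front effort is typically repaid during evolution.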
Layout-based (2nd Generation) and Visual (3rd Generation) GUI testing are two very common approaches to mobile application testing. The two techniques have complementary advantages and drawbacks, and the literature on GUI testing has highlighted the benefits of translating from one generation to the other.
The objective of this work is to improve our prototype tool, TOGGLE, designed to translate 2nd Generation test suites, written with the Espresso framework, into 3rd Generation ones that can be run by the EyeAutomate and Sikuli tools.
We extended TOGGLE by adding (1) support for context-based gestures, performed through the scrollTo and onData commands, and (2) support for the combination of Layout-based locators with logical operators.
We evaluated the new version of the tool on five different experimental subjects. For each of the applications, 30 test cases were developed and automatically translated with TOGGLE+.
We observed a 68% increase in translatable test cases when transitioning from the previous prototype to the current version of the tool. The generated Visual test cases also proved highly robust, with a flakiness of just 2% (i.e., 98% correct executions).
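A much-simplified sketch of the kind of 2nd-to-3rd-generation translation step described above: an Espresso-style action on a layout locator is mapped to a visual command operating on a captured image of the same widget. The mapping table, step format, and image-naming convention are illustrative assumptions, not TOGGLE's actual implementation.

```python
# Hypothetical action mapping from Espresso ViewActions to
# EyeAutomate-style visual commands (illustrative, not exhaustive).
ESPRESSO_TO_VISUAL = {
    "click": "Click",
    "typeText": "Type",
    "scrollTo": "Scroll",  # a context-based gesture of the kind TOGGLE+ adds
}

def translate(espresso_step, image_for_locator):
    """Translate one step, e.g. ('click', 'R.id.login'), into a
    visual-script line that targets a screenshot crop of the widget."""
    action, locator, *args = espresso_step
    visual_cmd = ESPRESSO_TO_VISUAL[action]
    image = image_for_locator(locator)  # widget screenshot captured at translation time
    return " ".join([visual_cmd, image] + list(args))

# Example: the image lookup is stubbed with a naming convention.
line = translate(("click", "R.id.login"),
                 lambda loc: loc.split(".")[-1] + ".png")
print(line)  # Click login.png
```

The hard part in practice, which the prototype addresses, is capturing a widget image for each layout locator and handling gestures such as scrollTo that have no single-screenshot equivalent.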
Developing interactive systems and testing them with realistic scenarios requires a detailed understanding of how these systems are used in their real environment. In this paper, we report on our experience from implementing a usage monitoring approach for the touch-enabled human-machine interface of an industrial machine. The approach supports automated recording of user interface events as a basis for analyzing interactions of users with the system. It collects information about navigation paths to different screens, activities on these screens, and the usage of functionality provided by the application. We evaluated three different approaches to integrating the required usage monitoring into the UI, considering aspects such as necessary changes to the existing code base, dependencies on third-party libraries, and the entailed performance overhead. The paper provides a detailed description of the implementation of the selected approach and discusses the lessons we learned from integrating the monitoring into an existing application.
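The event categories described above (screen navigations, on-screen activities) can be sketched as a minimal usage recorder. All names here (UsageMonitor, record_navigation, the JSON export) are illustrative assumptions, not the paper's implementation.

```python
import json
import time

class UsageMonitor:
    """Records timestamped UI events for later usage analysis."""
    def __init__(self):
        self.events = []

    def _record(self, kind, **details):
        self.events.append({"ts": time.time(), "kind": kind, **details})

    def record_navigation(self, screen):
        """A navigation event: the user opened a screen."""
        self._record("navigation", screen=screen)

    def record_activity(self, screen, widget, action):
        """An activity event: the user interacted with a widget."""
        self._record("activity", screen=screen, widget=widget, action=action)

    def export(self):
        """Serialize the event log, e.g. for offline analysis."""
        return json.dumps(self.events)

# Instrumented UI code would call the monitor from its event handlers.
monitor = UsageMonitor()
monitor.record_navigation("SettingsScreen")
monitor.record_activity("SettingsScreen", "speed_slider", "drag")
print(len(monitor.events))  # 2
```

The integration question the paper evaluates is where such calls are hooked in: in the application code itself, in a UI-framework layer, or via an external library, each with different code-change and overhead trade-offs.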
Model-based testing (MBT) has been previously used to validate embedded systems. However, (i) creating a model conforming to the behavioural aspects of an embedded system, (ii) generating executable test scripts, and (iii) assessing the test verdict require a systematic process. In this paper, we present a three-phase, tool-supported MBT workflow for testing an embedded system that spans from requirements specification to test verdict assessment. The workflow starts with a simplistic yet practical application of a Domain-Specific Language (DSL) based on a Gherkin-like style, which allows the requirements engineer to specify requirements and to extract information about model elements (i.e., states and transitions). This is done to assist the graphical modelling of the complete system under test (SUT). Later stages of the workflow generate an executable test script that runs on a domain-specific simulation platform. We evaluated this tool-supported workflow by specifying the requirements, extracting information from the DSL, and developing a model of a subsystem of the train control management system developed at Alstom Transport AB in Sweden. The C# test script generated from the SUT model is successfully executed on the Software-in-the-Loop (SIL) execution platform, and test verdicts are visualized as a sequence of passed and failed test steps.
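The first phase described above, extracting states and transitions from Gherkin-like requirement steps, can be sketched as follows. The step format is a hypothetical Given/When/Then template assumed for illustration, not the paper's actual DSL.

```python
import re

# Assumed step template: "Given the system is in <state>
# When <event> occurs Then the system is in <state>".
STEP_PATTERN = re.compile(
    r"Given the system is in (\w+) "
    r"When (\w+) occurs "
    r"Then the system is in (\w+)")

def extract_model(requirements):
    """Return (states, transitions) parsed from requirement lines,
    as input for graphical modelling of the SUT."""
    states, transitions = set(), []
    for line in requirements:
        m = STEP_PATTERN.match(line.strip())
        if m:
            src, event, dst = m.groups()
            states.update([src, dst])
            transitions.append((src, event, dst))
    return states, transitions

reqs = [
    "Given the system is in Idle When start occurs Then the system is in Running",
    "Given the system is in Running When stop occurs Then the system is in Idle",
]
states, transitions = extract_model(reqs)
print(sorted(states))    # ['Idle', 'Running']
print(len(transitions))  # 2
```

The extracted (state, event, state) triples then seed the graphical model from which executable test scripts are generated in the later phases of the workflow.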