SEFAIS '18 - Proceedings of the 1st International Workshop on Software Engineering for AI in Autonomous Systems

Full Citation in the ACM Digital Library

SESSION: AI applications

A data-driven generative model for GPS sensors for autonomous driving

Autonomous driving (AD) is envisioned to have a significant impact on people's lives regarding safety and comfort. Positioning is one of the key challenges in realizing AD, where global navigation satellite systems (GNSS) are traditionally used as an important source of information. The area of GNSS is well explored and the different sources of error are deeply investigated. However, existing modeling methods often have very comprehensive requirements for the training data, where all affecting conditions, such as ephemeris data, must be well known. The main goal of this paper is to develop a solution for modeling GPS error that requires only information available in the vehicle, without access to detailed information about the conditions. We propose a statistical generative model using autoregression and Gaussian mixture models and develop a learning algorithm to estimate the parameters from data collected in real traffic. The proposed model is evaluated by comparing the produced artificial data with validation data collected under different traffic conditions, and the results indicate that the model successfully mimics the sensor behavior.
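The abstract does not publish the model's exact form or parameters; as a minimal illustrative sketch of the general idea — an autoregressive (AR(1)) error process driven by Gaussian-mixture innovations — with hypothetical parameter values:

```python
import numpy as np

rng = np.random.default_rng(0)

def sample_gmm(weights, means, stds, rng):
    """Draw one sample from a 1-D Gaussian mixture."""
    k = rng.choice(len(weights), p=weights)
    return rng.normal(means[k], stds[k])

def simulate_gps_error(n, phi=0.9, weights=(0.8, 0.2),
                       means=(0.0, 0.0), stds=(0.5, 2.0), rng=rng):
    """AR(1) process with Gaussian-mixture innovations:
    e_t = phi * e_{t-1} + w_t,  w_t ~ GMM(weights, means, stds).
    All parameter values here are illustrative, not the paper's."""
    weights = np.asarray(weights, dtype=float)
    e = np.zeros(n)
    for t in range(1, n):
        e[t] = phi * e[t - 1] + sample_gmm(weights, means, stds, rng)
    return e

errors = simulate_gps_error(1000)
```

In such a model, the AR coefficient captures the temporal correlation of the positioning error, while the mixture component with the larger standard deviation stands in for occasional large disturbances; learning would amount to fitting these parameters to real-traffic error traces.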

How machine perception relates to human perception: visual saliency and distance in a frame-by-frame semantic segmentation task for highly/fully automated driving

In this paper, we investigate the link between machine perception and human perception for highly/fully automated driving. We compare the classification results of a camera-based frame-by-frame semantic segmentation model (Machine) with a well-established visual saliency model (Human) on the Cityscapes dataset. The results show that Machine classifies foreground objects better if they are more salient, indicating a similarity with the human visual system. For background objects, the accuracy drops when the saliency increases, giving evidence for the assumption that Machine has an implicit concept of saliency.
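The comparison of segmentation accuracy against saliency can be sketched as a binned analysis; the binning scheme below is a hypothetical illustration, not the authors' exact protocol:

```python
import numpy as np

def accuracy_by_saliency(pred, label, saliency, n_bins=5):
    """Bin pixels by their saliency value and compute the per-bin
    classification accuracy (hypothetical analysis sketch)."""
    correct = (np.asarray(pred) == np.asarray(label)).ravel()
    sal = np.asarray(saliency, dtype=float).ravel()
    edges = np.linspace(sal.min(), sal.max(), n_bins + 1)
    # digitize against interior edges -> bin indices 0 .. n_bins-1
    idx = np.digitize(sal, edges[1:-1])
    return [float(correct[idx == b].mean()) if np.any(idx == b)
            else float("nan") for b in range(n_bins)]
```

Plotting such per-bin accuracies separately for foreground and background classes would reproduce the kind of trend the paper reports: accuracy rising with saliency for foreground objects and falling for background ones.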

Emotion-awareness for intelligent vehicle assistants: a research agenda

EVA describes a new class of emotion-aware autonomous systems delivering intelligent personal assistant functionalities. EVA requires a multi-disciplinary approach, combining a number of critical building blocks into a cybernetic systems/software architecture: emotion-aware systems and algorithms, multimodal interaction design, cognitive modelling, decision making and recommender systems, emotion sensing as feedback for learning, and distributed (edge) computing delivering cognitive services.

SESSION: AI engineering methods

Distributed deep reinforcement learning on the cloud for autonomous driving

This paper proposes an architecture that leverages cloud computing technology to reduce training time for deep reinforcement learning models for autonomous driving by distributing the training process across a pool of virtual machines. Through parallelization of the training process, careful design of the reward function, and the use of techniques such as transfer learning, we demonstrate a decrease in training time for our example autonomous driving problem from 140 hours to less than 1 hour. We describe our network architecture, job distribution paradigm, and reward function design, and report results from experiments on a small cluster (1--6 training nodes) of machines. We also discuss the limitations of our approach when scaling up to massive clusters.
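The fan-out of rollouts to a worker pool can be sketched as follows; this toy uses threads and a made-up shaped reward as a stand-in for the paper's virtual machines and actual reward design:

```python
import random
from concurrent.futures import ThreadPoolExecutor

def run_episode(seed):
    """Worker: one rollout of a toy driving episode; returns its
    shaped return (a stand-in for the experience a training node
    would ship back to the central learner)."""
    rng = random.Random(seed)
    total = 0.0
    for _ in range(100):
        if rng.random() < 0.01:   # toy 'crash' event
            return total - 10.0   # large penalty ends the episode
        total += 0.1              # small per-step progress bonus
    return total

def parallel_rollouts(n_workers, episodes_per_worker):
    """One training iteration: distribute rollouts over a worker
    pool (threads here; virtual machines in the cloud setup the
    paper describes) and aggregate the returns."""
    seeds = range(n_workers * episodes_per_worker)
    with ThreadPoolExecutor(max_workers=n_workers) as pool:
        returns = list(pool.map(run_episode, seeds))
    return sum(returns) / len(returns)

avg_return = parallel_rollouts(n_workers=4, episodes_per_worker=8)
```

In a real distributed setup, each worker would also return gradients or experience tuples rather than a scalar return, and the learner would apply the aggregated update before the next iteration.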

Towards a holistic software systems engineering approach for dependable autonomous systems

Autonomous systems are gaining momentum in various application domains, such as autonomous vehicles, autonomous transport robotics and self-adaptation in smart homes. Product liability regulations impose high standards on manufacturers of such systems with respect to dependability (safety, security and privacy). Today's conventional engineering methods are not adequate for providing guarantees with respect to dependability requirements in a cost-efficient manner; e.g., road tests in the automotive industry sum up to millions of miles before a system can be considered sufficiently safe. System engineers will no longer be able to test or formally verify autonomous systems during development time in order to guarantee the dependability requirements in advance. In this vision paper, we introduce a new holistic software systems engineering approach for autonomous systems, which integrates development-time methods as well as operation-time techniques. With this approach, we aim to give users a transparent view of the confidence level of the autonomous system in use with respect to the dependability requirements. We present results already obtained and point out research goals to be addressed in the future.

Towards a methodology for training with synthetic data on the example of pedestrian detection in a frame-by-frame semantic segmentation task

In order to make highly/fully automated driving safe, synthetic training and validation data will be required, because critical road situations are too diverse and too rare. A few studies on using synthetic data have been published, reporting a general increase in accuracy. In this paper, we propose a novel method to gain more in-depth insights into the quality, performance, and influence of synthetic data during the training phase in a bounded setting. We demonstrate this method on the example of pedestrian detection in a frame-by-frame semantic segmentation task.

SESSION: Verification of self-driving cars

Deep learning for self-driving cars: chances and challenges

Artificial Intelligence (AI) is revolutionizing modern society. In the automotive industry, researchers and developers are actively pushing deep learning based approaches for autonomous driving. However, before a neural network finds its way into series-production cars, it must first undergo strict assessment concerning functional safety. The chances and challenges of incorporating deep learning into self-driving cars are presented in this paper.

Exploiting learning and scenario-based specification languages for the verification and validation of highly automated driving

We propose a series of methods, based on learning key structural properties from traffic databases and on statistical model checking, ultimately leading to the construction of a scenario catalogue capturing requirements for controlling criticality for highly autonomous vehicles. We sketch the underlying mathematical foundations, which allow us to derive formal confidence levels that vehicles tested against such a scenario catalogue will maintain the required control of criticality in real traffic matching the probability distributions of key parameters of the data recorded in the reference database employed for this process.
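The abstract leaves the mathematical foundations to the paper; a minimal sketch of the statistical-model-checking ingredient — estimating the probability that a scenario run keeps criticality under control, with a confidence from a Hoeffding bound (the choice of bound is an assumption for illustration, not the paper's) — could look like:

```python
import math
import random

def smc_confidence(run_scenario, n_runs, epsilon):
    """Statistical model checking sketch (illustrative only).
    run_scenario(rng) -> 1 if the run stayed within criticality
    limits, else 0. Returns the estimated safety probability and
    a Hoeffding-style confidence that the estimate is within
    +/- epsilon of the true probability."""
    successes = sum(run_scenario(random.Random(i)) for i in range(n_runs))
    p_hat = successes / n_runs
    # Hoeffding: P(|p_hat - p| >= eps) <= 2 * exp(-2 * n * eps^2)
    confidence = 1.0 - 2.0 * math.exp(-2.0 * n_runs * epsilon ** 2)
    return p_hat, confidence
```

For example, with 1000 runs and epsilon = 0.05 the bound yields a confidence of roughly 0.987; driving the required confidence level up is then a matter of sampling more scenario runs from the catalogue.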

Automotive safety and machine learning: initial results from a study on how to adapt the ISO 26262 safety standard

Machine learning (ML) applications generate a continuous stream of success stories from various domains. ML enables many novel applications, also in safety-critical contexts. However, functional safety standards such as ISO 26262 have not evolved to cover ML. We conduct an exploratory study on which parts of ISO 26262 represent the most critical gaps between safety engineering and ML development. While this paper only reports the first steps of a larger research endeavor, we identify three adaptations that are critically needed to allow ISO 26262-compliant engineering, and related suggestions on how to evolve the standard.