Decoding Hazardous Events in Driving Videos

Abstract

Decoding the human brain state with BCI methods can be seen as a building block for human-machine interaction, providing a noisy but objective, low-latency information channel that captures human reactions to the environment. Specifically, in the context of autonomous driving, human judgement is relevant for high-level scene understanding. Despite advances in computer vision and scene understanding, it remains challenging to go from the detection of traffic events to the detection of hazards. We present a preliminary study on hazard perception, implemented in the context of natural driving videos that have been augmented with artificial events to create potentially hazardous driving situations. We decode brain signals recorded with electroencephalography (EEG) in order to classify individual events as hazardous or non-hazardous. We find that event-related responses can be discriminated, and that single-event classification yields an AUC of 0.79. We see these results as a step towards incorporating EEG feedback into more complex, real-world tasks.
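
To illustrate the kind of single-trial pipeline such a study implies, below is a minimal sketch of classifying EEG epochs as hazardous vs. non-hazardous and evaluating with AUC. The paper does not publish its pipeline, so everything here is an assumption: the epoch shape, the windowed mean-amplitude features, the shrinkage-LDA classifier, and the data itself, which is synthetic.

```python
# Minimal sketch: single-trial classification of EEG epochs into
# hazardous vs. non-hazardous events, evaluated with ROC-AUC.
# All data is synthetic and all pipeline choices are illustrative
# assumptions, not the authors' published method.
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)

n_trials, n_channels, n_samples = 200, 32, 100   # hypothetical epoch shape
y = rng.integers(0, 2, n_trials)                 # 1 = hazardous, 0 = non-hazardous

# Synthetic epochs: add a small offset to "hazardous" trials to mimic
# an event-related potential difference between the two classes.
X = rng.normal(size=(n_trials, n_channels, n_samples))
X[y == 1, :, 40:70] += 0.3

# A common ERP feature: mean amplitude per channel in post-event windows.
windows = [(0, 25), (25, 50), (50, 75), (75, 100)]
features = np.concatenate(
    [X[:, :, a:b].mean(axis=2) for a, b in windows], axis=1
)

# Shrinkage LDA is a standard baseline for ERP classification.
clf = make_pipeline(
    StandardScaler(),
    LinearDiscriminantAnalysis(solver="lsqr", shrinkage="auto"),
)
auc = cross_val_score(clf, features, y, cv=5, scoring="roc_auc")
print(f"mean AUC: {auc.mean():.2f}")
```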

Publication
Proceedings of the 7th Graz Brain-Computer Interface Conference 2017