First attempt to build realistic driving scenes using video-to-video synthesis in OpenDS framework

Song, Zili, Wang, Shuolei, Kong, Weikai, Peng, Xiangjun and Sun, Xu (2019) First attempt to build realistic driving scenes using video-to-video synthesis in OpenDS framework. In: Proceedings of the 11th International Conference on Automotive User Interfaces and Interactive Vehicular Applications: Adjunct Proceedings. Association for Computing Machinery, Utrecht, Netherlands, pp. 387-391. ISBN 9781450369206

Available under Licence Creative Commons Attribution.


Existing programmable simulators let researchers customize driving scenarios for in-lab automotive driver studies. However, software-based simulators for cognitive research generate and maintain their scenes with 3D engines, and the resulting lack of realism may degrade users' experience. A critical open question is therefore how to make simulated scenes look like real-world ones. In this paper, we present a first step towards applying video-to-video synthesis, a deep learning approach, within the OpenDS framework, an open-source driving simulator, to render simulated scenes as realistically as possible. Off-line evaluations demonstrated promising results, and our future work will focus on integrating the two components appropriately to build a close-to-reality, real-time driving simulator.
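The pipeline described in the abstract, rendering semantic scenes in a simulator and then translating them frame by frame into realistic video, can be sketched as below. This is a minimal illustrative sketch, not the authors' implementation: `toy_generator` is a hypothetical stand-in for the learned vid2vid network, and the key idea shown is that each output frame is conditioned on both the current semantic frame and the previously synthesised frame, which is how vid2vid-style models keep output temporally coherent.

```python
# Hypothetical sketch of a vid2vid-style synthesis loop (assumed names;
# `toy_generator` stands in for the real neural generator).

def toy_generator(semantic_frame, previous_output):
    # Stand-in for the learned generator: blends the semantic labels with
    # the previous output so the example is runnable. A real model would
    # be a trained conditional GAN taking both inputs.
    return [
        [(s + p) / 2.0 for s, p in zip(row_s, row_p)]
        for row_s, row_p in zip(semantic_frame, previous_output)
    ]

def synthesise_sequence(semantic_frames):
    """Turn a sequence of simulator-rendered semantic frames into
    'realistic' frames one at a time, carrying temporal state forward."""
    if not semantic_frames:
        return []
    h, w = len(semantic_frames[0]), len(semantic_frames[0][0])
    previous = [[0.0] * w for _ in range(h)]  # black frame bootstraps t=0
    outputs = []
    for frame in semantic_frames:
        previous = toy_generator(frame, previous)
        outputs.append(previous)
    return outputs

# Example: three identical 2x2 "semantic" frames (label values), as a
# simulator such as OpenDS might render them.
frames = [[[1.0, 1.0], [2.0, 2.0]] for _ in range(3)]
video = synthesise_sequence(frames)
```

Because the recurrence feeds each synthesised frame back in, the output converges towards the semantic input over successive frames, mirroring how temporal conditioning stabilises vid2vid output.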

Item Type: Book Section
Keywords: Video Synthesis; Driving Simulator; Machine Learning
Schools/Departments: University of Nottingham Ningbo China > Faculty of Science and Engineering > Department of Mechanical, Materials and Manufacturing Engineering
Identification Number:
Depositing User: Wu, Cocoa
Date Deposited: 21 May 2020 06:47
Last Modified: 21 May 2020 06:47
