Deakin University

Synthetic Traffic Sign Image Generation Applying Generative Adversarial Networks

journal contribution
posted on 2024-06-19, 03:41 authored by Christine Dewi, Rung-Ching Chen, Yan-Ting Liu
Recently, convolutional neural networks (CNNs) trained on suitably annotated data have been shown to produce the best traffic sign detection (TSD) and recognition (TSR) results. The efficiency of the whole system depends on the data collection process that feeds these networks. However, the traffic sign datasets of most countries around the world are difficult to recognize because of their diversity. To address this problem, we create synthetic images to enhance our dataset. We apply deep convolutional generative adversarial networks (DCGAN) and Wasserstein generative adversarial networks (Wasserstein GAN, WGAN) to generate realistic and diverse additional training images that compensate for the data shortage in the original image distribution. This study focuses on the consistency of DCGAN and WGAN images created under varied settings. We train on real images in varying numbers and scales. The Structural Similarity Index (SSIM) and the Mean Square Error (MSE) are used to assess image quality: we compute SSIM values between generated pictures and their corresponding real images. When more training images are used, the generated images show a high degree of similarity to the originals. Our experiments reveal that the highest SSIM values are achieved when 200 total images of [Formula: see text] pixels are used as input and the epoch is 2000.
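As a rough illustration of the quality metrics named in the abstract, the sketch below computes MSE and a simplified, single-window SSIM between two grayscale images using only NumPy. The paper presumably uses the standard sliding-window SSIM formulation; the function names, the global (non-windowed) averaging, and the default constants here are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def mse(a, b):
    """Mean Square Error between two equal-sized grayscale images."""
    a = a.astype(np.float64)
    b = b.astype(np.float64)
    return float(np.mean((a - b) ** 2))

def ssim_global(a, b, data_range=255.0):
    """Simplified SSIM computed over the whole image as a single window.

    Uses the standard stabilizing constants C1 = (0.01*L)^2 and
    C2 = (0.03*L)^2, where L is the dynamic range of pixel values.
    (Assumption: the standard SSIM is computed over local sliding
    windows and then averaged; this global variant is a sketch.)
    """
    a = a.astype(np.float64)
    b = b.astype(np.float64)
    c1 = (0.01 * data_range) ** 2
    c2 = (0.03 * data_range) ** 2
    mu_a, mu_b = a.mean(), b.mean()
    var_a, var_b = a.var(), b.var()
    cov = ((a - mu_a) * (b - mu_b)).mean()
    return float(((2 * mu_a * mu_b + c1) * (2 * cov + c2))
                 / ((mu_a ** 2 + mu_b ** 2 + c1) * (var_a + var_b + c2)))
```

An identical image pair yields MSE 0 and SSIM 1; a generated image that closely matches its real counterpart scores near those values, which is how the abstract's comparison across training-set sizes can be read.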

History

Journal

Vietnam Journal of Computer Science

Volume

9

Pagination

333-348

Location

Singapore

Open access

  • Yes

ISSN

2196-8888

eISSN

2196-8896

Language

eng

Publication classification

C1.1 Refereed article in a scholarly journal

Issue

3

Publisher

World Scientific Publishing