Investigating the effect of loss functions on single-image GAN performance
Date
2024
Authors
Journal Title
Journal ISSN
Volume Title
Publisher
Bursa Teknik Üniversitesi
Access Rights
info:eu-repo/semantics/openAccess
Abstract
Loss functions are crucial in training generative adversarial networks (GANs) and in shaping their outputs. These functions, designed specifically for GANs, optimize the generator and discriminator networks jointly but with opposing objectives. GAN models, which typically require large datasets, have been highly successful in deep learning; however, exploring the factors that influence the success of GAN models developed for limited-data problems remains an important research area. In this study, we comprehensively investigated loss functions commonly used in the GAN literature: binary cross entropy (BCE), Wasserstein generative adversarial network (WGAN), least squares generative adversarial network (LSGAN), and hinge loss. Our research focused on the impact of these loss functions on output quality and training convergence in single-image GANs. Specifically, we evaluated a single-image GAN model, SinGAN, under each loss function in terms of image quality and diversity. Our experimental results demonstrate that these loss functions can produce high-quality, diverse images from a single training image, and that the WGAN-GP and LSGAN-GP loss functions are the most effective for single-image GAN models.
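The four loss families named in the abstract can be sketched as scalar discriminator-side formulas. This is an illustrative sketch, not the paper's implementation: the function names and the raw discriminator scores `d_real` / `d_fake` are assumed placeholders, and the WGAN/LSGAN gradient-penalty (GP) term is omitted since it is computed separately from network gradients.

```python
import math

def _sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def bce_d_loss(d_real, d_fake):
    # Standard (non-saturating) GAN discriminator loss:
    # -log sigmoid(D(x)) - log(1 - sigmoid(D(G(z))))
    return -math.log(_sigmoid(d_real)) - math.log(1.0 - _sigmoid(d_fake))

def wgan_d_loss(d_real, d_fake):
    # Wasserstein critic loss: D(G(z)) - D(x)
    # (the GP variant adds a gradient penalty term, omitted here)
    return d_fake - d_real

def lsgan_d_loss(d_real, d_fake):
    # Least squares loss with targets 1 (real) and 0 (fake):
    # (D(x) - 1)^2 + D(G(z))^2
    return (d_real - 1.0) ** 2 + d_fake ** 2

def hinge_d_loss(d_real, d_fake):
    # Hinge loss: max(0, 1 - D(x)) + max(0, 1 + D(G(z)))
    return max(0.0, 1.0 - d_real) + max(0.0, 1.0 + d_fake)
```

In a SinGAN-style training loop these scalars would be averaged over patch-level discriminator outputs, and the corresponding generator losses use the mirrored fake-sample terms.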
Description
Keywords
Computer Vision, Pattern Recognition, Deep Learning, Neural Networks, Machine Learning (Other)
Source
Journal of Innovative Science and Engineering
WoS Q Value
Scopus Q Value
Volume
8
Issue
2