Artificial Intelligence


ACL-GAN: Multi-Domain Image-to-Image Translation GAN Using New Losses to Reduce the Time for Hyperparameter Optimization and Training

Authors: JeongIk Cho

StarGAN, which achieves impressive performance in image-to-image translation, depends on three important hyperparameters: the adversarial weight, the classification weight, and the reconstruction weight, each of which has a significant impact on model performance. In this study, we propose an attribute loss that replaces the conditional GAN losses (adversarial loss and classification loss), so that the time needed to optimize a single attribute weight replaces the time needed to optimize both the adversarial weight and the classification weight, drastically reducing the time required for hyperparameter optimization. The proposed attribute loss is the sum of the losses obtained when a separate GAN is created for each attribute; because these GANs share hidden layers, the additional computation is small. We also propose a simplified content loss that reduces computation by simplifying the reconstruction loss: StarGAN's reconstruction loss passes an image through the generator twice, whereas the simplified content loss passes it through only once. Finally, we propose an architecture that prevents background distortion through image framing and improves training speed through a bidirectional progressive growing generator.
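To make the two proposed losses concrete, the following is a minimal PyTorch-style sketch of one plausible reading of the abstract: a discriminator whose per-attribute GAN heads share a hidden backbone, an attribute loss formed as the sum of the per-attribute GAN losses, and a simplified content loss that compares the generator's single-pass output with the input image instead of reconstructing through the generator a second time. The names (MultiAttributeDiscriminator, attribute_loss, simplified_content_loss) and the specific loss forms (binary cross-entropy per head, L1 for content) are illustrative assumptions, not definitions taken from the paper.

import torch
import torch.nn as nn
import torch.nn.functional as F

class MultiAttributeDiscriminator(nn.Module):
    """Per-attribute GANs sharing one hidden backbone (illustrative sketch).

    A single convolutional backbone feeds a separate real/fake head for each
    attribute, so adding an attribute adds only a small head on top of
    features that are computed once.
    """
    def __init__(self, num_attributes, channels=64):
        super().__init__()
        self.backbone = nn.Sequential(
            nn.Conv2d(3, channels, 4, stride=2, padding=1), nn.LeakyReLU(0.2),
            nn.Conv2d(channels, channels * 2, 4, stride=2, padding=1), nn.LeakyReLU(0.2),
        )
        # One real/fake head per attribute (the "GAN for each attribute").
        self.heads = nn.ModuleList(
            [nn.Conv2d(channels * 2, 1, 3, padding=1) for _ in range(num_attributes)]
        )

    def forward(self, x):
        h = self.backbone(x)
        return [head(h) for head in self.heads]  # one logit map per attribute

def attribute_loss(disc_outputs, target_attrs, real=True):
    """Sum of per-attribute GAN losses (assumed form of the proposed loss).

    disc_outputs: list of logit maps, one per attribute head.
    target_attrs: (batch, num_attributes) binary labels; a sample only
                  contributes to the heads of the attributes it carries.
    """
    total = 0.0
    for k, logits in enumerate(disc_outputs):
        target = torch.ones_like(logits) if real else torch.zeros_like(logits)
        weight = target_attrs[:, k].float().view(-1, 1, 1, 1)  # mask by attribute
        per_pixel = F.binary_cross_entropy_with_logits(logits, target, reduction="none")
        total = total + (weight * per_pixel).mean()
    return total

def simplified_content_loss(real_img, fake_img):
    """Single-pass content penalty: compares the input with the generator's
    output directly, rather than reconstructing through the generator a
    second time as StarGAN's cycle reconstruction loss does. The L1 form
    here is an assumption for illustration."""
    return F.l1_loss(fake_img, real_img)

Sharing the backbone is what keeps the per-attribute heads cheap: each additional attribute adds only a small head on top of features computed once, which matches the abstract's claim that the attribute loss does not increase the amount of computation much.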

Comments: 13 Pages.


Submission history

[v1] 2019-09-03 20:56:00
[v2] 2019-10-05 10:38:22
[v3] 2019-11-05 18:35:57
[v4] 2019-11-19 21:46:23


