Synthesizing skin lesion images using CycleGANs – a case study
Generative adversarial networks (GANs) have shown some success as a way to synthesize training data for supervised machine learning models. In this work, we design two novel approaches for synthetic image generation based on CycleGANs, aimed at generating realistic-looking, class-specific dermoscopic skin lesion images. We evaluate the images' usefulness as additional training data for a convolutional neural network trained to perform a difficult lesion classification task. We are able to generate visually striking images, but their value for augmenting the classifier's training set is low. This is in line with other researchers' investigations into similar GAN models, indicating the need for further research into forcing GAN models to produce samples farther from the training data distribution, and into finding ways of guiding the image generation using feedback from the ultimate classification objective.
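As background, the distinguishing ingredient of a CycleGAN is its cycle-consistency term, which couples two generators G: X → Y and F: Y → X by penalizing ||F(G(x)) − x||₁ and ||G(F(y)) − y||₁. The following is a minimal NumPy sketch of that loss using toy linear "generators"; it is illustrative only and is not the model architecture used in this work:

```python
import numpy as np

def cycle_consistency_loss(G, F, x, y):
    """L1 cycle-consistency loss: ||F(G(x)) - x||_1 + ||G(F(y)) - y||_1."""
    return np.abs(F(G(x)) - x).mean() + np.abs(G(F(y)) - y).mean()

# Toy linear "generators" mapping between two 4-dimensional feature domains.
rng = np.random.default_rng(0)
A = rng.normal(size=(4, 4))
G = lambda v: v @ A                  # G: X -> Y
F = lambda v: v @ np.linalg.inv(A)   # F: Y -> X (exact inverse of G)

x = rng.normal(size=(8, 4))  # batch of samples from domain X
y = rng.normal(size=(8, 4))  # batch of samples from domain Y

# Because F is the exact inverse of G here, the cycle loss is ~0;
# in real training, G and F are deep networks and this term is minimized
# jointly with the adversarial losses of two discriminators.
loss = cycle_consistency_loss(G, F, x, y)
```

In the full CycleGAN objective this term is weighted (typically by a factor λ around 10) and added to the two adversarial losses; it is what prevents the generators from mapping all inputs to a single plausible-looking output.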