Improved Transformer-Based Implicit Latent GAN with Multi-headed Self-attention for Unconditional Text Generation
Abstract
Generative Adversarial Networks (GANs) are widely used in computer vision for tasks such as image generation. In recent years, GANs have also been applied to unconditional text generation. In this work, we improve TILGAN for unconditional text generation by refactoring its generator: we replace the linear and batch normalization (BN) layers with multi-headed self-attention, giving the generator stronger text generation capabilities. Our model consists of three components: a transformer autoencoder, a multi-headed self-attention-based generator, and a linear discriminator. The encoder of the transformer autoencoder produces the distribution of real samples, and the decoder maps real or generated sentence vectors back into text. The autoencoder is trained with a cross-entropy loss, and the GAN with a Kullback-Leibler (KL) divergence objective. On the MS COCO dataset, the proposed model achieves a higher BLEU score than TILGAN. Ablation experiments further confirm the effectiveness of the proposed generator for unconditional text generation.
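To make the architectural change concrete, the sketch below shows one way a self-attention-based generator of this kind could be written in PyTorch. It is a minimal illustration, not the authors' implementation: the class name, layer sizes, sequence length, and parameter names (e.g. SelfAttentionGenerator, noise_dim, latent_dim) are assumptions introduced here for clarity.

```python
# Minimal sketch (assumed hyperparameters, not the paper's code) of a generator
# that uses multi-headed self-attention in place of stacked linear + BN layers.
import torch
import torch.nn as nn

class SelfAttentionGenerator(nn.Module):
    def __init__(self, noise_dim=100, latent_dim=256, seq_len=8, num_heads=4):
        super().__init__()
        self.seq_len = seq_len
        self.latent_dim = latent_dim
        # Project the noise vector into a short sequence of latent tokens.
        self.proj = nn.Linear(noise_dim, seq_len * latent_dim)
        # Multi-headed self-attention replaces the linear + BN blocks.
        self.attn = nn.MultiheadAttention(latent_dim, num_heads, batch_first=True)
        self.norm = nn.LayerNorm(latent_dim)
        # Map the attended tokens back to a single sentence-level latent vector.
        self.out = nn.Linear(latent_dim, latent_dim)

    def forward(self, z):
        # z: (batch, noise_dim) -> tokens: (batch, seq_len, latent_dim)
        tokens = self.proj(z).view(-1, self.seq_len, self.latent_dim)
        attended, _ = self.attn(tokens, tokens, tokens)
        tokens = self.norm(tokens + attended)
        # Pool over tokens to obtain the generated sentence vector, which the
        # transformer decoder would then decode into text.
        return self.out(tokens.mean(dim=1))

# Usage: sample noise and produce a fake sentence latent for the discriminator.
z = torch.randn(32, 100)
fake_latent = SelfAttentionGenerator()(z)  # shape: (32, 256)
```

In this sketch, self-attention lets the generated latent tokens interact before being pooled into the sentence vector, which is the role the abstract attributes to replacing the linear and BN layers.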