Grounded Vocabulary for Image Retrieval Using a Modified Multi-Generator Generative Adversarial Network (IEEE Access 2021)
IEEE Access
- Impact Factor 2021: 3.47
Authors
- Kuekyeng Kim, Chanjun Park, Jaehyung Seo, Heuiseok Lim
Abstract
With the recent growth in both natural-language and visual information, the demand for research on seamless multi-modal processing for the effective retrieval of such information has increased. However, because of the unstructured nature of images, it is difficult to retrieve images that accurately represent the input text. In this study, we utilized an augmented version of a multi-generator generative adversarial network that uses BERT embeddings and attention maps as input to enable grounded vocabulary for visual representations. We compared the performance of our proposed model with those of other state-of-the-art text-based image retrieval methods on the MSCOCO and Flickr30K datasets, and the results showed the potential of our proposed method. Even with a limited vocabulary, our proposed model was comparable to other state-of-the-art methods on R@10 and even exceeded them on R@1. Moreover, we revealed the unique properties of our method by demonstrating how it performs successfully even when given more descriptive text or short sentences as input.
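The R@1 and R@10 scores mentioned above are the standard recall-at-K retrieval metrics: the fraction of text queries for which the ground-truth image appears among the top K ranked results. As a minimal illustrative sketch (not the paper's evaluation code), assuming a similarity matrix where text query `i` matches image `i`:

```python
import numpy as np

def recall_at_k(sim, k):
    """Fraction of queries whose ground-truth item (assumed to sit at the
    matching index, i.e. sim[i, i]) appears in the top-k ranked results."""
    hits = 0
    for i, row in enumerate(sim):
        # indices of candidates sorted by descending similarity
        order = np.argsort(-row)
        # rank of the ground-truth item (0 = best)
        rank = int(np.where(order == i)[0][0])
        if rank < k:
            hits += 1
    return hits / len(sim)

# Toy similarity matrix: 3 text queries x 3 candidate images.
sim = np.array([
    [0.9, 0.1, 0.2],   # query 0: correct image ranked 1st
    [0.3, 0.2, 0.8],   # query 1: correct image ranked 3rd
    [0.1, 0.7, 0.6],   # query 2: correct image ranked 1st
])

print(recall_at_k(sim, 1))  # → 0.3333... (1 of 3 queries correct at rank 1)
print(recall_at_k(sim, 3))  # → 1.0
```

In practice the similarity matrix would come from comparing text and image embeddings (e.g. by cosine similarity) over the test split of MSCOCO or Flickr30K.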
Check out This Link for more information on our paper.