vieCap4H-VLSP 2021: Vietnamese Image Captioning for Healthcare Domain using Swin Transformer and Attention-based LSTM

Bibliographic Details
Title: vieCap4H-VLSP 2021: Vietnamese Image Captioning for Healthcare Domain using Swin Transformer and Attention-based LSTM
Authors: Nguyen, Thanh Tin, Nguyen, Long H., Pham, Nhat Truong, Nguyen, Liu Tai, Do, Van Huong, Nguyen, Hai, Nguyen, Ngoc Duy
Source: VNU Journal of Science: Computer Science and Communication Engineering, 38(2), 2022
Publication Year: 2022
Collection: Computer Science
Subject Terms: Computer Science - Computer Vision and Pattern Recognition, Computer Science - Artificial Intelligence, Computer Science - Computation and Language
Description: This study presents our approach to automatic Vietnamese image captioning for the healthcare domain in the text processing tasks of the Vietnamese Language and Speech Processing (VLSP) Challenge 2021, as shown in Figure 1. In recent years, image captioning models have often employed a convolutional neural network-based architecture as the encoder and a long short-term memory (LSTM) network as the decoder to generate sentences, and these models perform remarkably well on different datasets. Our proposed model likewise has an encoder and a decoder, but we instead use a Swin Transformer as the encoder and an LSTM combined with an attention module as the decoder. The study presents our training experiments and the techniques used during the competition. Our model achieves a BLEU-4 score of 0.293 on the vieCap4H dataset, which ranked 3rd on the private leaderboard. Our code can be found at https://git.io/JDdJm.
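The description gives the architecture only at a high level. Below is a minimal sketch of the decoder side, assuming a Swin Transformer backbone (e.g., timm's swin_base_patch4_window7_224 with its classification head removed) supplies a grid of region features; the module names, dimensions, and attention formulation here are illustrative assumptions, not the authors' implementation, which is available at https://git.io/JDdJm.

# Minimal sketch (not the authors' code): soft attention over Swin region
# features + an LSTMCell caption decoder. Feature shapes are assumptions.
import torch
import torch.nn as nn

class AttentionLSTMDecoder(nn.Module):
    def __init__(self, vocab_size, enc_dim=1024, embed_dim=256, hidden_dim=512, attn_dim=512):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, embed_dim)
        # Additive attention: score each image region against the hidden state.
        self.enc_proj = nn.Linear(enc_dim, attn_dim)
        self.dec_proj = nn.Linear(hidden_dim, attn_dim)
        self.attn_score = nn.Linear(attn_dim, 1)
        # Each LSTM step consumes [word embedding ; attended visual context].
        self.lstm = nn.LSTMCell(embed_dim + enc_dim, hidden_dim)
        self.out = nn.Linear(hidden_dim, vocab_size)
        self.init_h = nn.Linear(enc_dim, hidden_dim)
        self.init_c = nn.Linear(enc_dim, hidden_dim)

    def forward(self, features, captions):
        # features: (B, R, enc_dim) grid of encoder features; captions: (B, T) token ids.
        B, T = captions.shape
        h = self.init_h(features.mean(dim=1))  # init states from the mean region feature
        c = self.init_c(features.mean(dim=1))
        emb = self.embed(captions)             # (B, T, embed_dim)
        logits = []
        for t in range(T):
            # Attention weights over regions given the previous hidden state.
            scores = self.attn_score(torch.tanh(self.enc_proj(features) + self.dec_proj(h).unsqueeze(1)))
            alpha = torch.softmax(scores, dim=1)      # (B, R, 1)
            context = (alpha * features).sum(dim=1)   # (B, enc_dim)
            h, c = self.lstm(torch.cat([emb[:, t], context], dim=1), (h, c))
            logits.append(self.out(h))
        return torch.stack(logits, dim=1)             # (B, T, vocab_size)

# Smoke test with random stand-in features (7x7 grid = 49 regions, 1024-d).
if __name__ == "__main__":
    decoder = AttentionLSTMDecoder(vocab_size=8000)
    feats = torch.randn(2, 49, 1024)
    caps = torch.randint(0, 8000, (2, 12))
    print(decoder(feats, caps).shape)  # torch.Size([2, 12, 8000])

In this sketch the decoder attends anew over the encoder's region grid at every time step, which is the standard way an attention module is combined with an LSTM for captioning; the paper's exact attention variant and hyperparameters are in the linked repository.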
Comment: Accepted for publication in the VNU Journal of Science: Computer Science and Communication Engineering
Document Type: Working Paper
DOI: 10.25073/2588-1086/vnucsce.369
Access URL: http://arxiv.org/abs/2209.01304
Accession Number: edsarx.2209.01304
Database: arXiv