Academic Journal

Visual question answering model for fruit tree disease decision-making based on multimodal deep learning

Bibliographic Details
Title: Visual question answering model for fruit tree disease decision-making based on multimodal deep learning
Authors: Yubin Lan, Yaqi Guo, Qizhen Chen, Shaoming Lin, Yuntong Chen, Xiaoling Deng
Source: Frontiers in Plant Science, Vol 13 (2023)
Publication Information: Frontiers Media S.A., 2023.
Publication Year: 2023
Collection: LCC:Plant culture
Subject Terms: disease decision-making, deep learning, multimodal fusion, visual question answer, bilinear model, co-attention mechanism, Plant culture, SB1-1110
Description: Visual Question Answering (VQA) about diseases is an essential feature of intelligent management in smart agriculture. Current deep-learning research on fruit tree diseases mainly uses single-source data, such as visible images or spectral data, yielding classification and identification results that cannot be used directly in practical agricultural decision-making. In this study, a VQA model for fruit tree diseases based on multimodal feature fusion was designed. By fusing images with Q&A knowledge of disease management, the model obtains a decision-making answer by querying questions about fruit tree disease images to locate the relevant disease image regions. The main contributions of this study are as follows: (1) a multimodal bilinear factorized pooling model using Tucker decomposition was proposed to fuse image features with question features; (2) a deep modular co-attention architecture was explored to learn image and question attention simultaneously, yielding richer graphical features and interactivity. Experiments showed that the proposed unified model, combining the bilinear model and co-attentive learning in a new network architecture, achieved 86.36% decision-making accuracy under limited data (8,450 images and 4,560k Q&A pairs), outperforming existing multimodal methods. Data augmentation was applied to the training set to avoid overfitting, and ten runs of 10-fold cross-validation were used to report unbiased performance. The proposed multimodal fusion model achieved friendly interaction and fine-grained identification and decision-making performance, so it can be widely deployed in intelligent agriculture.
Document Type: article
File Description: electronic resource
Language: English
ISSN: 1664-462X
Relation: https://www.frontiersin.org/articles/10.3389/fpls.2022.1064399/full; https://doaj.org/toc/1664-462X
DOI: 10.3389/fpls.2022.1064399
Access URL: https://doaj.org/article/25e84562c9024e548b656c577dbb878c
Accession Number: edsdoj.25e84562c9024e548b656c577dbb878c
Database: Directory of Open Access Journals
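The abstract's first contribution is multimodal bilinear factorized pooling via Tucker decomposition: instead of a full 3-way tensor relating image features, question features, and outputs, the tensor is factorized into a small core plus three projection matrices. A minimal NumPy sketch of that fusion step, with illustrative dimensions and random weights (not the authors' implementation or code):

```python
import numpy as np

rng = np.random.default_rng(0)

d_v, d_q, d_o = 64, 32, 16   # image, question, output feature dims (illustrative)
t_v, t_q, t_o = 8, 8, 8      # Tucker ranks of the factorized bilinear tensor

# Tucker factors: three projection matrices and a small core tensor
W_v = rng.standard_normal((d_v, t_v))
W_q = rng.standard_normal((d_q, t_q))
W_o = rng.standard_normal((t_o, d_o))
core = rng.standard_normal((t_v, t_q, t_o))

def tucker_fusion(v, q):
    """Fuse an image feature v and a question feature q through a
    Tucker-factorized bilinear (3-way tensor) interaction."""
    v_t = v @ W_v                                   # project image feature  -> (t_v,)
    q_t = q @ W_q                                   # project question feature -> (t_q,)
    z_t = np.einsum('i,j,ijk->k', v_t, q_t, core)   # contract with the core -> (t_o,)
    return z_t @ W_o                                # project to output space -> (d_o,)

v = rng.standard_normal(d_v)
q = rng.standard_normal(d_q)
z = tucker_fusion(v, q)
print(z.shape)  # (16,)
```

The factorization keeps the parameter count at roughly d_v·t_v + d_q·t_q + t_o·d_o + t_v·t_q·t_o instead of d_v·d_q·d_o for the full bilinear tensor, which is what makes bilinear interaction tractable for high-dimensional features.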