Release of Pre-Trained Models for the Japanese Language

Bibliographic Details
Title: Release of Pre-Trained Models for the Japanese Language
Authors: Sawada, Kei; Zhao, Tianyu; Shing, Makoto; Mitsui, Kentaro; Kaga, Akio; Hono, Yukiya; Wakatsuki, Toshiaki; Mitsuda, Koh
Publication Year: 2024
Collection: Computer Science
Subject Terms: Computer Science - Computation and Language, Computer Science - Artificial Intelligence, Computer Science - Computer Vision and Pattern Recognition, Computer Science - Machine Learning, Electrical Engineering and Systems Science - Audio and Speech Processing
Description: AI democratization aims to create a world in which the average person can utilize AI techniques. To achieve this goal, numerous research institutes have attempted to make their results accessible to the public. In particular, large pre-trained models trained on large-scale data have shown unprecedented potential, and their release has had a significant impact. However, most of the released models specialize in the English language, and thus, AI democratization in non-English-speaking communities is lagging significantly. To reduce this gap in AI access, we released Generative Pre-trained Transformer (GPT), Contrastive Language and Image Pre-training (CLIP), Stable Diffusion, and Hidden-unit Bidirectional Encoder Representations from Transformers (HuBERT) pre-trained in Japanese. By providing these models, users can freely interface with AI that aligns with Japanese cultural values and ensures the identity of Japanese culture, thus enhancing the democratization of AI. Additionally, experiments showed that pre-trained models specialized for Japanese can efficiently achieve high performance in Japanese tasks.
Comment: 9 pages, 1 figure, 5 tables, accepted for LREC-COLING 2024. Models are publicly available at https://huggingface.co/rinna
Document Type: Working Paper
Access URL: http://arxiv.org/abs/2404.01657
Accession Number: edsarx.2404.01657
Database: arXiv
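
Note: The record states that the released models are publicly available at https://huggingface.co/rinna. As a minimal sketch of how one of the released Japanese GPT checkpoints might be loaded with the Hugging Face transformers library, the example below assumes a repository name (rinna/japanese-gpt-neox-3.6b) and generation settings that are not specified in this record; consult the rinna hub page for the actual model list and recommended usage.

# Minimal sketch: loading an assumed rinna Japanese GPT checkpoint from the Hugging Face hub
# and generating a short Japanese continuation. The model ID is an illustrative assumption.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "rinna/japanese-gpt-neox-3.6b"  # assumed repository name
tokenizer = AutoTokenizer.from_pretrained(model_id, use_fast=False)
model = AutoModelForCausalLM.from_pretrained(model_id)

prompt = "日本で一番高い山は"  # "The highest mountain in Japan is"
inputs = tokenizer(prompt, return_tensors="pt")
outputs = model.generate(
    **inputs,
    max_new_tokens=32,
    do_sample=True,
    top_p=0.9,
    temperature=0.8,
)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))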