SHAS: Approaching optimal Segmentation for End-to-End Speech Translation

Bibliographic Details
Title: SHAS: Approaching optimal Segmentation for End-to-End Speech Translation
Authors: Tsiamas, Ioannis, Gállego, Gerard I., Fonollosa, José A. R., Costa-jussà, Marta R.
Publication Year: 2022
Collection: Computer Science
Subject Terms: Computer Science - Sound, Computer Science - Computation and Language, Electrical Engineering and Systems Science - Audio and Speech Processing
Description: Speech translation models are unable to directly process long audio recordings, such as TED talks, which must be split into shorter segments. Speech translation datasets provide manual segmentations of the audio, which are not available in real-world scenarios, and existing segmentation methods usually significantly reduce translation quality at inference time. To bridge the gap between the manual segmentation used in training and the automatic one at inference, we propose Supervised Hybrid Audio Segmentation (SHAS), a method that can effectively learn the optimal segmentation from any manually segmented speech corpus. First, we train a classifier to identify the frames included in a segmentation, using speech representations from a pre-trained wav2vec 2.0. The optimal splitting points are then found by a probabilistic Divide-and-Conquer algorithm that progressively splits at the frame of lowest probability until all segments are below a pre-specified length. Experiments on MuST-C and mTEDx show that the translation of the segments produced by our method approaches the quality of the manual segmentation on 5 language pairs. Namely, SHAS retains 95-98% of the manual segmentation's BLEU score, compared to the 87-93% of the best existing methods. Our method is additionally generalizable to different domains and achieves high zero-shot performance in unseen languages.
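The probabilistic Divide-and-Conquer step described in the abstract can be sketched as follows. This is a minimal illustration, not the SHAS implementation: it assumes the classifier has already produced a per-frame probability of each frame belonging to a segment, and the function and parameter names (`split_segment`, `probs`, `max_len`, `min_len`) are hypothetical.

```python
def split_segment(probs, start, end, max_len, min_len=1):
    """Recursively split the span [start, end) at the frame with the lowest
    segment-inclusion probability until every span is at most max_len frames.

    probs   -- per-frame probabilities from the classifier (hypothetical input)
    min_len -- keep at least this many frames on each side of a cut
    Returns a list of contiguous (start, end) frame spans.
    """
    if end - start <= max_len:
        return [(start, end)]
    # Restrict candidate cut points so both halves keep at least min_len frames.
    lo, hi = start + min_len, end - min_len
    # Cut at the frame the classifier deems least likely to belong to a segment.
    cut = min(range(lo, hi), key=lambda i: probs[i])
    return (split_segment(probs, start, cut, max_len, min_len)
            + split_segment(probs, cut, end, max_len, min_len))
```

For example, with frame probabilities `[0.9, 0.9, 0.1, 0.9, 0.9, 0.9, 0.2, 0.9, 0.9]` and `max_len=5`, the audio is first cut at the lowest-probability frame (index 2), and the remaining over-length right half is cut again at index 6, yielding three segments that all respect the length limit.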
Comment: Accepted to Interspeech 2022. For an additional 2-page Appendix refer to v1
Document Type: Working Paper
Access URL: http://arxiv.org/abs/2202.04774
Accession Number: edsarx.2202.04774
Database: arXiv