Utility of General and Specific Word Embeddings for Classifying Translational Stages of Research

Bibliographic Details
Title: Utility of General and Specific Word Embeddings for Classifying Translational Stages of Research
Authors: Major, Vincent; Surkis, Alisa; Aphinyanaphongs, Yindalon
Publication Year: 2017
Collection: Computer Science; Statistics
Subject Terms: Computer Science - Computation and Language, Statistics - Machine Learning
Description: Conventional text classification models make a bag-of-words assumption, reducing text to word occurrence counts per document. Recent algorithms such as word2vec can learn semantic meaning and similarity between words in an entirely unsupervised manner using a contextual window, and do so much faster than previous methods. Each word is projected into a vector space such that words with similar meanings, such as "strong" and "powerful", are projected into the same general region of Euclidean space. Open questions about these embeddings include their utility across classification tasks and the optimal properties and source of documents used to construct broadly functional embeddings. In this work, we demonstrate the usefulness of pre-trained embeddings for classification in our task and show that custom word embeddings, built in the domain and for the task, can improve performance over word embeddings learned on more general data such as news articles or Wikipedia.
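
As a minimal illustration of the embedding-similarity idea described in the abstract (not the authors' code), the sketch below uses gensim's pre-trained vectors; the specific pre-trained model name and the document-averaging scheme are assumptions chosen for demonstration only.

    # Minimal sketch (assumed setup, not the paper's implementation):
    # pre-trained word vectors place similar-meaning words close together,
    # and averaged word vectors can serve as document features for a classifier.
    import numpy as np
    import gensim.downloader as api

    # Pre-trained vectors learned on general text (Wikipedia + Gigaword);
    # "word2vec-google-news-300" is a larger, news-based alternative.
    vectors = api.load("glove-wiki-gigaword-100")

    # Words with similar meanings have high cosine similarity.
    print(vectors.similarity("strong", "powerful"))
    print(vectors.most_similar("strong", topn=3))

    def doc_embedding(tokens, kv):
        """Average the vectors of in-vocabulary tokens to obtain one
        fixed-length representation of a document."""
        vecs = [kv[t] for t in tokens if t in kv]
        return np.mean(vecs, axis=0) if vecs else np.zeros(kv.vector_size)

    # Example: a short abstract-like text reduced to a 100-dimensional feature vector.
    features = doc_embedding("randomized trial of a new drug in patients".split(), vectors)
    print(features.shape)  # (100,)
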
Comment: 10 pages. Accepted to AMIA 2018 Annual Symposium, San Francisco, November 3-7, 2018
Document Type: Working Paper
Access URL: http://arxiv.org/abs/1705.06262
Accession Number: edsarx.1705.06262
Database: arXiv