Evaluation Toolkit For Robustness Testing Of Automatic Essay Scoring Systems

Bibliographic Details
Title: Evaluation Toolkit For Robustness Testing Of Automatic Essay Scoring Systems
Authors: Anubha Kabra, Mehar Bhatia, Yaman Kumar Singla, Junyi Jessy Li, Rajiv Ratn Shah
Publication Year: 2020
Subject Terms: FOS: Computer and information sciences, Artificial Intelligence (cs.AI), Computer Science - Computation and Language, Computer Science - Artificial Intelligence, Computation and Language (cs.CL)
Description: Automatic scoring engines have been used for scoring approximately fifteen million test-takers in just the last three years. This number is increasing further due to COVID-19 and the associated automation of education and testing. Despite such wide usage, the AI-based testing literature on these "intelligent" models is highly lacking. Most of the papers proposing new models rely only on quadratic weighted kappa (QWK) based agreement with human raters to show model efficacy. However, this effectively ignores the highly multi-feature nature of essay scoring. Essay scoring depends on features like coherence, grammar, relevance, sufficiency, and vocabulary. To date, there has been no study testing Automated Essay Scoring (AES) systems holistically on all these features. With this motivation, we propose a model-agnostic adversarial evaluation scheme and associated metrics for AES systems to test their natural language understanding capabilities and overall robustness. We evaluate the current state-of-the-art AES models using the proposed scheme and report the results on five recent models. These models range from feature-engineering-based approaches to the latest deep learning algorithms. We find that AES models are highly overstable. Even heavy modifications (as much as 25%) with content unrelated to the topic of the questions do not decrease the score produced by the models. On the other hand, irrelevant content, on average, increases the scores, thus showing that the model evaluation strategy and rubrics should be reconsidered. We also ask 200 human raters to score both an original and an adversarial response to see whether humans can detect differences between the two and whether they agree with the scores assigned by the automatic scorers.
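The abstract references two concrete procedures: QWK-based agreement with human raters and an overstability probe in which off-topic content replaces part of a response. The Python sketch below is a rough illustration of both, not the authors' released toolkit; the scorer dummy_aes_model and the helper inject_off_topic are hypothetical placeholders standing in for a real AES model and perturbation strategy.

```python
# Minimal sketch of (1) QWK agreement and (2) an overstability probe,
# under the assumption of a rubric with integer score levels.
from sklearn.metrics import cohen_kappa_score


def quadratic_weighted_kappa(human_scores, model_scores):
    """QWK agreement between human and model scores on the same responses."""
    return cohen_kappa_score(human_scores, model_scores, weights="quadratic")


def dummy_aes_model(essay: str) -> int:
    """Hypothetical stand-in for an AES model: maps word count to a 0-5 score."""
    return min(len(essay.split()) // 50, 5)


def inject_off_topic(essay: str, off_topic_sentences, fraction: float = 0.25) -> str:
    """Replace roughly `fraction` of the essay with content unrelated to the prompt."""
    words = essay.split()
    kept = words[: int(len(words) * (1 - fraction))]
    return " ".join(kept) + " " + " ".join(off_topic_sentences)


if __name__ == "__main__":
    # 1) Agreement with human raters, the metric most AES papers report.
    human = [3, 4, 2, 5, 3, 4]
    model = [3, 4, 3, 5, 2, 4]
    print("QWK:", quadratic_weighted_kappa(human, model))

    # 2) Overstability probe: does a ~25% off-topic modification lower the score?
    essay = "word " * 300  # placeholder response of about 300 words
    perturbed = inject_off_topic(essay, ["The moon orbits the Earth."] * 20)
    print("original score: ", dummy_aes_model(essay))
    print("perturbed score:", dummy_aes_model(perturbed))
```

In an actual robustness test, a trained AES model would replace the placeholder scorer, and the score difference between original and perturbed responses would be aggregated over a full test set.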
Language: English
Access URL: https://explore.openaire.eu/search/publication?articleId=doi_dedup___::b92611ec674f7701161c1ef5e8061574
http://arxiv.org/abs/2007.06796
Rights: OPEN
Accession Number: edsair.doi.dedup.....b92611ec674f7701161c1ef5e8061574
Database: OpenAIRE