Crowdsourcing Image Annotation for Nucleus Detection and Segmentation in Computational Pathology: Evaluating Experts, Automated Methods, and the Crowd

Bibliographic Details
Title: Crowdsourcing Image Annotation for Nucleus Detection and Segmentation in Computational Pathology: Evaluating Experts, Automated Methods, and the Crowd
Authors: Irshad, Humayun; Montaser-Kouhsari, Laleh; Waltz, Gail; Bucur, Octavian; Jones, Nicholas C.; Dong, Fei; Knoblauch, Nicholas W.; Beck, Andrew H.
Source: Harvard University OpenScholar, Working Paper.
Description: The development of tools in computational pathology to assist physicians and biomedical scientists in the diagnosis of disease requires access to high-quality annotated images for algorithm learning and evaluation. Generating high-quality expert-derived annotations is time-consuming and expensive. We explore the use of crowdsourcing for rapidly obtaining annotations for two core tasks in computational pathology: nucleus detection and nucleus segmentation. We designed and implemented crowdsourcing experiments using the CrowdFlower platform, which provides access to a large set of labor channel partners that accesses and manages millions of contributors worldwide. We obtained annotations from four types of annotators and compared concordance across these groups. We obtained: crowdsourced annotations for nucleus detection and segmentation on a total of 810 images; annotations using automated methods on 810 images; annotations from research fellows for detection and segmentation on 477 and 455 images, respectively; and expert pathologist-derived annotations for detection and segmentation on 80 and 63 images, respectively. For the crowdsourced annotations, we evaluated performance across a range of contributor skill levels (1, 2, or 3). The crowdsourced annotations (4,860 images in total) were completed in only a fraction of the time and cost required for obtaining annotations using traditional methods. For the nucleus detection task, the research fellow-derived annotations showed the strongest concordance with the expert pathologist-derived annotations (F-M = 93.68%), followed by the crowdsourced contributor levels 1, 2, and 3 and the automated method, which showed relatively similar performance (F-M = 87.84%, 88.49%, 87.26%, and 86.99%, respectively).
For the nucleus segmentation task, the crowdsourced contributor level 3-derived annotations, research fellow-derived annotations, and automated method showed the strongest concordance with the expert pathologist-derived annotations.
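The F-measure (F-M) reported in the abstract is the harmonic mean of precision and recall over detections matched against the expert reference. A minimal sketch of how such a concordance score is computed (the counts below are illustrative and not taken from the study):

```python
# Hedged sketch: computing an F-measure for nucleus detection concordance.
# tp = detections matched to an expert-annotated nucleus,
# fp = spurious detections, fn = expert-annotated nuclei that were missed.

def f_measure(tp: int, fp: int, fn: int) -> float:
    """Harmonic mean of precision and recall for detection matching."""
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    return 2 * precision * recall / (precision + recall)

# Illustrative example: 90 matched nuclei, 10 spurious, 5 missed.
score = f_measure(tp=90, fp=10, fn=5)
print(f"F-M = {score:.2%}")  # prints "F-M = 92.31%"
```

Equivalently, F-M = 2·TP / (2·TP + FP + FN), so the example reduces to 180/195 ≈ 92.31%.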
Original Identifier: 221816
Document Type: redif-paper
Language: English
Availability: https://ideas.repec.org/p/qsh/wpaper/221816.html
Accession Number: edsrep.p.qsh.wpaper.221816
Database: RePEc