Report
Rethinking Fairness for Human-AI Collaboration
Title: | Rethinking Fairness for Human-AI Collaboration |
---|---|
Authors: | Ge, Haosen, Bastani, Hamsa, Bastani, Osbert |
Publication Year: | 2023 |
Collection: | Computer Science, Statistics |
Subject Terms: | Computer Science - Machine Learning, Statistics - Machine Learning |
Description: | Existing approaches to algorithmic fairness aim to ensure equitable outcomes if human decision-makers comply perfectly with algorithmic decisions. However, perfect compliance with the algorithm is rarely a reality or even a desirable outcome in human-AI collaboration. Yet, recent studies have shown that selective compliance with fair algorithms can amplify discrimination relative to the prior human policy. As a consequence, ensuring equitable outcomes requires fundamentally different algorithmic design principles that ensure robustness to the decision-maker's (a priori unknown) compliance pattern. We define the notion of compliance-robustly fair algorithmic recommendations that are guaranteed to (weakly) improve fairness in decisions, regardless of the human's compliance pattern. We propose a simple optimization strategy to identify the best performance-improving compliance-robustly fair policy. However, we show that it may be infeasible to design algorithmic recommendations that are simultaneously fair in isolation, compliance-robustly fair, and more accurate than the human policy; thus, if our goal is to improve the equity and accuracy of human-AI collaboration, it may not be desirable to enforce traditional fairness constraints. |
Document Type: | Working Paper |
Access URL: | http://arxiv.org/abs/2310.03647 |
Accession Number: | edsarx.2310.03647 |
Database: | arXiv |
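The abstract's central condition — a recommendation policy that weakly improves fairness under *every* compliance pattern — can be illustrated with a small sketch. This is not the paper's algorithm; it is a toy check in which individuals, a demographic-parity fairness metric, and the `is_compliance_robustly_fair` helper are all illustrative assumptions: each person either follows the algorithm's recommendation or keeps the human decision, and we enumerate all such patterns.

```python
from itertools import product

def dp_gap(decisions, groups):
    """Demographic-parity gap: |P(d=1 | g=0) - P(d=1 | g=1)|.
    (One possible fairness metric; the paper's notion is more general.)"""
    rate = lambda g: sum(d for d, gg in zip(decisions, groups) if gg == g) / groups.count(g)
    return abs(rate(0) - rate(1))

def is_compliance_robustly_fair(human, algo, groups):
    """True iff every compliance pattern weakly improves the fairness gap.

    A compliance pattern c assigns each individual a bit: 1 means the
    human adopts the algorithmic recommendation, 0 means the human keeps
    their own decision. Robustness requires the realized policy's gap
    never to exceed the human policy's gap, for any pattern.
    """
    base = dp_gap(human, groups)
    for c in product([0, 1], repeat=len(human)):
        realized = [a if ci else h for h, a, ci in zip(human, algo, c)]
        if dp_gap(realized, groups) > base + 1e-12:
            return False
    return True

# Hypothetical toy data (not from the paper): two individuals per group.
groups = [0, 0, 1, 1]
human  = [1, 1, 1, 0]          # human policy has a DP gap of 0.5

print(is_compliance_robustly_fair(human, [1, 1, 1, 1], groups))  # robust
print(is_compliance_robustly_fair(human, [1, 1, 0, 0], groups))  # not robust
```

The second recommendation fails because a partial-compliance pattern (adopting only the third recommendation) widens the gap beyond the human baseline — the amplification effect the abstract describes. The brute-force enumeration is exponential in the number of individuals, so this sketch is only viable for tiny instances; the paper instead proposes an optimization strategy.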