Abstract: The opaque nature of algorithmic decision-making often leads to public distrust of AI-driven decisions, highlighting the need for explainable AI (XAI) in public services. While research has focused on AI model development, public preferences for XAI remain underexplored. This study investigates public acceptance of XAI in a healthcare context using a discrete choice experiment with 178 participants. Based on four XAI attributes (global explanations, local explanations, presentation formats, and information quantity), 16 choice sets were created. Results from a mixed logit model show that XAI enhances public understanding, with a preference for local explanations that combine affirmative and counterfactual information. Participants also favored concise explanations that integrate visual and textual elements. These findings underscore the importance of integrating XAI into public services, with an emphasis on local explanations and clear, visually supported formats, to build trust and align with public preferences.
Keywords: Public services; explainable artificial intelligence; public preference; discrete choice experiment
DOI:10.1080/10447318.2025.2480845
Originally published in: INTERNATIONAL JOURNAL OF HUMAN-COMPUTER INTERACTION, MAR 2025
WOS link: https://webofscience.clarivate.cn/wos/woscc/full-record/WOS:001451738800001
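For readers unfamiliar with the estimation approach named in the abstract, the sketch below illustrates the core idea of a mixed logit model: the choice probability is a standard logit probability averaged over random draws of the coefficient vector, which lets preferences vary across respondents. This is a minimal Monte Carlo illustration, assuming normally distributed coefficients; the attribute values, means, and standard deviations are invented for illustration and do not come from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical setup: one choice set with 2 alternatives, each described by
# 4 attribute levels (loosely mirroring the paper's 4 XAI attributes).
# None of these numbers come from the study's data.
n_attrs, n_alts, n_draws = 4, 2, 2000
X = rng.normal(size=(n_alts, n_attrs))   # attribute levels of the alternatives

# Mixed logit: coefficients vary across respondents, beta ~ N(mu, diag(sigma^2)).
mu = np.array([0.5, 1.2, -0.3, 0.8])     # illustrative population means
sigma = np.array([0.4, 0.6, 0.2, 0.5])   # illustrative standard deviations

# Approximate the mixing integral by averaging standard logit probabilities
# over Monte Carlo draws of beta.
betas = mu + sigma * rng.normal(size=(n_draws, n_attrs))      # (n_draws, n_attrs)
utilities = betas @ X.T                                        # (n_draws, n_alts)
expu = np.exp(utilities - utilities.max(axis=1, keepdims=True))
probs = (expu / expu.sum(axis=1, keepdims=True)).mean(axis=0)  # averaged over draws

print("Simulated choice probabilities:", probs)
```

In an actual analysis such as the one reported in the abstract, mu and sigma would not be fixed in advance but estimated from the observed choices, typically by maximum simulated likelihood.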