Project Topic
Project Number
Department
Student Names
Email
Advisor Names
Can we explain the contingent effect of XAI through dual-process theory in decision making?
Abstract (Hebrew)
Abstract (English)
In recent years, Artificial Intelligence (AI) has been widely adopted across diverse industries such as medicine and finance. However, this growth has brought a range of concerns to the forefront, including issues related to bias, privacy infringements, and the reliability of AI technologies. Of particular concern is the lack of transparency of AI systems' outputs, which are often delivered without adequate explanations of the underlying decision-making process; researchers have termed such opaque systems "black boxes." To address this issue, regulatory bodies have taken steps to enforce greater transparency and accountability in AI decision-making. Consequently, the field of Explainable Artificial Intelligence (XAI) has emerged, seeking to develop AI systems that can generate comprehensive explanations for their outputs while maintaining optimal performance. The fundamental objectives of XAI are to evaluate, improve, justify, and learn from AI systems by constructing detailed explanations of their functioning and outputs.

XAI research has been criticized for not prioritizing user needs, relying instead on researchers' intuitions about what makes a "good" explanation without incorporating relevant theories from the social sciences. This study addresses this criticism by demonstrating that the definition of a "good" explanation, namely one that can enhance user acceptance of an AI system's recommendations, is context-dependent and influenced by user preferences. To this end, we employ the Elaboration Likelihood Model (ELM) to examine the impact of user motivation and explanation features on the acceptance of AI recommendations.

Empirically, we conducted an online experiment with 600 participants who were asked to complete a stock price estimation task on an online platform we developed for this study. Consistent with the ELM literature, we manipulated three independent variables: user motivation ($10 vs. $1 prize), explanation quality (high vs. low), and explanation length (long vs. short). The dependent variables captured different aspects of AI acceptance. The findings of our first experiment showed a significant effect of explanation length on AI acceptance under low motivation, consistent with our ELM-based hypotheses: these users were more affected by long explanations than by short ones, irrespective of their quality, which is consistent with peripheral processing. Moreover, users with both low and high motivation responded consistently positively to high-quality explanations. Our results suggest that the quality of the explanations an AI system provides plays a crucial role in shaping users' assessments of the algorithm's recommendations, and they provide initial support for our view that one explanation does not fit all users in all situations.