Technology Ethicist. Professor at ELISAVA Barcelona School of Design and Engineering. Author of ‘The Goods of Design’ (Rowman & Littlefield, 2021), a CHOICE 2022 Outstanding Academic Title.
The upshot of this story: Air Canada lost a court case after its chatbot gave a customer false information about refunds, and the airline must now honor the refund policy its chatbot invented. This is the story the peddlers of AI chatbots don’t want their prospective clients to see, but the news is everywhere, and I’m sure we will see more cases like this.

Now imagine the same scenario with a chatbot operated by a government agency, giving incorrect advice to someone trying to find out whether they qualify for particular social benefits. Imagine a person being told they don’t qualify when, in fact, they do.

Using chatbots to provide information on social welfare programs and related services is often presented as a great improvement: allegedly, chatbots simplify the daunting process of wading through mountains of information. Be that as it may, this poses a serious risk to everyone, but especially to vulnerable populations, who need accurate information the most. Unlike the refund case, or commercial information in general, people seeking information about social benefits may not realize they have received incorrect advice, precisely because they don’t know what they are entitled to, and so they may miss out on benefits they deserve.

https://lnkd.in/dAcaj_iK