Showing AI users diversity in training data boosts perceived fairness and trust

Published on October 22, 2024

While artificial intelligence (AI) systems, such as home assistants, search engines, or large language models like ChatGPT, may seem nearly omniscient, their outputs are only as good as the data on which they are trained. Yet ease of use often leads people to adopt AI systems without knowing what training data were used or who prepared them, including potential biases in the data or held by the trainers. A new study suggests that making this information available could help users form appropriate expectations of AI systems and make more informed decisions about whether and how to use them.
