Preference for humans, not algorithm aversion

People sometimes exhibit a costly preference for humans over algorithms, a pattern often attributed to a domain-general algorithm aversion. I propose it is instead driven by biased evaluations of the self and other humans, and that it occurs more narrowly: in domains where identity is threatened and when evaluative criteria are ambiguous.

Dr. David Lowemann, M.Sc., Ph.D., is a co-founder of the Institute for the Future of Human Potential.
Imagine you have two options: a human or an algorithm. Even though algorithms are often more efficient and accurate, some people still prefer the human. Research suggests this preference may stem not from a general dislike of algorithms, but from biased evaluations of ourselves and other humans. We appear to weigh our sense of self and identity more heavily in situations where our identity feels threatened or where the criteria for evaluation are ambiguous. Why does this matter? Understanding these factors can help us design technologies and systems that better accommodate people's emotional needs and desire for human connection. If you're curious to learn more about this research, check out the full article!