Every day, we encounter situations where precise information is out of reach: calculating how many pizzas to order for a party, or guessing how long a home repair will take. These moments call for something more nuanced than cold calculation. They demand guesstimation, a skill that blends intuition with strategic thinking.

Researchers have discovered that guesstimation isn’t random guesswork, but a sophisticated mental process where our brains rapidly organize incomplete information. By studying how people approach these estimation challenges, scientists are uncovering fascinating insights into problem-solving strategies. The most intriguing finding? When people pause to deliberately consider their initial gut reaction, their answers become significantly more accurate.

This research goes beyond simple number-crunching. It reveals how human cognition adapts to uncertainty, creating mental frameworks that help us navigate complex scenarios. By understanding these cognitive strategies, we might develop smarter AI tools that mimic our remarkable ability to make intelligent approximations. Imagine artificial intelligence that doesn’t rely solely on massive datasets, but can also reason creatively—much like we do when facing an unknown challenge.

Abstract
In many real-world settings, people have to make judgments based on incomplete information. Estimating unknown quantities without precise quantitative modeling and data is called guesstimation, which is often needed in forecasting settings. Furthermore, research in education has found that solving guesstimation problems builds general problem-solving skills. In this paper, we present an empirical investigation of how people solve guesstimation problems. We study their problem-solving behavior with think-aloud methods, and we identify solution strategies that are frequently used. In a two-response paradigm, we first ask for gut-feeling answers to guesstimation questions and then allow deliberation before a second answer is given. Comparing the quality of these two answers reveals that deliberation significantly improves answer quality. In a second experiment, we additionally elicit participants’ confidence in their deliberated answers by asking for an entire distribution instead of just a point estimate. We find that participants are generally overconfident in their answers. We discuss guesstimation tasks as suitable test-beds for studying human deliberative judgment in general and, more specifically, for improving forecasting through appropriate artificial-intelligence tools.
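The overconfidence finding can be illustrated with a simple coverage check: if participants give intervals meant to contain the true answer with some nominal probability, overconfidence shows up as the true values falling inside those intervals less often than the nominal rate. The sketch below is a hypothetical illustration, not the authors' analysis; the 90% nominal level, the function name, and the toy data are all assumptions.

```python
# Hypothetical sketch: checking calibration of elicited intervals.
# Assumption: participants give 90% credible intervals per question;
# the paper's actual elicitation and scoring may differ.

def coverage_rate(intervals, true_values):
    """Fraction of true values falling inside the elicited intervals."""
    hits = sum(lo <= t <= hi for (lo, hi), t in zip(intervals, true_values))
    return hits / len(true_values)

# Toy data: one (low, high) interval per guesstimation question.
intervals = [(100, 500), (2, 10), (1e6, 5e6), (30, 60)]
true_values = [800, 4, 2e6, 90]  # two true values fall outside

rate = coverage_rate(intervals, true_values)
print(f"coverage: {rate:.0%}")  # 50%, well below the nominal 90%
```

A coverage rate far below the nominal level, as in this toy example, is the signature of overconfidence: the elicited distributions are too narrow relative to the participants' actual accuracy.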
