Artificial Intelligence vs Human Decision Making

UK Srivastava!
4 min read · Jul 16, 2023


Today, aside from rules engines, AI isn't making many strategic decisions. But what is the difference between humans and machines when it comes to reasoning and decision making? How long will it be before machines have the same capabilities, and is it even possible to replicate them?

According to Hans Moravec, the namesake of Moravec's Paradox, robots will match or surpass human intelligence by 2040, and eventually, as the dominant species, they will merely preserve us as a living museum to honor the species that brought them into existence.

Sounds like Hans wasn’t very fun…

The more optimistic point of view is that human intelligence, paired with the little we know about consciousness, emotion, and the three pounds of mush between our ears, is quite unique.

So while we humans are still calling the shots, we're digging into a few topics around how human decision making differs from that of machines.

If biases are ‘bad’, why do we have them?

In this newsletter, we’ve been exploring cognitive biases and techniques to limit their impact on the decision making process — particularly in groups.

Biases are hardwired, and as we mentioned in an earlier post, counterarguments suggest the methods used to test their 'negative', irrational effects do not account for many meaningful, real-world factors.

We make strategic decisions in environments of extreme uncertainty and fierce competition, with countless confounding variables outside of our control, both known and unknown.

This starts to surface plenty of interesting questions…

  • Why are emotion, trust, competition, and perception meaningful factors in making decisions?
  • Why do we hold irrational convictions and have trouble thinking probabilistically? (A short worked example follows this list.)
  • Why are we optimized to model our environment from very little information?
  • Why does ‘investigative’, abductive reasoning come so naturally to us?
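
On the probabilistic-thinking point, a classic illustration is base-rate neglect. The numbers below are purely hypothetical, and the snippet is just a quick sketch of Bayes' rule, not anything from a specific study:

```python
# Hypothetical base-rate example: a 90%-accurate test for a condition
# that affects only 1% of people. What does a positive result mean?
prevalence = 0.01           # P(condition)
sensitivity = 0.90          # P(positive | condition)
false_positive_rate = 0.10  # P(positive | no condition)

# Bayes' rule: P(condition | positive)
p_positive = sensitivity * prevalence + false_positive_rate * (1 - prevalence)
posterior = sensitivity * prevalence / p_positive

print(f"P(condition | positive test) = {posterior:.1%}")  # ~8.3%, not 90%
```

Most of us instinctively answer something close to 90%, and that gap between the gut answer and the arithmetic is exactly the kind of effect the heuristics-and-biases research documented.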

Gary Klein, Gerd Gigerenzer, Phil Rosenzweig, and others argue that these very human traits hold the secret to how we make complex, highly consequential decisions in high-speed, low-information situations.

To be clear, there's plenty the two camps agree on. In a 2010 interview, Kahneman and Klein debated the two points of view:

  • Both agree that explicit decision making processes are important, particularly when evaluating information.
  • Both believe intuition can and should be used, though Kahneman stresses it should be delayed as long as possible.
  • Both agree that domain expertise matters, but Kahneman argues biases are particularly strong in experts and must be corrected.

So why do our brains rely so heavily on biases and heuristics?

Our brains optimize for energy consumption. They use roughly 20% of the energy our bodies burn in a day (and to think Aristotle believed the brain's primary function was simply to act as a radiator, keeping the heart from overheating).

Beyond that, energy consumption within the brain is a black box, but research suggests that functions requiring more processing, such as complex problem-solving, decision making, and working memory, tend to use more energy than routine or automatic functions like breathing and digestion.

For this reason, the brain tends to avoid making effortful decisions whenever it can.

It does this by creating structures for what Daniel Kahneman calls 'System 1' thinking. These structures use cognitive shortcuts (heuristics) to make energy-efficient decisions that feel conscious but rest on a foundation of subconscious functions. When a decision needs more cognitive power and gets elevated to deliberate reasoning, Kahneman calls that 'System 2' thinking.
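
As a loose analogy only (this is our toy sketch, not Kahneman's model in code, and every name in it is made up), you can picture System 1 as a cheap cached lookup and System 2 as an expensive deliberate computation that only runs when the shortcut comes up empty:

```python
# Toy analogy: a cheap, cached "System 1" response backed by an expensive
# "System 2" computation. All names and situations here are invented.
import time

learned_responses = {
    "snake-shaped object on the trail": "jump back",
    "brake lights ahead": "slow down",
}

def system1(situation):
    """Fast, pattern-matched answer; costs almost nothing."""
    return learned_responses.get(situation)

def system2(situation):
    """Slow, deliberate reasoning; the delay stands in for effortful analysis."""
    time.sleep(0.5)
    return f"carefully reasoned plan for '{situation}'"

def decide(situation):
    # Reach for the shortcut first; escalate only when it comes up empty.
    return system1(situation) or system2(situation)

print(decide("brake lights ahead"))            # instant heuristic answer
print(decide("unfamiliar acquisition offer"))  # slower, deliberate answer
```

The point of the analogy is the cost asymmetry: the shortcut is nearly free, so the brain reaches for it first.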

Since Kahneman's book Thinking, Fast and Slow is an incredibly popular New York Times best-seller, this may be a review, but it's what we're typically taught: biases and heuristics impair decision making, and intuition is an often-flawed basis for human judgment.

There's a counterargument to the biases-and-heuristics model proposed by Kahneman and Amos Tversky: their studies were done in controlled, lab-like environments with decisions that have relatively certain outcomes, as opposed to the often complex, consequential decisions we make in life and work.

These arguments broadly fall under ecological rationality and Naturalistic Decision Making (NDM). In short, they make a similar point: humans, armed with these heuristics, often rely on recognition-primed decision making. Recognizing patterns from our experience helps us decide quickly and effectively in high-stakes, highly uncertain situations.
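
If you squint at recognition-primed decision making as code, it might look something like the sketch below. This is our simplification, not Klein's formal model, and the experience base and simulate() check are invented placeholders:

```python
# A simplified recognition-primed decision (RPD) loop, inspired by Gary
# Klein's model. The experience base and simulate() check are placeholders.
experience = {
    "smoke from basement": ["ventilate and search", "evacuate and contain"],
    "sales dropping in one region": ["visit key accounts", "adjust pricing"],
}

def simulate(action, situation):
    """Mental-simulation stand-in: would this action plausibly work here?"""
    return "evacuate" not in action  # arbitrary rule, for illustration only

def decide(situation):
    # 1. Recognize: match the situation against prior experience.
    candidates = experience.get(situation, ["gather more information"])
    # 2. Consider candidate actions one at a time, not all at once.
    for action in candidates:
        # 3. Mentally simulate; take the first option that seems workable.
        if simulate(action, situation):
            return action
    return "fall back to deliberate analysis"

print(decide("smoke from basement"))
```

The contrast with the classical "list every option and compare" approach is that candidate actions are recalled from experience and evaluated serially, which is what makes the loop fast under pressure.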

Humans are quite good at extrapolating very little information into models for decision making grounded in our experience. Regardless of whether the individual judgments we make are objectively rational, we have this ability to strategize.

As DeepMind co-founder Demis Hassabis put it in an interview with Lex Fridman, as these systems become more intelligent, it becomes easier to see what makes human cognition different.

There seems to be something deeply human about our desire to understand ‘why’, perceive meaning, act with conviction, inspire, and maybe most importantly — cooperate in groups.

“Human intelligence is largely externalized, contained not in your brain but in your civilization. Think of individuals as tools, whose brains are modules in a cognitive system much larger than themselves — a system that is self-improving and has been for a long time.” — Erik J. Larson, The Myth of Artificial Intelligence: Why Computers Can’t Think the Way We Do

Though the last 50 years have produced incredible leaps in understanding how we make decisions, it may be artificial intelligence, through its limitations, that uncovers even more about the power of human cognition.

Or humanity becomes the Tamagotchis of our robot overlords…
