
Would you rather be Sherlock or Rain Man in today’s world?
We are fascinated by people with a photographic memory, like when Dustin Hoffman portrayed the autistic savant Raymond in the 1988 film Rain Man, suddenly reciting the telephone number and address of a diner waitress because he had read the local phone book the day before.
We are confused by, and in awe of, people who can not only remember innumerable details but also shape seemingly random dots into a conclusion, like when Benedict Cumberbatch portrayed a modern-day Sherlock Holmes who immerses himself in his “mind palace.”
Simply recalling facts and information is so 20th century. We grew up in school systems that glorified memorization. After all, creating meritocratic evaluation systems required testing for the right answers. And to ensure a question has a right answer, it must be based on factual information or a rule-based system, such as grammar or math.
Asian education systems elevated test-taking to an art form, influencing public educators around the world. One could make the case that testing for facts is a “fair” way of testing for “intelligence.” But as we know, the real world does not run on “right answers,” and very few complex and important problems are “rule-based.”
Welcome to the wicked world – a world not of simplicity and certainty, but of complexity and probability.
So again, would you rather be Rain Man or Sherlock in today’s world?
In this clip from the 2012 “Sherlock” episode “The Hounds of Baskerville,” Sherlock Holmes is trying to figure out why people are having hallucinations about large dogs, piecing together seemingly random words like “liberty” and “In.” He is not merely recalling other words or ideas associated with those words; he is applying them to the context of his situation, analyzing and evaluating associations before constructing a plausible conclusion. This ability to move beyond simple recollection to deep analysis and contextual application is what sets Sherlock apart from someone like Rain Man.
AI Climbing Bloom’s Taxonomy
To understand this distinction more clearly, it’s useful to look at Bloom’s Taxonomy, a framework that categorizes cognitive functions from basic to advanced levels: remembering, understanding, applying, analyzing, evaluating, and creating. Bloom, a pioneering educator of the 20th century, believed that while the ability to remember basic knowledge is foundational, true cognitive prowess lies in higher-order thinking.
Where Rain Man excels at the lowest tier, remembering facts, Sherlock embodies the more sophisticated levels of Bloom’s Taxonomy. In the Baskerville example, Sherlock climbs the taxonomy not by recalling all sorts of random facts, but by understanding and applying them in the context of the problem he is trying to solve, analyzing that knowledge by comparing, contrasting, and organizing the data, and evaluating what is and is not relevant before coming to a conclusion.
This higher-order thinking is precisely what is increasingly valued in our complex, modern world—a world where the ability to critically analyze and apply knowledge is far more powerful than merely recalling it.
In my view, this scene is a visual representation of what AI could eventually be capable of; for now, humans still have the edge.
OpenAI’s Next Level – Reasoning
Strawberries are in season in Sam Altman’s garden, or at least that is the tongue-in-cheek impression the CEO of OpenAI may be trying to leave in the minds of his followers with his August 7 tweet on X. Project Strawberry, previously known as Project Q* within OpenAI, is designed to take large language models to the next level.
OpenAI has internally socialized a framework describing the five levels that must be attained to reach Artificial General Intelligence (AGI):
- Level 1 (Chatbots): AI with conversational language.
- Level 2 (Reasoners): Human-level problem solving.
- Level 3 (Agents): Systems that can take action.
- Level 4 (Innovators): AI that can aid in invention.
- Level 5 (Organizations): AI that can do the work of an entire organization.
According to OpenAI, we are today (August 2024) at Level 1. If you haven’t had a great conversation with an LLM chatbot, you have been missing out.
While ChatGPT and Claude are pretty good conversational partners, they are limited in their ability to solve problems on their own. You can get great insight that helps you solve a problem, but these chatbots require considerable help from us. Humans have developed a wide range of cognitive skills that let us do what Sherlock does: link and interpret, compare and contrast, categorize and organize, judge and critique, all in ways that are logical to us.
AI cannot do that. Yet.
OpenAI is hinting that it is close to releasing an AI tool that has reached Level 2. According to this video from the popular community known as TheAIGRID, “if we now have…human level problem solving, then this is a really big deal.” The narrator went on to say,
If you look at what Reuters said… “Open AI executives told employees that the company believes it is currently on the first level….but on the cusp of reaching the second, which it calls Reasoners. This refers to systems that can do basic problem solving tasks as well as a human with a doctorate level education who doesn’t have access to any tools.” So this is pretty incredible because not only is it human problem solving, but it is without any tool. This would mark a really incredible milestone in terms of the AGI levels because this would mean that we’re at level two.
Calling Bullshit
What happens when this next AI tsunami hits and we discover a whole new level of cognitive capability in our chatbots? The responses we get will seem even more convincing, more effective at explaining how they arrived at their conclusions.
If these replies keep feeling more convincing, we may keep letting our guard down and more frequently accept the responses of AI chatbots as truth.
This would be a mistake, and a concern.
The World Economic Forum’s Future of Jobs 2023 report stated that the top two skills that “remain the most important skills for workers in 2023” are analytical thinking and creative thinking. In other words, cognitive skills: our ability to reason. Even in 2016, when the WEF launched the Jobs Report, it was already citing complex problem-solving skills and cognitive abilities as the most required skill sets, both then and five years out.
Put differently, well before artificial intelligence entered public discourse, world leaders recognized a critical skills gap, one that became more obvious as people were overrun by fake news and disinformation, particularly during political elections, such as the US presidential campaign, and referendum votes on major issues, such as Brexit.
And now, in a world of growing complexity, information overload and intentional deception, it is more important than ever to apply critical thinking skills. It is so important that the University of Washington offers a course, INFO 270 / BIOL 270, aka “Calling Bullshit: Data Reasoning in a Digital World.” In this course excerpt, which you can watch here, professors Carl T. Bergstrom and Jevin West offer “Tips for spotting bullshit” (summarized below, with an illustrative sketch after the list):
- If a claim seems to be too good to be true….
- Beware of confirmation bias!
- Multiple working hypotheses
- Think about orders of magnitude
- Beware of unfair comparisons
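As an illustration only, here is a minimal Python sketch of how these tips might be turned into a personal checklist for reviewing AI-generated claims. The questions, field names, and example claim are my own assumptions for demonstration; they are not part of the course material or any official rubric.

```python
# A minimal sketch (not from the course) of turning the "spotting bullshit" tips
# into a personal review checklist for AI-generated claims.
# All field names and the example claim below are illustrative assumptions.
from __future__ import annotations
from dataclasses import dataclass


@dataclass
class ClaimReview:
    claim: str
    too_good_to_be_true: bool      # Does the claim flatter what you already hoped for?
    confirms_my_prior: bool        # Are you accepting it mainly because you agree with it?
    alternatives_considered: bool  # Did you weigh multiple working hypotheses?
    magnitude_checked: bool        # Do the numbers pass an order-of-magnitude sanity check?
    comparison_is_fair: bool       # Are the things being compared actually comparable?

    def red_flags(self) -> list[str]:
        """Return the checklist items that should make you pause before accepting the claim."""
        flags = []
        if self.too_good_to_be_true:
            flags.append("Claim seems too good to be true")
        if self.confirms_my_prior:
            flags.append("Possible confirmation bias")
        if not self.alternatives_considered:
            flags.append("No alternative hypotheses considered")
        if not self.magnitude_checked:
            flags.append("Orders of magnitude not sanity-checked")
        if not self.comparison_is_fair:
            flags.append("Comparison may be unfair")
        return flags


# Hypothetical example: a confident-sounding claim from a chatbot or vendor pitch.
review = ClaimReview(
    claim="This AI tool doubles productivity for every team",
    too_good_to_be_true=True,
    confirms_my_prior=True,
    alternatives_considered=False,
    magnitude_checked=False,
    comparison_is_fair=False,
)
print(review.red_flags())  # Five reasons to dig deeper before accepting the claim
```

The point is not the code itself but the habit it encodes: before accepting a confident-sounding answer, walk through the same handful of questions every time.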
And it’s not just about safeguarding yourself against deception. It is about leveraging doubt and uncertainty to fuel innovation, as BCG explains in its article, “To Drive Innovation with GenAI, Start By Questioning Your Assumptions.”
GenAI’s most obvious contribution is in idea generation and validation—the divergence and convergence phases of innovation. Yet it can play an even more important role in helping leaders confront and update the strategic assumptions at the foundation of their business and innovation strategies: the doubt phase of the cycle. Organizations that regularly question their assumptions are more resilient because they are more likely to see, and position themselves to benefit from, the shifts on which competitive advantage turns.
Get Smart
We are in the midst of mind-boggling change, and we cannot afford to give the AI revolution just a passing glance. There’s work to do. We’ve got to get into better shape and sharpen our cognitive skills. We need to deepen our understanding of AI, not only its direction and ethics, but also how to dialogue with it more effectively.
As the World Economic Forum has emphasized for nearly a decade, honing our critical thinking skills is vital to more effectively navigating the increasingly wicked waves of volatility, uncertainty, complexity and ambiguity coming our way. Don’t fall for the seductive songs of the Sirens. Don’t settle for any port in a storm.
Be curious. Be critical. Think.
ARTICLE FAQS
1. What is the difference between memory and reasoning, and why does this matter now?
Memory recalls facts. Reasoning applies knowledge in context, analyzes options, evaluates relevance, and forms conclusions. Bloom’s Taxonomy places reasoning in higher tiers, which align with modern problem solving.
2. What are OpenAI’s five AI levels, and where are we now?
Level 1: Chatbots. Level 2: Reasoners. Level 3: Agents. Level 4: Innovators. Level 5: Organizations. Current capability sits at Level 1, with public signals of progress toward Level 2.
3. Why does progress toward reasoning models increase risk for users?
More persuasive answers raise overtrust risk. Errors and hallucinations still occur, while explanations sound authoritative. Vigilance becomes a daily requirement.
4. Which human skills matter most as AI improves?
Analytical thinking, creative thinking, and complex problem solving. Global workforce reports have highlighted these priorities for years.
5. How should people evaluate AI outputs and bold claims?
Use a practical checklist. If a claim sounds too good to be true, pause. Watch for confirmation bias. Hold multiple working hypotheses. Check orders of magnitude. Avoid unfair comparisons.
6. What mindset shift should education and workplaces pursue?
Move beyond memorization toward applying, analyzing, evaluating, and creating. Design tasks where context matters and reasoning steps are visible. Reward sound process, not only final answers.
7. What near-term actions help teams prepare for reasoning AI?
Set verification norms for AI outputs. Require sources and transparent reasoning. Track uncertainty estimates. Run frequent red-team reviews. Keep humans accountable for final decisions. Invest in AI literacy and dialogue skills.

