The Two Minds: How Humans Think Smarter Than AI

I follow Dr. Yann LeCun on 𝕏 (formerly Twitter), where he engages the public on the complex science and ethics of AI. His involvement gives me hope that some of the best minds are working toward beneficial AI. Recently, one of his threads prompted me to write this post.

Dr. LeCun has long been clear about the limitations of Large Language Models (LLMs). Sadly, a good chunk of the social media crowd freaks out about how close we are to Artificial General Intelligence (AGI), that is, human-level intelligence. They reach this conclusion from their interactions with LLMs, which are very effective role-playing and token-prediction engines trained on the written text of modern humans.

Dr. LeCun argues that even the mightiest AI still lacks the reasoning and planning abilities of humans and animals. Where does this gap come from? Dr. LeCun points to the distinction between fast, instinctive thought and slow, deliberate analysis.

In "Thinking, Fast and Slow," Daniel Kahneman described System 1, the mind's instinctive, reactive mode, and System 2, its slower, deliberate mode that enables complex planning.

Today's AI runs on reactive System 1 thinking, like a baseball player effortlessly swinging at a pitch. Per Dr. LeCun, "LLMs produce answers with fixed computation–no way to devote more effort to hard problems." While GPT-3 responds fluidly, it cannot iterate toward better solutions using models of causality, the essence of System 2.
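To make the fixed-computation point concrete, here is a toy sketch (not a real LLM, and not from Dr. LeCun; all function names are hypothetical). The System 1 function does the same amount of work no matter the question, while the System 2 loop can spend more compute on a harder problem until a check passes:

```python
# Illustrative sketch only: contrasts fixed-compute, single-pass "System 1"
# answering with an iterative "System 2" loop whose effort scales with
# problem difficulty. Names and tasks are hypothetical, chosen for clarity.

def system1_answer(question: str) -> str:
    """One fixed pass: the same computation regardless of difficulty."""
    # A real LLM spends a fixed forward pass per token; hard questions
    # get no extra deliberation.
    return question[::-1]  # stand-in for a single reactive computation

def system2_answer(candidates, is_good, max_steps):
    """Iterate: test candidates until one checks out or the budget runs out."""
    for _, cand in zip(range(max_steps), candidates):
        if is_good(cand):
            return cand  # more steps were available for harder problems
    return None  # budget exhausted; a deliberative system could ask for more

# Toy task: find a number whose square ends in 76.
answer = system2_answer(iter(range(1000)), lambda n: n * n % 100 == 76, 500)
print(answer)  # → 24 (24 * 24 = 576)
```

The point of the contrast is the budget parameter: System 2-style computation is "anytime," so devoting more steps can yield better answers, whereas the single-pass function has no such knob.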

Systems like AlphaZero, which mastered chess and Go, better showcase System 2 reasoning by uncovering counterintuitive long-term plans after learning a game's intricacies. Yet modeling cause and effect in the real world still challenges AI, according to Dr. LeCun.
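The deliberation behind such systems can be sketched in miniature. AlphaZero combines a learned evaluation with Monte Carlo tree search; the toy below uses plain exhaustive minimax on the game of Nim (take 1 to 3 stones; whoever takes the last stone wins), but the idea is the same: simulate future action sequences before committing to a move. This is my illustration, not AlphaZero's actual algorithm.

```python
from functools import lru_cache

@lru_cache(maxsize=None)
def best_move(stones: int):
    """Return (move, can_win) for the player to act with `stones` left."""
    for take in (1, 2, 3):
        if take == stones:
            return take, True           # taking the last stone wins outright
        if take < stones:
            _, opponent_wins = best_move(stones - take)
            if not opponent_wins:       # leave the opponent a losing position
                return take, True
    return 1, False                     # every line loses; play on anyway

move, winning = best_move(10)
print(move, winning)  # → 2 True: leaving a multiple of 4 is the winning plan
```

Even this tiny searcher exhibits the "counterintuitive long-term plan" flavor: the winning move is chosen not for its immediate effect but because every opponent reply from the resulting position has been simulated and found losing.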

Dr. LeCun argues that planning AI needs "world models" to forecast the outcomes of different action sequences, extrapolating a decision's cascading impacts over time and space the way humans intuitively can. Constructing sufficiently advanced simulations remains an open problem, however, and "hierarchical planning," which decomposes a goal into layered sub-objectives, still eludes AI even though it comes easily to humans and animals.
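Planning with a world model can be sketched as follows: roll candidate action sequences through a predictive model of the environment and pick the sequence whose forecasted outcome scores best. In this hypothetical toy the "world model" is a hand-coded 1-D dynamics function; in Dr. LeCun's proposal it would be learned from observation.

```python
from itertools import product

def world_model(position: float, action: float) -> float:
    """Predict the next state. A real system would learn this from data."""
    return position + action  # toy 1-D dynamics, stand-in for a learned model

def plan(start: float, goal: float, horizon: int = 3):
    """Score every short action sequence in imagination; return the best."""
    actions = (-1.0, 0.0, 1.0)

    def final_state(seq):
        pos = start
        for a in seq:                     # mental simulation, no real actions
            pos = world_model(pos, a)
        return pos

    return min(product(actions, repeat=horizon),
               key=lambda seq: abs(final_state(seq) - goal))

print(plan(0.0, 2.0))  # a 3-action sequence whose moves sum to 2
```

The key design point is that no action is taken in the real world during planning: the agent compares imagined futures, which is exactly what a forward-predicting world model buys you.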

Meanwhile, raw reasoning alone cannot replicate human intelligence. Equally crucial is the common sense gained from experiencing the natural world's messy complexity. This likely explains AI's glaring oversights about ordinary events compared to humans. Combining reasoning prowess with curated knowledge of how the world works offers exciting possibilities: AI with balanced reactive and deliberative capacities akin to our minds.

The great paradox of AI is that models can far exceed humans at specific tasks thanks to raw computing power, yet still lack general thinking skills. Closing this reasoning gap is essential for robust, trustworthy AI. Dr. LeCun's insights point the way toward integrating planning, causality, common sense, and compositionality into AI. Doing so inches us closer to artificial general intelligence that lives up to its name.

Want to Follow Dr. LeCun and Other Top Scientists?

Like Dr. LeCun, CPROMPT.AI tracks 130+ top AI scientists, monitoring their social media profiles and keeping their information up to date in a single WHO'S WHO directory of AI.