We've all seen the recent feats of AI, like ChatGPT and DALL-E 2, churning out essays, computer code, artworks, and more with a simple text prompt. The outputs seem intelligent, creative even. But are these AI systems innovative in the way humans are? Developmental psychologists argue there's a fundamental difference.
In a recent paper published in Perspectives on Psychological Science, researchers Eunice Yiu, Eliza Kosoy, and Alison Gopnik make the case that while today's large language models excel at imitating existing patterns in data, they lack the flexible, truth-seeking abilities that allow even young children to innovate tools and discover new causal structures in the world.
The Core Idea: Imitation Versus Innovation
The authors explain that AI systems like ChatGPT are best understood not as intelligent agents but as "cultural technologies" that enhance the transmission of information from person to person. Much like writing, print, and the Internet before them, large language models are highly skilled at extracting patterns from vast datasets of text and images created by humans. They are, in effect, "giant imitation engines" in language and visual creativity.
However, cultural evolution depends on both imitation and innovation – the ability to expand on existing ideas or create genuinely new ones. This capacity for innovation requires more than statistical analysis; it demands interacting with the world in an exploratory, theory-building way to solve what scientists call "the inverse problem." Children as young as four can invent novel tools and discover new causal relationships through active experimentation, going beyond the patterns they've observed.
So, while AI models can skillfully continue trends and genres created by humans, they lack the flexible reasoning needed to push boundaries and explore new creative territory. As Gopnik told the Wall Street Journal, "To be truly creative means to break out of previous patterns, not to fulfill them."
Evidence: Comparing AI and Child Tool Innovation
To test this imitation-versus-innovation hypothesis, the researchers conducted experiments comparing how children, adults, and leading AI models such as Claude and GPT-4 handled tool-innovation tasks.
In one scenario, participants were asked to select an object to draw a circle when the usual compass tool was unavailable, choosing from three options:
- An associated but irrelevant item – a ruler
- A visually dissimilar but functionally relevant item – a round-bottomed teapot
- An irrelevant item – a stove
The results showed:
- Both kids and adults excelled at selecting the teapot, demonstrating an ability to discover new causal affordances in objects.
- The AI models struggled, often picking the associated ruler instead of realizing the teapot's potential.
This suggests that while statistical learning from text can capture superficial relationships between objects, it falls short when more creative abstraction is needed.
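This kind of probe amounts to a forced-choice question posed to a chat model. Below is a minimal sketch of how such a query might be composed; the wording is illustrative rather than the paper's actual stimuli, and `ask_model` is a hypothetical stand-in for whichever chat-completion API you would wire in (e.g. GPT-4 or Claude).

```python
def build_tool_choice_prompt(goal: str, options: list[str]) -> str:
    """Compose a forced-choice tool-innovation question.

    The phrasing here is illustrative only; the published study
    used its own stimuli, which we do not reproduce.
    """
    # Label options A, B, C, ... one per line.
    lettered = "\n".join(f"{chr(65 + i)}. {item}" for i, item in enumerate(options))
    return (
        f"You need to {goal}, but the usual tool is unavailable.\n"
        f"Which of these objects would you use instead?\n"
        f"{lettered}\n"
        "Answer with a single letter."
    )


def ask_model(prompt: str) -> str:
    """Hypothetical stand-in for a real chat-completion call."""
    raise NotImplementedError("wire this to your preferred chat API")


prompt = build_tool_choice_prompt(
    "draw a circle",
    [
        "a ruler",                  # associated but irrelevant
        "a round-bottomed teapot",  # dissimilar but functionally relevant
        "a stove",                  # irrelevant
    ],
)
```

In the study's terms, a model that has merely absorbed co-occurrence statistics tends to pick the ruler (option A), while recognizing the teapot's circular base as a drawing template requires discovering a new affordance.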
This research shows that today's AI still can't match a child's innate curiosity and drive to experiment. We see this on the CPROMPT.AI platform, where users ideate and iterate prompt apps to explore topics and share perspectives without external incentives or curation. It's a case where human creativity shines!
AI models provide an incredible tool for enhancing human creativity, offering easier access to knowledge and quick iteration. The CPROMPT.AI no-code interface lets anyone transform AI chat into usable web apps for free. You dream it, you build it, no programming required.
The interplay between human and artificial intelligence promises even more innovation. But the next giant leap will likely come from AI that, like children, actively learns by doing rather than purely analyzing patterns. Budding young scientists have a lesson for the best minds in AI!
Glossary
- Large language models – AI systems, like ChatGPT and DALL-E, trained on massive text or image datasets to generate new text or images.
- Inverse problem - The challenge of inferring causes from observed effects and making predictions. Solving it requires building models of the external world through exploration.
- Affordance - The possible uses and actions latent in an object based on its properties. Recognizing affordances allows innovative tool use.
- Overimitation – Copying all details of a demonstrated task, even causally irrelevant ones. AI models imitate with high fidelity but may lack humans' social imitation abilities.
- Causal overhypotheses – Abstract hypotheses that constrain hypotheses about more concrete causal relationships. Discovering these allows generalization.