Why We Shouldn't Humanize AI

Recently, I came across an article on Vox called "Why it's important to remember that AI isn't human" by Raphaël Millière and Charles Rathkopf that made me think about the dozens of people I read and hear on 𝕏 (formerly Twitter) and 𝕏 Spaces who write or talk to ChatGPT or other LLMs as if they were interacting with a real human being. I use polite language when crafting my prompts because I am told that if my input sits closer to a strong, well-represented pattern, the model may be better at predicting my desired content. I don't say "please" because I think of it as a human. But what do you see when you talk to ChatGPT? A cold, emotionless bot spitting out responses? Or a friendly, helpful companion ready to converse for hours? Our instincts push us toward the latter, though the truth lies somewhere in between. We view all things through the lens of language, artificial intelligence included. And therein lies the trouble.

Human language long marked the pinnacle of cognition. No other species could conjugate verbs, compose poems, or write legal briefs. Language remained uniquely ours until an AI startup called Anthropic released Claude - a large language model capable of debating ethics, critiquing sonnets, and explaining its own workings with childlike clarity.

Seemingly overnight, our exclusivity expired. Yet we cling to the assumption that only a human-like mind could produce such human-like words. When Claude chatters away, we subconsciously project intentions, feelings, and even an inner life onto its algorithms. This instinct to anthropomorphize seeps through our interactions with AI, guiding our perceptions down an erroneous path. As researchers Raphaël Millière and Charles Rathkopf explain, presuming language models function like people can "mislead" and "blind us to the potentially radical differences in the way humans and [AI systems] work."

Our brains constantly and unconsciously guess at meanings when processing ambiguous phrases. If I say, "She waved at him as the train left the station," you effortlessly infer that a person gestured farewell to someone aboard a departing locomotive. Easy. Yet multiply such ambiguity across billions of neural network parameters, and deducing intended meaning becomes far more complex. Claude's creators imbued it with no personal motivations or desires. Any interpretation of its statements as expressing some undisclosed yearning or sentiment is sheer fabrication.

Nonetheless, the impressiveness of Claude's conversational skills compels us to treat it more as a who than a what. Study participants elicited better responses when phrasing requests emotionally rather than neutrally. The Atlantic's James Somers admitted to imagining Claude as "a brilliant, earnest non-native English speaker" in order to interact with it appropriately. Without awareness, we slide into anthropomorphic attitudes.

The treacherous assumption underpinning this tendency is that Claude runs on the same psychological processes that enable human discourse. After all, if a large language model talks like a person, it must think like one too - or so the reasoning goes. Philosopher Paul Bloom calls this impulse psychological essentialism - an ingrained bias that things possess an inherent, hidden property defining their categorization. We extend such essentialist reasoning to minds, intuitively expecting a binary state of either minded or mindless. Claude seems too adept with words not to have a mind, so our brains automatically classify it as such.

Yet its linguistic mastery stems from algorithmic calculations wholly unrelated to human cognition. Insisting otherwise is anthropocentric chauvinism - dismissing capabilities differing from our own as inauthentic. Skeptics argue Claude merely predicts the next word rather than genuinely comprehending language. But as Millière and Rathkopf point out, this no more limits Claude's potential skills than natural selection constrains humanity's. Judging artificial intelligence by its conformity to the human mind will only sell it short.

The temptation persists, sustained by a deep-rooted psychological assumption the authors dub the "all-or-nothing principle." We essentialize minds as present or absent in systems, allowing no gradient between them. Yet properties like consciousness exist along a spectrum with inherently fuzzy boundaries. Would narrowing Claude's knowledge bases or shrinking its neural networks eventually leave something non-minded? There is no clear cut-off separating minded from mindless AI. Still, the all-or-nothing principle compels us to draw one, likely pegged to human benchmarks.

To properly evaluate artificial intelligence, Millière and Rathkopf advise adopting the empirical approach of comparative psychology. Animal cognition frequently defies anthropomorphic assumptions - observe an octopus instantaneously camouflaging itself. Similarly, unencumbered analysis of Claude's capacities will prove far more revealing than hamstrung comparisons to the human mind. Only a divide-and-conquer methodology, tallying its strengths and weaknesses on its own terms, can accurately map the contours of large language models.

The unprecedented eloquence of systems like Claude catches us off guard, triggering an instinctive rush toward the familiar. Yet their workings likely have little in common with our psychology. Progress lies not in noting where Claude falls short of human behavior but in documenting the capabilities it has under its own unique computational constraints. We can only understand what an inhuman intelligence looks like by resisting the temptation to humanize AI.