Roman's Blog

Humans and AI: More Similar Than We Think?

The discussion about similarities between humans and AI (particularly large language models) isn't new, but it keeps resurfacing, so I want to go on the record. As someone who's used ChatGPT since its public launch, I've been fascinated not only by its capabilities but also by how it raises questions about our own nature and what it means to be human.

From the beginning, people have been quick to dismiss AI's capabilities with absolute statements: AI is not (and can never be) creative, AI can't reason, AI doesn't understand. If you're arguing that AI performs these functions exactly like humans do, I agree – it doesn't. The mechanics of biological brains and software are fundamentally different. But that's not the interesting question. What intrigues me is the extent to which our functioning might be similar, and what commonalities we share.

Beyond "Stochastic Parrots"

Consider the Cambridge Dictionary's definition of creativity:

"the ability to produce original and unusual ideas, or to make something new or imaginative."

When an LLM generates novel content – a unique word, sentence, or idea – doesn't that technically meet this definition? Yes, some definitions include "imagination", and you could argue that creativity is, by definition, uniquely human. But that makes the discussion much less interesting.

One popular label for LLMs is "stochastic parrots" – a term that emphasizes how large language models produce text by statistically imitating patterns from their training data without genuine understanding. These models function by continuously predicting the next word in a sequence based on learned probabilities.
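To make that mechanism concrete, here's a minimal sketch of the next-word loop in Python. The vocabulary and probabilities are invented purely for illustration – a real LLM learns a distribution over tens of thousands of tokens from its training data and conditions on the whole preceding context, not just the last word.

```python
import random

# Toy "learned" distribution: for each context word, the possible next
# words and their probabilities. (Entirely made up for illustration.)
NEXT_WORD_PROBS = {
    "<start>": {"the": 0.6, "a": 0.4},
    "the": {"cat": 0.5, "dog": 0.3, "idea": 0.2},
    "a": {"cat": 0.4, "new": 0.6},
    "cat": {"sat": 0.7, "slept": 0.3},
    "dog": {"barked": 1.0},
    "idea": {"emerged": 1.0},
    "new": {"idea": 1.0},
    "sat": {"<end>": 1.0},
    "slept": {"<end>": 1.0},
    "barked": {"<end>": 1.0},
    "emerged": {"<end>": 1.0},
}

def generate(max_words: int = 10) -> str:
    """Repeatedly sample the next word from the learned probabilities."""
    word = "<start>"
    output = []
    for _ in range(max_words):
        candidates = NEXT_WORD_PROBS[word]
        # Sample one next word, weighted by its probability.
        word = random.choices(list(candidates), weights=list(candidates.values()))[0]
        if word == "<end>":
            break
        output.append(word)
    return " ".join(output)

print(generate())  # e.g. "the cat sat" or "a new idea emerged"
```

That's the entire trick, scaled up enormously: no plan for the whole sentence, just one prediction after another.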

The Autocomplete Brain

Here's a thought experiment: Think about your most recent conversation. As you were speaking, did you know what your next sentence would be? Your next five sentences? How far ahead did you actually plan your words? Or did each word emerge naturally, one after another, without extensive conscious planning? Could it be that your brain was rapidly predicting the optimal next word based on your lifetime of knowledge and memories – your personal dataset?

Sam Harris offers an interesting perspective on this through his argument against free will: Consider when you misspeak, stutter, or fumble a word. Was that a conscious choice? Why did it happen? "You're just as surprised as the next guy," Harris said on his podcast. My mind was boggled when I heard that, and I had to agree: I'm no more able to explain a verbal stumble than the person in front of me.

Aella made a similar point on Twitter recently:

"Since the discourse around AI, it's been super weird to find out that people somehow don't think of human speech as mostly autocomplete language machines too. It seems like ppl think humans are doing something entirely different?"

Here are the results of her poll:

[Screenshot: poll results]

To be clear, I'm not suggesting that humans and LLMs are identical in their functioning. But I believe the parallels warrant a more nuanced discussion. If you have compelling arguments against the idea that there are at least some similarities between human and AI cognition, I'd love to hear them.