What the Human Brain Has in Common with Artificial Intelligence
- Jun 12

In 2015, Google's Deep Dream project captured global attention by transforming ordinary photos into dreamy, psychedelic images. This wasn't just an artistic experiment; it was a window into how artificial neural networks "see" the world. Deep Dream worked by amplifying the patterns a neural network had learned during training, revealing how certain features are internally represented. These internal patterns aren't just visual quirks; they're encodings: abstractions that neural networks use to "make sense" of data.
In artificial intelligence, encoding refers to the transformation of input data (like images, text, or audio) into a numerical format that a model can process. Embeddings take this a step further, mapping complex, high-dimensional inputs into lower-dimensional vectors that preserve relationships and meaning. For example, in language models, similar words tend to have similar embeddings, enabling machines to understand context, tone, and even analogy.
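The idea that "similar words have similar embeddings" can be made concrete with a small sketch. The vectors below are made-up toy values for illustration (real language-model embeddings have hundreds of dimensions); similarity between two embeddings is commonly measured with cosine similarity:

```python
import math

# Toy 3-dimensional "embeddings" (hypothetical values for illustration;
# real embeddings are learned by a model, not hand-written).
embeddings = {
    "dog":    [0.90, 0.80, 0.10],
    "puppy":  [0.85, 0.75, 0.20],
    "banana": [0.10, 0.20, 0.90],
}

def cosine_similarity(a, b):
    """Cosine of the angle between two vectors: close to 1.0 means
    the vectors point in nearly the same direction."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

# Related concepts end up close together; unrelated ones far apart.
print(cosine_similarity(embeddings["dog"], embeddings["puppy"]))   # high (~0.99)
print(cosine_similarity(embeddings["dog"], embeddings["banana"]))  # low  (~0.30)
```

In a trained model, this geometric closeness is what lets the system treat "dog" and "puppy" as related without ever being told so explicitly.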
The human brain does something remarkably similar. Current neuroscience research suggests that our brains encode sensory data through patterns of neural activation, forming internal representations of concepts, smells, or faces: sensory signals compressed and refined through experience. Just like AI embeddings, these representations preserve relationships; we recognize a dog whether it's a husky or a chihuahua because our brains extract essential features and encode them meaningfully.
By studying how AI models encode data, we gain insight into the nature of perception, abstraction, and memory. While AI is still far from replicating the full complexity of the brain, projects like Deep Dream offer a glimpse into a shared strategy: distilling the chaotic world into compressed, useful, and meaningful representations.