Simulating the Human Brain
- Callum Brown
- 14 hours ago

AI has already secured its place among the most influential technological developments of the modern era, but as the years tick by, it becomes clearer and clearer that growth is decelerating. Companies keep pledging to buy more chips, faster processors, and larger data centres, yet they achieve ever-diminishing returns. GPT-5 is estimated to have somewhere between two and five trillion parameters, anywhere from double to quadruple the count of GPT-4, yet it achieved only a roughly 20% improvement on aggregate reasoning tests. Many users never even noticed the change. Yes, models keep getting better and better, but the problem is clear: companies cannot keep throwing money and parameters at the same old models to stave off a plateau. The solution may be neuromorphic computers.
Modern computers are built on the von Neumann architecture, which enforces a strict separation between processing and memory. At its core, the architecture demands two things: a central processing unit (CPU), which controls the rest of the machine and performs all the logical and mathematical operations needed to run a program, and a memory unit, which stores both data and instructions. Most computers now have additional parts, such as the graphics processing unit (GPU) and motherboard, but the general approach stays the same: the CPU pulls the necessary information from memory, executes the commands it is given, and writes back the result. This keeps computer functions clean, understandable, and less error-prone, but it has one problem. Simulating intelligence on machines like this is hard … like, really hard.
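That fetch-execute loop can be captured in a few lines. Here is a toy sketch of a von Neumann machine; the instruction names and the three-address program are invented for illustration, not taken from any real CPU:

```python
# A toy von Neumann machine: one memory holds both the program and the data,
# and a single CPU loop fetches, decodes, and executes one instruction at a time.
memory = [
    ("LOAD", 10),   # copy the value at address 10 into the accumulator
    ("ADD", 11),    # add the value at address 11 to the accumulator
    ("STORE", 12),  # write the accumulator back to address 12
    ("HALT", None), # stop the machine
]
memory += [None] * (10 - len(memory))  # pad unused addresses 4-9
memory += [6, 7, 0]                    # data at addresses 10, 11, 12

accumulator = 0
pc = 0  # program counter: address of the next instruction

while True:
    op, addr = memory[pc]  # fetch from memory
    pc += 1
    if op == "LOAD":       # decode and execute
        accumulator = memory[addr]
    elif op == "ADD":
        accumulator += memory[addr]
    elif op == "STORE":
        memory[addr] = accumulator
    elif op == "HALT":
        break

print(memory[12])  # 13
```

Every piece of information, program and data alike, funnels through that one fetch-execute bottleneck, which is precisely what makes massively parallel, brain-like computation awkward on this design.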
The human brain (which is, after all, the kind of intelligence we are trying to copy) is not built like this at all. Although our brains do contain smaller networks specialised for tasks like speech or fine motor control, those networks are so deeply interconnected that they put modern computers to shame. The problem is that neurons and their interactions are hard to simulate in real time, especially when there are 86 billion of them. They have to be strung together, column by column, and updated in small time increments, which severely limits the ways they can interact. The solution may be found in perovskite memristors.
The memristor is an electronic component, first theorised in 1971 by Leon Chua, that seems to mirror the neuron in every way needed to build an analogue neural network. A neuron in the human brain fires only when enough connecting neurons fire into it with sufficient strength; once this threshold potential is reached, the neuron discharges an electrical spike called an action potential. Similarly, a memristor passes a signal on to the memristors connected to it once enough electric potential has accumulated. Crucially, the more often a memristor fires, the lower its resistance becomes, which makes its response stronger each time, like a neuron firing over and over. And just like in a brain, the longer a memristor goes without firing, the more it returns to its previous state. These two behaviours allow circuits to learn and forget in much the way we do. These analogue components have another upside: they don't run on a clock. When neural networks are simulated digitally, a set of input neurons fires all at once, then the next layer of neurons is updated and fires, then the layer after that, and so on. With memristors, each neuron would fire as soon as it reached its threshold and could update every other neuron in the array, not just the next layer, leading to the complicated feedback loops actually observed in organic systems.
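The learn-and-forget behaviour described above can be sketched in code. The model below is a deliberately simplified caricature: the threshold, boost, and decay parameters are invented for illustration and are not measured values for any real perovskite device.

```python
# A minimal sketch of memristor-like plasticity: charge accumulates until a
# threshold is crossed (firing), each firing raises conductance (learning),
# and conductance relaxes toward its resting value between firings (forgetting).
class Memristor:
    def __init__(self, threshold=1.0, base_conductance=0.1,
                 boost=0.05, decay=0.9):
        self.threshold = threshold           # potential needed to fire
        self.base = base_conductance         # resting conductance
        self.conductance = base_conductance  # higher => stronger response
        self.boost = boost                   # potentiation gained per firing
        self.decay = decay                   # fraction of the gain kept per step
        self.charge = 0.0                    # accumulated electric potential

    def step(self, input_potential):
        """Accumulate potential; fire and return the output signal if the
        threshold is crossed, otherwise return 0.0."""
        self.charge += input_potential
        # Forgetting: conductance relaxes toward its resting value each step.
        self.conductance = self.base + (self.conductance - self.base) * self.decay
        if self.charge >= self.threshold:
            self.charge = 0.0
            self.conductance += self.boost   # firing lowers future resistance
            return self.conductance          # signal passed downstream
        return 0.0

m = Memristor()
# Repeated firing strengthens the device's response over time.
outputs = [m.step(1.0) for _ in range(5)]
print(outputs[-1] > outputs[0])  # True: each spike is stronger than the last
```

Note there is no global clock anywhere in the model: each device fires whenever its own accumulated charge crosses threshold, which is what would let a physical array of them update asynchronously rather than layer by layer.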
It is exactly this convoluted behaviour that gives rise to the intricacy of organic thought. If this truly proves to be the way forward for artificial intelligence, it would transform the industry from the pay-to-win chip economy it is becoming, back into an innovation-driven confluence of material science, computer engineering, and biology.
Illustration by Abigail Svaasand