The brain-inspired chip that just made GPUs look wasteful

A chip the size of a thumbnail just solved physics equations that used to require a room full of supercomputers. It did it while sipping less power than a hearing aid.

In February 2026, researchers at Sandia National Laboratories published results, reported in IEEE Spectrum, showing that neuromorphic hardware (processors designed to mimic the firing patterns of biological neurons) can solve partial differential equations using the finite element method. These are the same equations used to simulate everything from bridge stress to fluid dynamics in jet engines. The kind of math that normally demands thousands of GPU cores running at full blast.
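To make the math concrete, here is a minimal sketch of what the finite element method does on a conventional processor: discretize a domain, assemble a stiffness matrix, and solve a linear system. This toy 1D problem (solve -u''(x) = 1 on (0, 1) with u(0) = u(1) = 0) is purely illustrative; it is not Sandia's spiking formulation, just the standard math they mapped onto neuromorphic hardware.

```python
import numpy as np

# 1D FEM with linear ("hat") basis functions on a uniform mesh.
n = 9                      # interior nodes
h = 1.0 / (n + 1)          # element size
x = np.linspace(h, 1 - h, n)

# Stiffness matrix: (1/h) * tridiag(-1, 2, -1)
K = (np.diag(2 * np.ones(n))
     - np.diag(np.ones(n - 1), 1)
     - np.diag(np.ones(n - 1), -1)) / h

# Load vector for f(x) = 1: each interior hat function integrates to h
f = h * np.ones(n)

u = np.linalg.solve(K, f)  # nodal solution

# The exact solution is u(x) = x(1 - x)/2, so u(0.5) = 0.125
print(u[n // 2])
```

Real simulations do the same thing with millions of unknowns in 2D or 3D, which is why they normally demand so much compute.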

The chip they used, Intel's Loihi 2, consumed a fraction of a single watt.

The numbers that should worry NVIDIA

Here is where the story gets uncomfortable for the $150 billion GPU industry. Intel's Hala Point system, built from 1,152 Loihi 2 chips, packs 1.15 billion artificial neurons and 128 billion synapses into a chassis the size of a microwave oven. It delivers 20 petaops (quadrillion operations per second) while using 100x less energy and running 50x faster than conventional CPU and GPU architectures on matching workloads.

But the real comparison that should raise eyebrows came from a German industrial case study: neuromorphic chips used 0.0032 joules per inference versus 11.3 joules on a standard x86 processor. That is not 10x better, not 100x. It works out to a power reduction of more than 1,000x for the same task.
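For readers who want to check the arithmetic, the case study's own figures actually come out well above the rounded headline number:

```python
# Energy-per-inference ratio from the German case study's quoted figures.
x86_joules = 11.3       # standard x86 processor, per inference
neuro_joules = 0.0032   # neuromorphic chip, per inference

ratio = x86_joules / neuro_joules
print(round(ratio))  # roughly 3,500x; "1,000x" is a conservative rounding
```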

For context, training a single large language model on GPUs can consume as much electricity as 130 American homes use in a year. Neuromorphic systems do not train the same way, but for inference and real-time processing, the efficiency gap is staggering.

How a chip "thinks" like your brain

Traditional processors, including GPUs, operate on a clock cycle. Every transistor switches on and off billions of times per second, whether there is useful work to do or not. Your brain does not work this way. Neurons fire only when they receive enough input to cross a threshold, then go silent.

Neuromorphic chips replicate this with spiking neural networks (SNNs). Instead of constantly computing, they process information as discrete "spikes," activating only when data demands it. This is why they sip microwatts instead of gulping kilowatts.
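The event-driven idea can be sketched with the simplest spiking unit, a leaky integrate-and-fire (LIF) neuron. This is a pedagogical model, not Loihi's actual neuron circuit (Loihi implements a more elaborate, configurable variant), but it shows the key behavior: the neuron does nothing until accumulated input crosses a threshold.

```python
# Minimal leaky integrate-and-fire (LIF) neuron -- the basic unit of a
# spiking neural network. Illustrative sketch only.

def lif(inputs, leak=0.9, threshold=1.0):
    """Integrate weighted input over time; emit a spike (1) when the
    membrane potential crosses threshold, then reset. Silent otherwise."""
    v = 0.0
    spikes = []
    for i in inputs:
        v = leak * v + i          # potential decays, then integrates input
        if v >= threshold:
            spikes.append(1)      # fire...
            v = 0.0               # ...and reset
        else:
            spikes.append(0)      # no spike: essentially no work done
    return spikes

# The neuron fires only when enough input arrives close together in time.
print(lif([0.3, 0.0, 0.0, 0.6, 0.6, 0.0, 0.0, 0.0]))
# → [0, 0, 0, 0, 1, 0, 0, 0]
```

In silicon, "no spike" means the corresponding circuitry stays idle, which is where the power savings come from.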

Intel's newest iteration, the Loihi 3 chip unveiled in June 2025, pushes this architecture further: 1.15 billion artificial neurons on a single chip, operating at roughly 100 milliwatts. It is 30% faster than Loihi 2 and delivers up to 100x the energy efficiency of traditional GPUs for specific AI tasks. Commercial rollout is planned for Q3 2026, targeting healthcare monitoring, autonomous vehicles, and industrial automation.

The programmer problem nobody discusses

Here is the catch that most coverage glosses over. The GPU ecosystem runs on CUDA, PyTorch, and TensorFlow. Millions of engineers know these tools. Neuromorphic computing has no equivalent. The programming models are fundamentally different: you are not writing matrix multiplications, you are designing temporal spike patterns.

Intel released an open-source software toolkit alongside Loihi 3, and partnerships with Stanford and ETH Zurich aim to bridge the gap. But the reality is stark: the hardware is years ahead of the software ecosystem. A TU Delft research team built a drone that processes visual data 64x faster with 3x less energy using neuromorphic vision, but building that system required deep expertise in computational neuroscience, not just Python.

This is the bottleneck that will determine whether neuromorphic computing disrupts GPUs or stays in research labs. The technology works. Sandia proved it can handle serious mathematics. Mercedes-Benz estimates neuromorphic vision could reduce autonomous driving compute energy by roughly 90%. IBM's TrueNorth chip achieved 400 billion operations per second per watt back in 2014. The science is settled.

What happens next decides a $150 billion industry

The neuromorphic chip market is projected to grow at over 30% annually, but even aggressive forecasts put it at a few billion dollars by 2030. Compare that to NVIDIA's $60 billion in GPU revenue last year alone.

The disruption will not be head-on. Neuromorphic chips will not replace GPUs for training massive AI models anytime soon. They will carve out territory where GPUs are absurdly overpowered: edge devices, IoT sensors, always-on monitoring, battery-powered robotics. Analysts predict 70% of IoT devices could run neuromorphic processors by 2027.

The question is not whether this technology works. It is whether the skills that stay valuable as AI reshapes the workforce will include programming chips that think in spikes instead of clock cycles. If you build software for a living, the answer matters more than you think.

Sources and References

  1. Nature Machine Intelligence / Sandia National Laboratories: Researchers at Sandia demonstrated Intel Loihi 2 can solve PDEs using FEM, achieving meaningful accuracy on problems previously requiring supercomputers.
  2. IEEE Spectrum: Neuromorphic hardware solved complex mathematical equations efficiently, challenging the assumption that brain-inspired chips are inherently imprecise.
  3. Intel Newsroom / Sandia National Laboratories: Intel Hala Point packs 1.15B neurons and 128B synapses across 1,152 Loihi 2 chips, delivering 20 petaops with 100x less energy than conventional architectures.
  4. Interesting Engineering: German case study: 1,000x power reduction (0.0032J vs 11.3J per inference). TU Delft drone: 64x faster, 3x less energy than GPUs.
