Today, technology can do remarkable things.
Prosthetic limbs allow people without hands to eat, and those without legs to walk.
Brain–computer interfaces can help paralyzed patients type, speak, or move robotic arms.
What once felt impossible can now be engineered with sensors, chips, and software.
And yet something crucial is missing.
Most systems can move the body, but they don’t truly communicate with the brain.
Prosthetics cannot be felt. Neural implants stimulate, but often crudely. Brain injuries disrupt signals in a way technology struggles to restore.
Researchers at Northwestern University have taken an important step toward closing that gap.
In a recent study published in Nature Neuroscience, the team introduced a soft, fully wireless device that sends information directly to the brain using light.
Crucially, the study also showed that the brain can learn to interpret these artificial signals. It’s like designing a machine that can speak the brain’s language.
How the wireless brain implant works

Unlike many neural implants, this device doesn’t penetrate brain tissue.
Instead, it’s a thin, flexible, fully wireless brain implant that sits under the scalp. From the top of the skull, the device sends carefully programmed patterns of light through the skull bone to activate specific groups of neurons across the cortex.
At the core of the system is a programmable array of up to 64 micro-LEDs, each about the width of a human hair. These micro-LEDs emit precise pulses of red light, which penetrate tissue effectively and reach neurons deep enough to trigger activity.
The device is powered and controlled wirelessly and contains no batteries. This design allows normal movement and behavior, avoiding the tethering of earlier optogenetics systems that relied on fiber-optic cables.
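One way to picture what "carefully programmed patterns of light" means is to sketch a pattern as data. The study does not publish a programming interface, so every name and parameter below is a hypothetical illustration: each entry says which micro-LED fires, when, for how long, and how brightly.

```python
from dataclasses import dataclass

# Purely illustrative sketch: the paper does not describe a public API.
# All names and parameters here are assumptions for explanation only.

NUM_LEDS = 64  # size of the micro-LED array described in the study


@dataclass
class Pulse:
    led: int           # which micro-LED (0-63) fires
    onset_ms: float    # when the pulse begins, relative to pattern start
    duration_ms: float # how long the LED stays on
    intensity: float   # normalized drive level, 0.0-1.0


def validate(pattern):
    """Check that a stimulation pattern only addresses real LEDs."""
    for p in pattern:
        assert 0 <= p.led < NUM_LEDS, f"no such LED: {p.led}"
        assert 0.0 <= p.intensity <= 1.0, "intensity out of range"
    return pattern


# A toy pattern: three LEDs fire one after another, mimicking the kind of
# spatiotemporal sequence the array can sweep across the cortex.
pattern = validate([
    Pulse(led=3,  onset_ms=0.0,  duration_ms=10.0, intensity=0.8),
    Pulse(led=17, onset_ms=15.0, duration_ms=10.0, intensity=0.6),
    Pulse(led=42, onset_ms=30.0, duration_ms=10.0, intensity=1.0),
])
```

In this view, a "message" to the brain is just a list of such pulses; training then consists of pairing one list with a reward and another with nothing.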
Teaching the brain to read artificial signals
To test whether the brain could actually make sense of these light signals, researchers worked with mice whose neurons were genetically engineered to respond to light.
Instead of stimulating a single point, the device delivered distributed patterns of light across multiple regions of the cortex, similar to how real sensory experiences work: sight, touch, and sound light up networks of neurons rather than single cells.
The mice were trained to associate a particular stimulation pattern with a reward. Over time, they learned to recognize the correct pattern and act on it, consistently choosing the right option to receive the reward.
“By consistently selecting the correct port, the animal showed that it received the message. They can’t use language to tell us what they sense, so they communicate through their behavior.”
– study first author Mingzheng Wu
Even without using natural senses like vision or touch, the animals successfully interpreted the artificial signals and made decisions based on them. Their brains didn’t just respond; they learned.
How is this different from earlier brain implants?
The new system builds on earlier work from the same research team. In 2021, they introduced the first fully implantable, wireless, battery-free device that used a single micro-LED to control neural activity. The earlier version proved the concept. It demonstrated feasibility but was limited in complexity.
The current device expands this capability by using multi-site stimulation. With 64 independently controlled micro-LEDs, researchers can generate a near-infinite number of patterns by adjusting timing, intensity, frequency, and sequence.
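The "near-infinite" claim is easy to sanity-check with a deliberately stripped-down model: ignore timing, intensity, and frequency entirely and treat each of the 64 micro-LEDs as simply on or off in a single frame. Even that crude binary view already yields an astronomically large pattern space.

```python
# Binary simplification: each of the 64 micro-LEDs is either on or off
# in one time step, so the number of distinct spatial frames is 2**64.
spatial_patterns = 2 ** 64
print(spatial_patterns)  # 18446744073709551616, roughly 1.8e19

# Sequencing multiplies the space: with just 10 independent on/off
# frames in a row, the count becomes (2**64)**10 = 2**640 sequences.
sequences = spatial_patterns ** 10
```

Adding graded intensity, pulse timing, and frequency on top of this binary picture only grows the space further, which is why the researchers describe the number of possible patterns as effectively unlimited.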
“In the first paper, we used a single micro-LED. Now we’re using an array of 64 micro-LEDs to control the pattern of cortical activity,” Wu said.
Because real perception depends on coordinated activity across wide areas of the brain, this multi-site approach more closely mirrors how the brain naturally processes sensory information.
In short, the device doesn’t just stimulate neurons; it speaks the brain’s native language.
What this wireless device can enable in medicine
While the research is still early, the implications are broad. The team sees potential applications across multiple areas, including:
- Sensory feedback for advanced prosthetic limbs
- Artificial touch, vision, or hearing inputs
- Rehabilitation after stroke or brain injury
- Drug-free pain modulation
- Precise brain-controlled robotic systems
“Our brains are constantly turning electrical activity into experiences. This platform lets us create entirely new signals and see how the brain learns to use them.”
– Northwestern neurobiologist Yevgenia Kozorovitskiy, who led the experimental work.
John A. Rogers, who led the technology development, added:
“The system represents a significant step forward in building devices that can interface with the brain without the need for burdensome wires or bulky external hardware.”
What comes next
Now that the team has shown the brain can reliably interpret patterned light stimulation, the next question is scale.
- How many distinct signals can the brain learn?
- How complex can those patterns become?
- And how deeply can future versions reach?
Planned improvements include larger LED arrays, tighter spacing, and new wavelengths that can penetrate deeper brain regions.
The device is still in early stages and has been tested only in animals so far.
But as a proof of concept, it marks a meaningful step toward next-generation brain–computer interfaces and artificial perception systems.
– By Rinkle Dudhani and the AHT Team