Aalto researchers have demonstrated a breakthrough that could dramatically speed up artificial intelligence whilst slashing energy consumption.
Text by Martti Asikainen, 8.12.2025 | Photo by Adobe Stock Photos
Researchers at Aalto University have developed a method that performs complex AI calculations using light instead of electronics, completing in a single flash operations that would take conventional computers multiple steps.
The international team, led by Dr Yufeng Zhang, created a system called POMMM (parallel optical matrix-matrix multiplication) that handles the mathematical operations underpinning modern AI through a single beam of coherent light. The research was published in Nature Photonics on 14 November 2025.
The breakthrough comes at an interesting moment for the technology industry. As artificial intelligence systems grow increasingly powerful and complex, their energy demands have become a pressing concern.
Training large language models, the systems behind tools like ChatGPT, can consume megawatt-hours of electricity, whilst running AI at scale presents similar challenges in terms of speed, power consumption, and heat generation.
Current AI systems rely on graphics processing units (GPUs), which must break calculations down into millions of sequential steps, each consuming time and energy.
The new approach encodes data into the properties of light waves—their brightness and timing. As these waves interact, they naturally perform the mathematical operations that AI systems need, all simultaneously.
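The core mathematical operation involved is matrix-matrix multiplication. The sketch below is a purely illustrative software analogy, not the POMMM optics: it contrasts the electronic view (assembling a product one multiply-accumulate at a time) with the optical view (the full product emerging from a single linear pass, here modelled by a single array operation).

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((10, 10))  # data encoded on the light field
B = rng.standard_normal((10, 10))  # transform realised by the optics

# Electronic view: millions of sequential multiply-accumulate steps,
# shown here as explicit loops over every element of the result.
sequential = np.zeros((10, 10))
for i in range(10):
    for j in range(10):
        for k in range(10):
            sequential[i, j] += A[i, k] * B[k, j]

# Optical view: the same result in one linear pass through the system.
one_pass = A @ B

assert np.allclose(sequential, one_pass)
```

Both routes give identical results; the difference is that the optics performs every multiply-accumulate simultaneously rather than in sequence.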
“Our method performs the same kinds of operations that today’s GPUs handle, like convolutions and attention layers, but does them all at the speed of light,” Dr Zhang told the university in a statement. “Instead of relying on electronic circuits, we use the physical properties of light to perform many computations simultaneously.”
Dr Zhang offers an analogy: “Imagine you’re a customs officer who must inspect every parcel through multiple machines with different functions and then sort them into the right bins. Normally, you’d process each parcel one by one. Our optical computing method merges all parcels and all machines together—with just one operation, one pass of light, all inspections and sorting happen instantly and in parallel.”
The team validated their system extensively against standard GPU calculations across a range of tasks. In tests spanning matrix sizes from small 10-by-10 configurations to larger 50-by-50 arrangements, the optical system closely matched the GPU results, maintaining relative errors below 10% and mean absolute errors below 0.15.
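To make the two accuracy figures concrete, the sketch below computes both metrics for a hypothetical noisy measurement against an exact digital reference; the noise level is an assumption for illustration, not the paper's measured figure.

```python
import numpy as np

rng = np.random.default_rng(1)
A = rng.standard_normal((50, 50))
B = rng.standard_normal((50, 50))

reference = A @ B  # exact digital (GPU-style) result

# Assumed measurement noise, purely for illustration.
optical = reference + rng.normal(0.0, 0.05, reference.shape)

# Mean absolute error: average per-element deviation.
mae = np.mean(np.abs(optical - reference))

# Relative error: overall deviation as a fraction of the result's magnitude.
rel_error = np.linalg.norm(optical - reference) / np.linalg.norm(reference)

print(f"MAE: {mae:.3f}, relative error: {rel_error:.1%}")
```

A system passing the thresholds quoted above would show an MAE under 0.15 and a relative error under 10% in this kind of check.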
Crucially, the researchers successfully ran neural networks originally designed for GPUs—including sophisticated image recognition systems—on their optical prototype without requiring any modifications to the AI models. They tested both convolutional neural networks and the more recent vision transformer architectures on standard datasets, encompassing the key operations that modern AI relies upon: multi-channel convolution, multi-head self-attention, and multi-sample fully connected layers.
The largest calculation they demonstrated multiplied matrices with dimensions of 256 by 9,216, with one dimension exceeding 9,000 and well beyond typical benchmark operations. This scalability test, performed as part of an image style transfer task, confirmed the system's ability to handle the complex, large-scale computations required by real-world AI applications.
The system also proved versatile in handling different types of data. By using multiple wavelengths of light simultaneously (specifically 540 nanometres and 550 nanometres), the researchers successfully performed complete complex-valued matrix operations—demonstrating that the approach can be extended to enable even more sophisticated calculations in a single optical pass.
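One way to see why two parallel wavelength channels suffice for complex-valued products: a complex matrix multiplication decomposes into real-valued products that can run independently and then recombine. The sketch below verifies this standard identity; it illustrates the algebra only, not the specific optical encoding used in the paper.

```python
import numpy as np

# Identity: (Ar + i*Ai)(Br + i*Bi) = (Ar@Br - Ai@Bi) + i*(Ar@Bi + Ai@Br)
rng = np.random.default_rng(2)
Ar, Ai = rng.standard_normal((2, 8, 8))
Br, Bi = rng.standard_normal((2, 8, 8))

# Four real-valued products, each computable in its own parallel channel.
real_part = Ar @ Br - Ai @ Bi
imag_part = Ar @ Bi + Ai @ Br

# Reference: direct complex-valued multiplication.
direct = (Ar + 1j * Ai) @ (Br + 1j * Bi)

assert np.allclose(real_part + 1j * imag_part, direct)
```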
The current prototype, built from off-the-shelf optical components, achieved energy efficiency of 2.62 giga-operations per joule. However, the researchers say this could improve dramatically when integrated into dedicated photonic chips, as the system requires only passive optical elements once light begins propagating.
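As a back-of-envelope check on what that efficiency figure means per individual operation:

```python
# 2.62 giga-operations per joule, inverted to energy per operation.
gops_per_joule = 2.62e9
joules_per_op = 1.0 / gops_per_joule
print(f"{joules_per_op:.2e} J per operation")  # roughly 0.38 nanojoules
```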
Professor Zhipei Sun, leader of Aalto University’s Photonics Group, said: “This approach can be implemented on almost any optical platform. In the future, we plan to integrate this computational framework directly onto photonic chips, enabling light-based processors to perform complex AI tasks with extremely low power consumption.”
Dr Zhang estimates the technology could be integrated into commercial platforms within three to five years, as major technology companies are already developing photonic hardware.
“This will create a new generation of optical computing systems, significantly accelerating complex AI tasks across a myriad of fields,” he says.
The research involved collaboration between Shanghai Jiao Tong University, Aalto University, and the Chinese Academy of Sciences.