NEWS

LONDON, MAY 14, 2025 – Oriole Networks, a London-based photonic systems start-up, has announced its PRISM (Photonic Routing Infrastructure for Scalable Models) solution. PRISM is the world’s first pure photonic switched network designed for data centers, high-performance computing (HPC), and distributed deep learning (DDL) workloads.

In today’s AI training and inference systems, it’s not the compute hardware slowing things down – it’s the memory and the network. Despite huge leaps in computing power, real-world performance often hits a wall, with only a small percentage of peak performance achieved due to network bottlenecks.

PRISM addresses network bottlenecks with higher throughput, lower and deterministic latency, shorter job-completion times, and reduced power consumption. The architecture integrates the physical network, AI communication models, and scheduling logic, eliminating network bottlenecks and ensuring maximum performance with minimal overhead. It is a full-stack, all-optical fabric designed to connect xPUs at scale with speed and simplicity.

By combining these capabilities, PRISM provides a solid foundation for the next generation of high-performance data centers, helping operators build AI infrastructure with predictable performance at scale. PRISM aims to unleash next-generation distributed AI training and inference through the world’s first fast, energy-efficient, pure photonic network.


About Oriole Networks


Accelerating AI in a Low-Carbon World – Oriole Networks is a photonic networking company developing disruptive technologies for AI/ML and HPC networking that will revolutionize data centers. These technologies address AI’s biggest challenges: speed, latency, and sustainability. Our holistic approach replaces energy-hungry electrical switching with photonic switching. By using only light to move data through the network, our solution will increase the efficiency of LLM training and inference to unprecedented levels while dramatically reducing the energy consumption of data centers, which is currently putting a huge strain on energy grids in the US and Europe. We can offer faster, more efficient, and more sustainable AI without sacrificing the planet.