The increasing demand for AI-driven workloads has raised concerns about energy consumption and data transfer efficiency in data centers.
Hyperlume, a startup based in Ottawa, Canada, is working on optimizing chip-to-chip data transfer while minimizing energy use.
In 2023, U.S. data centers accounted for 4.4% of the country’s electricity use, a figure projected to rise to as much as 12% by 2028.
This growing energy demand has intensified the need for faster, more efficient communication between chips to reduce latency and power consumption.
The company has developed a microLED-based optical interconnect solution designed to be a more efficient alternative to traditional copper-based data connections.
Copper wiring requires significant power to transmit data across server racks, while Hyperlume’s microLEDs aim to reduce energy use and improve transfer speeds.
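The scale of the potential savings can be sketched with a back-of-envelope calculation. The energy-per-bit figures below are illustrative assumptions in the ballpark often cited for electrical versus optical links, not numbers published by Hyperlume:

```python
# Back-of-envelope comparison of interconnect energy use.
# The pJ/bit figures are illustrative assumptions, not Hyperlume's specs.

COPPER_PJ_PER_BIT = 5.0    # assumed energy cost of an electrical (copper) link
OPTICAL_PJ_PER_BIT = 1.0   # assumed energy cost of an optical link

def link_power_watts(gbps: float, pj_per_bit: float) -> float:
    """Power drawn by one link at a given data rate."""
    bits_per_second = gbps * 1e9
    return bits_per_second * pj_per_bit * 1e-12  # pJ -> J per second (W)

rate = 800  # Gb/s, i.e. one 800G interconnect
copper = link_power_watts(rate, COPPER_PJ_PER_BIT)
optical = link_power_watts(rate, OPTICAL_PJ_PER_BIT)
print(f"800G copper link:  {copper:.1f} W")
print(f"800G optical link: {optical:.1f} W")
print(f"Savings per link:  {copper - optical:.1f} W")
```

A few watts per link sounds small, but multiplied across the hundreds of thousands of links in a modern data center, the difference adds up quickly.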
The Technology Behind Hyperlume’s Solution
The startup was co-founded by Mohsen Asad and Hossein Fariborzi, who bring expertise in electrical engineering and low-power circuit design.
Their work focuses on addressing latency issues that can hinder chip performance and overall system efficiency. Improved communication between chips could enhance processing capacity and reduce bottlenecks in large-scale computing environments.
The development process involved evaluating multiple technologies. Silicon-based connections were considered but found too costly for large-scale implementation, and laser-based solutions, while capable of high-speed data transfer, were similarly expensive. Hyperlume instead adapted microLEDs to function much like fiber-optic connections without the associated cost.
The company pairs these microLEDs with a low-power ASIC that enhances chip communication, facilitating faster and more efficient data exchange.
Hyperlume’s technology is currently being tested by early adopters, primarily in North America, including hyperscalers, cable manufacturers, and companies focused on improving data center performance.
As the solution demonstrates its capabilities in real-world applications, demand is expected to grow.
Funding and Industry Support
Hyperlume recently secured a $12.5 million seed funding round led by BDC Capital’s Deep Tech Venture Fund and ArcTern Ventures.
Additional investors include MUUS Climate Partners, Intel Capital, and SOSV. This funding is intended to support the expansion of the company’s engineering team and accelerate product development.
Intel Capital’s Managing Director, Srini Ananth, highlighted Hyperlume’s role in addressing AI’s increasing energy demands, emphasizing that its technology directly targets bottlenecks affecting AI and data center performance.
ArcTern Ventures’ Managing Partner, Murray McCaig, noted the rising carbon emissions from data centers and the need for energy-efficient networking solutions.
Growing Demands of AI and High-Performance Computing
AI models now exceed 1 trillion parameters, and high-performance computing clusters are scaling beyond 100,000 GPUs.
Traditional copper interconnects struggle to deliver this bandwidth, running into power, reach, and signal-integrity limits at high data rates. Hyperlume’s microLED-based optical interconnects aim to provide a solution by enabling high-speed, low-latency data transfer.
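A rough calculation conveys the scale involved. The GPU count comes from the figures above; the number of links per GPU, the link rate, and the energy-per-bit value are assumptions chosen purely for illustration:

```python
# Rough aggregate-bandwidth and power estimate for a large GPU cluster.
# GPU count is from the article; links per GPU, link rate, and pJ/bit
# are illustrative assumptions.

GPUS = 100_000
LINKS_PER_GPU = 8        # assumed network links per GPU
LINK_RATE_GBPS = 800     # assumed 800G per link
PJ_PER_BIT = 5.0         # assumed energy cost of an electrical link

total_tbps = GPUS * LINKS_PER_GPU * LINK_RATE_GBPS / 1000
# Interconnect power if every link runs at full rate, in megawatts:
power_mw = total_tbps * 1e12 * PJ_PER_BIT * 1e-12 / 1e6

print(f"Aggregate bandwidth:  {total_tbps:,.0f} Tb/s")
print(f"Interconnect power:   {power_mw:.1f} MW at {PJ_PER_BIT} pJ/bit")
```

Under these assumptions, interconnect traffic alone consumes megawatts of power, which is why even a modest reduction in energy per bit matters at cluster scale.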
The company offers multiple form factors, including pluggable, mid-board, and co-packaged optical solutions, which could improve the way AI and high-performance computing systems manage data transmission.
Roadmap and Expansion Plans
With the newly secured funding, Hyperlume is focusing on several key areas to advance its technology:
- Scaling production to meet the demand for 800G and 1.6T interconnects.
- Strengthening partnerships with hyperscalers, chip manufacturers, and AI infrastructure providers.
- Enhancing optical interconnect technology to support the next generation of AI and semiconductor systems.
While its current efforts focus on improving optical connections between chips and server racks, Hyperlume envisions expanding its technology for broader AI and high-performance computing applications.
By addressing the increasing power and performance requirements of AI-driven infrastructure, the company aims to contribute to more efficient and sustainable data center operations.