Phillip Burr, Lumai’s Product Leader – Interview Series

Phillip Burr is the product leader at Lumai, bringing over 25 years of experience in global product management, go-to-market and leadership roles at leading semiconductor and technology companies, and a strong track record of building and scaling products and services.
Lumai is a UK-based deep tech company that has developed a 3D optical computing processor to accelerate AI workloads. By performing matrix-vector multiplications with beams of light in three dimensions, its technology delivers up to 50x the performance of traditional silicon-based accelerators with a 90% reduction in power consumption. This makes it particularly well suited to AI inference tasks, including large language models, while greatly reducing energy costs and environmental impact.
What inspired the founding of Lumai, and how did the idea evolve from Oxford research into a commercial venture?
The initial spark was ignited when one of Lumai’s founders, Dr. Xianxin Guo, applied for an 1851 Research Fellowship at the University of Oxford. The fellowship interviewers saw the potential of optical computing and asked Guo whether the research could be spun out into a company. That set his creative wheels turning, and together with Lumai’s other co-founder, Dr. James Spall, he demonstrated that using light to perform the computation at the heart of AI could both dramatically improve AI performance and reduce the energy consumed, setting the stage for the company. They knew that silicon-only hardware was (and still is) struggling to improve performance without significantly increasing power and cost, so if they could solve this problem using optical computing, they could create a product customers wanted. They took the idea to several VCs, who backed them to form Lumai. Lumai recently closed its second funding round, raising more than $10 million and bringing in additional investors who also believe that optical computing can continue to scale and meet the ever-increasing demand for AI performance.
You’ve had an impressive career across Arm, indie Semiconductor and more – what attracted you to join Lumai at this stage?
The short answer is the team and the technology. Lumai has an impressive team of optical, machine learning and data center experts, with experience drawn from the likes of Meta, Intel, Altera, Maxeler, Seagate and IBM (alongside my own experience at Arm, indie Semiconductor, Mentor Graphics and Motorola). I knew that a team of outstanding people focused on solving the challenge of cutting the cost of AI inference could do amazing things.
I firmly believe that the future of AI demands new, innovative breakthroughs in computing. The chance to offer 50x the AI compute performance of today’s solutions, along with the promise of cutting the cost of AI inference to a tenth of what it is now, was too good an opportunity to pass up.
What early technical or business challenges did your founding team face in going from a research breakthrough to a product-ready company?
The research breakthrough proved that optics could be used for fast and very efficient matrix-vector multiplication. Despite that breakthrough, the biggest challenge was convincing people that Lumai could succeed where other optical computing startups had failed. We had to spend time explaining that Lumai’s approach is very different: rather than relying on a single 2D chip, we achieve scale and efficiency using 3D optics. There are, of course, many steps involved in taking laboratory research to technology that can be deployed at scale in a data center. We recognized early on that the key to success was hiring engineers with experience developing products in high volume and for data centers. Software is another critical area – standard AI frameworks and models have to be able to benefit from Lumai’s processor, so we provide the tools and frameworks to make adoption as seamless as possible for AI software engineers.
Lumai’s technology is said to use 3D optical matrix-vector multiplication. Can you break that down in simple terms for a general audience?
AI systems rely on enormous numbers of mathematical calculations called matrix-vector multiplications. These calculations are the engine that powers AI responses. At Lumai, we perform them with light instead of electricity. Here is how it works:
- We encode information into beams of light
- These beams travel through 3D space
- The light interacts with lenses and special materials
- These interactions perform the mathematical operations
By using all three spatial dimensions, we can process more information with each beam of light. This makes our approach very efficient – reducing the energy, time and cost needed to run AI systems.
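
To make the operation concrete, here is a minimal NumPy sketch of the matrix-vector multiplication described above – the computation an optical processor performs as light propagates through the system. The 1024-wide layer size is an illustrative assumption, not a Lumai specification:

```python
import numpy as np

# One neural-network layer: a weight matrix and an input activation vector.
# The 1024-wide size is illustrative, not a Lumai specification.
W = np.random.randn(1024, 1024).astype(np.float32)  # layer weights
x = np.random.randn(1024).astype(np.float32)        # input activations

# The matrix-vector product at the core of AI inference. A digital chip
# evaluates this as sequential/parallel arithmetic; an optical processor
# evaluates the equivalent as beams of light pass through the optics.
y = W @ x

# Each product involves roughly 2 * 1024 * 1024 arithmetic operations
# (a multiply and an add per weight).
print(f"Arithmetic operations per matrix-vector product: {2 * W.size:,}")
```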
What are the main advantages of optical computing over traditional silicon-based GPUs, or even integrated photonics?
With the pace of advancement in silicon technology slowing significantly, each performance gain in a silicon-only AI processor (such as a GPU) comes harder. Silicon-only solutions consume incredible amounts of power and are chasing diminishing returns, which makes them extremely complex and expensive. The advantage of using optics is that, once in the optical domain, computation consumes almost no power. Energy is spent getting into the optical domain, but in Lumai’s processor, for example, we can perform more than 1,000 computational operations for each beam, each cycle, which makes it very efficient. This scale cannot be achieved with integrated photonics because of physical size constraints and signal noise – the number of computational operations a silicon-photonics solution can achieve is only around 1/8 of what Lumai can reach.
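
To see why amortization matters, here is a small illustrative calculation: if a fixed energy cost is paid to get a value into the optical domain, spreading that cost over many operations per beam drives the conversion energy per operation down. The 1 pJ figure below is a placeholder assumption of ours, not a Lumai measurement:

```python
# Illustrative amortization model: a fixed energy cost to convert a value
# into the optical domain, spread over N operations performed on that beam.
CONVERSION_ENERGY_PJ = 1.0  # energy to modulate/detect one beam (assumed)

for ops_per_beam in (1, 10, 100, 1000):
    energy_per_op = CONVERSION_ENERGY_PJ / ops_per_beam
    print(f"{ops_per_beam:5d} ops per beam -> "
          f"{energy_per_op:.4f} pJ of conversion energy per op")
```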
How does Lumai’s processor achieve near-zero latency inference, and why is this a key factor in modern AI workloads?
While we wouldn’t claim that the Lumai processor delivers zero latency, it does perform a very large (1024 x 1024) matrix-vector operation in a single cycle. Silicon-only solutions typically divide the matrix into smaller tiles, process each tile separately, and then combine the results. That takes time and costs more memory and energy. Reducing the time, energy and cost of AI processing is critical both to allowing more businesses to benefit from AI and to delivering advanced AI in the most sustainable way.
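
As an illustration of the tiling overhead described above, here is a hedged NumPy sketch contrasting a tiled matrix-vector product (the typical digital-accelerator pattern) with a single-pass product. The tile size is our own assumption for illustration, not a figure from Lumai or any GPU vendor:

```python
import numpy as np

N, TILE = 1024, 256
W = np.random.randn(N, N).astype(np.float32)
x = np.random.randn(N).astype(np.float32)

# Digital-accelerator pattern: split W into TILE x TILE blocks, process
# each block separately, then accumulate the partial results -- extra
# memory traffic and accumulation work for every tile.
y_tiled = np.zeros(N, dtype=np.float32)
for i in range(0, N, TILE):
    for j in range(0, N, TILE):
        y_tiled[i:i + TILE] += W[i:i + TILE, j:j + TILE] @ x[j:j + TILE]

# Single-pass pattern (as described above): the full 1024 x 1024 product
# is evaluated at once, with no intermediate combination step.
y_single = W @ x

assert np.allclose(y_tiled, y_single, atol=1e-2)
print(f"Tiles processed digitally: {(N // TILE) ** 2}; optical passes: 1")
```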
Can you walk us through how the PCIe-compatible form factor integrates with existing data center infrastructure?
The Lumai processor sits on PCIe form-factor cards alongside a standard CPU within a standard 4U shelf. We are working with a range of data center rack equipment suppliers so that the Lumai processor integrates with their own equipment. We use standard network interfaces, standard software and so on, so that externally the Lumai processor looks just like any other data center processor.
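
In software terms, "looks like any other data center processor" means the host can dispatch work through a common interface regardless of which backend sits behind it. The sketch below is purely illustrative: the class and method names are hypothetical and are not Lumai’s actual API, and the optical backend is stubbed with a digital computation:

```python
import numpy as np

class Accelerator:
    """Hypothetical common interface an AI framework might target."""
    def matvec(self, W: np.ndarray, x: np.ndarray) -> np.ndarray:
        raise NotImplementedError

class CpuBackend(Accelerator):
    def matvec(self, W, x):
        return W @ x  # plain digital computation on the host CPU

class OpticalCardBackend(Accelerator):
    """Stub standing in for a PCIe accelerator card. A real card would be
    addressed through its driver; here we simulate the same result."""
    def matvec(self, W, x):
        return W @ x

# The calling code is identical whichever backend is plugged in.
W = np.random.randn(512, 512).astype(np.float32)
x = np.random.randn(512).astype(np.float32)
print("Backends agree:",
      np.allclose(CpuBackend().matvec(W, x),
                  OpticalCardBackend().matvec(W, x)))
```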
Energy use in data centers is a growing global concern. How does Lumai position itself as a sustainable solution for AI computing?
Data center energy consumption is rising at an alarming rate. According to a report from the Lawrence Berkeley National Laboratory, U.S. data center power usage is expected to triple by 2028, consuming up to 12% of the country’s power. Some data center operators are considering installing nuclear generation to provide the energy they need. The industry needs to look at different approaches to AI, and we believe that optics is the answer to this energy crisis.
Can you explain how Lumai’s architecture avoids the scalability bottlenecks of current silicon and photonic approaches?
The performance of the first Lumai processor is just the start of what is achievable. We expect our solution to keep delivering huge leaps in performance, by increasing optical clock speeds and vector widths, all without a corresponding increase in energy consumed. No other solution can achieve this. Standard digital-silicon approaches will keep consuming ever more cost and power for each increase in performance. Silicon photonics cannot achieve the required vector widths, which is why companies that were looking at integrated photonics for data center compute have moved on to address other parts of the data center, such as optical interconnect or optical switching.
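
A back-of-the-envelope model makes the scaling argument concrete. Under the assumption (ours, for illustration) that an optical matrix-vector unit completes a full width-by-width product each cycle, throughput grows with the square of the vector width and linearly with clock speed; the widths and clock rates below are illustrative, not a Lumai roadmap:

```python
def optical_throughput_tops(vector_width: int, clock_hz: float) -> float:
    """Tera-operations per second: 2 ops per multiply-accumulate,
    vector_width ** 2 multiply-accumulates completed per cycle."""
    return 2 * vector_width ** 2 * clock_hz / 1e12

# Widening the vectors gives a quadratic gain; raising the clock, a
# linear one. All figures here are assumptions for illustration.
for width, clock_hz in [(1024, 1e9), (2048, 1e9), (4096, 2e9)]:
    print(f"width={width:5d}, clock={clock_hz / 1e9:.0f} GHz -> "
          f"{optical_throughput_tops(width, clock_hz):10.1f} TOPS")
```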
What role do you see optical computing playing more broadly in the future of AI, and in computing as a whole?
Optics as a whole will play a major role in future data centers – in optical interconnect, optical networking, optical switching and, of course, optical AI processing. The demands AI places on data centers are the key driver of this move to optics. Optical interconnect will enable faster connections between AI processors, which is essential for large AI models. Optical switching will enable more efficient networks, while optical computing will enable faster, more efficient and lower-cost AI processing. Together, they will help enable more advanced AI, overcoming the challenges of slowing silicon scaling on the compute side and the speed limitations of copper on the interconnect side.
Thank you for the great interview. Readers who wish to learn more should visit Lumai.