Neurophos Raises $110M to Build Tiny Optical AI Chips That Could Transform Inference Computing

Dwijesh t

Austin-based photonics startup Neurophos has raised $110 million in Series A funding to develop ultra-efficient optical processors designed to revolutionize AI inference. The round was led by Gates Frontier, Bill Gates’ venture firm, with participation from Microsoft’s M12, Bosch Ventures, Aramco Ventures, Carbon Direct, and others.

Spun out of Duke University and the Metacept incubator, Neurophos builds on decades of research in metamaterials once famously used to create experimental “invisibility cloaks.” Today, that same science is powering a breakthrough in optical computing, aimed at solving one of AI’s biggest challenges: scaling compute performance while reducing power consumption.

Optical Processing Units for AI Inference

Neurophos has developed a metasurface modulator, a tiny optical component that can function as a tensor core for matrix-vector multiplication, the core math behind modern AI models. By fitting thousands of these modulators on a single chip, the company claims it can deliver dramatically faster and more energy-efficient performance than today’s silicon-based GPUs and TPUs.
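To see what an optical tensor core is accelerating, here is a minimal NumPy illustration of the matrix-vector multiply that underlies every dense neural-network layer (illustrative values only, not Neurophos code):

```python
import numpy as np

# A dense layer computes y = W @ x: each output element is a weighted
# sum of the inputs. An optical tensor core performs these multiply-
# accumulate operations in the analog optical domain rather than in
# digital logic, which is where the claimed energy savings come from.
W = np.array([[0.2, -0.5, 1.0],
              [0.7,  0.1, -0.3]])   # 2x3 weight matrix
x = np.array([1.0, 2.0, 3.0])       # input activation vector

y = W @ x  # matrix-vector product, result has shape (2,)
print(y)   # -> [2.2, 0.0]
```

Modern AI models chain billions of these products per inference, which is why accelerating this one operation dominates chip design.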

According to CEO Dr. Patrick Bowen, Neurophos’ optical processing unit (OPU) can run at 56 GHz, delivering up to 235 peta operations per second (POPS) while consuming just 675 watts. By comparison, Nvidia’s B200 GPU delivers around 9 POPS at 1,000 watts, highlighting the potential leap in both speed and efficiency.
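Taking the quoted figures at face value, the efficiency gap can be expressed as operations per watt. The numbers below are the claims cited above, not independently verified measurements:

```python
# Claimed throughput (peta-operations per second) and power (watts).
opu_pops, opu_watts = 235, 675    # Neurophos OPU, per CEO claim
gpu_pops, gpu_watts = 9, 1000     # Nvidia B200, approximate figure

# Efficiency in POPS per watt.
opu_eff = opu_pops / opu_watts
gpu_eff = gpu_pops / gpu_watts

print(f"OPU:  {opu_eff:.3f} POPS/W")
print(f"B200: {gpu_eff:.3f} POPS/W")
print(f"Claimed efficiency advantage: ~{opu_eff / gpu_eff:.0f}x")
```

By this back-of-the-envelope arithmetic, the claimed advantage is roughly 39x in operations per watt, which is the kind of step change the company argues silicon process scaling alone cannot deliver.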

Why Optical Chips Matter

Photonics has long promised faster and cooler computing because light travels faster than electricity and generates less heat. However, optical chips have traditionally faced challenges in size, manufacturing, and power-hungry digital-to-analog conversions. Neurophos claims its metasurface technology solves these issues by shrinking optical transistors by 10,000x, enabling dense integration using standard silicon foundry processes.

This miniaturization allows more computation to occur in the optical domain before conversion back to electronics, unlocking major energy savings, a critical advantage as AI inference workloads explode across data centers.

Market Opportunity and Timeline

While Nvidia dominates the AI accelerator market, Neurophos believes its photonic architecture offers a fundamentally new path forward rather than incremental gains tied to silicon process nodes. The company plans to release its first commercial chips by mid-2028 and has already signed early customers, with Microsoft reportedly evaluating its technology closely.

The new funding will support development of data center-ready OPU modules, a full software stack, and early-access developer hardware, alongside expansion in Austin and a new engineering hub in San Francisco.

As AI inference demand surges, Neurophos’ optical processors could mark a major shift toward faster, greener, and more scalable AI computing.
