Upscale AI and the AI ASIC Landscape
Networking Field Day 40 • 6m 37s
Upscale AI posits that traditional data center networking is a square peg in a round hole for AI: existing infrastructures were designed for general-purpose web traffic, not the massive, synchronized communication required by models with billions of parameters trained on trillions of tokens. By focusing exclusively on AI traffic and stripping out legacy enterprise features, Upscale AI aims to provide a reliable, predictable substrate that treats the entire cluster as a single, coordinated engine.
The technical strategy addresses the evolution of AI workloads from dense training toward inference-centric agentic AI and persistent state. As models outgrow the memory of a single GPU, parallelism, which slices the computation across thousands of processors, becomes mandatory and creates a massive data-movement problem. Upscale AI advocates a distributed ecosystem in which the network is technology-agnostic, supporting a range of specialized ASICs including GPUs, TPUs, and custom hyperscaler XPUs. This architecture moves away from reactive, TCP-based recovery toward a lossless, RDMA-driven environment where the network proactively manages congestion to prevent computational stalls, ensuring every GPU cycle is spent productively and maximizing tokens per watt.
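To make the scale of that data-movement problem concrete, here is a back-of-the-envelope sketch using the standard communication-volume formula for a ring all-reduce, the collective commonly used to synchronize gradients in data-parallel training. The model size, GPU count, and function name are illustrative assumptions, not figures from the presentation.

```python
def ring_allreduce_bytes_per_gpu(num_params: float, num_gpus: int,
                                 bytes_per_param: int = 2) -> float:
    """Bytes each GPU must send (and receive) to all-reduce the gradients
    once, using the standard ring algorithm: 2 * (W-1)/W * N * bytes."""
    return 2 * (num_gpus - 1) / num_gpus * num_params * bytes_per_param

# Hypothetical example: a 70B-parameter model with fp16 gradients
# synchronized across 1,024 GPUs on every training step.
per_gpu = ring_allreduce_bytes_per_gpu(70e9, 1024)
print(f"{per_gpu / 1e9:.1f} GB per GPU per step")  # roughly 280 GB
```

At that volume per step, any packet loss or congestion-induced retransmission stalls the whole synchronized cluster, which is why the summary emphasizes lossless, proactively managed fabrics over reactive TCP recovery.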
To future-proof these investments, Upscale AI is developing a portfolio of scale-up and scale-out systems built on open standards such as Ethernet, SONiC, and UALink. Its scale-out systems leverage a partnership with NVIDIA's Spectrum-X, while its scale-up innovation involves purpose-built silicon and trays that support heterogeneous compute and memory pooling. A unified software stack based on SONiC gives neoclouds and enterprises a common substrate that simplifies operational onboarding. Ultimately, Upscale AI's mission is to move beyond today's homogeneous AI clusters, providing an open, standards-based fabric that lets diverse hardware interoperate seamlessly for the next decade of AI innovation.
Presented by Deepti Chandra, VP Product and Marketing. Recorded live at Networking Field Day 40 in San Jose on April 9, 2026. Watch the entire presentation at https://techfieldday.com/appearance/upscale-ai-presents-at-networking-field-day-40/ or visit https://TechFieldDay.com/event/nfd40 or https://upscale.ai for more information.