AI Changes in the Norm with Upscale AI
Networking Field Day 40 • 26m
Upscale AI, founded in 2025, recently emerged from stealth as a unicorn following $300 million in combined seed and Series A funding. With a team of industry veterans, Upscale AI is focused on building a clean-sheet networking architecture specifically for the backend and lean front-end of AI clusters. The speakers emphasize that traditional data center networking is a square peg in a round hole for AI: existing infrastructures were designed for general-purpose web traffic rather than the massive, synchronized communication required by models with billions of parameters trained on trillions of tokens.
The presentation details the shift from a simple client-server model to a distributed ecosystem in which the network acts as the nervous system of an intelligent manufacturing plant, or token factory. In this environment, the key performance indicators (KPIs) have shifted from bits per second to tokens per second and tokens per watt. As large language models (LLMs) outgrow the memory capacity of a single GPU, parallelism, slicing a model's computation across thousands of processors, becomes mandatory. This creates a massive data movement problem: any network synchronization stall or hot spot directly translates into idle compute time and lost revenue, making predictability and low latency table stakes rather than optional features.
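The relationship between network stalls and the tokens-per-second and tokens-per-watt KPIs described above can be illustrated with some back-of-the-envelope arithmetic. This is a simplified sketch, not a model Upscale AI presented; the function names, figures, and the assumption that stall time subtracts linearly from throughput are illustrative only.

```python
def effective_tokens_per_sec(peak_tokens_per_sec: float, stall_fraction: float) -> float:
    """Throughput after accounting for time the GPUs sit idle waiting on the network.

    Simplifying assumption: every second spent stalled on network
    synchronization produces zero tokens, so throughput scales linearly
    with the fraction of time actually spent computing.
    """
    return peak_tokens_per_sec * (1.0 - stall_fraction)


def tokens_per_watt(tokens_per_sec: float, cluster_power_watts: float) -> float:
    """Tokens generated per watt of power drawn (tokens per joule)."""
    return tokens_per_sec / cluster_power_watts


# Hypothetical cluster: 1,000 tokens/s at peak, drawing 10 kW.
peak = 1_000.0
power_w = 10_000.0

# With a 20% synchronization stall, a fifth of the power budget
# is spent producing nothing -- the "lost revenue" in the text.
tps = effective_tokens_per_sec(peak, stall_fraction=0.20)   # 800.0 tokens/s
tpw = tokens_per_watt(tps, power_w)                         # 0.08 tokens/W
print(tps, tpw)
```

Under these assumptions, cutting the stall fraction from 20% to 5% raises output from 800 to 950 tokens/s on the same hardware and power draw, which is why the presentation frames network predictability as a direct revenue lever rather than a tuning detail.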
To address these challenges, Upscale AI is developing a portfolio that includes both scale-up and scale-out solutions built on open standards like Ethernet, SONiC, and the Ultra Ethernet Consortium (UEC). Their scale-out systems are designed around a partnership with NVIDIA's Spectrum-X, while their scale-up innovation involves purpose-built silicon and trays to support heterogeneous compute environments. By focusing exclusively on AI traffic and removing the bloat of legacy enterprise features, Upscale AI aims to provide a reliable, predictive substrate that can proactively identify malfunctioning hardware. This architectural approach is intended to help operators maximize their token flywheel, ensuring that massive infrastructure investments yield the highest possible intelligence output per watt of power consumed.
Presented by Deepti Chandra, VP Product and Marketing. Recorded live at Networking Field Day 40 in San Jose on April 9, 2026. Watch the entire presentation at https://techfieldday.com/appearance/upscale-ai-presents-at-networking-field-day-40/ or visit https://TechFieldDay.com/event/nfd40 or https://upscale.ai for more information.