AI Infrastructure Field Day 3
Jericho4: Enabling Distributed AI Computing Across Data Centers with Broadcom
20m
The Jericho4 Ethernet Fabric Router is a purpose-built platform for the next generation of distributed AI infrastructure. In this session, we will examine how Jericho4 pushes beyond traditional scaling limits, delivering unmatched bandwidth, integrated security, and true lossless performance while interconnecting more than one million XPUs across multiple data centers.
The presentation discusses the Jericho4 solution for scaling AI infrastructure across data centers. Because power and space constraints limit how large any single facility can grow, smaller data centers must be interconnected over high-speed networks. Jericho4 addresses the resulting challenges of load balancing, congestion control, traffic management, and security at scale with four key features. First, it can be built out as a single system of 36K ports acting as one non-blocking routing domain. Second, it provides high bandwidth through 3.2T hyper ports, a native fit for the large data flows characteristic of AI workloads. Third, its embedded deep buffer supports lossless RDMA interconnection over distances exceeding 100 kilometers. Finally, Jericho4 includes embedded security engines that secure traffic without impacting performance.
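To put the long-distance lossless claim in perspective, the back-of-the-envelope sketch below estimates the bandwidth-delay product of a single 3.2T hyper port stretched over 100 km of fiber, which is roughly how much in-flight data a switch must be able to buffer once flow control pushes back on the sender. The link rate and distance come from the talk; the ~5 µs/km fiber propagation delay is an assumed typical figure, not a Broadcom specification.

```python
# Rough buffer sizing for lossless RDMA over a long-haul span (illustrative only).
LINK_RATE_BPS = 3.2e12           # one 3.2T hyper port
DISTANCE_KM = 100                # inter-data-center distance cited in the session
FIBER_DELAY_US_PER_KM = 5        # assumed typical propagation delay in optical fiber

one_way_delay_s = DISTANCE_KM * FIBER_DELAY_US_PER_KM * 1e-6
rtt_s = 2 * one_way_delay_s      # pause/credit signalling needs a full round trip

# Bandwidth-delay product: bytes already in flight when back-pressure reaches the sender.
bdp_bytes = LINK_RATE_BPS * rtt_s / 8

print(f"RTT over {DISTANCE_KM} km: {rtt_s * 1e3:.1f} ms")
print(f"In-flight data to absorb for lossless operation: {bdp_bytes / 1e6:.0f} MB")
```

At 3.2 Tb/s and a 1 ms round trip, roughly 400 MB of traffic is already on the wire per port, which is why on-chip SRAM alone is not enough and a deep, HBM-class buffer becomes the enabler for lossless operation at these distances.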
The Jericho4 family offers several derivatives to suit different deployment scenarios, including modular and centralized systems. The architecture scales as a single system across various form factors, from compact boxes to disaggregated chassis, and scales further across a fabric. Hyper ports improve link utilization by avoiding per-flow hashing and the collisions it causes, which shortens training times. The deep buffer absorbs the bursty nature of AI workloads, minimizing congestion and keeping data transmission lossless even over long distances. The embedded security engine addresses security concerns by enabling point-to-point MACsec and end-to-end IPsec with no performance impact.
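The link-utilization point can be illustrated with a small, self-contained Python sketch (not Broadcom code): when a handful of large AI flows are hashed onto parallel member links, collisions leave some members congested while others sit idle, whereas a single aggregated hyper port shares its full capacity by construction. The member-link count and flow sizes are made up for illustration, and a seeded random choice stands in for the header-field hash that real ECMP/LAG placement would use.

```python
import random

random.seed(1)
NUM_MEMBER_LINKS = 4             # e.g. 4 x 800G members vs. one 3.2T hyper port
flow_sizes = [100] * 8           # eight equally sized elephant flows (arbitrary units)

# ECMP/LAG-style placement: each flow is pinned to one member link.
per_link_load = [0] * NUM_MEMBER_LINKS
for size in flow_sizes:
    per_link_load[random.randrange(NUM_MEMBER_LINKS)] += size

print("Hashed placement per member link:", per_link_load)
print("Busiest member:", max(per_link_load), "units; least loaded:", min(per_link_load))

# A hyper port has no per-flow pinning: all flows share the full aggregate capacity.
print("Hyper port aggregate load:", sum(flow_sizes), "units shared by one wide pipe")
```

The imbalance shown by the hashed placement is what slows collective operations, since training steps complete only when the slowest (most congested) link finishes.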
Presented by Henry (Xiguang) Wu, StrataDNX Product Marketing, Broadcom. Recorded live on September 10, 2025, at AI Infrastructure Field Day 3 in Santa Clara, California. Watch the entire presentation at https://techfieldday.com/appearance/broadcom-presents-at-ai-infrastructure-field-day-3/ or visit https://www.broadcom.com/ or https://techfieldday.com/event/aiifd3/ for more information.