Shaun O’Meara, CTO at Mirantis, described the infrastructure layer that underpins Mirantis k0rdent AI. The IaaS stack is designed to manage bare metal, networking, and storage resources in a way that removes friction from GPU operations. It gives operators a tested foundation on which GPU servers can be rapidly added, tracked, and made available for higher-level orchestration.
O’Meara emphasized that Mirantis has long experience operating infrastructure at scale. This history informed a design that automates many of the tasks that traditionally consume engineering time. The stack handles bare metal provisioning, integrates with heterogeneous server and network vendors, and applies governance for tenancy and workload isolation. It includes validated drivers for GPU hardware, which reduces the risk of incompatibility and lowers the time to get workloads running.
Anjelica Ambrosio demonstrated how the stack works in practice. She created a new GPU cluster through the Mirantis k0rdent AI interface, with the system automatically discovering hardware, configuring network overlays, and assigning compute resources. The demo illustrated how administrators can track GPU usage down to the device level, observing both allocation and health data in real time. What would normally involve manual integration of provisioning tools, firmware updates, and network templates was shown as a guided workflow completed in minutes.
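As a rough sketch of what such a declarative, template-driven workflow can look like, the snippet below applies a Kubernetes-style custom resource describing a GPU cluster. Note that the API group, resource kind, and every field name here are hypothetical illustrations of the pattern, not the actual Mirantis k0rdent AI schema, which was not shown in detail in the presentation:

```shell
# Hypothetical sketch: declaring a GPU cluster as a Kubernetes-style
# custom resource. API group, kind, and field names are illustrative only.
kubectl apply -f - <<'EOF'
apiVersion: example.mirantis.com/v1alpha1   # illustrative API group
kind: GPUClusterDeployment                  # illustrative kind
metadata:
  name: training-cluster-01
spec:
  template: baremetal-gpu                   # a pre-validated cluster template
  nodes: 4                                  # bare-metal servers to discover
  gpusPerNode: 8
  network:
    overlay: vxlan                          # overlay configured automatically
EOF
```

The point of the pattern is that the operator states the desired end state (node count, GPU density, network overlay) and the platform's controllers handle discovery, firmware, and network templating, which is the manual work the demo replaced.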
O’Meara pointed out that the IaaS stack is not intended as a general-purpose cloud platform. It is narrowly focused on preparing infrastructure for GPU workloads and passing those resources upward into the PaaS layer. This focus reduces complexity but also introduces tradeoffs. Operators who need extensive support for legacy virtualization may need to run separate systems in parallel. However, for organizations intent on scaling AI, the IaaS layer provides a clear and efficient baseline.
By combining automation with vendor neutrality, the Mirantis approach reduces the number of unique integration points that operators must maintain. This lets smaller teams manage environments that previously demanded much larger staff. O’Meara concluded that the IaaS layer is what makes the higher levels of Mirantis k0rdent AI possible, giving enterprises a repeatable way to build secure, observable, and tenant-aware GPU foundations.
Presented by Shaun O’Meara, CTO, and Anjelica Ambrosio, Product Marketing Specialist, Mirantis. Recorded live on September 11, 2025, at AI Infrastructure Field Day 3 in Santa Clara, California. Watch the entire presentation at https://techfieldday.com/appearance/mirantis-presents-at-ai-infrastructure-field-day-3/ or visit https://www.mirantis.com or https://techfieldday.com/event/aiifd3/ for more information.