Shaun O’Meara, CTO at Mirantis, presented the company’s approach to simplifying GPU infrastructure with what he described as a “GPU Cloud in a Box.” The concept addresses operational bottlenecks that enterprises and service providers face when deploying GPU environments: fragmented technology stacks, resource scheduling difficulties, and lack of integrated observability. Rather than forcing customers to assemble and maintain a full hyperscaler-style AI platform, Mirantis packages a complete, production-ready system that can be deployed as a single solution and then scaled or customized as requirements evolve.
The design is centered on Mirantis k0rdent AI, a composable platform that converts racks of GPU servers into consumable services. Operators can partition GPU resources into tenant-aware allocations, apply policy-based access, and expose these resources through service catalogs aligned with existing cloud consumption models. Lifecycle automation for Kubernetes clusters, GPU-aware scheduling, and tenant isolation are embedded into the system, reducing the engineering burden that is typically required to make such environments reliable.
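The tenant-aware GPU allocation described above maps naturally onto standard Kubernetes primitives. As a rough sketch (this is not the k0rdent AI API itself, and the namespace and quota names are hypothetical), a per-tenant GPU cap can be expressed with a Kubernetes ResourceQuota on the NVIDIA extended resource:

```yaml
# Illustrative only: a standard Kubernetes ResourceQuota capping the
# GPUs one tenant namespace may request. Names are hypothetical.
apiVersion: v1
kind: ResourceQuota
metadata:
  name: tenant-a-gpu-quota
  namespace: tenant-a
spec:
  hard:
    # Limits the sum of nvidia.com/gpu requests across all pods
    # in the tenant-a namespace to 8 GPUs.
    requests.nvidia.com/gpu: "8"
```

A platform such as k0rdent AI would layer policy, catalogs, and lifecycle automation on top of mechanisms like this, so operators do not manage the quotas by hand.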
A live demonstration was presented by Anjelica Ambrosio, AI Developer Advocate. In the first demo, she walked through the customer experience with the Product Builder: a user logs into the Mirantis k0rdent AI self-service portal and provisions products within minutes, selecting from preconfigured service templates. The demo included creating a new cluster product, setting its parameters, and publishing it to the marketplace. Real-time observability dashboards displayed GPU utilization, job performance, and service health, illustrating how the platform turns what was once a multi-week manual integration effort into a repeatable, governed workflow. In the second demo, Anjelica showed the Product Builder from the operator's perspective, demonstrating how products are assembled from nodes and how their dependencies are configured in Graph View.
O’Meara explained that the “Cloud in a Box” model is not a closed appliance but a composable building block. It can be deployed in a data center, at an edge location, or within a hybrid model where a public cloud-hosted control plane manages distributed GPU nodes. Customers can adopt the system incrementally, beginning with internal workloads and later extending services to external markets or partners. This flexibility is particularly important for organizations pursuing sovereign cloud strategies, where speed of deployment, transparent governance, and monetization are essential.
The value is both technical and commercial. Technically, operators gain a validated baseline architecture that reduces common failure modes and accelerates time-to-service. Commercially, they can monetize GPU investments by offering consumption-based services that resemble hyperscaler offerings without requiring the same level of capital investment or staffing. O’Meara positioned the solution as a direct response to the core challenge confronting enterprises and service providers: transforming expensive GPU hardware into sustainable and revenue-generating AI infrastructure.
Presented by Shaun O’Meara, CTO, and Anjelica Ambrosio, Product Marketing Specialist, Mirantis. Recorded live on September 11, 2025, at AI Infrastructure Field Day 3 in Santa Clara, California. Watch the entire presentation at https://techfieldday.com/appearance/mirantis-presents-at-ai-infrastructure-field-day-3/ or visit https://www.mirantis.com or https://techfieldday.com/event/aiifd3/ for more information.