MLOps on Kubernetes: Infrastructure Patterns That Scale

Online Webinar | March 19, 2026 | 1:00 PM - 2:00 PM CT

Overview

MLOps on Kubernetes is evolving fast. Teams are running Kubeflow, MLflow, Ray, and custom training pipelines—all on top of Kubernetes clusters that weren't designed for ML workloads. The infrastructure layer becomes the bottleneck.

This webinar explores infrastructure patterns that scale for MLOps: how to structure clusters for training vs. inference, manage GPU resources efficiently, handle model versioning and rollback, and keep your ML platform from becoming its own ops nightmare.

What You'll Learn

  • Infrastructure patterns for ML training and inference on Kubernetes
  • How Codiac simplifies the orchestration layer for MLOps tools
  • GPU resource management: quotas, scheduling, and cost control
  • Model deployment patterns: A/B testing, canary rollouts, and rollback
  • Integrating with MLflow, Kubeflow, and other MLOps frameworks
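As a taste of the GPU quota topic above, here is a minimal sketch of the pattern: a Kubernetes ResourceQuota manifest that caps GPU requests in a training namespace, built as a plain Python dict. The `ml-training` namespace and the limit of 8 GPUs are illustrative assumptions, not details from the webinar; the `nvidia.com/gpu` resource name is what NVIDIA's device plugin exposes.

```python
def gpu_resource_quota(namespace: str, gpu_limit: int) -> dict:
    """Build a ResourceQuota manifest capping GPU requests in a namespace.

    The returned dict mirrors the YAML you would apply with kubectl.
    """
    return {
        "apiVersion": "v1",
        "kind": "ResourceQuota",
        "metadata": {"name": "gpu-quota", "namespace": namespace},
        "spec": {
            # NVIDIA's device plugin exposes GPUs as the extended
            # resource nvidia.com/gpu; quota values are strings.
            "hard": {"requests.nvidia.com/gpu": str(gpu_limit)},
        },
    }

# Example: cap the (hypothetical) ml-training namespace at 8 GPUs.
quota = gpu_resource_quota("ml-training", 8)
print(quota["spec"]["hard"])
```

Serialized to YAML and applied to the cluster, a quota like this keeps one team's training jobs from starving inference workloads of GPUs, one of the scheduling trade-offs the session covers.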

Location

Online (Zoom)

What to Expect

Technical deep-dive with architecture diagrams and live examples. Bring your MLOps questions!

Speakers

  • Ben Ghazi, Co-Founder, Codiac

Who Should Attend

  • ML Engineers & Data Scientists
  • MLOps & Platform Engineers
  • DevOps Engineers supporting ML infrastructure
  • Engineering leaders building ML platforms