Are you ready to meet the demands of tomorrow? Evaluate your approach to skilling.

Productionise ML Models with Confidence

12 Weeks. Live Online Classes. Next Cohort September 2025


Establish robust MLOps practices

Engineer production-ready ML pipelines

Implement model deployment and automation

What Will You Learn?

Ship models that stay healthy long after launch. This cohort covers reproducible pipelines with DVC and feature stores, multi-model serving via FastAPI, TorchServe and Triton, and GitHub Actions CI/CD that pushes blue-green releases to KServe. Observability with Prometheus, drift alerts using Evidently, and rollback playbooks prepare you for real-world incidents. Complete the industry capstone and walk into interviews as an ML Engineer, MLOps Engineer or Model Deployment Specialist.

  • Refactor notebooks into modular pipelines with Pydantic & Hydra.
    • Track code, data and environment snapshots using DVC + Git branches.
    • Stand up a Feature Store (Feast) to guarantee offline/online parity and eliminate “it-worked-on-my-laptop” bugs.
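
The typed-config idea behind Pydantic and Hydra can be sketched with nothing but a stdlib dataclass; the class, field names, and values below are illustrative, not the course's actual code:

```python
from dataclasses import dataclass

# Hypothetical training config. Pydantic models and Hydra configs give the
# same pattern with richer validation and CLI overrides; a frozen dataclass
# shows the core idea: typed, validated, immutable run parameters.
@dataclass(frozen=True)
class TrainConfig:
    learning_rate: float
    batch_size: int
    dvc_data_rev: str  # pin the DVC data revision so runs are reproducible

    def __post_init__(self):
        if self.learning_rate <= 0:
            raise ValueError("learning_rate must be positive")
        if self.batch_size < 1:
            raise ValueError("batch_size must be >= 1")

cfg = TrainConfig(learning_rate=3e-4, batch_size=32, dvc_data_rev="v1.2.0")
```

Pinning the data revision alongside the hyperparameters is what lets a Git branch plus DVC reproduce a run exactly.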

  • Containerise models with multi-stage Docker builds 70% smaller than defaults.
    • Compare FastAPI, TorchServe, Triton & ONNX Runtime; choose REST vs gRPC for latency budgets.
    • Load-test endpoints with Locust; set SLAs for p95 latency.
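
To make a p95 SLA concrete, here is a stdlib sketch that computes the 95th-percentile latency from raw timing samples of the kind a Locust run produces; the sample values and the 150 ms budget are made up:

```python
import statistics

def p95_ms(latencies_ms):
    """95th-percentile latency (ms) from raw samples."""
    # statistics.quantiles with n=100 returns the 1st..99th percentiles;
    # index 94 is the 95th.
    return statistics.quantiles(latencies_ms, n=100)[94]

# Made-up samples: mostly fast responses with a slow tail, the shape a
# load test typically produces.
samples = [12, 15, 14, 13, 200, 16, 18, 14, 13, 15] * 10
SLA_MS = 150
meets_sla = p95_ms(samples) <= SLA_MS  # the slow tail blows the budget
```

Note how a mean of these samples would look healthy; the p95 is what exposes the tail, which is why SLAs are set on percentiles rather than averages.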

  • Build a GitOps pipeline with GitHub Actions that trains, tests, and signs models on every PR.
    • Orchestrate DAGs in Argo Workflows / Kubeflow Pipelines; promote models through dev → staging → prod with manual approvals.
    • Integrate MLflow Registry for model versioning and canary tags.
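
The dev → staging → prod gating can be sketched in a few lines of plain Python. MLflow's registry expresses the same policy through stage transitions; the `promote` helper below is purely illustrative:

```python
# Toy promotion gate: versions move dev -> staging -> prod strictly in
# order, and prod requires explicit approval.
STAGES = ["dev", "staging", "prod"]

def promote(current: str, target: str, approved: bool = False) -> str:
    if STAGES.index(target) != STAGES.index(current) + 1:
        raise ValueError(f"cannot jump from {current} to {target}")
    if target == "prod" and not approved:
        raise PermissionError("prod promotion needs manual approval")
    return target

stage = promote("dev", "staging")
stage = promote(stage, "prod", approved=True)
```

In a real pipeline the `approved` flag maps to a manual-approval step in the orchestrator, so no model reaches prod without a human sign-off.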

  • Instrument live services with Prometheus + Grafana; capture throughput, latency, and hardware metrics.
    • Detect drift and data quality issues using Evidently AI and auto-trigger retraining jobs.
    • Implement audit trails & PII redaction to satisfy ISO 27001 & SOC-2 requirements.
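
One common drift statistic is the Population Stability Index; the stdlib sketch below computes it over pre-binned feature histograms. Evidently implements richer statistical tests, so treat this as an illustration of the idea, not its API, and the bin counts as made-up numbers:

```python
import math

def psi(expected_counts, actual_counts):
    """Population Stability Index over two pre-binned histograms.

    Common rule of thumb: < 0.1 stable, 0.1-0.25 moderate drift,
    > 0.25 significant drift.
    """
    e_total, a_total = sum(expected_counts), sum(actual_counts)
    score = 0.0
    for e, a in zip(expected_counts, actual_counts):
        # Floor empty bins so the log stays defined.
        e_frac = max(e / e_total, 1e-6)
        a_frac = max(a / a_total, 1e-6)
        score += (a_frac - e_frac) * math.log(a_frac / e_frac)
    return score

baseline = [50, 30, 20]   # feature histogram at training time
live = [20, 30, 50]       # same bins over production traffic
drifted = psi(baseline, live) > 0.25  # crosses the threshold: retrain
```

In production, crossing the threshold is what auto-triggers the retraining job rather than paging a human first.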

  • Deploy on KServe (Knative) and configure KEDA for event-driven GPU auto-scaling.
    • Consolidate multiple small models into a single multi-model endpoint to slash hosting costs by up to 60%.
    • Benchmark serverless options (AWS SageMaker Serverless, Azure ML Managed Endpoints, Google Vertex AI) against self-managed Kubernetes.
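
The multi-model consolidation pattern boils down to one service that routes requests by model name, so hosting cost tracks traffic rather than model count. `MultiModelEndpoint` and the lambda "models" below are hypothetical stand-ins for what Triton or SageMaker multi-model endpoints do at scale:

```python
# Toy multi-model endpoint: one process holds many small models and routes
# each request by name.
class MultiModelEndpoint:
    def __init__(self):
        self._models = {}  # model name -> callable predictor

    def register(self, name, model):
        self._models[name] = model

    def predict(self, name, payload):
        if name not in self._models:
            raise KeyError(f"model {name!r} is not loaded")
        return self._models[name](payload)

endpoint = MultiModelEndpoint()
endpoint.register("churn", lambda score: score > 0.5)
endpoint.register("ltv", lambda monthly: monthly * 12)
annual = endpoint.predict("ltv", 100)
```

Real implementations add lazy loading and LRU eviction so rarely-used models don't hold GPU memory, which is where the cost savings come from.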

  • Build an end-to-end MLOps pipeline: data ingest → training → registry → CI/CD → deployed micro-service with live drift alerts.
    • Produce architecture diagrams, Helm charts, and a run-book for blue-green rollbacks.
    • Pitch solution and ROI analysis to a panel of hiring managers; receive line-by-line feedback and a reference letter.
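
The capstone flow can be prototyped as plain functions before any orchestrator is wired in; every function body below is a stand-in, named for illustration only:

```python
# Capstone skeleton: each pipeline stage is a plain function, so the whole
# flow can be unit-tested before Argo/Kubeflow orchestration is added.
def ingest():
    return [1.0, 2.0, 3.0]                  # stand-in for a data pull

def train(data):
    return {"mean": sum(data) / len(data)}  # stand-in for model fitting

def register(model, registry):
    registry.append(model)                  # stand-in for registry push
    return len(registry)                    # new model version number

def deploy(model):
    return lambda x: x - model["mean"]      # stand-in for a serving endpoint

registry = []
model = train(ingest())
version = register(model, registry)
serve = deploy(model)
```

Keeping stages as pure functions makes each one independently testable, which is exactly what the CI/CD stage of the pipeline exercises on every PR.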

Who Should Enrol?

Engineers and data scientists who already train models but now need to ship, scale, and maintain them in real production environments.

Prerequisites
Comfortable with Python and Git, basic ML (classification/regression) knowledge, and ~8–10 hrs/week for live sessions and project work. Completing our ML & Cloud First Look recorded course, or equivalent experience, is strongly advised.

  • Eager to own the full lifecycle beyond the notebook.

  • Tasked with adding ML features to products.

  • Required to incorporate model artefacts into existing CI/CD and observability stacks.

  • Targeting titles that include “ML Engineer” or “MLOps Engineer.”

Career Pathways

  • Design reproducible pipelines, package models, and expose scalable APIs consumed by product teams.

  • Automate CI/CD, orchestrate on Kubernetes, monitor drift & performance, and own rollback strategies.

  • Select serving frameworks, optimise inference, and slash hosting costs via multi-model endpoints.

  • Translate product requirements into ML micro-services, quantify ROI, and drive iterative releases.

  • Integrate model artefacts into existing data platforms, enforce lineage, governance, and compliance.

Graduates leave with a portfolio, a GitHub repo, and recruiter-friendly talking points aligned to entry-level requisitions.

Amir Charkhi
Technology leader | Adjunct Professor | Founder

With 20+ years across energy, mining, finance, and government, Amir turns real-world data problems into production AI. He specialises in MLOps, cloud data engineering, and Python, and now shares that know-how as founder of AI Tech Institute and adjunct professor at UWA, where he designs hands-on courses in machine learning and LLMs.

Intermediate: ML Engineering Course

12 Weeks. Live Online Classes. Next Cohort 2nd September

Intermediate: Machine Learning Engineering
$3,950.00

Frequently Asked Questions

  • Beginner courses: none; we start with Python basics.
    Intermediate & Advanced: you should be able to write simple Python scripts and use Git.

  • Plan on 8–10 hours per week: 2× 3-hour live sessions and 2–4 hours of project work. Advanced tracks may require up to 10 hours for capstone milestones.

  • All sessions are recorded and posted within 12 hours. You’ll still have access to Slack/Discord to ask instructors questions.

  • New intakes launch roughly every 8 weeks. Each course page shows the exact start date and the “Apply-by” deadline.

  • Just a laptop with Chrome/Firefox and a stable internet connection. All coding happens in cloud JupyterLab or VS Code Dev Containers—no local installs.

  • Yes. 100% refund until the end of Week 2, no questions asked. After that, pro-rata refunds apply if you need to withdraw for documented reasons.

  • Absolutely. We issue invoices to companies and offer interest-free 3- or 6-month payment plans.

  • Live Q&A in every session, 24-hour Slack response time from instructors, weekly office-hours, and code reviews on your GitHub pull requests.