
Build Real-World AI Solutions & Deploy at Scale

12 Weeks. Live Online Classes. Next Cohort September 2025


Master Prompt Engineering & RAG

Develop ML & Deep Learning models

Deploy scalable AI services and APIs

What You Will Learn

  • Refresh advanced SQL & feature engineering, then wire them into a Feature Store using Feast.
  • Track every experiment with MLflow (see the sketch below); set up reproducible conda/poetry environments.
  • Design A/B-style offline/online parity tests to avoid training-serving skew.
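
To make the MLflow habit concrete, here is a minimal sketch of logging one training run; it assumes MLflow and scikit-learn are installed, and the experiment name, model, and metric are illustrative placeholders rather than course materials.

    import mlflow
    from sklearn.datasets import make_classification
    from sklearn.linear_model import LogisticRegression
    from sklearn.metrics import roc_auc_score
    from sklearn.model_selection import train_test_split

    X, y = make_classification(n_samples=500, random_state=42)
    X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=42)

    mlflow.set_experiment("churn-baseline")        # hypothetical experiment name
    with mlflow.start_run():
        model = LogisticRegression(C=1.0, max_iter=500).fit(X_train, y_train)
        auc = roc_auc_score(y_test, model.predict_proba(X_test)[:, 1])
        mlflow.log_param("C", 1.0)                 # the hyper-parameter used
        mlflow.log_metric("test_auc", auc)         # the evaluation result
        mlflow.sklearn.log_model(model, "model")   # the fitted model artifact

Running "mlflow ui" afterwards shows every run with its parameters, metrics, and artifacts side by side, which is what makes experiments reproducible and comparable.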

  • Build modular pipelines with scikit-learn, XGBoost, and Optuna hyper-parameter sweeps (see the sketch below).
  • Refactor notebooks into Python packages; enforce type-safety with Pydantic.
  • Spin up a lightweight CI workflow on GitHub Actions that runs unit tests & smoke models on every PR.
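
As a taste of the Optuna sweeps mentioned above, here is a hedged sketch of tuning an XGBoost classifier; the search space, trial count, and synthetic data are assumptions made purely for illustration.

    import optuna
    from sklearn.datasets import make_classification
    from sklearn.model_selection import cross_val_score
    from xgboost import XGBClassifier

    X, y = make_classification(n_samples=500, random_state=0)

    def objective(trial: optuna.Trial) -> float:
        # Each trial samples one candidate configuration from the search space.
        params = {
            "max_depth": trial.suggest_int("max_depth", 2, 8),
            "learning_rate": trial.suggest_float("learning_rate", 1e-3, 0.3, log=True),
            "n_estimators": trial.suggest_int("n_estimators", 50, 300),
        }
        model = XGBClassifier(**params, eval_metric="logloss")
        return cross_val_score(model, X, y, cv=3, scoring="roc_auc").mean()

    study = optuna.create_study(direction="maximize")   # maximise cross-validated AUC
    study.optimize(objective, n_trials=20)
    print(study.best_params)

Packaged as a module with tests, a trimmed-down version of a sweep like this can also serve as the smoke model that GitHub Actions runs on every PR.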

  • Fine-tune Transformer models on Hugging Face; plug in hosted models from OpenAI / AWS Bedrock / Azure OpenAI.
  • Master embeddings, vector search, and RAG architecture with pgvector / Pinecone / Chroma (see the retrieval sketch below).
  • Develop robust prompt chains with LangChain and evaluate them with guard-rails for bias & toxicity.
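
The retrieval half of RAG fits in a few lines; below is a hedged sketch using Chroma's in-memory client and default embedding function, where the collection name, documents, and query are invented for illustration.

    import chromadb

    client = chromadb.Client()                    # in-memory client, no server needed
    collection = client.create_collection("kb")   # hypothetical collection name

    collection.add(
        ids=["doc-1", "doc-2"],
        documents=[
            "Customers can cancel a subscription from the billing page.",
            "Refunds are processed within five business days.",
        ],
    )

    # Embed the query, fetch the closest chunk, and hand it to the prompt chain.
    hits = collection.query(query_texts=["How do refunds work?"], n_results=1)
    print(hits["documents"][0][0])

In the full pipeline the retrieved chunk is injected into a LangChain prompt template, and the generated answer passes through guard-rail checks before it reaches the user.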

  • Wrap models in FastAPI & async endpoints, auto-documented with OpenAPI / Swagger (see the sketch below).
  • Add caching & rate-limiting; secure with JWT & OAuth.
  • Publish a versioned Python client SDK so front-end and mobile teams can integrate in hours, not days.
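
A minimal sketch of that FastAPI wrapper pattern, with Pydantic models defining the request/response contract; the route, fields, and scoring logic are placeholders, not the production service built in the course.

    from fastapi import FastAPI
    from pydantic import BaseModel

    app = FastAPI(title="Churn Scoring API", version="0.1.0")

    class ScoreRequest(BaseModel):
        tenure_months: int
        monthly_spend: float

    class ScoreResponse(BaseModel):
        churn_probability: float

    @app.post("/score", response_model=ScoreResponse)
    async def score(req: ScoreRequest) -> ScoreResponse:
        # Stand-in for a real model call; in practice the model loads at startup.
        prob = min(0.99, 0.02 * req.monthly_spend / max(req.tenure_months, 1))
        return ScoreResponse(churn_probability=round(prob, 3))

Because the schemas are declared with Pydantic, FastAPI serves interactive OpenAPI/Swagger docs at /docs automatically, and that same spec is what a versioned client SDK can be generated from.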

  • Containerise services with Docker; use multi-stage builds for 70% smaller images.
  • Deploy to Kubernetes (EKS, AKS, or GKE) with Helm charts; leverage KEDA for event-driven auto-scaling.
  • Roll out blue-green & canary releases via Argo Rollouts; monitor latency & drift with Prometheus + Grafana (see the sketch below).
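
Monitoring is easiest to picture with a tiny example: a hedged sketch of exposing a request-latency histogram with the prometheus_client library, the kind of signal the Grafana dashboards in this module chart; the metric name, port, and simulated workload are assumptions.

    import random
    import time

    from prometheus_client import Histogram, start_http_server

    REQUEST_LATENCY = Histogram(
        "inference_latency_seconds",       # hypothetical metric name
        "Time spent scoring one request",
    )

    @REQUEST_LATENCY.time()                # records each call's duration
    def score_request() -> float:
        time.sleep(random.uniform(0.01, 0.05))   # stand-in for real model inference
        return random.random()

    if __name__ == "__main__":
        start_http_server(8001)            # metrics served at :8001/metrics
        while True:
            score_request()

Prometheus scrapes that endpoint, Grafana charts it, and the same latency signals can drive canary analysis during Argo Rollouts releases.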

  • Team project: deliver a production-ready AI service (e.g., real-time churn prediction + LLM explainer) accessible at a public HTTPS endpoint.
  • Produce an architecture doc, cost-of-ownership analysis, and post-mortem run-book.
  • Pitch to a panel of hiring managers & receive a written reference you can attach to applications.

Bridge the gap between notebooks and revenue-generating AI services. Over 12 weeks you’ll refactor ML code into modular packages, fine-tune transformers for Gen-AI features, and deploy FastAPI micro-services in Docker and Kubernetes with autoscaling. MLflow tracking, LangChain RAG pipelines and KEDA cost controls round out the stack. Graduates leave with a live, documented API and the skills employers demand for AI Engineer, ML Engineer or MLOps-heavy product teams.

Who Should Enrol?

Software & data professionals ready to turn notebooks into cloud-scale AI products.

Prerequisites
Solid Python, basic ML knowledge (regression/classification), comfort with Git. Completion of our Python & Git Kick-start plus ML & Cloud First Look (or equivalent) is recommended.

  • Software engineers moving toward ML-powered micro-services.

  • Data scientists who can train models but need deployment & DevOps discipline.

  • Data professionals tasked with owning machine-learning workloads.

  • Anyone aiming for an AI Engineer or MLOps Engineer role within a year.

Career Pathways

  • Design, train, and deploy ML & Gen-AI models; own APIs that power product features.

  • Build RAG systems, fine-tune LLMs, and integrate guard-rails for safe enterprise use.

  • Automate CI/CD for models, orchestrate on Kubernetes, monitor drift & performance in production.

  • Collaborate with PMs to translate user problems into AI micro-services with measurable ROI.

  • Architect scalable, cost-efficient AI solutions on AWS, Azure, or GCP; advise on best-practice IaC & security.

Graduates leave with a portfolio, a GitHub repo, and recruiter-friendly talking points aligned to entry-level requisitions.

Amir Charkhi
Technology leader | Adjunct Professor | Founder

With 20+ years across energy, mining, finance, and government, Amir turns real-world data problems into production AI. He specialises in MLOps, cloud data engineering, and Python, and now shares that know-how as founder of AI Tech Institute and adjunct professor at UWA, where he designs hands-on courses in machine learning and LLMs.

Intermediate: AI Engineering Course

12 Weeks. Live Online Classes. Next Cohort 2nd September

Intermediate: AI Engineering
$3,950.00

Frequently Asked Questions

  • What prerequisites do I need? Beginner courses: none; we start with Python basics.
    Intermediate & Advanced: the ability to write simple Python scripts and use Git is expected.

  • How much time should I budget each week? Plan on 8–10 hours: two 3-hour live sessions plus 2–4 hours of project work. Advanced tracks may require up to 10 hours for capstone milestones.

  • What if I miss a live session? All sessions are recorded and posted within 12 hours. You’ll still have access to Slack/Discord to ask instructors questions.

  • How often do new cohorts start? New intakes launch roughly every 8 weeks. Each course page shows the exact start date and the “Apply-by” deadline.

  • What hardware or software do I need? Just a laptop with Chrome/Firefox and a stable internet connection. All coding happens in cloud JupyterLab or VS Code Dev Containers; no local installs.

  • Is there a refund policy? Yes: a 100% refund until the end of Week 2, no questions asked. After that, pro-rata refunds apply if you need to withdraw for documented reasons.

  • Can my employer pay, or can I pay in instalments? Absolutely. We issue invoices to companies and offer interest-free 3- or 6-month payment plans.

  • What support do I get between sessions? Live Q&A in every session, 24-hour Slack response time from instructors, weekly office hours, and code reviews on your GitHub pull requests.