
Deploy State-of-the-Art Vision Models from GPU Lab to Edge Device
12 Weeks. Live Online Classes. Instructor-led.
- Train cutting-edge detection & segmentation models (YOLO-v8, Mask R-CNN)
- Compress, optimise & serve with ONNX Runtime and NVIDIA Triton
- Push to edge GPUs or Jetson devices with KServe + KEDA autoscaling
What You Will Learn
- Best-practice labelling, augmentation & synthetic data generation; reproduce a YOLO-v8 detection baseline and evaluate mAP and FPS.
- Fine-tune Mask R-CNN / DeepLab-v3+ on custom classes; apply mixed precision & gradient accumulation to fit large models on modest GPUs (see the training-loop sketch after this list).
- Export to ONNX, TensorRT and OpenVINO; prune & quantise to INT8; benchmark latency-versus-accuracy trade-offs to hit your target FPS and power budget.
- Wrap models in NVIDIA Triton with gRPC and add a FastAPI gateway with JWT auth; deploy to KServe, build Helm charts for Jetson / Orin devices, and configure KEDA for GPU auto-scaling.
- Capture inference metrics with Prometheus and build Grafana CV dashboards; detect data drift with Evidently-CV and auto-trigger retraining pipelines in Argo Workflows.
- Team project: a quality-inspection, safety-monitoring, or inventory-tracking solution; deliver a live edge demo, a cost analysis, and an 8-page design document to a hiring-manager panel.
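To make the mixed-precision and gradient-accumulation point above concrete, here is a minimal PyTorch sketch of the pattern. The tiny stand-in model, random data and the ACCUM_STEPS value are placeholders for illustration, not the course's reference code; you would drop in your own detector or segmentation network and dataloader.

```python
# Minimal sketch: mixed-precision training with gradient accumulation in PyTorch.
# The model, data and hyperparameters are placeholders, not course code.
import torch
from torch import nn
from torch.utils.data import DataLoader, TensorDataset

device = "cuda" if torch.cuda.is_available() else "cpu"

# Stand-in model and data; replace with your detection / segmentation setup.
model = nn.Sequential(nn.Flatten(), nn.Linear(3 * 64 * 64, 10)).to(device)
dataset = TensorDataset(torch.randn(256, 3, 64, 64), torch.randint(0, 10, (256,)))
loader = DataLoader(dataset, batch_size=8, shuffle=True)

optimizer = torch.optim.SGD(model.parameters(), lr=1e-2, momentum=0.9)
criterion = nn.CrossEntropyLoss()
scaler = torch.cuda.amp.GradScaler(enabled=(device == "cuda"))

ACCUM_STEPS = 4  # effective batch size = 8 * 4 = 32 on a modest GPU

optimizer.zero_grad(set_to_none=True)
for step, (images, targets) in enumerate(loader):
    images, targets = images.to(device), targets.to(device)
    # Autocast runs the forward pass in half precision where it is safe to do so.
    with torch.autocast(device_type=device, enabled=(device == "cuda")):
        loss = criterion(model(images), targets) / ACCUM_STEPS
    scaler.scale(loss).backward()          # gradients accumulate across micro-batches
    if (step + 1) % ACCUM_STEPS == 0:
        scaler.step(optimizer)             # unscale gradients and apply the update
        scaler.update()
        optimizer.zero_grad(set_to_none=True)
```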
Go beyond the notebook and put vision models where they matter: in the factory, on the drone, at the edge. Starting with dataset curation and YOLO-v8 fine-tuning, you’ll compress models with ONNX/TensorRT, serve them through NVIDIA Triton, and deploy to Jetson or KServe with auto-scaling GPUs. The capstone—an end-to-end detection or segmentation system—gives you portfolio proof for Computer-Vision or Edge-AI roles in manufacturing, mining, retail, and robotics.
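As a taste of the compress-and-benchmark step described above, the sketch below exports a pretrained YOLO-v8 checkpoint to ONNX, applies ONNX Runtime dynamic quantisation and times inference on the CPU provider. It assumes the ultralytics and onnxruntime packages are installed; the file names, input size and the choice of dynamic (rather than calibrated static or TensorRT) quantisation are simplifications for illustration only.

```python
# Illustrative sketch: export a pretrained YOLOv8 model to ONNX, quantise it
# with ONNX Runtime, and time inference. Paths and settings are placeholders.
import time
import numpy as np
import onnxruntime as ort
from onnxruntime.quantization import QuantType, quantize_dynamic
from ultralytics import YOLO

onnx_path = YOLO("yolov8n.pt").export(format="onnx")   # e.g. "yolov8n.onnx"

# Dynamic INT8 quantisation of the weights. Conv-heavy vision models usually
# benefit more from calibrated static quantisation or TensorRT, but this keeps
# the example self-contained.
quantize_dynamic(onnx_path, "yolov8n.int8.onnx", weight_type=QuantType.QInt8)

def mean_latency_ms(model_path: str, runs: int = 50) -> float:
    """Average single-image latency for an ONNX model on the CPU provider."""
    session = ort.InferenceSession(model_path, providers=["CPUExecutionProvider"])
    input_name = session.get_inputs()[0].name
    dummy = np.random.rand(1, 3, 640, 640).astype(np.float32)
    session.run(None, {input_name: dummy})              # warm-up run
    start = time.perf_counter()
    for _ in range(runs):
        session.run(None, {input_name: dummy})
    return (time.perf_counter() - start) / runs * 1000

print("FP32:", mean_latency_ms(onnx_path), "ms")
print("INT8:", mean_latency_ms("yolov8n.int8.onnx"), "ms")
```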
Who Should Enrol?
- You already train image models and now need to deploy object-detection or segmentation services that hit latency and GPU-cost targets.
- Level up from tabular models to large-scale visual data pipelines, augmentation strategies and active-learning loops.
- Build perception stacks that run reliably on edge devices and stream telemetry to the cloud for retraining.
- Learn how to host heavy vision models behind REST/gRPC gateways, scale them with autoscalers and cache results (a minimal gateway sketch follows this section).
- Gain the vocabulary and architectural patterns to scope, cost and roadmap vision features for mobile, web or embedded products.
In short: engineers who must bring computer-vision models all the way to production hardware.
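For a feel of the gateway pattern mentioned above, here is a rough sketch of a FastAPI endpoint that checks a JWT and forwards an image to NVIDIA Triton over gRPC. The Triton model name (yolov8), tensor names (images, output0), the 640x640 input size and the HS256 secret are assumptions for illustration, not the course's reference implementation.

```python
# Rough sketch of a FastAPI gateway in front of NVIDIA Triton (gRPC).
# Model/tensor names, input size and the JWT secret are illustrative only.
import io

import jwt                      # PyJWT
import numpy as np
import tritonclient.grpc as grpcclient
from fastapi import FastAPI, File, Header, HTTPException, UploadFile
from PIL import Image

JWT_SECRET = "change-me"                      # placeholder secret
triton = grpcclient.InferenceServerClient(url="localhost:8001")
app = FastAPI()

def require_jwt(authorization: str) -> None:
    """Reject the request unless a valid HS256 bearer token is supplied."""
    try:
        jwt.decode(authorization.removeprefix("Bearer "), JWT_SECRET, algorithms=["HS256"])
    except jwt.PyJWTError:
        raise HTTPException(status_code=401, detail="invalid token")

@app.post("/detect")
async def detect(file: UploadFile = File(...), authorization: str = Header(default="")):
    require_jwt(authorization)
    # Preprocess: RGB, resize to the assumed 640x640 input, NCHW float32 in [0, 1].
    image = Image.open(io.BytesIO(await file.read())).convert("RGB").resize((640, 640))
    batch = np.asarray(image, dtype=np.float32).transpose(2, 0, 1)[None] / 255.0

    infer_input = grpcclient.InferInput("images", batch.shape, "FP32")
    infer_input.set_data_from_numpy(batch)
    result = triton.infer(model_name="yolov8", inputs=[infer_input])
    # Return raw detections; postprocessing (NMS, class mapping) is application-specific.
    return {"outputs": result.as_numpy("output0").tolist()}
```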
Prerequisites
Solid Python and basic deep-learning knowledge. Free Deep Learning Core + Docker-K8s Mini-Camp bridge badges are included for all Intermediate-track graduates.
Career Pathways
- Design, train and deploy classification, detection and segmentation models at scale.
- Optimise and quantise models for mobile, IoT and robotics hardware with TensorRT or ONNX Runtime.
- Automate data versioning, model registry, CI/CD and real-time monitoring for high-throughput image pipelines (a minimal metrics sketch follows this list).
- Integrate perception models with control loops for drones, vehicles or industrial robots.
- Scope end-to-end vision systems, balance cloud vs. edge costs, and ensure GDPR/ethics compliance.
Graduates leave with a fully containerised vision service running on cloud & edge, a public GitHub repo, and recruiter-ready talking points that match the majority of “Computer-Vision Engineer” and “Edge-AI Engineer” roles advertised today.
Amir Charkhi
Technology leader | Adjunct Professor | Founder
With 20+ years across energy, mining, finance, and government, Amir turns real-world data problems into production AI. He specialises in MLOps, cloud data engineering, and Python, and now shares that know-how as founder of AI Tech Institute and adjunct professor at UWA, where he designs hands-on courses in machine learning and LLMs.
Advanced: Computer Vision @ Scale
12 Weeks. Live Online Classes. Next Cohort 2nd September
Frequently Asked Questions
- What are the prerequisites? Beginner courses: none; we start with Python basics. Intermediate & Advanced: the ability to write simple Python scripts and use Git is expected.
- How much time should I plan for each week? Plan on 8–10 hours: two 3-hour live sessions plus 2–4 hours of project work. Advanced tracks may require up to 10 hours for capstone milestones.
- What if I miss a live session? All sessions are recorded and posted within 12 hours, and you still have access to Slack/Discord to ask instructors questions.
- When does the next cohort start? New intakes launch roughly every 8 weeks. Each course page shows the exact start date and the “Apply-by” deadline.
- What equipment do I need? Just a laptop with Chrome/Firefox and a stable internet connection. All coding happens in cloud JupyterLab or VS Code Dev Containers, so there are no local installs.
- Is there a refund policy? Yes: a 100% refund until the end of Week 2, no questions asked. After that, pro-rata refunds apply if you need to withdraw for documented reasons.
- Can my employer pay, or can I pay in instalments? Absolutely. We issue invoices to companies and offer interest-free 3- or 6-month payment plans.
- What support will I get? Live Q&A in every session, 24-hour Slack response times from instructors, weekly office hours, and code reviews on your GitHub pull requests.