MLOps Engineer

ABOUT CLIENT

Our client is a leading research company specializing in technology innovation.

JOB DESCRIPTION

Develop and maintain PyTorch training and inference pipelines, including DDP support, mixed-precision training, checkpointing, experiment versioning, and reproducible evaluation workflows.
Own and advance inference-serving infrastructure built on vLLM and SGLang, with a focus on debugging inference-stack components such as tool-call and reasoning parsers and optimizing for throughput and latency.
Build and maintain robust Python and C++ tooling supporting the full training lifecycle, from data ingestion to model release.
Optimize compute workloads for bare-metal environments, encompassing CPU/GPU utilization, memory bandwidth, and I/O throughput.
Address low-level networking issues, distributed training errors, and hardware bottlenecks across NCCL, MPI, and high-speed interconnects like InfiniBand and RoCE.
Set up and manage ML environments, covering containers, package management, GPU drivers, and runtime configurations.
Establish CI/CD patterns for AI workloads, covering training, evaluation, quantization, and model release workflows.
Integrate monitoring, alerting, anomaly detection, and incident response for both training jobs and inference services.
Contribute to shared platform capabilities across reliability, observability, and cost management.
Develop and maintain scalable runtime infrastructure for model-backed services and APIs, including support for LLM-backed APIs, MCP servers, and agentic systems.
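As an illustration of the checkpointing and experiment-versioning work described above, here is a minimal, framework-agnostic sketch in pure Python. The function and file names are illustrative, not from any specific stack: the experiment ID is derived by hashing the canonical training config (so identical configs map to the same run), and checkpoints are written atomically via a temp file plus rename so a crash never leaves a partial file.

```python
import hashlib
import json
import os
import tempfile

def experiment_id(config: dict) -> str:
    """Derive a stable ID from the training config, so runs are versioned reproducibly."""
    canonical = json.dumps(config, sort_keys=True)  # key order must not affect the hash
    return hashlib.sha256(canonical.encode()).hexdigest()[:12]

def save_checkpoint(state: dict, path: str) -> None:
    """Write atomically: dump to a temp file, then rename over the target."""
    directory = os.path.dirname(path) or "."
    fd, tmp = tempfile.mkstemp(dir=directory)
    with os.fdopen(fd, "w") as f:
        json.dump(state, f)
    os.replace(tmp, path)  # atomic on POSIX filesystems

def load_checkpoint(path: str) -> dict:
    with open(path) as f:
        return json.load(f)
```

Real pipelines would serialize model and optimizer state with framework-native formats rather than JSON, but the versioning and atomic-write pattern carries over unchanged.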

JOB REQUIREMENT

Proficiency in PyTorch internals, including DDP, FSDP, mixed precision training, TorchScript, and torch.compile.
Strong programming skills in Python and C++, with the ability to understand and modify unfamiliar codebases.
Solid understanding of computer science basics including data structures, concurrency, operating systems, and memory management.
Practical experience with vLLM and SGLang for production inference serving, including models quantized to formats such as FP8, INT8, and NVFP4.
Experience with RLHF and PPO training pipelines, including frameworks like veRL and TRL, and integration of reward models.
Solid understanding of distributed training setups, networking, and interconnects including NCCL, MPI, InfiniBand, and RoCE.
Experience in debugging and optimizing bare-metal Linux servers, including kernel parameters, NUMA topology, and GPU driver configuration.
Familiarity with job schedulers such as Airflow and experience in operating production-grade distributed infrastructure.
Strong understanding of containerized and cloud-native environments using Docker and Kubernetes.
Familiarity with ML compiler stacks such as LLVM, MLIR, TensorRT, or XLA.
Knowledge of model quantization techniques and deployment optimization, including GPTQ, AWQ, and bitsandbytes.
Contributions to open source ML projects, including PyTorch, vLLM, SGLang, or related inference and training tooling.
Experience with infrastructure-as-code tools such as Ansible, Terraform, or Nix for reproducible cluster setup.
Experience with custom or on-premise deployments, local clusters, or edge inference.
Familiarity with observability stacks like Prometheus, Grafana, or OpenTelemetry applied to training and inference workloads.
Experience building infrastructure for agentic systems including secure tool access, orchestration, and isolation boundaries.
Passion for clean, well-documented code and detail-oriented engineering.
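The quantization formats named above (INT8, FP8, NVFP4) shrink memory and bandwidth by mapping floating-point weights to low-bit values plus a scale factor. A framework-free sketch of symmetric per-tensor INT8 quantization follows; production methods such as GPTQ and AWQ choose scales far more carefully (per-channel, activation-aware), but the round-trip below shows the core arithmetic. All names are illustrative.

```python
def quantize_int8(weights: list[float]) -> tuple[list[int], float]:
    """Symmetric per-tensor INT8: scale by max |w|, round to the range [-127, 127]."""
    max_abs = max(abs(w) for w in weights)
    scale = max_abs / 127.0 if max_abs > 0 else 1.0
    q = [max(-127, min(127, round(w / scale))) for w in weights]
    return q, scale

def dequantize_int8(q: list[int], scale: float) -> list[float]:
    """Recover approximate floats; error per element is bounded by scale / 2."""
    return [v * scale for v in q]
```

The reconstruction error per weight is at most half the scale, which is why outlier weights (which inflate max |w|) are the main obstacle that per-channel and activation-aware schemes work around.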

WHAT'S ON OFFER

Work remotely in an environment that promotes open-source collaboration
Enjoy 14 days of leave and unlimited sick days
Access to GPUs, AI credits, opportunities for fast career progression, and other perks.

CONTACT

PEGASI – IT Recruitment Consultancy | Email: recruit@pegasi.com.vn | Tel: +84 28 3622 8666
We are PEGASI – IT Recruitment Consultancy in Vietnam. If you are looking for a new opportunity in your career path, please visit our website www.pegasi.com.vn for reference. Thank you!

Job Summary

Company Type: Product
Technical Skills: Machine Learning, DevOps
Location: Ho Chi Minh - Viet Nam
Working Policy: Remote
Salary: Negotiation
Job ID: J01855
Status: Active
