Cloud Data Engineer

ABOUT CLIENT

Our client is a leading global technology company that provides a wide range of IT services and solutions. With a strong focus on innovation and digital transformation, our client helps businesses adapt to the ever-changing technological landscape. Their expertise in areas like cloud computing, cybersecurity, and AI makes them a valuable partner for organizations.

JOB DESCRIPTION

Develop and execute the data processing pipeline using Google Cloud Platform (GCP).
Collaborate with implementation teams throughout the project lifecycle to offer extensive technical proficiency for deploying enterprise-scale data solutions and leveraging contemporary data/analytics technologies on GCP.
Create data processing pipelines and architectures.
Automate DevOps procedures for all components of the data pipelines, ensuring seamless transition from development to production.
Translate business challenges into technical data problems while incorporating essential business drivers in coordination with product management.
Extract, load, transform, sanitize, and validate data.
Provide assistance and resolution for issues related to data pipelines.

JOB REQUIREMENT

Minimum of 4 years of experience in Data Engineering or a similar role
Strong Cloud-based Data Engineering experience in AWS, Azure, or GCP with at least 2 years of Cloud experience
Proficiency in GCP Cloud Data Engineering, including general infrastructure and data services such as BigQuery, Dataflow, Cloud Composer (Airflow), and Cloud Functions
Proficiency in AWS Cloud Data Engineering, including data pipeline technologies like Lake Formation, MWAA, EMR, and Glue, and storage technologies like S3
Proficiency in Azure Cloud Data Engineering, including Azure Data Lake Storage, Azure Databricks, Azure Data Factory, and Azure Synapse Analytics
Successful design and implementation of large and complex data solutions using various architectural patterns such as Microservices
Advanced skills in SQL and Python
Experience with DataOps
Experience in using DevOps on Cloud data platforms such as Terraform for Infrastructure as Code (IaC), GitOps, Docker, and Kubernetes
Strong educational background in Information Technology (IT) or Information and Communication Technology (ICT)
Ability to influence both technical and business peers and stakeholders
Fluent verbal communication in English
Experience in Marketing domains is preferred

WHAT'S ON OFFER

This position offers hybrid working arrangements, with three days working in the office and flexible hours.
Salary is negotiable based on candidate expectations.
Employees are entitled to 18 days of paid leave annually: 12 days of annual leave and 6 days of personal leave.
Insurance coverage is based on full salary; a 13th-month salary and performance bonuses are also provided.
A monthly meal allowance of 730,000 VND is provided.
Employees receive full salary and benefits from the start of employment.
Medical benefits are extended to the employee and their family.
The work environment is fast-paced, flexible, and multicultural with opportunities for travel to 49 countries.
The company provides complimentary snacks, refreshments, and parking facilities.
Internal training programs covering technical, functional, and English language skills are offered.
Regular working hours are from 08:30 AM to 06:00 PM, Monday to Friday, inclusive of meal breaks.

CONTACT

PEGASI – IT Recruitment Consultancy | Email: recruit@pegasi.com.vn | Tel: +84 28 3622 8666
We are PEGASI, an IT recruitment consultancy in Vietnam. If you are looking for a new opportunity in your career path, please visit our website www.pegasi.com.vn. Thank you!

Job Summary

Company Type:

Outsource

Technical Skills:

Data Engineering, Cloud, Google Cloud, ETL/ELT

Location:

Ho Chi Minh - Viet Nam

Working Policy:

Hybrid

Salary:

Negotiation

Job ID:

J01454

Status:

Closed

Related Job:

DevOps Engineer

Others - Viet Nam


Product

  • Devops
  • Kubernetes
  • Network

Operate and evolve our Kubernetes platform across multiple clusters and environments (Prod, Dev, hybrid on-prem and public cloud), covering control plane operations, node lifecycle, upgrades, and autoscaling at every layer (Cluster Autoscaler, HPA, KEDA).
Architect and manage hybrid cloud infrastructure spanning on-premises and public clouds (GCP, AWS), including workload placement, cross-cloud networking, and unified resource management.
Own the CI/CD and GitOps experience end-to-end: container build pipelines, image optimization, and progressive delivery via ArgoCD / FluxCD.
Own the observability stack as a single pane of glass across all clusters (Grafana, Mimir, Tempo, Loki, Pyroscope, OnCall, Prometheus) and help push toward agent-assisted SRE workflows.
Manage and improve our inference platform: vLLM serving and AIBrix for multi-model orchestration and autoscaling across a fleet of NVIDIA GPUs.
Operate platform services: Kafka, Redis, PostgreSQL, OpenSearch.
Manage identity and access via Keycloak integrated with Google Workspace; harden SSO, RBAC, and secrets management across the platform.
Harden network security across private load balancers, firewalls, and VPC segmentation; design and maintain hub-and-spoke / multi-AZ topologies.
Support training infrastructure: self-service VM provisioning, RunPod burst capacity, Weights and Biases integration.
Drive infrastructure reliability, cost efficiency, and capacity planning as the platform scales.

Negotiation


Platform Engineer

Ho Chi Minh - Viet Nam


Product

  • Backend
  • Devops
  • Data Engineering

Build and maintain distributed infrastructure handling telemetry, sensory, and control data across cloud and edge environments.
Design and operate data ingestion and streaming pipelines connecting robot fleets to the cloud in real time, covering video, joint states, audio, and LiDAR.
Develop and maintain backend services and APIs that power the Company's developer-facing platform, with a focus on reliability and developer experience.
Manage and evolve cloud-native infrastructure using Kubernetes, Docker, and infrastructure-as-code tooling.
Ensure platform reliability through monitoring, alerting, autoscaling, failover, and incident response.
Support ML and robotics teams with data infrastructure for training pipelines, policy rollout, and hardware-in-the-loop simulation.
Implement secure APIs with access control, rate limiting, and usage metering as we scale.

Negotiation


Software Engineer (Digital Twin)

Ho Chi Minh - Viet Nam


Product

  • Python
  • C/C++

Build and maintain high-fidelity digital twin environments for Asimov across MuJoCo, Isaac Sim, and Unreal Engine, calibrated to real hardware behavior.
Design and own the systems, not just the environments, that let locomotion, autonomy, and perception teams generate, validate, and iterate on simulation scenarios at scale.
Build pipelines for asset import, USD and MJCF workflows, sensor modeling, and real-to-sim calibration to keep digital twins synchronized with evolving hardware.
Develop photorealistic rendering pipelines in Unreal Engine for synthetic data generation and perception model training.
Work with hardware and mechatronics teams to model actuator dynamics, contact physics, and structural behavior, ensuring simulation parameters reflect physical ground truth.
Integrate digital twin environments with the Company's locomotion training pipeline (Cyclotron) and autonomy stack, enabling teams to run experiments and close the sim-to-real gap.
Contribute to the open-source Asimov simulation stack, including tooling, documentation, and reproducible environment workflows.

Negotiation
