Senior Lead Data Engineer

JOB DESCRIPTION

As the Data Engineer Lead, you will build useful and disruptive products, design and develop company applications, and coordinate a Data development team to deliver quality software and systems. You will provide technical input to product teams and stakeholders to support business and product decisions, and be part of creating great technology products. This is a challenging but rewarding opportunity that will broaden your experience at a fast-moving product startup. Join us to disrupt the logistics industry with your skills, abilities, and passion for technology.
Join our Data team which focuses on Deliveree's service quality and business insights.
As a Data Engineer Lead in the team, you will:
Lead our Data Team
Assemble large, complex data sets that meet functional / non-functional business requirements.
Build analytics tools that utilize the data pipeline to provide actionable insights into customer acquisition, operational efficiency and other key business performance metrics.
Transform Deliveree’s product data into a centralized Data Warehouse.
Work with the Backend, Product, CS, Business, Finance, and other teams on product logic to provide reports and dashboards.
Work on the development of our enterprise-grade ecosystem of Data Analytics products (pipelines, crawlers, analytics, distributed systems, real-time fraud checking, and more).

JOB REQUIREMENT

5+ years of experience as a data engineer, including at least 1 year in a leadership role
2-3 years of experience with object-oriented/functional scripting languages, primarily Python
Understanding of Linux
Working knowledge of SQL and experience with relational databases, including query authoring and familiarity with a variety of database systems.
Experience with PostgreSQL, Airflow, AsyncIO is a huge plus.
Experience with big data tools: Hadoop, Spark, Kafka, etc.

WHAT'S ON OFFER

REGIONAL COMPANY
An exciting opportunity to work with the fastest growing international logistics player.
International environment where you can work and learn with coworkers from different Southeast Asian markets.
Opportunities for onsite trips to our operating markets.
Relocation package to HCMC if relocating from another city or country.
FOOD & BEVERAGE
Free high quality office lunch buffet or restaurant menu.
All Day Coffee Station Machine with some of the best coffee beans around.
Free Late Dinner Menu from a Nearby Restaurant
Free Flow of Coffee and Drinks (Juice, Coke, Sprite, Red Bull)
All Day Free Snack
Every Friday Special Snack & Beers
COOL SPONSORSHIP
Sponsorship for a 6- or 12-month Gym Membership (2 floors above) to stay healthy and in shape!
Monthly Mobile Data Allowance
Company Sponsorship for Personal Laptop
BONUSES
New Product Launch Bonus Package
Loyalty Bonus Package
13th Month Salary
HEALTH & LEAVES
Annual Health Checkup
Attractive Healthcare Insurance Package
15 Days Paid Annual Leave
SOCIAL & ENTERTAINMENT
Welcome Deliveree T-Shirt
Welcome Gift Funky Toy (part of our longest-standing tradition)
Regular Team Social Events
Cool Entertainment Area (Guitar, Video Games, ...)  

CONTACT

PEGASI – IT Recruitment Consultancy | Email: recruit@pegasi.com.vn | Tel: +84 28 3622 8666
We are PEGASI – IT Recruitment Consultancy in Vietnam. If you are looking for a new opportunity in your career path, kindly visit our website www.pegasi.com.vn for reference. Thank you!

Job Summary

Company Type:

Product, Logistics & Supply Chain

Technical Skills:

Data Engineering

Location:

Ho Chi Minh - Viet Nam

Working Policy:

Salary:

$ 5,000 - $ 6,000

Job ID:

J01096

Status:

Closed

Related Job:

DevOps Engineer

Others - Viet Nam


Product

  • Devops
  • Kubernetes
  • Network

  • Operate and evolve our Kubernetes platform across multiple clusters and environments (Prod, Dev, hybrid on-prem and public cloud), covering control plane operations, node lifecycle, upgrades, and autoscaling at every layer (Cluster Autoscaler, HPA, KEDA).
  • Architect and manage hybrid cloud infrastructure spanning on-premises and public clouds (GCP, AWS), including workload placement, cross-cloud networking, and unified resource management.
  • Own the CI/CD and GitOps experience end-to-end: container build pipelines, image optimization, and progressive delivery via ArgoCD / FluxCD.
  • Own the observability stack as a single pane of glass across all clusters: Grafana, Mimir, Tempo, Loki, Pyroscope, OnCall, Prometheus -- and help push toward agent-assisted SRE workflows.
  • Manage and improve our inference platform: vLLM serving and AIBrix for multi-model orchestration and autoscaling across a fleet of NVIDIA GPUs.
  • Operate platform services: Kafka, Redis, PostgreSQL, OpenSearch.
  • Manage identity and access via Keycloak integrated with Google Workspace; harden SSO, RBAC, and secrets management across the platform.
  • Harden network security across private load balancers, firewalls, and VPC segmentation; design and maintain hub-and-spoke / multi-AZ topologies.
  • Support training infrastructure: self-service VM provisioning, RunPod burst capacity, Weights and Biases integration.
  • Drive infrastructure reliability, cost efficiency, and capacity planning as the platform scales.

Negotiable


Platform Engineer

Ho Chi Minh - Viet Nam


Product

  • Backend
  • Devops
  • Data Engineering

  • Build and maintain distributed infrastructure handling telemetry, sensory, and control data across cloud and edge environments.
  • Design and operate data ingestion and streaming pipelines connecting robot fleets to the cloud in real time, covering video, joint states, audio, and LiDAR.
  • Develop and maintain backend services and APIs that power the Company's developer-facing platform, with a focus on reliability and developer experience.
  • Manage and evolve cloud-native infrastructure using Kubernetes, Docker, and infrastructure-as-code tooling.
  • Ensure platform reliability through monitoring, alerting, autoscaling, failover, and incident response.
  • Support ML and robotics teams with data infrastructure for training pipelines, policy rollout, and hardware-in-the-loop simulation.
  • Implement secure APIs with access control, rate limiting, and usage metering as we scale.

Negotiable


Software Engineer (Digital Twin)

Ho Chi Minh - Viet Nam


Product

  • Python
  • C/C++

  • Build and maintain high-fidelity digital twin environments for Asimov across MuJoCo, Isaac Sim, and Unreal Engine, calibrated to real hardware behavior.
  • Design and own the systems -- not just the environments -- that let locomotion, autonomy, and perception teams generate, validate, and iterate on simulation scenarios at scale.
  • Build pipelines for asset import, USD and MJCF workflows, sensor modeling, and real-to-sim calibration to keep digital twins synchronized with evolving hardware.
  • Develop photorealistic rendering pipelines in Unreal Engine for synthetic data generation and perception model training.
  • Work with hardware and mechatronics teams to model actuator dynamics, contact physics, and structural behavior, ensuring simulation parameters reflect physical ground truth.
  • Integrate digital twin environments with the Company's locomotion training pipeline (Cyclotron) and autonomy stack, enabling teams to run experiments and close the sim-to-real gap.
  • Contribute to the open-source Asimov simulation stack, including tooling, documentation, and reproducible environment workflows.

Negotiable
