Big Data Engineer

JOB DESCRIPTION

Select and integrate the Big Data tools and frameworks required to deliver the requested capabilities.
Implement ETL processes that move data from OLTP databases into an OLAP database and data lake, using event-streaming platforms such as Kafka.
Develop and transform large datasets, and maintain robust, high-performance data pipelines that support a variety of use cases.
Monitor performance and advise on any necessary infrastructure changes.
Define data retention and data governance policies and frameworks.
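As a rough illustration of the OLTP-to-OLAP work described above, the sketch below flattens a normalized order row into a star-schema fact record with surrogate dimension keys. It is a hypothetical, self-contained example; all class, field, and key names are illustrative assumptions, not the company's actual pipeline.

```java
// Hypothetical sketch of an OLTP -> OLAP transform step.
// Names (OltpOrder, OrderFact, customerKeyLookup) are illustrative assumptions.
import java.time.LocalDate;
import java.util.Map;

public class OrderEtl {

    // Source shape: a normalized OLTP row (order joined to its customer).
    record OltpOrder(long orderId, long customerId, String country,
                     LocalDate orderDate, double amount) {}

    // Target shape: a denormalized fact row keyed by surrogate dimension keys.
    record OrderFact(long orderId, int dateKey, long customerKey, double amount) {}

    // Encode the order date as a yyyyMMdd integer date-dimension key,
    // a common convention in dimensional models.
    static int toDateKey(LocalDate d) {
        return d.getYear() * 10_000 + d.getMonthValue() * 100 + d.getDayOfMonth();
    }

    // Transform step: resolve the customer's surrogate key (here via a simple
    // in-memory lookup) and emit the fact row an OLAP store would ingest.
    static OrderFact transform(OltpOrder o, Map<Long, Long> customerKeyLookup) {
        long customerKey = customerKeyLookup.getOrDefault(o.customerId(), -1L);
        return new OrderFact(o.orderId(), toDateKey(o.orderDate()), customerKey, o.amount());
    }

    public static void main(String[] args) {
        var order = new OltpOrder(42L, 7L, "VN", LocalDate.of(2024, 3, 15), 199.0);
        var fact = transform(order, Map.of(7L, 1001L));
        System.out.println(fact);
        // OrderFact[orderId=42, dateKey=20240315, customerKey=1001, amount=199.0]
    }
}
```

In a production pipeline this per-record transform would typically run inside a Kafka Streams topology or a Spark Streaming job rather than a plain `main` method.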

JOB REQUIREMENT

At least 5 years' experience with the Java programming language.
At least 5 years' experience with Big Data frameworks such as Java Spring, Kafka Streams, and Spark Streaming.
Experience with large-scale deployment and performance tuning.
Experience with schema design and dimensional data modeling.
Experience with relational and non-relational databases (e.g. MySQL, MongoDB).
Experience building and optimizing big data pipelines, architectures, and datasets.
Experience with data pipeline and workflow management tools.
Fluent written and spoken English.
Strong analytical and problem-solving skills.
Bonus points if you:
Have experience with Delta Lake.
Have good Docker/Kubernetes knowledge.
Have good knowledge of the ELK stack (Elasticsearch, Kibana).

WHAT'S ON OFFER

We have a track record of success and a vision and plan for a promising future. Our company holds close to 100% market share for player-location regulatory compliance in the US gaming space, and we have fuelled that momentum by expanding into new markets: media & entertainment and fintech.
We are proud of our values and we live them in all of our actions, conversations, and work: There’s always a way; Together we can do more; Aim higher. Then higher; Act with integrity; For the greater good.
We are proud to be part of a global team that develops award-winning solutions for some of the world’s largest and most innovative companies.
We will support you on your learning journey. We invest in employee career growth and development. Our learning & development commitment includes leadership and technical development, a substantial budget for education and training, as well as dedicated work hours for self-study.
We care about our team. Our team is talented, has a bias for action, and is known for their positive attitude and energy. Team members are generously rewarded with competitive salaries, incentives, and a comprehensive benefits package.
We care about giving back to the communities in which we live and work. We support a broad range of community initiatives through donations and employee volunteer activities.
We know that work can be fun. We take the time to create employee events and experiences where everyone can connect and celebrate.

CONTACT

PEGASI – IT Recruitment Consultancy | Email: recruit@pegasi.com.vn | Tel: +84 28 3622 8666
We are PEGASI, an IT recruitment consultancy in Vietnam. If you are looking for a new opportunity in your career path, kindly visit our website www.pegasi.com.vn for reference. Thank you!

Job Summary

Company Type: Product
Technical Skills: Data Engineering, Java
Location: Ho Chi Minh - Viet Nam
Working Policy:
Salary: Negotiation
Job ID: J01078
Status: Close

Related Job:

DevOps Engineer

Others - Viet Nam


Product

  • Devops
  • Kubernetes
  • Network

  • Operate and evolve our Kubernetes platform across multiple clusters and environments (Prod, Dev, hybrid on-prem and public cloud), covering control-plane operations, node lifecycle, upgrades, and autoscaling at every layer (Cluster Autoscaler, HPA, KEDA).
  • Architect and manage hybrid cloud infrastructure spanning on-premises and public clouds (GCP, AWS), including workload placement, cross-cloud networking, and unified resource management.
  • Own the CI/CD and GitOps experience end-to-end: container build pipelines, image optimization, and progressive delivery via ArgoCD / FluxCD.
  • Own the observability stack as a single pane of glass across all clusters: Grafana, Mimir, Tempo, Loki, Pyroscope, OnCall, Prometheus -- and help push toward agent-assisted SRE workflows.
  • Manage and improve our inference platform: vLLM serving and AIBrix for multi-model orchestration and autoscaling across a fleet of NVIDIA GPUs.
  • Operate platform services: Kafka, Redis, PostgreSQL, OpenSearch.
  • Manage identity and access via Keycloak integrated with Google Workspace; harden SSO, RBAC, and secrets management across the platform.
  • Harden network security across private load balancers, firewalls, and VPC segmentation; design and maintain hub-and-spoke / multi-AZ topologies.
  • Support training infrastructure: self-service VM provisioning, RunPod burst capacity, Weights and Biases integration.
  • Drive infrastructure reliability, cost efficiency, and capacity planning as the platform scales.

Salary: Negotiation


Platform Engineer

Ho Chi Minh - Viet Nam


Product

  • Backend
  • Devops
  • Data Engineering

  • Build and maintain distributed infrastructure handling telemetry, sensory, and control data across cloud and edge environments.
  • Design and operate data ingestion and streaming pipelines connecting robot fleets to the cloud in real time, covering video, joint states, audio, and LiDAR.
  • Develop and maintain backend services and APIs that power the Company's developer-facing platform, with a focus on reliability and developer experience.
  • Manage and evolve cloud-native infrastructure using Kubernetes, Docker, and infrastructure-as-code tooling.
  • Ensure platform reliability through monitoring, alerting, autoscaling, failover, and incident response.
  • Support ML and robotics teams with data infrastructure for training pipelines, policy rollout, and hardware-in-the-loop simulation.
  • Implement secure APIs with access control, rate limiting, and usage metering as we scale.

Salary: Negotiation


Software Engineer (Digital Twin)

Ho Chi Minh - Viet Nam


Product

  • Python
  • C/C++

  • Build and maintain high-fidelity digital twin environments for Asimov across MuJoCo, Isaac Sim, and Unreal Engine, calibrated to real hardware behavior.
  • Design and own the systems -- not just the environments -- that let locomotion, autonomy, and perception teams generate, validate, and iterate on simulation scenarios at scale.
  • Build pipelines for asset import, USD and MJCF workflows, sensor modeling, and real-to-sim calibration to keep digital twins synchronized with evolving hardware.
  • Develop photorealistic rendering pipelines in Unreal Engine for synthetic data generation and perception model training.
  • Work with hardware and mechatronics teams to model actuator dynamics, contact physics, and structural behavior, ensuring simulation parameters reflect physical ground truth.
  • Integrate digital twin environments with the Company's locomotion training pipeline (Cyclotron) and autonomy stack, enabling teams to run experiments and close the sim-to-real gap.
  • Contribute to the open-source Asimov simulation stack, including tooling, documentation, and reproducible environment workflows.

Salary: Negotiation
