Cloud Engineer (AWS Kafka)

ABOUT CLIENT

Our client uses new technology to develop products for the banking industry.

JOB DESCRIPTION

Designing, implementing, and maintaining streaming solutions using AWS Managed Streaming for Apache Kafka (MSK).
Monitoring and managing Kafka clusters to ensure optimal performance, scalability, and uptime.
Configuring and fine-tuning MSK clusters, including partitioning strategies, replication, and retention policies.
Collaborating with engineering teams to design and implement event-driven systems and microservices architectures.
Developing and maintaining robust data pipelines for real-time data processing and streaming using Kafka.
Ensuring seamless integration between MSK/SQS/SNS and other AWS services such as Lambda, EventBridge Pipes, and S3.
Analyzing and optimizing the performance of Kafka clusters and streaming pipelines to meet high-throughput and low-latency requirements.
Implementing best practices for Kafka topic design, consumer group management, and message serialization (e.g., Avro).
Implementing security best practices for MSK, including encryption, authentication, and access controls.
Ensuring compliance with industry standards and regulations related to data streaming and event processing.
Setting up comprehensive monitoring and alerting for Kafka clusters and streaming applications using AWS CloudWatch and Datadog.
Troubleshooting and resolving issues related to data loss, message lag, and streaming failures.
Designing and implementing data integration solutions to stream data between various sources and targets using MSK.
Leading data transformation and enrichment processes to ensure data quality and consistency in streaming applications.
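Several of the duties above center on tracking message lag across Kafka consumer groups. As a minimal sketch (the function names are illustrative, not from any particular client library), per-partition lag is the broker's log-end offset minus the group's last committed offset:

```python
def partition_lag(log_end_offsets, committed_offsets):
    """Per-partition consumer lag.

    Lag = log-end offset on the broker minus the consumer group's last
    committed offset; a partition with no commit yet counts from offset 0.
    Keys are (topic, partition) tuples.
    """
    return {
        tp: end - committed_offsets.get(tp, 0)
        for tp, end in log_end_offsets.items()
    }


def total_lag(log_end_offsets, committed_offsets):
    """Total lag across all partitions; a common alerting metric."""
    return sum(partition_lag(log_end_offsets, committed_offsets).values())
```

In practice the two offset maps would come from the cluster (e.g., via an admin client) and `total_lag` would feed a CloudWatch or Datadog alarm threshold.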

JOB REQUIREMENT

Bachelor's or Master's degree in Computer Science, Information Technology, or related field.
Minimum 5 years of experience in event-driven architectures and streaming solutions.
Proficiency in Apache Kafka, with at least 2 years specifically in AWS MSK.
Experience designing and implementing high-throughput, low-latency streaming applications in AWS environments.
Strong understanding of Kafka internals and proficiency in programming languages such as Java, Python, or Scala.
Experience with AWS services like Lambda, Kinesis, S3, and IAM in conjunction with MSK.
Familiarity with CI/CD tools and IaC tools like CloudFormation, Terraform, or CDK.
Strong analytical and problem-solving skills with effective communication and collaboration abilities.
AWS Certified Solutions Architect, AWS Certified Developer, or similar AWS certification.
Ability to manage multiple priorities and projects in a fast-paced environment.

WHAT'S ON OFFER

Company offers meal and parking benefits.
Full salary and benefits during the probation period.
Insurance coverage as per Vietnamese labor law and premium health care for employees and their families.
A values-driven, international, and agile work environment.
Opportunities for overseas travel related to training and work.
Participation in internal Hackathons and company events such as team building, coffee runs, and blue card activities.
Additional benefits include a 13th-month salary and performance bonuses.
Employees receive 15 days of annual leave and 3 days of sick leave per year.
Work-life balance with a 40-hour workweek from Monday to Friday.

CONTACT

PEGASI – IT Recruitment Consultancy | Email: recruit@pegasi.com.vn | Tel: +84 28 3622 8666
We are PEGASI – IT Recruitment Consultancy in Vietnam. If you are looking for a new opportunity in your career path, kindly visit our website www.pegasi.com.vn for reference. Thank you!

Job Summary

Company Type:

Product

Technical Skills:

Kafka, AWS

Location:

Ho Chi Minh - Viet Nam

Working Policy:

Hybrid

Salary:

Negotiable

Job ID:

J01556

Status:

Closed

Related Job:

DevOps Engineer

Others - Viet Nam


Product

  • Devops
  • Kubernetes
  • Network

Operate and evolve our Kubernetes platform across multiple clusters and environments (Prod, Dev, hybrid on-prem and public cloud), covering control plane operations, node lifecycle, upgrades, and autoscaling at every layer (Cluster Autoscaler, HPA, KEDA).
Architect and manage hybrid cloud infrastructure spanning on-premises and public clouds (GCP, AWS), including workload placement, cross-cloud networking, and unified resource management.
Own the CI/CD and GitOps experience end-to-end: container build pipelines, image optimization, and progressive delivery via ArgoCD / FluxCD.
Own the observability stack as a single pane of glass across all clusters: Grafana, Mimir, Tempo, Loki, Pyroscope, OnCall, and Prometheus; help push toward agent-assisted SRE workflows.
Manage and improve our inference platform: vLLM serving and AIBrix for multi-model orchestration and autoscaling across a fleet of NVIDIA GPUs.
Operate platform services: Kafka, Redis, PostgreSQL, OpenSearch.
Manage identity and access via Keycloak integrated with Google Workspace; harden SSO, RBAC, and secrets management across the platform.
Harden network security across private load balancers, firewalls, and VPC segmentation; design and maintain hub-and-spoke / multi-AZ topologies.
Support training infrastructure: self-service VM provisioning, RunPod burst capacity, Weights and Biases integration.
Drive infrastructure reliability, cost efficiency, and capacity planning as the platform scales.
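The autoscaling duty above rests on the core Kubernetes HPA rule: desired replicas = ceil(currentReplicas × currentMetric / targetMetric), clamped to the configured bounds. A hedged sketch of that rule alone (the real controller adds tolerance bands and stabilization windows, omitted here):

```python
import math


def hpa_desired_replicas(current_replicas, current_metric, target_metric,
                         min_replicas=1, max_replicas=10):
    """Core HPA scaling rule:
    desired = ceil(current_replicas * current_metric / target_metric),
    clamped to [min_replicas, max_replicas].
    """
    desired = math.ceil(current_replicas * current_metric / target_metric)
    return max(min_replicas, min(max_replicas, desired))
```

For example, 4 replicas averaging 200m CPU against a 100m target scale to 8; the same pods averaging 50m scale down to 2.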

Negotiable


Platform Engineer

Ho Chi Minh - Viet Nam


Product

  • Backend
  • Devops
  • Data Engineering

Build and maintain distributed infrastructure handling telemetry, sensory, and control data across cloud and edge environments.
Design and operate data ingestion and streaming pipelines connecting robot fleets to the cloud in real time, covering video, joint states, audio, and LiDAR.
Develop and maintain backend services and APIs that power the Company's developer-facing platform, with a focus on reliability and developer experience.
Manage and evolve cloud-native infrastructure using Kubernetes, Docker, and infrastructure-as-code tooling.
Ensure platform reliability through monitoring, alerting, autoscaling, failover, and incident response.
Support ML and robotics teams with data infrastructure for training pipelines, policy rollout, and hardware-in-the-loop simulation.
Implement secure APIs with access control, rate limiting, and usage metering as we scale.
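The rate-limiting duty mentioned above is most often implemented as a token bucket. A minimal sketch under that assumption (class and parameter names are illustrative; production services usually back this with Redis rather than in-process state):

```python
import time


class TokenBucket:
    """Minimal token-bucket rate limiter.

    Tokens refill continuously at `rate_per_sec` up to `capacity`;
    each request spends `cost` tokens or is rejected. The clock is
    injectable so the behavior can be tested deterministically.
    """

    def __init__(self, rate_per_sec, capacity, now=time.monotonic):
        self.rate = rate_per_sec
        self.capacity = capacity
        self.tokens = capacity
        self.now = now
        self.last = now()

    def allow(self, cost=1.0):
        t = self.now()
        # Refill based on elapsed time, capped at capacity.
        self.tokens = min(self.capacity, self.tokens + (t - self.last) * self.rate)
        self.last = t
        if self.tokens >= cost:
            self.tokens -= cost
            return True
        return False
```

A bucket with capacity 2 and rate 1/s admits two back-to-back requests, rejects the third, and admits another after a second has passed.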

Negotiable


Software Engineer (Digital Twin)

Ho Chi Minh - Viet Nam


Product

  • Python
  • C/C++

Build and maintain high-fidelity digital twin environments for Asimov across MuJoCo, Isaac Sim, and Unreal Engine, calibrated to real hardware behavior.
Design and own the systems, not just the environments, that let locomotion, autonomy, and perception teams generate, validate, and iterate on simulation scenarios at scale.
Build pipelines for asset import, USD and MJCF workflows, sensor modeling, and real-to-sim calibration to keep digital twins synchronized with evolving hardware.
Develop photorealistic rendering pipelines in Unreal Engine for synthetic data generation and perception model training.
Work with hardware and mechatronics teams to model actuator dynamics, contact physics, and structural behavior, ensuring simulation parameters reflect physical ground truth.
Integrate digital twin environments with the Company's locomotion training pipeline (Cyclotron) and autonomy stack, enabling teams to run experiments and close the sim-to-real gap.
Contribute to the open-source Asimov simulation stack, including tooling, documentation, and reproducible environment workflows.
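The real-to-sim calibration work described above often reduces, in its simplest form, to fitting a gain and offset that map simulated sensor readings onto real hardware readings. A toy closed-form least-squares sketch (function name is illustrative; real calibration pipelines fit far richer actuator and contact models):

```python
def fit_linear_calibration(sim_values, real_values):
    """Least-squares fit of real ~ gain * sim + offset.

    Returns (gain, offset) via the closed-form simple linear
    regression solution; assumes len(sim_values) >= 2 with
    non-constant sim_values.
    """
    n = len(sim_values)
    mx = sum(sim_values) / n
    my = sum(real_values) / n
    sxx = sum((x - mx) ** 2 for x in sim_values)
    sxy = sum((x - mx) * (y - my) for x, y in zip(sim_values, real_values))
    gain = sxy / sxx
    offset = my - gain * mx
    return gain, offset
```

Fitting simulated readings [0, 1, 2] against real readings [1, 3, 5] recovers gain 2 and offset 1.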

Negotiable
