Senior/Lead Data Engineer

ABOUT CLIENT

Our client is a global technology company specializing in innovative IT solutions for the financial services industry.

JOB DESCRIPTION

Implement technical infrastructure for compliance initiatives such as SOC 2 and GDPR, including building systems for a future data catalog and managing data access.
Design and develop scalable data pipelines for batch and event-driven data ingestion to facilitate real-time analytics and machine learning feature capabilities.
Establish foundational patterns for data quality monitoring, including automated freshness and integrity checks for critical data assets to enhance trust and reliability in the data.
Proactively lead the migration of analytical workloads from a shared production cluster to a scalable cloud data warehouse, such as Snowflake.
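As an illustration of the automated freshness checks mentioned above, the sketch below shows one minimal approach in Python (the posting names Python as a required skill). The function name, table metadata, and two-hour threshold are hypothetical examples, not part of the role's actual stack:

```python
from datetime import datetime, timedelta, timezone

def check_freshness(last_loaded_at: datetime, max_lag: timedelta) -> bool:
    """Return True if a data asset was refreshed within the allowed lag.

    In practice, last_loaded_at would come from warehouse metadata
    (e.g. a load-audit table); here it is passed in directly.
    """
    return datetime.now(timezone.utc) - last_loaded_at <= max_lag

# Hypothetical example: flag a table as stale if it has not
# loaded within the last 2 hours.
recent_load = datetime.now(timezone.utc) - timedelta(minutes=30)
print(check_freshness(recent_load, max_lag=timedelta(hours=2)))
```

A production version would typically run such checks on a schedule and alert on failures, but the core comparison is this simple.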

JOB REQUIREMENT

At least 7 years of experience in data engineering, emphasizing the construction and management of core data platforms.
Specialization in Modern Data Warehousing: Proven expertise in designing, constructing, and maintaining robust and scalable data warehouses such as Snowflake, BigQuery, or Redshift.
Proficiency in Event-Driven Architectures: Hands-on experience with real-time data processing technologies and patterns (e.g., Kafka, Kinesis, Flink, Spark Streaming).
Extensive Knowledge of Database Operations: Strong comprehension of database performance tuning, monitoring, disaster recovery, and the operational considerations of large-scale data systems.
Experience with Data Governance & Compliance: Hands-on involvement in creating technical solutions to meet compliance requirements like SOC 2 or GDPR, including data access controls and cataloging.
Pragmatic Problem-Solving Skills: Demonstrated ability to select appropriate solutions without over-engineering, while ensuring robustness in a startup environment. Proficiency in SQL and Python.
AWS Experience: Familiarity with core AWS services used in a platform context (RDS, S3, IAM, Kinesis, etc.). Experience using dbt (Data Build Tool) for data transformations in a production environment is highly desirable.
Infrastructure as Code Experience: Familiarity with tools such as Terraform for managing data infrastructure.
Experience in a Startup Environment: Comfortable working in an ambiguous and fast-paced setting.

WHAT'S ON OFFER

Generous salary package
Additional month's salary
Performance-based bonuses
Access to professional English training
Comprehensive health insurance
Ample annual leave opportunities

CONTACT

PEGASI – IT Recruitment Consultancy | Email: recruit@pegasi.com.vn | Tel: +84 28 3622 8666
We are PEGASI – IT Recruitment Consultancy in Vietnam. If you are looking for a new opportunity in your career path, kindly visit our website www.pegasi.com.vn for reference. Thank you!

Job Summary

Company Type: Outsource
Technical Skills: Data Engineering, Python, AWS
Location: Ho Chi Minh, Ha Noi - Viet Nam
Working Policy:
Salary: Negotiation
Job ID: J00685
Status: Closed
