Senior Python Data Engineer

JOB DESCRIPTION

We are looking for a Senior Python Data Engineer to join our team in HCMC, Vietnam
 
Company is a highly successful, hyper-growth, market-leading technology provider with a world-class team spread over three continents. Operating at the center of a new generation of cybersecurity companies, Company leverages today's geolocation data to make better risk-based decisions.
 
Serving as a cornerstone of the infrastructure for some of the world’s largest tech companies (Amazon, BBC, DraftKings, Akamai, PokerStars, etc.), Company has become the global market leader for compliance-grade geolocation data and a critical piece of the decisioning engine.
 
Pre-IPO and with the ambition to lead the technical landscape for a generation, we are looking for someone who will match our ambitions and help us better ourselves and achieve our goals. The Senior Python Data Engineer contributes to Company's success by building the data pipelines that transform raw data into business insights, supporting the decision-making process for both our company and our customers.
 
Our ideal candidate has at least five (5) years of working experience as a Data Engineer and relevant experience with database technologies such as MySQL and NoSQL. You have experience setting up data pipelines at your previous companies. You are able to, and enjoy, working with multiple teams across locations, and you have good communication and interpersonal skills. You have proven your ability to work under tight timelines in your previous roles and are highly organized and detail-oriented.
 
In this role, you will:
Understand the business and turn its data into knowledge and insight.
Develop and maintain robust data pipelines that transform large datasets and support various use cases with high performance.
Generate automated reports for customers to a high quality standard.
Develop and maintain interactive reporting tools.
Stay informed of database changes to ensure reports are always working as expected.
Coordinate with the Engineering team on release cycles and operational matters, monitoring releases to foresee any issues that may arise with the data pipelines and reports.
Follow best practices and processes established by the team.

JOB REQUIREMENT

Advanced knowledge and experience using Python for data processing.
4+ years of in-depth practical experience with SQL and NoSQL.
Experience in large-scale deployment and performance tuning.
Development experience in Linux system environments.
Familiar with spinning up infrastructure on Amazon Web Services, Google Cloud Platform, or Azure.
Experience with ETL/ELT and integration flows in Airflow or similar.
Knowledge of data warehousing concepts.
Knowledge of BI technologies, methodologies and their application.
Knowledge of the ELK Stack (Elasticsearch, Logstash, and Kibana), using RESTful APIs and JSON to build reporting solutions.
Ability to quickly learn new technologies and push the envelope for performance and reliability.
Good Docker/Kubernetes knowledge is a plus.
Strong analytical and problem-solving skills.
Results-oriented with the ability to meet deadlines.
Good communication and interpersonal skills as well as good written and spoken English skills.

WHAT'S ON OFFER

Why work with our client?
Company is based in Vancouver, Canada, with a development office in Ho Chi Minh City. A career with Company opens the door to exciting opportunities including the potential to travel, as well as temporary or permanent relocation to our Vancouver office. With a sponsored Canadian work visa, you’ll also be on the fast track to permanent residency and even Canadian citizenship if you desire. At Company, you won’t just be a small part of a software company, you will play an important role in an award-winning technology leader, where you will own your projects and create your own success. We have a number of exciting opportunities in all aspects of software development, risk and data analysis, quality assurance and reporting.
Benefits:
Health care insurance provided by Liberty and Bao Viet
Opportunities for promotion and career development in a dynamic environment
Great chance to develop your skills and competences, with multiple trainings and job opportunities
Relaxed, friendly atmosphere as well as excellent working facilities
Social insurance in accordance with Vietnam’s law
Only 40 working hours/week (full weekend off)
12 days holiday/year

CONTACT

PEGASI – IT Recruitment Consultancy | Email: recruit@pegasi.com.vn | Tel: +84 28 3622 8666
We are PEGASI – IT Recruitment Consultancy in Vietnam. If you are looking for new opportunity for your career path, kindly visit our website www.pegasi.com.vn for your reference. Thank you!

Job Summary

Company Type: Product
Technical Skills: Python, Data Engineering
Location: Ho Chi Minh - Viet Nam
Working Policy:
Salary: $2,000 - $2,500
Job ID: J00233
Status: Close
