Senior Data Development Engineer (Python/Go)

JOB DESCRIPTION

Our Client develops and deploys systematic financial strategies across a broad range of asset classes and global markets. We seek to produce high-quality predictive signals (alphas) through our proprietary research platform to employ financial strategies focused on exploiting market inefficiencies. Our teams work collaboratively to drive the production of alphas and financial strategies – the foundation of a balanced, global investment platform.
Technologists at our Client research, design, code, test and deploy projects while working collaboratively with researchers and portfolio managers. Our environment is relaxed yet intellectually driven. Our teams are lean and agile, which means rapid prototyping of products with immediate user feedback. We seek people who think in code, aspire to tackle undiscovered computer science challenges and are motivated by being around like-minded people. In fact, of the 600 employees globally, approximately 500 code every day.
Our Client’s success is built on a culture that pairs academic sensibility with accountability for results. Employees are encouraged to think openly about problems, balancing intellectualism and practicality. Excellent ideas come from anyone, anywhere. Employees are encouraged to challenge conventional thinking and possess an attitude of continuous improvement. That’s a key ingredient in remaining a leader in any industry.
Our goal is to hire the best and the brightest. We value intellectual horsepower first and foremost, and people who demonstrate an outstanding talent. There is no roadmap to future success, so we need people who can help us build it. Our collective intelligence will drive us there.
The Role: We are seeking an exceptionally talented software engineer to develop the large-scale, complex software systems that control our dataset creation. Datasets are consumed internally by our researchers and our quantitative models. The successful candidate will work on current and next-generation data acquisition systems that ingest data in a variety of formats and protocols. Your responsibilities will include:
Gathering requirements, architecting and designing extremely large-scale data systems.
Developing software systems and micro-services in Python/Go.
Integrating state-of-the-art open source software and technologies.
Assembling platforms and frameworks to automate our data enrichment process.
Developing and architecting a next-generation monitoring platform for data guarding and data governance.
Collaborating with internal technology, research and portfolio management teams.

JOB REQUIREMENT

Background in Computer Science, Electrical Engineering, Applied Math or Physics, with a minimum of a Bachelor's degree. Proof of a good academic record (such as GPA and other relevant test scores).
Expert in designing large-scale distributed software systems.
Expert programming skills in Python and the Python stack (Flask, Celery…).
Expert in database design (SQL or NoSQL); knows how to design and optimize database schemas that can evolve over time.
Experience working in a Linux environment; familiar with Perl/Shell scripting.
Familiar with compilers, debuggers and build tools under Linux, such as gcc, g++, gdb and maven/ant.
Experience with container technology (Docker, Kubernetes), micro-services and big data processing (Spark, Kafka, HDFS…) is a BIG PLUS.
Strong understanding of data structures and algorithms.
An analytical thinker with exceptional problem-solving skills.
Good communication skills: must be fluent in English, spoken and written.
Interested in applying technology to real-world situations; comfortable working in a fast-paced environment; detail-oriented and capable of performing tasks under time pressure.
Knowledge of basic statistics/probability, including familiarity with concepts such as correlation and standard deviation and how to compute them, is a PLUS.

WHAT'S ON OFFER

Competitive and attractive compensation package with a clear career road-map – where you feel challenged every day
We offer a strong culture of learning and development: training courses, a library, speakers, and share-and-learn events
Learn from whoever sits next to you! Working at WQ, you are surrounded by smart and talented people
Employee resource groups with a strong diversity and inclusion culture
Premium Health Insurance and Employee Assistance Program
Generous time-off policy: unlimited sick days, recreation sabbatical leave (based on tenure), and Trade Union benefits for staff and family
Monthly team-building activities: local engagement events and team lunches – Employee clubs: football, ping-pong, badminton, yoga, running, PS5, movies, etc.
Annual company trip and occasional global conferences – opportunity to travel and connect with our global teams
Happy hour with tea breaks, snacks and meals every day in the office!
The position is based in Hanoi or Ho Chi Minh city.

CONTACT

PEGASI – IT Recruitment Consultancy | Email: recruit@pegasi.com.vn | Tel: +84 28 3622 8666
We are PEGASI – IT Recruitment Consultancy in Vietnam. If you are looking for a new opportunity in your career path, kindly visit our website www.pegasi.com.vn for reference. Thank you!

Job Summary

Company Type: Product
Technical Skills: Data, Python, Golang
Location: Ho Chi Minh, Ha Noi - Viet Nam
Working Policy:
Salary: $3,500 - $5,000
Job ID: J00442
Status: Close

Related Job:

DevOps Engineer

Others - Viet Nam


Product

  • Devops
  • Kubernetes
  • Network

  • Operate and evolve our Kubernetes platform across multiple clusters and environments (Prod, Dev, hybrid on-prem and public cloud), covering control plane operations, node lifecycle, upgrades, and autoscaling at every layer (Cluster Autoscaler, HPA, KEDA).
  • Architect and manage hybrid cloud infrastructure spanning on-premises and public clouds (GCP, AWS), including workload placement, cross-cloud networking, and unified resource management.
  • Own the CI/CD and GitOps experience end-to-end: container build pipelines, image optimization, and progressive delivery via ArgoCD / FluxCD.
  • Own the observability stack as a single pane of glass across all clusters: Grafana, Mimir, Tempo, Loki, Pyroscope, OnCall, Prometheus -- and help push toward agent-assisted SRE workflows.
  • Manage and improve our inference platform: vLLM serving and AIBrix for multi-model orchestration and autoscaling across a fleet of NVIDIA GPUs.
  • Operate platform services: Kafka, Redis, PostgreSQL, OpenSearch.
  • Manage identity and access via Keycloak integrated with Google Workspace; harden SSO, RBAC, and secrets management across the platform.
  • Harden network security across private load balancers, firewalls, and VPC segmentation; design and maintain hub-and-spoke / multi-AZ topologies.
  • Support training infrastructure: self-service VM provisioning, RunPod burst capacity, Weights and Biases integration.
  • Drive infrastructure reliability, cost efficiency, and capacity planning as the platform scales.

Negotiation


Platform Engineer

Ho Chi Minh - Viet Nam


Product

  • Backend
  • Devops
  • Data Engineering

  • Build and maintain distributed infrastructure handling telemetry, sensory, and control data across cloud and edge environments
  • Design and operate data ingestion and streaming pipelines connecting robot fleets to the cloud in real time, covering video, joint states, audio, and LiDAR
  • Develop and maintain backend services and APIs that power the Company's developer-facing platform, with a focus on reliability and developer experience
  • Manage and evolve cloud native infrastructure using Kubernetes, Docker, and infrastructure as code tooling
  • Ensure platform reliability through monitoring, alerting, autoscaling, failover, and incident response
  • Support ML and robotics teams with data infrastructure for training pipelines, policy rollout, and hardware-in-the-loop simulation
  • Implement secure APIs with access control, rate limiting, and usage metering as we scale

Negotiation


Software Engineer (Digital Twin)

Ho Chi Minh - Viet Nam


Product

  • Python
  • C/C++

  • Build and maintain high-fidelity digital twin environments for Asimov across MuJoCo, Isaac Sim, and Unreal Engine, calibrated to real hardware behavior.
  • Design and own the systems -- not just the environments -- that let locomotion, autonomy, and perception teams generate, validate, and iterate on simulation scenarios at scale.
  • Build pipelines for asset import, USD and MJCF workflows, sensor modeling, and real-to-sim calibration to keep digital twins synchronized with evolving hardware.
  • Develop photorealistic rendering pipelines in Unreal Engine for synthetic data generation and perception model training.
  • Work with hardware and mechatronics teams to model actuator dynamics, contact physics, and structural behavior, ensuring simulation parameters reflect physical ground truth.
  • Integrate digital twin environments with the Company's locomotion training pipeline (Cyclotron) and autonomy stack, enabling teams to run experiments and close the sim-to-real gap.
  • Contribute to the open-source Asimov simulation stack, including tooling, documentation, and reproducible environment workflows.

Negotiation
