Data Engineer

ABOUT CLIENT

Our client uses new technology to develop products for the banking industry.

JOB DESCRIPTION

Our client is building some of the world's fastest-growing digital banks, and the data team plays a crucial role in shaping the bank's vision: a platform that encourages economic participation and broadens financial inclusion. This is achieved through innovative data and analytics solutions that deliver high-quality services and products to the bank's customers, ultimately optimizing the business.
As a Data Engineer, you will build solutions that support informed decision-making and innovation by turning data from various sources into clean, protected, high-quality, and auditable fit-for-purpose data products. Responsibilities include:
Designing, developing, testing, deploying, and monitoring data pipelines in Databricks on AWS from a wide variety of data sources.
Designing, developing, testing, deploying, and monitoring scalable PySpark and SQL code in Databricks.
Identifying opportunities to improve internal processes through code optimization and automation.
Building data quality dashboards, lineage flows, and monitoring tools around the data pipeline, providing active monitoring and actionable insight into overall data quality and data governance.
Assisting in migrating data from legacy systems onto newly developed solutions.
Following and promoting best practices on all data security, retention, and privacy policies.
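As an illustrative sketch only (plain Python rather than Databricks/PySpark, with hypothetical column names), a row-level completeness check of the kind such data-quality monitoring might aggregate could look like:

```python
import csv
import io

# Hypothetical required columns for a transactions feed; illustrative only.
REQUIRED_FIELDS = ("account_id", "amount")

def quality_report(csv_text: str) -> dict:
    """Count rows that pass/fail a simple completeness check."""
    reader = csv.DictReader(io.StringIO(csv_text))
    passed = failed = 0
    for row in reader:
        # A row passes only if every required field is present and non-empty.
        if all(row.get(f) not in (None, "") for f in REQUIRED_FIELDS):
            passed += 1
        else:
            failed += 1
    return {"passed": passed, "failed": failed}

sample = "account_id,amount\nA1,100\n,50\nA3,\n"
print(quality_report(sample))  # {'passed': 1, 'failed': 2}
```

In a real pipeline, metrics like these would be computed with Spark over each batch and surfaced on a dashboard rather than printed.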

JOB REQUIREMENT

Completion of a Bachelor's degree is required.
A minimum of 2 years of experience in developing ETL/ELT pipelines is necessary.
Demonstrated proficiency in solution design, development, implementation, reporting, and analysis is essential.
Skillfulness in Apache Spark, Python, and SQL is required.
Proficiency in working with various data formats such as Text, Delta, Parquet, JSON, CSV, and XML is important.
Familiarity with Spark structured streaming is necessary.
Experience with AWS infrastructure, particularly S3, is required.
Solid understanding of git-based version control, DevOps, and CI/CD is essential. Experience with the Atlassian stack is a bonus.
Knowledge of common web API frameworks and web services is necessary.
Strong teamwork, relationship, and client management skills are required, along with the ability to influence peers and senior management to achieve team goals.
Willingness to adopt modern technology, best practices, and ways of working is important.

WHAT'S ON OFFER

Company offers meal and parking benefits.
Full benefits and salary provided during the probationary period.
Insurance coverage as per Vietnamese labor law and premium health care for employees and their families.
Work environment is values-driven, international, and agile in nature.
Opportunities for overseas travel related to training and work.
Participation in internal Hackathons and company events such as team building, coffee runs, and blue card activities.
Additional benefits include a 13th-month salary and performance bonuses.
Employees receive 15 days of annual leave and 3 days of sick leave per year.
Work-life balance with a 40-hour workweek from Monday to Friday.

CONTACT

PEGASI – IT Recruitment Consultancy | Email: recruit@pegasi.com.vn | Tel: +84 28 3622 8666
We are PEGASI – IT Recruitment Consultancy in Vietnam. If you are looking for a new opportunity for your career path, kindly visit our website www.pegasi.com.vn for reference. Thank you!

Job Summary

Company Type: Digital Bank, Product
Technical Skills: Data Engineering, Big Data
Location: Ho Chi Minh - Viet Nam
Salary: Negotiation
Job ID: J01710
Status: Active
