Cloud Platform Engineers/Architects

ABOUT CLIENT

Our client is a leading global technology company that provides a wide range of IT services and solutions. With a strong focus on innovation and digital transformation, our client helps businesses adapt to the ever-changing technological landscape. Their expertise in areas like cloud computing, cybersecurity, and AI makes them a valuable partner for organizations.

JOB DESCRIPTION

Design, deploy, and manage cloud-based infrastructure on GCP and AWS
Develop, automate, and enhance data platform provisioning, scaling, and maintenance tasks
Prioritize security across all infrastructure and data platform operations
Construct automations and frameworks to streamline deployment, scaling, and data analytics tasks
Deliver high-quality content using appropriate document templates
Keep informed about industry best practices and emerging technologies to enhance Platform & Infrastructure
Employ containerization technologies such as Docker and orchestration tools like Kubernetes
Implement CI/CD pipelines and adhere to DevOps principles
Use Python and other common programming languages for automation duties
Work with APIs, scripts, and frameworks to improve operational efficiency
Implement and utilize monitoring and logging tools effectively
Work across a variety of operating systems to ensure compatibility and efficiency
Manage network configurations ensuring reliability, security, and performance
Troubleshoot and resolve infrastructure, applications, and networking issues

JOB REQUIREMENT

Engineer level:
The candidate should be proficient in programming/scripting languages such as Java, Python, SQL, and Bash, and have experience with Infrastructure as Code (IaC) tools such as Terraform.
They should have a strong understanding of DevOps practices, CI/CD processes, and container orchestration, with a preference for experience in Cloud platforms, particularly GCP.
A minimum of 4 years of experience is required, along with fundamentals in Linux operating systems and Git source control.
Experience with data pipelines and ETL/ELT fundamentals is essential, as are SQL skills and good verbal communication in English.
Additionally, expertise in Docker and Kubernetes, knowledge of Istio Service Mesh and routing, and proficiency in SQL and DBT for data processing are necessary.
Familiarity with DevSecOps practices, including static analysis, composition analysis, vulnerability scanning, and secret management, is an advantage.
A solid understanding of CI/CD principles is essential, and experience with cloud providers such as GCP and AWS is preferred.
The candidate should be familiar with a range of tooling and frameworks, including GitHub for version control, Artifactory for artifact management, Codefresh for CI/CD pipelines, and Tableau for reporting.
Experience with Airflow for workflow automation, Helm or Kustomize for Kubernetes management, Twistlock for container security, Checkmarx for static application security testing, Black Duck for open-source security scanning, Ansible for configuration management and automation, and OpenShift for container orchestration is also beneficial.
Qualifications for this position include a Bachelor's degree in Computer Science or an Engineering field and certification in Google Cloud Platform (GCP) and/or Amazon Web Services (AWS).
Frontend skills such as JavaScript, Node.js, and React, along with Azure AD knowledge, are considered good-to-have.
 
Architect level:
An architect-level candidate should have all the qualifications above, as well as a minimum of 8 years of experience with strong expertise in cloud architecture, particularly in GCP.
They should also have experience in designing resilient and scalable frameworks to address business needs and be capable of collaborating with stakeholders to translate business requirements into solutions, bridging the gap between cross-functional teams with strong communication skills.

WHAT'S ON OFFER

This position offers a hybrid working arrangement, with three days per week in the office and flexible hours.
Salary is negotiable based on candidate expectations.
Employees are entitled to 18 days of paid leave annually: 12 days of annual leave and 6 days of personal leave.
Insurance coverage is based on full salary, and employees also receive a 13th-month salary and performance bonuses.
A monthly meal allowance of 730,000 VND is provided.
Employees receive full salary and benefits from the start of employment.
Medical benefits are extended to the employee and their family.
The work environment is fast-paced, flexible, and multicultural with opportunities for travel to 49 countries.
The company provides complimentary snacks, refreshments, and parking facilities.
Internal training programs covering technical, functional, and English language skills are offered.
Regular working hours are 08:30 AM to 06:00 PM, Monday to Friday, inclusive of meal breaks.

CONTACT

PEGASI – IT Recruitment Consultancy | Email: recruit@pegasi.com.vn | Tel: +84 28 3622 8666
We are PEGASI – IT Recruitment Consultancy in Vietnam. If you are looking for a new opportunity in your career path, please visit our website www.pegasi.com.vn. Thank you!

Job Summary

Company Type: Information Technology & Services
Technical Skills: DevOps, Google Cloud
Location: Ho Chi Minh - Viet Nam
Working Policy: Hybrid
Salary: Negotiable
Job ID: J01637
Status: Closed
