Cloud Platform Engineers/Architects

ABOUT CLIENT

Our client is a leading global technology company that provides a wide range of IT services and solutions. With a strong focus on innovation and digital transformation, our client helps businesses adapt to the ever-changing technological landscape. Their expertise in areas like cloud computing, cybersecurity, and AI makes them a valuable partner for organizations.

JOB DESCRIPTION

Design, build, and operate cloud-based infrastructure on GCP and AWS
Develop, automate, and enhance data platform provisioning, scaling, and maintenance tasks
Prioritize security across all infrastructure and data platform operations
Construct automations and frameworks to streamline deployment, scaling, and data analytics tasks
Deliver high-quality documentation using the appropriate document templates
Stay informed about industry best practices and emerging technologies to continuously enhance the platform and infrastructure
Employ containerization technologies such as Docker and orchestration tools like Kubernetes
Implement CI/CD pipelines and adhere to DevOps principles
Use Python and other common programming languages for automation duties (see the brief sketch following this list)
Work with APIs and scripting frameworks to improve operational efficiency
Implement and utilize monitoring and logging tools effectively
Work across a variety of operating systems to ensure compatibility and efficiency
Manage network configurations ensuring reliability, security, and performance
Troubleshoot and resolve infrastructure, application, and networking issues
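
For illustration only, the sketch below shows the kind of lightweight Python automation these responsibilities describe: a thin wrapper around the Terraform CLI that plans and applies an infrastructure change. It is a minimal example assuming Terraform is installed locally; the module path is a hypothetical placeholder, not the client's actual tooling.

    # Illustrative sketch only: a Python wrapper around the Terraform CLI for
    # provisioning automation. The module path is a hypothetical placeholder.
    import subprocess
    import sys

    def run(cmd, cwd):
        # Run a CLI command, streaming its output; stop on the first failure.
        result = subprocess.run(cmd, cwd=cwd)
        if result.returncode != 0:
            sys.exit(result.returncode)

    def provision(module_dir="infra/gcp/data-platform"):
        # Initialise the module, write a plan file, then apply exactly that plan.
        run(["terraform", "init", "-input=false"], cwd=module_dir)
        run(["terraform", "plan", "-input=false", "-out=tfplan"], cwd=module_dir)
        run(["terraform", "apply", "-input=false", "tfplan"], cwd=module_dir)

    if __name__ == "__main__":
        provision()

In practice a script like this would typically be invoked from a CI/CD pipeline step rather than run by hand, so that every change is planned, reviewed, and applied in the same automated way.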

JOB REQUIREMENT

Engineer level:
The candidate should be proficient in programming/scripting languages such as Java, Python, SQL, and Bash, and should have experience with Infrastructure as Code (IaC) tools such as Terraform.
They should have a strong understanding of DevOps practices, CI/CD processes, and container orchestration, with a preference for experience in Cloud platforms, particularly GCP.
A minimum of 4 years of experience is required, along with fundamental knowledge of Linux operating systems and Git source control.
Experience with data pipelines, ETL/ELT fundamentals, and SQL is essential, as are good verbal communication skills in English.
Additionally, expertise in Docker and Kubernetes, knowledge of Istio Service Mesh and routing, and proficiency in SQL and DBT for data processing are necessary.
Familiarity with DevSecOps practices, including static analysis, composition analysis, vulnerability scanning, and secret management, is an advantage.
A solid understanding of CI/CD principles is essential, and experience with cloud providers such as GCP and AWS is preferred.
The candidate should be familiar with a range of tooling and frameworks, including GitHub for version control, Artifactory for artifact management, Codefresh for CI/CD pipelines, and Tableau for reporting.
Experience with Airflow for workflow automation (a minimal example follows below), Helm or Kustomize for Kubernetes management, Twistlock for container security, Checkmarx for static application security testing, Black Duck for open-source security scanning, Ansible for configuration management and automation, and OpenShift for container orchestration is also beneficial.
Qualifications for this position include a Bachelor's degree in Computer Science or an Engineering field and certification in Google Cloud Platform (GCP) and/or Amazon Web Services (AWS).
Frontend skills such as JavaScript, Node.js, and React, together with Azure AD knowledge, are considered good-to-have requirements.
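
Purely as an illustration of the Airflow-based workflow automation and dbt usage mentioned above (not the client's actual pipeline), the minimal sketch below defines a daily Airflow DAG that triggers a dbt run. The DAG id, schedule, and project path are hypothetical placeholders, and the example assumes Airflow 2.x with dbt available on the worker.

    # Illustrative sketch only: a minimal daily Airflow DAG that runs dbt models.
    # DAG id, schedule, and project path are hypothetical placeholders.
    from datetime import datetime

    from airflow import DAG
    from airflow.operators.bash import BashOperator

    with DAG(
        dag_id="example_daily_dbt_run",
        start_date=datetime(2024, 1, 1),
        schedule_interval="@daily",
        catchup=False,
    ) as dag:
        run_dbt = BashOperator(
            task_id="run_dbt_models",
            bash_command="dbt run --project-dir /opt/dbt/example_project",
        )

A real deployment would add retries, alerting, and upstream sensors, but the structure above is the core of how such workflow automation is usually expressed.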
 
Architect level:
An architect-level candidate should have all of the qualifications above, as well as a minimum of 8 years of experience and strong expertise in cloud architecture, particularly on GCP.
They should also have experience designing resilient, scalable frameworks that address business needs, and should be able to collaborate with stakeholders to translate business requirements into solutions, bridging the gap between cross-functional teams with strong communication skills.

WHAT'S ON OFFER

This position offers a hybrid working arrangement, with three days in the office and flexible hours.
Salary is negotiable based on candidate expectations.
Employees are entitled to 18 days of paid leave annually, comprising 12 days of annual leave and 6 days of personal leave.
Insurance coverage is based on full salary, and the package also includes a 13th-month salary and performance bonuses.
A monthly meal allowance of 730,000 VND is provided.
Employees receive 100% of their salary and full benefits from the start of employment.
Medical benefits are extended to the employee and their family.
The work environment is fast-paced, flexible, and multicultural with opportunities for travel to 49 countries.
The company provides complimentary snacks, refreshments, and parking facilities.
Internal training programs covering technical, functional, and English language skills are offered.
Regular working hours are 08:30 AM to 06:00 PM, Monday to Friday, inclusive of meal breaks.

CONTACT

PEGASI – IT Recruitment Consultancy | Email: recruit@pegasi.com.vn | Tel: +84 28 3622 8666
We are PEGASI – IT Recruitment Consultancy in Vietnam. If you are looking for a new opportunity for your career path, please visit our website www.pegasi.com.vn for reference. Thank you!

Job Summary

Company Type: Outsource
Technical Skills: DevOps, Google Cloud
Location: Ho Chi Minh - Viet Nam
Working Policy: Hybrid
Salary: Negotiable
Job ID: J01637
Status: Closed

Related Jobs:

Senior Software Engineer (Data Management, Data Lake)

Ho Chi Minh - Viet Nam


Product

  • Data Engineering
  • Cloud
  • Java
  • TypeScript
  • Angular

We are looking for a seasoned Senior Data and Full-Stack Engineer to join our team. In this role, you will not only design and build high-performance, resilient data architectures but also craft scalable solutions that integrate seamlessly with microservice-based application ecosystems. Your expertise will span data engineering and full-stack development, enabling you to deliver end-to-end solutions that power our platform. You will own critical design deliverables at both high and low level, including ERDs, logical and physical models, and architectural reviews. Beyond architecture, you will actively contribute to building robust services and APIs that leverage data for automation, customer experience optimization, and innovative product development. As the company continues to grow and data volumes surge, your ability to harness this data and translate it into actionable insights and advanced solutions will be pivotal to our success.

Your key responsibilities as a Senior Data & Full-Stack Developer will include:

  • Drive and design future-proof data architectures, strategically aligning data systems with business objectives to ensure efficient and scalable data management
  • Architect conceptual, logical, and physical architectures and data models for operational enterprise data and analytics solutions using recognised data modelling approaches
  • Collaborate with project teams to ensure architectural principles are met, contribute to the build, and act as a change agent
  • Build container-based big data architectures on top of Kubernetes
  • Design and implement large and complex data solutions (Data Warehouse, Data Lake, Data Analytics) using architectural patterns such as microservices
  • Develop and understand the enterprise data landscape and map data stores and flows between the operational systems for our microservice approach
  • Implement high-performance, scalable, and testable components for our architecture and drive their development
  • Develop and deploy modern architectural patterns and techniques (microservices, DDD, TDD), including development with modern frameworks such as Spring Boot and Spring Cloud
  • Develop and deploy modern frontend microservices and enrich DM Storybook using the latest Angular version
  • Develop RESTful APIs and microservices-based solutions leveraging container technologies (AKS, Kubernetes, Docker)

General:

  • Create a robust data architecture to manage data efficiently and effectively
  • Define the data strategy and the key principles associated with it
  • The role is varied and there are opportunities to become involved in activities across all parts of the business

Salary: Negotiable


Platform Reliability Engineer

Ho Chi Minh - Viet Nam


Outsource

  • DevOps

Maintain production reliability of the Linux-based research and trading platform within a globally distributed engineering team. Respond quickly to production infrastructure issues. Comprehend internal client needs and effectively communicate them to regional and global leadership. Identify risks, develop contingency plans, and implement solutions to mitigate them. Enhance the observability platform to monitor the performance and health of critical computing environments. Take part in occasional on-call rotations and support on-call staff during their shifts. Contribute to organizational knowledge through documentation, education, and writing maintainable code.

Salary: Negotiable


Director Engineering – Software Engineering and AI Inferencing Platforms

Ho Chi Minh, Ha Noi - Viet Nam


Product

  • Management
  • Backend
  • DevOps
  • Data Engineering
  • Cloud
  • AI

Lead and expand engineering teams in Vietnam across system software, data science, and AI platforms. Drive the creation, structure, and delivery of high-performance system software platforms that support AI products and services. Collaborate with global teams across Machine Learning, Inference Services, and Hardware/Software integration to guarantee performance, reliability, and scalability. Oversee the development and optimization of AI delivery platforms in Vietnam, including NIMs, Blueprints, and other flagship services. Collaborate with open-source and enterprise data and workflow ecosystems to advance accelerated AI factory, data science, and data engineering workloads. Promote continuous integration, continuous delivery, and engineering best practices across multi-site R&D Centers. Work with product management and other stakeholders to ensure enterprise readiness and customer impact. Establish and implement standard processes for large-scale, distributed system testing including stress, scale, failover, and resiliency testing. Ensure security and compliance testing aligns with industry standards for cloud and data center products. Mentor and develop talent within the organization, fostering a culture of quality and continuous improvement.

Salary: Negotiable
