(Middle - Senior) DevOps Engineer

JOB DESCRIPTION

The DevOps Partner's main responsibility is to enable the whole team to experience the full DevOps cycle (not just to deploy and operate things on the team's behalf).
Giving engineers the responsibility and accountability to plan, code, test, deploy, and operate their product quickly and reliably is our ultimate goal.
We're a startup, which means nothing is pretty here: you will be required to adapt to many situations and be familiar with a lot of open source components.
Our team is split into multiple squads. Each squad is responsible for some components of the platform, but we share the same responsibilities and help each other fulfil the objectives below:
Guide the whole team toward DevOps culture
As a senior member of the team, you are expected to keep yourself up to date with best practices in the field and transfer them to our team members via internal training and, perhaps, some public events.
The topics can be about:
Observability
Reliability
Testability
Scalability
Security
Then apply them to our daily operations and our platform architecture, and give the team direction to improve their development cycle.
Improve our team performance
We have a mixed architecture: a big monolith component alongside many micro-services and micro-frontends. We have thousands of lines of bash scripts that spin up local minikube clusters for our developers (both backend and client-side) to code and test their product locally, and the same scripts then deploy to staging, UAT, and production on GKE using GitHub Actions.
There is a lot of room for improvement here, such as:
Replace our bash scripts with Google Skaffold for a better local development and deployment experience
Optimise build times for Golang and Node backend services, React web apps (micro-frontends), and Flutter mobile apps
Analyse the pull request lifecycle
Automatically quarantine flaky tests
Canary deployments with Argo Rollouts
And so much more…
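As a rough illustration of the Skaffold migration above, a single skaffold.yaml can cover both the local minikube loop and the GKE deploys that the bash scripts handle today. This is a hedged sketch only: the artifact names, paths, and profile names are assumptions, not this repo's actual layout.

```yaml
# Sketch of a skaffold.yaml replacing the bash scripts.
# Image names, contexts, and manifest paths are placeholders.
apiVersion: skaffold/v2beta29
kind: Config
build:
  artifacts:
    - image: example/backend        # placeholder image name
      context: services/backend     # placeholder path
deploy:
  kubectl:
    manifests:
      - k8s/*.yaml
profiles:
  - name: local                     # `skaffold dev -p local` on minikube
    activation:
      - kubeContext: minikube
  - name: staging                   # `skaffold run -p staging` on GKE
```

With profiles, `skaffold dev` gives developers the rebuild-on-change local loop, while CI runs `skaffold run -p staging` from the same file, which is the main win over maintaining parallel bash paths.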
We have A LOT OF E2E test cases. To organise the tests, you need to understand the business domain, run them in a specific context, and narrow down the scope to pinpoint exactly what's wrong.
Oh, did I mention that we have terrible memory leaks during unit tests, so we need to split them into dozens of parallel runs as a workaround?
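One common way to implement that parallel-run workaround on GitHub Actions is a matrix of test shards, so no single runner lives long enough to hit the leak. A hedged sketch, assuming a Node service tested with Jest (job names, shard count, and commands are illustrative):

```yaml
# Illustrative GitHub Actions workflow fragment: run the unit tests
# as 4 parallel shards instead of one leaking process.
jobs:
  unit-test:
    runs-on: ubuntu-latest
    strategy:
      fail-fast: false
      matrix:
        shard: [1, 2, 3, 4]
    steps:
      - uses: actions/checkout@v4
      - uses: actions/setup-node@v4
        with:
          node-version: 20
      - run: npm ci
      # Jest >= 28 supports --shard; --logHeapUsage helps track the leak.
      - run: npx jest --shard=${{ matrix.shard }}/4 --logHeapUsage
```

The same matrix pattern works for Go (`go test` over package subsets) if the leaking suite is a backend service instead.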
Improve our platform performance
Because of our startup architecture :) we have several GKE clusters with a few hundred pods on each. Half of them are open source components, and each requires a different deployment and scaling strategy; we need your help with this, as the number of components is increasing dramatically. Along the way you will run out of memory because of our Java-based components, so be ready for that.
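For the Java memory problem specifically, a typical mitigation is to make the JVM size its heap from the container's cgroup limit and leave headroom for off-heap memory. A minimal sketch, where the component name, image, and values are placeholder assumptions:

```yaml
# Illustrative resource settings for a JVM-based open source component.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: example-java-component      # placeholder name
spec:
  replicas: 2
  selector:
    matchLabels: { app: example-java-component }
  template:
    metadata:
      labels: { app: example-java-component }
    spec:
      containers:
        - name: app
          image: example/java-component:latest   # placeholder image
          env:
            # Cap the heap at 75% of the container limit, leaving
            # headroom for metaspace and native memory.
            - name: JAVA_TOOL_OPTIONS
              value: "-XX:MaxRAMPercentage=75.0"
          resources:
            requests: { memory: "1Gi", cpu: "250m" }
            limits: { memory: "2Gi" }
```

Without the heap cap, a JVM that sizes itself from host memory will overshoot the 2Gi limit and get OOMKilled, which matches the symptom described above.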
Of course we have Prometheus, Grafana, and Alertmanager; you will need to operate them and make them reliable and scalable. You will also implement and improve the current tool set to automatically add telemetry instrumentation to our internal services, or write a custom exporter for the open source components that still don't have one.
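To give a feel for the exporter work: a custom exporter is just an HTTP endpoint serving the Prometheus text exposition format. This is a minimal standard-library sketch, not a production exporter; the metric names and the collect() stub are assumptions standing in for scraping a real component's stats endpoint.

```go
// Minimal custom-exporter sketch using only the Go standard library.
package main

import (
	"fmt"
	"io"
	"net/http"
	"net/http/httptest"
)

// formatGauge renders one sample in the Prometheus text format.
func formatGauge(name string, value float64) string {
	return fmt.Sprintf("%s %g", name, value)
}

// collect stands in for querying the real component's admin/stats
// endpoint; the metric name here is a placeholder.
func collect() map[string]float64 {
	return map[string]float64{"component_up": 1}
}

func metricsHandler(w http.ResponseWriter, r *http.Request) {
	w.Header().Set("Content-Type", "text/plain; version=0.0.4")
	for name, value := range collect() {
		fmt.Fprintln(w, formatGauge(name, value))
	}
}

func main() {
	// A real exporter would http.ListenAndServe(":9100", ...) and be
	// scraped by Prometheus; here we hit the handler once via httptest
	// so the sketch is self-contained and terminates.
	srv := httptest.NewServer(http.HandlerFunc(metricsHandler))
	defer srv.Close()
	resp, err := http.Get(srv.URL + "/metrics")
	if err != nil {
		panic(err)
	}
	body, _ := io.ReadAll(resp.Body)
	resp.Body.Close()
	fmt.Print(string(body))
}
```

In practice you would likely use the prometheus/client_golang library instead of hand-formatting samples, but the shape of the work is the same: collect, format, serve.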
By the way, do you know any stress test framework that works with gRPC? We want to compare our custom ad-hoc stress test tool against it.
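One open source candidate for that comparison is ghz, a gRPC load testing tool. A hedged usage sketch, where the proto file, service/method name, payload, and target address are all placeholders:

```shell
# Hypothetical ghz invocation -- every name below is a placeholder.
ghz --insecure \
    --proto ./orders.proto \
    --call orders.OrderService/GetOrder \
    -d '{"id": "123"}' \
    -c 50 -n 10000 \
    localhost:50051
```

ghz reports latency distributions and throughput per run, which gives a baseline to measure the in-house tool against.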

JOB REQUIREMENT

Experienced in day-to-day development tasks: at least 2 years working as a back-end, front-end, or mobile engineer.
Experienced with a team performance monitoring framework (e.g. Four Keys).
Experienced with Infrastructure as Code (Terraform or Pulumi) and, of course, excellent at bash.
Experienced with cloud computing and the container ecosystem (production-level experience required).

WHAT'S ON OFFER

We try to bring the best experience to our members through culture, environment, and a flexible working style.
Probation: 2 months (100% full-time salary).
Health insurance package from BaoViet.
Role rotation opportunities.
14 days of paid leave annually.
Young, dynamic, and cooperative working environment.

CONTACT

PEGASI – IT Recruitment Consultancy | Email: recruit@pegasi.com.vn | Tel: +84 28 3622 8666
We are PEGASI – IT Recruitment Consultancy in Vietnam. If you are looking for a new opportunity in your career path, kindly visit our website www.pegasi.com.vn for reference. Thank you!

Job Summary

Company Type:

Learning Hub, EdTech start-up

Technical Skills:

DevOps

Location:

Ho Chi Minh - Viet Nam

Working Policy:

Salary:

$ 1,200 - $ 3,000

Job ID:

J01052

Status:

Closed
