Senior Cloud Data Architect

JOB DESCRIPTION

In view of the development of its new operating model, we are currently recruiting a Cloud Data Architect to join our team in Singapore and drive development with international ambition.
The prime responsibility of this new role is to design and build high-performance, resilient data architectures and data patterns that support a microservice-based application architecture.
You will lead the Platform Design Committee and be responsible for the high- and low-level data design deliverables (ERDs, logical and physical designs, and reviews).
As more customers engage with the company and the amount of available data increases, your understanding of how to use this data to improve everything from automation and customer experience to product and service development will be key to our success.
Key Responsibilities
Drive and design future-proof data architectures, strategically aligning data systems with business objectives to ensure efficient and scalable data management.
Architect conceptual, logical, and physical architectures and data models for operational enterprise data and analytics solutions using recognized data modeling approaches
Collaborate with project teams to ensure architectural principles are met, and act as a change agent
Build container-based big data architectures on top of Kubernetes
Evaluate cutting-edge big data technologies
Design and implement large and complex data solutions (Data Warehouse, Data Lake, Data Analytics) using various architectural patterns such as microservices
Utilize rapid prototyping techniques to accelerate time-to-market for our customers
Scout trends in new technologies
General
Create robust data architecture to manage data efficiently and effectively
Define the data strategy and the key principles associated with it
Develop an understanding of the enterprise data landscape and map data stores and flows between operational systems for our microservice approach
The role is varied and there are opportunities to become involved in activities across all parts of the business

JOB REQUIREMENT

Essential:
3+ years of cloud data experience and insurance industry track record
Strong data architecture and data modeling skills
Experience with designing solutions for managing highly complex business rules within the Azure ecosystem and cloud infrastructure, including data governance
Extensive experience working with Data-APIs (RESTful endpoints)
Prior work experience in Insurance consulting/architecture within a software and/or services company
Ability to manage and work autonomously on projects
Advanced SQL and Python skills
Experience with DataOps
Experience applying DevOps practices on cloud data platforms, e.g. Terraform for Infrastructure as Code (IaC), GitOps, Docker, and Kubernetes
Experience building data pipelines leveraging Spark (Azure Databricks or Apache Spark), and Airflow.
A successful candidate must be able to demonstrate:
A visionary with a proven track record in strategic planning, individual/team development, and service delivery
Embodies a collaborative approach in bringing both business and technology stakeholders together to deliver technology solutions that enable tangible business benefits
Proactive and solutions-oriented thinking
Innovative and entrepreneurial thinking
Working well as part of a team while remaining autonomous
Working in a fast-paced, high-volume environment
Decision-making abilities
Strong work ethic
Desirable:
Breadth of technical experience and knowledge, with depth / subject-matter expertise in the following data analytics and AI platform cloud solutions:
SQL including OSS (PostgreSQL, MySQL, etc.), Azure SQL
Data storage and archive
NoSQL databases including OSS (MariaDB, MongoDB, etc.), Cosmos DB
Big Data including SQL DW, Snowflake, BigQuery, Redshift
Advanced analytics including Azure Databricks; visualization tools such as Power BI, Tableau, QlikView
Streaming, IoT, real-time analytics
ETL/ELT, data governance, data security
Data science, data engineering
Deep learning and machine learning including Azure ML, ML Server

WHAT'S ON OFFER

The Opportunity
You will realize your full potential by developing innovative products with cutting-edge cloud and microservices technologies across the full lifecycle: you propose it, you build it, you own it.
You will be a foundational member of a potentially game-changing startup in the insurance domain.
The Benefits
Very competitive remuneration package
Premium healthcare for yourself and two family members
Monthly meal, telephone and transport allowance
Generous year-end bonus
A solid business and technical team behind you
A pleasant, enthusiastic, international work environment
Opportunities to travel and work across Southeast Asia
A brand-new, state-of-the-art office in District 1 (Ho Chi Minh City)
Latest technologies, flexible working hours
And many more to come.

CONTACT

PEGASI – IT Recruitment Consultancy | Email: recruit@pegasi.com.vn | Tel: +84 28 3622 8666
We are PEGASI – IT Recruitment Consultancy in Vietnam. If you are looking for a new opportunity in your career path, please visit our website www.pegasi.com.vn for reference. Thank you!

Job Summary

Company Type: Product
Technical Skills: Data Engineering, Cloud, Python, ETL/ELT, SQL, Azure, Data Science, Machine Learning
Location: Ho Chi Minh - Viet Nam
Working Policy:
Salary: Negotiation
Job ID: J01562
Status: Close
