Get to know our Team:
Data Engineering runs the code, pipelines, and infrastructure that extract, process, and prepare every piece of data generated or consumed by Grab’s systems.
We are a diverse team of software engineers who not only solve all kinds of data-related problems faced by teams from all corners of Grab, but also act as a bridge that ties everyone together through data.
As data in Grab never stops growing, this team never stops learning, innovating, and expanding, so that we can bring in or build the latest and best tools and technology to ensure the company’s continued success.
Get to know the Role:
Data Engineers at Grab get to work on one of the largest and fastest-growing datasets of any company in Southeast Asia.
We operate in a challenging, fast-paced, and ever-changing environment that will push you to grow and learn. You will be involved in many areas of Grab’s data ecosystem, including reporting and analytics, data infrastructure, and various other data services that are integral parts of Grab’s overall technical stack.
The day-to-day activities:
Spearhead the development of systems, architectures, and platforms that can scale to the three Vs of Big Data (volume, velocity, variety)
Streamline data access and security so that data scientists and analysts can easily access data whenever they need it
Build out scalable and reliable ETL pipelines and processes to ingest data from a large number and variety of data sources
Maintain and optimize the performance of our data analytics infrastructure to ensure accurate, reliable, and timely delivery of key insights for decision-making
Lead the effort to clean and normalize subsets of data of interest as a preparatory step before deeper analysis by data scientists
Run modern, high-performance analytical databases and computation engines such as Redshift, BigQuery, Greenplum, Presto, and others
The must-haves:
A degree or higher in Computer Science, Electronics or Electrical Engineering, Software Engineering, Information Technology, or another related technical discipline
Experience handling large datasets (multiple TBs) and working with structured, unstructured, and geographical data
Experience designing high-performance, scalable infrastructure stacks for Big Data analytics
Deep understanding of databases and engineering best practices, including error handling and logging, system monitoring, building human-fault-tolerant pipelines, knowing how to scale up, continuous integration, database administration, maintaining data cleanliness, and ensuring deterministic pipelines
A real passion for data, new data technologies, and discovering new and interesting solutions to the company’s data needs
Excellent communication skills for working with product development engineers to coordinate the development of data pipelines and any new product features that can be built on top of the results of data analysis