MoEngage is an insights-led customer engagement platform, trusted by 1,200+ global consumer brands. As a Great Place to Work Company, we are a young, fast-paced team that fosters a culture of innovation, ownership, freedom, and fun while building future-ready technology products. Sitting at a confluence of diverse technologies such as Artificial Intelligence, Big Data, and Web and Mobile platforms, MoEngage analyzes billions of data points generated by customers and their devices to predict their behavior and engage them at every touchpoint throughout their lifecycle with personalized communication.
In just ten years since our inception, we have worked with leading Fortune 500 brands such as Deutsche Telekom and McAfee, along with internet-first brands such as Flipkart, Ola, OYO, and Bigbasket, with a global presence that encompasses 35 countries. We currently have offices in San Francisco, Boston, London, Dubai, Ho Chi Minh City, Bangkok, Kuala Lumpur, Singapore, Sydney, Vietnam, Berlin, Jakarta, and Bengaluru.
We take great pride in the care we give our customers. Our top service and support ratings in Gartner's Magic Quadrant, Gartner Peer Insights, and the G2 Summer Reports are a testament to that. Equally notable is our people-centric culture: we were recently included in Battery Ventures' top 25 private cloud computing companies, recognized by the DivHERsity Awards as one of the top 20 diversity companies in the world, and named by The Economic Times as one of the Top Organizations for Women.
Will you be able to thrive in a fast-paced environment where innovation, speed, and customer-centric thinking are the norm? Is it your passion to uncover opportunities others are unaware of and to champion them? Do you crave ownership and a chance to be a part of something that matters? If so, this may be a worthwhile opportunity for you!
About the Data Engineering Team at MoEngage: Join our innovative Data Engineering team, where your expertise will play a critical role in architecting and operating the data infrastructure that powers real-time data ingestion, a large-scale data lake, and Kafka clusters that produce more than a million messages per second. Our team is responsible for high-volume user and event data and business-critical pipelines, ensuring that they are not only robust and high-performing but also scalable and efficient enough to meet the demands of our dynamic data environment.
Key Requirements:
Experience: A minimum of 6-8 years in the data engineering field, demonstrating a track record of managing data infrastructure and pipelines.
Programming Skills: Expertise in at least one high-level programming language, with a strong preference for candidates proficient in Java and Python.
Cloud Infrastructure: Proven experience in setting up, maintaining, and optimizing data infrastructure in cloud environments, particularly on AWS.
Tech Stack Proficiency: Hands-on experience with a variety of data technologies, including:
At least one streaming framework (Kafka Streams/Flink/Samza/Spark Streaming) - mandatory
Kubernetes for container orchestration
AWS S3, Athena, and Glue for storage and ETL services
Spark for large-scale data processing
Debezium for change data capture
Apache Airflow for workflow management
Good to have: experience with Presto/Hive/Hudi/Iceberg
Data Processing: Demonstrable skills in cleansing and standardizing data from diverse sources such as Kafka streams and databases.
Query Optimization: Proficient in optimizing queries against large datasets, minimizing processing times without sacrificing result quality.
Problem-Solving Abilities: An analytical mindset with robust problem-solving skills, essential for identifying and addressing issues during data integration and schema evolution.
Cost Optimization Expertise: A keen eye for cost-saving opportunities without compromising system efficiency; capable of architecting solutions that are robust, scalable, and cost-effective, ensuring optimal resource utilization and avoiding unnecessary expense in cloud and data processing environments.
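To give a flavor of the cleansing and standardization work described in the requirements above, here is a minimal, self-contained Python sketch. The field names and normalization rules are purely illustrative assumptions, not MoEngage's actual schema; in a real deployment a function like this would run inside a streaming job (Kafka Streams, Flink, or Spark Streaming) consuming raw event records.

```python
from datetime import datetime, timezone
from typing import Optional

def standardize_event(raw: dict) -> Optional[dict]:
    """Cleanse one raw event record from a stream (illustrative only).

    Drops malformed records, normalizes identifier and event-name casing,
    and converts timestamps (epoch milliseconds or ISO-8601 strings) to a
    single UTC ISO-8601 representation.
    """
    user_id = raw.get("user_id")
    event_name = raw.get("event")
    if not user_id or not event_name:
        # Malformed record; in practice, route to a dead-letter topic.
        return None

    ts = raw.get("timestamp")
    try:
        if isinstance(ts, (int, float)):
            # Treat numeric timestamps as epoch milliseconds.
            dt = datetime.fromtimestamp(ts / 1000, tz=timezone.utc)
        else:
            dt = datetime.fromisoformat(str(ts))
    except (TypeError, ValueError):
        return None

    return {
        "user_id": str(user_id).strip(),
        "event": str(event_name).strip().lower(),
        "ts": dt.astimezone(timezone.utc).isoformat(),
    }
```

The key design point mirrored here is that cleansing decisions (drop, normalize, route to dead-letter) are made per record, which keeps the logic easy to unit-test independently of the streaming framework that hosts it.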