Design and build pipelines that move data efficiently across multiple systems and make it available to teams such as Data Science, Data Analytics, and Product.
Design, construct, test, and maintain data management systems.
Understand the data and business metrics the product requires, and architect systems that make that data available in a usable, queryable form.
Ensure that all systems meet business requirements and follow industry best practices.
Stay abreast of new technologies in the domain.
Recommend ways to continually improve data reliability and quality.
Bachelor's or Master's degree, preferably in Computer Science or a related technical field.
2-5 years of relevant experience.
Deep knowledge of, and hands-on experience with, the Kafka ecosystem.
Strong programming experience, preferably in Python, Java, or Go, and a willingness to learn more languages.
Experience working with large-scale data platforms.
Strong knowledge of microservices and of cloud data warehouse and data lake systems, especially AWS Redshift, S3, and Glue.
Strong hands-on experience writing complex, efficient ETL jobs.
Experience with version control systems (preferably Git).
Strong analytical thinking and communication skills.
Passion for finding and sharing best practices and driving discipline for superior data quality and integrity.
Intellectual curiosity to find new and unconventional ways to solve data management problems.