Assist in creating and implementing the Data Platform from the ground up: data ingestion tools and integrations, stream and batch processing using Kappa or Lambda architectures, monitoring, and more
Build complete ETL/ELT pipelines to feed the Data Warehouse and give downstream analytical data consumers control over the data pipelines
Provide consultative and data engineering solutions for internal customers, working with cross-functional teams across Product, Engineering, Data Science, Analytics/BI, and Operations to understand their data needs
Job Requirements:
Bachelor's/Master's degree in Engineering or Computer Science (or equivalent experience)
3+ years of relevant experience as a data or back-end engineer
3+ years of experience developing data platform infrastructure
Extensive working experience with at least one programming language such as Scala or Python/PySpark, along with SQL expertise
Extensive experience with distributed big data technologies such as Spark, Presto, Hive, and Redshift
Experience implementing microservice architectures, event-based processing, and streaming pipelines
Knowledge of data engineering best practices, with the ability to mentor other engineers and provide technical leadership
Solid understanding of continuous integration and deployment (CI/CD) principles and agile software development