Build fault-tolerant batch and real-time data pipelines with scalable functions to extract large datasets across multiple platforms, including location and time-series data
Examine existing data processes to find opportunities for improvement
Collaborate effectively with product management, technical program management, operations, and other engineers
Design and implement solutions that improve scalability, performance, cost-effectiveness, and data quality
Job Requirements:
Bachelor's/Master's degree in Engineering or Computer Science (or equivalent experience)
3+ years of relevant experience as a data engineer
Knowledge of Snowflake UDFs/stored procedures
Familiarity with Apache Airflow
Experience with Python
Knowledge of SQL
Expertise in Java and Amazon RDS is a plus
Knowledge of AWS and the pytest framework is desirable