Recreate data connectors, ETL jobs, and dashboards currently in use
Develop databases and data pipelines/ETL using modern technologies and tools
Assist the team in developing operationally efficient analytic solutions
Define standards and methodologies for the data warehousing environment
Develop easily scalable data pipelines using modern tools and technologies such as AWS, Snowflake, Spark, and Kafka to ingest data from various systems
Transform requirements into scalable technical solutions capable of meeting data warehousing design standards
Build data pipelines and ETL applications that support business operations in advertising, content, and finance/accounting
Collaborate to diagnose data migration issues and improve system performance
Collaborate efficiently with product management, technical program management, operations, and other engineers
Job Requirements:
Bachelor's/Master's degree in Engineering or Computer Science (or equivalent experience)
3+ years of relevant experience as a data engineer
Thorough knowledge of building scalable data systems and data-driven products while working with cross-functional teams
Expertise in utilizing Python, SQL, Apache Airflow, Snowflake, dbt, and Amazon QuickSight
Must be able to build data pipelines and ETL applications with large data sets
Ability to build REST APIs for back-end services
Must possess knowledge of implementing, testing, debugging, and deploying data pipelines using tools like Prefect, Airflow, Glue, Kafka, AWS serverless services (Lambda, Kinesis, SQS, SNS), Fivetran, or Stitch Data/Singer
Professional experience working with cloud data warehousing technologies such as Redshift, BigQuery, Spark, Snowflake, Presto, Athena, and S3
Experience with SQL DB administration (PostgreSQL, MS SQL, etc.)
Understanding of complex, distributed, microservice web architectures
Experience with Python back-end development and ETL for moving data between databases
Solid understanding of analytics needs and the proactiveness to build generic solutions that improve efficiency