Design and develop robust, scalable data pipelines to support data integration using Kafka, Fivetran, Snowflake, Airflow, and dbt (see the illustrative sketch after this list).
Help lead the implementation and maintenance of data platform solutions, ensuring data integrity, performance, and security.
Collaborate with cross-functional teams including data scientists, analysts, and software engineers to understand data requirements and deliver high-quality solutions.
Evaluate and implement best practices for data modeling, ETL processes, and data quality assurance.
Optimize and tune data processing workflows and SQL queries for improved performance and efficiency.
Provide technical leadership and mentorship to junior data engineers, guiding them in implementing best practices and delivering high-quality solutions.
Stay up-to-date with industry trends and advancements in data engineering, continuously improving the team's technical knowledge and skill set.
Collaborate with infrastructure and operations teams to ensure reliable and scalable data storage, processing, and monitoring solutions.
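To give a concrete flavor of the orchestration work described above, here is a minimal, illustrative Airflow DAG that runs dbt builds and tests after a source sync lands in Snowflake. The DAG id, schedule, project path, and the stand-in sync task are hypothetical placeholders for this sketch, not a description of our actual pipelines.

# Illustrative only: a minimal Airflow DAG of the shape this role works with.
# DAG id, schedule, and paths are hypothetical placeholders.
from datetime import datetime

from airflow import DAG
from airflow.operators.bash import BashOperator

with DAG(
    dag_id="daily_warehouse_refresh",  # hypothetical pipeline name
    start_date=datetime(2024, 1, 1),
    schedule="@daily",
    catchup=False,
) as dag:
    # In practice a Fivetran provider operator or sensor would trigger and
    # await the source sync; an echo stands in for it in this sketch.
    wait_for_sync = BashOperator(
        task_id="wait_for_fivetran_sync",
        bash_command="echo 'assume Fivetran sync has landed raw data in Snowflake'",
    )

    # Build models in Snowflake, then run dbt's data quality tests.
    dbt_run = BashOperator(
        task_id="dbt_run",
        bash_command="cd /opt/dbt_project && dbt run",  # hypothetical project path
    )
    dbt_test = BashOperator(
        task_id="dbt_test",
        bash_command="cd /opt/dbt_project && dbt test",
    )

    wait_for_sync >> dbt_run >> dbt_test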
This role requires
Bachelor's degree in Computer Science, Engineering, or a related field. Advanced degree preferred.
Proven experience (5+ years) in data engineering, designing and implementing data pipelines, and building data infrastructure.
Strong expertise in working with Snowflake, Airflow, and dbt, including data modeling, ETL, and data quality assurance.
Proficiency in SQL and experience with optimizing and tuning queries for performance.
Solid understanding of data warehousing concepts, dimensional modeling, and data integration techniques.
Experience with cloud platforms (e.g., AWS, Azure, GCP) and cloud-based data technologies.
Strong programming skills in Python or other scripting languages for data manipulation and automation.
Excellent problem-solving and troubleshooting abilities with a keen attention to detail.
Strong communication skills with the ability to effectively collaborate with cross-functional teams and stakeholders.
Leadership experience, including mentoring junior team members and guiding technical projects.
Bonus points if you have
Experience with streaming data processing frameworks (e.g., Apache Kafka, Apache Flink).
Familiarity with containerization technologies (e.g., Docker, Kubernetes).
Knowledge of distributed computing frameworks (e.g., Spark, Hadoop).
Experience with data governance, data security, and compliance practices.
Understanding of DevOps principles and experience with CI/CD pipelines.
Join our talented Data Platform Team and contribute to the development of a cutting-edge data infrastructure that enables powerful data analytics and insights for our customers.