As a Staff Data Engineer at ChargePoint, you will play a crucial role in designing, building, and maintaining our data infrastructure. You will collaborate closely with cross-functional teams to ensure the availability and reliability of data for various analytical and reporting needs. If you are passionate about data engineering, excited to tackle challenges, and have a strong background in developing robust data pipelines, we want to hear from you.
What You Will Bring to ChargePoint
1. Data Pipeline Development: Design, build, and optimize data pipelines to ensure the efficient and scalable flow of data from source to destination.
2. Problem Solving: Demonstrate strong problem-solving and logical reasoning skills to address complex data engineering challenges.
3. Team Collaboration: Collaborate effectively with cross-functional teams and be a proactive team player.
4. Programming: Utilize your programming skills to develop and maintain data pipelines, focusing on reliability and performance.
5. Agile Environment: Thrive in an agile work environment, adapting to changing requirements and priorities.
6. Python: Leverage your proficiency in Python programming to develop data solutions.
7. NLP Understanding: Demonstrate an understanding of Natural Language Processing (NLP) algorithms and the ability to implement them.
8. AWS Stack: Possess strong experience with the AWS data stack, including services like [list relevant AWS services].
9. Big Data Technologies: Apply hands-on experience with PySpark, Apache Spark, and CI/CD implementation using Terraform.
10. Snowflake and dbt: Build ELT pipelines using Snowflake and dbt.
11. Data Modeling: Design and develop physical and logical data models to support data storage and retrieval.
12. BI Tools: Use BI tools such as Power BI or Tableau for data visualization and reporting.
13. Data Quality: Understand the importance of data quality and proactively work towards building high-quality data pipelines.
Requirements
- Bachelor's or Master's degree in Computer Science, Data Science, or a related field.
- 6-10 years of proven experience in data engineering or a similar role.
- Strong problem-solving skills and logical reasoning.
- Proficiency in Python programming.
- Knowledge of NLP algorithms and their implementation.
- Experience with the AWS data stack.
- Hands-on experience with PySpark, Apache Spark, and CI/CD using Terraform.
- Familiarity with Snowflake and dbt for ELT pipelines.
- Data modeling expertise.
- Experience with BI tools such as Power BI and Tableau.
- Strong understanding of data quality best practices.