Data Engineer - SQL/PySpark (4-10 yrs)
Elements
posted 3d ago
Flexible timing
Key Responsibilities:
- Design, develop, and optimize scalable data processing pipelines using Spark and PySpark.
- Implement Spark optimization techniques to enhance performance and efficiency.
- Develop and execute complex SQL queries for data manipulation and transformation.
- Collaborate closely with data scientists, analysts, and engineering teams to address data needs.
- Work with cloud services and platforms such as AWS Redshift, AWS Glue, SQL Server, and Databricks.
- Use Python for data processing tasks, ensuring modularization, packaging, and reusability.
- Apply basic knowledge of Informatica for data integration tasks.
- Engage in real-time data streaming and integration with tools such as NiFi, Kafka, and EventHub (preferred).
- Work hands-on with Snowflake for data warehousing and analytics.
Required Skills:
- Spark Expertise: Proven experience in Spark and PySpark, including optimization techniques.
- Complex SQL: Strong knowledge of SQL and hands-on experience with complex data processing.
- Cloud Exposure: Familiarity with AWS services (Redshift, Glue, etc.) and Databricks.
- Python Proficiency: Expertise in Python for data processing, scripting, and automation.
- Data Streaming: Experience with NiFi, Kafka, and EventHub is a plus.
- Informatica: Basic understanding of Informatica tools and their integration with other systems.
- Snowflake: Hands-on experience with Snowflake for data warehousing and analytics.
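The "complex SQL" skill above typically means CTEs, window functions, and the like. A minimal illustration, using the standard library's sqlite3 as a stand-in for the warehouse engines named in the posting (Redshift, SQL Server, Snowflake), with made-up table and column names:

```python
import sqlite3

# In-memory database with a small, invented sales table.
conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE sales (rep TEXT, region TEXT, amount REAL);
    INSERT INTO sales VALUES
        ('alice', 'north', 120.0),
        ('bob',   'north',  80.0),
        ('carol', 'south', 200.0),
        ('dave',  'south',  50.0);
""")

# CTE + window function: rank reps by amount within each region,
# then keep only the top rep per region.
query = """
WITH ranked AS (
    SELECT rep, region, amount,
           RANK() OVER (PARTITION BY region ORDER BY amount DESC) AS rnk
    FROM sales
)
SELECT rep, region, amount FROM ranked WHERE rnk = 1 ORDER BY region;
"""
top_reps = conn.execute(query).fetchall()
# top_reps -> [('alice', 'north', 120.0), ('carol', 'south', 200.0)]
```

The same query shape (CTE feeding a window function) carries over to Redshift, Snowflake, or Spark SQL with minor dialect changes.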
Functional Areas: Software/Testing/Networking