Data Engineer - Python/PySpark (5-8 yrs)
iFlow | Gurgaon / Gurugram | 5-8 years | Flexible timing
Posted 2 months ago
Role: Data Engineer
Location: Gurugram
Experience: 5+ years (must have)
Primary Skills: Python, PySpark, SQL, Hive, Data Warehousing, Data Modelling, Tableau/Power BI, Data Migration, Source-to-Target Mapping
Key Responsibilities:
- Independently run workshops and interviews with appropriate stakeholders (data owners from Business, Service and Product teams) to understand the magnitude of data usage and dependencies, and identify key use cases associated with data sources
- Assess data sources in scope, prioritize data products based on usage and priority, and ascertain the data required
- Review and analyze common data used across multiple use cases but with varied definitions and derivations, and perform root cause analysis
- Categorize the data based on usage and priority
- Develop data models (conceptual, logical and physical) and create data mappings (source-to-target mapping of complex business rules) while translating business requirements into technical specifications
- Ensure successful migration of legacy system data used across all use cases to the new data platform
- Compile, segregate and prioritize legacy system data
- Map all legacy system variables in scope to their new data platform counterparts
- Facilitate the creation, migration and testing of all variables from the legacy system
- Test the SOR and derived data, scanners and dashboards enabled on the new data platform
- Apply strong analytical skills to collect, organize, analyze, and disseminate significant amounts of information with attention to detail and accuracy
- Develop optimized SQL queries to extract data from test and production databases, and perform data analysis and data quality checks
- Use Python for data engineering tasks, scripting and automation
- Build ETL pipelines using PySpark and Python on cloud platforms (e.g., AWS, Azure, Google Cloud); a minimal sketch of this kind of pipeline follows this list
- Strong understanding of data integration best practices and concepts
- Analyze data to extract the business logic behind different business and technical BI reports (Tableau, Power BI, etc.) and create visualizations to present the findings
- Communicate effectively with both technical and non-technical team members and partner teams to ensure alignment and project success
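To illustrate the PySpark ETL, source-to-target mapping and data-quality responsibilities listed above, here is a minimal sketch of a legacy-to-new-platform migration step. All paths, table and column names, and the mapping itself are hypothetical placeholders, not details of this role; the checks shown (row-count reconciliation and a null audit on the key) are one common way to validate a migration, not a prescribed process.

```python
# Minimal sketch: apply a hypothetical source-to-target mapping to a legacy
# extract and run basic data-quality checks before writing to the new platform.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("legacy_migration_sketch").getOrCreate()

# Hypothetical source-to-target mapping: legacy column -> new platform column.
SOURCE_TO_TARGET = {
    "cust_id": "customer_id",
    "acct_open_dt": "account_open_date",
    "bal_amt": "balance_amount",
}

# Read a legacy extract (placeholder path).
legacy_df = spark.read.parquet("s3://legacy-bucket/customers/")

# Rename columns per the mapping and apply a simple derived business rule.
target_df = (
    legacy_df
    .select([F.col(src).alias(tgt) for src, tgt in SOURCE_TO_TARGET.items()])
    .withColumn("is_active", F.col("balance_amount") > 0)
)

# Data-quality checks: row-count reconciliation and null audit on the key.
source_count = legacy_df.count()
target_count = target_df.count()
null_keys = target_df.filter(F.col("customer_id").isNull()).count()

assert source_count == target_count, "Row counts diverged during migration"
assert null_keys == 0, "Null customer_id values found after mapping"

# Write the mapped, validated data to the new platform (placeholder path).
target_df.write.mode("overwrite").parquet("s3://new-platform-bucket/customers/")
```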
Essential Skills and Qualifications:
- Bachelor's degree in Computer Science, Engineering, or a related field, or equivalent work experience
- Excellent problem-solving and analytical skills, with a strong attention to detail
- Effective communication and collaboration skills, with the ability to work in cross-functional teams
- Familiarity with data engineering tools and technologies
- Hands-on experience working on cloud platforms (e.g., AWS, Azure, Google Cloud)
- Strong proficiency in SQL, Python, PySpark, Hive and Big Data technologies
- 5+ years of experience in data engineering or a related role, with a focus on managed services and cloud-based solutions
Main and Primary Technical Skills:
- Python, PySpark, SQL, Hive, Data Warehousing, Data Modelling, Tableau/Power BI, Data Migration, Source-to-Target Mapping
Functional Areas: Software/Testing/Networking