Data Engineer - ETL Tools (4-6 yrs)
Steer Lean Consulting
Flexible timing
Job Description:
As a Data Engineer, you will design, build, and maintain scalable data pipelines and systems to support analytics and business intelligence.
You will work closely with data scientists, analysts, and other engineers to ensure seamless data integration, quality, and availability.
This role requires hands-on experience with big data tools, cloud platforms, and advanced ETL processes.
Key Responsibilities:
Data Pipeline Development:
- Design, develop, and maintain robust and scalable ETL pipelines to process and transform data.
- Implement data ingestion frameworks to collect data from various sources, including APIs, databases, and flat files (a minimal sketch follows this list).
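For illustration, here is a minimal sketch of the kind of batch ingestion pipeline described above, assuming Python with requests, pandas, and SQLAlchemy; the API endpoint, connection string, table, and column names are all hypothetical placeholders.

```python
import pandas as pd
import requests
from sqlalchemy import create_engine

API_URL = "https://api.example.com/v1/orders"      # hypothetical source endpoint
DB_URI = "postgresql://etl_user:***@localhost/dw"  # hypothetical warehouse

def extract() -> pd.DataFrame:
    """Pull raw records (a JSON array of objects) from a REST API."""
    resp = requests.get(API_URL, timeout=30)
    resp.raise_for_status()
    return pd.DataFrame(resp.json())

def transform(df: pd.DataFrame) -> pd.DataFrame:
    """Basic cleanup: normalize column names, deduplicate, parse dates."""
    df.columns = [c.strip().lower() for c in df.columns]
    df = df.drop_duplicates(subset=["order_id"])
    df["created_at"] = pd.to_datetime(df["created_at"])
    return df

def load(df: pd.DataFrame) -> None:
    """Append the cleaned batch to a staging table in the warehouse."""
    engine = create_engine(DB_URI)
    df.to_sql("stg_orders", engine, if_exists="append", index=False)

if __name__ == "__main__":
    load(transform(extract()))
```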
Data Architecture:
- Architect efficient data storage solutions using relational and non-relational databases.
- Optimize and manage data warehouses, data lakes, and cloud-based storage systems.
Data Quality & Governance:
- Ensure high data quality through rigorous testing, validation, and monitoring (a minimal validation sketch follows this list).
- Implement data governance practices, including metadata management and lineage tracking.
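As a sketch of the kind of quality gate this implies, written in plain pandas; the column names (order_id, amount) are hypothetical:

```python
import pandas as pd

def validate_batch(df: pd.DataFrame) -> list[str]:
    """Run lightweight quality checks on an incoming batch; return failure messages."""
    failures = []
    if df.empty:
        failures.append("batch is empty")
    if df["order_id"].isna().any():        # completeness check
        failures.append("null order_id values found")
    if df["order_id"].duplicated().any():  # uniqueness check
        failures.append("duplicate order_id values found")
    if (df["amount"] < 0).any():           # validity check
        failures.append("negative amounts found")
    return failures

# A pipeline step would typically halt the load when checks fail:
# if (failures := validate_batch(batch)):
#     raise ValueError(f"data quality checks failed: {failures}")
```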
Collaboration & Support:
- Collaborate with data scientists and analysts to define data requirements and deliver solutions.
- Provide ongoing support for data platforms and troubleshoot data-related issues.
Performance Optimization:
- Optimize database performance and query execution times (see the query-plan sketch below).
- Monitor and improve the performance of data pipelines and storage solutions.
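A common first step when tuning query execution times is reading the database's own execution plan. Here is a sketch against PostgreSQL via SQLAlchemy (connection string and table are hypothetical); a sequential scan over a large table in the output usually points to a missing index on the filter column.

```python
from sqlalchemy import create_engine, text

engine = create_engine("postgresql://etl_user:***@localhost/dw")  # hypothetical

with engine.connect() as conn:
    # EXPLAIN ANALYZE executes the query and reports the actual plan and timings.
    plan = conn.execute(
        text("EXPLAIN ANALYZE SELECT * FROM stg_orders WHERE created_at >= :since"),
        {"since": "2024-01-01"},
    )
    for row in plan:
        print(row[0])
```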
Technical Skills:
Core Data Engineering Skills:
- Proficiency in programming languages such as Python, Scala, or Java for data manipulation and pipeline development.
- Strong experience with ETL tools and frameworks such as Apache Airflow, Informatica, or Talend (an Airflow-style sketch follows this list).
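By way of illustration, orchestrating the extract/transform/load stages as an Apache Airflow DAG might look like the following sketch (assuming Airflow 2.4+; the DAG id and callables are hypothetical placeholders):

```python
from datetime import datetime

from airflow import DAG
from airflow.operators.python import PythonOperator

def extract():
    ...  # pull raw data from the source

def transform():
    ...  # clean and reshape the batch

def load():
    ...  # write the result to the warehouse

with DAG(
    dag_id="orders_etl",               # hypothetical pipeline name
    start_date=datetime(2024, 1, 1),
    schedule="@daily",                 # the `schedule` keyword is Airflow 2.4+
    catchup=False,
) as dag:
    extract_task = PythonOperator(task_id="extract", python_callable=extract)
    transform_task = PythonOperator(task_id="transform", python_callable=transform)
    load_task = PythonOperator(task_id="load", python_callable=load)

    extract_task >> transform_task >> load_task  # run the stages in order
```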
Big Data & Analytics:
- Hands-on experience with big data tools like Hadoop, Spark, or Kafka (see the Spark sketch after this list).
- Expertise in querying and managing large datasets using SQL or similar query languages.
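For instance, a small PySpark job combining both bullets, reading a large dataset and aggregating it with SQL-style functions; the lake paths and column names are hypothetical:

```python
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("daily_revenue_rollup").getOrCreate()

# Read a Parquet dataset from the lake (path is hypothetical) and compute
# a per-day revenue rollup.
orders = spark.read.parquet("s3a://data-lake/orders/")
daily = (
    orders
    .withColumn("order_date", F.to_date("created_at"))
    .groupBy("order_date")
    .agg(F.sum("amount").alias("revenue"), F.count("*").alias("n_orders"))
)
daily.write.mode("overwrite").parquet("s3a://data-lake/marts/daily_revenue/")

spark.stop()
```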
Cloud Technologies:
- Experience with cloud platforms such as AWS, Azure, or Google Cloud Platform (GCP).
- Familiarity with cloud-native data tools such as AWS Glue, BigQuery, Databricks, or Azure Data Factory.
Databases & Storage:
- Proficient in relational databases (e.g., MySQL, PostgreSQL) and NoSQL databases (e.g., MongoDB, Cassandra).
- Knowledge of data warehousing solutions like Snowflake, Redshift, or Synapse Analytics.
Data Processing & Integration:
- Experience with real-time data processing and stream analytics using tools like Apache Kafka or Flink (a consumer sketch follows this list).
- Familiarity with APIs and data integration techniques.
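A minimal stream-consumption sketch using the kafka-python client; the topic, broker address, consumer group, and event fields are hypothetical:

```python
import json

from kafka import KafkaConsumer  # pip install kafka-python

consumer = KafkaConsumer(
    "click-events",                      # hypothetical topic
    bootstrap_servers="localhost:9092",  # hypothetical broker
    group_id="page-counter",
    auto_offset_reset="earliest",
    value_deserializer=lambda b: json.loads(b.decode("utf-8")),
)

# Keep a running count of events per page: the shape of a simple
# stateful stream job before reaching for Flink or Spark Streaming.
counts: dict[str, int] = {}
for msg in consumer:
    page = msg.value.get("page", "unknown")
    counts[page] = counts.get(page, 0) + 1
    print(page, counts[page])
```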
Visualization & Reporting (Bonus):
- Familiarity with data visualization tools like Tableau, Power BI, or Looker.
Qualifications:
Educational Background: Bachelor's degree in Computer Science, Data Engineering, Information Systems, or a related field.
Experience:
- 4-6 years of hands-on experience in data engineering and big data solutions.
- Proven experience building and optimizing scalable data pipelines and systems.
Soft Skills:
- Strong analytical and problem-solving abilities.
- Excellent communication and teamwork skills.
- Ability to work independently and manage multiple priorities.
Functional Areas: Software/Testing/Networking
Location: Bangalore / Bengaluru