Data Engineer - ETL/Data Pipeline (7-12 yrs)
Le Human Resources Solutions
posted 8d ago
Flexible timing
Job Description
- 7 to 12+ years of hands-on experience in SQL database design, data architecture, ETL, Data Warehousing, Data Mart, Data Lake, Big Data, Cloud (AWS) and Data Governance domains.
- Take ownership of the technical aspects of implementing data pipeline & migration requirements, ensuring that the platform is being used to its fullest potential through designing and building applications around business stakeholder needs.
- Interface directly with stakeholders to gather requirements and own the automated end-to-end data engineering solutions.
- Implement data pipelines to automate the ingestion, transformation, and augmentation of structured, unstructured, and real-time data, and provide best practices for pipeline operations.
- Identify, design, and implement internal process improvements: automating manual processes, optimizing data delivery, and redesigning infrastructure for greater scalability.
- Troubleshoot and remediate data quality issues raised by pipeline alerts or downstream consumers. Implement Data Governance best practices.
- Apply comprehensive knowledge of functional and technical impact analysis; provide advice and ideas for technical solutions and improvements to data systems.
- Create and maintain clear documentation on data models/schemas as well as transformation/validation rules
- Implement tools that help data consumers extract, analyze, and visualize data faster through data pipelines.
- Lead the entire software lifecycle, including hands-on development, code reviews, testing, deployment, and documentation for batch ETLs.
- Work directly with our internal product/technical teams to ensure that our technology infrastructure is seamlessly and effectively integrated
- Migrate current data applications & pipelines to Cloud (AWS) leveraging PaaS technologies
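As an illustrative sketch only (the feed name, columns, and validation rule below are hypothetical, not part of this posting), a minimal batch ingest-transform-validate step of the kind described above could look like this in Python:

```python
import csv
import io

# Hypothetical raw feed: one CSV batch of sensor readings (names are illustrative).
RAW_BATCH = """device_id,reading,unit
m1,21.5,C
m2,,C
m3,70.7,F
"""

def extract(raw: str) -> list[dict]:
    """Ingest: parse the raw CSV batch into records."""
    return list(csv.DictReader(io.StringIO(raw)))

def transform(records: list[dict]) -> list[dict]:
    """Transform: drop rows that fail validation, normalize units to Celsius."""
    clean = []
    for row in records:
        if not row["reading"]:          # data-quality rule: reject missing readings
            continue
        value = float(row["reading"])
        if row["unit"] == "F":          # augmentation: convert Fahrenheit to Celsius
            value = (value - 32) * 5 / 9
        clean.append({"device_id": row["device_id"], "celsius": round(value, 2)})
    return clean

def load(records: list[dict]) -> dict[str, float]:
    """Load: here, materialize into an in-memory 'mart' keyed by device."""
    return {r["device_id"]: r["celsius"] for r in records}

mart = load(transform(extract(RAW_BATCH)))
print(mart)  # {'m1': 21.5, 'm3': 21.5}
```

In a production pipeline of the sort this role describes, these stages would typically run as Spark/PySpark jobs orchestrated by a scheduler such as Airflow, with the load step writing to S3/Redshift rather than an in-memory dict.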
We're excited if you have:
- Graduate with Engineering Degree (CS/Electronics/IT) / MCA / MCS or equivalent with substantial data engineering experience.
- 8+ years of recent hands-on experience with a modern programming language (Scala, Python, Java) is required; Spark/PySpark is preferred.
- Experience with configuration management and version control tools (e.g., Git), and experience working within a CI/CD framework, is a plus. An even bigger plus if you have experience building frameworks.
- 8+ years of recent hands-on SQL programming experience in a Big Data environment is required; Hadoop/Hive experience is preferred.
- Working knowledge of PostgreSQL, RDBMS, NoSQL and columnar databases
- Hands-on experience with AWS cloud data engineering components, including API Gateway, Glue, IoT Core, EKS, ECS, S3, RDS, Redshift, EMR, etc.
- Experience developing and maintaining ETL applications and data pipelines using big data technologies is required; Apache Kafka, Spark, Airflow experience is a must!
- Knowledge of API and microservice integration with applications
- Experience building data solutions for Power BI and Web visualization applications
- Experience with cloud platforms is a plus.
- Experience in managing multiple projects and stakeholders with excellent communication and interpersonal skills
- Ability to develop and organize high-quality documentation
- Superior analytical skills and a strong sense of ownership in your work
- Ability to collaborate with data scientists across projects and contribute to the development and support of analytics, including AI/ML.
- Ability to thrive in a fast-paced environment, and to manage multiple, competing priorities simultaneously
- Prior Energy & Utilities industry experience is a big plus.
Functional Areas: Software/Testing/Networking