Data Engineer - ETL/Python (5-12 yrs)
Creencia Technologies
posted 22d ago
We are looking for a Data Engineer to join the engineering team of one of our leading media clients. Your primary focus will be to design, develop, ship, maintain, and oversee the overall architecture of the client's data platforms, primarily the systems behind customer-facing products.
The role will involve implementing the data infrastructure required to enable predictive analytics and ML algorithms. Additionally, you will support the Head of AI and Data Engineering in delivering the broader organisational vision. The team also supports other departments with forecasting technologies. You will work collaboratively with the corporate data team (responsible for reporting, analytics, and insights) and liaise with data scientists and analysts across the organisation.
Key Responsibilities:
- Collaborate with data product managers, analysts, and data scientists to architect, build, and maintain data processing pipelines in SQL or Python.
- Build and maintain data warehouse/data lake-house solutions for analytics, reporting, and ML predictions.
- Implement DataOps and DevOps practices to streamline ETL and ELT pipelines.
- Optimise and improve existing processes while transitioning to scalable, best-practice solutions.
- Participate in Agile workflows with collaborative, Kanban-focused product teams.
- Work closely with data science teams and business analysts to refine data requirements for initiatives.
- Educate and train internal stakeholders on data pipelining and preparation techniques.
- Ensure compliance and governance through responsible data management practices.
- Act as a data and analytics evangelist, promoting capabilities and educating business leaders on how to leverage data effectively.
Requirements for Success:
- Experience: At least 3 years of hands-on experience in data processing for large-scale digital applications.
Technical Skills:
- Proficiency in Python, Spark (SparkSQL), and SQL.
- Experience with data warehouse technologies like Snowflake and tools like Alation.
- Expertise in building bespoke data pipelines (ETL/ELT architectures) using Apache Airflow, AWS Glue, Amazon Athena, or related tools.
- Strong knowledge of cloud-native technologies (e.g., serverless functions, API gateways, NoSQL databases, and caching).
- Experience in mentoring data engineering teams and collaborating with data scientists.
- A degree in Software/Data Engineering, Computer Science, or equivalent work experience.
- Strong communication and collaboration skills, coupled with a hands-on, results-driven attitude.
What We Offer:
The organization values professional development, diversity, and inclusion. Employees have access to robust learning platforms such as Udemy and a 10% weekly learning time policy. Team-building activities, celebrations, and support groups further enhance the inclusive and dynamic workplace culture.
Functional Areas: Software/Testing/Networking