Freo - Senior Data Engineer - DataLake/Data Warehousing (5-7 yrs)
About Freo:
Freo is a credit-led neobank building an innovative digital banking experience focused on the unique needs of modern consumers in India and Southeast Asia, with a host of financial products - a digital savings account, an uber-flexible credit line, credit and EMI cards, and buy now, pay later (BNPL) schemes - all in one place.
Freedom starts today with a banking experience that lets users do more and dream bigger. Freo partners with banks and regulated financial institutions; it is not a bank itself, and it does not hold or claim to hold a banking license.
MWYN Tech develops the products and has partnered with the aforementioned banks and financial institutions to deliver these solutions through its technology platforms and mobile applications.
Job Description:
Function: Technical Management - Engineering Management
We are seeking an experienced and motivated Senior Data Engineer to join our data engineering team.
In this role, you will be responsible for designing, developing, and maintaining a robust, scalable data architecture that supports both batch and real-time data processing. The role requires deep expertise in building data pipelines and managing data lakes, with a focus on AWS Redshift for data warehousing and PySpark for the data lake architecture.
Responsibilities:
- Data Lake and Warehouse Architecture: Design, implement, and maintain data lakes and data warehouses, ensuring high availability, performance, and scalability.
- ETL Development: Lead the development of ETL pipelines for both batch processing and real-time streaming data integration to support the organization's data needs.
- Data Engineering Best Practices: Establish and enforce best practices for data engineering, data security, and data quality management.
- AWS Cloud Expertise: Leverage AWS tools and services (e.g., Redshift, S3, Lambda, DynamoDB) for data storage, data transformation, and efficient data movement.
- PySpark and Big Data Processing: Architect and optimize data processing workflows using PySpark to support the data lake architecture, ensuring efficient data ingestion, transformation, and storage (a minimal sketch follows this list).
- Cross-functional Collaboration: Work closely with data scientists, analysts, and stakeholders to understand data requirements and support business objectives.
- Team Leadership: Mentor junior engineers and foster a collaborative, high-performance engineering culture.
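To make the PySpark responsibility concrete, here is a minimal sketch of the kind of batch data-lake workflow described above. All bucket names, paths, and column names are hypothetical, and the s3:// URI scheme assumes an EMRFS-style setup:

```python
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = (
    SparkSession.builder
    .appName("daily-transactions-batch")  # hypothetical job name
    .getOrCreate()
)

# Ingest one day's raw JSON events from a hypothetical landing zone.
raw = spark.read.json("s3://example-landing-zone/transactions/2024-01-01/")

# Basic cleansing: deduplicate, normalize types, derive a partition
# column, and drop malformed records.
cleaned = (
    raw.dropDuplicates(["transaction_id"])
       .withColumn("amount", F.col("amount").cast("decimal(18,2)"))
       .withColumn("event_date", F.to_date("event_ts"))
       .filter(F.col("amount").isNotNull())
)

# Land the result in the curated zone as date-partitioned Parquet,
# a common queryable layout for a data lake.
(
    cleaned.write
           .mode("overwrite")
           .partitionBy("event_date")
           .parquet("s3://example-curated-zone/transactions/")
)
```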
Requirements:
- Experience: 5-7 years of experience in data engineering or a related field, with hands-on experience in data lake and data warehouse architecture.
Technical Skills:
- Proficiency with AWS services (Redshift, S3, Lambda, etc.).
- Experience with other AWS big data tools (e.g., EMR, Kinesis, Firehose, Glue).
- Strong experience with PySpark for data lake management and data transformation.
- Hands-on expertise in developing ETL pipelines for both batch and real-time processing.
- Experience managing data engineering tools such as Apache Superset, Apache Airflow, Apache Spark, JupyterHub, etc. (an orchestration sketch follows this list).
- Soft Skills: Excellent problem-solving skills, strong communication skills, and the ability to work in a fast-paced, agile environment.
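As a rough illustration of the Airflow experience this calls for, here is a minimal sketch of a daily batch ETL DAG. Task names, bodies, and the schedule are placeholders, and the `schedule` argument assumes Airflow 2.4+ (older versions use `schedule_interval`):

```python
from datetime import datetime

from airflow import DAG
from airflow.operators.python import PythonOperator


def extract():
    # Placeholder: pull a day's records from an upstream source into S3.
    ...


def load_to_redshift():
    # Placeholder: COPY the staged files from S3 into Redshift.
    ...


with DAG(
    dag_id="daily_batch_etl",  # hypothetical DAG name
    start_date=datetime(2024, 1, 1),
    schedule="@daily",
    catchup=False,
) as dag:
    extract_task = PythonOperator(task_id="extract", python_callable=extract)
    load_task = PythonOperator(
        task_id="load_to_redshift", python_callable=load_to_redshift
    )

    # The load runs only after extraction succeeds.
    extract_task >> load_task
```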
Functional Areas: Software/Testing/Networking