Falcon - Data Engineer - Java/Python (2-5 yrs)
Falcon India
Flexible timing
Company Introduction :
Who are we?
Falcon is a Series-A funded, cloud-native, AI-first banking technology and processing platform that helps banks, NBFCs, and PPIs quickly and affordably launch next-gen financial products such as credit cards, credit lines on UPI, prepaid cards, fixed deposits, and loans.
Since our 2022 launch, we've processed USD 1 Bn+ in transactions, signed on 12 of India's top financial institutions, and clocked USD 15 Mn+ in revenue. The company is backed by marquee investors from around the world, including heavyweight investors from Japan and the USA, as well as leading Indian ventures and banks.
Experience level : Intermediate (5-7 years)
Key Responsibilities :
- Design, develop, and support scalable ETL processes using open-source tools and data frameworks such as AWS Glue, AWS Athena, Amazon Redshift, Apache Kafka, Apache Spark, Apache Airflow, and Pentaho Data Integration (PDI).
- Design, create, and maintain data lakes and data warehouses on the AWS cloud.
- Maintain and optimise our data pipeline architecture, and formulate complex SQL queries for big data processing.
- Collaborate with product and engineering teams to design and develop a platform for data modelling and machine learning operations.
- Implement various data structures and algorithms to ensure we meet both functional and non-functional requirements.
- Maintain data privacy and compliance according to industry standards.
- Develop processes for monitoring and alerting on data quality issues.
- Continually evaluate new open source technologies and stay updated with the latest data engineering trends.
Key Qualifications :
- Bachelor's or Master's degree in Computer Science, or an MCA, from a reputed institute.
- Minimum of 4 years' experience in a data engineering role.
- Experience using Python, Java, or Scala for data processing (Python preferred)
- Demonstrable, deep understanding of SQL and analytical data warehouses.
- Solid experience with popular databases such as PostgreSQL, MySQL, and MongoDB.
- Knowledge of AWS technologies like Lambda, Athena, Glue, and Redshift.
- Hands-on experience implementing ETL (or ELT) best practices at scale.
- Hands-on experience with data pipeline tools (Airflow, Luigi, Azkaban, dbt)
- Experience with version control tools like Git.
- Familiarity with Linux-based systems and cloud services, preferably in environments like AWS.
- Strong analytical skills and ability to work in an agile and collaborative team environment.
Preferred Skills :
- Certification in any open source big data technologies.
- Expertise in open source big data technologies like Apache Hadoop, Apache Hive, and others.
- Familiarity with data visualisation tools like Apache Superset, Grafana, Tableau, etc.
- Experience in CI/CD processes and containerization technologies like Docker or Kubernetes.
Other specifics :
Location : Gurgaon
Job Type : Full Time
Functional Areas: Software/Testing/Networking