HDFC Securities - Data Engineer - ETL/Data Warehousing (4-7 yrs)
HDFC Securities
About Us :
HDFC Securities is one of the leading stock broking companies in India and a subsidiary of HDFC Bank, a renowned private sector bank.
It has been serving a diverse customer base of retail and institutional investors since 2000. Headquartered in Mumbai, it offers an exhaustive product suite that helps customers invest in Equities, IPO/OFS, Buybacks, Mutual Funds, ETFs, Futures & Options (Equity, Currency, and Commodities), Fixed Deposits, Bonds, NCDs, and the National Pension Scheme, along with value-added services like online will writing and tax filing. The company offers a host of digital platforms, including a mobile trading app, a desktop-based online trading facility, ProTerminal (an advanced trading platform), and Arya (a voice-enabled investing assistant). It also offers a Call N Trade facility and dedicated Relationship Managers to assist customers. Since its inception, the company has established itself as a preferred trading platform (for NSE & BSE) with its integrated 3-in-1 account (Trading + Demat + Savings), backed by state-of-the-art technology. Over the years, the company has won many awards and recognitions. Currently, the company has 250+ branches across 190 cities, serving over 2.1 million customers.
We recently launched a discount broking platform called HDFC Sky, in addition to HDFC InvestRight, our existing full-service broking platform.
We are currently enhancing and scaling these platforms even further to continue to delight our valued customers.
About the Role :
We are looking for an experienced data engineer to join our growing team of data analytics experts. As a data engineer at HDFC Securities, you will be responsible for developing, maintaining, and optimizing our data warehouse, data pipelines, and data products. The data engineer will support multiple stakeholders, including software developers, database architects, data analysts, and data scientists, to ensure an optimal data delivery architecture.
The ideal candidate possesses strong technical abilities to solve complex problems with data, a willingness to learn new technologies and tools as needed, and the flexibility to support the data needs of multiple teams, stakeholders, and products.
Key Responsibilities :
- Design, build, and maintain batch or real-time data pipelines in production.
- Maintain and optimize the data infrastructure required for accurate extraction, transformation, and loading of data from a wide variety of data sources.
- Develop ETL (extract, transform, load) or ELT processes to extract and manipulate data from multiple sources (a minimal sketch follows this list).
- Automate data workflows such as data ingestion, aggregation, and ETL processing.
- Transform raw data in data warehouses into consumable datasets for both technical and non-technical stakeholders.
- Partner with data scientists and functional leaders in sales, marketing, and product to deploy machine learning models in production.
- Build, maintain, and deploy data products for analytics and data science teams on cloud platforms (e.g., AWS, Azure, GCP).
- Ensure data accuracy, integrity, privacy, security, and compliance through quality control procedures.
- Monitor data systems performance and implement optimization strategies.
- Leverage data controls to maintain data privacy, security, compliance, and quality for allocated areas of ownership.
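To give a concrete flavour of the pipeline work described above, here is a minimal, self-contained sketch of a batch ETL job in Python. The file name (raw_trades.csv), table name (trades), and cleaning rules are hypothetical, chosen purely for illustration; a production pipeline would read from real source systems and load into a cloud data warehouse rather than a local SQLite file.

import csv
import sqlite3

def extract(path):
    # Extract: read raw records from a hypothetical CSV export.
    with open(path, newline="") as f:
        return list(csv.DictReader(f))

def transform(rows):
    # Transform: drop incomplete rows and normalise the amount field.
    clean = []
    for row in rows:
        if row.get("customer_id") and row.get("amount"):
            clean.append((row["customer_id"], float(row["amount"])))
    return clean

def load(records, db_path="warehouse.db"):
    # Load: write the cleaned records into a staging table.
    con = sqlite3.connect(db_path)
    con.execute("CREATE TABLE IF NOT EXISTS trades (customer_id TEXT, amount REAL)")
    con.executemany("INSERT INTO trades VALUES (?, ?)", records)
    con.commit()
    con.close()

if __name__ == "__main__":
    load(transform(extract("raw_trades.csv")))

The same extract/transform/load split carries over directly when each step becomes a task in an orchestrator such as Airflow (see the example under Required Qualifications).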
Required Qualifications :
- BE/B.Tech/MCA from Tier 1/Tier 2 institutes.
- 4+ years of relevant experience in the data engineering field.
- Advanced SQL skills and experience with relational databases and database design.
- Experience working with cloud data warehouse solutions (e.g., Snowflake, Redshift, BigQuery, Azure Synapse).
- Experience working with data ingestion tools such as Fivetran or Stitch.
- Working knowledge of cloud-based platforms (e.g., AWS, Azure, GCP).
- Experience building and deploying machine learning models in production.
- Strong proficiency in object-oriented languages such as Python, Java, C++, or Scala.
- Strong proficiency in scripting languages like Bash.
- Strong proficiency in data pipeline and workflow management tools (e.g., Airflow, Azkaban); a minimal Airflow example follows this list.
- Strong project management and organizational skills.
- Excellent problem-solving and communication skills.
- Proven ability to work independently and with a team.
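As an illustration of the workflow management tooling mentioned above, below is a minimal Airflow DAG sketch that wires three ETL steps into a daily schedule. It assumes Airflow 2.4+ (where the schedule argument replaced schedule_interval); the DAG id and the empty task bodies are hypothetical placeholders.

from datetime import datetime

from airflow import DAG
from airflow.operators.python import PythonOperator

def extract():
    pass  # pull data from source systems (placeholder)

def transform():
    pass  # clean and reshape the extracted data (placeholder)

def load():
    pass  # write results to the warehouse (placeholder)

with DAG(
    dag_id="daily_trades_etl",  # hypothetical pipeline name
    start_date=datetime(2024, 1, 1),
    schedule="@daily",
    catchup=False,
) as dag:
    extract_task = PythonOperator(task_id="extract", python_callable=extract)
    transform_task = PythonOperator(task_id="transform", python_callable=transform)
    load_task = PythonOperator(task_id="load", python_callable=load)

    # Run the steps strictly in order, once per day.
    extract_task >> transform_task >> load_task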
What will make you stand out :
- Good understanding of NoSQL databases like Redis, Cassandra, MongoDB, or Neo4j.
- Experience working with large data sets and distributed computing frameworks (e.g., Hive, Hadoop, Spark, Presto, MapReduce); see the Spark sketch after this list.
- Awareness of the open-source data engineering ecosystem.
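For the distributed computing experience mentioned above, the sketch below shows a simple PySpark aggregation over a large dataset. The input path and the column names (customer_id, amount) are hypothetical; the same groupBy/agg pattern applies to any columnar source Spark can read.

from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("trade_summary").getOrCreate()

# Read a (hypothetical) large CSV dataset; Spark distributes the work
# across the cluster automatically.
trades = spark.read.csv("s3://example-bucket/raw_trades/", header=True, inferSchema=True)

# Aggregate per customer: total traded amount and trade count.
summary = (
    trades.groupBy("customer_id")
          .agg(
              F.sum("amount").alias("total_amount"),
              F.count("*").alias("trade_count"),
          )
)

# Write the result as Parquet for downstream analytics consumers.
summary.write.mode("overwrite").parquet("s3://example-bucket/trade_summary/")
spark.stop()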