StaidLogic - Senior Data Engineer - SQL/Python (8-10 yrs)
StaidLogic Software
posted 8d ago
Flexible timing
Role: Senior Data Engineer
Experience: 8-10 Years
Role Description:
This is a contract Senior Data Engineer role located in Pune, India.
The Senior Data Engineer will be responsible for Data Engineering, Data Modeling, Extract Transform Load (ETL), Data Warehousing, and Data Analytics on a day-to-day basis.
We're looking for a Big Data Lead Engineer to:
- Engineer reliable data pipelines for sourcing, processing, distributing, and storing data in different ways, making effective use of the cloud (Azure) data platform infrastructure (a minimal pipeline sketch follows this list).
- Transform data into valuable insights that inform business decisions, making use of our internal data platforms and applying appropriate analytical techniques.
- Develop, train, and apply data engineering techniques to automate manual processes, and solve challenging business problems.
- Ensure the quality, security, reliability, and compliance of our solutions by applying our digital principles and implementing both functional and non-functional requirements.
- Build observability into our solutions, monitor production health, help to resolve incidents, and remediate the root cause of risks and issues.
- Understand, represent, and advocate for client needs.
- Codify best practices and methodologies, and share knowledge with other engineers at UBS.
- Shape the data and distribution architecture and technology stack within our new cloud-based data lakehouse.
- Be a hands-on contributor and senior lead in the big data and data lake space, able to collaborate on and influence architectural and design principles across batch and real-time flows.
- Bring a continuous-improvement mindset, always on the lookout for ways to automate and reduce time to market for deliveries.
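By way of illustration, here is a minimal PySpark sketch of the kind of pipeline described above: raw events landed in ADLS Gen2 (bronze) are cleaned and persisted as a partitioned Delta table (silver). The storage account, container paths, and column names are hypothetical placeholders, not part of the role description.

```python
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("bronze-to-silver").getOrCreate()

# Bronze: raw trade events landed as JSON in ADLS Gen2 (hypothetical account/path).
bronze = spark.read.json("abfss://bronze@examplelake.dfs.core.windows.net/trades/")

# Silver: deduplicate, enforce types, and drop records failing basic quality checks.
silver = (
    bronze.dropDuplicates(["trade_id"])
    .withColumn("trade_ts", F.to_timestamp("trade_ts"))
    .where(F.col("trade_id").isNotNull() & F.col("amount").isNotNull())
)

# Persist as Delta, partitioned by trade date for efficient downstream queries.
(
    silver.withColumn("trade_date", F.to_date("trade_ts"))
    .write.format("delta")
    .mode("append")
    .partitionBy("trade_date")
    .save("abfss://silver@examplelake.dfs.core.windows.net/trades/")
)
```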
Your Expertise:
- Experience building data processing pipelines using various ETL/ELT design patterns and methodologies on the Azure data platform, building solutions with ADLS Gen2, Azure Data Factory, Databricks, Python, and PySpark.
- Experience with at least one of the following technologies: Scala/Java or Python.
- Deep understanding of the software development craft, with a focus on cloud-based (Azure), event-driven solutions and architectures, in particular Apache Spark batch and streaming and data lakehouses built on the medallion architecture.
- Knowledge of Data Mesh principles is an added plus.
- Ability to debug using tools such as the Ganglia UI, and expertise in optimizing Spark jobs (an optimization sketch follows this list).
- The ability to work across structured, semi-structured, and unstructured data, extracting information and identifying linkages across disparate datasets.
- Expertise in creating data structures optimized for storage and various query patterns, e.g. Parquet and Delta Lake.
- Experience with traditional data warehousing concepts (Kimball methodology, star schema, SCD) and ETL tools (Azure Data Factory, Informatica); an SCD Type 2 sketch follows this list.
- Experience in data modelling with at least one database technology, such as:
- Traditional RDBMS (MS SQL Server, Oracle, PostgreSQL).
- NoSQL (MongoDB, Cassandra, Neo4J, CosmosDB, Gremlin).
- Understanding of Information Security principles to ensure compliant handling and management of data.
- Ability to clearly communicate complex solutions.
- Strong problem solving and analytical skills.
- Working experience with Agile methodologies (Scrum).
- A proven team player with strong leadership skills, who can work in a collaborative way across business units, teams and regions.
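On the Spark optimization point above, a small sketch of two common techniques: broadcasting a small dimension table to avoid a shuffle, and filtering on a partition column so Spark can prune partitions. Table paths and column names are hypothetical.

```python
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("spark-optimization-sketch").getOrCreate()

trades = spark.read.format("delta").load("/mnt/silver/trades")      # large fact table
accounts = spark.read.format("delta").load("/mnt/silver/accounts")  # small dimension

# Broadcast the small dimension so the join avoids shuffling the large fact table.
enriched = trades.join(F.broadcast(accounts), "account_id")

# Filter on the partition column; the predicate is pushed down so Spark prunes
# partitions of the fact table at read time.
recent = enriched.where(F.col("trade_date") >= "2024-01-01")

recent.write.format("delta").mode("overwrite").save("/mnt/gold/recent_trades")
```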
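And for the SCD point, a condensed sketch of a Type 2 update on a Delta dimension table using the delta-spark MERGE API. The table paths, keys, and tracked attribute are hypothetical, and a production pipeline would also anti-join the updates against the dimension to avoid re-inserting unchanged rows.

```python
from delta.tables import DeltaTable
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("scd2-sketch").getOrCreate()

dim = DeltaTable.forPath(spark, "/mnt/gold/dim_customer")           # existing dimension
updates = spark.read.format("delta").load("/mnt/silver/customers")  # latest snapshot

# Step 1: close out current rows whose tracked attribute changed.
(
    dim.alias("d")
    .merge(updates.alias("u"),
           "d.customer_id = u.customer_id AND d.is_current = true")
    .whenMatchedUpdate(
        condition="d.address <> u.address",
        set={"is_current": "false", "end_date": "current_date()"},
    )
    .execute()
)

# Step 2: append new current-row versions (simplified; see the note above about
# filtering out unchanged customers before the append).
new_rows = (
    updates.withColumn("is_current", F.lit(True))
    .withColumn("start_date", F.current_date())
    .withColumn("end_date", F.lit(None).cast("date"))
)
new_rows.write.format("delta").mode("append").save("/mnt/gold/dim_customer")
```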
Qualifications:
- Data Engineering and Data Modeling skills.
- Experience with Extract Transform Load (ETL) processes.
- Experience with cloud platforms like AWS or Azure.
- 9+ years of experience in a similar role.
Functional Areas: Software/Testing/Networking