WinZO - Data Engineer - Apache Spark/Hadoop (3-7 yrs)
WinZO
posted 1mon ago
Flexible timing
As a Data Engineer at WinZO, you will create and manage BI and analytics solutions that turn data into knowledge. You will also be responsible for enhancing the business intelligence system to help us make better decisions. You will work in a fast-paced environment that requires you to take initiative with complete ownership, manage multiple projects, and drive execution with stakeholders.
We're looking for people with a hustler mindset: curious, eager to learn new things, passionate about innovation, and striving to be a little better every single day. This is not solely based on whether a candidate has previously done similar work. We're looking for someone dynamic, with the qualities below in generous quantities, to perform well in this role.
Responsibilities:
- Responsible for building and operating highly available, distributed systems of data extraction, ingestion, and processing of large data sets.
- Design and tune the performance and scalability of batch and real-time stream analytics and large-scale data processing systems.
- Research and recommend frameworks and architectural/code design patterns for large-scale data processing and identify areas of improvement within the existing code and processes.
- Design, build and deploy internal applications to support our product life cycle, data and business intelligence, among others.
- Optimize data delivery and re-design infrastructure for greater scalability.
- Build the infrastructure required for optimal extraction, transformation, and loading of data from a wide variety of data sources using SQL, Apache Spark, Hadoop, and AWS technologies.
- Build analytics tools that utilize the data pipeline to provide actionable insights into customer acquisition, operational efficiency, and other key business performance metrics.
- Work with internal and external stakeholders to assist with data-related technical issues and support data infrastructure needs.
Requirements:
- B.Tech/B.E. in Computer Science, IT, or a similar field.
- Experience working with ETL/CDC pipelines and data warehousing tools.
- Proficient with Spark internals and solid work experience in big data engineering.
- Proficient with Kafka internals.
- Proficient in a programming language, e.g., Java/Scala/Python.
- Good command of Hive/SQL.
- Good understanding of transactional and NoSQL databases.
- Experience working with big data and databases, especially Apache Spark and Hive/Hadoop.
- Experience in a cloud environment (AWS/GCP/Azure).
- Good to have proficiency with Linux and systems administration.
- Experience working with both structured and unstructured data paradigms.
- Experience troubleshooting data quality issues and analyzing data requirements.
- Understanding of event-driven architecture and REST APIs.
- Good to have a working knowledge of MongoDB/Apache Kafka/Unix shell scripting/Apache Cassandra.
Functional Areas: Software/Testing/Networking