Our Data team is responsible for designing pipelines across multiple sources and warehouses that allow us to derive business insight. The team uses Azure services and open-source technologies such as Azure Data Factory and PySpark. As our Data Analyst / Engineer, you will optimize our data integration at scale.
Responsibilities:
As a Data Analyst / Engineer, you will be responsible for the continuous operation and optimization of the data management platform.
Build data pipelines and Python-based ETL tools for acquiring, processing, and delivering data
Develop database schemas in our data warehouse that enable performance analysis
Handle the challenges that come with managing terabytes of data
Develop the server applications and APIs that are used by our Data Team
Collaborate with the team to build and deliver reports using BI tools to monitor and understand company performance
Basic Qualifications:
Bachelor's degree or higher in Computer Science, Computer Engineering, Information Technology, or a related field
Fluent in programming languages such as SQL, Python, or Scala
2 to 3 years of experience building production ETL pipelines for data processing and analysis
Expert in designing SQL tables, choosing indexes, tuning queries, and optimizing performance across different functional environments
2 to 3 years of hands-on experience writing complex SQL queries and using a BI tool
Experience with data lakes and with designing and maintaining data solutions using Spark and Azure serverless services such as ADF
Experience with data ingestion APIs, data sharing technologies, and warehouse infrastructure and development
Good verbal and written communication skills and adherence to best practices in project documentation