Cloud Data Administrator - Spark (6-11 yrs)
Texplorers
Your Mission :
The BI Development Team is in the process of moving our BI landscape into the Azure Cloud. We are implementing Microsoft's best practices ("Enterprise-Scale for Analytics and AI", deployed in a "Data Landing Zone"). One main area of responsibility is administering our Spark clusters (PaaS).
You will be part of our Global BI Admin Team and will work on the following tasks:
- Set up MS Synapse workspaces, including Spark clusters, using Terraform
- Set up Synapse Linked Services and connections to third-party systems such as Kafka and Oracle
- Run upgrade projects with developer teams to move to the next Spark version (e.g. Spark 3.3 to 3.4)
- Perform resource management by understanding and optimizing CPU and memory usage
- Optimize resource usage of Spark applications created by developers
- Perform executor management by launching and running executors on worker nodes
- Monitor executor performance and scalability (e.g. during Google Data full loads)
- Detect and handle worker node failures by reallocating resources and restarting Spark jobs
- Monitor Spark cluster health, resource utilization, and job execution metrics
- Establish advanced monitoring using Grafana and Prometheus
- Perform performance tuning by fine-tuning Spark parameters for optimal performance (see the configuration sketch after this list)
- Work with developers to understand their Spark programs and give hints for improving their code
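The resource, executor, and tuning tasks above largely come down to Spark configuration work. Below is a minimal PySpark sketch of the kind of settings involved; the configuration keys are standard Apache Spark options, but the values and the application name are illustrative placeholders, and on Azure Synapse Spark pools several of these (node size, autoscale limits) are typically governed at the pool level rather than per session.

```python
from pyspark.sql import SparkSession

# Illustrative tuning sketch: standard Spark settings an administrator might
# adjust. Values are placeholders, not recommendations for a real workload.
spark = (
    SparkSession.builder
    .appName("bi-admin-tuning-sketch")
    # Executor sizing: memory and cores per executor
    .config("spark.executor.memory", "8g")
    .config("spark.executor.cores", "4")
    .config("spark.executor.memoryOverhead", "1g")
    # Dynamic allocation scales the number of executors with the workload
    .config("spark.dynamicAllocation.enabled", "true")
    .config("spark.dynamicAllocation.minExecutors", "2")
    .config("spark.dynamicAllocation.maxExecutors", "20")
    # Shuffle parallelism is a common lever for Spark SQL performance tuning
    .config("spark.sql.shuffle.partitions", "200")
    .getOrCreate()
)

# Inspect the effective configuration to verify what the cluster actually applied
for key, value in spark.sparkContext.getConf().getAll():
    if key.startswith("spark.executor") or key.startswith("spark.dynamicAllocation"):
        print(key, "=", value)
```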
Skills :
- Apache Spark : In-depth knowledge of Spark architecture, components (RDDs, DataFrames, Datasets), and Spark SQL.
- Cluster Management : Proficiency in setting up, configuring, and maintaining Spark clusters.
- Resource Management : Understanding of YARN for resource allocation.
- Performance Tuning : Ability to optimize Spark jobs, memory usage, and parallelism.
- Monitoring Tools : Familiarity with monitoring tools like Grafana and Prometheus (a metrics-polling sketch follows this list).
- Hadoop Ecosystem : Understanding of HDFS, Hive, and other related technologies.
- Scripting Languages : Proficiency in Python, PySpark, and Spark SQL.
- Security : Knowledge of securing Spark clusters and data.
- Problem-Solving : Ability to troubleshoot issues and resolve them efficiently.
- Communication : Clear communication with developers, data engineers, and stakeholders.
- Collaboration : Working well in cross-functional teams.
- Adaptability : Keeping up with evolving technologies and best practices.
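As an illustration of the monitoring side of the role, the sketch below polls executor metrics through Spark's built-in REST API, the same kind of data that Grafana/Prometheus dashboards are usually built on. The endpoint URL, the absence of authentication, and the function name are simplifying assumptions; on Synapse the Spark UI / history server address and auth will differ.

```python
import requests

# Assumed local driver UI address; on Synapse the Spark UI / history server
# URL and authentication differ from this simplified example.
SPARK_UI = "http://localhost:4040"

def print_executor_summary() -> None:
    """Poll the Spark UI REST API and print a one-line summary per executor."""
    apps = requests.get(f"{SPARK_UI}/api/v1/applications", timeout=10).json()
    for app in apps:
        executors = requests.get(
            f"{SPARK_UI}/api/v1/applications/{app['id']}/executors", timeout=10
        ).json()
        for ex in executors:
            max_memory = ex.get("maxMemory") or 1  # avoid division by zero
            print(
                f"app={app['id']} executor={ex['id']} "
                f"activeTasks={ex.get('activeTasks', 0)} "
                f"failedTasks={ex.get('failedTasks', 0)} "
                f"memoryUsed={ex.get('memoryUsed', 0) / max_memory:.0%}"
            )

if __name__ == "__main__":
    print_executor_summary()
```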
Functional Areas: Other