Smartedge IT Services
Big Data Engineer - Dynatrace (6-10 yrs)
Job Title : Big Data Engineer (Dynatrace)
Location : Chennai, Bangalore, Hyderabad
Experience : 6-10 Years
Required skills : Big Data, Dynatrace
Job Description :
We are looking for an experienced Big Data Engineer (Dynatrace) with a strong background in building scalable data pipelines and optimizing performance through Dynatrace monitoring. This role requires a seasoned engineer to design, implement, and maintain robust data systems and to use Dynatrace for real-time monitoring and troubleshooting of large-scale data environments.
As a Big Data Engineer, you will be part of an innovative team responsible for handling high volumes of data, optimizing performance, and ensuring high availability across multiple systems.
Key Responsibilities :
Design & Implement Data Pipelines : Lead the design, development, and optimization of complex data pipelines that support real-time and batch data processing (a minimal pipeline sketch follows this list).
Big Data Architecture : Architect and implement scalable and fault-tolerant data systems using technologies like Hadoop, Spark, Kafka, Hive, HBase, and others.
Dynatrace Monitoring : Leverage Dynatrace to monitor the performance of data infrastructure, applications, and systems. Identify bottlenecks and work on performance improvements.
Performance Optimization : Use Dynatrace's advanced monitoring and analytics to optimize big data workloads, improve latency, and ensure data pipeline reliability.
Data Integration : Design and implement solutions to integrate heterogeneous data sources, ensuring that data flows seamlessly through the big data ecosystem.
Collaboration & Leadership : Work closely with cross-functional teams (DevOps, Data Scientists, etc.) to enhance data engineering practices, share knowledge, and optimize data-related processes.
Troubleshooting & Incident Management : Use Dynatrace and other monitoring tools to proactively identify issues and work on resolving them, ensuring high system uptime and performance.
Automation : Implement automation scripts for monitoring, reporting, and alerting related to big data infrastructure and performance metrics (a minimal Dynatrace polling sketch follows this list).
Documentation : Maintain detailed documentation for data architectures, monitoring setups, performance metrics, and troubleshooting steps to ensure smooth knowledge transfer and operational continuity.
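For illustration only (this sketch is not part of the original posting) : a minimal PySpark Structured Streaming job of the kind the pipeline responsibilities above describe, consuming JSON events from Kafka and writing fault-tolerant hourly aggregates. The broker address, topic name, event schema, and output paths are hypothetical placeholders, and the job assumes the spark-sql-kafka connector is on the classpath.

# Minimal sketch: Kafka -> hourly aggregates -> Parquet (placeholders throughout).
from pyspark.sql import SparkSession
from pyspark.sql import functions as F
from pyspark.sql.types import StructType, StructField, StringType, DoubleType, TimestampType

spark = SparkSession.builder.appName("events-pipeline-sketch").getOrCreate()

# Hypothetical schema for the incoming JSON events.
schema = StructType([
    StructField("event_id", StringType()),
    StructField("value", DoubleType()),
    StructField("ts", TimestampType()),
])

raw = (spark.readStream
       .format("kafka")
       .option("kafka.bootstrap.servers", "broker:9092")  # placeholder broker
       .option("subscribe", "events")                     # placeholder topic
       .load())

# Kafka delivers bytes; cast the payload to string and parse the JSON.
parsed = (raw.selectExpr("CAST(value AS STRING) AS json")
          .select(F.from_json("json", schema).alias("e"))
          .select("e.*"))

# The watermark bounds late data so windowed state can be dropped safely.
hourly = (parsed
          .withWatermark("ts", "1 hour")
          .groupBy(F.window("ts", "1 hour"))
          .agg(F.avg("value").alias("avg_value")))

query = (hourly.writeStream
         .outputMode("append")
         .format("parquet")
         .option("path", "/data/hourly_avg")               # placeholder output path
         .option("checkpointLocation", "/chk/hourly_avg")  # enables fault-tolerant restarts
         .start())

query.awaitTermination()

A batch variant of the same pipeline would swap readStream/writeStream for read/write; the checkpoint location is what gives the streaming job fault tolerance across restarts.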
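Likewise, a minimal monitoring-and-alerting sketch against the Dynatrace Metrics API v2. The /api/v2/metrics/query endpoint, the Api-Token header, and the builtin:host.cpu.usage metric follow Dynatrace's public documentation; the environment URL, token variable, and alert threshold are placeholder assumptions.

# Minimal sketch: poll Dynatrace for host CPU usage and flag hosts over a threshold.
import os
import requests

BASE = "https://YOUR_ENV.live.dynatrace.com"  # placeholder environment URL
TOKEN = os.environ["DT_API_TOKEN"]            # token needs the metrics.read scope
THRESHOLD = 85.0                              # example alert threshold (%)

resp = requests.get(
    f"{BASE}/api/v2/metrics/query",
    headers={"Authorization": f"Api-Token {TOKEN}"},
    params={
        "metricSelector": "builtin:host.cpu.usage",
        "from": "now-30m",  # relative timeframe: last 30 minutes
    },
    timeout=30,
)
resp.raise_for_status()

for series in resp.json()["result"][0]["data"]:
    host = series["dimensions"][0]
    # The API returns None for empty buckets; drop them before comparing.
    values = [v for v in series["values"] if v is not None]
    if values and max(values) > THRESHOLD:
        print(f"ALERT: {host} CPU peaked at {max(values):.1f}% in the last 30m")

In practice a check like this would feed a scheduler, pager, or incident-management tool rather than print to stdout.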
Required Skills and Experience :
Experience : 6+ years of experience in Big Data Engineering or Data Engineering with a focus on building and maintaining large-scale data systems.
Dynatrace Expertise : Extensive experience using Dynatrace for infrastructure monitoring, performance optimization, and troubleshooting in a big data environment.
Big Data Technologies : Strong knowledge of Hadoop, Spark, Kafka, HBase, Hive, and other big data processing and storage tools.
Programming Skills : Proficiency in Java, Scala, Python, and SQL, along with experience developing efficient data processing pipelines.
Cloud Platforms : Expertise in working with cloud platforms such as AWS, GCP, or Azure for big data storage and processing.
Performance Optimization : Proven ability to analyze and optimize system performance using Dynatrace and other monitoring tools.
Collaboration : Strong collaboration skills, with experience working in agile and cross-functional teams.
Troubleshooting : Excellent problem-solving skills, with the ability to resolve issues in real time and mitigate risks before they impact production systems.
Communication Skills : Strong written and verbal communication skills to convey complex technical concepts to both technical and non-technical stakeholders.
Preferred Qualifications :
- Experience with containerized environments using Kubernetes and Docker.
- Familiarity with CI/CD pipelines for automating big data deployments.
- Knowledge of NoSQL databases (e.g., MongoDB, Cassandra).
Functional Areas: Software/Testing/Networking