Apache Flink Developer
Clasticon Solutions
posted 2d ago
We are seeking an experienced and motivated Apache Flink Developer to join our dynamic data engineering team.
As an Apache Flink Developer, you will play a key role in building and optimizing real-time data processing pipelines using Apache Flink. You will work closely with data scientists, engineers, and other stakeholders to design and implement scalable, high-performance solutions for processing large-scale, real-time data streams.
Key Responsibilities:
• Design, develop, and maintain real-time data processing pipelines using Apache Flink.
• Collaborate with data engineers, data scientists, and other technical teams to understand data requirements and implement efficient processing workflows.
• Write high-quality, maintainable, and scalable code for streaming applications using Flink APIs.
• Implement advanced data transformations, aggregations, and enrichment techniques for real-time analytics (a brief illustrative sketch follows this list).
• Optimize Flink jobs for performance, scalability, and fault tolerance.
• Ensure data integrity, consistency, and high availability within streaming data pipelines.
• Develop and maintain monitoring and alerting systems to track the performance of data pipelines.
• Troubleshoot and resolve issues related to data pipeline performance, reliability, and accuracy.
• Stay current with the latest trends, tools, and techniques in the streaming and big data ecosystems.
• Contribute to the design of data architecture and data processing strategies to meet business needs.
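Much of the pipeline work described above is typically written against the Flink DataStream API. The sketch below shows a keyed, windowed aggregation of the kind these responsibilities imply; the SensorReading class, its fields, and the in-memory source are hypothetical placeholders for illustration only, not part of any actual Clasticon codebase.

```java
// Minimal illustrative sketch: keyed tumbling-window aggregation in the Flink DataStream API.
// SensorReading and its fields are hypothetical; a real job would read from a connector.
import org.apache.flink.api.common.eventtime.WatermarkStrategy;
import org.apache.flink.streaming.api.datastream.DataStream;
import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;
import org.apache.flink.streaming.api.windowing.assigners.TumblingEventTimeWindows;
import org.apache.flink.streaming.api.windowing.time.Time;

import java.time.Duration;

public class MaxTemperatureJob {
    public static void main(String[] args) throws Exception {
        StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();

        // In-memory source stands in for a real streaming source (e.g. a Kafka connector).
        DataStream<SensorReading> readings = env
                .fromElements(
                        new SensorReading("sensor-1", 21.5, 1_000L),
                        new SensorReading("sensor-1", 23.0, 2_000L),
                        new SensorReading("sensor-2", 19.2, 1_500L))
                .assignTimestampsAndWatermarks(
                        WatermarkStrategy.<SensorReading>forBoundedOutOfOrderness(Duration.ofSeconds(5))
                                .withTimestampAssigner((reading, ts) -> reading.timestampMillis));

        // Key by sensor id and keep the highest reading per sensor in each one-minute event-time window.
        readings
                .keyBy(reading -> reading.sensorId)
                .window(TumblingEventTimeWindows.of(Time.minutes(1)))
                .reduce((a, b) -> a.temperature >= b.temperature ? a : b)
                .print();

        env.execute("max-temperature-job");
    }

    /** Simple POJO used only for this sketch. */
    public static class SensorReading {
        public String sensorId;
        public double temperature;
        public long timestampMillis;

        public SensorReading() {}

        public SensorReading(String sensorId, double temperature, long timestampMillis) {
            this.sensorId = sensorId;
            this.temperature = temperature;
            this.timestampMillis = timestampMillis;
        }
    }
}
```

Event-time windows with a bounded-out-of-orderness watermark strategy, as sketched here, are the usual way to keep aggregations correct when records arrive late or out of order.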
Required Skills and Qualifications:
• Strong experience with Apache Flink and its ecosystem.
• Proficiency in Java or Scala, with experience in developing real-time data processing applications.
• Solid understanding of streaming data concepts, architectures, and patterns.
• Familiarity with distributed systems and messaging technologies such as Apache Kafka, Apache Pulsar, or RabbitMQ (see the Kafka source sketch after this list).
• Experience with data storage and retrieval systems (e.g., Hadoop, HBase, Cassandra, Elasticsearch).
• Expertise in working with SQL and NoSQL databases for data manipulation.
• Strong debugging, performance tuning, and troubleshooting skills in a distributed environment.
• Knowledge of containerization (Docker) and orchestration tools (Kubernetes) is a plus.
• Familiarity with CI/CD pipelines and automation practices.
• Experience working in agile development teams and using version control tools like Git.
• Strong analytical and problem-solving skills, with an ability to work in a fast-paced, dynamic environment.
• Excellent written and verbal communication skills.
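As a concrete illustration of the Kafka familiarity called out above, the sketch below wires Flink's KafkaSource connector into a job. The broker address, topic name, and consumer group are placeholder values, and the job simply prints records; a real pipeline would apply the transformations described under the responsibilities.

```java
// Hedged sketch: consuming a Kafka topic with Flink's KafkaSource connector.
// Broker, topic, and group id are placeholders, not values from this job description.
import org.apache.flink.api.common.eventtime.WatermarkStrategy;
import org.apache.flink.api.common.serialization.SimpleStringSchema;
import org.apache.flink.connector.kafka.source.KafkaSource;
import org.apache.flink.connector.kafka.source.enumerator.initializer.OffsetsInitializer;
import org.apache.flink.streaming.api.datastream.DataStream;
import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;

public class KafkaIngestJob {
    public static void main(String[] args) throws Exception {
        StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();

        KafkaSource<String> source = KafkaSource.<String>builder()
                .setBootstrapServers("localhost:9092")           // placeholder broker
                .setTopics("events")                             // placeholder topic
                .setGroupId("flink-ingest")                      // placeholder consumer group
                .setStartingOffsets(OffsetsInitializer.latest())
                .setValueOnlyDeserializer(new SimpleStringSchema())
                .build();

        DataStream<String> events = env.fromSource(
                source, WatermarkStrategy.noWatermarks(), "kafka-events");

        // Downstream transformations and enrichment would go here; print for the sketch.
        events.print();

        env.execute("kafka-ingest-job");
    }
}
```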
Preferred Qualifications:
• Experience with Cloud platforms (AWS, GCP, or Azure) for deploying and managing Flink applications.
• Understanding of data warehousing and ETL processes in big data environments.
• Familiarity with data governance and security best practices.
• Experience with machine learning models or libraries that integrate with streaming data.
Education and Experience:
• Bachelor's or Master's degree in Computer Science, Engineering, or a related field.
• 6-9 years of hands-on experience in developing and deploying Flink-based streaming applications or other big data technologies.
Employment Type: Full Time, Permanent
Experience: 6-9 Yrs
Salary: ₹12-16 L/yr
Location: Bangalore / Bengaluru