Continuously discuss the Cost of Change (a proxy for code quality) with your team members.
Write unit tests, integration tests, and API tests.
Support the application 24/7 based on team on-call rotations.
Write clean code with a focus on loose coupling, separation of concerns, and best practices.
Spend 90% of your time writing code, emphasizing test-driven development (TDD). Dedicate the remaining 10% to learning and improving the existing application architecture.
Stay open to learning and adapting to new technology architectures and patterns.
Possess knowledge of distributed architectures, particularly with Akka, Akka Cluster, and Akka Persistence, alongside experience using Spark with Scala.
Have some hands-on experience with building and creating CI/CD pipelines.
Conduct code reviews and participate in design discussions.
Analyze the impact of changes on data and implement event sourcing and CQRS patterns.
Have a strong understanding of functional, reactive, and parallel programming.
Troubleshoot and solve complex problems in production.
Collaborate and coordinate with different stakeholders, including product, data science, and account managers.
Diagnose AWS infrastructure issues related to the application.
Implement best practices for 24/7 application monitoring, orchestration, and performance optimization.
Follow Agile principles, participate in grooming and planning sessions, and effectively translate business requirements to Agile stories.
Practice DevOps and SecOps for continuous incremental delivery and quality products with the guidance of senior engineers.
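The event sourcing and CQRS responsibilities above can be sketched in plain Scala, without Akka: state is never mutated directly, only derived by folding a journal of events. The `BankAccount`-style aggregate, its events, and the `replay` helper below are hypothetical illustrations, not part of any specific codebase mentioned here.

```scala
// Minimal event-sourcing sketch: events are facts, state is a fold over them.
sealed trait AccountEvent
final case class Deposited(amount: BigDecimal) extends AccountEvent
final case class Withdrawn(amount: BigDecimal) extends AccountEvent

final case class AccountState(balance: BigDecimal) {
  // The single place where state transitions happen.
  def applyEvent(e: AccountEvent): AccountState = e match {
    case Deposited(a) => copy(balance = balance + a)
    case Withdrawn(a) => copy(balance = balance - a)
  }
}

object AccountState {
  val empty: AccountState = AccountState(BigDecimal(0))

  // Replaying the journal rebuilds current state; a CQRS read side would
  // consume the same events to maintain separate query-optimized views.
  def replay(events: Seq[AccountEvent]): AccountState =
    events.foldLeft(empty)(_ applyEvent _)
}
```

In Akka Persistence the same shape appears as a persistent actor's event handler; the fold above is the framework-free core of the pattern.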
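As a small illustration of the functional and parallel programming mentioned above, here is a sketch using the standard library's `Future`: independent units of work run concurrently and their results are combined in order. `slowSquare` is a stand-in for any slow, independent task (an I/O call, a computation).

```scala
import scala.concurrent.{Await, Future}
import scala.concurrent.duration._
import scala.concurrent.ExecutionContext.Implicits.global

object ParallelSum {
  // Stand-in for a slow, independent unit of work.
  def slowSquare(n: Int): Int = { Thread.sleep(10); n * n }

  def squaresInParallel(xs: Seq[Int]): Seq[Int] = {
    // Futures start eagerly, so all tasks run concurrently on the
    // global execution context.
    val fs = xs.map(x => Future(slowSquare(x)))
    // Future.sequence preserves input order; Await is used here only
    // to keep the sketch synchronous at the edge.
    Await.result(Future.sequence(fs), 5.seconds)
  }
}
```

In production code the `Await` would typically be replaced by composing further with `map`/`flatMap`, keeping the pipeline fully asynchronous.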
Key Skills Required
Bachelor's or Master's degree in Computer Science or a related discipline, or equivalent work experience.
4-8 years of experience with Scala; experience upgrading, maintaining, and performance-tuning large Scala applications is required.
4+ years of advanced experience with Scala frameworks such as Akka/Pekko and Akka Cluster; a deep understanding of Akka Persistence, Akka Projection, and Akka Serialization is essential.
4+ years of advanced experience with Java and relational databases is essential.
2+ years of experience with AWS services (RDS, S3) is required.
2+ years of experience with Apache Spark. Familiarity with Spark SQL and a basic understanding of performance tuning large Spark applications would be beneficial.
2+ years of experience using monitoring and alert orchestration tools such as Prometheus, Grafana, OpsGenie/PagerDuty is essential.
2+ years of experience building CI/CD pipelines in GitLab for applications running on Kubernetes (EKS) using Docker is required.
2+ years of experience in developing microservices applications and familiarity with protocols such as HTTP and gRPC is essential.
Proficient in debugging and performance-tuning large-scale Java and Big Data applications, using tools such as VisualVM, JProfiler, and remote debugging techniques.
Fluent in English, both spoken and written, with a large vocabulary (C1 English level).
Understand and implement core object-oriented and functional programming principles. Apply good coding practices with thorough unit and integration testing, emphasizing TDD.
Commitment to following best practices for security, scalability, and performance.
Excellent problem-solving skills and the ability to troubleshoot complex technical issues in production environments.
Strong communication skills for effective collaboration with cross-functional teams, stakeholders, and third-party vendors.
Continuous improvement mindset to identify opportunities for automation, optimization, and efficiency gains in infrastructure and deployment processes.
Ability to document processes, procedures, and technical architectures for knowledge sharing and future reference.
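The TDD emphasis above can be sketched in miniature: the assertion is written first (red), then the implementation is written to make it pass (green), then refactored. `slugify` is a hypothetical helper invented for this illustration.

```scala
// TDD sketch: the test `slugify("Hello, World!") == "hello-world"` was
// written first; this implementation exists to make it pass.
object Slugify {
  def slugify(title: String): String =
    title.trim.toLowerCase
      .replaceAll("[^a-z0-9]+", "-") // collapse non-alphanumeric runs to "-"
      .stripPrefix("-")
      .stripSuffix("-")
}
```

In a real project the assertion would live in a ScalaTest or MUnit suite rather than a bare `assert`, but the red-green-refactor loop is the same.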
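A GitLab pipeline of the kind required above (build a Docker image, deploy to Kubernetes/EKS) can be sketched as a minimal `.gitlab-ci.yml`. `CI_REGISTRY_IMAGE` and `CI_COMMIT_SHORT_SHA` are GitLab's predefined variables; the images, tags, and the `my-app` deployment name are illustrative placeholders, and a real pipeline would pin exact versions and configure cluster credentials.

```yaml
stages: [test, build, deploy]

test:
  stage: test
  image: sbtscala/scala-sbt:latest   # illustrative; pin a specific tag in practice
  script:
    - sbt test

build:
  stage: build
  image: docker:24
  services: [docker:24-dind]
  script:
    - docker build -t "$CI_REGISTRY_IMAGE:$CI_COMMIT_SHORT_SHA" .
    - docker push "$CI_REGISTRY_IMAGE:$CI_COMMIT_SHORT_SHA"

deploy:
  stage: deploy
  image: bitnami/kubectl:latest      # illustrative kubectl image
  script:
    - kubectl set image deployment/my-app app="$CI_REGISTRY_IMAGE:$CI_COMMIT_SHORT_SHA"
  environment: production
```

Keeping the image tag tied to the commit SHA makes every deployment traceable back to the pipeline that produced it.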