We are seeking a highly motivated and skilled Big Data Developer to join our growing team. In this role, you will be responsible for designing, developing, and maintaining big data solutions using cutting-edge technologies. You will work with large datasets to extract valuable insights, build data pipelines, and develop scalable and high-performance data processing applications.
Key Responsibilities:
Design, develop, and deploy big data solutions using technologies such as Hadoop, Spark, Hive, HBase, Kafka, and other related tools.
Develop and maintain data pipelines for data ingestion, transformation, and loading (ETL/ELT).
Perform data analysis and extract meaningful insights from large datasets.
Build and optimize data models for data warehousing and data lakes.
Develop and implement data quality checks and data governance policies.
Collaborate with data scientists, data analysts, and business stakeholders to understand data requirements and translate them into technical solutions.
Stay abreast of the latest advancements in big data technologies and best practices.
Troubleshoot and resolve big data system issues.
Required Skills and Experience:
Bachelor's degree in Computer Science, Computer Engineering, or a related field.
[Number] years of professional experience in big data development.
Strong proficiency in Java, Python, or Scala.
Experience with big data technologies such as Hadoop, Spark, Hive, HBase, and Kafka.
Experience with SQL and NoSQL databases.
Experience with cloud platforms such as AWS, Azure, or GCP.
Strong analytical and problem-solving skills.
Excellent communication and interpersonal skills.
Ability to work independently and as part of a team.
Desired Skills and Experience:
Experience with machine learning and deep learning frameworks (e.g., TensorFlow, PyTorch).
Experience with data visualization tools (e.g., Tableau, Power BI).
Experience with data streaming technologies (e.g., Flink, Spark Streaming).
Experience with containerization technologies (e.g., Docker, Kubernetes).