The Data Engineer at Think Right Technologies will be responsible for building and deploying scalable, data-driven solutions. Data Engineers work with both SQL and NoSQL technologies and collaborate with other product development teams to identify and implement the appropriate data repositories based on strategic business needs.
Responsibilities for Data Engineer:
Data Pipeline Development:
Assist in designing, developing, and maintaining data pipelines to ensure efficient data ingestion, transformation, and storage.
Collaborate with senior data engineers to implement ETL (Extract, Transform, Load) processes and workflows.
Database Management:
Support the management and optimization of relational and NoSQL databases.
Assist in database schema design, indexing, and performance tuning.
Data Quality and Integrity:
Monitor and validate data to ensure accuracy and consistency across systems.
Help identify and resolve data quality issues and implement data cleansing techniques.
Data Integration:
Assist in integrating data from various sources, including internal and external systems, APIs, and third-party data providers.
Support data migration and synchronization tasks as needed.
Collaboration and Communication:
Work closely with data analysts, data scientists, and other stakeholders to understand data requirements and deliver solutions.
Participate in team meetings, providing updates on progress and contributing to project discussions.
Documentation and Reporting:
Document data engineering processes, workflows, and systems to ensure clarity and maintainability.
Prepare reports and presentations to communicate data insights and project status to the team and management.
Qualifications and Working Experience:
Bachelor's degree in Computer Science, Information Technology, Data Science, or a related field.
6-8 years of experience in data engineering, database management, or a related role.
Technical Skills:
Proficiency in SQL and experience with relational databases (e.g., MySQL, PostgreSQL, SQL Server).
Familiarity with NoSQL databases (e.g., MongoDB, Cassandra) is a plus.
Knowledge of programming languages such as Python, Java, or Scala is a must.
Understanding of data warehousing concepts and ETL processes.
Familiarity with data integration tools and platforms (e.g., Apache NiFi, Talend) is a plus.
Exposure to or experience with AWS Glue and/or Azure Data Factory is a plus.
Experience with APIs and big data technologies is desirable.
ML Basics and exposure is desirable
Required Qualities:
Strong analytical and problem-solving skills.
Ability to work effectively both independently and as part of a team.
Excellent communication and interpersonal skills.
Attention to detail and a commitment to delivering high-quality work.