Data Engineer I - India
Resy
posted 11hr ago
You Lead the Way. We've Got Your Back.
At American Express, you'll be recognized for your contributions, leadership, and impact; every colleague has the opportunity to share in the company's success. Together, we'll win as a team, striving to uphold our company values and our powerful backing promise to provide the world's best customer experience every day. And we'll do it with the utmost integrity, in an environment where everyone is seen, heard, and feels like they belong.
Join Team Amex and let's lead the way together.
Responsibilities include, but are not limited to:
Designing, developing, and maintaining data pipelines.
Serving as a core member of an agile team that drives user story analysis and elaboration, and designs and develops responsive web applications using best engineering practices.
Working closely with data scientists, analysts, and other partners to ensure the seamless flow of data.
Building and optimizing reports for analytical and business purposes.
Monitoring and resolving data pipeline issues to ensure smooth operation.
Implementing data quality checks and validation processes to ensure the accuracy, completeness, and consistency of data.
Implementing data governance policies, access controls, and security measures to protect critical data and ensure compliance.
Developing a deep understanding of integrations with other systems and platforms within the supported domains.
Bringing a culture of innovation, ideas, and continuous improvement.
Challenging the status quo, demonstrating risk taking, and implementing creative ideas.
Managing your own time, and working well both independently and as part of a team.
Adopting emerging standards while promoting best practices and consistent framework usage.
Working with Product Owners to define requirements for new features and plan increments of work.
Minimum Qualifications
BS or MS degree in computer science, computer engineering, or another technical field, or equivalent, plus 3-4 years of work experience.
At least 5 years of hands-on experience with SQL, including schema design, query optimization, and performance tuning.
Experience with distributed computing frameworks such as Hadoop, Hive, and Spark for processing large-scale data sets.
Proficiency in a programming language such as Python or PySpark for building data pipelines and automation scripts.
Understanding of cloud computing and exposure to at least one cloud platform: GCP, AWS, or Azure.
Knowledge of CI/CD, Git commands, and deployment processes.
Strong analytical and problem-solving skills, with the ability to troubleshoot complex data issues and optimize data processing workflows.
Excellent communication and collaboration skills.
Employment Type: Full Time, Permanent