Grade H
Responsible for supporting the delivery of business analysis and consulting processes and procedures for the defined specialism using sound technical capabilities, building and maintaining effective working relationships, ensuring relevant standards are defined and maintained, and supporting the delivery of process and system improvements. Specialisms: Business Analysis; Data Management and Data Science; Digital Innovation.
Entity:
Technology
IT&S Group
Position Overview:
We are seeking a Senior Data Engineer with extensive AWS expertise to build and optimize data solutions on AWS. This role requires hands-on experience with AWS data services and a track record of delivering sprint-based commitments in an agile environment.
Key Responsibilities:
Design and implement data pipelines using AWS services (EMR, Glue, Redshift, S3, Lambda)
Lead sprint work for data migration and optimization initiatives
Build and maintain ETL workflows using Python, Spark, and AWS native tools
Optimize AWS costs through efficient resource utilization and architecture improvements
Implement automated testing and monitoring for data pipelines
Lead data platform migrations and legacy system deprecation
Participate in sprint planning and daily stand-ups
Deliver sprint commitments consistently and independently
Work as part of a cross-disciplinary team, collaborating closely with other data engineers, software engineers, data scientists, data managers and business partners.
Architect, design, implement and maintain reliable and scalable data infrastructure to move, process and serve data.
Write, deploy and maintain software to build, integrate, manage and quality-assure data at bp.
Required Qualifications:
Deep expertise in AWS data services (EMR, Glue, Redshift, Lake Formation).
Strong experience with Python and the AWS SDK.
Proven track record of sprint-based deliveries in agile teams.
Expert-level knowledge of AWS cost optimization techniques.
Experience with Bazel build system and CI/CD pipelines.
Strong SQL skills and data modeling experience.
AWS certifications (Solutions Architect, Data Analytics) are a plus.
Good to Have:
Experience with Apache Iceberg table format and its optimization techniques.
Strong PySpark expertise, including:
Performance optimization for large-scale data processing.
Custom transformations and UDFs.
Window functions and complex aggregations.
Delta Lake integration.
Experience with data lake optimization techniques.
Knowledge of table format evolution and schema management.
Travel Requirement:
Negligible travel is expected with this role
Relocation Assistance:
This role is eligible for relocation within country
Remote Type:
This position is a hybrid of office/remote working
Skills:
Commercial Acumen, Communication, Data Analysis, Data Cleansing and Transformation, Data Domain Knowledge, Data Integration, Data Management, Data Manipulation, Data Sourcing, Data Strategy and Governance, Data Structures and Algorithms, Data Visualization and Interpretation, Digital Security, Extract, Transform and Load (ETL), Group Problem Solving
We are an equal opportunity employer and value diversity at our company. We do not discriminate on the basis of race, religion, color, national origin, sex, gender, gender expression, sexual orientation, age, marital status, socioeconomic status, neurodiversity/neurocognitive functioning, veteran status or disability status. Individuals with an accessibility need may request an adjustment/accommodation related to bp's recruiting process (e.g., accessing the job application, completing required assessments, participating in telephone screenings or interviews, etc.). If you would like to request an adjustment/accommodation related to the recruitment process, please contact us.