As a Lead Data Engineer, you will serve as the technical anchor for the engineering team, responsible for designing and developing scalable, high-performance data solutions. You will own and drive the data architecture that supports both functional and non-functional business needs, ensuring reliability, efficiency, and scalability.
Your expertise in big data technologies, distributed systems, and cloud platforms will help shape the engineering roadmap and best practices for data processing, analytics, and real-time data serving. You will play a key role in architecting and optimizing data pipelines using Hadoop, Spark, Scala/Java, and cloud technologies to support enterprise-wide data initiatives.
Additionally, experience with API development for serving low-latency data, as well as with Customer Data Platforms (CDPs), will be a strong plus.
Key Responsibilities:
- Architect and build scalable, high-performance data pipelines and distributed data processing solutions using Hadoop, Spark, Scala/Java, and cloud platforms (AWS/GCP/Azure).
- Design and implement real-time and batch data processing solutions, ensuring data is efficiently processed and made available for analytical and operational use.
- Develop APIs and data services to expose low-latency, high-throughput data for downstream applications, enabling real-time decision-making.
- Optimize and enhance data models, workflows, and processing frameworks to improve performance, scalability, and cost-efficiency.
- Drive data governance, security, and compliance best practices.
- Collaborate with data scientists, product teams, and business stakeholders to understand requirements and deliver data-driven solutions.
- Lead the design, implementation, and lifecycle management of data services and solutions.
- Stay up to date with emerging technologies and drive adoption of best practices in big data engineering, cloud computing, and API development.
- Provide technical leadership and mentorship to engineering teams, promoting best practices in data engineering and API design.
About You:
- 7+ years of experience in data engineering, software development, or distributed systems.
- Expertise in big data technologies such as Hadoop, Spark, and distributed processing frameworks.
- Strong programming skills in Scala and/or Java (Python is a plus).
- Experience with cloud platforms (AWS, GCP, or Azure) and their data ecosystems (e.g., S3, BigQuery, Databricks, EMR, Snowflake).
- Proficiency in API development using REST, GraphQL, or gRPC to serve real-time and batch data.
- Experience with real-time and streaming data architectures (Kafka, Flink, Kinesis, etc.).
- Strong knowledge of data modeling, ETL pipeline design, and performance optimization.
- Understanding of data governance, security, and compliance in large-scale data environments.
- Experience with Customer Data Platforms (CDPs) or customer-centric data processing is a strong plus.
- Strong problem-solving skills and the ability to work in complex, unstructured environments.
- Excellent communication and collaboration skills, with experience working in cross-functional teams.
Why Join Us:
- Work with cutting-edge big data, API, and cloud technologies in a fast-paced, collaborative environment.
- Influence and shape the future of data architecture and real-time data services at Target.
- Solve high-impact business problems using scalable, low-latency data solutions.
- Be part of a culture that values innovation, learning, and growth.
Employment Type: Full Time, Permanent