Big Data Engineer - Python/PySpark (6-10 yrs)
Scoop Technologies
Flexible timing
Job Role : Big Data Engineer
Work Location : Bangalore (CV Raman Nagar)
Experience : 7+ Years
Notice Period : Immediate - 30 days
Mandatory Skills : Big Data, Python, SQL, Spark/Pyspark, AWS Cloud
JD and Required Skills & Responsibilities :
- Actively participate in all phases of the software development lifecycle, including requirements gathering, functional and technical design, development, testing, roll-out, and support.
- Solve complex business problems by utilizing a disciplined development methodology.
- Produce scalable, flexible, efficient, and supportable solutions using appropriate technologies.
- Analyse source and target system data and map the transformations required to meet the requirements.
- Interact with the client and onsite coordinators during different phases of a project.
- Design and implement product features in collaboration with business and technology stakeholders.
- Anticipate, identify, and solve issues concerning data management to improve data quality.
- Clean, prepare, and optimize data at scale for ingestion and consumption (a minimal PySpark sketch follows this list).
- Support the implementation of new data management projects and re-structure the current data architecture.
- Implement automated workflows and routines using workflow scheduling tools.
- Understand and use continuous integration, test-driven development, and production deployment frameworks.
- Participate in design, code, test plans, and dataset implementation performed by other data engineers in support of maintaining data engineering standards.
- Analyze and profile data for the purpose of designing scalable solutions.
- Troubleshoot straightforward data issues and perform root cause analysis to proactively resolve product issues.
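For illustration, a minimal PySpark sketch of the data cleaning and preparation work described above. The bucket paths, column names, and application name are hypothetical placeholders, not details from this posting:

from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("clean_orders").getOrCreate()

# Read raw source data (hypothetical S3 path)
raw = spark.read.json("s3://example-raw-bucket/orders/")

# Basic cleaning: drop records missing the key, deduplicate, normalize types
cleaned = (
    raw.dropna(subset=["order_id"])
       .dropDuplicates(["order_id"])
       .withColumn("order_ts", F.to_timestamp("order_ts"))
       .withColumn("order_date", F.to_date("order_ts"))
       .withColumn("amount", F.col("amount").cast("double"))
)

# Write partitioned Parquet for efficient downstream consumption (hypothetical path)
(cleaned.write
        .mode("overwrite")
        .partitionBy("order_date")
        .parquet("s3://example-curated-bucket/orders/"))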
Required Skills :
- 5+ years of relevant experience developing data and analytics solutions.
- Experience building data lake solutions leveraging one or more of the following: AWS, EMR, S3, Hive & PySpark.
- Experience with relational SQL.
- Experience with scripting languages such as Python.
- Experience with source control tools such as GitHub and related development processes.
- Experience with workflow scheduling tools such as Airflow (see the Airflow sketch after this list).
- In-depth knowledge of AWS Cloud (S3, EMR, Databricks).
- Has a passion for data solutions.
- Has a strong problem-solving and analytical mindset.
- Working experience in the design, development, and testing of data pipelines.
- Experience working with Agile Teams.
- Able to influence and communicate effectively, both verbally and in writing, with team members and business stakeholders.
- Able to quickly pick up new programming languages, technologies, and frameworks.
- Bachelor's degree in Computer Science.
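For illustration, a minimal Airflow 2.x DAG sketch of the workflow scheduling mentioned above. The DAG id, schedule, and submitted script are hypothetical placeholders:

from datetime import datetime
from airflow import DAG
from airflow.operators.bash import BashOperator

with DAG(
    dag_id="daily_orders_pipeline",
    start_date=datetime(2024, 1, 1),
    schedule_interval="@daily",
    catchup=False,
) as dag:
    # Submit the (hypothetical) PySpark cleaning job sketched earlier
    run_spark_job = BashOperator(
        task_id="run_spark_clean",
        bash_command="spark-submit clean_orders.py",
    )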
Functional Areas: Software/Testing/Networking