Big Data Engineer - Hadoop/Hive/Spark (8-15 yrs)
Luxoft
posted 17hr ago
Flexible timing
Key skills for the job
Please note: the candidate should have 7+ years of overall experience in Big Data, along with Hadoop, Hive, Spark, and Java/Scala.
He/she should be available to join within a 30-day notice period.
Experience: 8-15 years.
Location: Bangalore or Chennai.
Work Mode: Hybrid (2 days per week in the office and 3 days from home).
Skills Required: Big Data, along with Hadoop, Hive, and Spark.
Notice Period: Immediate to 30 days only (candidates with 45-90 day notice periods will not be considered).
Project Description:
- We have been engaged by a large UK-based investment bank in the Corporate, Commercial and Institutional Banking (CCIB) portfolio to work on projects providing analytics support for key initiatives across all segments and products.
- Ensure quality service output, delivered in a timely manner and communicated effectively.
- Continuously improve productivity to the standards prescribed from time to time.
- Manage data and information support for all key initiatives across the Retail Bank, supporting country and group teams.
- Work closely with Technology on solutions to resolve identified production issues impacting existing infrastructure/solutions, covering data quality, data assurance, refresh timeliness, and data security governance.
- Work closely with Business and Technology to review the integration of new functionality and technology implementation projects.
- Extend analytics support across the entire gamut of the business (customer acquisition, portfolio management, new product development).
- Manage analysis and reporting across all Retail products and segments.
- Interact with metric owners across functions to gain an end-to-end understanding of the metrics and their applicability across various MI dashboards.
- Provide development opportunities through regular engagement, ensuring that all staff are aware of the opportunities available to help them succeed in their roles.
- Create and maintain optimal data pipeline architecture; assemble large, complex data sets that meet functional and non-functional business requirements (a minimal pipeline sketch follows this list).
- Identify, design, and implement internal process improvements: automating manual processes, optimizing data delivery, re-designing infrastructure for greater scalability, etc.
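The pipeline-building responsibility above could look something like the following minimal Spark/Scala sketch. It is illustrative only and not part of the posting: the table names (retail.transactions, retail.daily_portfolio_summary) and column names are assumptions, and a real pipeline would add validation, scheduling, and error handling.

```scala
import org.apache.spark.sql.SparkSession
import org.apache.spark.sql.functions._

object DailyPortfolioSummary {
  def main(args: Array[String]): Unit = {
    // Hive-enabled session so spark.table/saveAsTable resolve against the metastore.
    val spark = SparkSession.builder()
      .appName("DailyPortfolioSummary")
      .enableHiveSupport()
      .getOrCreate()

    // Read raw transactions from a Hive table (table and column names are assumptions).
    val txns = spark.table("retail.transactions")

    // Aggregate spend and active customers per product segment per day.
    val summary = txns
      .groupBy(col("txn_date"), col("product_segment"))
      .agg(
        sum("amount").alias("total_amount"),
        countDistinct("customer_id").alias("active_customers")
      )

    // Persist the result back to Hive, partitioned by date for downstream reporting.
    summary.write
      .mode("overwrite")
      .partitionBy("txn_date")
      .saveAsTable("retail.daily_portfolio_summary")

    spark.stop()
  }
}
```

Submitted with spark-submit against a Hive-enabled cluster, a job of this shape would produce a date-partitioned summary table of the kind the MI dashboards mentioned above might consume.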
Mandatory Skills
- 7+ years of relevant technology experience.
- Hands-on experience with Hadoop and Big Data technologies.
- Hive, HBase, Kafka, Spark.
- Strong analytical skills related to working with unstructured datasets.
- Strong experience with SQL.
- Good experience building and managing databases and data warehouses.
- Java/Scala programming experience.
- Exposure to data pipelines, architectures and data sets.
- Experience performing root cause analysis on internal and external data and processes to answer specific business questions and identify opportunities for improvement.
- Experience building processes supporting data transformation, data structures, metadata, dependency, and workload management.
- Working knowledge of message queuing, stream processing, and highly scalable 'big data' data stores (a minimal streaming sketch follows this list).
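For the message queuing and stream processing skills listed above, a minimal Spark Structured Streaming sketch in Scala might look like the following. It is purely illustrative: the Kafka broker address and topic name are placeholders, it assumes the spark-sql-kafka connector is on the classpath, and the console sink stands in for whatever store (HBase, Hive, etc.) a production job would write to.

```scala
import org.apache.spark.sql.SparkSession
import org.apache.spark.sql.functions._

object TxnEventStream {
  def main(args: Array[String]): Unit = {
    val spark = SparkSession.builder()
      .appName("TxnEventStream")
      .getOrCreate()

    // Subscribe to a Kafka topic of raw events (broker and topic are placeholders).
    val events = spark.readStream
      .format("kafka")
      .option("kafka.bootstrap.servers", "broker1:9092")
      .option("subscribe", "txn-events")
      .load()

    // Count events per 5-minute window using the Kafka record timestamp,
    // with a watermark so late-arriving data is eventually dropped.
    val counts = events
      .withWatermark("timestamp", "10 minutes")
      .groupBy(window(col("timestamp"), "5 minutes"))
      .count()

    // Console sink for illustration; a production job would land to HBase/Hive.
    val query = counts.writeStream
      .outputMode("update")
      .format("console")
      .start()

    query.awaitTermination()
  }
}
```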
Functional Areas: Software/Testing/Networking