Key Responsibilities
- Demonstrate strong technical capabilities and knowledge of designing, building, and maintaining data architecture for large-volume data solutions
- Design solutions for data aggregation, improve foundational data procedures, integrate new data management technologies and software into the existing system, and build data collection pipelines
- Guide the team on programming, databases, data warehousing, and big data applications
- Lead components of large-scale client engagements, or entire smaller engagements, while consistently delivering quality client service
- Monitor progress, manage risk, and effectively communicate with key stakeholders regarding status, issues and key priorities to achieve expected outcomes.
- Understand business and technical requirements and provide subject matter expertise
- Conduct data discovery activities, perform root cause analysis, and recommend remediation of data quality issues
- Write effective, scalable Python code
- Provide product-level and design-level technical best practices
- Work with key stakeholders to define the delivery model based on resources, infrastructure landscape and SME availability
- Support the growth of the data migration / data warehouse & data integration practice through internal initiatives and identifying new business opportunities
- Coach and develop junior colleagues across the data team
Mandatory Experience
- BE/BTech/MCA/MTech with adequate industry experience
- 3-9 years of experience with big data technologies and Python programming
- Exposure to the Markit EDM or Eagle platforms
- Strong experience in big data analytics, with sound knowledge of Python and Java
- At least 2 completed full-lifecycle data analytics projects
- Establish scalable, efficient, automated processes for large-scale data analysis and management
- Discover, design, and develop analytical methods to support novel approaches to data and information processing
- Prepare and analyse historical data to identify patterns
- Prior experience and expertise in the following domains:
- Building or improving data transformation processes, data visualisations and applications
- Exposure to Kafka, Hadoop and data processing engines such as Spark or Hadoop MapReduce
- Big data querying tools such as Pig, Hive, or Impala
- Solution design covering data ingestion, data cleansing, ETL, data mart creation, and data exposure
- Must have experience working on Hadoop clusters and with big data technologies such as Spark and Python, along with hands-on experience writing Java programs for big data analytics
- Excellent analytical (including problem-solving), technical, and team management skills
- Provide technical support for program management
Desired Experience
- Experience in Banking and Capital Markets or Wealth and Asset Management
- Familiarity with data processes, lineage, metadata, and regulatory reporting requirements
- Experience with cloud platforms such as AWS and Azure
- Experience using Agile methodologies
- Good to have broader programming experience (e.g. application development or scripting languages such as Perl, Visual Basic, VBScript, or Unix shell scripts)
- Experience with at least one NoSQL data store: HBase, Cassandra, or MongoDB
- Prior client-facing experience
Employment Type: Full Time, Permanent