Big Data Architect SME
Progriz Coe
posted 17d ago
Summary:
Our growing organization is seeking a skilled professional to serve as an internal resource for our specialized focus on Big Data/Hadoop development. We provide Fortune 500 clients throughout the United States with IT consultants across a wide range of technical domains.
To support our incoming nationwide and international hires, we will be hiring a Big Data Lead Developer/Architect for our Atlanta, GA (US) headquarters to coach and mentor our incoming classes of consultants. If you have a strong passion for the various Big Data platforms and are looking to join a seasoned team of IT professionals, this could be an advantageous next step.
This is a full-time internal position that requires you to be local to Mumbai.
Key Responsibilities:
The Big Data/Hadoop SME will take on the following responsibilities:
- Interviewing potential consultants to ensure that each onboarding employee will be successful in Big Data domains
- Designing, developing, and maintaining our best-in-class Big Data/Hadoop development training materials
- Training, guiding, and mentoring junior- to mid-level developers
- Preparing mock interview sessions to strengthen the company's learning program
- Acting as a primary resource for individuals working on a variety of projects throughout the US
- Interacting with our Executive and Sales teams to ensure that projects and employees are appropriately matched
- Prepping consultants for interviews for specific assignments involving development and implementation of Hadoop and other environments
The ideal candidate will not only possess solid knowledge of Big Data infrastructures but will also be fluent in the following areas, allowing for smooth interactions with team members ranging from entry to senior level:
- Hadoop development and implementation
- Strong object-oriented development skills on the Scala/Java platform
- Hands-on experience with big data technologies, including Scala or Spark, Hadoop, Hive, and HDFS
- Strong SQL skills and experience
- Designing, building, installing, configuring, and supporting Big Data clusters (Spark/Kafka)
- Translating complex functional and technical requirements into detailed designs; implementing ETL processes to integrate data from disparate sources
- Cloud Experience is a plus
Desired Qualifications Include:
Minimum Education:
Employment Type: Full Time, Permanent