As a Sr. Data Engineer at Micron Technology Inc., you will be a key member of a multi-functional team responsible for developing and growing Micron's methods and systems for extracting new insight from our rapidly growing data streams. You will collaborate with data scientists, engineers, technicians, and analytics teams to design and implement systems that extract data from Micron's business systems, refine it into an actionable format, and, as needed, craft multifaceted presentation layers for use by senior engineers and managers throughout the company. You will craft new solutions, as well as support, configure, and improve existing ones.
Responsibilities and Tasks
Understand the Business Problem and the Relevant Data
Maintain an intimate understanding of company and department strategy.
Translate analysis requirements into data requirements.
Identify and understand the data sources that are relevant to the business problem.
Develop conceptual models that capture the relationships within the data.
Define the data-quality objectives for the solution.
Serve as a domain expert on data sources and reporting options.
Architect Data Management Systems
Use your understanding of the business problem and the nature of the data to select the appropriate data management system (Big Data, OLTP, OLAP, etc.).
Design and implement optimal data structures in the appropriate data management system on GCP to satisfy the data requirements.
Plan methods for archiving/deletion of information.
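The archiving/deletion responsibility above can be illustrated with a minimal sketch of an age-based retention policy applied to local files. The retention window, paths, and file-based storage model are all hypothetical assumptions for illustration; a production solution would target the actual data management system's lifecycle features.

```python
import time
from pathlib import Path

RETENTION_DAYS = 90  # hypothetical retention window


def purge_expired(root: str, retention_days: int = RETENTION_DAYS) -> list:
    """Delete files older than the retention window; return the paths removed."""
    cutoff = time.time() - retention_days * 86_400  # seconds in a day
    removed = []
    for path in Path(root).rglob("*"):
        # Compare each file's last-modified time against the cutoff.
        if path.is_file() and path.stat().st_mtime < cutoff:
            path.unlink()
            removed.append(str(path))
    return removed
```

In practice, archiving to colder storage before deletion (rather than deleting outright) is often the safer policy.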
Develop, Automate, and Orchestrate an Ecosystem of ETL Processes for Varying Volumes of Data
Identify and select the optimal method of access for each data source (real-time/streaming, delayed, static).
Resolve transformation requirements and develop processes to bring structured and unstructured data from the source to a new physical data model.
Develop processes to efficiently load the transformed data into the data management system.
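The extract-transform-load flow described in this section can be sketched, in simplified form, with standard-library Python. The source schema, table name, and SQLite target below are hypothetical stand-ins for the real data sources and data management system:

```python
import csv
import sqlite3


def extract(path):
    """Extract: stream raw rows from a delimited source file."""
    with open(path, newline="") as f:
        yield from csv.DictReader(f)


def transform(rows):
    """Transform: cast types and normalize rows into the target physical model."""
    for row in rows:
        yield (
            row["id"].strip(),
            row["region"].strip().upper(),  # normalize categorical values
            float(row["amount"]),           # enforce numeric type
        )


def load(rows, conn):
    """Load: bulk-insert the transformed rows; return the row count."""
    conn.execute(
        "CREATE TABLE IF NOT EXISTS sales (id TEXT, region TEXT, amount REAL)"
    )
    cur = conn.executemany("INSERT INTO sales VALUES (?, ?, ?)", rows)
    conn.commit()
    return cur.rowcount
```

Because each stage is a generator feeding the next, rows stream through the pipeline without being held in memory all at once, which is the same shape a PySpark or BigQuery job would take at larger scale.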
Prepare Data to Meet Analysis Requirements
Work with the data scientist to implement strategies for cleaning and preparing data for analysis (e.g., outliers, missing data, etc.).
Develop and code data extracts.
Follow standard methodologies to ensure data quality and data integrity.
Ensure that the data is fit to use for data science applications.
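As a small illustration of the cleaning-and-preparation work described above (the z-score rule and threshold are illustrative choices, not a prescribed method):

```python
from statistics import mean, stdev


def clean_series(values, z_max=3.0):
    """Drop missing values, then remove points whose z-score exceeds z_max."""
    present = [v for v in values if v is not None]  # handle missing data
    if len(present) < 2:
        return present
    mu, sigma = mean(present), stdev(present)
    if sigma == 0:
        return present  # constant series: nothing to filter
    return [v for v in present if abs(v - mu) / sigma <= z_max]
```

With small samples, a single extreme point inflates the standard deviation, so a stricter threshold or a robust statistic (such as the median absolute deviation) may be needed in practice; the right choice is made jointly with the data scientist.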
Qualifications and Experience:
5-8 years of proven track record developing, delivering, and/or supporting data engineering, advanced analytics, or business intelligence solutions.
Ability to work with multiple operating systems (Windows, Unix, Linux, etc.).
Strong experience developing ETL/ELT processes using Python, PySpark, and SQL.
Significant experience with big data processing and/or developing applications and data sources using GCP services such as GCS, BigQuery, Dataproc, Pub/Sub, Cloud Functions, Bigtable, and Cloud Composer/Cloud Workflows.
Knowledge of cloud data warehousing systems such as Snowflake is a plus.
Understanding of how distributed systems work.
Familiarity with software architecture (data structures, data schemas, etc.).
Strong working knowledge of databases (Oracle, MSSQL, etc.) including SQL and NoSQL.
Strong mathematics background and strong analytical, problem-solving, and interpersonal skills.
Strong communication skills (written, verbal, and presentation).
Experience working in a global, multi-functional environment.
Minimum 4 years of proven experience with at least one high-level, object-oriented language (e.g., Java, Python, Scala). Experience with one or more data extraction tools (e.g., NiFi) is a plus.