You want to impact the industries that run our world: Your efforts will have real-world impact, helping to keep the lights on, get food into grocery stores, reduce emissions, and, most importantly, ensure workers return home safely.
You are the architect of your own career: If you put in the work, this role won't be your last at Samsara. We set our employees up for success and have built a culture that encourages rapid career development and countless opportunities to experiment and master your craft in a hyper-growth environment.
You're energized by our opportunity: Our vision to digitize large sectors of the global economy requires your full focus and best efforts to bring forth creative, ambitious ideas for our customers.
You want to be with the best: At Samsara, we win together, celebrate together and support each other. You will be surrounded by a high-caliber team that will encourage you to do your best.
In this role, you will:
Develop and maintain end-to-end (E2E) data pipelines and backend ingestion, and participate in building Samsara's Data Platform to enable advanced automation and analytics.
Work with data from a variety of sources, including but not limited to CRM, product, marketing, order-flow, and support-ticket-volume data.
Manage critical data pipelines to enable our growth initiatives and advanced analytics.
Facilitate data integration and transformation requirements for moving data between applications, ensuring that applications interoperate with the data layers and the data lake.
Develop and improve the current data architecture, along with data quality, monitoring, observability, and data availability.
Write data transformations in SQL and Python to generate data products consumed by customer systems and by the Analytics, Marketing Operations, and Sales Operations teams (a minimal sketch follows this list).
Champion, role model, and embed Samsara's cultural principles (Focus on Customer Success, Build for the Long Term, Adopt a Growth Mindset, Be Inclusive, Win as a Team) as we scale globally and across new offices.
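To make the transformation work above concrete, here is a minimal, illustrative Python sketch of the kind of SQL/Python data product described in this list. All table names, columns, and sample values are hypothetical placeholders, not Samsara's actual schema or tooling, and the final warehouse load is stubbed out with a print.

import pandas as pd

# Hypothetical source extracts, e.g. pulled from a CRM and an order system by an ETL tool.
crm_accounts = pd.DataFrame(
    {"account_id": [1, 2, 3], "segment": ["Enterprise", "Mid-Market", "SMB"]}
)
orders = pd.DataFrame(
    {"order_id": [10, 11, 12], "account_id": [1, 1, 3], "amount_usd": [5000.0, 1200.0, 300.0]}
)

# Transform: aggregate order volume per account and enrich it with CRM attributes,
# producing a small "data product" that downstream Analytics/Ops teams could consume.
order_summary = (
    orders.groupby("account_id", as_index=False)["amount_usd"].sum()
    .rename(columns={"amount_usd": "total_order_amount_usd"})
)
account_revenue = crm_accounts.merge(order_summary, on="account_id", how="left").fillna(
    {"total_order_amount_usd": 0.0}
)

# Load: in practice this would be written to a warehouse table (for example via dbt or a
# warehouse connector); printing stands in for that step here.
print(account_revenue)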
Minimum requirements for the role:
A Bachelor's degree in computer science, data engineering, data science, information technology, or an equivalent engineering program.
5+ years of work experience as a data engineer, including 3+ years of experience designing, developing, testing, and maintaining E2E data pipelines.
Experience with modern cloud-based data-lake and data-warehousing technology stacks, and familiarity with typical data-engineering tools, ETL/ELT, and data-warehousing processes and best practices.
Experience with the following:
Languages: Python, SQL.
ETL tools: Exposure to Fivetran, DBT, or equivalent.
API: Exposure to Python-based API frameworks for data pipelines (see the sketch after this list).
RDBMS: MySQL, AWS RDS/Aurora MySQL, PostgreSQL, Oracle, MS SQL Server, or equivalent.
Cloud: AWS, Azure and/or GCP.
Data warehouse: Databricks, Google BigQuery, AWS Redshift, Snowflake, or equivalent.
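As a rough illustration of the "Python-based API frameworks for data pipelines" bullet above, the sketch below pulls records from a paginated REST endpoint and stages them as newline-delimited JSON, ready for a warehouse load. The endpoint, parameters, file path, and response shape are invented placeholders for illustration only.

import json
import requests  # common HTTP client; any equivalent library would do

def fetch_paginated(base_url: str, page_size: int = 100) -> list[dict]:
    """Collect all records from a hypothetical paginated JSON API (each page returns a list)."""
    records, page = [], 1
    while True:
        resp = requests.get(base_url, params={"page": page, "per_page": page_size}, timeout=30)
        resp.raise_for_status()
        batch = resp.json()
        if not batch:
            break
        records.extend(batch)
        page += 1
    return records

def stage_as_ndjson(records: list[dict], path: str) -> None:
    """Write records as newline-delimited JSON, a common warehouse-friendly staging format."""
    with open(path, "w", encoding="utf-8") as f:
        for rec in records:
            f.write(json.dumps(rec) + "\n")

if __name__ == "__main__":
    # Placeholder endpoint; in a real pipeline this would be a CRM, support, or marketing API.
    data = fetch_paginated("https://api.example.com/v1/support_tickets")
    stage_as_ndjson(data, "support_tickets.ndjson")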
An ideal candidate has:
Comfort working with business customers to gather requirements and gain a deep understanding of varied datasets.
A self-starter mentality: motivated, responsible, innovative, and technology-driven, performing well both solo and as a team member.
A proactive problem-solving approach, with the communication and project-management skills to relay findings and solutions to technical and non-technical audiences.
ETL and orchestration experience:
Fivetran, Alteryx or equivalent.
DBT or equivalent.
Logging and Monitoring: One or more of Splunk, DataDog, AWS CloudWatch, or equivalent.
AWS Serverless: AWS API Gateway, Lambda, S3, SNS, SQS, Secrets Manager.