Reversing the impact of climate change is one of the world's biggest challenges, and businesses have a responsibility to lead the way. While individual consumer choices are important, over 80% of the emissions reductions needed for the world to reach Net-Zero require business-level action. But despite the growing momentum and ambition of companies around the world setting Net-Zero goals, delivering on those ambitions remains a significant challenge: business leaders don't really know how they will get there, and the very first step, getting emissions measurement right, is hard.
Terrascope is a smart carbon management and accounting platform that empowers corporations to decarbonise their operations, portfolios, and supply chains in a trusted, confident, and secure manner. We are on a journey to build the digital tools and analytics, datasets and algorithms, and ecosystem of technical expertise and partnerships that companies need to optimise their climate strategy.
Terrascope is backed by one of the world's largest food and agri companies, a global leader in climate action and sustainability. With this significant strategic advantage and secure funding, the venture is uniquely positioned to drive profit with purpose: decarbonising supply chains while generating outsized financial returns.
We are seeking a Senior Data & Analytics Engineer to design and implement scalable data architectures for SaaS platforms in both single-tenant and multi-tenant environments. This role will focus on leveraging the AWS Data Engineering stack, Postgres, and advanced analytics processing techniques, including the creation of materialized views to enable high-volume data analytics. The ideal candidate is skilled in Python scripting and Java or Go, and proficient in handling large-scale data processing workflows. This role will report to the Director of Engineering & Tech and will be crucial in shaping the future of climate-tech SaaS products.
In this role you will:
Take a 100% hands-on role in building a robust, scalable data platform that supports seamless data onboarding and management for a SaaS platform.
Design scalable single-tenant and multi-tenant SaaS data platforms, optimize materialized views for analytics, and develop pipelines using the AWS Data Engineering stack.
Design and implement efficient data migration scripts and workflows to enable smooth data onboarding for new and existing clients.
Write clean, efficient code in Python, Java, or Go, and design robust data models using Postgres ORM frameworks.
Process large-scale datasets, optimize Postgres databases for high performance, and implement best practices for scaling analytics solutions.
Optimize Postgres indexes for query performance.
Create materialized views and analytics-ready datasets for headless BI.
Implement row-level security and design multi-tenant database architectures for scalability and security (a sketch of both techniques follows this list).
Develop pipelines and processes to integrate diverse data connectors into the SaaS platform while ensuring data integrity and consistency.
Enable data accessibility and transformation for data science teams by creating analytics-ready datasets and facilitating model integration.
Ensure the data platform and migration workflows are optimized for scalability, high performance, and low latency.
Work closely with product, engineering, and data science teams to align platform capabilities with analytics and machine learning requirements.
Manage and scale AWS infrastructure and automate workflows using the GitHub DevOps stack.
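To give a flavour of two of the responsibilities above, here is a minimal Python sketch of creating an analytics-ready materialized view and enabling row-level security in Postgres via psycopg2. The schema (an emissions fact table with tenant_id, recorded_at, and co2e_tonnes columns, and an app.tenant_id session setting) is a hypothetical placeholder, not Terrascope's actual data model.

```python
# Minimal sketch: materialized views + row-level security in Postgres,
# driven from Python with psycopg2. All table, column, and setting
# names are illustrative assumptions.
import psycopg2

conn = psycopg2.connect("dbname=saas user=app")  # connection details assumed
conn.autocommit = True  # REFRESH ... CONCURRENTLY cannot run inside a transaction
cur = conn.cursor()

# Analytics-ready materialized view: pre-aggregates the fact table so
# headless-BI queries read a small summary instead of raw rows.
cur.execute("""
    CREATE MATERIALIZED VIEW IF NOT EXISTS emissions_by_month AS
    SELECT tenant_id,
           date_trunc('month', recorded_at) AS month,
           sum(co2e_tonnes)                 AS total_co2e
    FROM emissions
    GROUP BY tenant_id, date_trunc('month', recorded_at)
""")

# A unique index is required before the view can be refreshed
# CONCURRENTLY, i.e. without blocking readers mid-refresh.
cur.execute("""
    CREATE UNIQUE INDEX IF NOT EXISTS emissions_by_month_key
    ON emissions_by_month (tenant_id, month)
""")
cur.execute("REFRESH MATERIALIZED VIEW CONCURRENTLY emissions_by_month")

# Row-level security: a session that sets app.tenant_id sees only its own
# tenant's rows (applies to roles without BYPASSRLS that don't own the table).
cur.execute("ALTER TABLE emissions ENABLE ROW LEVEL SECURITY")
cur.execute("""
    CREATE POLICY tenant_isolation ON emissions
    USING (tenant_id = current_setting('app.tenant_id')::uuid)
""")
cur.close()
conn.close()
```

In practice these statements would live in versioned DDL migrations rather than an ad-hoc script; note that CREATE POLICY has no IF NOT EXISTS clause, so re-runs need a guard.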
You should have:
Bachelor's degree in a STEM field.
5 to 8 years of experience as a Data and Analytics Engineer building data platforms for SaaS applications, large-scale data processing workflows, and advanced analytics.
Experience in database migration projects.
Experience building or migrating multi-tenant databases.
Deep competence in Python scripting for writing ETL pipelines, custom migration scripts, and automating AWS tasks.
Deep competence in Java/Go for building high-performance, scalable tools to handle complex migration needs.
Deep competence in data storage and management using AWS RDS (Postgres), S3, and DocumentDB.
Deep competence in Postgres database architecture and functionality, including indexes, partitioning, and query optimization.
Deep competence in materialized views, including their creation, refresh strategies, and use cases for analytics.
Advanced SQL skills to design complex queries that aggregate, filter, and transform data effectively for materialized views.
Deep competence in data processing using AWS Glue, Lambda, and Step Functions.
Experience with AWS Database Migration Service (DMS).
Deep competence in data processing and analytics using AWS Athena and AWS Redshift.
Deep competence in security and monitoring using AWS IAM, AWS CloudWatch and AWS CloudTrail.
Experience designing mappings between MongoDB's flexible schema and a Postgres relational schema (see the sketch after this list).
Experience in data enrichment and cleaning techniques.
Proven experience with scalable, large data sets and high-performance SaaS applications.
Strong ability to work with and optimize large-scale data systems.
You are a data engineer with a strong background in building scalable analytics solutions in startup environments.
You are passionate about creating efficient data processing systems and driving analytics innovation.
You are a problem solver with excellent programming skills and a focus on performance optimization.
You are a collaborative team player who enjoys mentoring peers and sharing best practices.
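As a concrete, deliberately simplified illustration of the MongoDB-to-Postgres mapping mentioned above, the sketch below flattens documents from a hypothetical suppliers collection into a relational table, keeping schemaless leftovers in a JSONB column. The library calls are standard pymongo/psycopg2; collection, table, and field names are all illustrative assumptions.

```python
# Minimal sketch of one MongoDB -> Postgres migration step.
from pymongo import MongoClient
from psycopg2 import connect
from psycopg2.extras import execute_values, Json

mongo = MongoClient("mongodb://localhost:27017")  # source connection assumed
pg = connect("dbname=saas user=app")              # target connection assumed

rows = []
for doc in mongo["source_db"]["suppliers"].find():
    # Map the flexible document shape onto fixed relational columns;
    # anything without a stable schema goes into a JSONB catch-all.
    rows.append((
        str(doc["_id"]),                        # Mongo ObjectId -> text key
        doc.get("name"),
        doc.get("address", {}).get("country"),  # flatten one nested field
        Json(doc.get("metadata", {})),          # schemaless remainder -> JSONB
    ))

with pg, pg.cursor() as cur:
    # Assumes a unique constraint on suppliers.mongo_id so re-runs are idempotent.
    execute_values(cur, """
        INSERT INTO suppliers (mongo_id, name, country, metadata)
        VALUES %s
        ON CONFLICT (mongo_id) DO NOTHING
    """, rows)
```

Batching with execute_values keeps round-trips low; for very large collections the same mapping would typically run through AWS DMS or a Glue job rather than a single script.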
Even better if you are:
Familiar with the Rust programming language.
An entrepreneurial problem solver comfortable in managing risk and ambiguity.
We're committed to creating an inclusive environment for our strong and diverse team. We value diversity and foster a community where everyone can be their authentic self.