I applied via LinkedIn and was interviewed in Jun 2024. There were 2 interview rounds.
Snowflake has limitations such as maximum table size, maximum number of columns, and maximum number of concurrent queries.
Snowflake has a maximum table size of 16TB for all tables, including temporary and transient tables.
There is a limit of 1600 columns per table in Snowflake.
Snowflake has a maximum of 10,000 concurrent queries per account.
There are also limits on the number of objects (databases, schemas, tables) that an account can contain.
Use SQL functions like SUBSTRING and CHARINDEX to split a staged row of data into separate columns (see the sketch after this list)
Use SUBSTRING function to extract specific parts of the row
Use CHARINDEX function to find the position of a specific character in the row
Use CASE statements to create separate columns based on conditions
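As a sketch of this approach, assuming a hypothetical staging table RAW_STAGE whose single column RAW_LINE holds comma-delimited values such as 'John,Smith,42' (Snowflake syntax; SUBSTRING, CHARINDEX, and TRY_TO_NUMBER are built-ins):

SELECT
  -- text before the first comma
  SUBSTRING(raw_line, 1, CHARINDEX(',', raw_line) - 1) AS first_name,
  -- text between the first and second commas
  SUBSTRING(raw_line,
            CHARINDEX(',', raw_line) + 1,
            CHARINDEX(',', raw_line, CHARINDEX(',', raw_line) + 1) - CHARINDEX(',', raw_line) - 1) AS last_name,
  -- everything after the second comma
  SUBSTRING(raw_line, CHARINDEX(',', raw_line, CHARINDEX(',', raw_line) + 1) + 1) AS age,
  -- CASE derives a new column from a condition on the parsed value
  CASE WHEN TRY_TO_NUMBER(SUBSTRING(raw_line, CHARINDEX(',', raw_line, CHARINDEX(',', raw_line) + 1) + 1)) >= 18
       THEN 'adult' ELSE 'minor'
  END AS age_group
FROM raw_stage;

In Snowflake specifically, SPLIT_PART(raw_line, ',', 1) achieves the same extraction with far less nesting.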
Monitor overnight data load job in Snowflake
Set up alerts and notifications for job completion or failure
Check job logs for any errors or issues (see the sketch after this list)
Monitor resource usage during the data load process
Use Snowflake's query history to track job progress
Implement automated retries in case of failures
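As one concrete sketch, assuming the overnight job loads a hypothetical table SALES_FACT via COPY INTO, Snowflake's COPY_HISTORY table function reports each load's status and first error:

-- Loads attempted against the target table in the last 12 hours
SELECT file_name, status, row_count, first_error_message
FROM TABLE(
  INFORMATION_SCHEMA.COPY_HISTORY(
    TABLE_NAME => 'SALES_FACT',
    START_TIME => DATEADD(hour, -12, CURRENT_TIMESTAMP())
  )
);

Pairing a check like this with a scheduled task or an external scheduler's alerting covers the notification-on-failure point above.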
OOPs, DSA, SQL, and networking.
I applied via Approached by Company and was interviewed in Jul 2024. There were 2 interview rounds.
I applied via Approached by Company and was interviewed in May 2024. There was 1 interview round.
Joins in SQL are used to combine rows from two or more tables based on a related column between them.
Joins are used to retrieve data from multiple tables based on a related column between them
Types of joins include INNER JOIN, LEFT JOIN, RIGHT JOIN, and FULL JOIN (INNER and LEFT are contrasted in the sketch below)
Example: SELECT * FROM table1 INNER JOIN table2 ON table1.column = table2.column
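To make the contrast concrete, a sketch with hypothetical employees and departments tables joined on dept_id:

-- INNER JOIN: only employees whose dept_id matches a department
SELECT e.name, d.dept_name
FROM employees e
INNER JOIN departments d ON e.dept_id = d.dept_id;

-- LEFT JOIN: all employees; dept_name is NULL where no department matches
SELECT e.name, d.dept_name
FROM employees e
LEFT JOIN departments d ON e.dept_id = d.dept_id;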
I applied via LinkedIn and was interviewed in Jul 2024. There was 1 interview round.
Python DSA, LeetCode-style questions; prepare for those.
I have a strong background in data engineering with experience in various industries.
Bachelor's degree in Computer Science with a focus on data engineering
Worked as a Data Engineer at XYZ Company, where I developed and maintained data pipelines
Implemented data quality checks and automated data validation processes
Collaborated with cross-functional teams to design and implement scalable data solutions
Experience with clo...
Facebook is a leading social media platform with vast user base and cutting-edge technology.
Facebook has over 2.8 billion monthly active users, providing a massive data source for analysis and engineering.
The company has a strong focus on innovation and constantly develops new technologies and tools.
Facebook's data infrastructure is highly advanced, allowing for complex data processing and analysis.
Working at Facebook ...
Spark is a distributed computing framework used for big data processing.
Spark is an open-source project under Apache Software Foundation.
It can process data in real-time and batch mode.
Spark provides APIs for programming in Java, Scala, Python, and R.
It can be used for various big data processing tasks like machine learning, graph processing, and SQL queries (see the Spark SQL sketch after this list).
Spark uses in-memory processing for faster data processing.
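As a small illustration of the SQL side, this sketch is runnable in the spark-sql shell and assumes a hypothetical events.csv file with a header row:

-- Register the CSV file as a temporary view, then query it with plain SQL
CREATE TEMPORARY VIEW events
USING csv
OPTIONS (path 'events.csv', header 'true', inferSchema 'true');

-- Count events per user, most active first
SELECT user_id, COUNT(*) AS event_count
FROM events
GROUP BY user_id
ORDER BY event_count DESC;

The same SQL can also be issued from the Java, Scala, Python, and R APIs.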
Coding questions on SQL: given 2 tables, join them and find the results after the join.
I applied via Company Website and was interviewed before Mar 2023. There were 3 interview rounds.
1. Basic SQL
2. Python-based questions
3. Data modelling
4. Spark
5. Cloud-based questions
SQL, Python, data modeling, and project-based questions
I applied via Approached by Company and was interviewed in Aug 2024. There were 4 interview rounds.
The interviews mainly revolve around the following skills:
1. Advanced SQL
2. Python (DSA)
I would have used indexing, query optimization, and data partitioning to optimize the system.
Implement indexing on frequently queried columns to improve search performance.
Optimize queries by using proper joins, filters, and aggregations.
Partition large tables to distribute data across multiple storage devices for faster access.
Use materialized views to precompute and store aggregated data for quicker retrieval (see the sketch after this list)
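A sketch of these three techniques in PostgreSQL-flavored SQL; the orders table and its columns are hypothetical:

-- Range-partition a large table by order date
CREATE TABLE orders (
  order_id    bigint,
  customer_id bigint,
  order_date  date,
  amount      numeric
) PARTITION BY RANGE (order_date);

CREATE TABLE orders_2024 PARTITION OF orders
  FOR VALUES FROM ('2024-01-01') TO ('2025-01-01');

-- Index a frequently filtered column
CREATE INDEX idx_orders_customer_id ON orders (customer_id);

-- Precompute a monthly aggregate for fast reads
CREATE MATERIALIZED VIEW monthly_sales AS
SELECT date_trunc('month', order_date) AS month, SUM(amount) AS total_sales
FROM orders
GROUP BY 1;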
Handled a complex data migration project by breaking it down into smaller tasks and collaborating with team members.
Identified key stakeholders and their requirements
Developed a detailed project plan with timelines and milestones
Assigned specific tasks to team members based on their strengths and expertise
Regularly communicated progress updates and addressed any issues promptly
I applied via Company Website and was interviewed before Nov 2023. There were 4 interview rounds.
Salaries at the company (as listed):
Role               | Salaries reported | Range
Senior Analyst     | 88 | ₹4.8 L/yr - ₹11.2 L/yr
Team Lead          | 58 | ₹7 L/yr - ₹17 L/yr
Analyst            | 54 | ₹1.8 L/yr - ₹7.9 L/yr
Operations Analyst | 37 | ₹2.4 L/yr - ₹6 L/yr
Manager            | 26 | ₹10.5 L/yr - ₹28 L/yr