I applied via Referral and was interviewed in Nov 2023. There were 2 interview rounds.
I applied via Naukri.com and was interviewed in Jul 2023. There were 3 interview rounds.
They ask basic grammar and reasoning questions.
I applied via Company Website and was interviewed in Jul 2024. There were 5 interview rounds.
I applied via Company Website and was interviewed in Oct 2024. There were 4 interview rounds.
Basic Python, SQL, and Bash questions
Data pipeline design involves creating a system to efficiently collect, process, and analyze data (a minimal sketch follows the list below).
Understand the data sources and requirements before designing the pipeline.
Use tools like Apache Kafka, Apache NiFi, or AWS Glue for data ingestion and processing.
Implement data validation and error handling mechanisms to ensure data quality.
Consider scalability and performance optimization while designing the pipeline.
Doc...
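To make these points concrete, here is a minimal sketch of such a pipeline in plain Python, assuming a JSON-lines source file; the helper names (validate_record, transform_record, run_pipeline) and the events.jsonl path are hypothetical, and a production pipeline would replace the file read with an ingestion tool such as Kafka or NiFi.

```python
# Minimal pipeline sketch: ingest -> validate -> transform -> load.
# All names and the input format are illustrative assumptions.
import json

def validate_record(record: dict) -> bool:
    # Data-quality gate: reject records missing required fields.
    return "user_id" in record and "amount" in record

def transform_record(record: dict) -> dict:
    # Normalize types so downstream consumers see a consistent schema.
    return {"user_id": str(record["user_id"]), "amount": float(record["amount"])}

def run_pipeline(source_path: str) -> list:
    processed, errors = [], []
    with open(source_path) as f:
        for line in f:
            try:
                record = json.loads(line)
            except json.JSONDecodeError:
                errors.append(line)  # error handling: quarantine malformed rows
                continue
            if validate_record(record):
                processed.append(transform_record(record))
            else:
                errors.append(line)
    print(f"processed={len(processed)} errors={len(errors)}")
    return processed

if __name__ == "__main__":
    run_pipeline("events.jsonl")  # hypothetical input file
```

Keeping validation and transformation as small separate functions is what makes a pipeline like this easy to scale out later, for example by mapping the same functions over Spark partitions.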
posted on 29 Dec 2024
It is nice work to do.
It helps in understanding the 5
It helps a lot in the company.
The goal of a Data Analyst is to analyze data to extract valuable insights and make data-driven decisions (a small sketch follows the list below).
Identify trends and patterns in data
Create visualizations to communicate findings
Provide actionable recommendations based on data analysis
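As a small pandas illustration of that workflow, assuming a sales.csv file with month and revenue columns (the file and the column names are assumptions for the example):

```python
# Analyst sketch: find a trend, visualize it, and state a recommendation.
import pandas as pd
import matplotlib.pyplot as plt

df = pd.read_csv("sales.csv")  # hypothetical dataset

# Identify a trend: month-over-month change in total revenue.
monthly = df.groupby("month", sort=True)["revenue"].sum()
trend = monthly.pct_change()

# Create a visualization to communicate the finding.
monthly.plot(kind="line", title="Monthly revenue")
plt.savefig("monthly_revenue.png")

# Turn the finding into an actionable recommendation.
worst = trend.idxmin()
print(f"Revenue fell {abs(trend.min()):.1%} in {worst}; investigate that month.")
```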
I applied via Recruitment Consultant and was interviewed in Oct 2024. There was 1 interview round.
Big data refers to large and complex data sets that are difficult to process using traditional data processing applications.
Big data involves large volumes of data
It includes data from various sources such as social media, sensors, and business transactions
Big data requires specialized tools and technologies for processing and analysis
Spark is a distributed computing framework that processes big data in memory and is known for its speed and ease of use (a short sketch follows the list below).
It keeps intermediate results in memory rather than on disk, which is what makes it faster than classic MapReduce.
It uses Resilient Distributed Datasets (RDDs) for fault-tolerant distributed data processing.
Spark provides high-level APIs in Java, Scala, Python, and R for ease of use.
It supports various data sources li...
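A minimal PySpark sketch of those points; the word-count job itself is an illustrative assumption:

```python
# Word count on an RDD: in-memory, fault-tolerant, via the Python API.
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("demo").getOrCreate()

# RDDs are partitioned collections that Spark processes in memory.
rdd = spark.sparkContext.parallelize(["big data", "spark processes big data"])
counts = (rdd.flatMap(lambda line: line.split())   # split lines into words
             .map(lambda word: (word, 1))          # pair each word with 1
             .reduceByKey(lambda a, b: a + b))     # sum counts per word

print(counts.collect())  # e.g. [('big', 2), ('data', 2), ('spark', 1), ('processes', 1)]
spark.stop()
```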
Our application is a data engineering platform that processes and analyzes large volumes of data to provide valuable insights (an ETL sketch follows the list below).
Our application uses various data processing techniques such as ETL (Extract, Transform, Load) to clean and transform raw data into usable formats.
We utilize big data technologies like Hadoop, Spark, and Kafka to handle large datasets efficiently.
The application also includes machine learning al...
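A bare-bones ETL sketch using only the standard library, assuming a CSV input and a SQLite target; orders.csv, the column names, and warehouse.db are all hypothetical:

```python
# ETL sketch: Extract from CSV, Transform (clean/cast), Load into SQLite.
import csv
import sqlite3

def extract(path):
    with open(path, newline="") as f:
        return list(csv.DictReader(f))

def transform(rows):
    # Clean raw data: drop incomplete rows, strip whitespace, cast amounts.
    return [(r["order_id"].strip(), float(r["amount"]))
            for r in rows if r.get("order_id") and r.get("amount")]

def load(rows, db="warehouse.db"):
    con = sqlite3.connect(db)
    con.execute("CREATE TABLE IF NOT EXISTS orders (order_id TEXT, amount REAL)")
    con.executemany("INSERT INTO orders VALUES (?, ?)", rows)
    con.commit()
    con.close()

load(transform(extract("orders.csv")))  # hypothetical input file
```

At scale the same three stages map onto the tools named above: Kafka for extraction, Spark for transformation, and a warehouse for the load step.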
posted on 30 May 2024
I applied via Approached by Company and was interviewed in Apr 2024. There was 1 interview round.
I applied via Referral and was interviewed in Mar 2024. There were 2 interview rounds.
A surrogate key is a unique identifier used in databases to uniquely identify each record in a table (see the sketch after this list).
Surrogate keys are typically generated by the system and have no business meaning.
They are used to simplify database operations and improve performance.
Example: Using an auto-incrementing integer column as a surrogate key in a table.
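A minimal version of that example with sqlite3, where customer_sk is the system-generated surrogate key and email plays the role of the business key (the customers table and its columns are assumptions):

```python
import sqlite3

con = sqlite3.connect(":memory:")
# customer_sk: surrogate key, generated by the system, no business meaning.
# email: the natural/business key the organization actually cares about.
con.execute("""
    CREATE TABLE customers (
        customer_sk INTEGER PRIMARY KEY AUTOINCREMENT,
        email       TEXT UNIQUE NOT NULL,
        name        TEXT
    )
""")
con.execute("INSERT INTO customers (email, name) VALUES (?, ?)", ("a@example.com", "Asha"))
con.execute("INSERT INTO customers (email, name) VALUES (?, ?)", ("b@example.com", "Bala"))
print(con.execute("SELECT customer_sk, email FROM customers").fetchall())
# -> [(1, 'a@example.com'), (2, 'b@example.com')]
con.close()
```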
Designation | Salaries reported | Salary range
Senior Software Engineer | 3.3k salaries | ₹9.2 L/yr - ₹28 L/yr
Software Engineer | 3.2k salaries | ₹5 L/yr - ₹18.1 L/yr
Claims Associate | 2.3k salaries | ₹1.5 L/yr - ₹4.8 L/yr
Associate Software Engineer | 1.3k salaries | ₹3 L/yr - ₹8 L/yr
Associate | 997 salaries | ₹2 L/yr - ₹6.1 L/yr
Infosys
TCS
Wipro
HCLTech