I applied via Naukri.com and was interviewed in Apr 2024. There was 1 interview round.
SCD types refer to slowly changing dimensions in data warehousing, categorized into Type 1, Type 2, and Type 3.
Type 1 SCD: Overwrites existing data with new information, losing historical data.
Type 2 SCD: Maintains historical data by creating new records for changes, with a surrogate key for each version.
Type 3 SCD: Keeps both old and new values in the same record, with separate columns for each version.
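The Type 2 pattern above can be sketched in a few lines of Python; the column names (`customer_id`, `city`) and the helper `apply_scd2` are illustrative, not from the interview:

```python
from datetime import date

# Minimal Type 2 SCD sketch over an in-memory list of dimension rows:
# expire the current row and insert a new version with a fresh surrogate key.
def apply_scd2(dim_rows, business_key, changes, today, next_sk):
    for row in dim_rows:
        if row["customer_id"] == business_key and row["is_current"]:
            row["is_current"] = False          # Type 2: keep the old version
            row["end_date"] = today
            new_row = {**row, **changes,
                       "surrogate_key": next_sk,
                       "start_date": today,
                       "end_date": None,
                       "is_current": True}
            dim_rows.append(new_row)
            return new_row
    return None

dim = [{"surrogate_key": 1, "customer_id": "C1", "city": "Pune",
        "start_date": date(2020, 1, 1), "end_date": None, "is_current": True}]
apply_scd2(dim, "C1", {"city": "Mumbai"}, date(2024, 4, 1), next_sk=2)
# dim now holds both versions: the expired Pune row and the current Mumbai row
```

A Type 1 update would instead overwrite `city` in place, and Type 3 would add a `previous_city` column to the same row.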
A junk dimension is a single dimension table that combines several unrelated low-cardinality attributes (flags and indicators) that don't fit naturally into any existing dimension table.
Combines miscellaneous flags and indicators into one small table
Reduces the number of dimension tables in the data warehouse
Helps in simplifying the data model and improving query performance
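A junk dimension is often built as the cross product of its flag values; here is a small sketch with hypothetical attribute names:

```python
from itertools import product

# Three low-cardinality attributes collapsed into one junk dimension
# instead of three tiny dimension tables. Names are illustrative.
flags = {
    "is_promo": [True, False],
    "payment_type": ["card", "cash", "upi"],
    "is_gift_wrapped": [True, False],
}
junk_dim = [
    dict(zip(flags, combo), junk_key=i + 1)
    for i, combo in enumerate(product(*flags.values()))
]
# 2 * 3 * 2 = 12 rows; the fact table stores a single junk_key
```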
Snowflake schema is a normalized form of star schema with multiple levels of dimension tables.
Snowflake schema has normalized dimension tables, leading to reduced redundancy and improved data integrity.
Star schema has denormalized dimension tables, which can lead to data redundancy but faster query performance.
Snowflake schema is more complex to query compared to star schema due to multiple levels of normalization.
Star...
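The extra join a snowflake schema requires can be seen with a tiny SQLite example (table and column names are made up for illustration):

```python
import sqlite3

# Snowflake layout: dim_product is normalized out to dim_category,
# so a query must join twice instead of once (as in a star schema).
con = sqlite3.connect(":memory:")
con.executescript("""
    CREATE TABLE dim_category (category_id INTEGER PRIMARY KEY, name TEXT);
    CREATE TABLE dim_product  (product_id INTEGER PRIMARY KEY,
                               name TEXT, category_id INTEGER);
    CREATE TABLE fact_sales   (product_id INTEGER, amount REAL);
    INSERT INTO dim_category VALUES (1, 'Electronics');
    INSERT INTO dim_product  VALUES (10, 'Phone', 1);
    INSERT INTO fact_sales   VALUES (10, 499.0), (10, 299.0);
""")
rows = con.execute("""
    SELECT c.name, SUM(f.amount)
    FROM fact_sales f
    JOIN dim_product  p ON p.product_id  = f.product_id
    JOIN dim_category c ON c.category_id = p.category_id
    GROUP BY c.name
""").fetchall()
# In a star schema, category name would live denormalized in dim_product,
# and the second join would disappear.
```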
I applied via Naukri.com and was interviewed in Dec 2024. There was 1 interview round.
To create a pipeline in Databricks, you can use Databricks Jobs or Apache Airflow for orchestration.
Use Databricks Jobs to create a pipeline by scheduling notebooks or Spark jobs.
Utilize Apache Airflow for more complex pipeline orchestration with dependencies and monitoring.
Leverage Databricks Delta for managing data pipelines with ACID transactions and versioning.
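A Databricks Jobs pipeline boils down to a job specification; below is a hedged sketch of a Jobs API 2.1-style payload, where the notebook path, cluster id, and cron expression are placeholders, not real values:

```python
# Hedged sketch of a Databricks Jobs API (2.1-style) payload that schedules
# one notebook task; all identifiers are illustrative placeholders.
job_spec = {
    "name": "daily_ingest_pipeline",
    "tasks": [
        {
            "task_key": "ingest",
            "notebook_task": {"notebook_path": "/Repos/etl/ingest"},
            "existing_cluster_id": "1234-567890-abcde123",  # placeholder
        }
    ],
    "schedule": {
        "quartz_cron_expression": "0 0 2 * * ?",  # 02:00 daily
        "timezone_id": "UTC",
    },
}
# This dict would be POSTed to the workspace's /api/2.1/jobs/create
# endpoint with an auth token; multi-task pipelines add more entries
# to "tasks" with depends_on links.
```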
posted on 23 Sep 2024
It was really a great experience.
In 3 years, I see myself leading a team of data engineers, implementing cutting-edge technologies, and driving impactful data-driven decisions.
Leading a team of data engineers
Implementing cutting-edge technologies
Driving impactful data-driven decisions
My strengths include strong analytical skills, attention to detail, and the ability to work well in a team.
Strong analytical skills - able to analyze complex data sets and derive meaningful insights
Attention to detail - meticulous in ensuring data accuracy and quality
Team player - collaborate effectively with colleagues to achieve common goals
I have worked on projects involving building data pipelines, optimizing database performance, and creating machine learning models.
Built data pipelines using Apache Spark and Kafka
Optimized database performance by tuning queries and indexes
Created machine learning models for predictive analytics
Implemented real-time data processing using technologies like Apache Flink
My CGPA is 3.8 out of 4.0.
My CGPA is 3.8, which is considered high in my university.
I have consistently maintained a high CGPA throughout my academic career.
I have received several academic awards based on my CGPA.
My CGPA reflects my dedication and hard work towards my studies.
My hobbies include hiking, photography, and playing the guitar.
Hiking: I enjoy exploring nature trails and challenging myself physically.
Photography: I love capturing moments and landscapes through my camera lens.
Playing the guitar: I find relaxation and creativity in strumming chords and learning new songs.
posted on 26 Jul 2024
SQL assessment round
I applied via Referral and was interviewed in May 2024. There were 4 interview rounds.
Python and SQL questions were asked
posted on 29 May 2024
I applied via Campus Placement and was interviewed in Apr 2024. There were 2 interview rounds.
It was a written test with theoretical SQL questions (primary key, foreign key, set operators) and a few queries to write.
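The set operators mentioned above behave as follows; a quick illustration using SQLite from Python:

```python
import sqlite3

# UNION, INTERSECT, EXCEPT on two small tables.
con = sqlite3.connect(":memory:")
con.executescript("""
    CREATE TABLE a (x INTEGER);
    CREATE TABLE b (x INTEGER);
    INSERT INTO a VALUES (1), (2), (3);
    INSERT INTO b VALUES (2), (3), (4);
""")
union     = [r[0] for r in con.execute(
    "SELECT x FROM a UNION SELECT x FROM b ORDER BY x")]      # deduplicated
intersect = [r[0] for r in con.execute(
    "SELECT x FROM a INTERSECT SELECT x FROM b ORDER BY x")]  # common rows
except_   = [r[0] for r in con.execute(
    "SELECT x FROM a EXCEPT SELECT x FROM b ORDER BY x")]     # in a, not b
```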
I applied via Campus Placement and was interviewed in May 2024. There were 2 interview rounds.
Two coding questions
ADF stands for Azure Data Factory, a cloud-based data integration service that allows you to create, schedule, and manage data pipelines.
ADF allows you to create data-driven workflows for orchestrating and automating data movement and data transformation.
You can use ADF to ingest data from various sources, process and transform the data, and then publish the data to different destinations.
ADF supports a wide range of d...
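An ADF pipeline is defined as JSON; here is a hedged sketch of a minimal pipeline with one Copy activity, where the dataset names and source/sink types are illustrative placeholders:

```python
# Hedged sketch of a minimal ADF pipeline definition (JSON expressed as a
# Python dict); dataset references and type names are made-up examples.
pipeline = {
    "name": "CopySalesData",
    "properties": {
        "activities": [
            {
                "name": "CopyBlobToSql",
                "type": "Copy",
                "inputs":  [{"referenceName": "BlobSalesDataset",
                             "type": "DatasetReference"}],
                "outputs": [{"referenceName": "SqlSalesDataset",
                             "type": "DatasetReference"}],
                "typeProperties": {
                    "source": {"type": "DelimitedTextSource"},
                    "sink":   {"type": "SqlSink"},
                },
            }
        ]
    },
}
```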
I applied via Recruitment Consultant and was interviewed before Sep 2021. There were 3 interview rounds.
Java code to reverse a string
Use StringBuilder class to reverse the string
Call reverse() method on the StringBuilder object
Convert the StringBuilder object back to String using toString() method
Developed ETL pipeline to ingest, clean, and analyze customer data for personalized marketing campaigns
Gathered requirements from stakeholders to understand data sources and business objectives
Designed data model to store customer information and campaign performance metrics
Implemented ETL process using Python and Apache Spark to extract, transform, and load data
Performed data quality checks and created visualizations ...
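The ingest-and-clean step of such a pipeline can be sketched in plain Python (the real project used Spark; field names here are hypothetical):

```python
# Toy sketch of a clean step: normalize emails, cast spend to int,
# and drop records missing the key field.
raw = [
    {"email": " A@X.COM ", "spend": "120"},
    {"email": "b@y.com",   "spend": "80"},
    {"email": None,        "spend": "50"},   # dropped: missing key field
]
clean = [
    {"email": r["email"].strip().lower(), "spend": int(r["spend"])}
    for r in raw if r["email"]
]
```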
I have used various transformations such as filtering, joining, aggregating, and pivoting in my data engineering projects.
Filtering data based on certain conditions
Joining multiple datasets together
Aggregating data to summarize information
Pivoting data from rows to columns or vice versa
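The four transformations above can be shown compactly in plain Python (hypothetical sample data):

```python
from collections import defaultdict

orders = [
    {"cust": "C1", "region": "N", "amount": 100},
    {"cust": "C2", "region": "S", "amount": 50},
    {"cust": "C1", "region": "N", "amount": 30},
]
names = {"C1": "Asha", "C2": "Ravi"}                           # lookup table

filtered = [o for o in orders if o["amount"] >= 50]            # filter
joined = [{**o, "name": names[o["cust"]]} for o in orders]     # join
totals = defaultdict(int)                                      # aggregate
for o in orders:
    totals[o["cust"]] += o["amount"]
pivot = defaultdict(dict)                                      # pivot: region -> cust -> sum
for o in orders:
    pivot[o["region"]][o["cust"]] = (
        pivot[o["region"]].get(o["cust"], 0) + o["amount"]
    )
```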
| Role | Salaries reported | Salary range |
| --- | --- | --- |
| BI Consultant | 57 | ₹5.3 L/yr - ₹15 L/yr |
| Associate Consultant | 48 | ₹4.2 L/yr - ₹12.4 L/yr |
| Associate BI Consultant | 32 | ₹4 L/yr - ₹11 L/yr |
| Consultant | 30 | ₹6.4 L/yr - ₹27.4 L/yr |
| Senior BI Consultant | 28 | ₹7.2 L/yr - ₹22.6 L/yr |
TCS
Infosys
Wipro
HCLTech