I applied via Referral and was interviewed in Apr 2024. There was 1 interview round.
Yes, I am available to join immediately.
I am currently available to start a new position right away.
I have no prior commitments that would prevent me from joining immediately.
I am excited about the opportunity and ready to hit the ground running.
I am available full-time for immediate start
Available to start immediately
Full-time availability
Flexible schedule
I applied via Naukri.com and was interviewed in Dec 2024. There was 1 interview round.
To create a pipeline in Databricks, you can use Databricks Jobs or Apache Airflow for orchestration (a sketch follows these points).
Use Databricks Jobs to create a pipeline by scheduling notebooks or Spark jobs.
Utilize Apache Airflow for more complex pipeline orchestration with dependencies and monitoring.
Leverage Databricks Delta for managing data pipelines with ACID transactions and versioning.
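For illustration, here is a minimal sketch of the first approach: defining a two-task job (ingest, then transform) through the Databricks Jobs 2.1 REST API. The workspace URL, token, cluster ID, and notebook paths are placeholders, not values from the interview.

```python
# Sketch: create a two-task Databricks Job (ingest -> transform) via the Jobs 2.1 REST API.
# Workspace URL, token, cluster id, and notebook paths are illustrative placeholders.
import requests

DATABRICKS_HOST = "https://<your-workspace>.cloud.databricks.com"
TOKEN = "<personal-access-token>"

job_spec = {
    "name": "daily_customer_pipeline",
    "tasks": [
        {
            "task_key": "ingest",
            "notebook_task": {"notebook_path": "/pipelines/ingest_raw_data"},
            "existing_cluster_id": "<cluster-id>",
        },
        {
            "task_key": "transform",
            "depends_on": [{"task_key": "ingest"}],  # runs only after ingest succeeds
            "notebook_task": {"notebook_path": "/pipelines/transform_to_delta"},
            "existing_cluster_id": "<cluster-id>",
        },
    ],
    # Run every day at 02:00 UTC
    "schedule": {"quartz_cron_expression": "0 0 2 * * ?", "timezone_id": "UTC"},
}

resp = requests.post(
    f"{DATABRICKS_HOST}/api/2.1/jobs/create",
    headers={"Authorization": f"Bearer {TOKEN}"},
    json=job_spec,
)
resp.raise_for_status()
print("Created job:", resp.json()["job_id"])
```

The `depends_on` field is what gives the job its pipeline shape; for heavier cross-system orchestration, the same notebooks could instead be triggered from an Airflow DAG.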
posted on 29 May 2024
I applied via Campus Placement and was interviewed in Apr 2024. There were 2 interview rounds.
It was a written test with theoretical SQL questions on topics such as primary keys, foreign keys, and set operators, plus a few queries to write.
I applied via Campus Placement and was interviewed in May 2024. There were 2 interview rounds.
Two coding questions
ADF stands for Azure Data Factory, a cloud-based data integration service that allows you to create, schedule, and manage data pipelines.
ADF allows you to create data-driven workflows for orchestrating and automating data movement and data transformation.
You can use ADF to ingest data from various sources, process and transform the data, and then publish the data to different destinations.
ADF supports a wide range of data sources and sinks through built-in connectors (a minimal pipeline sketch follows these points).
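As a rough, hypothetical sketch of how such a pipeline could be created programmatically, the snippet below registers a pipeline with a single Copy activity through the Azure management REST API. The subscription, resource group, factory, and dataset names are placeholders; the referenced datasets and linked services would have to exist in the factory already.

```python
# Sketch: create an ADF pipeline with one Copy activity via the Azure management REST API.
# All resource and dataset names below are illustrative placeholders.
import requests
from azure.identity import DefaultAzureCredential

SUB, RG, FACTORY = "<subscription-id>", "<resource-group>", "<factory-name>"
token = DefaultAzureCredential().get_token("https://management.azure.com/.default").token

pipeline = {
    "properties": {
        "activities": [
            {
                "name": "CopyBlobToSql",
                "type": "Copy",
                "inputs": [{"referenceName": "BlobCustomerDataset", "type": "DatasetReference"}],
                "outputs": [{"referenceName": "SqlCustomerDataset", "type": "DatasetReference"}],
                "typeProperties": {
                    "source": {"type": "DelimitedTextSource"},
                    "sink": {"type": "AzureSqlSink"},
                },
            }
        ]
    }
}

url = (
    f"https://management.azure.com/subscriptions/{SUB}/resourceGroups/{RG}"
    f"/providers/Microsoft.DataFactory/factories/{FACTORY}"
    f"/pipelines/CopyCustomerData?api-version=2018-06-01"
)
resp = requests.put(url, headers={"Authorization": f"Bearer {token}"}, json=pipeline)
resp.raise_for_status()
```

In practice the same definition is usually authored visually in the ADF studio; the JSON body shown here is the underlying pipeline definition it produces.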
Developed an ETL pipeline to ingest, clean, and analyze customer data for personalized marketing campaigns (a simplified Spark sketch follows this list)
Gathered requirements from stakeholders to understand data sources and business objectives
Designed data model to store customer information and campaign performance metrics
Implemented ETL process using Python and Apache Spark to extract, transform, and load data
Performed data quality checks and created visualizations ...
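A simplified, hypothetical sketch of the extract-transform-load step described above; the file paths, column names, and output location are made up for illustration.

```python
# Sketch of the ETL step with PySpark; paths, columns, and table names are
# illustrative placeholders, not the actual project code.
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("customer_campaign_etl").getOrCreate()

# Extract: raw customer and campaign-interaction data
customers = spark.read.option("header", True).csv("/raw/customers.csv")
events = spark.read.json("/raw/campaign_events/")

# Clean: drop duplicates and rows missing required keys (basic data quality checks)
customers_clean = customers.dropDuplicates(["customer_id"]).dropna(subset=["customer_id", "email"])

# Transform: aggregate campaign performance per customer
metrics = (
    events.groupBy("customer_id", "campaign_id")
    .agg(
        F.count("*").alias("events"),
        F.sum(F.when(F.col("event_type") == "click", 1).otherwise(0)).alias("clicks"),
    )
)

# Load: join and write the result for downstream analytics and visualization
result = customers_clean.join(metrics, on="customer_id", how="left")
result.write.mode("overwrite").parquet("/curated/customer_campaign_metrics")
```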
I have used various transformations such as filtering, joining, aggregating, and pivoting in my data engineering projects (a combined PySpark example follows this list).
Filtering data based on certain conditions
Joining multiple datasets together
Aggregating data to summarize information
Pivoting data from rows to columns or vice versa
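A compact PySpark illustration of these four transformation types; the orders/customers DataFrames and their columns are hypothetical.

```python
# Illustrative examples of filtering, joining, aggregating, and pivoting in PySpark;
# the sample data is made up.
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.getOrCreate()

orders = spark.createDataFrame(
    [(1, 101, "electronics", 250.0), (2, 102, "grocery", 40.0), (3, 101, "grocery", 15.0)],
    ["order_id", "customer_id", "category", "amount"],
)
customers = spark.createDataFrame([(101, "Asha"), (102, "Ravi")], ["customer_id", "name"])

# Filtering: keep only high-value orders
high_value = orders.filter(F.col("amount") > 100)

# Joining: enrich orders with customer names
enriched = orders.join(customers, on="customer_id", how="inner")

# Aggregating: total spend per customer
totals = enriched.groupBy("name").agg(F.sum("amount").alias("total_spend"))

# Pivoting: one row per customer, one column per product category
pivoted = enriched.groupBy("name").pivot("category").sum("amount")

pivoted.show()
```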
Dual mode in Power BI is a storage mode that lets a table serve queries either from the in-memory (Import) cache or via DirectQuery, depending on the query context.
Dual mode combines the benefits of both DirectQuery and Import modes in composite models.
Different tables within the same report can use different storage modes (Import, DirectQuery, or Dual).
DirectQuery connects directly to the data source for near-real-time retrieval, while Import loads data into Power BI's in-memory model for faster query performance.
I applied via Naukri.com and was interviewed in Nov 2022. There were 4 interview rounds.
Join two tables in PySpark using the DataFrame API (a runnable sketch follows these steps)
Create two DataFrames from the tables
Specify the join condition using join() function
Select the columns to be displayed using select() function
Use show() function to display the result
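Putting those steps together, here is a minimal runnable sketch; the sample rows and the join key (dept_id) are made up for illustration.

```python
# Minimal end-to-end version of the steps above; the sample data is hypothetical.
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("join_example").getOrCreate()

# Create two DataFrames representing the two tables
employees = spark.createDataFrame(
    [(1, "Asha", 10), (2, "Ravi", 20), (3, "Meera", 10)],
    ["emp_id", "name", "dept_id"],
)
departments = spark.createDataFrame([(10, "Data"), (20, "Platform")], ["dept_id", "dept_name"])

# Specify the join condition with join(), pick the output columns with select()
joined = (
    employees.join(departments, on="dept_id", how="inner")
    .select("emp_id", "name", "dept_name")
)

# Display the result
joined.show()
```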
| Designation | Salaries reported | Salary range |
| --- | --- | --- |
| Consultant | 56 | ₹4 L/yr - ₹15.7 L/yr |
| Solution Engineer | 49 | ₹3.5 L/yr - ₹10.5 L/yr |
| Senior Consultant | 49 | ₹7 L/yr - ₹20 L/yr |
| Technical Support Engineer | 49 | ₹2.8 L/yr - ₹3.6 L/yr |
| Associate Solutions Engineer | 25 | ₹3 L/yr - ₹6.9 L/yr |