I am a data specialist with 5 years of experience in analyzing and interpreting complex data sets to drive business decisions.
Experienced in data cleaning, transformation, and visualization using tools like SQL, Python, and Tableau
Strong analytical skills with a proven track record of identifying trends and patterns in data
Excellent communication skills to effectively present findings and recommendations to stakeholder...
I applied via Naukri.com and was interviewed in Dec 2024. There was 1 interview round.
I got several calls from three different people inviting me to the same interview at Xebia's Bangalore Brigade office. I attended the interview at Xebia on January 11, 2025, and the experience was disappointing. Despite reading several negative reviews beforehand, I chose to give the company a fair chance, but unfortunately, the concerns expressed in those reviews turned out to be valid.
From the very beginning, the process was poorly managed. I waited for over three hours before being called, while candidates who arrived after me were invited for their interviews earlier. This inconsistency immediately raised questions about the fairness of their process.
When my turn finally came, the interview began with a moderately challenging SQL question: I was asked to fetch all invalid December transaction IDs (those occurring outside office hours) from a dataset, applying conditions such as working hours from Monday to Friday (9 AM to 4 PM), and excluding weekends and specific holidays (24th and 25th December). While I attempted to solve this, the interviewer interrupted repeatedly with casual, unrelated remarks. These interruptions disrupted my concentration and added unnecessary pressure, making it difficult to focus on solving the query effectively.
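For what it's worth, the logic the question seems to ask for can be sketched with the standard-library sqlite3 module; the table and column names here are invented for the demo, not taken from the actual interview:

```python
import sqlite3

# Hypothetical schema: transactions(txn_id, txn_ts) -- names assumed.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE transactions (txn_id INTEGER, txn_ts TEXT)")
conn.executemany(
    "INSERT INTO transactions VALUES (?, ?)",
    [
        (1, "2024-12-02 10:00:00"),  # Monday, within 9 AM-4 PM -> valid
        (2, "2024-12-02 18:30:00"),  # Monday, after hours -> invalid
        (3, "2024-12-07 11:00:00"),  # Saturday -> invalid
        (4, "2024-12-25 10:00:00"),  # holiday -> invalid
    ],
)

# Invalid = outside Mon-Fri 9 AM-4 PM, on a weekend, or on 24/25 Dec.
invalid = conn.execute("""
    SELECT txn_id FROM transactions
    WHERE strftime('%m', txn_ts) = '12'
      AND (
            strftime('%w', txn_ts) IN ('0', '6')           -- Sunday/Saturday
         OR CAST(strftime('%H', txn_ts) AS INTEGER) < 9    -- before 9 AM
         OR CAST(strftime('%H', txn_ts) AS INTEGER) >= 16  -- 4 PM or later
         OR strftime('%d', txn_ts) IN ('24', '25')         -- holidays
      )
""").fetchall()
print([row[0] for row in invalid])  # -> [2, 3, 4]
```

The same boundary conditions (the 4 PM cutoff, the weekend day numbers) would of course depend on the exact SQL dialect used in the interview.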
Following this, the interviewer moved to a Python question, which involved determining whether a given number was a perfect square. Although the problem itself was simple, it included irrelevant details, such as pre-imported libraries in a web-based IDE. This added an unnecessary layer of complexity and confusion. Again, the interviewer’s interruptions and casual talk distracted me further. Instead of focusing on assessing my logic and problem-solving skills, he seemed more interested in making irrelevant comments.
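For reference, the perfect-square check itself is short; a typical Python solution uses `math.isqrt`, though this is the standard approach rather than whatever the interviewer had in mind:

```python
import math

def is_perfect_square(n: int) -> bool:
    """Return True if n is a non-negative perfect square."""
    if n < 0:
        return False
    root = math.isqrt(n)  # exact integer square root, no float rounding issues
    return root * root == n

print([is_perfect_square(x) for x in (16, 17, 0, 1)])  # -> [True, False, True, True]
```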
What stood out most negatively was the interviewer’s unprofessional behavior. At one point, he made an inappropriate remark about my name, comparing it to his own, which he claimed was not as "weighted."
I politely asked his name, and he replied, "Vaibhav Gupta".
While I attempted to steer the conversation back to technical discussions, his attitude remained dismissive and unfocused. He even questioned my leadership skills but turned it into an argument instead of allowing me to explain.
I also noticed disparities in how candidates were treated. For instance, a female candidate before me was given over an hour for her interview, while mine felt rushed and dismissive. While this is my personal observation, it raised concerns about bias in their evaluation process.
The interview ended abruptly and on a negative note. When I tried to discuss architectural patterns for data pipelines, the interviewer dismissed my points outright, stating that they did not need data architects. Without providing proper closure, he left the room, leaving me feeling disrespected and undervalued.
Overall, the experience was frustrating and insulting. The interviewer’s behavior was unprofessional and dismissive, and the process lacked the basic respect and fairness expected in a professional setting. Based on my experience, I strongly believe that Xebia needs to overhaul their interview practices, ensuring a more structured, unbiased, and respectful approach toward candidates.
I am relieved I was not selected, as this experience highlighted what could likely be a toxic work environment. I would not recommend Xebia to anyone, as their lack of professionalism and courtesy reflects poorly on their organizational culture.
I applied via Naukri.com and was interviewed in Nov 2024. There was 1 interview round.
I applied via Naukri.com and was interviewed in Dec 2024. There were 4 interview rounds.
I applied via Naukri.com and was interviewed in Oct 2024. There were 2 interview rounds.
Spark performance problems can arise due to inefficient code, data skew, resource constraints, and improper configuration.
Inefficient code can lead to slow performance, such as using collect() on large datasets.
Data skew can cause uneven distribution of data across partitions, impacting processing time.
Resource constraints like insufficient memory or CPU can result in slow Spark jobs.
Improper configuration settings, su...
I was approached by the company and was interviewed in Jun 2024. There was 1 interview round.
Power Pivot is a data analysis tool in Excel that allows users to create powerful data models, perform calculations, and generate insights.
Power Pivot is an Excel add-in used for data analysis and modeling.
It allows users to import and manipulate large datasets from different sources.
Users can create relationships between tables, perform calculations, and create advanced data visualizations.
Power Pivot is commonly used...
Power Query is a data connection technology that enables you to discover, connect, combine, and refine data across a wide variety of sources.
Power Query is used to import, transform, and combine data from different sources for analysis.
It helps in cleaning and shaping data before loading it into Excel or Power BI.
Power Query can be used to automate data preparation tasks, saving time and effort.
It allows users to easil...
Power Pivot is used for data modeling and analysis, while Power Query is used for data transformation and cleaning.
Power Pivot is used for creating relationships between tables and performing calculations.
Power Query is used for importing, transforming, and cleaning data from various sources.
Power Pivot is more focused on data analysis and modeling, while Power Query is more focused on data preparation.
Both Power Pivot...
To retrieve data over 3 months in a dynamic dashboard, use a date range filter and ensure the data source is updated regularly.
Create a date range filter in the dashboard to select a time period of over 3 months
Ensure the data source is updated regularly to include the required data
Use SQL queries or data extraction tools to pull the necessary data for the dashboard
Consider automating the data retrieval process to ensu...
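The rolling-window idea behind such a date filter can be sketched in plain Python; the dates and record layout here are made up for illustration:

```python
from datetime import date, timedelta

# Hypothetical records (date, value) standing in for the dashboard's data source.
today = date(2025, 1, 15)
records = [
    (date(2024, 9, 1), 100),
    (date(2024, 11, 20), 250),
    (date(2025, 1, 10), 300),
]

# "Over 3 months" as a rolling window: keep rows within the last ~92 days.
cutoff = today - timedelta(days=92)
window = [(d, v) for d, v in records if d >= cutoff]
print(window)  # the Sep 2024 row falls outside the window
```

In a real dashboard the same window would typically be pushed down into the SQL query or the BI tool's date filter rather than computed client-side.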
A discrete chart is a type of chart that displays data points in a discrete manner, typically using bars or columns.
Discrete charts are used to represent categorical data, where each category is represented by a separate bar or column.
They are commonly used in market research, survey data analysis, and comparison of different categories.
Examples of discrete charts include bar charts, column charts, and stacked bar charts.
SQL join is used to combine rows from two or more tables based on a related column between them.
Types of SQL joins include INNER JOIN, LEFT JOIN, RIGHT JOIN, and FULL JOIN.
INNER JOIN returns rows when there is at least one match in both tables.
LEFT JOIN returns all rows from the left table and the matched rows from the right table.
RIGHT JOIN returns all rows from the right table and the matched rows from the left table...
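These join behaviours can be checked quickly with the standard-library sqlite3 module; the table and column names are invented for the demo (note that SQLite itself only added RIGHT and FULL JOIN in version 3.39):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE customers (id INTEGER, name TEXT);
    CREATE TABLE orders (customer_id INTEGER, amount INTEGER);
    INSERT INTO customers VALUES (1, 'Asha'), (2, 'Ravi');
    INSERT INTO orders VALUES (1, 500);
""")

# INNER JOIN: only customers that have at least one matching order.
inner = conn.execute("""
    SELECT c.name, o.amount FROM customers c
    INNER JOIN orders o ON c.id = o.customer_id
""").fetchall()

# LEFT JOIN: every customer; NULL (None) where no order matches.
left = conn.execute("""
    SELECT c.name, o.amount FROM customers c
    LEFT JOIN orders o ON c.id = o.customer_id
""").fetchall()

print(inner)  # [('Asha', 500)]
print(left)   # [('Asha', 500), ('Ravi', None)]
```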
Stored procedures are precompiled SQL queries stored in a database for reuse.
They can improve performance by reducing network traffic and increasing security
Stored procedures can be used to encapsulate business logic and complex queries
Examples include procedures for updating customer information or calculating sales totals
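SQLite has no stored procedures, so a faithful runnable demo isn't possible with the standard library alone; the sketch below only mirrors the encapsulation idea, wrapping one precompiled, parameterized statement (the "update customer information" example) in a reusable function:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE customers (id INTEGER, email TEXT);
    INSERT INTO customers VALUES (1, 'old@example.com');
""")

# Plays the role of a stored procedure: the SQL and its business rule live
# in one place, callers pass only parameters.
UPDATE_EMAIL = "UPDATE customers SET email = ? WHERE id = ?"

def update_customer_email(customer_id: int, email: str) -> None:
    conn.execute(UPDATE_EMAIL, (email, customer_id))
    conn.commit()

update_customer_email(1, "new@example.com")
print(conn.execute("SELECT email FROM customers WHERE id = 1").fetchone())
# -> ('new@example.com',)
```

In a server database (SQL Server, PostgreSQL, MySQL) the same logic would be a `CREATE PROCEDURE` body executed inside the engine, which is where the network-traffic and security benefits come from.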
CTE stands for Common Table Expressions. It is a temporary result set that can be referenced within a SELECT, INSERT, UPDATE, or DELETE statement.
CTEs are defined using the WITH keyword in SQL.
They help improve readability and maintainability of complex queries.
CTEs can be recursive, allowing for hierarchical data querying.
Examples: Recursive CTEs for querying organizational hierarchies, CTEs for data transformation be...
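The organizational-hierarchy example runs as-is in SQLite, so it can be demonstrated with the standard library; the employee data below is invented:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE employees (id INTEGER, name TEXT, manager_id INTEGER);
    INSERT INTO employees VALUES
        (1, 'CEO', NULL), (2, 'VP', 1), (3, 'Engineer', 2);
""")

# Recursive CTE: start at the root (no manager) and walk down the hierarchy,
# tracking each employee's depth.
rows = conn.execute("""
    WITH RECURSIVE chain(id, name, depth) AS (
        SELECT id, name, 0 FROM employees WHERE manager_id IS NULL
        UNION ALL
        SELECT e.id, e.name, c.depth + 1
        FROM employees e JOIN chain c ON e.manager_id = c.id
    )
    SELECT name, depth FROM chain ORDER BY depth
""").fetchall()
print(rows)  # [('CEO', 0), ('VP', 1), ('Engineer', 2)]
```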
Matrix in PowerBi is a visual representation of data in rows and columns, allowing for easy comparison and analysis.
Matrix displays data in a grid format with rows and columns
It allows for easy comparison of data across different categories
Users can drill down into the data to see more detailed information
Matrix can be used to create interactive reports and dashboards
Lambda function is an anonymous function in Python that can have any number of arguments, but can only have one expression.
Used for creating small, throwaway functions without a name
Commonly used with functions like map(), filter(), and reduce()
Can be used to define functions inline without the need to formally define a function using def keyword
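The points above in a few lines, showing lambdas passed inline to `map()`, `filter()`, and `sorted()`:

```python
# Anonymous, single-expression functions used inline -- no def needed.
nums = [3, -1, 4, -1, 5]

squares = list(map(lambda x: x * x, nums))         # apply to every element
positives = list(filter(lambda x: x > 0, nums))    # keep elements passing a test
by_abs = sorted(nums, key=lambda x: abs(x))        # custom sort key

print(squares)    # [9, 1, 16, 1, 25]
print(positives)  # [3, 4, 5]
print(by_abs)     # [-1, -1, 3, 4, 5]
```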
To manipulate datasets in Python, steps include loading data, cleaning data, transforming data, and analyzing data using libraries like Pandas.
Load the dataset using Pandas library
Clean the data by handling missing values, removing duplicates, and correcting data types
Transform the data by applying functions, merging datasets, and creating new columns
Analyze the data by performing statistical analysis, visualizations,
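The same load-clean-transform-analyze steps, sketched with only the standard library so it stays self-contained; with pandas, most of these stages collapse to one-liners (`read_csv`, `dropna`, `drop_duplicates`, `mean`). The file contents and column names are invented:

```python
import csv
import io
import statistics

# Stand-in for a CSV file on disk.
raw = io.StringIO("city,sales\nPune,100\nPune,100\nDelhi,\nMumbai,250\n")

# Load
rows = list(csv.DictReader(raw))

# Clean: drop rows with missing sales, remove exact duplicates, fix types
seen, cleaned = set(), []
for r in rows:
    if not r["sales"]:          # missing value -> drop the row
        continue
    key = (r["city"], r["sales"])
    if key in seen:             # duplicate row -> drop
        continue
    seen.add(key)
    cleaned.append({"city": r["city"], "sales": int(r["sales"])})

# Transform: derive a new column
for r in cleaned:
    r["sales_k"] = r["sales"] / 1000

# Analyze
print(statistics.mean(r["sales"] for r in cleaned))  # mean of 100 and 250 -> 175
```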
DAX data types are used in Power BI and Excel to define the type of data stored in a column or measure.
DAX data types include Integer, Decimal Number, String, Boolean, Date, Time, DateTime, and Currency.
Data types are important for calculations and formatting in DAX formulas.
For example, using the correct data type for a column can ensure accurate calculations and visualizations.
Inner join returns only the matching rows between two tables, while left join returns all rows from the left table and the matching rows from the right table.
Inner join only includes rows that have matching values in both tables
Left join includes all rows from the left table, even if there are no matching rows in the right table
Example: Inner join - SELECT * FROM table1 INNER JOIN table2 ON table1.id = table2.id
Example...
posted on 25 Sep 2024
I applied via Walk-in and was interviewed in Aug 2024. There were 5 interview rounds.
Maths, grammar & communication
Why do you like this job opportunity?
I applied via LinkedIn and was interviewed in Jul 2024. There were 2 interview rounds.
It was a pair programming round where we needed to work through a couple of Spark scenarios along with the interviewer. You are given boilerplate code with some functionality to fill in, and you are assessed on writing clean, extensible code and test cases.
I applied via Naukri.com and was interviewed in Oct 2024. There was 1 interview round.
Incremental load in pyspark refers to loading only new or updated data into a dataset without reloading the entire dataset.
Use the Delta Lake format with PySpark to perform incremental loads (e.g. via MERGE INTO), with the 'mergeSchema' option handling schema evolution.
Utilize the 'partitionBy' function to optimize incremental loads by partitioning the data based on specific columns.
Implement a logic to identify new or updated records based on timestamps or uni...
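The core upsert logic can be shown without Spark at all; this toy sketch merges only new or changed records (keyed by id, compared on an updated_at timestamp) into an existing target, which is essentially what a Delta MERGE does at scale. All record contents are invented:

```python
# Existing target table, keyed by id.
target = {
    1: {"name": "Asha", "updated_at": "2024-01-01"},
    2: {"name": "Ravi", "updated_at": "2024-01-01"},
}
# Incoming batch: one updated record, one brand-new record.
incoming = [
    {"id": 2, "name": "Ravi K", "updated_at": "2024-02-01"},  # updated
    {"id": 3, "name": "Meera", "updated_at": "2024-02-01"},   # new
]

for rec in incoming:
    existing = target.get(rec["id"])
    # Upsert only if the record is new or strictly newer than what we hold.
    if existing is None or rec["updated_at"] > existing["updated_at"]:
        target[rec["id"]] = {k: v for k, v in rec.items() if k != "id"}

print(sorted(target))  # [1, 2, 3]; id 2 now holds the newer record
```

(ISO-format date strings compare correctly as plain strings, which keeps the sketch dependency-free.)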
| Spatial Data Specialist | 972 salaries | ₹1.5 L/yr - ₹4.5 L/yr |
| Spatial Data Specialist 2 | 797 salaries | ₹2.6 L/yr - ₹5.5 L/yr |
| Spatial Data Specialist 1 | 558 salaries | ₹1.8 L/yr - ₹4 L/yr |
| GIS Analyst | 479 salaries | ₹1.8 L/yr - ₹4.3 L/yr |
| Senior Software Engineer | 301 salaries | ₹11.7 L/yr - ₹33 L/yr |
Google Maps
TomTom
MapmyIndia
Bosch