I applied via Approached by Company and was interviewed in Aug 2023. There was 1 interview round.
To connect to a SQL server using Python, you can use the pyodbc library.
Install pyodbc library using pip
Import pyodbc module in your Python script
Establish a connection using the pyodbc.connect() method, providing the necessary connection details
Create a cursor object using the connection.cursor() method
Execute SQL queries using the cursor.execute() method
Fetch the results using the cursor.fetchall() or cursor.fetchone() methods
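The steps above can be sketched as follows. The server name, database name, and driver string are placeholder assumptions, and the pyodbc import is deferred inside the fetch function so the connection-string builder can be read and exercised without the driver installed.

```python
def build_conn_str(server, database, driver="{ODBC Driver 17 for SQL Server}"):
    # Assemble a pyodbc connection string; server/database values are placeholders.
    return (
        f"DRIVER={driver};"
        f"SERVER={server};"
        f"DATABASE={database};"
        "Trusted_Connection=yes;"
    )

def fetch_all(conn_str, query):
    # Deferred import: only needed when a real connection is made.
    import pyodbc

    conn = pyodbc.connect(conn_str)
    try:
        cursor = conn.cursor()
        cursor.execute(query)
        return cursor.fetchall()  # or cursor.fetchone() for a single row
    finally:
        conn.close()

print(build_conn_str("my-server", "my_db"))
```

Calling fetch_all would require a reachable SQL Server instance and an installed ODBC driver; the sketch only demonstrates the connect/cursor/execute/fetch sequence described above.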
posted on 31 Dec 2024
Apache Spark architecture includes a cluster manager, worker nodes, and driver program.
Apache Spark architecture consists of a cluster manager, which allocates resources and schedules tasks.
Worker nodes execute tasks and store data in memory or disk.
Driver program coordinates tasks and communicates with the cluster manager.
Spark applications run as independent sets of processes on a cluster, coordinated by the SparkContext in the driver program.
reduceByKey aggregates the values for each key, while groupByKey only groups the values for each key.
reduceByKey is a transformation that combines the values of each key using an associative, commutative function.
groupByKey is a transformation that collects all values sharing a key into a single grouped data set.
reduceByKey is more efficient for aggregation because it combines values on each partition before shuffling, while groupByKey shuffles every value across the network before any aggregation happens.
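The difference can be illustrated without a cluster. This plain-Python sketch mimics the two semantics: the reduceByKey path folds each value into a per-key accumulator as it arrives (what Spark does on the map side before the shuffle), while the groupByKey path first materializes every value per key and only aggregates afterwards.

```python
from collections import defaultdict

pairs = [("a", 1), ("b", 2), ("a", 3), ("b", 4), ("a", 5)]

def reduce_by_key(pairs, fn):
    # reduceByKey-style: one accumulator per key, combined on the fly
    acc = {}
    for k, v in pairs:
        acc[k] = fn(acc[k], v) if k in acc else v
    return acc

def group_by_key(pairs):
    # groupByKey-style: every value is kept until aggregation time
    groups = defaultdict(list)
    for k, v in pairs:
        groups[k].append(v)
    return dict(groups)

print(reduce_by_key(pairs, lambda x, y: x + y))            # {'a': 9, 'b': 6}
print({k: sum(vs) for k, vs in group_by_key(pairs).items()})  # {'a': 9, 'b': 6}
```

Both paths reach the same totals, but the grouped version holds every intermediate value in memory, which is why groupByKey shuffles far more data on a real cluster.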
RDD is a low-level abstraction representing a distributed collection of objects, while DataFrame is a higher-level abstraction representing a distributed collection of data organized into named columns.
RDD is more suitable for unstructured data and low-level transformations, while DataFrame is more suitable for structured data and high-level abstractions.
DataFrames provide optimizations like query optimization through the Catalyst optimizer and code generation.
The different modes of execution in Apache Spark include local mode, standalone mode, YARN mode, and Mesos mode.
Local mode: Spark runs on a single machine with one executor.
Standalone mode: Spark runs on a cluster managed by a standalone cluster manager.
YARN mode: Spark runs on a Hadoop cluster using YARN as the resource manager.
Mesos mode: Spark runs on a Mesos cluster with Mesos as the resource manager.
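Each mode corresponds to a different --master URL passed when the application is submitted. A minimal sketch of that mapping, where the host names are placeholder assumptions (7077 and 5050 are the conventional standalone and Mesos default ports):

```python
def master_url(mode: str) -> str:
    # Map an execution mode to the --master URL spark-submit expects.
    # Host names are placeholders; only the URL schemes are fixed by Spark.
    urls = {
        "local": "local[*]",                      # all cores on one machine
        "standalone": "spark://master-host:7077", # standalone cluster manager
        "yarn": "yarn",                           # RM address read from Hadoop config
        "mesos": "mesos://mesos-host:5050",       # Mesos cluster manager
    }
    return urls[mode]

print(master_url("local"))  # local[*]
```

Note that in YARN mode the bare string "yarn" suffices because Spark reads the resource manager's address from the Hadoop configuration on the classpath.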
I applied via Naukri.com and was interviewed in Oct 2024. There were 2 interview rounds.
I applied via LinkedIn and was interviewed in Nov 2024. There was 1 interview round.
The aptitude test covered quantitative aptitude, logical reasoning, and reading comprehension.
I have strong skills in data processing, ETL, data modeling, and programming languages like Python and SQL.
Proficient in data processing and ETL techniques
Strong knowledge of data modeling and database design
Experience with programming languages like Python and SQL
Familiarity with big data technologies such as Hadoop and Spark
Yes, I am open to relocating for the right opportunity.
I am willing to relocate for the right job opportunity.
I have experience moving for previous roles.
I am flexible and adaptable to new locations.
I am excited about the possibility of exploring a new city or country.
I applied via Campus Placement and was interviewed in Oct 2024. There was 1 interview round.
Use regular expression to remove special characters from a string
Use the regex pattern [^a-zA-Z0-9\s] to match any character that is not a letter, digit, or whitespace
Use the replace() function in your programming language to replace the matched special characters with an empty string
Example: input string 'Hello! How are you?' will become 'Hello How are you' after removing special characters
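The steps above, using Python's re module:

```python
import re

def remove_special(text: str) -> str:
    # Keep letters, digits, and whitespace; drop everything else.
    return re.sub(r"[^a-zA-Z0-9\s]", "", text)

print(remove_special("Hello! How are you?"))  # Hello How are you
```

re.sub replaces every match of the negated character class with an empty string, which is the same replace-with-nothing pattern described above.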
Databricks is a unified analytics platform that provides a collaborative environment for data scientists, engineers, and analysts.
Databricks is built on top of Apache Spark, providing a unified platform for data engineering, data science, and business analytics.
Internals of Databricks include a cluster manager, job scheduler, and workspace for collaboration.
Optimization techniques in Databricks include query optimization and caching.
I applied via Job Portal and was interviewed in Aug 2024. There were 3 interview rounds.
It's a mandatory test, even for experienced people.
My strengths include strong analytical skills, attention to detail, and problem-solving abilities.
Strong analytical skills - able to analyze complex data sets and derive meaningful insights
Attention to detail - meticulous in ensuring data accuracy and quality
Problem-solving abilities - adept at identifying and resolving data-related issues
Experience with data manipulation tools like SQL, Python, and Spark
Seeking new challenges and growth opportunities in a different environment.
Looking for new challenges to enhance my skills and knowledge
Seeking growth opportunities that align with my career goals
Interested in exploring different technologies and industries
Want to work in a more collaborative team environment
Seeking better work-life balance or location proximity
Consultant            5 salaries   ₹35 L/yr - ₹70 L/yr
Manager               5 salaries   ₹35 L/yr - ₹50 L/yr
Senior Consultant     4 salaries   ₹26 L/yr - ₹41.4 L/yr
Functional Architect  4 salaries   ₹13.3 L/yr - ₹60 L/yr
Team Lead             3 salaries   ₹4.5 L/yr - ₹15 L/yr