Affine Interview Questions and Answers

Updated 10 May 2025

45 Interview questions

A Consultant was asked 2mo ago
Q. What strategies can be employed for the optimization of ETL processes and Spark jobs?
Ans. 

Optimize ETL processes and Spark jobs through efficient design, resource management, and performance tuning.

  • Use partitioning to improve data processing speed. For example, partitioning large datasets by date can speed up queries.

  • Implement data caching in Spark to store intermediate results, reducing the need for repeated computations.

  • Optimize data formats by using columnar storage formats like Parquet or ORC, whic...

A Senior Data Scientist Associate was asked 3mo ago
Q. How does the linear programming algorithm work?
Ans. 

Linear programming optimizes a linear objective function subject to linear constraints, finding the best outcome in a feasible region.

  • Linear programming involves maximizing or minimizing a linear objective function.

  • Constraints are linear inequalities that define the feasible region.

  • The feasible region is typically a convex polygon in multi-dimensional space.

  • The Simplex method is a popular algorithm used to solve l...
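The idea above can be illustrated with a deliberately tiny example. This is a sketch, not production code: it exploits the fact that an LP optimum lies at a vertex of the feasible region, so for a 2-D problem we can just evaluate the objective at each corner (real solvers such as the Simplex method or `scipy.optimize.linprog` search the vertices far more cleverly; the problem data here is invented).

```python
# Maximize 3x + 2y subject to x + y <= 4, x <= 3, x >= 0, y >= 0.
# The feasible region is a polygon; the optimum sits at one of its vertices.
vertices = [(0, 0), (3, 0), (3, 1), (0, 4)]  # corners of the feasible region

def objective(p):
    x, y = p
    return 3 * x + 2 * y

# Brute-force vertex enumeration stands in for what Simplex does by pivoting.
best = max(vertices, key=objective)
print(best, objective(best))  # (3, 1) 11
```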

A Senior Data Scientist Associate was asked 3mo ago
Q. How does the XGBoost algorithm work, with an explanation of each step?
Ans. 

XGBoost is an efficient implementation of gradient boosting that optimizes performance and accuracy through ensemble learning.

  • 1. **Gradient Boosting Framework**: XGBoost builds models in a sequential manner, where each new model corrects errors made by the previous ones.

  • 2. **Decision Trees**: It primarily uses decision trees as base learners, where each tree is built to minimize the loss function.

  • 3. **Regularizati...
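The sequential-correction idea in steps 1 and 2 can be sketched with a toy gradient-boosting regressor that uses depth-1 trees (stumps) and squared loss. This is only an illustration of the boosting loop: XGBoost additionally uses second-order gradients, regularization, and many systems optimizations not shown here, and the data below is made up.

```python
import statistics

def fit_stump(xs, residuals):
    """Find the 1-D threshold split that best fits the current residuals."""
    best = None
    for t in xs:
        left = [r for x, r in zip(xs, residuals) if x <= t]
        right = [r for x, r in zip(xs, residuals) if x > t]
        lm = statistics.mean(left) if left else 0.0
        rm = statistics.mean(right) if right else 0.0
        err = sum((r - (lm if x <= t else rm)) ** 2 for x, r in zip(xs, residuals))
        if best is None or err < best[0]:
            best = (err, t, lm, rm)
    _, t, lm, rm = best
    return lambda x, t=t, lm=lm, rm=rm: lm if x <= t else rm

def boost(xs, ys, rounds=20, lr=0.3):
    pred = [0.0] * len(xs)
    stumps = []
    for _ in range(rounds):
        # Each new stump fits the residuals (the negative gradient of squared loss),
        # i.e. it corrects the errors of the ensemble built so far.
        residuals = [y - p for y, p in zip(ys, pred)]
        stump = fit_stump(xs, residuals)
        pred = [p + lr * stump(x) for x, p in zip(xs, pred)]
        stumps.append(stump)
    return lambda x: sum(lr * s(x) for s in stumps)

xs = [1, 2, 3, 4, 5, 6]
ys = [1, 1, 1, 5, 5, 5]
model = boost(xs, ys)
print([round(model(x), 2) for x in xs])  # predictions close to [1, 1, 1, 5, 5, 5]
```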

A Senior Data Scientist Associate was asked 3mo ago
Q. Mention some metrics in ML and explain the tradeoffs between them.
Ans. 

ML metrics help evaluate model performance, each with trade-offs affecting accuracy, interpretability, and application.

  • Accuracy vs. Precision: High accuracy may come with low precision in imbalanced datasets. Example: Classifying rare diseases.

  • Recall vs. F1 Score: High recall may lower F1 score, impacting balance in precision and recall. Example: Fraud detection.

  • ROC-AUC vs. PR-AUC: ROC-AUC is sensitive to class im...
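The accuracy/precision/recall trade-off above is easy to see numerically on an imbalanced toy example (the labels below are invented for illustration):

```python
# Imbalanced toy data: 1 = rare positive class (e.g. a rare disease).
y_true = [0, 0, 0, 0, 0, 0, 0, 0, 1, 1]
y_pred = [0, 0, 0, 0, 0, 0, 0, 1, 1, 0]

tp = sum(t == p == 1 for t, p in zip(y_true, y_pred))
tn = sum(t == p == 0 for t, p in zip(y_true, y_pred))
fp = sum(t == 0 and p == 1 for t, p in zip(y_true, y_pred))
fn = sum(t == 1 and p == 0 for t, p in zip(y_true, y_pred))

accuracy = (tp + tn) / len(y_true)            # dominated by the majority class
precision = tp / (tp + fp)                    # of predicted positives, how many are real
recall = tp / (tp + fn)                       # of real positives, how many were found
f1 = 2 * precision * recall / (precision + recall)
print(accuracy, precision, recall, f1)        # 0.8 0.5 0.5 0.5
```

Accuracy looks decent (0.8) even though the model finds only half the positives, which is exactly why precision, recall, and F1 matter on imbalanced data.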

A Senior Data Scientist Associate was asked 3mo ago
Q. Write a program to count the most frequent integer in an array.
Ans. 

This program counts the most frequently occurring integer in an array, identifying the maximum repetitive integer efficiently.

  • Use a dictionary to store the count of each integer in the array. For example, for the array [1, 2, 2, 3], the counts would be {1: 1, 2: 2, 3: 1}.

  • Iterate through the array and update the count in the dictionary for each integer encountered.

  • After counting, find the integer with the maximum c...
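The dictionary-counting approach above is a one-liner with the standard library's `collections.Counter` (the helper name is illustrative):

```python
from collections import Counter

def most_frequent(nums):
    # Counter builds the {value: count} dictionary in one pass;
    # most_common(1) returns the (value, count) pair with the highest count.
    value, count = Counter(nums).most_common(1)[0]
    return value, count

print(most_frequent([1, 2, 2, 3]))  # (2, 2)
```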

A Data Engineer was asked 5mo ago
Q. How do you create a pipeline in Databricks?
Ans. 

To create a pipeline in Databricks, you can use Databricks Jobs or Apache Airflow for orchestration.

  • Use Databricks Jobs to create a pipeline by scheduling notebooks or Spark jobs.

  • Utilize Apache Airflow for more complex pipeline orchestration with dependencies and monitoring.

  • Leverage Databricks Delta for managing data pipelines with ACID transactions and versioning.
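The first bullet can be made concrete with a Jobs-style configuration. This is a hedged sketch: the field names follow the Databricks Jobs API multi-task job format, but the job name, notebook paths, cluster key, and schedule are invented for illustration.

```json
{
  "name": "nightly-etl",
  "tasks": [
    {
      "task_key": "ingest",
      "notebook_task": {"notebook_path": "/pipelines/ingest"},
      "job_cluster_key": "etl_cluster"
    },
    {
      "task_key": "transform",
      "depends_on": [{"task_key": "ingest"}],
      "notebook_task": {"notebook_path": "/pipelines/transform"},
      "job_cluster_key": "etl_cluster"
    }
  ],
  "schedule": {"quartz_cron_expression": "0 0 2 * * ?", "timezone_id": "UTC"}
}
```

The `depends_on` field is what turns a set of notebooks into a pipeline: `transform` runs only after `ingest` succeeds.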

A Data Engineer was asked 5mo ago
Q. Write SQL and PySpark queries to find all employees with the same salary in the same department.
Ans. 

Identify employees with the same salary within the same department using SQL and PySpark.

  • Use SQL's GROUP BY clause to group employees by department and salary.

  • Example SQL query: SELECT department, salary FROM employees GROUP BY department, salary HAVING COUNT(*) > 1;

  • In PySpark, use DataFrame operations to group by department and salary.

  • Example PySpark code: df.groupBy('department', 'salary').count().filter('cou...
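The GROUP BY/HAVING idea above only returns the duplicated (department, salary) pairs; to list the employees themselves you can join back to the base table. A runnable sketch using Python's built-in sqlite3 (table and data invented for illustration; the PySpark equivalent is the `groupBy(...).count().filter("count > 1")` shown in the answer, joined back to the original DataFrame):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE employees (name TEXT, department TEXT, salary INT)")
conn.executemany(
    "INSERT INTO employees VALUES (?, ?, ?)",
    [("Asha", "Sales", 50), ("Ben", "Sales", 50), ("Cara", "Sales", 70), ("Dev", "HR", 50)],
)
# Join each employee to the (department, salary) pairs that occur more than once.
rows = sorted(conn.execute("""
    SELECT e.name, e.department, e.salary
    FROM employees e
    JOIN (SELECT department, salary FROM employees
          GROUP BY department, salary HAVING COUNT(*) > 1) d
      ON e.department = d.department AND e.salary = d.salary
""").fetchall())
print(rows)  # [('Asha', 'Sales', 50), ('Ben', 'Sales', 50)]
```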

A Data Analyst was asked 7mo ago
Q. What are joins in SQL? Explain the different types of joins and their outputs.
Ans. 

Joins in SQL are used to combine rows from two or more tables based on a related column between them.

  • Types of joins include INNER JOIN, LEFT JOIN, RIGHT JOIN, and FULL JOIN

  • INNER JOIN returns rows when there is at least one match in both tables

  • LEFT JOIN returns all rows from the left table and the matched rows from the right table

  • RIGHT JOIN returns all rows from the right table and the matched rows from the left ta...
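The INNER vs. LEFT distinction above can be seen in a few lines with Python's built-in sqlite3 (tables invented for illustration; note that a RIGHT or FULL join can be emulated by swapping the tables in a LEFT join):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE dept (id INT, name TEXT);
    CREATE TABLE emp  (name TEXT, dept_id INT);
    INSERT INTO dept VALUES (1, 'Sales'), (2, 'HR');
    INSERT INTO emp  VALUES ('Asha', 1), ('Ben', 3);   -- Ben's dept doesn't exist
""")
inner = conn.execute(
    "SELECT emp.name, dept.name FROM emp JOIN dept ON emp.dept_id = dept.id").fetchall()
left = conn.execute(
    "SELECT emp.name, dept.name FROM emp LEFT JOIN dept ON emp.dept_id = dept.id").fetchall()
print(inner)  # [('Asha', 'Sales')]                  only the matching row
print(left)   # [('Asha', 'Sales'), ('Ben', None)]   unmatched left row padded with NULL
```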

A Data Engineer was asked 9mo ago
Q. What are the different transformations you have used?
Ans. 

I have used various transformations such as filtering, joining, aggregating, and pivoting in my data engineering projects.

  • Filtering data based on certain conditions

  • Joining multiple datasets together

  • Aggregating data to summarize information

  • Pivoting data from rows to columns or vice versa

A Data Engineer was asked 9mo ago
Q. What is dual mode in Power BI?
Ans. 

Dual mode in Power BI allows users to switch between DirectQuery and Import modes for data sources.

  • Dual mode allows users to combine the benefits of both DirectQuery and Import modes in Power BI.

  • Users can switch between DirectQuery and Import modes for different data sources within the same report.

  • DirectQuery mode connects directly to the data source for real-time data retrieval, while Import mode loads data into ...


Affine Interview Experiences

51 interviews found

I applied via Naukri.com and was interviewed in Feb 2022. There were 4 interview rounds.

Round 1 - Resume Shortlist 
Pro Tip by AmbitionBox:
Keep your resume crisp and to the point. A recruiter looks at your resume for an average of 6 seconds, make sure to leave the best impression.
Round 2 - Coding Test 

Test had a mix of questions on Statistics, Probability, Machine Learning, SQL and Python.

Round 3 - Technical 

(11 Questions)

  • Q1. How to retain special characters (that pandas discards by default) in the data while reading it?
  • Ans. 

    To retain special characters in pandas data, use encoding parameter while reading the data.

    • Use encoding parameter while reading the data in pandas

    • Specify the encoding type of the data file

    • Example: pd.read_csv('filename.csv', encoding='utf-8')

  • Answered by AI
  • Q2. How to read large .csv files in pandas quickly?
  • Ans. 

    Use pandas' read_csv() method with appropriate parameters to read large .csv files quickly.

    • Use the chunksize parameter to read the file in smaller chunks

    • Use the low_memory parameter to optimize memory usage

    • Use the dtype parameter to specify data types for columns

    • Use the usecols parameter to read only necessary columns

    • Use the skiprows parameter to skip unnecessary rows

    • Use the nrows parameter to read only a specific numb...

  • Answered by AI
  • Q3. How do you perform manipulations more quickly in pandas?
  • Ans. 

    Use vectorized operations, avoid loops, and optimize memory usage.

    • Use vectorized operations like apply(), map(), and applymap() instead of loops.

    • Avoid using iterrows() and itertuples() as they are slower than vectorized operations.

    • Optimize memory usage by using appropriate data types and dropping unnecessary columns.

    • Use inplace=True parameter to modify the DataFrame in place instead of creating a copy.

    • Use the pd.eval()...

  • Answered by AI
  • Q4. Explain generators and decorators in python
  • Ans. 

    Generators are functions that allow you to iterate over a sequence of values without creating the entire sequence in memory. Decorators are functions that modify the behavior of other functions.

    • Generators use the yield keyword to return values one at a time

    • Generators are memory efficient and can handle large datasets

    • Decorators are functions that take another function as input and return a modified version of that funct...

  • Answered by AI
  • Q5. You have a pandas dataframe with three columns, filled with state names, city names and arbitrary numbers respectively. How to retrieve top 2 cities per state. (top according to the max number in the third...
  • Q6. How does lookup happen in a list when you do my_list[5]?
  • Ans. 

    my_list[5] retrieves the 6th element of the list.

    • Indexing starts from 0 in Python.

    • The integer inside the square brackets is the index of the element to retrieve.

    • If the index is out of range, an IndexError is raised.

  • Answered by AI
  • Q7. How to create dictionaries in python with repeated keys?
  • Ans. 

    To create dictionaries in Python with repeated keys, use defaultdict from the collections module.

    • Import the collections module

    • Create a defaultdict object

    • Add key-value pairs to the dictionary using the same key multiple times

    • Access the values using the key

    • Example: from collections import defaultdict; d = defaultdict(list); d['key'].append('value1'); d['key'].append('value2')

  • Answered by AI
  • Q8. What is the purpose of lambda functions when regular functions (def) exist? How are they different?
  • Ans. 

    Lambda functions are anonymous functions used for short and simple operations. They are different from regular functions in their syntax and usage.

    • Lambda functions are defined without a name and keyword 'lambda' is used to define them.

    • They can take any number of arguments but can only have one expression.

    • They are commonly used in functional programming and as arguments to higher-order functions.

    • Lambda functions are oft...

  • Answered by AI
  • Q9. Merge vs join in pandas
  • Ans. 

    Merge and join are used to combine dataframes in pandas.

    • Merge is used to combine dataframes based on a common column or index.

    • Join is used to combine dataframes based on their index.

    • Merge can handle different column names, while join cannot.

    • Merge can handle different types of joins (inner, outer, left, right), while join only does inner join by default.

  • Answered by AI
  • Q10. What will the resultant table be when you "merge" two tables on a matching column, and the second table has many repeated keys?
  • Ans. 

    The resultant table will have all the columns from both tables and the rows will be a combination of matching rows.

    • The resultant table will have all the columns from both tables

    • The rows in the resultant table will be a combination of matching rows

    • If the second table has repeated keys, there will be multiple rows with the same key in the resultant table

  • Answered by AI
  • Q11. Some questions on spacy and NLP models and my project.
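The generator and decorator points from Q4 above can be sketched in a few lines (names are illustrative):

```python
import functools

def countdown(n):
    # A generator: yields values lazily instead of building the whole list in memory.
    while n > 0:
        yield n
        n -= 1

def logged(func):
    # A decorator: wraps a function and adds behavior around each call.
    @functools.wraps(func)
    def wrapper(*args, **kwargs):
        result = func(*args, **kwargs)
        print(f"{func.__name__}{args} -> {result}")
        return result
    return wrapper

@logged
def square(x):
    return x * x

print(list(countdown(3)))  # [3, 2, 1]
square(4)                  # logs the call, returns 16
```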
Round 4 - Technical 

(8 Questions)

  • Q1. Explain eigenvectors and eigenvalues. What purpose do they serve in ML?
  • Ans. 

    Eigenvalues and eigenvectors are linear algebra concepts used in machine learning for dimensionality reduction and feature extraction.

    • Eigenvalues represent the scaling factor of the eigenvectors.

    • Eigenvectors are the directions along which a linear transformation acts by stretching or compressing.

    • In machine learning, eigenvectors are used for principal component analysis (PCA) to reduce the dimensionality of data.

    • Eigenv...

  • Answered by AI
  • Q2. Explain PCA briefly. What can it be used for, and what can it not be used for?
  • Ans. 

    PCA is a dimensionality reduction technique used to transform high-dimensional data into a lower-dimensional space.

    • PCA can be used for feature extraction, data visualization, and noise reduction.

    • PCA cannot be used for causal inference or to handle missing data.

    • PCA assumes linear relationships between variables and may not work well with non-linear data.

    • PCA can be applied to various fields such as finance, image process...

  • Answered by AI
  • Q3. What is VIF and how is it calculated?
  • Ans. 

    VIF stands for Variance Inflation Factor, a measure of multicollinearity in regression analysis.

    • VIF is calculated for each predictor variable in a regression model.

    • It measures how much the variance of the estimated regression coefficient is increased due to multicollinearity.

    • A VIF of 1 indicates no multicollinearity, while a VIF greater than 1 indicates increasing levels of multicollinearity.

    • VIF is calculated as 1 / (1...

  • Answered by AI
  • Q4. What is AIC & BIC in linear regression?
  • Ans. 

    AIC & BIC are statistical measures used to evaluate the goodness of fit of a linear regression model.

    • AIC stands for Akaike Information Criterion and BIC stands for Bayesian Information Criterion.

    • Both AIC and BIC are used to compare different models and select the best one.

    • AIC penalizes complex models less severely than BIC.

    • Lower AIC/BIC values indicate a better fit of the model to the data.

    • AIC and BIC can be calculated...

  • Answered by AI
  • Q5. Do we minimize or maximize the loss in logistic regression?
  • Ans. 

    We minimize the loss in logistic regression.

    • The goal of logistic regression is to minimize the loss function.

    • The loss function measures the difference between predicted and actual values.

    • The optimization algorithm tries to find the values of coefficients that minimize the loss function.

    • Minimizing the loss function leads to better model performance.

    • Examples of loss functions used in logistic regression are cross-entropy...

  • Answered by AI
  • Q6. How does one vs rest work for logistic regression?
  • Ans. 

    One vs Rest is a technique used to extend binary classification to multi-class problems in logistic regression.

    • It involves training multiple binary classifiers, one for each class.

    • In each classifier, one class is treated as the positive class and the rest as negative.

    • The class with the highest probability is predicted as the final output.

    • It is also known as one vs all or one vs others.

    • Example: In a 3-class problem, we ...

  • Answered by AI
  • Q7. What is one vs one classification?
  • Ans. 

    One vs one classification is a binary classification method where multiple models are trained to classify each pair of classes.

    • It is used when there are more than two classes in the dataset.

    • It involves training multiple binary classifiers for each pair of classes.

    • The final prediction is made by combining the results of all the binary classifiers.

    • Example: In a dataset with 5 classes, 10 binary classifiers will be traine...

  • Answered by AI
  • Q8. How to find the number of white cars in a city? (the interviewer wanted my approach and gave me 5 minutes to come up with one)
  • Ans. 

    Estimate the number of white cars using surveys, traffic data, and image recognition techniques.

    • Conduct surveys: Ask residents about car colors in their neighborhoods.

    • Use traffic cameras: Analyze footage to count white cars during peak hours.

    • Leverage social media: Analyze posts or images of cars in the city.

    • Utilize machine learning: Train a model on images of cars to identify white ones.

    • Collaborate with local authoriti...

  • Answered by AI
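The eigenvalue/eigenvector idea from Q1 above can be demonstrated with power iteration, one simple way to find the dominant eigenpair (a sketch on an invented 2x2 matrix; in practice you would use `numpy.linalg.eig`, and PCA applies this to a covariance matrix):

```python
def mat_vec(A, v):
    return [sum(a * x for a, x in zip(row, v)) for row in A]

def power_iteration(A, steps=50):
    # Repeatedly applying A stretches v toward the dominant eigenvector.
    v = [1.0, 1.0]
    for _ in range(steps):
        w = mat_vec(A, v)
        norm = max(abs(x) for x in w)
        v = [x / norm for x in w]
    w = mat_vec(A, v)
    # Rayleigh quotient (v . Av) / (v . v) recovers the matching eigenvalue.
    eigenvalue = sum(a * b for a, b in zip(v, w)) / sum(x * x for x in v)
    return eigenvalue, v

A = [[2.0, 0.0], [0.0, 3.0]]          # eigenvalues 2 and 3 by construction
val, vec = power_iteration(A)
print(round(val, 3))                   # 3.0, the dominant eigenvalue
```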

Interview Preparation Tips

Interview preparation tips for other job seekers - for the most part, practical questions were asked. so, your experience would matter the most. hence prepare accordingly.

Skills evaluated in this interview

Interview experience
3
Average
Difficulty level
Moderate
Process Duration
Less than 2 weeks
Result
Not Selected

I applied via Naukri.com and was interviewed in Dec 2024. There was 1 interview round.

Round 1 - Technical 

(4 Questions)

  • Q1. Find all employees having the same salary in the same department (SQL and PySpark)
  • Ans. 

    Identify employees with the same salary within the same department using SQL and PySpark.

    • Use SQL's GROUP BY clause to group employees by department and salary.

    • Example SQL query: SELECT department, salary FROM employees GROUP BY department, salary HAVING COUNT(*) > 1;

    • In PySpark, use DataFrame operations to group by department and salary.

    • Example PySpark code: df.groupBy('department', 'salary').count().filter('count >...

  • Answered by AI
  • Q2. How do you create a pipeline in Databricks?
  • Ans. 

    To create a pipeline in Databricks, you can use Databricks Jobs or Apache Airflow for orchestration.

    • Use Databricks Jobs to create a pipeline by scheduling notebooks or Spark jobs.

    • Utilize Apache Airflow for more complex pipeline orchestration with dependencies and monitoring.

    • Leverage Databricks Delta for managing data pipelines with ACID transactions and versioning.

  • Answered by AI
  • Q3. Palindrome check; make the 2nd character of every word uppercase; SQL RANK and DENSE_RANK related questions; given 2 tables, country and city, calculate the total population in each continent by joining the...
  • Q4. String manipulation questions in Python
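The palindrome and second-character-uppercase tasks from Q3 can be sketched like this (function names are illustrative):

```python
def is_palindrome(s):
    # Normalize: keep only alphanumerics, lowercase, then compare to the reverse.
    s = "".join(c.lower() for c in s if c.isalnum())
    return s == s[::-1]

def upper_second_char(sentence):
    # Uppercase the 2nd character of every word; 1-letter words are left alone.
    return " ".join(
        w[0] + w[1].upper() + w[2:] if len(w) > 1 else w
        for w in sentence.split()
    )

print(is_palindrome("Madam"))                # True
print(upper_second_char("hello big world"))  # hEllo bIg wOrld
```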

Interview Preparation Tips

Interview preparation tips for other job seekers - Prepare well on PySpark.
Interview experience
5
Excellent
Difficulty level
Moderate
Process Duration
Less than 2 weeks
Result
Selected

I appeared for an interview in Sep 2024.

Round 1 - Coding Test 

The coding test covered several machine learning problem statements.

Round 2 - Technical 

(2 Questions)

  • Q1. Mention some metrics in ML and explain the tradeoffs between them
  • Ans. 

    ML metrics help evaluate model performance, each with trade-offs affecting accuracy, interpretability, and application.

    • Accuracy vs. Precision: High accuracy may come with low precision in imbalanced datasets. Example: Classifying rare diseases.

    • Recall vs. F1 Score: High recall may lower F1 score, impacting balance in precision and recall. Example: Fraud detection.

    • ROC-AUC vs. PR-AUC: ROC-AUC is sensitive to class imbalan...

  • Answered by AI
  • Q2. How does the XGBoost algorithm work? Explain every step.
  • Ans. 

    XGBoost is an efficient implementation of gradient boosting that optimizes performance and accuracy through ensemble learning.

    • 1. **Gradient Boosting Framework**: XGBoost builds models in a sequential manner, where each new model corrects errors made by the previous ones.

    • 2. **Decision Trees**: It primarily uses decision trees as base learners, where each tree is built to minimize the loss function.

    • 3. **Regularization**:...

  • Answered by AI
Round 3 - One-on-one 

(4 Questions)

  • Q1. Write a program to reverse an array of integers and strings?
  • Ans. 

    This program reverses an array containing both integers and strings.

    • Use a loop to iterate through the array from the last index to the first.

    • Create a new array to store the reversed elements.

    • Example: For input ['apple', 1, 'banana', 2], output should be [2, 'banana', 1, 'apple'].

  • Answered by AI
  • Q2. Write a program to count the most frequently repeated integer in an array
  • Ans. 

    This program counts the most frequently occurring integer in an array, identifying the maximum repetitive integer efficiently.

    • Use a dictionary to store the count of each integer in the array. For example, for the array [1, 2, 2, 3], the counts would be {1: 1, 2: 2, 3: 1}.

    • Iterate through the array and update the count in the dictionary for each integer encountered.

    • After counting, find the integer with the maximum count....

  • Answered by AI
  • Q3. Project discussion, and why specific algorithms were used
  • Q4. How does the linear programming algorithm work?
  • Ans. 

    Linear programming optimizes a linear objective function subject to linear constraints, finding the best outcome in a feasible region.

    • Linear programming involves maximizing or minimizing a linear objective function.

    • Constraints are linear inequalities that define the feasible region.

    • The feasible region is typically a convex polygon in multi-dimensional space.

    • The Simplex method is a popular algorithm used to solve linear...

  • Answered by AI
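The array-reversal approach from Q1 above can be sketched as follows (heterogeneous elements are no problem, since Python lists are untyped; slicing `items[::-1]` is the idiomatic shortcut):

```python
def reverse_array(items):
    # Walk the indices backwards and collect elements into a new list.
    out = []
    for i in range(len(items) - 1, -1, -1):
        out.append(items[i])
    return out

print(reverse_array(['apple', 1, 'banana', 2]))  # [2, 'banana', 1, 'apple']
```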
Interview experience
1
Bad
Difficulty level
Moderate
Process Duration
2-4 weeks
Result
No response

I appeared for an interview in Apr 2025, where I was asked the following questions.

  • Q1. Questions from SQL(40 Mins), Power BI (25 Mins) and Excel(15 Mins)
  • Q2. Joins Number of records in inner outer left right joins, SQL Query and foundational questions, DAX Writing for Running total sales

Interview Preparation Tips

Interview preparation tips for other job seekers - This was for 5+ years of experience; prepare accordingly. Focus more on the SQL part for such roles at this company. Make sure your device works reliably. HR was rude and did not reschedule my 2nd round even though the issue was on their end.

Data Engineer Interview Questions & Answers

user image Anonymous

posted on 24 Sep 2024

Interview experience
4
Good
Difficulty level
-
Process Duration
-
Result
-
Round 1 - Technical 

(2 Questions)

  • Q1. Describe your workflow of the last project
  • Ans. 

    Developed ETL pipeline to ingest, clean, and analyze customer data for personalized marketing campaigns

    • Gathered requirements from stakeholders to understand data sources and business objectives

    • Designed data model to store customer information and campaign performance metrics

    • Implemented ETL process using Python and Apache Spark to extract, transform, and load data

    • Performed data quality checks and created visualizations ...

  • Answered by AI
  • Q2. What are the different transformations you have used
  • Ans. 

    I have used various transformations such as filtering, joining, aggregating, and pivoting in my data engineering projects.

    • Filtering data based on certain conditions

    • Joining multiple datasets together

    • Aggregating data to summarize information

    • Pivoting data from rows to columns or vice versa

  • Answered by AI
Round 2 - One-on-one 

Skills evaluated in this interview

Data Analyst Interview Questions & Answers

user image Anonymous

posted on 18 Nov 2024

Interview experience
4
Good
Difficulty level
Moderate
Process Duration
Less than 2 weeks
Result
Not Selected

I applied via Naukri.com and was interviewed in May 2024. There was 1 interview round.

Round 1 - Technical 

(3 Questions)

  • Q1. Highlight the odd cells in Excel
  • Ans. 

    Use conditional formatting to highlight odd cells in Excel

    • Select the range of cells you want to highlight

    • Go to the 'Home' tab and click on 'Conditional Formatting'

    • Choose 'New Rule' and select 'Use a formula to determine which cells to format'

    • Enter the formula '=MOD(A1,2)=1' (assuming A1 is the top-left cell of your selected range)

    • Choose the formatting style you want for the odd cells

  • Answered by AI
  • Q2. What are joins in SQL? Explain the different joins and their outputs.
  • Q3. Write a query to give a running sum of salary

Interview Preparation Tips

Interview preparation tips for other job seekers - Keep your SQL strong and practice joins with problem solving questions

Skills evaluated in this interview

Power BI Developer Interview Questions & Answers

user image Shruti Yadav

posted on 21 Aug 2024

Interview experience
3
Average
Difficulty level
-
Process Duration
-
Result
-
Round 1 - Technical 

(2 Questions)

  • Q1. What types of data sets have you worked on?
  • Ans. 

    I have worked on various types of data sets including sales data, customer data, financial data, and social media data.

    • Sales data

    • Customer data

    • Financial data

    • Social media data

  • Answered by AI
  • Q2. Write DAX for delta sales growth
  • Ans. 

    Calculate delta sales growth using DAX formula

    • Use the following DAX formula: Delta Sales Growth = (SUM(Sales[SalesAmount]) - CALCULATE(SUM(Sales[SalesAmount]), PREVIOUSMONTH('Date'[DateKey]))) / CALCULATE(SUM(Sales[SalesAmount]), PREVIOUSMONTH('Date'[DateKey]))

    • Make sure to replace 'Sales[SalesAmount]' with the actual column name in your dataset

    • Ensure that 'Date'[DateKey]' is the date column in your dataset

  • Answered by AI

Data Analyst Interview Questions & Answers

user image Anonymous

posted on 21 Mar 2024

Interview experience
4
Good
Difficulty level
Moderate
Process Duration
Less than 2 weeks
Result
Not Selected

I applied via Referral and was interviewed in Feb 2024. There were 3 interview rounds.

Round 1 - Coding Test 

The first round was a combination of MCQs and SQL Coding test. It consisted of 23 MCQs on SQL, 10 MCQs on Power BI and 5 SQL Coding questions.

Round 2 - Technical 

(1 Question)

  • Q1. Simple questions on SQL were asked.
Round 3 - One-on-one 

(5 Questions)

  • Q1. What is pivot table in Excel?
  • Q2. What is 'Data Validation' in Excel?
  • Ans. 

    Data Validation in Excel ensures that data entered in a cell meets certain criteria or conditions.

    • Data Validation allows you to set rules for what can be entered in a cell, such as a range of values, a list of items, or a custom formula.

    • Examples of Data Validation include setting a drop-down list of options for a cell, restricting input to a certain number range, or ensuring dates are entered in a specific format.

    • Data ...

  • Answered by AI
  • Q3. What is the order of execution of an SQL query?
  • Ans. 

    The order of execution of an SQL query involves multiple steps to retrieve data from a database.

    • 1. Parsing: The SQL query is first parsed to check for syntax errors.

    • 2. Optimization: The query optimizer creates an execution plan to determine the most efficient way to retrieve data.

    • 3. Compilation: The optimized query is compiled into an executable form.

    • 4. Execution: The compiled query is executed by the database engine t...

  • Answered by AI
  • Q4. What is the difference between Tree Map and Heatmap in Tableau?
  • Ans. 

    Tree Map visualizes hierarchical data using nested rectangles, while Heatmap displays data values using color gradients.

    • Tree Map displays data hierarchically with nested rectangles, where the size and color represent different measures.

    • Heatmap visualizes data values using color gradients, with darker colors indicating higher values.

    • Tree Map is useful for showing hierarchical data structures, while Heatmap is effective ...

  • Answered by AI
  • Q5. What is the difference between 'Extract Data' and 'Live Connection' in Tableau?
  • Ans. 

    Extract Data saves a snapshot of data in Tableau workbook, while Live Connection directly connects to data source.

    • Extract Data creates a static copy of data in Tableau workbook, while Live Connection directly queries data source in real-time.

    • Extract Data is useful for working offline or with small datasets, while Live Connection is ideal for large datasets or when data is frequently updated.

    • Extract Data can improve per...

  • Answered by AI

Interview Preparation Tips

Topics to prepare for Affine Data Analyst interview:
  • SQL
  • Power Bi
  • Tableau
  • Excel

Skills evaluated in this interview

Data Engineer Interview Questions & Answers

user image do achieve

posted on 18 Sep 2024

Interview experience
4
Good
Difficulty level
-
Process Duration
-
Result
-
Round 1 - Technical 

(1 Question)

  • Q1. What is dual mode in Power BI
  • Ans. 

    Dual mode in Power BI allows users to switch between DirectQuery and Import modes for data sources.

    • Dual mode allows users to combine the benefits of both DirectQuery and Import modes in Power BI.

    • Users can switch between DirectQuery and Import modes for different data sources within the same report.

    • DirectQuery mode connects directly to the data source for real-time data retrieval, while Import mode loads data into Power...

  • Answered by AI

Skills evaluated in this interview

Interview experience
1
Bad
Difficulty level
-
Process Duration
-
Result
-
Round 1 - Technical 

(2 Questions)

  • Q1. I don't know why they rejected me
  • Q2. Whatever they asked me in the coding part, my output was correct and the logic was also correct

Interview Questions & Answers

user image Anonymous

posted on 10 Sep 2024

Interview experience
5
Excellent
Difficulty level
Moderate
Process Duration
2-4 weeks
Result
Selected

I applied via Company Website and was interviewed in Mar 2024. There were 3 interview rounds.

Round 1 - Technical 

(1 Question)

  • Q1. What is self-join
  • Ans. 

    Self-join is a SQL query that joins a table to itself.

    • Self-join is used when a table needs to be joined with itself to compare rows within the same table.

    • It is achieved by using table aliases to differentiate between the two instances of the same table.

    • Commonly used in hierarchical data structures or when comparing related records within the same table.

  • Answered by AI
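The table-alias point above is the whole trick. A runnable self-join sketch using Python's built-in sqlite3, with the classic employee/manager hierarchy (table and data invented for illustration):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE employees (id INT, name TEXT, manager_id INT);
    INSERT INTO employees VALUES (1, 'Asha', NULL), (2, 'Ben', 1), (3, 'Cara', 1);
""")
# The same table appears twice under two aliases: e (employee) and m (manager).
rows = sorted(conn.execute("""
    SELECT e.name AS employee, m.name AS manager
    FROM employees e
    JOIN employees m ON e.manager_id = m.id
""").fetchall())
print(rows)  # [('Ben', 'Asha'), ('Cara', 'Asha')]
```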
Round 2 - One-on-one 

(1 Question)

  • Q1. What is a stored procedure
  • Ans. 

    A stored procedure is a set of SQL statements that are stored in a database and can be called by other programs or scripts.

    • Stored procedures can improve performance by reducing network traffic and executing complex operations on the database server.

    • They can be used to encapsulate business logic and enforce security measures.

    • Example: CREATE PROCEDURE GetCustomerOrders AS SELECT * FROM Orders WHERE CustomerID = @Customer...

  • Answered by AI
Round 3 - One-on-one 

(1 Question)

  • Q1. What is difference between DROP and TRUNCATE statements?
  • Ans. 

    DROP deletes the table structure and data, while TRUNCATE deletes only the data.

    • DROP statement removes the table from the database, including all data and structure.

    • TRUNCATE statement removes all data from the table, but keeps the table structure intact.

    • Both DROP and TRUNCATE are DDL (Data Definition Language) commands; unlike DELETE (a DML command), TRUNCATE deallocates storage rather than logging row-by-row deletes, which is why it cannot be filtered with a WHERE clause.

  • Answered by AI

Interview Preparation Tips

Topics to prepare for Affine interview:
  • SQL
  • Python
  • ETL
  • Data Warehouse
  • Cloud
Interview preparation tips for other job seekers - Ask HR about the important or primary skills and prepare well for the discussion.

Skills evaluated in this interview


Affine Interview FAQs

How many rounds are there in Affine interview?
Affine interview process usually has 1-2 rounds. The most common rounds in the Affine interview process are Technical, Coding Test and One-on-one Round.
How to prepare for Affine interview?
Go through your CV in detail and study all the technologies mentioned in your CV. Prepare at least two technologies or languages in depth if you are appearing for a technical interview at Affine. The most common topics and skills that interviewers at Affine expect are SQL, Python, AWS, ETL and Machine Learning.
What are the top questions asked in Affine interview?

Some of the top questions asked at the Affine interview -

  1. you have a pandas dataframe with three columns, filled with state names, city n...read more
  2. I have two jars of 5 litres and 3 litres. How can I measure 4 litres? (Assume: ...read more
  3. How to retain special characters (that pandas discards by default) in the data ...read more
How long is the Affine interview process?

The duration of Affine interview process can vary, but typically it takes about less than 2 weeks to complete.


Overall Interview Experience Rating

3.6/5

based on 41 interview experiences

Difficulty level

Easy 23%
Moderate 64%
Hard 14%

Duration

Less than 2 weeks 78%
2-4 weeks 22%

