Altimetrik Data Science Intern Interview Questions and Answers

Updated 3 Aug 2022

Altimetrik Data Science Intern Interview Experiences

1 interview found

I applied via Campus Placement and was interviewed in Aug 2021. There were 6 interview rounds.

Round 1 - Resume Shortlist 
Pro Tip by AmbitionBox:
Keep your resume crisp and to the point. A recruiter looks at your resume for an average of 6 seconds, make sure to leave the best impression.
Round 2 - Aptitude Test 

The second round combined aptitude and coding. The aptitude section mostly consists of basic problems, along with some data science problems on bias, statistics, and probability.

Round 3 - Coding Test 

2 coding problems; the ones I got were easy and didn't take more than 15 minutes to solve both.

Round 4 - Technical 

(2 Questions)

  • Q1. A pretty hard technical interview, ranging from the formulae behind algorithms to math to algorithms, touching on all the basic data science questions that tend to be asked in a data science interview.
  • Q2. What is gradient descent, why does gradient descent follow tangent angles, and please explain and write down its formula.
  • Ans. 

    Gradient descent is an optimization algorithm used to minimize the cost function of a machine learning model.

    • Gradient descent is used to update the parameters of a model to minimize the cost function.

    • It follows the direction of steepest descent, which is the negative gradient of the cost function.

    • The learning rate determines the step size of the algorithm.

    • The formula for gradient descent is: theta = theta - alpha * (1/...

  • Answered by AI
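The truncated update rule above is the standard batch gradient descent step, theta ← theta − alpha · ∇J(theta). A minimal NumPy sketch for a linear regression (MSE) cost is below; the toy data and function names are illustrative, not from the interview:

```python
import numpy as np

def gradient_descent(X, y, alpha=0.1, iters=2000):
    """Minimal batch gradient descent for linear regression with MSE cost."""
    m, n = X.shape
    theta = np.zeros(n)
    for _ in range(iters):
        grad = (1 / m) * X.T @ (X @ theta - y)  # gradient of the MSE cost
        theta = theta - alpha * grad            # step along steepest descent
    return theta

# Toy data: bias column plus one feature, with y = 2 * x.
X = np.array([[1.0, 1.0], [1.0, 2.0], [1.0, 3.0]])
y = np.array([2.0, 4.0, 6.0])
theta = gradient_descent(X, y)  # converges toward [0.0, 2.0]
```

The learning rate `alpha` controls the step size; too large and the iteration diverges, too small and convergence is slow.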
Round 5 - One-on-one 

(2 Questions)

  • Q1. Managerial/technical round; asked some basic-level coding questions and data handling with lists, tuples, sets, and dicts.
  • Q2. Please write a dictionary and try to sort it.
  • Ans. 

    A dictionary sorted in ascending order based on keys.

    • Create a dictionary with key-value pairs

    • Use the sorted() function to sort the dictionary based on keys

    • Convert the sorted dictionary into a list of tuples

    • Use the dict() constructor to create a new dictionary from the sorted list of tuples

  • Answered by AI
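The steps in the answer above can be sketched in a few lines; the sample dictionary is hypothetical:

```python
# Hypothetical sample dictionary; any comparable keys work.
scores = {"banana": 3, "apple": 5, "cherry": 1}

# sorted() on the items returns a list of (key, value) tuples in order;
# dict() rebuilds a dictionary from it (insertion order is preserved).
by_key = dict(sorted(scores.items()))
by_value = dict(sorted(scores.items(), key=lambda kv: kv[1]))
```

`sorted()` sorts by key by default; the `key=` parameter switches to sorting by value.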
Round 6 - HR 

(6 Questions)

  • Q1. What is your family background?
  • Q2. Why should we hire you?
  • Q3. Where do you see yourself in 5 years?
  • Q4. Why are you looking for a change?
  • Q5. What are your strengths and weaknesses?
  • Q6. Tell me about yourself.

Interview Preparation Tips

Interview preparation tips for other job seekers - Go through the machine learning lectures of Andrew Ng on YouTube; you can easily pass the interview if you have a grip on Andrew Ng's lectures.

Skills evaluated in this interview

Interview questions from similar companies

I applied via Recruitment Consultant and was interviewed in Mar 2021. There was 1 interview round.

Interview Questionnaire 

1 Question

  • Q1. Explain about your projects

Interview Preparation Tips

Interview preparation tips for other job seekers - The interviewer was looking for data science experience in infrastructure, that is, building a solution for remedy tickets.

Data Scientist Interview Questions & Answers

LTIMindtree · Abhishek Srivastav

posted on 16 Mar 2015

Interview Questionnaire 

3 Questions

  • Q1. Code for parsing a triangle
  • Ans. 

    Code for parsing a triangle

    • Use a loop to iterate through each line of the triangle

    • Split each line into an array of numbers

    • Store the parsed numbers in a 2D array or a list of lists

  • Answered by AI
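Assuming the question means a "triangle of numbers" given as text (one row per line), the parsing steps above can be sketched as follows; the input format is an assumption:

```python
def parse_triangle(text):
    """Parse a triangle of numbers, one row per line, into a list of lists."""
    return [
        [int(tok) for tok in line.split()]  # split each line into numbers
        for line in text.strip().splitlines()
    ]

triangle = parse_triangle("1\n2 3\n4 5 6")  # -> [[1], [2, 3], [4, 5, 6]]
```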
  • Q2. ASCII values of the alphabet (both capital and small letters)
  • Ans. 

    The ASCII value is a numerical representation of a character. It includes both capital and small alphabets.

    • ASCII values range from 65 to 90 for capital letters A to Z.

    • ASCII values range from 97 to 122 for small letters a to z.

    • For example, the ASCII value of 'A' is 65 and the ASCII value of 'a' is 97.

  • Answered by AI
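In Python, `ord()` and `chr()` expose these values directly, so both alphabets can be rebuilt from the ranges quoted in the answer:

```python
# ord() gives a character's code point; chr() inverts it.
assert ord("A") == 65 and ord("Z") == 90
assert ord("a") == 97 and ord("z") == 122

# Build both alphabets from their ASCII ranges.
capitals = "".join(chr(c) for c in range(65, 91))   # 'A'..'Z'
smalls = "".join(chr(c) for c in range(97, 123))    # 'a'..'z'
```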
  • Q3. Would you like to go for higher studies?

Interview Preparation Tips

Round: Test
Experience: The first round was through eLitmus. If you want to be in the IT industry you should attempt it at least once; for core roles you can also try it. It is usually a tough exam, but if you are good at maths and aptitude you will crack it.
Tips: Focus more on the reasoning part; it is the most difficult. Practise reading and solving paragraphs (average level, Infosys level or less). If you need any kind of help, you can contact me via email or even call me. I would recommend everybody attempt this exam with at least one month of dedicated preparation.
Duration: 120 minutes
Total Questions: 60

Round: Coding round on their own platform
Experience: It was a little difficult to write code on an unfamiliar platform, but there was enough time to cope.
Tips: Practise writing as many programs as you can in C, C++, and Java, not on paper but on a compiler. For this exam you can select any of these three languages, and your technical interview will be based on your choice.

Round: Technical Interview
Experience: It's an easy round if you have hands-on programming experience.
Tips: Explore and explore.

Round: HR Interview
Experience: The most difficult round for me (I feel a little weak in English). But stay calm and be cheerful. I still don't know the exact answer to the question, but the conversation went on for about 20 minutes on that topic. He didn't seem satisfied with me. By the way, most people say to answer no; you can take your call according to the situation.
Tips: Stay calm. Know as much about the organisation as you can. Try to make your intro as interesting as possible with achievements, hobbies, etc. English plays the most important role here.

General Tips: Always have faith in yourself, and remember everything happens for a good reason.
Skill Tips: Don't go deep into OS or DBMS, but have a rough idea of all the topics.
Skills: C, C++, Data Structures, DBMS, OS
College Name: GANDHI INSTITUTE OF ENGINEERING AND TECHNOLOGY
Motivation: I wanted a job. :)
Funny Moments: A number of stories are related to this job.
One is that I already had an offer, so I booked my ticket home from Bangalore, but at the very last moment my father told me never to miss any chance and to go for it. I went, and the interview date was postponed for some reason. I got a mail at 10:30 pm saying I had to attend the interview the next morning at 8:30 am. I ran to get a printout of that mail. The venue was a 3-hour journey from my place, so I didn't sleep the whole night, because I knew that if I slept I would not be able to wake up; but I didn't study either, because that would have led to sleep. Without sleep or last-moment study, I made it.

Skills evaluated in this interview

I applied via Campus Placement and was interviewed before Jul 2020. There was 1 interview round.

Interview Questionnaire 

1 Question

  • Q1. Joined as a fresher; basic C program algorithms

Interview Preparation Tips

Interview preparation tips for other job seekers - Very easy to crack
Interview experience: 4 (Good) · Difficulty level: - · Process Duration: - · Result: -
Round 1 - Technical 

(1 Question)

  • Q1. Rate yourself in PL/SQL: materialized views, indexes, CTEs, and the MERGE statement
  • Ans. 

    I rate myself highly in PL/SQL with expertise in mview, index, CTE, and merge statement.

    • I have extensive knowledge and experience in writing PL/SQL code.

    • I am proficient in creating and managing materialized views (mview) to improve query performance.

    • I am skilled in creating and managing indexes to optimize database performance.

    • I am well-versed in using Common Table Expressions (CTE) for complex queries and recursive op...

  • Answered by AI
Round 2 - HR 

(1 Question)

  • Q1. Salary discussion

Interview Preparation Tips

Topics to prepare for Zensar Technologies Data Engineer interview:
  • materialized view
  • Indexing
  • ref cursor
Interview experience: 4 (Good) · Difficulty level: Moderate · Process Duration: 2-4 weeks · Result: Not Selected

I applied via Naukri.com and was interviewed in Sep 2024. There was 1 interview round.

Round 1 - Technical 

(14 Questions)

  • Q1. How to create a pipeline in ADF?
  • Ans. 

    To create a pipeline in ADF, you can use the Azure Data Factory UI or code-based approach.

    • Use Azure Data Factory UI to visually create and manage pipelines

    • Use code-based approach with JSON to define pipelines and activities

    • Add activities such as data movement, data transformation, and data processing to the pipeline

    • Set up triggers and schedules for the pipeline to run automatically

  • Answered by AI
  • Q2. Different types of activities in pipelines
  • Ans. 

    Activities in pipelines include data extraction, transformation, loading, and monitoring.

    • Data extraction: Retrieving data from various sources such as databases, APIs, and files.

    • Data transformation: Cleaning, filtering, and structuring data for analysis.

    • Data loading: Loading processed data into a data warehouse or database.

    • Monitoring: Tracking the performance and health of the pipeline to ensure data quality and reliability.

  • Answered by AI
  • Q3. What is the use of Get Metadata?
  • Ans. 

    getmetadata is used to retrieve metadata information about a dataset or data source.

    • getmetadata can provide information about the structure, format, and properties of the data.

    • It can be used to understand the data schema, column names, data types, and any constraints or relationships.

    • This information is helpful for data engineers to properly process, transform, and analyze the data.

    • For example, getmetadata can be used ...

  • Answered by AI
  • Q4. Different types of triggers
  • Ans. 

    Triggers in databases are special stored procedures that are automatically executed when certain events occur.

    • Types of triggers include: DML triggers (for INSERT, UPDATE, DELETE operations), DDL triggers (for CREATE, ALTER, DROP operations), and logon triggers.

    • Triggers can be classified as row-level triggers (executed once for each row affected by the triggering event) or statement-level triggers (executed once for eac...

  • Answered by AI
  • Q5. Difference between a normal cluster and a job cluster in Databricks
  • Ans. 

    Normal cluster is used for interactive workloads while job cluster is used for batch processing in Databricks.

    • Normal cluster is used for ad-hoc queries and exploratory data analysis.

    • Job cluster is used for running scheduled jobs and batch processing tasks.

    • Normal cluster is terminated after a period of inactivity, while job cluster is terminated after the job completes.

    • Normal cluster is more cost-effective for short-liv...

  • Answered by AI
  • Q6. What are slowly changing dimensions?
  • Ans. 

    Slowly changing dimensions refer to data warehouse dimensions that change slowly over time.

    • SCDs are used to track historical changes in data over time.

    • There are three types of SCDs - Type 1, Type 2, and Type 3.

    • Type 1 SCDs overwrite old data with new data, Type 2 creates new records for changes, and Type 3 maintains both old and new data in separate columns.

    • Example: A customer's address changing would be a Type 2 SCD.

    • Ex...

  • Answered by AI
  • Q7. Incremental load
  • Q8. Use of 'with' in Python
  • Ans. 

    Use Python's 'with' statement to ensure proper resource management and exception handling.

    • Use 'with' statement to automatically close files after use

    • Helps in managing resources like database connections

    • Ensures proper cleanup even in case of exceptions

  • Answered by AI
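The guarantees listed above are easy to demonstrate; the file name below is an arbitrary example:

```python
import os
import tempfile

# 'with' closes the file automatically, even if the body raises.
path = os.path.join(tempfile.gettempdir(), "with_demo.txt")
with open(path, "w") as f:
    f.write("hello")
assert f.closed  # closed on exiting the block, no explicit f.close() needed

with open(path) as f:
    content = f.read()

os.remove(path)  # clean up the temp file
```

The same pattern applies to any context manager (locks, database connections, `tempfile.TemporaryDirectory`, and so on).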
  • Q9. List vs tuple in python
  • Ans. 

    List is mutable, tuple is immutable in Python.

    • List can be modified after creation, tuple cannot be modified.

    • List uses square brackets [], tuple uses parentheses ().

    • Lists are used for collections of items that may need to be changed, tuples are used for fixed collections of items.

    • Example: list_example = [1, 2, 3], tuple_example = (4, 5, 6)

  • Answered by AI
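The mutability difference above can be shown in a couple of lines:

```python
list_example = [1, 2, 3]
tuple_example = (4, 5, 6)

list_example.append(4)            # lists are mutable

tuple_is_mutable = True
try:
    tuple_example[0] = 0          # tuples reject item assignment
except TypeError:
    tuple_is_mutable = False

d = {tuple_example: "ok"}         # tuples are hashable, so usable as dict keys
```

A practical consequence: tuples can serve as dictionary keys or set members, while lists cannot.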
  • Q10. Datalake 1 vs datalake2
  • Ans. 

    Datalake 1 and Datalake 2 are both storage systems for big data, but they may differ in terms of architecture, scalability, and use cases.

    • Datalake 1 may use a Hadoop-based architecture while Datalake 2 may use a cloud-based architecture like AWS S3 or Azure Data Lake Storage.

    • Datalake 1 may be more suitable for on-premise data storage and processing, while Datalake 2 may offer better scalability and flexibility for clou...

  • Answered by AI
  • Q11. How to read a file in databricks
  • Ans. 

    To read a file in Databricks, you can use the Databricks File System (DBFS) or Spark APIs.

    • Use dbutils.fs.ls('dbfs:/path/to/file') to list files in DBFS

    • Use spark.read.format('csv').load('dbfs:/path/to/file') to read a CSV file

    • Use spark.read.format('parquet').load('dbfs:/path/to/file') to read a Parquet file

  • Answered by AI
  • Q12. Star vs snowflake schema
  • Ans. 

    Star schema is denormalized with one central fact table surrounded by dimension tables, while snowflake schema is normalized with multiple related dimension tables.

    • Star schema is easier to understand and query due to denormalization.

    • Snowflake schema saves storage space by normalizing data.

    • Star schema is better for data warehousing and OLAP applications.

    • Snowflake schema is better for OLTP systems with complex relationsh

  • Answered by AI
  • Q13. Repartition vs coalesce
  • Ans. 

    repartition increases partitions while coalesce decreases partitions in Spark

    • repartition shuffles data and can be used for increasing partitions for parallelism

    • coalesce reduces partitions without shuffling data, useful for reducing overhead

    • repartition is more expensive than coalesce as it involves data movement

    • example: df.repartition(10) vs df.coalesce(5)

  • Answered by AI
  • Q14. Parquet file uses
  • Ans. 

    Parquet file format is a columnar storage format used for efficient data storage and processing.

    • Parquet files store data in a columnar format, which allows for efficient querying and processing of specific columns without reading the entire file.

    • It supports complex nested data structures like arrays and maps.

    • Parquet files are highly compressed, reducing storage space and improving query performance.

    • It is commonly used ...

  • Answered by AI

Skills evaluated in this interview

Interview experience: 5 (Excellent) · Difficulty level: Moderate · Process Duration: Less than 2 weeks · Result: Not Selected

I applied via Naukri.com and was interviewed in Oct 2024. There were 2 interview rounds.

Round 1 - One-on-one 

(2 Questions)

  • Q1. Azure Scenario based questions
  • Q2. Pyspark Coding based questions
Round 2 - One-on-one 

(2 Questions)

  • Q1. ADF, Databricks related question
  • Q2. Spark Performance problem and scenarios
  • Ans. 

    Spark performance problems can arise due to inefficient code, data skew, resource constraints, and improper configuration.

    • Inefficient code can lead to slow performance, such as using collect() on large datasets.

    • Data skew can cause uneven distribution of data across partitions, impacting processing time.

    • Resource constraints like insufficient memory or CPU can result in slow Spark jobs.

    • Improper configuration settings, su...

  • Answered by AI

Skills evaluated in this interview

Interview experience: 5 (Excellent) · Difficulty level: - · Process Duration: - · Result: -
Round 1 - Technical 

(2 Questions)

  • Q1. How can you improve query performance?
  • Ans. 

    Improving query performance by optimizing indexes, using proper data types, and minimizing data retrieval.

    • Optimize indexes on frequently queried columns

    • Use proper data types to reduce storage space and improve query speed

    • Minimize data retrieval by only selecting necessary columns

    • Avoid using SELECT * in queries

    • Use query execution plans to identify bottlenecks and optimize accordingly

  • Answered by AI
  • Q2. What is an SCD Type 2 table?
  • Ans. 

    SCD type2 table is used to track historical changes in data by creating new records for each change.

    • Contains current and historical data

    • New records are created for each change

    • Includes effective start and end dates for each record

    • Requires additional columns like surrogate keys and version numbers

    • Used for slowly changing dimensions in data warehousing

  • Answered by AI
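The answer's points (close the current row, append a new version, track effective dates and a current flag) can be sketched in plain Python; the table layout and column names below are illustrative, not a specific warehouse's schema:

```python
from datetime import date

# Hypothetical SCD Type 2 dimension: surrogate key, natural key,
# tracked attribute, effective dates, and a current-row flag.
dim_customer = [
    {"sk": 1, "customer_id": 42, "address": "Old St",
     "start_date": date(2020, 1, 1), "end_date": None, "is_current": True},
]

def apply_scd2(dim, customer_id, new_address, change_date):
    """Close the current row and append a new version (Type 2 change)."""
    for row in dim:
        if row["customer_id"] == customer_id and row["is_current"]:
            row["end_date"] = change_date   # expire the old version
            row["is_current"] = False
    dim.append({"sk": max(r["sk"] for r in dim) + 1,
                "customer_id": customer_id, "address": new_address,
                "start_date": change_date, "end_date": None,
                "is_current": True})

apply_scd2(dim_customer, 42, "New Ave", date(2023, 6, 1))
```

After the change, the dimension holds both versions: the old address with a closed date range, and the new address flagged as current.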

Data Engineer Interview Questions & Answers

Coforge · Ravikumar Kawale (RK)

posted on 24 Aug 2023

Interview experience: 4 (Good) · Difficulty level: Moderate · Process Duration: 2-4 weeks · Result: Not Selected

I applied via LinkedIn and was interviewed in Jul 2023. There were 3 interview rounds.

Round 1 - Resume Shortlist 
Round 2 - Technical 

(1 Question)

  • Q1. In the first round they ask questions about your current project, plus practical questions on the modules you worked on in your recent project. Sometimes they ask you to share your screen. Overall experie...
Round 3 - Coding Test 

They ask Python code and SQL queries.

Interview Preparation Tips

Interview preparation tips for other job seekers - First make a good CV so you get shortlisted; after that HR will connect with you, and the further process goes smoothly.
Interview experience: 1 (Bad) · Difficulty level: Moderate · Process Duration: 2-4 weeks · Result: Not Selected

I applied via Naukri.com and was interviewed in Apr 2023. There were 3 interview rounds.

Round 1 - Resume Shortlist 
Round 2 - Technical 

(2 Questions)

  • Q1. Coding question: find the indices of 2 numbers in a list whose total equals the target, without using a nested for loop. l = [2, 15, 5, 7], t = 9, output: [0, 3]
  • Ans. 

    Finding index of 2 numbers having total equal to target in a list without nested for loop.

    • Use dictionary to store the difference between target and each element of list.

    • Iterate through list and check if element is in dictionary.

    • Return the indices of the two elements that add up to target.

  • Answered by AI
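The dictionary approach described above is the classic one-pass "two sum"; using the interview's own example input:

```python
def two_sum(nums, target):
    """Return indices of two numbers summing to target, in one pass."""
    seen = {}  # value -> index of values already visited
    for i, x in enumerate(nums):
        if target - x in seen:         # complement already seen?
            return [seen[target - x], i]
        seen[x] = i
    return None

result = two_sum([2, 15, 5, 7], 9)  # -> [0, 3], since 2 + 7 == 9
```

This runs in O(n) time and O(n) space, versus O(n²) for the nested-loop version.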
  • Q2. What are random forest and KNN?
  • Ans. 

    Random forest and KNN are machine learning algorithms used for classification and regression tasks.

    • Random forest is an ensemble learning method that constructs multiple decision trees and combines their outputs to make a final prediction.

    • KNN (k-nearest neighbors) is a non-parametric algorithm that classifies new data points based on the majority class of their k-nearest neighbors in the training set.

    • Random forest is us...

  • Answered by AI
Round 3 - Technical 

(4 Questions)

  • Q1. Live coding on Python dictionaries
  • Q2. Find unique keys in 2 dictionaries
  • Ans. 

    To find unique keys in 2 dictionaries.

    • Create a set of keys for each dictionary

    • Use set operations to find the unique keys

    • Return the unique keys

  • Answered by AI
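Interpreting "unique keys" as keys that appear in exactly one of the two dictionaries, the set operations above reduce to a symmetric difference; the sample dicts are illustrative:

```python
d1 = {"a": 1, "b": 2, "c": 3}
d2 = {"b": 20, "d": 40}

# dict.keys() views support set operations directly;
# ^ is the symmetric difference: keys in exactly one dict.
unique_keys = d1.keys() ^ d2.keys()  # -> {"a", "c", "d"}
```

If the interviewer instead meant keys unique to one dict, `d1.keys() - d2.keys()` gives that side's difference.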
  • Q3. AWS EC2 model deployment procedure
  • Ans. 

    AWS EC2 model deployment involves creating an instance, installing necessary software, and deploying the model.

    • Create an EC2 instance with the desired specifications

    • Install necessary software and dependencies on the instance

    • Upload the model and any required data to the instance

    • Deploy the model using a web server or API

    • Monitor the instance and model performance for optimization

  • Answered by AI
  • Q4. Overloading concept in OOP
  • Ans. 

    Overloading is the ability to define multiple methods with the same name but different parameters.

    • Overloading allows for more flexibility in method naming and improves code readability.

    • Examples include defining multiple constructors for a class with different parameter lists or defining a method that can accept different data types as input.

    • Overloading is resolved at compile-time based on the number and types of argume...

  • Answered by AI
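Python has no compile-time overloading, but dispatch-by-argument-type can be sketched with the standard library's `functools.singledispatch`; the function name and return strings below are made up for illustration:

```python
from functools import singledispatch

@singledispatch
def describe(value):
    """Fallback implementation for unregistered types."""
    return "something else"

@describe.register
def _(value: int):          # selected when the first argument is an int
    return "an int"

@describe.register
def _(value: str):          # selected when the first argument is a str
    return "a string"
```

This mirrors overload resolution on the first parameter's type; languages like Java or C++ resolve overloads at compile time on the full parameter list instead.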

Interview Preparation Tips

Topics to prepare for Coforge Data Scientist interview:
  • Python programming
  • python coding
  • dictionary functions , set funct
  • ML, DL Algorithms
  • NLP , AWS
Interview preparation tips for other job seekers - Every interview had a panel of 2 to 4, and all were technical. All rounds are tough, as the panel is large and the interview always runs past its allotted time.

I completed 2 rounds, and for 2 weeks they have not arranged the HR round. Moreover, HR says my profile is on hold.

A very bad rating for the company's prolonged hiring process. It is sometimes irritating, as candidates like me prepare for and attend the interviews, which are during working hours; and after completing two rounds they don't even schedule the HR round, only saying that your profile is on hold.

Skills evaluated in this interview


Altimetrik Interview FAQs

How many rounds are there in Altimetrik Data Science Intern interview?
Altimetrik interview process usually has 6 rounds. The most common rounds in the Altimetrik interview process are Resume Shortlist, Aptitude Test and Coding Test.
What are the top questions asked in Altimetrik Data Science Intern interview?

Some of the top questions asked at the Altimetrik Data Science Intern interview -

  1. What is gradient descent, why does gradient descent follow tan angles and pleas...read more
  2. Please write a dictionary and try to sort ...read more
  3. Pretty hard technical interview from formulae behind algorithms to math to algo...read more



Altimetrik Data Science Intern Reviews and Ratings

Based on 1 review: 2.0/5 overall.

Ratings by category: Skill development 1.0 · Work-life balance 1.0 · Salary 4.0 · Job security 1.0 · Company culture 1.0 · Promotions 3.0 · Work satisfaction 3.0