Atlassian MIS Executive Interview Questions and Answers

Interview experience
4
Good
Difficulty level
-
Process Duration
-
Result
No response
Round 1 - Technical 

(5 Questions)

  • Q1. Write a lambda function in python
  • Ans. 

    A lambda function in Python is a small anonymous function defined using the lambda keyword.

    • Lambda functions can have any number of arguments, but can only have one expression.

    • Syntax: lambda arguments : expression

    • Example: lambda x, y : x + y

  • Answered by AI
  • Q2. dbutils in Databricks
  • Ans. 

    dbutils is a utility provided by Databricks for interacting with files and directories in the Databricks environment.

    • dbutils.fs.ls('/') - list files in root directory

    • dbutils.fs.cp('dbfs:/file.txt', 'file:/tmp/file.txt') - copy a file from DBFS to the local file system (the file:/ scheme targets local storage)

    • dbutils.fs.mkdirs('dbfs:/new_dir') - create a new directory in DBFS

  • Answered by AI
  • Q3. Persist and cache
  • Q4. What is commit in SQL
  • Ans. 

    A commit in SQL is a command that saves all the changes made in a transaction to the database.

    • A commit is used to make all the changes made in a transaction permanent.

    • Once a commit is issued, the changes cannot be rolled back.

    • It is important to use commit to ensure data integrity and consistency.

    • Example: COMMIT; - this command is used to commit the changes in a transaction.

  • Answered by AI
  • Q5. Rank and dense_rank
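Several of the round's answers (the lambda syntax, COMMIT, and rank vs. dense_rank) can be sketched in runnable Python. The `scores` table and its values are invented for illustration, and SQLite (3.25+, which supports window functions) stands in for a full RDBMS:

```python
import sqlite3

# Lambda: any number of arguments, exactly one expression.
add = lambda x, y: x + y
print(add(2, 3))  # 5

con = sqlite3.connect(":memory:")
cur = con.cursor()
cur.execute("CREATE TABLE scores (name TEXT, score INTEGER)")
cur.executemany("INSERT INTO scores VALUES (?, ?)",
                [("a", 90), ("b", 90), ("c", 80)])
con.commit()  # COMMIT: the inserted rows are now permanent

# RANK leaves gaps after ties; DENSE_RANK does not.
rows = sorted(cur.execute("""
    SELECT name,
           RANK()       OVER (ORDER BY score DESC) AS rnk,
           DENSE_RANK() OVER (ORDER BY score DESC) AS drnk
    FROM scores
""").fetchall())
print(rows)  # [('a', 1, 1), ('b', 1, 1), ('c', 3, 2)]
```

Note how the tie at 90 pushes RANK straight to 3 for the next row, while DENSE_RANK continues with 2.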

Interview Preparation Tips

Interview preparation tips for other job seekers - Read up on basic SQL, Python, and Spark questions.

Skills evaluated in this interview

I applied via Naukri.com and was interviewed in Jun 2021. There were 4 interview rounds.

Interview Questionnaire 

3 Questions

  • Q1. Joins and all basic SQL questions
  • Q2. OOP knowledge is an advantage but not necessary.
  • Ans. 

    Object-oriented programming (OOP) knowledge is an advantage but not necessary for a data engineer.

    • OOP concepts like inheritance, encapsulation, and polymorphism can be useful in designing data models.

    • OOP languages like Java and Python are commonly used in data engineering.

    • Understanding OOP can help with debugging and maintaining code.

    • However, OOP is not a requirement for data engineering; other programming paradigms work as well.

  • Answered by AI
  • Q3. ER relationships
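A minimal Python sketch of the OOP ideas mentioned above (inheritance, encapsulation, polymorphism) in a data-engineering flavour; the `Source` classes are hypothetical:

```python
class Source:
    def __init__(self, name):
        self._name = name          # encapsulation: "private" by convention
    def read(self):
        raise NotImplementedError

class CsvSource(Source):           # inheritance
    def read(self):
        return f"rows from {self._name}.csv"

class ApiSource(Source):           # inheritance
    def read(self):
        return f"records from the {self._name} API"

# Polymorphism: the same call behaves differently per subclass.
for src in (CsvSource("sales"), ApiSource("crm")):
    print(src.read())
```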

Interview Preparation Tips

Interview preparation tips for other job seekers - The process had four rounds: 1. Group discussion, 2. Technical round 1, 3. Technical round 2, 4. HR.
Interview experience
4
Good
Difficulty level
-
Process Duration
-
Result
-

I applied via Job Portal

Round 1 - Coding Test 

There is a test where you build a data pipeline.

Round 2 - Technical 

(1 Question)

  • Q1. Streaming use case with spark
  • Ans. 

    Spark can be used for real-time data processing in streaming use cases.

    • Spark Streaming allows for processing real-time data streams.

    • It can handle high-throughput and fault-tolerant processing.

    • Examples include real-time analytics, monitoring, and alerting.

  • Answered by AI
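Spark Streaming itself needs a cluster runtime, but the micro-batch idea it implements can be sketched in plain Python; the events and batch size below are invented for illustration:

```python
from collections import Counter

def micro_batches(events, batch_size):
    """Slice an event stream into small batches, the way Spark
    Streaming processes a stream as a sequence of micro-batches."""
    batch = []
    for e in events:
        batch.append(e)
        if len(batch) == batch_size:
            yield batch
            batch = []
    if batch:
        yield batch  # flush the final partial batch

events = ["click", "view", "click", "view", "view", "click"]
counts = Counter()
for batch in micro_batches(events, 2):   # process each batch as it "arrives"
    counts.update(batch)
print(counts)  # click: 3, view: 3
```

In real Spark the batches would be distributed across executors and the running counts kept in fault-tolerant state; here a single `Counter` stands in for that state.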

Skills evaluated in this interview

Data Engineer Interview Questions & Answers

AVASOFT - Harshavarthini Ganesh

posted on 4 Mar 2025

Interview experience
5
Excellent
Difficulty level
Moderate
Process Duration
4-6 weeks
Result
Selected

I appeared for an interview before Mar 2024.

Round 1 - Group Discussion 

The group discussion (GD) round lasted about 20 minutes. The topics were straightforward and easy to comprehend. The primary focus in the GD should be on English fluency: it is not about how content-rich or intellectually impressive your speech is, but about how fluently you communicate.

Round 2 - Technical 

(1 Question)

  • Q1. Most of the questions will be based on our resume.

Interview Preparation Tips

Interview preparation tips for other job seekers - Concentrate on effective communication and include only the skills and experiences you are truly familiar with in your resume. If you are targeting a specific domain, such as data science or mobility, emphasize that domain and highlight the relevant work you have completed.
Interview experience
4
Good
Difficulty level
Moderate
Process Duration
Less than 2 weeks
Result
Not Selected

I applied via Naukri.com and was interviewed in Dec 2024. There were 2 interview rounds.

Round 1 - Aptitude Test 

It was well designed

Round 2 - Technical 

(2 Questions)

  • Q1. Basic Questions about data warehousing
  • Q2. Dbt Scenarios

Interview Preparation Tips

Interview preparation tips for other job seekers - Prepare well for dbt and other ETL tools
Interview experience
5
Excellent
Difficulty level
-
Process Duration
-
Result
-
Round 1 - Technical 

(1 Question)

  • Q1. Explain surrogate key
  • Ans. 

    Surrogate key is a unique identifier used in databases to uniquely identify each record in a table.

    • Surrogate keys are typically generated by the system and have no business meaning.

    • They are used to simplify database operations and improve performance.

    • Example: Using an auto-incrementing integer column as a surrogate key in a table.

  • Answered by AI
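The auto-incrementing surrogate key from the example can be shown with SQLite; the `customers` table is hypothetical:

```python
import sqlite3

con = sqlite3.connect(":memory:")
cur = con.cursor()
# customer_id is a surrogate key: system-generated, no business meaning.
cur.execute("""
    CREATE TABLE customers (
        customer_id INTEGER PRIMARY KEY AUTOINCREMENT,
        email TEXT NOT NULL        -- natural/business attribute
    )
""")
cur.execute("INSERT INTO customers (email) VALUES ('a@example.com')")
cur.execute("INSERT INTO customers (email) VALUES ('b@example.com')")
ids = [r[0] for r in cur.execute(
    "SELECT customer_id FROM customers ORDER BY customer_id")]
print(ids)  # [1, 2]
```

The key is assigned by the database, so inserts never have to supply it, and joins against it stay stable even if business attributes like the email change.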
Interview experience
5
Excellent
Difficulty level
Easy
Process Duration
2-4 weeks
Result
-

I applied via Company Website and was interviewed in Oct 2024. There were 4 interview rounds.

Round 1 - Coding Test 

Basic Python, SQL, and Bash questions

Round 2 - One-on-one 

(4 Questions)

  • Q1. SQL questions with operations that include changing from string to array. Tip: stick to RDBMS-specific dialects (like Postgres); I used Spark SQL
  • Q2. Simple Python questions with a follow-up to optimise it
  • Q3. Bash script-based questions; they are pretty basic.
  • Q4. Data pipeline design and best practices.
  • Ans. 

    Data pipeline design involves creating a system to efficiently collect, process, and analyze data.

    • Understand the data sources and requirements before designing the pipeline.

    • Use tools like Apache Kafka, Apache NiFi, or AWS Glue for data ingestion and processing.

    • Implement data validation and error handling mechanisms to ensure data quality.

    • Consider scalability and performance optimization while designing the pipeline.

    • Doc...

  • Answered by AI
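The design points above can be sketched as a toy pipeline. The stages and the sample records are invented, and in a real system tools like Kafka, NiFi, or Glue would replace the in-memory list:

```python
def ingest():
    # Stand-in for a real source (a Kafka topic, an S3 bucket, ...).
    return [{"id": 1, "amount": "10.5"},
            {"id": 2, "amount": "oops"},   # bad record
            {"id": 3, "amount": "7.0"}]

def validate(records):
    """Data-quality stage: drop records whose amount is not numeric."""
    good = []
    for r in records:
        try:
            good.append({**r, "amount": float(r["amount"])})
        except ValueError:
            pass  # in a real pipeline: route to a dead-letter queue
    return good

def transform(records):
    return [{**r, "amount_cents": int(r["amount"] * 100)} for r in records]

result = transform(validate(ingest()))
print(result)
```

Keeping each stage a pure function over records makes the pipeline easy to test in isolation and to scale out later.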
Round 3 - One-on-one 

(3 Questions)

  • Q1. Easy-to-medium LeetCode-style question of moderate difficulty.
  • Q2. Simple Python-based question with optimisation.
  • Q3. Design specific questions based on Data pipelines.
Round 4 - Behavioral 

(3 Questions)

  • Q1. SQL-based question with moderate difficulty.
  • Q2. Python-based questions, with follow-up questions asking for some optimisations.
  • Q3. Bash-script based round.

Skills evaluated in this interview

Interview experience
5
Excellent
Difficulty level
-
Process Duration
-
Result
-
Round 1 - Technical 

(3 Questions)

  • Q1. What is big data
  • Ans. 

    Big data refers to large and complex data sets that are difficult to process using traditional data processing applications.

    • Big data involves large volumes of data

    • It includes data from various sources such as social media, sensors, and business transactions

    • Big data requires specialized tools and technologies for processing and analysis

  • Answered by AI
  • Q2. How spark works
  • Ans. 

    Spark is a distributed computing framework that processes big data in memory and is known for its speed and ease of use.

    • Spark is a distributed computing framework that can process data in memory for faster processing.

    • It uses Resilient Distributed Datasets (RDDs) for fault-tolerant distributed data processing.

    • Spark provides high-level APIs in Java, Scala, Python, and R for ease of use.

    • It supports various data sources li...

  • Answered by AI
  • Q3. Explain your application
  • Ans. 

    Our application is a data engineering platform that processes and analyzes large volumes of data to provide valuable insights.

    • Our application uses various data processing techniques such as ETL (Extract, Transform, Load) to clean and transform raw data into usable formats.

    • We utilize big data technologies like Hadoop, Spark, and Kafka to handle large datasets efficiently.

    • The application also includes machine learning al...

  • Answered by AI
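Spark itself needs a cluster, but the map/reduce pattern behind "how Spark works" can be sketched in plain Python; the input lines are invented:

```python
from collections import Counter
from functools import reduce

lines = ["spark processes big data", "big data needs big tools"]

# Map: each "partition" (here, a line) produces its own partial counts,
# the way Spark executors process partitions independently.
partials = [Counter(line.split()) for line in lines]

# Reduce: merge the partial results, as Spark does across executors.
totals = reduce(lambda a, b: a + b, partials)
print(totals["big"])  # 3
```

RDDs add fault tolerance on top of this pattern: each partition's result can be recomputed from its lineage if an executor is lost.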

Interview Preparation Tips

Interview preparation tips for other job seekers - Work on the basics and have a clear understanding of concepts. Accept what you don't know, and show the attitude to learn and work on the required skills.
Interview experience
4
Good
Difficulty level
-
Process Duration
-
Result
-
Round 1 - Coding Test 

Factorial coding questions and SQL coding questions using group by
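The two coding questions mentioned can be sketched together; the `sales` table and its rows are made up for illustration:

```python
import sqlite3

def factorial(n):
    """Iterative factorial; rejects negative input."""
    if n < 0:
        raise ValueError("n must be non-negative")
    result = 1
    for i in range(2, n + 1):
        result *= i
    return result

print(factorial(5))  # 120

con = sqlite3.connect(":memory:")
cur = con.cursor()
cur.execute("CREATE TABLE sales (region TEXT, amount INTEGER)")
cur.executemany("INSERT INTO sales VALUES (?, ?)",
                [("north", 10), ("south", 5), ("north", 20)])
rows = cur.execute(
    "SELECT region, SUM(amount) FROM sales GROUP BY region ORDER BY region"
).fetchall()
print(rows)  # [('north', 30), ('south', 5)]
```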

Round 2 - Technical 

(2 Questions)

  • Q1. Java questions on OOP concepts
  • Q2. SQL questions on having clause
Round 3 - HR 

(2 Questions)

  • Q1. Can you relocate
  • Q2. Brief info about Amdocs
  • Ans. 

    Amdocs is a software and services provider for communications, media, and entertainment industries.

    • Founded in 1982 in Israel

    • Headquartered in Chesterfield, Missouri

    • Provides customer experience solutions for telecom companies

    • Offers services such as billing, CRM, and data analytics

  • Answered by AI

Interview Preparation Tips

Interview preparation tips for other job seekers - Stick to your resume and final year project
Interview experience
3
Average
Difficulty level
-
Process Duration
-
Result
-
Round 1 - One-on-one 

(2 Questions)

  • Q1. Project discussion
  • Q2. SQL
