Altimetrik
I was approached by the company and interviewed in Apr 2024. There was 1 interview round.
The SQL and Python questions were very easy.
I applied via Naukri.com and was interviewed in Sep 2023. There were 4 interview rounds.
Case study related to semantic search
I applied via Referral and was interviewed before Jan 2023. There were 3 interview rounds.
General knowledge
English knowledge
Technical questions on Hive, Spark (Scala), and Azure
Basics of SQL and joins
I applied via Naukri.com and was interviewed in Dec 2024. There was 1 interview round.
Code in Python for checking palindromes, and SQL for sales data grouped by month.
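The palindrome part of that round can be sketched in a few lines of Python. The exact prompt wording is not given above, so this assumes the common case-insensitive variant that ignores punctuation; drop the normalization for a strict character-by-character check.

```python
def is_palindrome(s: str) -> bool:
    """Return True if `s` reads the same forwards and backwards.

    Lower-cases and strips non-alphanumeric characters first, a common
    interview follow-up requirement.
    """
    cleaned = "".join(ch.lower() for ch in s if ch.isalnum())
    return cleaned == cleaned[::-1]

print(is_palindrome("racecar"))                        # True
print(is_palindrome("A man, a plan, a canal: Panama"))  # True
print(is_palindrome("spark"))                           # False
```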
I applied via Campus Placement and was interviewed before Aug 2023. There were 4 interview rounds.
Generic aptitude test
Prompting is a technique used to encourage a person to continue speaking or to provide more information.
Verbal prompts: asking open-ended questions, using encouraging words
Non-verbal prompts: nodding, maintaining eye contact
Visual prompts: showing images or videos to stimulate conversation
Memory handling in conversational AI involves managing data storage and retrieval efficiently.
Use efficient data structures to store and retrieve information
Implement caching mechanisms to reduce memory usage
Optimize algorithms for processing large amounts of data
Consider using cloud-based storage solutions for scalability
Monitor memory usage and performance regularly
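One concrete way to bound memory in a conversational agent, per the points above, is a sliding-window buffer that keeps only the most recent turns. This is a minimal sketch; the class and method names are illustrative, not from the original answer.

```python
from collections import deque


class ConversationMemory:
    """Sliding-window chat memory: retains only the most recent
    `max_turns` exchanges, so memory use stays bounded."""

    def __init__(self, max_turns: int = 10):
        # deque with maxlen evicts the oldest turn automatically
        self.turns = deque(maxlen=max_turns)

    def add(self, user_msg: str, bot_msg: str) -> None:
        self.turns.append((user_msg, bot_msg))

    def context(self) -> str:
        """Render the retained history as a prompt-context string."""
        return "\n".join(f"User: {u}\nBot: {b}" for u, b in self.turns)


memory = ConversationMemory(max_turns=2)
memory.add("Hi", "Hello!")
memory.add("What's PySpark?", "A Python API for Spark.")
memory.add("Thanks", "You're welcome.")
print(len(memory.turns))  # 2: the oldest turn was evicted
```

Caching and cloud-backed stores, also mentioned above, would sit behind the same interface: the window holds hot context while older turns are summarized or offloaded.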
I applied via Recruitment Consultant and was interviewed before Jul 2023. There were 2 interview rounds.
Handling ADF pipelines involves designing, building, and monitoring data pipelines in Azure Data Factory.
Designing data pipelines using ADF UI or code
Building pipelines with activities like copy data, data flow, and custom activities
Monitoring pipeline runs and debugging issues
Optimizing pipeline performance and scheduling triggers
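For reference, a minimal sketch of what an ADF pipeline definition looks like in JSON, with a single Copy activity. The pipeline, activity, and dataset names here are placeholders, not from the original answer.

```json
{
  "name": "CopySalesPipeline",
  "properties": {
    "activities": [
      {
        "name": "CopySalesData",
        "type": "Copy",
        "inputs": [
          { "referenceName": "BlobSourceDataset", "type": "DatasetReference" }
        ],
        "outputs": [
          { "referenceName": "SqlSinkDataset", "type": "DatasetReference" }
        ],
        "typeProperties": {
          "source": { "type": "DelimitedTextSource" },
          "sink": { "type": "AzureSqlSink" }
        }
      }
    ]
  }
}
```

In practice this JSON is usually generated by the ADF UI and versioned in Git; monitoring and trigger scheduling are configured on top of it.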
I applied via LinkedIn and was interviewed in Jan 2024. There was 1 interview round.
PySpark is the Python API for Apache Spark, a powerful open-source distributed computing system.
PySpark is used for processing large datasets in parallel across a cluster of machines.
It provides high-level APIs in Python for Spark programming.
PySpark integrates seamlessly with other Python libraries such as Pandas and NumPy.
Example: using PySpark to run data analysis and machine learning tasks on big datasets.
PySpark SQL is a module in Apache Spark that provides a SQL interface for working with structured data.
PySpark SQL lets users run SQL queries against Spark DataFrames.
It offers a more concise and user-friendly way to interact with data than the lower-level Spark RDD API.
Users can leverage the power of SQL for data manipulation and analysis within the Spark ecosystem.
To merge two DataFrames with different schemas, use join operations or schema-alignment transformations.
Use join operations (inner, outer, left, or right) depending on the requirement.
Align the schemas via data transformation (e.g., adding missing columns) before a union.
Tools such as Apache Spark, Pandas, or SQL can merge DataFrames with different schemas.
PySpark Streaming is a scalable, fault-tolerant stream processing engine built on top of Apache Spark.
PySpark Streaming allows real-time processing of streaming data.
It provides high-level APIs in Python for building streaming applications.
It supports various data sources such as Kafka, Flume, and Kinesis.
It enables windowed computations and stateful processing over streaming data.
Designation | Salaries reported | Salary range
Senior Software Engineer | 1.2k salaries | ₹9.5 L/yr - ₹36 L/yr
Staff Engineer | 903 salaries | ₹11.1 L/yr - ₹41 L/yr
Senior Engineer | 692 salaries | ₹9 L/yr - ₹31 L/yr
Software Engineer | 322 salaries | ₹4.8 L/yr - ₹19 L/yr
Staff Software Engineer | 235 salaries | ₹10.4 L/yr - ₹37 L/yr