Pramata Knowledge Solutions
I applied via Naukri.com and was interviewed before Oct 2022. There were 4 interview rounds.
Questions based on contract law, plus an essay on a general or legal topic.
Our expectations from the company include high quality standards, continuous improvement, adherence to regulations, and customer satisfaction.
Maintaining high quality standards in all products and services
Implementing continuous improvement processes to enhance quality
Adhering to regulations and standards set by relevant authorities
Ensuring customer satisfaction through quality products and services
I applied via LinkedIn and was interviewed in Mar 2024. There were 2 interview rounds.
Testing lifecycle involves planning, designing, executing, and evaluating tests to ensure quality of a product.
1. Planning phase involves defining test objectives, scope, and resources.
2. Design phase includes creating test cases, test data, and test environment.
3. Execution phase involves running tests, recording results, and reporting defects.
4. Evaluation phase includes analyzing test results, identifying trends, and highlighting areas for improvement (a minimal example of the design and execution phases is sketched after this list).
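A minimal sketch of the design and execution phases, assuming plain Python with a pytest-style test function; the function under test (add) and its expected values are illustrative assumptions, not something from the interview.

```python
# Sketch of the design and execution phases of the testing lifecycle.
# The function under test (add) and its expected values are illustrative assumptions.

def add(a, b):
    """Toy function standing in for the product code under test."""
    return a + b

# Design phase: test cases written down as data (inputs and expected outputs).
TEST_CASES = [
    (2, 3, 5),
    (-1, 1, 0),
    (0, 0, 0),
]

def test_add():
    # Execution phase: run each case and check the outcome; a failed assertion
    # here is what would be logged as a defect in a real workflow.
    for a, b, expected in TEST_CASES:
        assert add(a, b) == expected, f"add({a}, {b}) should be {expected}"

if __name__ == "__main__":
    test_add()
    print("All designed test cases passed.")
```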
I was interviewed in Aug 2024.
I successfully implemented a new project management system to streamline workflow and improve team efficiency.
Implemented new project management system
Streamlined workflow
Improved team efficiency
I was approached by the company and interviewed before Jun 2021. There were 4 interview rounds.
The first round was a coding round comprising 4 questions: 1 SQL question and 3 programming questions. If you could run 2 of the 3 programs successfully, you qualified for the next round.
Spark is faster than MapReduce due to in-memory processing and DAG execution.
Spark uses DAG (Directed Acyclic Graph) execution, while MapReduce follows a rigid two-stage map-and-reduce model.
Spark performs in-memory processing while MapReduce writes to disk after each operation.
Spark has a more flexible programming model with support for multiple languages.
Spark has built-in libraries for machine learning, graph processing, and stream processing (a short sketch of the in-memory model follows this list).
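A hedged sketch of the point above, assuming PySpark is available locally; the DataFrame contents and column names are made up. It shows Spark building a lazy DAG of transformations and caching an intermediate result in memory so later actions reuse it, rather than writing to disk between stages as MapReduce would.

```python
# Sketch of Spark's lazy DAG and in-memory caching (assumes a local PySpark install).
from pyspark.sql import SparkSession

spark = SparkSession.builder.master("local[*]").appName("spark-vs-mapreduce").getOrCreate()

# Illustrative data; the column names are assumptions for this sketch.
df = spark.createDataFrame(
    [("alice", 10), ("bob", 20), ("alice", 5)],
    ["user", "amount"],
)

# Transformations are lazy: Spark only records them as a DAG here.
totals = df.groupBy("user").sum("amount")

# Cache the intermediate result in memory so both actions below reuse it.
totals.cache()

print(totals.count())  # first action materialises the DAG and fills the cache
totals.show()          # second action reads from memory, no recomputation

spark.stop()
```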
Spark optimization techniques improve performance and efficiency of Spark applications.
Partitioning data to reduce shuffling
Caching frequently used data
Using broadcast variables for small data
Tuning memory allocation and garbage collection
Using efficient data formats like Parquet
Avoiding unnecessary data shuffling
Using appropriate hardware configurations
Optimizing SQL queries with appropriate indexing and partitioning
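A hedged sketch applying a few of the techniques listed above (broadcasting the small side of a join, repartitioning on the aggregation key, Parquet output); the table contents, column names, and output path are assumptions for illustration.

```python
# Sketch of a few Spark optimizations; data, columns, and paths are illustrative.
from pyspark.sql import SparkSession
from pyspark.sql.functions import broadcast

spark = SparkSession.builder.master("local[*]").appName("spark-optimizations").getOrCreate()

orders = spark.createDataFrame(
    [(1, "IN", 100.0), (2, "US", 250.0), (3, "IN", 75.0)],
    ["order_id", "country", "amount"],
)
countries = spark.createDataFrame(
    [("IN", "India"), ("US", "United States")],
    ["country", "name"],
)

# Broadcast the small dimension table so the join avoids a full shuffle.
joined = orders.join(broadcast(countries), "country")

# Repartition by the aggregation key so rows for the same country are
# colocated before aggregation, avoiding unnecessary shuffling.
per_country = joined.repartition("country").groupBy("country").sum("amount")

# Write results in a columnar, compressed format (Parquet) to cut storage and I/O.
per_country.write.mode("overwrite").parquet("/tmp/per_country_totals")

spark.stop()
```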
Hive partitioning is dividing data into smaller, manageable parts while bucketing is dividing data into equal parts based on a hash function.
Partitioning is useful for filtering data based on a specific column
Bucketing is useful for evenly distributing data for faster querying
Partitioning can be done on one or more columns, while bucketing hashes the chosen column(s) into a fixed number of buckets
Partitioning creates separate directories for each partition value, while bucketing creates a fixed number of files within each partition (see the sketch after this list)
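A hedged sketch of the contrast using Spark's DataFrame table writer, which follows the same partitioned/bucketed layout as Hive; the table names, columns, and bucket count are assumptions for illustration.

```python
# Sketch contrasting partitioning and bucketing via Spark's table writer
# (Hive-style layout); table names, columns, and bucket count are illustrative.
from pyspark.sql import SparkSession

spark = SparkSession.builder.master("local[*]").appName("partition-vs-bucket").getOrCreate()

events = spark.createDataFrame(
    [("2024-01-01", 101, "click"), ("2024-01-02", 102, "view")],
    ["event_date", "user_id", "action"],
)

# Partitioning: one directory per event_date value, ideal for date filters.
events.write.mode("overwrite").partitionBy("event_date").saveAsTable("events_by_date")

# Bucketing: rows hashed on user_id into a fixed number of files (buckets),
# which spreads data evenly and speeds up joins and lookups on user_id.
(
    events.write.mode("overwrite")
    .bucketBy(8, "user_id")
    .sortBy("user_id")
    .saveAsTable("events_bucketed")
)

spark.stop()
```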
Hive optimization techniques improve query performance and reduce execution time.
Partitioning tables to reduce data scanned
Using bucketing to group data for faster querying
Using vectorization to process data in batches
Using indexing to speed up lookups
Using compression to reduce storage and I/O costs
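A hedged sketch of a few of these optimizations expressed through Spark SQL; the table, columns, and query are assumptions, and the hive.* settings shown are standard Hive options that only take effect when queries actually run against a Hive deployment.

```python
# Sketch of Hive-style optimizations via Spark SQL; names and settings are illustrative.
from pyspark.sql import SparkSession

spark = SparkSession.builder.master("local[*]").appName("hive-optimizations").getOrCreate()

# Standard Hive settings for vectorized execution and compressed output
# (effective only on an actual Hive deployment).
spark.sql("SET hive.vectorized.execution.enabled=true")
spark.sql("SET hive.exec.compress.output=true")

# Partitioned table stored as Parquet: columnar format plus partition pruning.
spark.sql("""
    CREATE TABLE IF NOT EXISTS sales (
        order_id  BIGINT,
        amount    DOUBLE,
        sale_date STRING
    )
    USING PARQUET
    PARTITIONED BY (sale_date)
""")

# Filtering on the partition column lets the engine scan only matching directories.
spark.sql("SELECT SUM(amount) AS total FROM sales WHERE sale_date = '2024-01-01'").show()

spark.stop()
```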
I applied via a Recruitment Consultant and was interviewed before Apr 2023. There were 2 interview rounds.
Asked to write Python code for a particular scenario.
I applied via a Recruitment Consultant and was interviewed in Dec 2024. There were 2 interview rounds.
SQL exam - MCQs and writing a query.
Aptitude test along with Python and SQL MCQ questions. SQL coding was also asked, which was very simple.
Salaries by designation:
Contract Analyst: 205 salaries
Data Associate: 124 salaries
Contracts Analyst: 110 salaries
Data Analyst: 21 salaries
Full Stack Developer: 16 salaries
Mu Sigma
Fractal Analytics
LatentView Analytics
Tiger Analytics