Datamatics Global Services
I applied via Approached by Company and was interviewed in Apr 2024. There was 1 interview round.
Create a list containing all Python data types.
Use the following data types: int, float, complex, str, list, tuple, dict, set, bool, bytes, bytearray, memoryview, None
Example: ['int', 'float', 'complex', 'str', 'list', 'tuple', 'dict', 'set', 'bool', 'bytes', 'bytearray', 'memoryview', 'None']
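The answer outlined above can be written directly in Python; a minimal sketch, using the type names as strings exactly as in the example:

```python
# List of common Python data type names, as asked in the interview.
# The type objects themselves (int, float, ...) could be stored instead.
python_types = ['int', 'float', 'complex', 'str', 'list', 'tuple',
                'dict', 'set', 'bool', 'bytes', 'bytearray',
                'memoryview', 'None']
print(len(python_types))  # 13 entries
```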
Extract a character from a string in a list of strings.
Iterate through the list of strings
Use indexing to extract the desired character from each string
Handle cases where the index is out of range
Return the extracted characters as a new list
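The four steps above can be sketched as a small function; the name `chars_at` and the example inputs are illustrative, not from the interview:

```python
def chars_at(strings, index):
    """Return the character at `index` from each string,
    skipping strings where the index is out of range."""
    result = []
    for s in strings:                       # iterate through the list of strings
        if -len(s) <= index < len(s):       # handle the out-of-range case
            result.append(s[index])         # extract via indexing
    return result                           # return a new list

print(chars_at(['apple', 'fig', 'kiwi'], 3))  # ['l', 'i'] -- 'fig' has no index 3
```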
Create a dictionary with Name and Age for 4 records in Python.
Use curly braces {} to create a dictionary.
Separate key-value pairs with a colon :
Separate each record with a comma ,
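The question's phrasing is ambiguous, so one common reading, a list of per-record dictionaries, is sketched below; the names and ages are made up for illustration:

```python
# Four records, each a dict with Name and Age keys.
records = [
    {'Name': 'Asha',  'Age': 29},
    {'Name': 'Ravi',  'Age': 34},
    {'Name': 'Meera', 'Age': 41},
    {'Name': 'John',  'Age': 25},
]

# Alternative reading: a single dict mapping each name to an age.
ages = {'Asha': 29, 'Ravi': 34, 'Meera': 41, 'John': 25}

print(len(records), len(ages))  # 4 4
```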
Function to check if a string is a palindrome.
Create a function that takes a string as input.
Reverse the string and compare it with the original string.
Return True if they are the same, False otherwise.
Example: 'racecar' is a palindrome.
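The reverse-and-compare approach described above is a one-liner with slicing:

```python
def is_palindrome(s: str) -> bool:
    """Return True if s reads the same forwards and backwards."""
    return s == s[::-1]  # s[::-1] is the reversed string

print(is_palindrome('racecar'))  # True
print(is_palindrome('python'))   # False
```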
Data normalization is the process of organizing data in a database efficiently, while data standardization is the process of ensuring consistency and uniformity in data.
Data normalization involves organizing data into tables and columns to reduce redundancy and improve data integrity.
Data standardization involves ensuring that data is consistent and uniform across the database.
Normalization helps in reducing data redun...
The probability of drawing 3 balls of the same color from a box containing 4 balls of each color (Red, Green, Blue).
Calculate the total number of ways to draw 3 balls out of 12 balls
Calculate the number of ways to draw 3 balls of the same color
Divide the number of favorable outcomes by the total number of outcomes to get the probability
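Following the steps above: there are C(12, 3) = 220 ways to draw 3 of the 12 balls, and for each of the 3 colors there are C(4, 3) = 4 same-color draws, so the probability is 12/220 = 3/55. A quick check with `math.comb`:

```python
from math import comb

total = comb(12, 3)         # any 3 of 12 balls -> 220
favorable = 3 * comb(4, 3)  # 3 colors, C(4,3) same-color draws each -> 12
probability = favorable / total
print(probability)          # 12/220 = 3/55, about 0.0545
```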
I applied via LinkedIn and was interviewed in Nov 2024. There was 1 interview round.
Apache Spark is a fast and general-purpose cluster computing system.
Apache Spark is an open-source distributed computing system that provides an interface for programming entire clusters with implicit data parallelism and fault tolerance.
It has a unified architecture that combines SQL, streaming, machine learning, and graph processing capabilities.
Spark architecture consists of a driver program that coordinates the exe...
I applied via Job Portal and was interviewed in Aug 2024. There were 3 interview rounds.
It's a mandatory test even for experienced people.
They asked me two string/array questions: one was to reverse a string without any pre-built function, and the second was a medium-difficulty question to print each number and its count at the next level of the tree.
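The first question (reversing a string without a pre-built reverse helper) can be sketched as below; the second question's wording is too ambiguous to reconstruct reliably, so only the first is shown:

```python
def reverse_string(s):
    """Reverse a string manually, without reversed() or slicing tricks."""
    out = ''
    i = len(s) - 1
    while i >= 0:       # walk from the last character to the first
        out += s[i]
        i -= 1
    return out

print(reverse_string('hello'))  # 'olleh'
```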
I manage my work by prioritizing tasks, setting goals, staying organized, and communicating effectively with team members.
Prioritize tasks based on deadlines and importance
Set clear goals and milestones to track progress
Stay organized with tools like project management software
Communicate effectively with team members to ensure alignment and collaboration
I applied via campus placement at KLS Institute of Management Education and Research, Belgaum and was interviewed in Jun 2024. There were 2 interview rounds.
Basic coding test, e.g., checking whether a number is prime.
posted on 19 Jul 2024
Hadoop architecture is a distributed computing framework for processing large data sets across clusters of computers.
Hadoop consists of HDFS (Hadoop Distributed File System) for storage and MapReduce for processing.
HDFS divides data into blocks and stores them across multiple nodes in a cluster.
MapReduce is a programming model for processing large data sets in parallel across a distributed cluster.
Hadoop also includes ...
Hadoop is a distributed storage system while Spark is a distributed processing engine.
Hadoop is primarily used for storing and processing large volumes of data in a distributed environment.
Spark is designed for fast data processing and can perform in-memory computations, making it faster than Hadoop for certain tasks.
Hadoop uses MapReduce for processing data, while Spark uses Resilient Distributed Datasets (RDDs) for f...
| Consultant | 798 salaries | ₹5.5 L/yr - ₹24.8 L/yr |
| Associate Consultant | 649 salaries | ₹3.6 L/yr - ₹15 L/yr |
| Executive | 602 salaries | ₹1.1 L/yr - ₹4.5 L/yr |
| Senior Executive | 327 salaries | ₹1.2 L/yr - ₹10 L/yr |
| Executive Accountant | 297 salaries | ₹1 L/yr - ₹4 L/yr |