I applied via Campus Placement
Big data refers to large volumes of structured and unstructured data that is too complex for traditional data processing applications.
Big data involves massive amounts of data that cannot be easily managed or analyzed using traditional methods
It includes structured data (like databases) and unstructured data (like social media posts)
Examples include analyzing customer behavior on e-commerce websites, processing sensor ...
Technologies related to big data include Hadoop, Spark, Kafka, and NoSQL databases.
Hadoop - Distributed storage and processing framework for big data
Spark - In-memory data processing engine for big data analytics (see the PySpark sketch after this list)
Kafka - Distributed streaming platform for handling real-time data feeds
NoSQL databases - Non-relational databases for storing and retrieving large volumes of data
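A minimal PySpark sketch of the kind of distributed aggregation these frameworks are used for, assuming a local Spark installation; the events.json file and its timestamp column are hypothetical.

```python
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("big-data-example").getOrCreate()

# Read semi-structured data and aggregate it in a distributed fashion.
events = spark.read.json("events.json")  # hypothetical input file
daily_counts = (
    events.groupBy(F.to_date("timestamp").alias("day"))
    .agg(F.count("*").alias("events"))
    .orderBy("day")
)
daily_counts.show()

spark.stop()
```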
Data warehousing is the process of collecting, storing, and managing data from various sources for analysis and reporting.
Data warehousing involves extracting data from multiple sources
Data is transformed and loaded into a central repository
Allows for complex queries and analysis to be performed on the data
Examples include data warehouses like Amazon Redshift and Google BigQuery; a minimal ETL sketch follows these points
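A minimal extract-transform-load sketch using pandas, with SQLite standing in for a warehouse such as Redshift or BigQuery; the source files, column names, and table name are hypothetical.

```python
import sqlite3
import pandas as pd

# Extract: pull data from two hypothetical source files.
orders = pd.read_csv("orders.csv")
customers = pd.read_json("customers.json")

# Transform: join and aggregate into an analysis-ready table.
report = (
    orders.merge(customers, on="customer_id")
          .groupby("region", as_index=False)["amount"].sum()
)

# Load: write the result into the central repository.
with sqlite3.connect("warehouse.db") as conn:
    report.to_sql("sales_by_region", conn, if_exists="replace", index=False)
```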
Cloud in big data refers to using cloud computing services to store, manage, and analyze large volumes of data.
Cloud computing allows for scalable and flexible storage of big data
It provides on-demand access to computing resources for processing big data
Examples include AWS, Google Cloud, and Microsoft Azure
Python is a versatile programming language used for various purposes including web development, data analysis, artificial intelligence, and automation.
Python is used for web development with frameworks like Django and Flask.
It is commonly used for data analysis and visualization with libraries like Pandas and Matplotlib (a short example follows this list).
Python is popular in artificial intelligence and machine learning projects with libraries like Tenso...
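A small pandas and Matplotlib sketch of the data-analysis use case mentioned above; sales.csv and its month and revenue columns are hypothetical.

```python
import pandas as pd
import matplotlib.pyplot as plt

# Load the hypothetical sales data and aggregate revenue by month.
df = pd.read_csv("sales.csv")
monthly = df.groupby("month", as_index=False)["revenue"].sum()

# Visualize the aggregated result.
monthly.plot(x="month", y="revenue", kind="bar", legend=False)
plt.ylabel("Revenue")
plt.tight_layout()
plt.show()
```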
Scala is a programming language that is used for building scalable and high-performance applications.
Scala is used for developing applications that require high performance and scalability.
It is often used in Big Data processing frameworks like Apache Spark.
Scala combines object-oriented and functional programming paradigms.
It is interoperable with Java, allowing developers to leverage existing Java libraries.
Scala is ...
SEO in writing refers to optimizing content to improve visibility on search engines.
SEO involves using relevant keywords in writing to improve search engine rankings
Creating high-quality content that is valuable to readers is key for SEO in writing
Optimizing meta tags, headings, and images can also improve SEO in writing
I am a passionate writer with a love for storytelling and a knack for creating engaging content.
Experienced in writing articles, blog posts, and creative fiction
Skilled in conducting research and interviewing subjects for in-depth pieces
Proficient in editing and proofreading to ensure high-quality work
Published works include a collection of short stories and a series of travel guides
I applied via Naukri.com and was interviewed in Feb 2023. There were 3 interview rounds.
I applied via Referral and was interviewed before May 2023. There were 3 interview rounds.
1 DSA question (Python, Java, C, C++, etc.), 1 SQL question
I applied via Campus Placement and was interviewed before Jun 2022. There were 2 interview rounds.
The coding questions were from basic DSA in Python and on top of that there were some SQL related questions as well.
I applied via Referral and was interviewed in May 2021. There were 3 interview rounds.
I applied via campus placement at Heritage Institute of Technology, Kolkata and was interviewed before Jul 2021. There were 3 interview rounds.
Online assessment with two DSA coding questions and MCQs on computer fundamentals (DBMS, OS, ML, SQL, Computer Architecture)
I applied via Naukri.com and was interviewed in Dec 2024. There was 1 interview round.
To create a pipeline in Databricks, you can use Databricks Jobs or Apache Airflow for orchestration.
Use Databricks Jobs to create a pipeline by scheduling notebooks or Spark jobs.
Utilize Apache Airflow for more complex pipeline orchestration with dependencies and monitoring (as sketched below).
Leverage Databricks Delta for managing data pipelines with ACID transactions and versioning.
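A hedged Airflow sketch of the orchestration approach mentioned above, using the Databricks provider's DatabricksSubmitRunOperator; the connection id, cluster settings, and notebook paths are placeholders.

```python
from datetime import datetime

from airflow import DAG
from airflow.providers.databricks.operators.databricks import (
    DatabricksSubmitRunOperator,
)

with DAG(
    dag_id="databricks_pipeline",
    start_date=datetime(2024, 1, 1),
    schedule_interval="@daily",
    catchup=False,
) as dag:
    # Submit an ingestion notebook as a one-time run on a new job cluster.
    ingest = DatabricksSubmitRunOperator(
        task_id="ingest",
        databricks_conn_id="databricks_default",  # placeholder connection
        json={
            "new_cluster": {
                "spark_version": "13.3.x-scala2.12",
                "node_type_id": "i3.xlarge",
                "num_workers": 2,
            },
            "notebook_task": {"notebook_path": "/pipelines/ingest"},
        },
    )

    # Run the transformation notebook on an existing cluster.
    transform = DatabricksSubmitRunOperator(
        task_id="transform",
        databricks_conn_id="databricks_default",
        json={
            "existing_cluster_id": "1234-567890-abcde123",  # placeholder
            "notebook_task": {"notebook_path": "/pipelines/transform"},
        },
    )

    # transform runs only after ingest succeeds.
    ingest >> transform
```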
posted on 17 Jan 2025
I applied via Naukri.com and was interviewed in Dec 2024. There were 2 interview rounds.
Reversing a linked list in Java using an iterative approach; the three-pointer idea is sketched below in Python.
Create three pointers: prev, current, and next.
Iterate through the list, updating pointers to reverse the links.
Return the new head of the reversed list.
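The answer above refers to Java; this is the same three-pointer logic as a minimal Python sketch, with a hypothetical singly linked list node class.

```python
class Node:
    def __init__(self, value, next=None):
        self.value = value
        self.next = next


def reverse(head):
    prev = None
    current = head
    while current is not None:
        nxt = current.next      # remember the rest of the list
        current.next = prev     # reverse the link
        prev = current          # advance prev
        current = nxt           # advance current
    return prev                 # new head of the reversed list
```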
Handle StaleElementReferenceException by re-locating the element before interacting with it.
Use try-catch block to catch StaleElementReferenceException
Re-locate the element using findElement method before interacting with it
Alternatively, use WebDriverWait with the staleness_of condition to wait for the old element to go stale before re-locating it (see the sketch below)
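A hedged Python Selenium sketch of the re-locate-and-retry pattern described above (the original answer refers to Java's findElement); the URL and locator are placeholders.

```python
from selenium import webdriver
from selenium.webdriver.common.by import By
from selenium.common.exceptions import StaleElementReferenceException

driver = webdriver.Chrome()
driver.get("https://example.com")  # placeholder URL

locator = (By.ID, "submit")  # hypothetical locator

try:
    driver.find_element(*locator).click()
except StaleElementReferenceException:
    # The DOM changed under us; re-locate the element and retry once.
    driver.find_element(*locator).click()

driver.quit()
```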
I have worked on various projects involving test automation, performance testing, and quality assurance processes.
Developed automated test scripts using Selenium WebDriver for web applications
Conducted performance testing using JMeter to identify bottlenecks and optimize system performance
Implemented quality assurance processes to ensure software meets requirements and standards
Collaborated with cross-functional teams ...
My strategy for designing an automation framework involves identifying key functionalities, selecting appropriate tools, creating reusable components, implementing robust error handling, and integrating with CI/CD pipelines; a sketch of a reusable component follows the points below.
Identify key functionalities to be automated based on priority and impact on testing.
Select appropriate tools and technologies based on the application under test and team expertise.
Create reusable ...
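A hedged sketch of one reusable building block of such a framework, a Selenium page object with an explicit wait; the class name and locators are hypothetical.

```python
from selenium.webdriver.common.by import By
from selenium.webdriver.support.ui import WebDriverWait
from selenium.webdriver.support import expected_conditions as EC


class LoginPage:
    """Reusable page object: locators and waits live in one place."""

    USERNAME = (By.ID, "username")
    PASSWORD = (By.ID, "password")
    SUBMIT = (By.CSS_SELECTOR, "button[type='submit']")

    def __init__(self, driver, timeout=10):
        self.driver = driver
        self.wait = WebDriverWait(driver, timeout)

    def login(self, user, password):
        # Wait for the form to render, then fill it in and submit.
        self.wait.until(EC.visibility_of_element_located(self.USERNAME)).send_keys(user)
        self.driver.find_element(*self.PASSWORD).send_keys(password)
        self.driver.find_element(*self.SUBMIT).click()
```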
Designation | Salaries reported | Salary range
Data Scientist | 43 | ₹5.5 L/yr - ₹20.7 L/yr
Associate Software Engineer | 39 | ₹4 L/yr - ₹15 L/yr
Senior Software Engineer | 34 | ₹6 L/yr - ₹21 L/yr
Associate Technical Specialist | 23 | ₹6.3 L/yr - ₹14.8 L/yr
Software Engineer | 20 | ₹4.8 L/yr - ₹13.5 L/yr
Fractal Analytics
Mu Sigma
Tiger Analytics
LatentView Analytics