Amdocs
Big data refers to large and complex data sets that are difficult to process using traditional data processing applications.
Big data involves large volumes of data
It includes data from various sources such as social media, sensors, and business transactions
Big data requires specialized tools and technologies for processing and analysis
Spark is a distributed computing framework that processes big data in memory and is known for its speed and ease of use.
Spark is a distributed computing framework that can process data in memory for faster processing.
It uses Resilient Distributed Datasets (RDDs) for fault-tolerant distributed data processing.
Spark provides high-level APIs in Java, Scala, Python, and R for ease of use.
It supports various data sources li...
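A minimal PySpark sketch of the RDD idea described above, assuming a local Spark installation (pip install pyspark); the input file name is a placeholder.

```python
# Minimal word count with the RDD API; assumes a local Spark install.
# "input.txt" is a placeholder path.
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("wordcount").master("local[*]").getOrCreate()

lines = spark.sparkContext.textFile("input.txt")          # RDD of lines
counts = (lines.flatMap(lambda line: line.split())        # split into words
               .map(lambda word: (word, 1))               # (word, 1) pairs
               .reduceByKey(lambda a, b: a + b))          # sum counts per word

for word, count in counts.take(10):                       # sample the result
    print(word, count)

spark.stop()
```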
Our application is a data engineering platform that processes and analyzes large volumes of data to provide valuable insights.
Our application uses various data processing techniques such as ETL (Extract, Transform, Load) to clean and transform raw data into usable formats.
We utilize big data technologies like Hadoop, Spark, and Kafka to handle large datasets efficiently.
The application also includes machine learning al...
Factorial coding questions and SQL coding questions using GROUP BY; a sketch of both follows.
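A rough sketch of the two question types, using plain Python for the factorial and an in-memory SQLite table for the GROUP BY; the orders table and its columns are made up for illustration.

```python
import sqlite3

def factorial(n: int) -> int:
    """Iterative factorial; rejects negative input."""
    if n < 0:
        raise ValueError("n must be non-negative")
    result = 1
    for i in range(2, n + 1):
        result *= i
    return result

print(factorial(5))  # 120

# GROUP BY against an in-memory SQLite table (table/columns are illustrative).
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE orders (customer TEXT, amount REAL)")
conn.executemany("INSERT INTO orders VALUES (?, ?)",
                 [("a", 10.0), ("a", 5.0), ("b", 7.5)])
query = ("SELECT customer, COUNT(*) AS n_orders, SUM(amount) AS total "
         "FROM orders GROUP BY customer")
for row in conn.execute(query):
    print(row)  # ('a', 2, 15.0) then ('b', 1, 7.5)
conn.close()
```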
Amdocs is a software and services provider for communications, media, and entertainment industries.
Founded in 1982 in Israel
Headquartered in Chesterfield, Missouri
Provides customer experience solutions for telecom companies
Offers services such as billing, CRM, and data analytics
I applied via Company Website and was interviewed in Oct 2024. There were 4 interview rounds.
Basic Python, SQL, and Bash questions
Data pipeline design involves creating a system to efficiently collect, process, and analyze data.
Understand the data sources and requirements before designing the pipeline.
Use tools like Apache Kafka, Apache NiFi, or AWS Glue for data ingestion and processing.
Implement data validation and error handling mechanisms to ensure data quality.
Consider scalability and performance optimization while designing the pipeline.
Doc...
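A minimal pure-Python sketch of the validation and error-handling idea from the answer above; the record schema (user_id, amount) is hypothetical, and a real pipeline would route failures to a dead-letter queue rather than just counting them.

```python
from typing import Iterable, Iterator, List

def validate(records: Iterable[dict], errors: List[dict]) -> Iterator[dict]:
    """Yield records that pass basic checks; collect the rest for inspection."""
    for rec in records:
        if "user_id" in rec and isinstance(rec.get("amount"), (int, float)):
            yield rec
        else:
            errors.append(rec)   # a real pipeline would dead-letter these

raw = [{"user_id": 1, "amount": 9.99}, {"user_id": 2}, {"amount": "bad"}]
bad: List[dict] = []
clean = list(validate(raw, bad))
print(clean)                             # only the first record survives
print(f"{len(bad)} records failed validation")
```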
I applied via LinkedIn and was interviewed in Jul 2024. There were 2 interview rounds.
HackerEarth problem on reversing a string and one more array question
Reverse a given string
Use built-ins like reversed() or slicing (s[::-1]) in Python
Iterate through the string in reverse order and append characters to a new string
Use a stack data structure to reverse the string (all three approaches are sketched below)
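The three approaches above, sketched in Python:

```python
def reverse_builtin(s: str) -> str:
    return s[::-1]                          # slicing; "".join(reversed(s)) also works

def reverse_iterate(s: str) -> str:
    out = []
    for i in range(len(s) - 1, -1, -1):     # walk the string back to front
        out.append(s[i])
    return "".join(out)

def reverse_stack(s: str) -> str:
    stack = list(s)                         # push every character
    out = []
    while stack:
        out.append(stack.pop())             # pop in reverse order
    return "".join(out)

assert reverse_builtin("amdocs") == reverse_iterate("amdocs") == reverse_stack("amdocs") == "scodma"
```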
I applied via Company Website and was interviewed in Feb 2024. There were 4 interview rounds.
Cyclic linked lists are linked lists where the last node points back to the first node, creating a loop.
Cyclic linked lists have no NULL pointers, making it difficult to determine the end of the list.
They can be used to efficiently represent circular data structures like a round-robin scheduling algorithm.
Detecting cycles in a linked list can be done using Floyd's cycle-finding algorithm.
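A small sketch of Floyd's tortoise-and-hare cycle detection on a hand-built cyclic list:

```python
class Node:
    def __init__(self, value):
        self.value = value
        self.next = None

def has_cycle(head: "Node") -> bool:
    slow = fast = head
    while fast and fast.next:
        slow = slow.next            # moves one step
        fast = fast.next.next       # moves two steps
        if slow is fast:            # pointers can only meet inside a loop
            return True
    return False

# Build a small cyclic list: 1 -> 2 -> 3 -> back to 1
a, b, c = Node(1), Node(2), Node(3)
a.next, b.next, c.next = b, c, a
print(has_cycle(a))   # True
```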
Real world problem: Predicting customer churn in a subscription-based service
Collect and analyze customer data such as usage patterns, demographics, and interactions
Use machine learning algorithms to identify factors leading to churn
Implement targeted retention strategies based on the analysis
Monitor and evaluate the effectiveness of the strategies over time
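A hedged sketch of that workflow using scikit-learn on a tiny synthetic dataset; the features (usage hours, support tickets, tenure) and labels are made up, so only the shape of the approach carries over.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
# Hypothetical features: monthly usage hours, support tickets, tenure in months.
X = rng.random((200, 3)) * np.array([100, 10, 36])
# Synthetic label: low usage combined with many tickets tends to mean churn.
y = ((X[:, 0] < 30) & (X[:, 1] > 5)).astype(int)

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
model = LogisticRegression(max_iter=1000).fit(X_train, y_train)
print("holdout accuracy:", model.score(X_test, y_test))
```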
A surrogate key is an identifier used in databases to uniquely identify each record in a table.
Surrogate keys are typically generated by the system and have no business meaning.
They are used to simplify database operations and improve performance.
Example: Using an auto-incrementing integer column as a surrogate key in a table.
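A small illustration of that example using SQLite's auto-incrementing integer primary key; the customers table and its columns are hypothetical.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("""
    CREATE TABLE customers (
        customer_sk INTEGER PRIMARY KEY AUTOINCREMENT,  -- surrogate key, no business meaning
        email       TEXT UNIQUE                         -- natural / business key
    )
""")
conn.execute("INSERT INTO customers (email) VALUES ('a@example.com')")
conn.execute("INSERT INTO customers (email) VALUES ('b@example.com')")
print(conn.execute("SELECT customer_sk, email FROM customers").fetchall())
# [(1, 'a@example.com'), (2, 'b@example.com')]
conn.close()
```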
I applied via Company Website and was interviewed before Aug 2023. There was 1 interview round.
I was interviewed before Aug 2023.
Data warehousing is the process of collecting, storing, and managing data from various sources for analysis and reporting.
Data warehousing involves extracting data from multiple sources
Data is transformed and loaded into a central repository
Data can be queried and analyzed for business intelligence purposes
Examples include data warehouses like Amazon Redshift, Snowflake, and Google BigQuery
I applied via Recruitment Consultant and was interviewed before Jun 2022. There were 3 interview rounds.
SQL queries, not too difficult, which I was asked to write in a Google document.
| Designation | Salaries | Salary range |
| --- | --- | --- |
| Software Developer | 7.6k | ₹5 L/yr - ₹16.8 L/yr |
| Software Engineer | 1.9k | ₹4 L/yr - ₹16 L/yr |
| Software Test Engineer | 1.7k | ₹3 L/yr - ₹12 L/yr |
| Functional Test Engineer | 1.2k | ₹4 L/yr - ₹12.3 L/yr |
| Associate Software Engineer | 1k | ₹3.2 L/yr - ₹12 L/yr |