I applied via campus placement at Guru Gobind Singh Indraprastha University (GGSIU) and was interviewed in Jul 2024. There were 2 interview rounds.
Essay writing and some basic English grammar questions.
Views on Narayan Murthy's statement about working 50-60 hours per week
I applied via Company Website and was interviewed before Mar 2023. There was 1 interview round.
I was interviewed before Jul 2022.
Windows repair-related questions, e.g. how to recover data
If the system does not power on, what will you do?
How to share a printer or provide printer access
I was interviewed before Apr 2022.
Lakshmikumaran & Sridharan interview questions for popular designations
I applied via Recruitment Consultant and was interviewed before Feb 2022. There were 2 interview rounds.
I applied via Campus Placement and was interviewed before May 2020. There was 1 interview round.
I applied via Naukri.com and was interviewed in Sep 2018. There were 5 interview rounds.
I applied via Company Website and was interviewed in Oct 2024. There were 4 interview rounds.
AWS services used include S3, Redshift, Glue, EMR, and Lambda in a scalable and cost-effective architecture.
AWS S3 for storing large amounts of data
AWS Redshift for data warehousing and analytics
AWS Glue for ETL processes
AWS EMR for big data processing
AWS Lambda for serverless computing
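As a rough illustration of how these services fit together, here is a minimal Glue/EMR-style PySpark sketch that reads raw events from S3 and writes curated Parquet back to S3 for Redshift Spectrum or Athena to query; the bucket, path, and column names are assumptions, not details from the interview.

```python
# Hedged sketch: a Glue/EMR-style PySpark job that reads raw JSON from S3,
# deduplicates, and writes partitioned Parquet back to a curated S3 zone.
# Bucket names and column names are illustrative assumptions.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("s3-etl-sketch").getOrCreate()

raw = spark.read.json("s3://example-raw-bucket/events/")        # landing zone
cleaned = (
    raw.dropDuplicates(["event_id"])                             # basic dedup
       .withColumn("event_date", F.to_date("event_ts"))          # derive partition key
)

# Curated zone: Parquet partitioned by date, queryable downstream
cleaned.write.mode("overwrite").partitionBy("event_date") \
       .parquet("s3://example-curated-bucket/events/")
```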
Developed a real-time data processing pipeline for analyzing customer behavior
Designed and implemented data ingestion process using Apache Kafka
Utilized Apache Spark for data processing and analysis
Built data models and visualizations using tools like Tableau
Implemented machine learning algorithms for predictive analytics
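A hedged sketch of the Kafka ingestion step described above, using Spark Structured Streaming; the broker address, topic name, and message schema are illustrative assumptions rather than details from the project.

```python
# Hedged sketch: Spark Structured Streaming reading customer events from Kafka
# and computing a simple windowed aggregate for downstream analysis.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F
from pyspark.sql.types import StructType, StructField, StringType, TimestampType

schema = StructType([
    StructField("user_id", StringType()),
    StructField("action", StringType()),
    StructField("event_ts", TimestampType()),
])

spark = SparkSession.builder.appName("kafka-ingest-sketch").getOrCreate()

events = (
    spark.readStream.format("kafka")
         .option("kafka.bootstrap.servers", "broker:9092")   # assumed broker
         .option("subscribe", "customer-events")              # assumed topic
         .load()
         .select(F.from_json(F.col("value").cast("string"), schema).alias("e"))
         .select("e.*")
)

# Count actions per 5-minute window, the kind of aggregate a dashboard might use
counts = events.groupBy(F.window("event_ts", "5 minutes"), "action").count()

query = counts.writeStream.outputMode("update").format("console").start()
query.awaitTermination()
```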
Spark submit command is used to submit Spark applications to a cluster
Used to launch Spark applications on a cluster
Requires specifying the application JAR file, main class, and any arguments
Can set various configurations like memory allocation, number of executors, etc.
Example: spark-submit --class com.example.Main --master yarn --deploy-mode cluster myApp.jar arg1 arg2
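The resource settings mentioned above (memory allocation, number of executors) can also be supplied programmatically when building the session instead of as spark-submit flags; a minimal sketch with illustrative values:

```python
# Hedged sketch: the resource settings spark-submit accepts as flags
# (--executor-memory, --num-executors, --executor-cores) expressed as configs.
# The values below are illustrative, not recommendations.
from pyspark.sql import SparkSession

spark = (
    SparkSession.builder
    .appName("config-sketch")
    .config("spark.executor.memory", "8g")       # ~ --executor-memory 8g
    .config("spark.executor.instances", "10")    # ~ --num-executors 10
    .config("spark.executor.cores", "4")         # ~ --executor-cores 4
    .getOrCreate()
)
```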
To configure a cluster for 100 TB data, consider factors like storage capacity, processing power, network bandwidth, and fault tolerance.
Choose a distributed storage system like HDFS or Amazon S3 for scalability and fault tolerance.
Select high-capacity servers with sufficient RAM and CPU for processing large volumes of data.
Ensure high-speed network connections between nodes to facilitate data transfer.
Implement data replication across nodes for fault tolerance
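To make the sizing concrete, a back-of-envelope calculation under assumed values (3x replication, 30% headroom, 16 TB of usable disk per worker node):

```python
# Back-of-envelope cluster sizing for ~100 TB of logical data.
# Replication factor, headroom, and per-node disk are assumptions.
logical_data_tb = 100
replication_factor = 3            # typical HDFS default
headroom = 0.30                   # spare capacity for shuffle/temp/growth
disk_per_node_tb = 16             # usable disk per worker node

raw_storage_tb = logical_data_tb * replication_factor
required_tb = raw_storage_tb * (1 + headroom)
nodes = -(-required_tb // disk_per_node_tb)   # ceiling division

print(f"Raw storage needed: {raw_storage_tb} TB")
print(f"With headroom:      {required_tb:.0f} TB")
print(f"Worker nodes (>= {disk_per_node_tb} TB disk each): {nodes:.0f}")
```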
Our current project architecture involves a microservices-based approach with data pipelines for real-time processing.
Utilizing microservices architecture for scalability and flexibility
Implementing data pipelines for real-time processing of large volumes of data
Leveraging cloud services such as AWS or Azure for infrastructure
Using technologies like Apache Kafka for streaming data
Ensuring data quality and reliability
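One way the data-quality point might be enforced before a batch is published, sketched with assumed column names and an arbitrary 1% failure threshold:

```python
# Hedged sketch: a simple data-quality gate run before publishing a batch.
# Path, column names, and the 1% threshold are illustrative assumptions.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("dq-check-sketch").getOrCreate()
df = spark.read.parquet("s3://example-curated-bucket/events/")

total = df.count()
null_ids = df.filter(F.col("user_id").isNull()).count()
dupes = total - df.dropDuplicates(["event_id"]).count()

# Fail fast if more than 1% of rows are unusable
if total == 0 or (null_ids + dupes) / total > 0.01:
    raise ValueError(f"DQ gate failed: {null_ids} null ids, {dupes} duplicates of {total} rows")
print("DQ gate passed")
```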
Use a SQL query with a subquery to find the 2nd most ordered item in a category.
Use a subquery to rank items within each category based on the number of orders
Select the item with rank 2 within each category
Order the results by category and rank to get the 2nd most ordered item in each category
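A sketch of such a query using DENSE_RANK inside a subquery, executed here via spark.sql; the orders table and its columns are assumptions, and the same SQL works on most engines:

```python
# Hedged sketch: 2nd most ordered item per category via a ranked subquery.
# Table and column names are illustrative.
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("second-most-ordered").getOrCreate()

result = spark.sql("""
    SELECT category, item, total_orders
    FROM (
        SELECT category,
               item,
               COUNT(*) AS total_orders,
               DENSE_RANK() OVER (PARTITION BY category
                                  ORDER BY COUNT(*) DESC) AS rnk
        FROM orders
        GROUP BY category, item
    ) ranked
    WHERE rnk = 2
    ORDER BY category
""")
result.show()
```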
Designation | Salaries reported | Salary range
Associate | 68 | ₹7 L/yr - ₹13.2 L/yr
Senior Associate | 39 | ₹10.5 L/yr - ₹19 L/yr
Patent Analyst | 36 | ₹3.5 L/yr - ₹9.2 L/yr
Executive | 30 | ₹2.1 L/yr - ₹4.1 L/yr
Senior Patent Analyst | 23 | ₹5 L/yr - ₹10.2 L/yr
Khaitan & Co
Cyril Amarchand Mangaldas
Shardul Amarchand Mangaldas
Trilegal