JavaScript array methods are used to manipulate arrays.
Some common array methods include push(), pop(), shift(), unshift(), splice(), slice(), concat(), and join().
Example: let arr = ['apple', 'banana', 'orange']; arr.push('grape'); // ['apple', 'banana', 'orange', 'grape']
Example: let arr = ['apple', 'banana', 'orange']; arr.pop(); // ['apple', 'banana']
I have experience using various sourcing methods such as job boards, social media, employee referrals, and networking events.
Utilized job boards like Indeed and LinkedIn to find qualified candidates
Leveraged social media platforms such as Facebook and Twitter for sourcing
Encouraged employee referrals to tap into existing networks
Attended networking events to connect with potential candidates
Keywords and Boolean search techniques are essential for sourcing candidates for specific roles in IT recruitment.
Use quotation marks for exact phrases (e.g. "software engineer")
Utilize Boolean operators like AND, OR, NOT to refine search results
Include relevant technical skills and certifications as keywords (e.g. Java, AWS, CISSP)
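The tips above can be sketched as a small helper that assembles a Boolean search string; the function name and parameters are illustrative, not part of any job-board API.

```python
def boolean_query(role, skills, exclude=None):
    """Build a Boolean search string for a job-board keyword search."""
    parts = [f'"{role}"']                              # exact phrase in quotes
    if skills:
        parts.append("(" + " OR ".join(skills) + ")")  # OR broadens the match
    for term in exclude or []:
        parts.append(f"NOT {term}")                    # NOT filters out noise
    return " AND ".join(parts)

print(boolean_query("software engineer", ["Java", "AWS"], exclude=["intern"]))
# "software engineer" AND (Java OR AWS) AND NOT intern
```

The same string can be pasted into LinkedIn or Indeed keyword search boxes that support Boolean operators.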
REST API principles are a set of guidelines for designing APIs that adhere to the principles of REST.
Use HTTP methods (GET, POST, PUT, DELETE) to perform CRUD operations
Use resource URIs to represent entities
Statelessness - each request from a client must contain all the information necessary to process the request
Use hypermedia links to allow clients to navigate the API dynamically
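A minimal sketch of the method-to-CRUD mapping, using an in-memory dictionary instead of a real framework; `handle` and `store` are illustrative names. Note the statelessness principle: each call carries everything needed to process it.

```python
# In-memory resource store standing in for a database.
store = {}

def handle(method, uri, body=None):
    """Map HTTP methods to CRUD operations on a resource URI."""
    resource_id = uri.rstrip("/").split("/")[-1]
    if method == "POST":                   # Create
        store[resource_id] = body
        return 201, body
    if method == "GET":                    # Read
        return (200, store[resource_id]) if resource_id in store else (404, None)
    if method == "PUT":                    # Update (idempotent)
        store[resource_id] = body
        return 200, body
    if method == "DELETE":                 # Delete
        store.pop(resource_id, None)
        return 204, None

print(handle("POST", "/users/42", {"name": "Ana"}))  # (201, {'name': 'Ana'})
print(handle("GET", "/users/42"))                    # (200, {'name': 'Ana'})
```

A real service would add hypermedia links (e.g. a `links` field in each response body) so clients can discover related resources.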
Elicitation techniques involve gathering requirements, stakeholder management ensures stakeholder needs are met, and change management handles changes effectively.
Elicitation techniques include interviews, surveys, observations, and workshops to gather requirements.
Stakeholder management involves identifying stakeholders, understanding their needs, and communicating with them effectively.
Change management ensures that changes to requirements are assessed, approved, and implemented in a controlled manner.
Dataflow and Dataproc are both processing services in GCP, but with different approaches and use cases.
Dataflow is a fully managed service for executing batch and streaming data processing pipelines.
Dataproc is a managed Spark and Hadoop service for running big data processing and analytics workloads.
Dataflow provides a serverless and auto-scaling environment, while Dataproc offers more control and flexibility.
Dat...
Optimization techniques in Spark, SQL, BigQuery, and Airflow.
Use partitioning and bucketing in Spark to optimize data processing.
Optimize SQL queries by using indexes, query rewriting, and query optimization techniques.
In BigQuery, use partitioning and clustering to improve query performance.
Leverage Airflow's task parallelism and resource allocation to optimize workflow execution.
Spark is a distributed processing engine, Airflow is a workflow management system, and BigQuery is a fully managed data warehouse.
Spark is designed for big data processing and provides in-memory computation capabilities.
Airflow is used for orchestrating and scheduling data pipelines.
BigQuery is a serverless data warehouse that allows for fast and scalable analytics.
Spark can be integrated with Airflow to schedule and orchestrate Spark jobs within data pipelines.
The question is asking about types of transformations, number of jobs, tasks, and actions in the context of a Senior Data Engineer role.
Types of transformations: Extract, Transform, Load (ETL), MapReduce, Spark transformations, SQL transformations
Number of jobs: Depends on the complexity and scale of the data engineering projects
Number of tasks: Varies based on the number of data sources, data transformations, and...
An architecture-based use case involves designing a system to meet specific requirements.
Identify the requirements and constraints of the use case
Design a solution architecture that meets the requirements
Consider scalability, security, and performance in the architecture
Implement the architecture using cloud services and technologies
Monitor and optimize the architecture for efficiency
I applied via LinkedIn and was interviewed in Nov 2024. There were 4 interview rounds.
My current day-to-day tasks involve designing and implementing data pipelines, optimizing data storage and retrieval, and collaborating with cross-functional teams.
Designing and implementing data pipelines to extract, transform, and load data from various sources
Optimizing data storage and retrieval processes for efficiency and scalability
Collaborating with cross-functional teams to understand data requirements and deliver solutions.
The end-to-end project architecture involves designing and implementing the entire data pipeline from data ingestion to data visualization.
Data ingestion: Collecting data from various sources such as databases, APIs, and files.
Data processing: Cleaning, transforming, and aggregating the data using tools like Apache Spark or Hadoop.
Data storage: Storing the processed data in data warehouses or data lakes like Amazon S3 ...
Use Spark (Databricks) notebooks to migrate 1000s of tables efficiently.
Utilize Spark's parallel processing capabilities to handle large volumes of data
Leverage Databricks notebooks for interactive data exploration and transformation
Automate the migration process using scripts or workflows
Optimize performance by tuning Spark configurations and cluster settings
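The fan-out idea above can be sketched with a worker pool; `migrate_table` is a hypothetical placeholder, and the Databricks-style call in its comment is illustrative only.

```python
from concurrent.futures import ThreadPoolExecutor

def migrate_table(table_name):
    # Hypothetical placeholder: in a Databricks notebook this step would be
    # something like spark.read.table(table_name).write.saveAsTable(...)
    return f"migrated {table_name}"

def migrate_all(tables, max_workers=8):
    """Fan out table migrations across a thread pool, preserving input order."""
    with ThreadPoolExecutor(max_workers=max_workers) as pool:
        return list(pool.map(migrate_table, tables))

print(migrate_all(["sales", "customers", "orders"]))
# ['migrated sales', 'migrated customers', 'migrated orders']
```

For thousands of tables, driver-side threading like this merely submits jobs; the heavy lifting still happens in parallel on the Spark cluster.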
Understanding SQL joins is crucial for data retrieval and analysis in relational databases.
INNER JOIN: Returns records with matching values in both tables. Example: SELECT * FROM A INNER JOIN B ON A.id = B.id.
LEFT JOIN: Returns all records from the left table and matched records from the right table. Example: SELECT * FROM A LEFT JOIN B ON A.id = B.id.
RIGHT JOIN: Returns all records from the right table and matched records from the left table. Example: SELECT * FROM A RIGHT JOIN B ON A.id = B.id.
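The join behaviors above can be demonstrated with Python's built-in sqlite3 module (RIGHT JOIN requires SQLite 3.39+, so only INNER and LEFT are shown here; the table data is made up for illustration):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE A (id INTEGER, name TEXT);
    CREATE TABLE B (id INTEGER, dept TEXT);
    INSERT INTO A VALUES (1, 'Ann'), (2, 'Ben');
    INSERT INTO B VALUES (1, 'HR');
""")

inner = conn.execute(
    "SELECT A.name, B.dept FROM A INNER JOIN B ON A.id = B.id").fetchall()
left = conn.execute(
    "SELECT A.name, B.dept FROM A LEFT JOIN B ON A.id = B.id").fetchall()

print(inner)  # [('Ann', 'HR')]            -- only the matching row
print(left)   # [('Ann', 'HR'), ('Ben', None)]  -- unmatched left row kept
```

Note how LEFT JOIN keeps 'Ben' with a NULL dept, while INNER JOIN drops the unmatched row entirely.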
Seeking new challenges and opportunities for growth in a more dynamic environment.
Desire for new challenges and learning opportunities
Seeking a more dynamic work environment
Looking to expand skill set and experience
Interested in working on cutting-edge technologies
Seeking better career advancement prospects
I expect EPAM to provide challenging projects, opportunities for growth, a collaborative work environment, and support for continuous learning.
Challenging projects that allow me to utilize my skills and knowledge
Opportunities for professional growth and advancement within the company
A collaborative work environment where teamwork is valued
Support for continuous learning through training programs and resources
Yes, I am willing to relocate for the right opportunity. I can join the company within 4 weeks.
Willing to relocate for the right opportunity
Can join within 4 weeks
Open to discussing relocation assistance if needed
I appeared for an interview in Feb 2025.
The code demonstrates exception handling: the output is 2 because the variable i is incremented in both the catch and finally blocks.
The code throws a NullPointerException, which is caught in the catch block.
In the catch block, i is incremented from 0 to 1.
The finally block executes regardless of the exception, incrementing i from 1 to 2.
The final output printed is the value of i, which is 2.
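The original Java snippet is not shown, but the described control flow can be sketched in Python (the exception type is swapped for a Python one purely for illustration):

```python
def run():
    i = 0
    try:
        raise ValueError("stand-in for the thrown exception")
    except ValueError:
        i += 1   # catch block: i goes 0 -> 1
    finally:
        i += 1   # finally always runs: i goes 1 -> 2
    return i

print(run())  # prints 2
```

The key point carries over between languages: the finally block runs whether or not an exception was raised, so both increments happen.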
Identify and print duplicate names from a string array that start with 'B'.
1. Create a string array with names, e.g., ['Bob', 'Alice', 'Bill', 'Bob', 'Bobby'].
2. Use a HashMap or dictionary to count occurrences of each name.
3. Iterate through the array and check for names starting with 'B'.
4. Print names with a count greater than 1; in the sample array above, only 'Bob' qualifies.
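The steps above can be implemented directly; the function name is illustrative.

```python
from collections import Counter

def duplicate_b_names(names):
    """Return names starting with 'B' that appear more than once, in first-seen order."""
    counts = Counter(names)          # step 2: count occurrences
    result = []
    for name in names:               # step 3: scan for names starting with 'B'
        if name.startswith('B') and counts[name] > 1 and name not in result:
            result.append(name)      # step 4: keep duplicates, deduplicated
    return result

print(duplicate_b_names(['Bob', 'Alice', 'Bill', 'Bob', 'Bobby']))  # ['Bob']
```

'Bill' and 'Bobby' start with 'B' but appear only once, so they are excluded.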
TestNG annotations are used to define test methods, configuration, and grouping in automated testing.
@Test: Marks a method as a test method. Example: @Test public void testMethod() {}
@BeforeSuite: Executes before the entire test suite. Example: @BeforeSuite public void setupSuite() {}
@AfterSuite: Executes after the entire test suite. Example: @AfterSuite public void teardownSuite() {}
@BeforeTest: Executes before any test method in the <test> tag of testng.xml. Example: @BeforeTest public void setup() {}
In a runner class, tags help organize and filter test cases for execution in automation testing frameworks.
Tags are used to categorize tests, e.g., @smoke, @regression.
They allow selective execution, e.g., running only @smoke tests.
Tags can be combined, e.g., @regression and @critical.
In Cucumber, tags are specified in the feature file, e.g., @login.
In TestNG, tags can be implemented using groups in XML configuration.
Regression testing checks existing features after changes, while smoke testing verifies basic functionality post-deployment.
Regression testing ensures that new code changes do not adversely affect existing functionalities.
Smoke testing is a preliminary test to check if the basic functions of an application work.
Example of regression testing: After a new feature is added, testing all existing features to ensure they still work as expected.
Custom exceptions enhance error handling in automation testing, while StaleElementReferenceException indicates a DOM element is no longer valid.
Custom Exceptions: These are user-defined exceptions that allow developers to create specific error handling scenarios tailored to their application needs.
Common Exceptions: In automation testing, I've encountered exceptions like NoSuchElementException, TimeoutException, and StaleElementReferenceException.
Waits in automation testing manage timing issues between code execution and web element availability.
Implicit Wait: Sets a default wait time for the entire session. Example: driver.manage().timeouts().implicitlyWait(10, TimeUnit.SECONDS);
Explicit Wait: Waits for a specific condition to occur before proceeding. Example: WebDriverWait wait = new WebDriverWait(driver, 10);
Fluent Wait: Similar to explicit wait, but allows configuring the polling frequency and the exceptions to ignore while waiting.
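The fluent-wait idea is framework-independent; here is a Python analog of Selenium's FluentWait, a sketch rather than the Selenium API itself (all names are illustrative):

```python
import time

def fluent_wait(condition, timeout=5.0, poll=0.25, ignored=(Exception,)):
    """Poll `condition` until it returns a truthy value or `timeout` elapses."""
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        try:
            result = condition()
            if result:
                return result
        except ignored:
            pass                 # ignore configured exceptions, like FluentWait's ignoring()
        time.sleep(poll)         # configurable polling frequency
    raise TimeoutError("condition not met within timeout")
```

In Selenium terms, `timeout` plays the role of withTimeout, `poll` of pollingEvery, and `ignored` of ignoring.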
BDD is a software development approach that enhances collaboration between developers, testers, and non-technical stakeholders.
Originated from Test Driven Development (TDD) to improve communication.
Focuses on defining behavior in simple language, often using Gherkin syntax.
Encourages collaboration through examples and scenarios, e.g., 'Given a user is logged in, when they click logout, then they should see the login page.'
I appeared for an interview in Jun 2025, where I was asked the following questions.
I appeared for an interview in Feb 2025.
I applied via Naukri.com and was interviewed in Nov 2024. There was 1 interview round.
Best practices for writing PowerShell scripts
Use clear and descriptive variable names
Comment your code to explain complex logic
Handle errors gracefully with try/catch blocks
Use functions to modularize your script
Avoid hardcoding values whenever possible
To create a Private Endpoint for Azure Key Vaults, you need to configure a private link service.
Navigate to the Azure portal and search for 'Key Vaults'.
Select the Key Vault you want to create a Private Endpoint for.
In the Key Vault settings, go to 'Private endpoint connections' and click on 'Add'.
Choose the subscription, resource group, and private DNS zone for the Private Endpoint.
Review and create the Private Endpoint.
AKS allows for network configuration during creation and provides options for managing it.
During creation of AKS, network configuration options include specifying virtual network, subnet, and network policies.
Network configuration can be managed through Azure portal, Azure CLI, or ARM templates.
AKS supports network policies like Azure CNI, Kubenet, and Calico for network security and isolation.
Network configuration can...
I appeared for an interview in Feb 2025.
Questions on Python: 5 simple Python coding questions
React hooks allow functional components to manage state and side effects, enhancing code reusability and readability.
useState: Manages state in functional components. Example: const [count, setCount] = useState(0);
useEffect: Handles side effects like data fetching. Example: useEffect(() => { fetchData(); }, []);
Custom Hooks: Create reusable logic. Example: function useFetch(url) { /* logic */ }
useContext: Access context values without prop drilling. Example: const theme = useContext(ThemeContext);
I appeared for an interview in Jan 2025.
Stream based problems involve processing data in a continuous flow rather than all at once.
Use stream processing libraries like Apache Kafka or Apache Flink
Consider factors like data volume, velocity, and variety
Implement backpressure mechanisms to handle high data loads
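The item-at-a-time idea can be sketched with a Python generator, which processes each element lazily instead of materializing the whole dataset (the filter and transform steps here are arbitrary examples):

```python
def stream_numbers(source):
    """Lazily process a stream: filter then transform one item at a time."""
    for x in source:
        if x % 2 == 0:      # filter step: keep even numbers
            yield x * x     # transform step: square them

# Works on any iterable, including unbounded generators
result = list(stream_numbers(range(10)))
print(result)  # [0, 4, 16, 36, 64]
```

Because the generator pulls items on demand, the consumer's pace naturally limits the producer, a simple form of backpressure; systems like Kafka and Flink generalize this across machines.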
I applied via LinkedIn and was interviewed in Dec 2024. There were 2 interview rounds.
Optional in Swift allows variables to have no value. Optional binding and chaining are used to safely unwrap optionals.
Optional in Swift allows variables to have no value, denoted by a '?' after the type.
Optional binding is used to safely unwrap optionals by checking if they contain a value.
Optional chaining allows you to call methods, access properties, and subscript on an optional that might currently be nil.
Example: if let name = optionalName { print(name) } safely unwraps and uses the optional.
ARC stands for Automatic Reference Counting, a memory management system used in iOS to automatically manage memory allocation and deallocation.
ARC automatically tracks and manages the memory used by objects in an iOS application.
It keeps track of the number of references to an object and deallocates the object when there are no more references to it.
ARC is enabled by default in Xcode for iOS projects, reducing the need for manual memory management.
Closures are self-contained blocks of functionality that can be passed around and used in code.
Closures capture and store references to any constants and variables from the context in which they are defined.
To prevent strong reference cycles, use capture lists in closures.
Use weak or unowned references when capturing self inside a closure to avoid memory leaks.
Middleware is software that acts as a bridge between an operating system or database and applications, allowing them to communicate with each other.
Middleware is a layer of software that sits between the operating system and applications, providing services such as authentication, logging, and caching.
Custom middleware can be created in ASP.NET Core by implementing the IMiddleware interface and adding it to the application's request pipeline.
Yes, a program can be written to convert 'aabbccaaa' to '2a2b2c3a'.
Create a function that iterates through the input string and counts the consecutive characters.
Store the count and character in a new string as needed.
Return the final output string.
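The steps above amount to run-length encoding of consecutive characters; a compact Python sketch (the function name is illustrative):

```python
from itertools import groupby

def compress(s):
    """Run-length encode consecutive characters: 'aabbccaaa' -> '2a2b2c3a'."""
    # groupby yields one (char, run) pair per consecutive run of equal chars
    return "".join(f"{len(list(run))}{ch}" for ch, run in groupby(s))

print(compress("aabbccaaa"))  # 2a2b2c3a
```

Note that the second run of 'a' is counted separately ('3a'), since only consecutive characters are grouped.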
The EPAM Systems interview process typically takes less than 2 weeks to complete, based on 521 interview experiences.