Data commissioning is the process of ensuring data quality, accuracy, and relevance before the data is used for decision-making.
Data commissioning involves assessing the quality of data to ensure it is accurate and reliable.
It includes validating data sources and ensuring they meet the required standards.
Data commissioning also involves cleaning and transforming data to make it usable for analysis.
It is essential for organizations that rely on data for decision-making.
To determine whether a site is working, consider factors like server reliability, site functionality, user experience, and performance.
Check server reliability and uptime
Test site functionality and user experience
Monitor site performance metrics like load time and responsiveness
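As a rough sketch of these checks, the snippet below uses Python's requests library to probe a site's availability and load time; the URL and timeout are placeholders, not anything from the interview.

```python
# Probe a site: the status code is a reliability signal, and the
# elapsed time is a crude load-time metric. URL is a placeholder.
import requests

def check_site(url: str, timeout: float = 5.0) -> None:
    try:
        resp = requests.get(url, timeout=timeout)
        print(f"status: {resp.status_code}")  # 200 suggests the site is up
        print(f"load time: {resp.elapsed.total_seconds():.2f}s")
    except requests.RequestException as exc:
        print(f"site unreachable: {exc}")

check_site("https://example.com")
```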
I was interviewed in Jan 2025.
I applied via Naukri.com and was interviewed in Nov 2024. There were 2 interview rounds.
It was a basic coding round testing Python skills, such as reading data from a local file and uploading it to the cloud, followed by some basic DAG-related questions.
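A hedged sketch of that kind of task: the snippet below reads a local file and uploads it to Google Cloud Storage with the google-cloud-storage client; the bucket and object names are invented for illustration.

```python
# Upload a local CSV to a GCS bucket.
# Assumes google-cloud-storage is installed and credentials are
# configured; all names are placeholders.
from google.cloud import storage

def upload_to_gcs(local_path: str, bucket_name: str, blob_name: str) -> None:
    client = storage.Client()
    blob = client.bucket(bucket_name).blob(blob_name)
    blob.upload_from_filename(local_path)  # streams the local file to GCS
    print(f"uploaded {local_path} to gs://{bucket_name}/{blob_name}")

upload_to_gcs("data.csv", "my-bucket", "raw/data.csv")
```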
Clustering involves grouping similar data together, while partitioning involves dividing data into smaller, manageable sections.
Clustering is used to group similar data points together based on certain criteria, such as customer segments or product categories.
Partitioning involves dividing a large dataset into smaller, more manageable sections for easier data retrieval and processing.
Clustering is often used for data a...
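In BigQuery, a likely context for this question, both concepts appear together in table DDL. The sketch below runs such a statement through the Python client; the dataset, table, and columns are made up for illustration.

```python
# Create a table partitioned by date (daily sections for pruning)
# and clustered by segment (similar rows stored together).
# Assumes google-cloud-bigquery is installed; names are hypothetical.
from google.cloud import bigquery

client = bigquery.Client()
client.query("""
    CREATE TABLE my_dataset.orders (
        order_id STRING,
        order_date DATE,
        segment STRING
    )
    PARTITION BY order_date  -- divides data into manageable sections
    CLUSTER BY segment       -- groups similar rows within each partition
""").result()
```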
Different approaches to optimizing SQL include indexing, query optimization, and database design.
Use indexing to improve query performance
Optimize queries by avoiding unnecessary joins and using appropriate functions
Design the database schema efficiently to reduce redundancy and improve data retrieval speed
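A small self-contained illustration of the indexing point, using Python's built-in sqlite3 module (the table and index are invented for the example): the query plan changes from a full scan to an index search once the index exists.

```python
# Show how an index changes the query plan from a full table scan
# to an index search. Table and index names are made up.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE orders (id INTEGER, customer TEXT)")

query = "EXPLAIN QUERY PLAN SELECT * FROM orders WHERE customer = 'a'"
print(conn.execute(query).fetchall())  # SCAN orders (full scan)

conn.execute("CREATE INDEX idx_customer ON orders(customer)")
print(conn.execute(query).fetchall())  # SEARCH orders USING INDEX idx_customer
```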
To find the count of items bought by a customer on Flipkart during a year, excluding February, aggregate the data and filter out the February transactions.
Aggregate the data by customer and item purchased
Filter out transactions from February
Count the number of items bought by each customer
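One way to express that logic, sketched with sqlite3 so it runs end to end; the schema, column names, and sample rows are assumptions, since the question didn't specify them.

```python
# Count items bought per customer in a year, excluding February.
# Schema and data are invented to make the query concrete.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE purchases (customer TEXT, item TEXT, purchase_date TEXT)")
conn.executemany(
    "INSERT INTO purchases VALUES (?, ?, ?)",
    [("alice", "phone", "2024-01-15"),
     ("alice", "case", "2024-02-10"),  # February row: excluded below
     ("bob", "laptop", "2024-03-01")],
)
rows = conn.execute("""
    SELECT customer, COUNT(item) AS items_bought
    FROM purchases
    WHERE strftime('%Y', purchase_date) = '2024'
      AND strftime('%m', purchase_date) != '02'  -- filter out February
    GROUP BY customer
""").fetchall()
print(rows)  # e.g. [('alice', 1), ('bob', 1)]
```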
I applied via Company Website and was interviewed in Oct 2024. There was 1 interview round.
A Data Warehouse stores structured data for reporting and analysis; Data Lakes store raw and unstructured data; Tables are basic data structures.
Data Warehouse is used for storing structured data from various sources for reporting and analysis.
Data Lakes store raw and unstructured data in its native format for future processing and analysis.
Tables are basic data structures used to organize and store data in a structured format of rows and columns.
Group by is used to group rows that have the same values into summary rows, while Partition by is used to divide the result set into partitions to which the function is applied separately.
Group by is used with aggregate functions to group rows based on a column or set of columns.
Partition by is used with window functions to divide the result set into partitions.
Group by is used with the SELECT statement, while Partition by is used within the OVER() clause of a window function.
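The contrast shows up clearly when both run on the same data. In the sqlite3 sketch below (hypothetical sales table; window functions need SQLite 3.25+), GROUP BY collapses rows while PARTITION BY keeps every row and attaches the group total.

```python
# GROUP BY returns one summary row per region; PARTITION BY keeps all
# rows and adds the regional total to each. Table is hypothetical.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE sales (region TEXT, amount INTEGER)")
conn.executemany("INSERT INTO sales VALUES (?, ?)",
                 [("east", 10), ("east", 20), ("west", 5)])

print(conn.execute(
    "SELECT region, SUM(amount) FROM sales GROUP BY region"
).fetchall())  # [('east', 30), ('west', 5)] -- rows collapsed

print(conn.execute(
    "SELECT region, amount, SUM(amount) OVER (PARTITION BY region) FROM sales"
).fetchall())  # [('east', 10, 30), ('east', 20, 30), ('west', 5, 5)]
```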
I applied via Naukri.com and was interviewed in Nov 2024. There was 1 interview round.
I applied via LinkedIn and was interviewed in Jun 2024. There was 1 interview round.
Use the Unix 'grep' command to find a word in a set of files.
Run 'grep' followed by the word you want to search for and the file(s) you want to search in.
Add the '-r' flag to search recursively in all files in a directory
Use '-i' flag for case-insensitive search
Example: grep 'hello' file.txt
Example: grep -r 'error' /path/to/directory
Example: grep -i 'apple' file1.txt file2.txt
Star schema is a denormalized schema with a single fact table and multiple dimension tables, while snowflake schema is a normalized schema with multiple dimension tables normalized into sub-dimension tables.
Star schema has a single fact table surrounded by multiple dimension tables, while snowflake schema has dimension tables normalized into sub-dimension tables.
In star schema, dimensions are denormalized, while in snowflake schema dimensions are normalized into sub-dimension tables.
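A toy DDL sketch of the difference (all names invented): in the star version the product dimension carries the category text directly, while the snowflake version normalizes the category into a sub-dimension table.

```python
# Star vs snowflake: same content, different normalization.
# sqlite3 is used only to keep the DDL runnable; names are hypothetical.
import sqlite3

sqlite3.connect(":memory:").executescript("""
    -- star: denormalized dimension, category stored inline
    CREATE TABLE dim_product_star (
        product_id INTEGER PRIMARY KEY,
        name TEXT,
        category_name TEXT
    );

    -- snowflake: category normalized into its own sub-dimension
    CREATE TABLE dim_category (category_id INTEGER PRIMARY KEY, name TEXT);
    CREATE TABLE dim_product_snow (
        product_id INTEGER PRIMARY KEY,
        name TEXT,
        category_id INTEGER REFERENCES dim_category(category_id)
    );

    -- in both designs, a single fact table references the dimensions
    CREATE TABLE fact_sales (product_id INTEGER, amount INTEGER);
""")
```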
Data can be loaded into BigQuery using Dataflow by creating a pipeline in Dataflow that reads data from a source and writes it to BigQuery.
Create a Dataflow pipeline using Apache Beam SDK
Read data from a source such as Cloud Storage or Pub/Sub
Transform the data as needed using Apache Beam transformations
Write the transformed data to BigQuery using BigQueryIO.write() (Java SDK) or beam.io.WriteToBigQuery (Python SDK), as sketched below
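A minimal sketch of such a pipeline with the Apache Beam Python SDK; the bucket, project, table, and schema below are placeholders.

```python
# Read lines from Cloud Storage, apply a trivial transform, and
# write the result to BigQuery. All resource names are placeholders;
# assumes apache-beam[gcp] is installed and credentials are set up.
import apache_beam as beam

with beam.Pipeline() as pipeline:
    (
        pipeline
        | "Read" >> beam.io.ReadFromText("gs://my-bucket/input.csv")
        | "ToRow" >> beam.Map(lambda line: {"raw": line})  # simple transform
        | "Write" >> beam.io.WriteToBigQuery(
            "my-project:my_dataset.my_table",
            schema="raw:STRING",
            write_disposition=beam.io.BigQueryDisposition.WRITE_APPEND,
        )
    )
```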
I applied via WorkIndia and was interviewed before Jul 2023. There was 1 interview round.
My salary expectation is based on my experience, skills, and the industry standard.
Consider my experience and skills when determining salary
Research industry standard salaries for Data Entry Operators
Open to negotiation based on benefits and opportunities
| Designation | Salaries reported | Salary range |
| --- | --- | --- |
| Junior Engineer | 119 | ₹2.2 L/yr - ₹8 L/yr |
| Sub Divisional Engineer | 109 | ₹8.3 L/yr - ₹25 L/yr |
| Junior Telecom Officer | 100 | ₹4 L/yr - ₹12.5 L/yr |
| Junior Accounts Officer | 88 | ₹5 L/yr - ₹11 L/yr |
| Network Engineer | 34 | ₹1.2 L/yr - ₹8.2 L/yr |
Bharti Airtel
Vodafone Idea
Jio
Tata Communications