ZeMoSo Technologies
I applied via Naukri.com and was interviewed in Dec 2024. There was 1 interview round.
To create a pipeline in Databricks, you can use Databricks Jobs, or Apache Airflow for external orchestration.
Use Databricks Jobs to build a pipeline by scheduling notebooks or Spark jobs as tasks.
Use Apache Airflow when the pipeline needs more complex orchestration across systems, with dependencies and monitoring.
Leverage Delta Lake for managing pipeline data with ACID transactions and versioning.
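The points above can be sketched as a job definition. This is an illustrative payload in the style of the Databricks Jobs 2.1 REST API (field names are my assumption from that API; the job name and notebook paths are hypothetical): two notebook tasks where "transform" depends on "ingest", plus a daily schedule.

```python
import json

# Hypothetical pipeline definition in the style of the Jobs 2.1 API.
# Verify field names against the official Databricks docs before use.
payload = {
    "name": "customer-etl",  # hypothetical job name
    "tasks": [
        {
            "task_key": "ingest",
            "notebook_task": {"notebook_path": "/pipelines/ingest"},
        },
        {
            "task_key": "transform",
            # Orchestration: this task runs only after "ingest" succeeds.
            "depends_on": [{"task_key": "ingest"}],
            "notebook_task": {"notebook_path": "/pipelines/transform"},
        },
    ],
    "schedule": {
        "quartz_cron_expression": "0 0 2 * * ?",  # daily at 02:00
        "timezone_id": "UTC",
    },
}

# The JSON body you would POST to the job-creation endpoint.
body = json.dumps(payload)
```

The same dependency graph could instead be expressed as an Airflow DAG when the pipeline spans systems outside Databricks.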
I applied via Instahyre and was interviewed in Jun 2024. There were 3 interview rounds.
The coding round had one SQL and one Python question, plus some MCQs on Python and math.
posted on 29 May 2024
I applied via Campus Placement and was interviewed in Apr 2024. There were 2 interview rounds.
It was a written test with theoretical SQL questions (primary key, foreign key, set operators) and some queries to write.
I applied via Campus Placement and was interviewed in May 2024. There were 2 interview rounds.
Two coding questions
I applied via Referral and was interviewed in Nov 2023. There was 1 interview round.
I applied via Recruitment Consultant and was interviewed before Sep 2021. There were 3 interview rounds.
Java code to reverse a string
Use StringBuilder class to reverse the string
Call reverse() method on the StringBuilder object
Convert the StringBuilder object back to String using toString() method
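The three steps above amount to one short Java method, for example:

```java
public class ReverseString {
    // Build a StringBuilder from the input, call reverse(),
    // then convert back to String with toString().
    static String reverse(String input) {
        return new StringBuilder(input).reverse().toString();
    }

    public static void main(String[] args) {
        System.out.println(reverse("data")); // prints "atad"
    }
}
```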
I applied via Job Portal and was interviewed before May 2022. There were 2 interview rounds.
The coding test was poor; they asked irrelevant questions that were not related to the field.
Data engineering is the process of designing, building, and maintaining the infrastructure for data storage and processing.
Data engineering involves creating and managing data pipelines
It includes tasks such as data modeling, data integration, and data warehousing
Data engineers work with big data technologies such as Hadoop, Spark, and NoSQL databases
They also ensure data quality, security, and scalability
Examples of d...
A resilient distributed database is a database that can continue to function even if some of its nodes fail.
It is designed to be fault-tolerant and highly available.
Data is distributed across multiple nodes to ensure redundancy.
If one node fails, the database can continue to function using data from other nodes.
Examples include Apache Cassandra, Riak, and HBase.
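The fault-tolerance idea above can be shown with a minimal sketch (illustrative only, not how Cassandra or HBase actually work): each key is written to several nodes, so a read still succeeds after one replica fails.

```python
class Node:
    """A storage node that can be marked down."""
    def __init__(self, name):
        self.name = name
        self.up = True
        self.store = {}

class ReplicatedStore:
    """Toy replicated key-value store with a fixed replication factor."""
    def __init__(self, nodes, replication_factor=3):
        self.nodes = nodes
        self.rf = replication_factor

    def _replicas(self, key):
        # Simple placement: hash the key, then take rf consecutive nodes.
        start = hash(key) % len(self.nodes)
        return [self.nodes[(start + i) % len(self.nodes)] for i in range(self.rf)]

    def put(self, key, value):
        for node in self._replicas(key):
            if node.up:
                node.store[key] = value  # write to every live replica

    def get(self, key):
        for node in self._replicas(key):
            if node.up and key in node.store:
                return node.store[key]  # first live replica answers
        raise KeyError(key)

nodes = [Node(f"n{i}") for i in range(5)]
db = ReplicatedStore(nodes)
db.put("user:1", "alice")

# Fail one replica; the read still succeeds from another copy.
db._replicas("user:1")[0].up = False
value = db.get("user:1")
```

Real systems add consistent hashing, quorum reads/writes, and repair, but the redundancy principle is the same.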
Developed ETL pipeline to ingest, clean, and analyze customer data for personalized marketing campaigns
Gathered requirements from stakeholders to understand data sources and business objectives
Designed data model to store customer information and campaign performance metrics
Implemented ETL process using Python and Apache Spark to extract, transform, and load data
Performed data quality checks and created visualizations ...
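A pure-Python stand-in for the extract-transform-load flow described above (the actual project used Apache Spark; the field names and sample rows here are made up):

```python
# Extract: pretend these raw rows came from a source system.
raw_rows = [
    {"customer_id": "1", "email": "A@X.COM ", "spend": "120.5"},
    {"customer_id": "2", "email": None, "spend": "80"},
    {"customer_id": "3", "email": "c@y.com", "spend": "oops"},
]

def transform(rows):
    """Clean rows: normalise emails, parse spend, drop bad records."""
    cleaned = []
    for row in rows:
        if not row["email"]:
            continue  # data quality check: an email is required
        try:
            spend = float(row["spend"])
        except ValueError:
            continue  # data quality check: spend must be numeric
        cleaned.append({
            "customer_id": row["customer_id"],
            "email": row["email"].strip().lower(),
            "spend": spend,
        })
    return cleaned

# Load: write cleaned rows into the target store, keyed by customer.
warehouse = {}
for row in transform(raw_rows):
    warehouse[row["customer_id"]] = row
```

In the Spark version the same cleaning rules become DataFrame filters and column expressions, but the pipeline shape is identical.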
I have used various transformations such as filtering, joining, aggregating, and pivoting in my data engineering projects.
Filtering data based on certain conditions
Joining multiple datasets together
Aggregating data to summarize information
Pivoting data from rows to columns or vice versa
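Each of the four transformations above can be shown on plain Python data, as a stand-in for the DataFrame APIs a real project would use (the sample orders are made up):

```python
orders = [
    {"order_id": 1, "customer": "a", "region": "north", "amount": 10},
    {"order_id": 2, "customer": "b", "region": "south", "amount": 20},
    {"order_id": 3, "customer": "a", "region": "north", "amount": 5},
]
customers = {"a": "Alice", "b": "Bob"}

# Filtering: keep only rows matching a condition.
big = [o for o in orders if o["amount"] >= 10]

# Joining: enrich each order with the customer's name.
joined = [{**o, "name": customers[o["customer"]]} for o in orders]

# Aggregating: total amount per region.
totals = {}
for o in orders:
    totals[o["region"]] = totals.get(o["region"], 0) + o["amount"]

# Pivoting: turn region values into columns of one summary row.
pivoted = {f"amount_{region}": total for region, total in totals.items()}
```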
| Designation | Salaries reported | Salary range |
| --- | --- | --- |
| Senior Software Engineer | 74 | ₹12.7 L/yr - ₹33 L/yr |
| Associate Software Engineer | 45 | ₹6.5 L/yr - ₹7.3 L/yr |
| Software Engineer | 33 | ₹6 L/yr - ₹17 L/yr |
| Software Engineer 2 | 31 | ₹10 L/yr - ₹14 L/yr |
| Senior Software Engineer 2 | 25 | ₹21.3 L/yr - ₹25 L/yr |