Design, build, and configure applications to meet business process and application requirements, delivering in line with the agile model.
Candidates must have a minimum of 6 years of hands-on big data experience, including Apache Spark. Cloud knowledge is a plus.
Proficiency in Scala or Python, the languages most commonly used for Apache Spark development.
Extensive experience with Apache Spark, including Spark Core, RDDs (Resilient Distributed Datasets), DataFrames, and Spark SQL.
Strong understanding of the broader big data ecosystem, including Hadoop, Hive, HDFS, and related technologies.
Expertise in designing and implementing scalable and efficient data processing solutions using Apache Spark.
Proficient in data modeling and analysis, with the ability to transform raw data into meaningful insights using Spark.
Database Systems: Understanding of database systems and of how Spark reads from and writes to them.
Troubleshooting and Debugging: Strong troubleshooting and debugging skills to identify and address issues in Spark applications.
Collaboration and Communication: Effective collaboration skills for working with cross-functional teams, and the communication skills to convey complex technical concepts clearly.
Willingness to stay updated on the latest advancements in Apache Spark and related technologies.
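To illustrate the Spark skills listed above (Spark Core, RDDs, DataFrames, and Spark SQL in Scala), here is a minimal sketch of the kind of code a candidate would be expected to write. The data, column names, and object name are hypothetical, chosen only for illustration; a real job would read from external storage and run on a cluster rather than a local master.

```scala
import org.apache.spark.sql.SparkSession

object SalesSummary {
  def main(args: Array[String]): Unit = {
    // Local session for illustration; production jobs submit to a cluster.
    val spark = SparkSession.builder()
      .appName("SalesSummary")
      .master("local[*]")
      .getOrCreate()
    import spark.implicits._

    // Build a DataFrame from in-memory rows (hypothetical sample data).
    val sales = Seq(("east", 100.0), ("west", 250.0), ("east", 75.0))
      .toDF("region", "amount")

    // Spark SQL: query the DataFrame through a temporary view.
    sales.createOrReplaceTempView("sales")
    spark.sql("SELECT region, SUM(amount) AS total FROM sales GROUP BY region")
      .show()

    // The same aggregation expressed as a lower-level RDD transformation.
    val totals = sales.rdd
      .map(row => (row.getString(0), row.getDouble(1)))
      .reduceByKey(_ + _)
    totals.collect().foreach(println)

    spark.stop()
  }
}
```

The DataFrame/SQL path and the RDD path compute the same per-region totals; in practice the DataFrame API is preferred because it benefits from Catalyst query optimization, while RDDs remain useful for fine-grained control.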