Recro

Recro.io - Lead Big Data Engineer - Hadoop/Spark

Job Location

Pune, India

Job Description

The role involves building and managing Big Data pipelines that handle large structured datasets for scalable analytics solutions. The primary focus is on selecting optimal tools, then implementing, maintaining, and monitoring them while integrating with the company's architecture.

Responsibilities:
- Work closely with Product Management and Engineering leadership to build the right solution.
- Participate in design discussions to select, integrate, and maintain Big Data tools and frameworks.
- Develop distributed processing systems for cleansing, processing, and analyzing large datasets using Akka and Spark.
- Critically review existing data pipelines and propose improvements.
- Take initiative and work independently as a Senior Individual Contributor across multiple products.

Requirements:
- At least 3 years of experience developing highly scalable Big Data pipelines.
- Hands-on experience with Spark, Akka, Storm, Hadoop, and various file formats.
- Experience with ETL tools such as Apache NiFi and Airflow.
- Strong coding skills in Java or Scala, with knowledge of design patterns.
- Proficiency with Git and Gradle/Maven/SBT.
- Solid understanding of OOP, data structures, algorithms, profiling, and optimization.

Additional Skills:
- Strong verbal and written communication skills.
- Ability to work under pressure and manage multiple projects.
- Passion for learning, problem-solving, and troubleshooting.

(ref:hirist.tech)

Location: Pune, IN

Posted Date: 3/26/2025

Contact Information

Contact Human Resources
Recro

Posted

March 26, 2025
UID: 5080148096
