This role is fully onsite and based in London, UK.
Key Responsibilities:
Design, develop, and maintain ETL data pipelines using Scala and PySpark (a minimal sketch follows this list).
Use big data frameworks such as Apache Spark to process large datasets efficiently.
Integrate various data sources and databases into the data processing ecosystem.
Collaborate in Agile environments, contributing to sprint planning, code reviews, and continuous integration practices.
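As a rough illustration of the ETL work described above, here is a minimal PySpark sketch of an extract-transform-load job. The input path, column names, aggregation, and output location are all hypothetical placeholders, not details from this posting.

# A minimal PySpark ETL sketch; paths and columns are hypothetical.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("etl-example").getOrCreate()

# Extract: read raw records from a hypothetical CSV source.
orders = spark.read.option("header", True).csv("/data/raw/orders.csv")

# Transform: cast types, drop bad rows, and aggregate per customer and day.
daily_totals = (
    orders
    .withColumn("amount", F.col("amount").cast("double"))
    .filter(F.col("amount").isNotNull())
    .groupBy("customer_id", "order_date")
    .agg(F.sum("amount").alias("total_amount"))
)

# Load: write the result as Parquet, partitioned by date (hypothetical path).
daily_totals.write.mode("overwrite").partitionBy("order_date").parquet(
    "/data/curated/daily_totals"
)

spark.stop()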
Required Skills and Qualifications:
Proficiency in Scala and PySpark for data processing and ETL development.
Strong understanding of Apache Spark and distributed computing frameworks (see the partitioning sketch after this list).
Strong understanding of data structures, algorithms, and software engineering best practices.
Excellent problem-solving and analytical skills.
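For a sense of the distributed-computing fluency this role asks for, the following PySpark sketch shows partition-aware aggregation: repartitioning on the grouping key so each executor handles a balanced slice of the data. The dataset, key column, partition count, and paths are assumptions for illustration only.

# A small sketch of partition-aware processing; names and paths are hypothetical.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("partition-example").getOrCreate()

events = spark.read.parquet("/data/curated/events")  # hypothetical input

# Repartition by the aggregation key so the shuffle distributes work evenly
# across executors before the groupBy.
per_user = (
    events
    .repartition(200, "user_id")
    .groupBy("user_id")
    .agg(F.count("*").alias("event_count"))
)

per_user.write.mode("overwrite").parquet("/data/curated/user_event_counts")
spark.stop()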