Data Engineer (Scala and Spark)

Remote, USA | Full-time | Posted 2025-02-22

Job Description

We are looking for a highly skilled Data Engineer with expertise in Scala and Apache Spark to join our dynamic team. The ideal candidate will be responsible for designing, building, and maintaining scalable data pipelines and solutions to manage and process large datasets efficiently.

Key Responsibilities:
- Design and implement robust, scalable, and high-performance data pipelines using Scala and Apache Spark.
- Collaborate with cross-functional teams to understand business requirements and translate them into technical solutions.
- Optimize data processing systems and ensure their scalability and efficiency.
- Build extraction, transformation, and loading (ETL) processes for structured and unstructured datasets.
- Implement data engineering best practices, including data governance, data quality, and monitoring.
- Debug and resolve data processing and performance issues promptly.
- Ensure data security and compliance with relevant regulations and guidelines.

Skills & Qualifications:
- Strong programming skills in Scala with 3+ years of hands-on experience.
- Proficiency in Apache Spark for large-scale data processing.
- Familiarity with the Hadoop ecosystem (e.g., HDFS, Hive, HBase) is a plus.
- Experience with cloud platforms such as AWS, Azure, or GCP.
- Knowledge of relational and non-relational databases (e.g., PostgreSQL, MongoDB).
- Hands-on experience with data versioning, CI/CD pipelines, and orchestration tools (e.g., Airflow).
- Strong problem-solving skills and the ability to work in an Agile environment.
- Knowledge of containerization tools (Docker, Kubernetes) is an advantage.

Educational Background:
- Bachelor's or Master's degree in Computer Science, Engineering, or a related field.

Experience: 3-6 years
