Deel

Data Engineer

Location: Spain

Workplace: Remote

Type: Full Time

Level: Mid

Role: Data Engineer

Posted: Mar 6, 2026


The role

Summary

Deel is seeking a Data Engineer to join its Data Platform team and build and maintain the data infrastructure behind its global payroll and HR platform. The role involves designing ETL/ELT pipelines with Apache Spark on AWS EMR, optimizing data warehouse solutions, and collaborating with cross-functional teams to deliver analytics for a company serving 150+ countries.

What you'll do

Data Pipeline Development: Design, build, and maintain efficient ETL/ELT pipelines using Apache Spark on AWS EMR to integrate data from various source systems (a minimal sketch follows this list)
Batch and Real-time Processing: Build and optimize batch and near real-time data processing jobs including performance tuning, partitioning, and cost control
Data Transformation: Write complex SQL queries and Python scripts to transform and aggregate large datasets for analytics
Data Quality Assurance: Implement validation checks, cleansing routines, and monitoring to ensure data integrity and reliability
Data Warehouse Optimization: Develop and optimize schemas, tables, and define contracts between Spark pipelines, Iceberg tables, and Snowflake models
Cross-functional Collaboration: Work with data analysts, data scientists, and engineers to understand requirements and deliver appropriate solutions
Documentation and Standards: Document pipeline designs, data flows, and definitions while adhering to team standards for transparency
Project Management: Handle multiple projects simultaneously, prioritize work, and communicate progress to meet stakeholder deadlines
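
For a concrete, if simplified, picture of the work above, here is a minimal PySpark sketch of a batch ETL job with a basic data quality gate and a partitioned write. All paths, table names, and column names are hypothetical, not Deel's.

```python
# Minimal batch ETL sketch: extract, validate, transform, load.
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("payroll_daily_batch").getOrCreate()

# Extract: read raw events from a (hypothetical) S3 landing zone.
raw = spark.read.parquet("s3://example-landing/payroll_events/")

# Transform: normalize amounts to USD and derive a partition column.
clean = (
    raw.withColumn("amount_usd", F.col("amount") * F.col("fx_rate"))
       .withColumn("event_date", F.to_date("event_ts"))
)

# Data quality gate: fail fast if required fields are missing.
bad_rows = clean.filter(
    F.col("employee_id").isNull() | F.col("amount_usd").isNull()
).count()
if bad_rows:
    raise ValueError(f"{bad_rows} rows failed validation; aborting load")

# Load: partition by date so downstream queries can prune efficiently.
(clean.write
      .mode("overwrite")
      .partitionBy("event_date")
      .parquet("s3://example-warehouse/payroll_events_clean/"))
```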

What we look for

Technical

SQL Proficiency: Strong SQL skills with experience in data modeling and building data warehouse solutions
Programming Skills: Proficiency in Python for data processing and pipeline automation
ETL/ELT Experience: Familiarity with ETL tools and workflow orchestration frameworks like Apache Airflow (see the DAG sketch after this list)
Big Data Processing: Experience with Apache Spark for large-scale data processing and optimization
Data Quality Implementation: Experience implementing data quality checks and working with large-scale datasets
Cloud Platforms: Experience with AWS services, particularly EMR for managed Spark clusters
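
As a rough illustration of the orchestration skill above, here is a minimal Airflow 2.x DAG that could wrap the Spark job sketched earlier. The DAG id, schedule, and task bodies are hypothetical stubs, not a real Deel workflow.

```python
# Minimal Airflow 2.x DAG sketch: nightly Spark job followed by checks.
from datetime import datetime

from airflow import DAG
from airflow.operators.python import PythonOperator

def submit_spark_job():
    # In practice this might add a step to an EMR cluster via boto3;
    # stubbed out here to keep the sketch self-contained.
    print("submitting Spark job to EMR")

def run_quality_checks():
    print("running post-load data quality checks")

with DAG(
    dag_id="payroll_daily_batch",
    start_date=datetime(2026, 1, 1),
    schedule="@daily",  # Airflow >= 2.4; older versions use schedule_interval
    catchup=False,
) as dag:
    etl = PythonOperator(task_id="submit_spark_job",
                         python_callable=submit_spark_job)
    checks = PythonOperator(task_id="run_quality_checks",
                            python_callable=run_quality_checks)

    etl >> checks  # checks run only after the Spark job succeeds
```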

Education

Degree: Bachelor's or Master's degree in Computer Science, Mathematics, Physics, or a related field

Experience

Data Engineering Experience: At least 3 years of experience in data engineering or a similar backend data development role
Problem-solving Skills: Strong analytical and problem-solving abilities with attention to detail
Communication Skills: Strong communication and teamwork skills for working with cross-functional stakeholders

Skills

Required skills

Apache Spark: Core technology for data processing, including performance tuning and optimization
Python: Primary programming language for data pipeline automation and processing
SQL: Advanced SQL skills for complex queries, data modeling, and warehouse solutions (see the aggregation sketch after this list)
AWS EMR: Managed cluster platform experience for running Spark jobs at scale
ETL/ELT Processes: Design and implementation of data integration pipelines
Data Warehousing: Schema design, optimization, and data warehouse architecture
Data Quality: Implementation of validation, cleansing, and monitoring frameworks
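
To make the "complex SQL" requirement concrete, here is a small Spark SQL example combining a grouped aggregate with a window function, the kind of query the role describes. The table and columns are hypothetical.

```python
# Spark SQL sketch: monthly totals plus a 3-month rolling average per country.
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("sql_aggregation_example").getOrCreate()

monthly = spark.sql("""
    SELECT
        country,
        date_trunc('month', event_date) AS month,
        SUM(amount_usd)                 AS total_paid_usd,
        AVG(SUM(amount_usd)) OVER (
            PARTITION BY country
            ORDER BY date_trunc('month', event_date)
            ROWS BETWEEN 2 PRECEDING AND CURRENT ROW
        )                               AS rolling_3mo_avg_usd
    FROM warehouse.payroll_events_clean
    GROUP BY country, date_trunc('month', event_date)
""")

monthly.show()
```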

Nice to have

Apache Airflow: Workflow orchestration and scheduling for data pipelines
Snowflake: Cloud data warehouse platform experience
Apache Iceberg: Modern table format for large analytic datasets (see the sketch after this list)
dbt: Data transformation and modeling framework
Scala: Additional programming language for Spark development
Performance Tuning: Optimization of data processing jobs for cost and performance
Real-time Processing: Near real-time data processing and streaming technologies
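
For the Iceberg item above, here is a hedged sketch of creating a partitioned Iceberg table from Spark via the DataFrameWriterV2 API. It assumes the Iceberg Spark runtime is on the classpath and a catalog named "demo" is configured on the session; the table and path are hypothetical.

```python
# Sketch: materialize cleaned events as a partitioned Apache Iceberg table.
from pyspark.sql import SparkSession

# Assumes the Iceberg Spark runtime jar and a catalog named "demo"
# are already configured for this session.
spark = SparkSession.builder.appName("iceberg_example").getOrCreate()

df = spark.read.parquet("s3://example-warehouse/payroll_events_clean/")

# DataFrameWriterV2: create (or replace) an Iceberg table,
# partitioned by event_date for efficient pruning.
(df.writeTo("demo.payroll.events")
   .using("iceberg")
   .partitionedBy(df.event_date)
   .createOrReplace())
```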

Compensation & benefits

Benefits

Stock Options: Stock grant opportunities dependent on role, employment status, and location
Remote Work Flexibility: Fully remote work environment with optional WeWork access
Location-based Perks: Additional perks and benefits based on employment status and country
Global Opportunity: Work with a globally distributed team across 100+ countries and 74 languages
Career Acceleration: Experience at the forefront of the global work revolution, with complex challenges impacting millions
Inclusive Environment: Equal opportunity employer committed to diversity and inclusion


Interview process

1. Initial Screening: Phone or video screening with the talent acquisition team to discuss background and role fit
2. Technical Assessment: Role-related technical assessment focusing on data engineering skills and problem-solving
3. Technical Interview: In-depth technical discussion with the engineering team covering SQL, Python, and data pipeline design
4. System Design: Data architecture and pipeline design discussion with senior engineers
5. Cross-functional Interview: Interview with data analysts or data scientists to assess collaboration skills
6. Final Interview: Discussion with the hiring manager about team fit, career goals, and company culture

Deel

Deel is a global payroll and HR platform that helps companies manage their global workforce.

San Francisco, California, United States · Founded 2018 · deel.com

Tech Stack

Languages: Python, SQL, Scala
Frameworks: Apache Spark, dbt
Databases: Snowflake, Apache Iceberg, Data Lake
Tools: AWS EMR, Apache Airflow, AWS, Git
Other: ETL/ELT, Data Warehousing, Data Quality