An exciting opportunity for an experienced Data Engineer to join the Engineering team working within the Digital & Product Department.
Location: Chiswick (West London)
Work Type: 3 days in office, 2 days from home
Salary: £60k - £70k dependent on experience
Full-time, permanent position
The successful candidate will be responsible for creating the pipelines that transform data in a scalable and repeatable way. You will need to produce efficient code that automates the ingestion and cleansing of data and applies the transformations required to achieve the target data format.
You will work with the Engineering and Data & Business Analytics teams to understand the source and target data requirements and use this knowledge to develop data integration solutions to achieve the necessary migrations.
The Data Engineer will support the software developers, database architects and data analysts on data initiatives and will ensure optimal data delivery architecture is consistent throughout ongoing projects. You must be self-directed and comfortable supporting the data needs of multiple teams, systems, and products.
DUTIES & RESPONSIBILITIES
- Understand, build and develop ETL and data integration solutions within the Azure landscape using a wide array of technologies and data sources
- Analyse and organise raw data
- Explore ways to enhance data quality and reliability
- Identify, design, and implement internal process improvements: automating manual processes, optimising data delivery, etc.
- Work with cloud-based infrastructure (Azure) for hosting data solutions and applications
- Collaborate with architects, data analysts and data scientists to help meet the business goals
SKILLS REQUIRED
- 4+ years' experience as a cloud data engineer
- Good knowledge of the Azure data engineering stack, especially Azure Synapse and Azure Data Lake
- Proven experience in development and maintenance of ETL/ELT processes within a medallion architecture
- Azure Synapse pipeline development and experience in writing PySpark notebooks are preferable
- Good knowledge of (and preferably experience in) design and implementation of delta loads and table/column-level CDC
- Strong experience working with relational databases (OLAP and OLTP)
- Programming experience in Python, PySpark and T-SQL
- Previous experience with Azure DevOps and understanding of CI/CD is desirable
- Analytical skills related to working with structured and unstructured datasets
- Excellent written and verbal communication skills
- Experience supporting and working with cross-functional teams in a dynamic environment
Please Note: Swoop Recruitment are out of office until 2nd January 2025 - we will endeavour to respond to your application as soon as possible upon our return.