Hybrid: 3 days onsite
The vulnerability management platforms development squad is looking for a highly skilled Data Engineer with deep expertise in PostgreSQL and either Snowflake or Elasticsearch.
The ideal candidate will have advanced experience in data modeling, ETL processes, and building large-scale data solutions.
A strong technical background in data engineering and streaming services (e.g., Kafka) is required.
Responsibilities:
Data Pipeline Development: Build, optimize, and manage data pipelines, orchestration tools, and governance frameworks to ensure efficient, high-quality data flow.
Data Quality & Governance: Perform data quality checks, enforce governance standards, and apply quality rules to ensure data integrity.
Real-time Data Processing: Use Python and streaming technologies (e.g., Kafka) for real-time data processing and analytics, with PostgreSQL as the source system (see the sketch after this list).
Advanced SQL Development: Write complex SQL queries for data manipulation, analysis, and integration.
Snowflake or Elasticsearch Architecture & Implementation: Design and implement large-scale data integration solutions on either platform.
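
For context only, here is a minimal sketch of the real-time processing and data quality responsibilities above. It assumes the kafka-python client and that change events from PostgreSQL arrive on a Kafka topic; the topic name, broker address, consumer group, and payload fields are hypothetical placeholders, not details of the actual platform.

import json
from kafka import KafkaConsumer  # kafka-python client

consumer = KafkaConsumer(
    "vuln_scans",                        # hypothetical topic fed from PostgreSQL (e.g., via CDC)
    bootstrap_servers="localhost:9092",  # placeholder broker address
    group_id="vuln-analytics",           # hypothetical consumer group
    value_deserializer=lambda b: json.loads(b.decode("utf-8")),
)

REQUIRED_FIELDS = {"scan_id", "asset_id", "severity"}  # example data quality rule

for message in consumer:
    record = message.value
    # Data quality check: reject records missing required fields.
    missing = REQUIRED_FIELDS - record.keys()
    if missing:
        print(f"Rejected record, missing fields: {sorted(missing)}")
        continue
    # Downstream analytics or a load into Snowflake/Elasticsearch would go here.
    print(f"Processing scan {record['scan_id']} (severity: {record['severity']})")
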
Qualifications:
Experience: 8+ years in IT with a focus on data engineering and architecture.
Database Expertise: In addition to deep knowledge of PostgreSQL, expertise in Snowflake or Elasticsearch, including in-depth knowledge of architecture, functions, and data warehousing concepts.
ETL & Data Modeling: Advanced skills in ETL processes, data modeling, and data warehousing.
Programming: Proficiency in Python for data engineering tasks.
Streaming Services: Experience with Kafka or similar real-time data streaming services.
Communication: Strong analytical, architectural design, and communication skills for engaging with diverse technical stakeholders.
This role requires a technical expert with a passion for solving complex data challenges and building large-scale data solutions.