About the job
Deriv Group is one of the largest brokers in the world, with over 1,200 team members and 20 global offices.
As a Senior Data Quality Engineer in the Business Intelligence team, you will perform automated and manual testing of the data sets used in our internal data systems. Working with a passionate group of Data Engineers, you will write test plans and routines and manage continuous integration and regression testing processes. You will be the key point of contact for clearing release data against delivery specifications. You will also develop, test, and maintain architectures for data processing and build Extract, Transform, and Load (ETL) pipelines.
One of Deriv’s goals is to become the world’s leading online broker; to achieve that, our data warehouse must always remain trustworthy. Your skills will help us ensure data accuracy and quality and enhance our expertise in the business.
Your Responsibilities
- Maintain data integrity while extracting data from complex in-house and third-party sources, and take responsibility for data security, accuracy, and accessibility
- Establish and implement test plans, including identifying, analysing, and resolving quality issues related to code and data. Ensure ETL (extract, transform, and load) processes maintain data integrity.
- Catalogue new and existing data sources and ETL processes.
- Design resilient and extensible extraction pipelines.
- Optimise and standardise existing data management workflows/processes, design new enhanced processes, and update documentation for each data solution.
- Collaborate externally with clients to acquire data extraction access, and work internally with other departments to establish data collection and analysis procedures.
What You Have
- A minimum of 7 years of experience in the data quality or data engineering field
- Passion for data and for analysing business needs
- Hands-on experience with one or more common data integration tools such as Airflow, DataStage, Informatica, Stitch, or Talend
- Experience troubleshooting and resolving issues in ETL processes
- Experience with SQL, preferably Postgres
- Solid knowledge of testing and data quality, with a data-oriented mindset
- Familiarity with data ingestion pipeline concepts and data technologies such as OLTP databases, Data Warehousing, Data Lakes, DW/BI tools
- Experience with Data Warehouse, Data Marts, Dimensional Modeling, Data Quality concepts, and techniques
- Proficiency in data wrangling, data masking, self-recovery, and creating data alerts
- Experience using APIs or Webhooks to pull data from various sources
- Good programming ability in Python
- Hands-on experience with Google Cloud Platform (GCP) services such as BigQuery, scheduled queries, Cloud Storage, and Cloud Functions
- Knowledge of CI/CD principles and tools (e.g., CircleCI)
- A strong background in data provisioning and ETL processes
- Experience peer-reviewing pipeline code and suggesting improvements when required
- Fluency in spoken and written English
What’s Good To Have
- Experience with iterative, lean, or agile development methodologies
- Knowledge of Database Administration, back-end, and Reporting tools
- Experience in testing back-end services such as APIs, databases, and distributed services
- Experience in developing cron jobs, REST APIs, Web Services, text and XML processing
- Experience in managing stakeholders’ expectations and technical requirement gathering
- Familiarity with container technologies such as Docker
- A fintech background
Benefits
- Market-based salary
- Annual performance bonus
- Medical insurance
- Housing and transportation allowance
- Casual dress code
- Work permit