Role Overview
Work embedded as a member of a squad or across multiple squads to produce, test, document, and review algorithms and data-specific source code that supports the deployment and optimization of data retrieval, processing, storage, and distribution for a business area.
Responsibilities
Data Architecture & Data Engineering
- Understand the technical landscape and bank-wide architecture connected to or dependent on the supported business area in order to design and deliver effective data solutions (architecture, pipelines, etc.).
- Translate and interpret data architecture direction and associated business requirements, applying analytical and creative problem-solving expertise to synthesize data solution designs.
- Participate in design thinking processes to successfully deliver data solution blueprints.
- Leverage state-of-the-art relational and NoSQL databases, as well as integration and streaming platforms, to deliver sustainable, business-specific data solutions.
- Design data retrieval, storage, and distribution solutions, including contributing to all phases of the development lifecycle.
- Develop high-quality data processing, retrieval, storage, and distribution designs in a test-driven and domain-driven/cross-domain environment.
- Build analytics tools that utilize the data pipeline by quickly producing well-organized, optimized, and documented source code and algorithms.
- Create and maintain sophisticated CI/CD pipelines, authoring and supporting them in Jenkins or similar tools and deploying to multi-site environments.
- Automate tasks through appropriate tools and scripting technologies (e.g., Ansible, Chef).
- Debug existing source code and polish feature sets.
- Assemble large, complex data sets that meet business requirements and manage the data pipeline.
- Build infrastructure to automate the delivery of extremely high data volumes.
- Create data tools for analytics and data science teams that assist them in building and optimizing data sets for the benefit of the business.
- Ensure designs and solutions support technical organization principles of self-service, repeatability, testability, scalability, and resilience.
- Inform and support the infrastructure build required for optimal extraction, transformation, and loading of data from a wide variety of data sources.
- Support the continuous optimization, improvement, and automation of data processing and distribution processes.
- Ensure quality assurance and testing of all data solutions aligned to the QA Engineering and architectural guidelines.
- Implement and align to Group Security standards and practices.
- Monitor the performance of data solution designs and ensure ongoing optimization.
- Stay ahead of the curve on data processing, retrieval, storage, and distribution technologies.
People
- Coach and mentor other engineers.
- Conduct peer reviews, testing, and problem-solving within and across the broader team.
- Build data science team capability in the use of data solutions.
Risk & Governance
- Identify technical risks and mitigate them before, during, and after deployment.
- Update/design all application documentation aligned to organization technical standards and risk/governance frameworks.
- Create business cases and solution specifications for various governance processes (e.g., CTO approvals).
- Participate in incident management and Disaster Recovery (DR) activity.
- Deliver on time and on budget.
Qualifications and Experience
- BA/BSc/HND degree in a relevant field.
- 3+ years of relevant experience.
- Proficiency in Hadoop is required.
- Knowledge of Spark and/or AWS is a distinct advantage.
- Experience in data lake formation.
- Ability to adapt to in-house-built ETL tools.
How to Apply
Interested and qualified candidates should apply online via the Absa Workday portal or through the provided application link: https://www.myjobmag.co.ke/apply-now/1166364.