Wednesday, November 4, 2020

New Job Vacancy at ABSA Bank Limited South Africa - Data Steward

  AjiraLeo Tanzania
ABSA Bank Ltd South Africa
About Us 
Truly African
We are a diversified standalone African financial services group, delivering an integrated set of products and services across personal and business banking, corporate and investment banking, wealth, investment management and insurance.

Absa Group Limited is listed on the JSE and is one of Africa’s largest diversified financial services groups with a presence in 12 countries across the continent and around 41 000 employees.

We own majority stakes in banks in Botswana, Ghana, Kenya, Mauritius, Mozambique, the Seychelles, South Africa, Tanzania (ABSA Bank in Tanzania and National Bank of Commerce), Uganda and Zambia. We also have representative offices in Namibia and Nigeria, as well as insurance operations in Botswana, Kenya, Mozambique, South Africa, Tanzania and Zambia.

Position: Data Steward (VAF) - JHB
Location: Johannesburg
Bring your possibility to life! Define your career with us
With over 100 years of rich history and strongly positioned as a local bank with regional and international expertise, a career with our family offers the opportunity to be part of this exciting growth journey, to reset our future and shape our destiny as a proudly African group.

Job Summary
• Data stewardship is a functional role in data management and governance, with responsibility for ensuring that data policies and standards turn into practice within the steward’s domain.
• The incumbent incorporates processes, policies, guidelines and responsibilities for administering the organisation's entire data estate in compliance with policy and/or regulatory obligations.
• Responsible for assuring quality and trust in the data, creating standard definitions for the organization to follow, and maintaining a consistent use of data resources across the organization.
• Define the data and identify assets within their own data domains, ensuring there is no conflict with other data elements.
• Create processes and procedures along with access controls to monitor adherence. This includes establishing internal policies and standards—and enforcing those policies.
• Maintain the quality of the data using customer feedback, concerns and questions; report metrics internally; evaluate and identify issues; and coordinate and implement corrections regularly.
• Optimize workflows and communications.
• Monitor data usage to assist teams, share best practice trends in data use, and provide insight into how and where teams can use data to help in day-to-day decision-making.
• Ensure compliance and security of the data. Data stewards are responsible for protecting the data—while providing information on potential risks and offering regulatory guidance.

Job Description
Key Responsibilities
Accountability: Data Architecture & Data Engineering
  • Understand the technical landscape and bank-wide architecture that is connected to or dependent on the business area supported in order to effectively design & deliver data solutions (architecture, pipeline etc.)
  • Translate / interpret the data architecture direction and associated business requirements & leverage expertise in analytical & creative problem solving to synthesise data solution designs (build a solution from its components) beyond the analysis of the problem
  • Participate in design thinking processes to successfully deliver data solution blueprints
  • Leverage state-of-the-art relational and NoSQL databases as well as integration and streaming platforms to deliver sustainable, business-specific data solutions.
  • Design data retrieval, storage & distribution solutions (and/or components thereof), including contributing to all phases of the development lifecycle e.g. design process
  • Develop high-quality data processing, retrieval, storage & distribution designs in a test-driven & domain-driven / cross-domain environment
  • Build analytics tools that utilize the data pipeline by quickly producing well-organised, optimized, and documented source code & algorithms to deliver technical data solutions
  • Create & maintain sophisticated CI/CD pipelines (authoring & supporting CI/CD pipelines in Jenkins or similar tools and deploying to multi-site environments – supporting and managing your applications all the way to production)
  • Automate tasks through appropriate tools and scripting technologies e.g. Ansible, Chef
  • Debug existing source code and polish feature sets.
  • Assemble large, complex data sets that meet business requirements & manage the data pipeline
  • Build infrastructure to automate extremely high volumes of data delivery
  • Create data tools for analytics and data science teams that assist them in building and optimizing data sets for the benefit of the business
  • Ensure designs & solutions support the technical organisation principles of self-service, repeatability, testability, scalability & resilience
  • Apply general design patterns and paradigms to deliver technical solutions
  • Inform & support the infrastructure build required for optimal extraction, transformation, and loading of data from a wide variety of data sources
  • Support the continuous optimisation, improvement & automation of data processing, retrieval, storage & distribution processes
  • Ensure the quality assurance and testing of all data solutions aligned to the QA Engineering & broader architectural guidelines and standards of the organisation
  • Implement & align to the Group Security standards and practices to ensure the indisputable separation, security & quality of the organisation’s data
  • Meaningfully contribute to & ensure solutions align to the design & direction of the Group Architecture & in particular data standards, principles, preferences & practices. Short-term deployment must align to strategic long-term delivery.
  • Meaningfully contribute to & ensure solutions align to the design and direction of the Group Infrastructure standards and practices e.g. OLAs, IaaS, PaaS, SaaS, containerization etc.
  • Monitor the performance of data solutions designs & ensure ongoing optimization of data solutions
  • Stay ahead of the curve on data processing, retrieval, storage & distribution technologies & processes (global best practices & trends) to ensure best practice

  • Coach & mentor other engineers
  • Conduct peer reviews, testing, problem solving within and across the broader team
  • Build data science team capability in the use of data solutions
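
The pipeline-building duties above can be pictured as a minimal extract–transform–load (ETL) step. This is an illustrative sketch only; the function names, fields and in-memory "store" are hypothetical and not part of the role description:

```python
# Minimal ETL sketch: parse raw rows, validate/convert them, and upsert
# into a keyed store. All names here are illustrative, not from the posting.

def extract(raw_rows):
    """Parse raw CSV-like strings into dicts of string fields."""
    return [dict(zip(("id", "amount"), r.split(","))) for r in raw_rows]

def transform(rows):
    """Validate and convert types; drop rows that fail quality checks."""
    out = []
    for row in rows:
        try:
            out.append({"id": int(row["id"]), "amount": float(row["amount"])})
        except (KeyError, ValueError):
            continue  # in practice a data steward would log/report rejects
    return out

def load(rows, store):
    """Idempotent upsert keyed by id, so reruns do not duplicate rows."""
    for row in rows:
        store[row["id"]] = row
    return store

store = load(transform(extract(["1,10.5", "2,oops", "3,7.0"])), {})
```

Keeping each stage as a separate, idempotent function is what makes pipelines like this testable, repeatable and safe to rerun – the same qualities the role's design principles (self-service, repeatability, testability) call for.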

Risk & Governance
  • Identify technical risks and mitigate these (pre, during & post deployment)
  • Update / Design all application documentation aligned to the organization technical standards and risk governance frameworks
  • Create business cases & solution specifications for various governance processes (e.g. CTO approvals)
  • Participate in incident management & DR activity – applying critical thinking, problem solving & technical expertise to find the underlying cause of major incidents
  • Deliver on time & on budget (always)

Education and Experience required
  • Relevant NQF level 7 qualification in computer science, engineering, physics, mathematics or equivalent
  • Development and deployment of data applications
  • Design & Implementation of infrastructure tooling and work on horizontal frameworks and libraries
  • Creation of data ingestion pipelines between legacy data warehouses and the big data stack
  • Automation of application back-end workflows
  • Building and maintaining back-end services created with multiple service frameworks
  • Maintaining and enhancing applications backed by Big Data computation frameworks
  • Be eager to learn new approaches and technologies
  • Strong problem solving skills
  • Strong programming skills
  • Worked on Big Data platforms (Vanilla Hadoop, Cloudera or Hortonworks)
  • Preferred: Experience with Scala or other functional languages (Haskell, Clojure, Kotlin, Clean)
  • Preferred: Experience with some of the following: Apache Hadoop, Spark, Hive, Pig, Oozie, ZooKeeper, MongoDB, CouchbaseDB, Impala, Kudu, Linux, Bash, version control tools, continuous integration tools, SAS and SQL skills
  • At least three (3) years’ experience working in a Big Data environment (advantageous for all, a must for high-volume environments) – optimizing and building big data pipelines, architectures and data sets
Bachelor's Degree: Information Technology
Deadline: 9th November, 2020.

Thanks for reading New Job Vacancy at ABSA Bank Limited South Africa - Data Steward
