Data Engineer

Job Description

Key Accountabilities

  • Participate in the implementation of Data Management systems: Data Lakehouse, Data Streaming, Metadata Management, Reference Data Management
  • Develop data pipelines to bring new data into the Enterprise Data Fabric (a minimal pipeline sketch follows this list)
  • Ensure data pipelines are efficient, scalable, and maintainable
  • Comply with data engineering and development best practices (CI/CD, Code Management, Testing, Knowledge Management, Documentation, etc.)
  • Ensure that all Data Policies are met within the Enterprise Data Fabric
  • Ensure that implemented systems align with the target Data Architecture
  • Support Data teams (DQ, Data Governance, BI, Data Science) in achieving their goals
  • Maintain an agile delivery process based on one of the following frameworks: Kanban or Scrum
  • Ensure that SLAs with data consumers and data sources are maintained
  • Implement all necessary monitoring and alerting
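
For illustration, here is a minimal sketch of the kind of ingestion pipeline described above, written for Apache Airflow 2.x (one of the orchestrators listed in this posting); the DAG, task, and function names are hypothetical placeholders rather than part of the role:

    # Minimal Airflow 2.x DAG sketch: extract new records, then load them
    # into the data platform. All names here are illustrative only.
    from datetime import datetime

    from airflow import DAG
    from airflow.operators.python import PythonOperator


    def extract():
        """Pull new records from a source system (placeholder logic)."""
        print("extracting source data ...")


    def load():
        """Load the extracted records into the lakehouse (placeholder logic)."""
        print("loading into the lakehouse ...")


    with DAG(
        dag_id="example_ingestion_pipeline",  # hypothetical DAG name
        start_date=datetime(2024, 1, 1),
        schedule_interval="@daily",
        catchup=False,
    ) as dag:
        extract_task = PythonOperator(task_id="extract", python_callable=extract)
        load_task = PythonOperator(task_id="load", python_callable=load)

        extract_task >> load_task  # extract must finish before load starts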

Other Accountabilities

  • Software and Tools knowledge:
  • Python (Advanced level)
  • Airflow or Apache NiFi
  • K8s (OpenShift), Docker
  • RDBMS: MS SQL Server, PostgreSQL, Oracle
  • ETL (at least one of): SSIS, Informatica PowerCenter, IBM Datastage, Pentaho
  • SQL – Advanced user (Stored Procedures, Window Functions, Temp Tables, Recursive Queries; see the example after this list)
  • Git (GitHub/GitLab)
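
To illustrate the advanced SQL features named above (window functions and recursive queries), here is a small self-contained Python/SQLite sketch; the table and column names are invented for the example, and it assumes SQLite 3.25+ for window-function support:

    # Demonstrates a window function (running total per customer) and a
    # recursive CTE (generating month-start dates) on an in-memory database.
    import sqlite3

    conn = sqlite3.connect(":memory:")
    conn.executescript("""
        CREATE TABLE orders (customer TEXT, order_date TEXT, amount REAL);
        INSERT INTO orders VALUES
            ('acme',   '2024-01-05', 100.0),
            ('acme',   '2024-02-10', 250.0),
            ('globex', '2024-01-20',  75.0);
    """)

    # Window function: running total of order amounts per customer.
    running_totals = conn.execute("""
        SELECT customer,
               order_date,
               SUM(amount) OVER (
                   PARTITION BY customer ORDER BY order_date
               ) AS running_amount
        FROM orders
        ORDER BY customer, order_date;
    """).fetchall()

    # Recursive query: generate the first six month-start dates of 2024.
    month_starts = conn.execute("""
        WITH RECURSIVE months(d) AS (
            SELECT DATE('2024-01-01')
            UNION ALL
            SELECT DATE(d, '+1 month') FROM months WHERE d < DATE('2024-06-01')
        )
        SELECT d FROM months;
    """).fetchall()

    print(running_totals)
    print(month_starts)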

Skills

  • Familiarity with the following concepts:
  • Data Warehousing (Star/Snowflake schemas; a small schema sketch follows this list)
  • Data Lake
  • Agile methodologies (Kanban, Scrum)
  • Strong problem-solving skills
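
As a small illustration of the star-schema concept mentioned above, the sketch below creates one fact table and two dimension tables in an in-memory SQLite database; all table and column names are hypothetical:

    # Star schema sketch: a central fact table referencing dimension tables.
    import sqlite3

    conn = sqlite3.connect(":memory:")
    conn.executescript("""
        -- Dimension tables hold descriptive attributes.
        CREATE TABLE dim_customer (
            customer_key  INTEGER PRIMARY KEY,
            customer_name TEXT,
            country       TEXT
        );
        CREATE TABLE dim_date (
            date_key  INTEGER PRIMARY KEY,
            full_date TEXT,
            year      INTEGER,
            month     INTEGER
        );

        -- The fact table stores measures plus foreign keys to the dimensions,
        -- which gives the schema its star shape.
        CREATE TABLE fact_sales (
            customer_key INTEGER REFERENCES dim_customer(customer_key),
            date_key     INTEGER REFERENCES dim_date(date_key),
            quantity     INTEGER,
            revenue      REAL
        );
    """)

    # A typical analytical query joins the fact table to its dimensions.
    rows = conn.execute("""
        SELECT d.year, c.country, SUM(f.revenue) AS total_revenue
        FROM fact_sales f
        JOIN dim_customer c ON c.customer_key = f.customer_key
        JOIN dim_date     d ON d.date_key     = f.date_key
        GROUP BY d.year, c.country;
    """).fetchall()
    print(rows)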

Competencies

  • Adaptability/Flexibility
  • Creativity/Innovation
  • Decision Making/Judgment
  • Dependability
  • Initiative
  • Integrity/Ethics
  • Problem Solving/Analysis

Skills

  • Ability to interact with internal and external stakeholders
  • Ability to work under pressure
  • Accuracy and attention to detail

Education

  • Bachelor’s degree in Computer Science or equivalent