Location: Egypt (mostly remote)

Responsibilities

  • Build and maintain ETLs in a cloud environment (Azure and/or Google Cloud)
  • Write Spark jobs for data transformation on structured and semi-structured datasets
  • Design and implement a real-time data streaming architecture
  • Contribute to the design and implementation of the application databases and storage hierarchy
  • Write code for monitoring pipelines and data quality
  • Maintain the data infrastructure in a healthy and highly available state

Qualifications 

  • 2+ years hands-on experience in data engineering
  • Experience with cloud-native data warehouse solutions, including hands-on experience with at least one of Azure, AWS, or Google Cloud
  • Hands-on experience writing ETLs or other data pipelines in a cloud environment
  • Strong understanding of database concepts, including relational, NoSQL, and graph databases
  • Strong understanding of data warehousing and data modelling best practices
  • Proficiency in SQL and at least one of Python or Scala for Spark
  • Proficiency in distributed systems (storage and processing) including Hadoop, Spark and Kafka
  • Highly preferred: experience with real-time data streaming
  • Preferred: some experience with visualization tools such as Power BI or Tableau
