* Strong data analysis skills for working with structured and unstructured datasets.
* Strong knowledge of data warehousing/ETL standards and best practices.
* Experience with dimensional data models and the data pipelines that feed them.
* Advanced SQL skills and experience with OLAP databases such as SQL Data Warehouse.
* Experience with the Microsoft Azure public cloud, preferably including Azure Data Lake, Azure Data Factory, and SQL Server (SQL DB, SQL DW).
* Experience with big data frameworks and distributed processing, especially Spark.
* Experience with Databricks or other Hadoop distributions.
* Hands-on experience designing and developing data pipelines for data ingestion and transformation using Python (PySpark) and Spark SQL.
* Experience with version control (GitHub) and CI/CD pipelines using Azure DevOps is a plus.
Job Types: Full-time, Part-time, Temporary, Contract