emergiTEL
Location: Montreal, QC
Job Description:
- Use Azure Data Factory to ingest data into the data lake
- Use Databricks to transform and load data into the different layers
- Use Spark/Scala/Python to develop the loading libraries
- Participate in developing the new ingestion pattern using Databricks streaming and Kafka
- Develop best practices for data quality validation
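As a point of reference for the last responsibility, row-level data quality validation can be sketched in plain Python as below. This is an illustrative sketch only, not the employer's actual framework: the function names, rule set, and record layout are all hypothetical.

```python
# Hypothetical sketch of row-level data quality checks:
# required-field and type rules applied before loading a batch.

def validate_row(row, required_fields, type_map):
    """Return a list of human-readable issues for one record."""
    issues = []
    for field in required_fields:
        # Treat None and empty string as missing.
        if row.get(field) in (None, ""):
            issues.append(f"missing required field: {field}")
    for field, expected in type_map.items():
        value = row.get(field)
        if value is not None and not isinstance(value, expected):
            issues.append(
                f"{field}: expected {expected.__name__}, "
                f"got {type(value).__name__}"
            )
    return issues


def validate_batch(rows, required_fields, type_map):
    """Split a batch into valid rows and rejected (row, issues) pairs."""
    valid, rejected = [], []
    for row in rows:
        issues = validate_row(row, required_fields, type_map)
        if issues:
            rejected.append((row, issues))
        else:
            valid.append(row)
    return valid, rejected


rows = [
    {"id": 1, "name": "a"},
    {"id": None, "name": "b"},
    {"id": 3, "name": 7},
]
valid, rejected = validate_batch(rows, ["id"], {"id": int, "name": str})
# valid keeps the first row; the other two are rejected with reasons.
```

In a Spark/Databricks context the same rules would typically be expressed as DataFrame filters so they run distributed, but the separation of "valid" and "rejected with reasons" streams is the pattern that matters.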
To be retained for the internal process, the candidate must have the following qualifications:
- 3-5 years of development experience in Spark/Python (not PySpark)
- 3 years with Databricks
- 2 years with Azure DevOps
- Experience with PowerShell
- Experience with Azure Data Lake
- Azure Data Factory (ADF): must be able to build a simple "copy data" pipeline (we don't use Data Flow in ADF, just Copy Data)
- Bilingualism: able to speak one language well and understand the other well
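For context on the ADF requirement above, a "copy data" pipeline is authored as a JSON pipeline definition containing a Copy activity. The sketch below is illustrative only; the pipeline, dataset, and source/sink names are placeholders, not the employer's configuration.

```json
{
  "name": "CopyRawToDataLake",
  "properties": {
    "activities": [
      {
        "name": "CopySourceToRaw",
        "type": "Copy",
        "inputs": [
          { "referenceName": "SourceDataset", "type": "DatasetReference" }
        ],
        "outputs": [
          { "referenceName": "DataLakeRawDataset", "type": "DatasetReference" }
        ],
        "typeProperties": {
          "source": { "type": "DelimitedTextSource" },
          "sink": { "type": "ParquetSink" }
        }
      }
    ]
  }
}
```

A pipeline like this simply copies files from a source dataset into the data lake's raw layer; per the posting, no Data Flow transformations are involved.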
Soft Skills
- Strong interpersonal, teamwork, coordination and consensus building skills
- Strong communication, documentation, storytelling, creativity, and presentation skills
- Strong organizational skills, the ability to perform under pressure and to manage multiple priorities with competing demands
To apply for this job, please visit jobviewtrack.com.