The role requires understanding business needs and translating them into data models, building robust data pipelines, and developing and maintaining data lakes.
It also involves implementing and maintaining CI/CD pipelines for data solutions. The ideal candidate will be proficient in big data tools, preferably on the Azure stack (ADF, Databricks, Synapse Analytics, Azure DevOps), and in programming languages such as Spark (Scala/Python) and SQL, and will have strong analytical skills for working with structured and unstructured datasets.
This role is on the Data Engineering team, which serves the data needs of the entire enterprise. The group focuses on the collection, organisation, governance, and access management of data to support productivity, efficiency, and decision-making for the business.