
Medior or Senior Data Engineer with Azure, Python and SQL

CareerValue B.V. - Amsterdam
The Role: As a Data Engineer Consultant with Azure, Python and SQL, you will be responsible for designing, implementing and maintaining our data infrastructure, with the focus on Azure cloud

Python Data Engineer

Stafide - via Talent - Amsterdam - 11-03-2025

Job Description

As a Python Data Engineer, you will:
- Analyze and interpret business documents, particularly in the Finance and Risk domain, focusing on corporate credit risk analytical models.
- Collaborate with stakeholders to translate business requirements into technical specifications.
- Design, build, and maintain scalable data pipelines and workflows using Python, PySpark, and Databricks.
- Convert SAS code of analytical assets to Python or PySpark, ensuring accuracy and efficiency.
- Conduct thorough testing, validation, and troubleshooting to meet performance and functionality standards.
- Deploy solutions using Azure Pipelines and monitor their performance post-deployment.
- Develop a comprehensive migration plan, including timelines, resource allocation, and risk management.
- Collaborate with cross-functional teams, including analysts, data scientists, SAS developers, and engineers, to ensure successful program migration and integration.
- Provide training and support on new Databricks and Python/PySpark-based solutions.
- Maintain detailed documentation of migration processes, code changes, testing procedures, and performance metrics.

What You Bring to the Table:
- Strong hands-on experience (5+ years) in Python and PySpark, with deep knowledge of data engineering.
- Proficiency in Databricks and Azure Data Factory (ADF) for building and deploying data pipelines.
- Expertise in SQL and its application in data analysis.
- Demonstrated experience working with analytical models, particularly in the Finance & Risk domain.
- A solid understanding of data structures, data quality, and data migration.
- Strong debugging and troubleshooting skills, with expertise in test automation and frameworks.
- Excellent communication skills and the ability to align stakeholders with technical solutions under pressure.
- Experience working with APIs and integrating Databricks jobs using AppServices and Python.

You should possess the ability to:
- Lead end-to-end development, from requirement analysis to deployment, ensuring high-quality, scalable, and performant solutions.
- Work efficiently under tight timelines, solving problems and addressing risks along the way.
- Collaborate and communicate effectively with various stakeholders, including business analysts, modeling teams, and SAS developers.
- Develop comprehensive migration plans and ensure compliance with internal policies and procedures.
- Troubleshoot and resolve technical issues, offering support during the post-deployment phase.

What We Bring to the Table:
- A challenging and dynamic work environment with a leading-edge technology stack.
- Opportunities to collaborate with cross-functional teams and stakeholders from the Finance & Risk domain.
- The chance to work with cutting-edge tools like Databricks, PySpark, and Azure Pipelines.
- Competitive compensation and the potential for career growth within the company.
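To give a flavor of the SAS-to-Python conversion work the posting describes, here is a minimal, hypothetical sketch: the SAS DATA step in the comment is invented for illustration (it is not code from Stafide or its client), and the Python function shows one plain-Python equivalent of the same filter-and-derive logic.

```python
# Hypothetical SAS original (invented example, not from the role):
#
#   data high_risk;
#       set credit.exposures;
#       where rating in ("B", "CCC") and exposure_eur > 1000000;
#       risk_weight = exposure_eur * 1.5;
#   run;

def high_risk(exposures):
    """Plain-Python equivalent of the SAS DATA step above:
    keep low-rated, large exposures and derive a risk_weight column."""
    return [
        {**row, "risk_weight": row["exposure_eur"] * 1.5}
        for row in exposures
        if row["rating"] in ("B", "CCC") and row["exposure_eur"] > 1_000_000
    ]
```

In practice the role targets PySpark on Databricks, where the same logic would be a `DataFrame.filter` followed by `withColumn`; the list-of-dicts version here just keeps the sketch self-contained.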




Vacancies related to Python Data Engineer

Data Engineer - Python

STAFIDE - Amsterdam
Job Description As a Data Engineer - Python, you will: Ensure the team adheres to industry standards and best practices throughout development. Work within an agile startup environment, leveraging agile methodologies

Senior Data Engineer (Cloud, Spark, Hadoop, Python)

CareerValue B.V. - Amsterdam
The Role: You are responsible for building the pipelines yourself and, whenever you wish, you can also present the resulting insights. Note that this is a consultant role, and you will help leading companies to

Python Data Engineer

STAFIDE - Amsterdam
Job Description As a Python Data Engineer, you will: Analyze and interpret business documents, particularly in the Finance and Risk domain, focusing on corporate credit risk analytical models. Collaborate with stakeholders

Python Data Engineer

Searchsoftware - Amsterdam
True Python specialists have long known it: together you achieve more. Not only by working in a team, but also by keeping each other sharp and continuing to learn. Sparring over the best solutions, reviewing code with