Senior Data Engineer (railway environment, data architectures, data pipelines with Python, AWS CDK, GitLab CI, ETL, Polars, Data Science, ML, Scrum, SAFe) Remote/Frankfurt

Job description:

**Under the contract, the contractor provides the following services independently and in a self-organised manner:**
* Data pipeline development: Setup, maintenance and optimization of scalable data pipelines for data integration, processing and delivery. Ensuring data quality, monitoring and automation of data processes.
* Cloud technologies: Work with cloud platforms, especially AWS (Amazon Web Services), to implement data systems in the cloud, using cloud-based services such as S3, Athena and Lambda.
* Data modeling and architecture: Development and implementation of data architectures. Design of scalable data models tailored to long-term data requirements.
* Big data technologies: Handling large volumes of data with suitable technologies and ensuring efficient storage and processing.
* DevOps and CI/CD: Use of DevOps practices to automate and improve development and deployment processes, using tools such as Git and Docker for Continuous Integration and Continuous Deployment (CI/CD).
* IT security and compliance: Implementation of IT security policies, regulatory requirements (compliance), DB operator specifications and programming guidelines. Ensuring data protection and security in data processing.
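The pipeline work described above can be sketched as a minimal extract-transform-load stage with a basic data-quality gate. This is purely illustrative, not the project's actual code; the column names (`station`, `value`) and the JSON-lines sink standing in for S3 are assumptions.

```python
import csv
import io
import json

def extract(csv_text: str) -> list[dict]:
    """Extract step: parse raw CSV text into rows."""
    return list(csv.DictReader(io.StringIO(csv_text)))

def transform(rows: list[dict]) -> list[dict]:
    """Transform step: cast fields, dropping rows that fail a quality check."""
    out = []
    for row in rows:
        try:
            value = float(row["value"])
        except (KeyError, ValueError):
            continue  # data-quality gate: skip malformed or incomplete rows
        out.append({"station": row["station"], "value": value})
    return out

def load(rows: list[dict]) -> str:
    """Load step: serialise to JSON lines (stand-in for a real sink such as S3)."""
    return "\n".join(json.dumps(r) for r in rows)

raw = "station,value\nFFM,1.5\nBER,oops\n"
print(load(transform(extract(raw))))  # the malformed BER row is dropped
```

In a production pipeline each step would typically also emit metrics (row counts, rejection rates) to support the monitoring responsibility listed above.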
**Must-have requirements:**
* Experience in processing and preparing data from control and safety technology (Leit- und Sicherungstechnik) in the railway environment. (Minimum 1 year, demonstrable via at least 1 project in the CV.)
* Practical experience in developing scalable data architectures. (Minimum 1.5 years, demonstrable via at least 1 project in the CV.)
* Experience in building and maintaining scalable data pipelines with Python. (Minimum 1.5 years, demonstrable via at least 1 project in the CV.)
* Experience in developing a library based on AWS CDK / GitLab CI for creating infrastructure as code. (Minimum 1 project detailed in the CV.)
* Experience in developing a utility library for flexible and efficient handling of different file formats within ETL pipelines. (Minimum 1 project detailed in the CV.)
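The AWS CDK / GitLab CI requirement above refers to deploying infrastructure as code from a pipeline. A hypothetical minimal GitLab CI configuration for that workflow might look like this (stage names, image, and branch rule are assumptions, not the project's actual setup):

```yaml
# Hypothetical GitLab CI pipeline deploying AWS CDK infrastructure as code.
stages:
  - synth
  - deploy

cdk-synth:
  stage: synth
  image: node:20
  script:
    - npm ci
    - npx cdk synth          # synthesize CloudFormation templates
  artifacts:
    paths:
      - cdk.out/

cdk-deploy:
  stage: deploy
  image: node:20
  script:
    - npm ci
    - npx cdk deploy --require-approval never
  rules:
    - if: $CI_COMMIT_BRANCH == "main"   # deploy only from main
```

In practice the deploy job would also need AWS credentials, typically injected via protected CI/CD variables or OIDC.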
**Requirements:**
* Practical knowledge of Polars (Python). (Minimum 1.5 years, demonstrable via at least 1 project in the CV.)
* Experience in communication and cooperation with system and subject-matter experts in the railway environment. (Minimum 1 project detailed in the CV; validated in an interview.)
* Good knowledge of Data Science and Machine Learning in Python. (Minimum 3 years, demonstrable via at least 3 projects in the CV.)
* Good knowledge of agile methods (Scrum, SAFe). (Minimum 1 project detailed in the CV.)

Be a part of our community

Join us on Telegram or Discord to get instant notifications about the newest freelance projects and talk to some of the smartest software engineers in the world.