Job description:
- Strong knowledge of Python and PySpark
- SQL proficiency (e.g. Postgres, MS SQL, DB2, Teradata, Spark SQL)
- Cloud experience with AWS and Azure
- Experience with Databricks and building financial data processing pipelines
- Experience with data warehousing and data lake architectures, especially dimensional modeling
- Experience with integration architectures and ETL methods
- Knowledge of data streaming and event-driven architectures (especially Kafka)
- Knowledge of DevOps principles
- 5+ years of data engineering experience, including experience in the banking industry