Data Engineer

Job description:

We're looking for a Data Engineer for our client Aurobay. Aurobay develops and produces world-class powertrain solutions for a global market. It is a pioneering global supplier of propulsion technology, development services, and contract manufacturing, with manufacturing capabilities on two continents.

As a Data Engineer, you will be responsible for the design, development, and maintenance of our data environments. You will be instrumental in optimizing data workflows, ensuring data quality, and integrating data from different sources.

Key responsibilities:

* Design, develop, and maintain scalable data pipelines using Databricks to support data ingestion, transformation, and loading (ETL/ELT) processes.
* Collaborate with and mentor other product teams and engineers to build and maintain the data platform that integrates data from multiple sources.
* Optimize data processing workflows and ensure they adhere to architectural principles and meet performance and security requirements.
* Implement and enforce data quality checks, monitoring, and alerting systems to ensure the integrity and reliability of the data.
* Leverage Databricks features such as Delta Lake and Databricks Workflows to enhance data pipeline performance and reliability.
* Work with cloud infrastructure teams to ensure the platform's performance, availability, and scalability within the cloud environment.

Required:

* At least 5 years of experience as a Data Engineer or in a similar role, with at least 2 years of hands-on experience in Databricks.
* Knowledge of IaC (Infrastructure as Code), CI/CD pipelines, version control (Git), and DevOps practices in data engineering.
* Proficiency in Databricks and familiarity with key features like Delta Lake, Databricks Jobs, and Databricks SQL.
* Excellent skills in Python for data processing and pipeline development.
* Strong understanding of distributed data processing technologies such as Apache Spark.
* Experience in working with big data technologies and tools, including Spark, Kafka, Hadoop, or similar.
* Experience with the medallion architecture in Databricks.
* Experience with test automation and data quality assurance.

Beneficial:

* Experience of Azure infrastructure (especially ADF, Blob Storage & DataLake).
* Experience of agile practices working in a product organization.
* Understanding and experience of data mesh concepts.
* Experience of using Terraform and/or Bicep.
* Experience of working with streaming ETL.
* Understanding of or experience with concepts such as SLOs, GitHub, and control planes.

Start date: January 2025
End date: October 2026
Location: Gothenburg, Sweden

Be a part of our community

Join us on Telegram or Discord to get instant notifications about the newest freelance projects and talk to some of the smartest software engineers in the world.