Salary: €45,000

DataOps Engineer

A leading logistics and supply chain company is seeking a DataOps Engineer to join its growing data engineering and analytics team. The company leverages advanced data solutions to optimize everything from warehouse operations to real-time route planning, and is investing heavily in data automation, quality, and scalability. This role is ideal for a data-focused engineer who thrives in production environments and is passionate about building reliable data systems to support business-critical decision-making in a fast-moving logistics ecosystem.

The DataOps Engineer will be responsible for enabling and automating the flow of data across platforms, ensuring data reliability, observability, and efficiency in support of analytics, operations, and AI-driven logistics systems. This includes managing data pipelines, implementing CI/CD for data workflows, monitoring infrastructure, and supporting data governance efforts.

Key Responsibilities

  • Design, build, and maintain scalable data pipelines for real-time and batch processing.
  • Implement CI/CD pipelines and version control for data workflows, models, and infrastructure.
  • Collaborate with data engineers, analysts, and DevOps teams to ensure reliable data delivery across systems.
  • Monitor data infrastructure using observability tools and respond to data pipeline failures or quality issues.
  • Automate testing, validation, and deployment of data assets to improve consistency and reduce errors.
  • Support data governance and compliance efforts through metadata management and data lineage tracking.
  • Work with cloud platforms (e.g., AWS, Azure, or GCP) to manage and optimize cloud-native data infrastructure.
  • Identify and address performance bottlenecks in data pipelines and processing environments.

Requirements

  • 3+ years of experience in DataOps, Data Engineering, or a related DevOps/Data role.
  • Strong knowledge of ETL/ELT tools and orchestration frameworks (e.g., Airflow, dbt, Prefect, Luigi).
  • Experience with cloud platforms (preferably AWS), including services such as S3, Lambda, Glue, and Redshift.
  • Proficiency in SQL and Python (or other scripting languages) for data manipulation and automation.
  • Experience with infrastructure as code (e.g., Terraform, CloudFormation) and containerization tools (e.g., Docker, Kubernetes).
  • Familiarity with data quality frameworks, monitoring tools, and incident management.
  • Solid understanding of version control systems (e.g., Git) and CI/CD practices.
  • Ability to work with large, complex datasets and ensure high data availability and performance.