Senior Algorithm Engineer (Python / Spark – Distributed Data Processing)

Job type: Temporary
Publication date: 29 January 2026

General Description

Location: UK (Outside IR35) / Belgium / Netherlands / Germany (B2B)
Working model: Remote
Start: ASAP

Senior Algorithm Engineer (Python / Spark – Distributed Data Processing)
We’re hiring a Senior Algorithm Engineer to join a data-intensive SaaS platform operating in a complex, regulated industry. This is a hands-on senior IC role focused on building and optimising distributed data pipelines that power pricing, forecasting and billing calculations at scale. Note: this is not an ML / Data Science / GenAI role.

What you’ll be doing

  • Design, build and deploy algorithms/data models supporting pricing, forecasting and optimisation use cases in production
  • Develop and optimise distributed Spark / PySpark batch pipelines for large-scale data processing (see the sketch after this list)
  • Write production-grade Python workflows implementing complex, explainable business logic
  • Work with Databricks for job execution, orchestration and optimisation 
  • Improve pipeline performance, reliability and cost efficiency across high-volume workloads
  • Collaborate with engineers and domain specialists to translate requirements into scalable solutions
  • Provide senior-level ownership through technical leadership, mentoring and best-practice guidance
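
To make the day-to-day work concrete, here is a minimal PySpark sketch of the kind of batch pipeline described above. The table names, columns and the flat unit rate are hypothetical placeholders for illustration, not details of the actual platform:

```python
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("pricing-batch").getOrCreate()

# Hypothetical source: half-hourly smart-meter readings.
readings = spark.read.table("raw.meter_readings")

# Aggregate consumption per meter per day and apply a simple,
# explainable flat-rate charge (0.28 GBP/kWh is a made-up figure).
daily_charges = (
    readings
    .withColumn("day", F.to_date("reading_ts"))
    .groupBy("meter_id", "day")
    .agg(F.sum("kwh").alias("daily_kwh"))
    .withColumn("charge_gbp", F.round(F.col("daily_kwh") * F.lit(0.28), 2))
)

# Partitioned write so downstream billing jobs can prune by day.
(daily_charges.write
    .mode("overwrite")
    .partitionBy("day")
    .saveAsTable("curated.daily_charges"))
```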

Key experience required

  • Proven experience delivering production algorithms/data models (forecasting, pricing, optimisation or similar)
  • Strong Python proficiency and modern data stack exposure (SQL, Pandas/NumPy + PySpark; Dask/Polars/DuckDB a bonus)
  • Ability to build, schedule and optimise Spark/PySpark pipelines in Databricks (Jobs/Workflows, performance tuning, production delivery); a brief tuning sketch follows this list
  • Hands-on experience with distributed systems and scalable data processing (Spark essential)
  • Experience working with large-scale/high-frequency datasets (IoT/telemetry, smart meter, weather, time-series)
  • Clear communicator able to influence design decisions, align stakeholders and operate autonomously
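
As a rough illustration of the performance-tuning side of the role, the sketch below shows a few standard Spark levers: adaptive query execution, broadcast-join thresholds and key-based repartitioning. The config values and table/key names are illustrative assumptions, not prescriptions:

```python
from pyspark.sql import SparkSession

spark = (
    SparkSession.builder
    .appName("pipeline-tuning")
    # Let AQE coalesce shuffle partitions and mitigate skew at runtime.
    .config("spark.sql.adaptive.enabled", "true")
    .config("spark.sql.adaptive.skewJoin.enabled", "true")
    # Broadcast small dimension tables (64 MB threshold here) to avoid shuffle joins.
    .config("spark.sql.autoBroadcastJoinThreshold", str(64 * 1024 * 1024))
    .getOrCreate()
)

# Hypothetical tables, continuing the earlier sketch.
charges = spark.read.table("curated.daily_charges")
tariffs = spark.read.table("ref.tariffs")  # small lookup table

# Repartition on the join key up front so a skewed key space is
# spread evenly across executors before the wide join.
priced = (
    charges.repartition(200, "meter_id")
           .join(tariffs, "meter_id")
)
```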

Nice to have

  • Energy/utilities domain exposure
  • Cloud ownership experience (AWS preferred, Azure also relevant)
  • Experience defining microservices / modular components supporting data products
