Data Ops Engineer (Machine Learning)

  • Data Engineering
  • Edinburgh
  • Permanent
  • £70,001-£80,000
  • About the role

    Competitive Salary + Comprehensive Benefits

    MBN have partnered with one of the largest banking institutions in the UK. We're looking for passionate professionals who want to grow their talents and achieve great things. If that sounds like you, we want to talk to you about joining the team.

    The Role:

    We’ll look to you to drive value for the customer through modelling, sourcing and data transformation. You’ll be working closely with core technology, architecture and Data Science teams to deliver strategic ML and AI models, while driving Agile and DevOps adoption in the delivery of ML solutions.

    Skills & Experience:

    • Delivering the automation of data engineering pipelines and ML processes to support the deployment of data science use cases through the removal of manual steps

    • Supporting best practice for the development of data and ML pipelines, as well as the creation of reusable code and data assets

    • Conducting common and specialised data monitoring and ML/AI model performance analysis in production

    • Anticipating the challenges associated with ML models in production and designing efficient, elegant solutions to these (e.g., retraining strategies, monitoring alerts)

    • Educating and embedding new data techniques into the business through role modelling, training and experiment design oversight

    Is this the job for you?

    To be successful in this role, you’ll need to be an intermediate-level programmer with experience of building machine learning pipelines. Ideally, you will have a qualification in Computer Science, Software Engineering or a relevant quantitative discipline. You’ll also need a strong understanding of Data Science techniques, including using data to drive insights, applying statistical and machine learning models, and managing dependencies with wider teams and the end customer. You should have a proven track record in extracting value and features from large-scale data.

    Ideally what you will have:

    • Extensive experience using RDBMS, Hadoop and SQL

    • Experience of ETL technical design, automated data or ML model quality testing, QA and documentation, data warehousing, data modelling and data wrangling

    • Experience in designing, building and deploying highly scalable and reliable data or ML pipelines using big data techniques and tools (for example Airflow, Python, Redshift/Snowflake, StreamSets)

    • Software development experience with proficiency in Python, Java, Scala or another programming language

    • Experience using professional software engineering best practices for the full development life cycle, including coding standards, code reviews, version control management, and testing

    • Experience building Continuous Integration/Continuous Delivery pipelines using tools such as Jenkins or TeamCity

    • Experience of, or a strong understanding of, DevOps and MLOps practices and principles

    • Good critical thinking and proven problem-solving abilities

    • Experience of shipping scalable data and ML solutions in the cloud (AWS, Azure, GCP)

    • Experience of Unix scripting, working with NoSQL databases, and working with pub/sub and event-driven technologies such as Kafka

    Does this look like the role for you?

    Apply now by submitting your CV, or feel free to reach out for a confidential chat.