This page discusses some MLOps best practices. Some best practices apply to the whole ML pipeline, others to a specific step in the pipeline. We recommend that you choose an MLOps pipeline that supports these best practices and avoid common MLOps pain points.
A typical ML pipeline is an iterative process consisting of two major steps:
- Data: Data collection, data cleaning and so on.
- Model: ML model development, model optimization, and so on.
MLOps best practices for data
Data is the most critical part of the MLOps pipeline. Here are some best practices to help you succeed.
Use a central repository
You need a central repository for your data in which team members can store any dataset generated for their projects. When datasets live in the central repository, other team members can reuse or share them.
This means that when you start a new machine learning (ML) project, you don't have to start with raw data all over again. Instead, you can pick up pre-built, high-level, clean datasets from the repository. This can save a tremendous amount of time and effort in the long run by eliminating duplicate work.
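To make the idea concrete, here is a minimal sketch of such a central dataset repository, assuming a hypothetical file-based layout (one CSV file per named dataset). The `DatasetRepository` class and its method names are illustrative, not a real library API.

```python
import csv
import os

# Minimal sketch of a file-based central dataset repository.
# Hypothetical layout: one CSV file per named dataset under a shared root.
class DatasetRepository:
    def __init__(self, root):
        self.root = root
        os.makedirs(root, exist_ok=True)

    def save(self, name, header, rows):
        """Store a cleaned dataset so other team members can reuse it."""
        path = os.path.join(self.root, f"{name}.csv")
        with open(path, "w", newline="") as f:
            writer = csv.writer(f)
            writer.writerow(header)
            writer.writerows(rows)
        return path

    def load(self, name):
        """Pick up a pre-built dataset instead of starting from raw data."""
        path = os.path.join(self.root, f"{name}.csv")
        with open(path, newline="") as f:
            reader = csv.reader(f)
            header = next(reader)
            return header, [row for row in reader]

repo = DatasetRepository("central_repo")
repo.save("iris_clean", ["sepal_length", "label"], [["5.1", "0"], ["6.2", "1"]])
header, rows = repo.load("iris_clean")
```

In practice the root would point at shared object storage rather than a local directory, but the save-once, load-anywhere workflow is the same.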
Validate your data
Identifying errors in data is essential, and you want to find them as early as possible in the ML lifecycle. Run checks on the data flow to ensure you'll have an error-free and reliable production pipeline.
Validate your data by writing tests for your datasets, and run those tests regularly by integrating them into your ML pipeline. This way, you can catch and correct data problems right away.
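A minimal sketch of what such dataset tests might look like, assuming rows are dicts and a hypothetical schema of (type, min, max) per column. Real pipelines would use a validation library, but the shape of the checks is the same.

```python
# Sketch of dataset validation checks that could run inside an ML pipeline.
# Rows are dicts; the schema maps column name -> (expected type, min, max).
def validate(rows, schema):
    """Return a list of error strings; an empty list means the data passed."""
    errors = []
    for i, row in enumerate(rows):
        for column, (expected_type, lo, hi) in schema.items():
            value = row.get(column)
            if value is None:
                errors.append(f"row {i}: missing {column}")
            elif not isinstance(value, expected_type):
                errors.append(f"row {i}: {column} has wrong type")
            elif not (lo <= value <= hi):
                errors.append(f"row {i}: {column}={value} out of range")
    return errors

schema = {"age": (int, 0, 120), "income": (float, 0.0, 1e7)}
good = [{"age": 34, "income": 52000.0}]
bad = [{"age": -5, "income": None}]
good_errors = validate(good, schema)
bad_errors = validate(bad, schema)
```

Wiring `validate` into the pipeline as a gating step means bad data fails fast instead of silently degrading the model downstream.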
Version your data
Data versioning is required to develop reproducible results and helps automate machine learning model development. The ability to roll back to an earlier version can be vital in a productionized model!
You can build a versioning tool from scratch, or use one of the many tools available on the market.
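As a rough illustration of the from-scratch route, here is a sketch that identifies each dataset version by a content hash, so changes are detected automatically and earlier versions remain retrievable. The in-memory `store` dict stands in for real durable storage.

```python
import hashlib
import json

# Sketch of from-scratch data versioning: each dataset state is identified
# by a hash of its contents, so you can detect changes and roll back.
def dataset_version(rows):
    """Return a short content hash identifying this exact dataset state."""
    payload = json.dumps(rows, sort_keys=True).encode("utf-8")
    return hashlib.sha256(payload).hexdigest()[:12]

store = {}  # version id -> dataset snapshot (stand-in for real storage)

v1_rows = [{"x": 1.0, "y": 0}, {"x": 2.0, "y": 1}]
v1 = dataset_version(v1_rows)
store[v1] = v1_rows

v2_rows = v1_rows + [{"x": 3.0, "y": 1}]  # data changed -> new version id
v2 = dataset_version(v2_rows)
store[v2] = v2_rows
```

Because identical data always hashes to the same version id, reruns are reproducible, and rolling back is just loading the snapshot for an earlier id.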
MLOps best practices for models
Machine learning model creation is mostly rapid experimentation with lots of iteration. Here are some best practices to help you do it well (again and again and again).
Use a model store
A model store is a centralized repository that stores and versions metadata for machine learning models and manages model artifacts.
For all ML projects, you need to save your trained model at some point and load it from where it is saved whenever you use the model to make predictions. A team member should be able to easily pick up an existing model saved by another team member and use it for their own project.
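A minimal sketch of that save-and-load workflow, assuming a hypothetical file-based model store keyed by name and version. The `ThresholdModel` is a toy stand-in for a real trained estimator; `save_model` and `load_model` are illustrative helpers, not a real library API.

```python
import os
import pickle

# Toy "model" standing in for a real trained estimator.
class ThresholdModel:
    def __init__(self, threshold):
        self.threshold = threshold

    def predict(self, values):
        return [1 if v >= self.threshold else 0 for v in values]

# Sketch of a model store: save a model plus metadata under a version key,
# and load it back later (possibly by a different team member) to predict.
def save_model(store_dir, name, version, model, metadata):
    os.makedirs(store_dir, exist_ok=True)
    path = os.path.join(store_dir, f"{name}-{version}.pkl")
    with open(path, "wb") as f:
        pickle.dump({"model": model, "metadata": metadata}, f)
    return path

def load_model(store_dir, name, version):
    path = os.path.join(store_dir, f"{name}-{version}.pkl")
    with open(path, "rb") as f:
        entry = pickle.load(f)
    return entry["model"], entry["metadata"]

save_model("model_store", "churn", "v1", ThresholdModel(0.5),
           {"metric": "accuracy", "value": 0.91})
model, meta = load_model("model_store", "churn", "v1")
predictions = model.predict([0.2, 0.8])
```

Storing metadata alongside the artifact is what lets a colleague judge whether an existing model is fit for their project before reusing it.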
Monitor your model
Due to evolving data profiles, model performance might degrade over time. ML models tend to show decreases in performance more often than conventional software systems do. Monitor your model to catch any behavior that might indicate issues so you can correct them immediately.
If you are interested in creating a good machine learning model and maintaining its performance for a long period, then your pipeline needs a model monitoring sub-system.
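One simple check such a monitoring sub-system might run, as a sketch: compare the mean of an input feature in live traffic against its training-time baseline, and alert when the shift exceeds a tolerance. Real monitoring uses richer drift statistics (population stability index, KS tests), but the alerting pattern is the same. The baseline and tolerance values below are illustrative.

```python
import statistics

# Sketch of a model-monitoring check: alert when a live feature's mean has
# drifted too far from the mean observed on the training data.
def drift_alert(baseline_mean, live_values, tolerance):
    """Return True when the live feature mean has drifted past tolerance."""
    live_mean = statistics.mean(live_values)
    return abs(live_mean - baseline_mean) > tolerance

baseline = 50.0  # feature mean observed on the training data (illustrative)
stable = drift_alert(baseline, [49.0, 51.0, 50.5], tolerance=5.0)
drifted = drift_alert(baseline, [70.0, 72.0, 68.0], tolerance=5.0)
```

When the check fires, the typical responses are to investigate the data source or trigger a retraining run.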
Version your model
Version control tracks changes to the machine learning model over time. With version control, you can see the whole journey of a model saved within your model store.
Experimentation requires keeping track of model development history. Version control tracks every time you change a parameter in your ML algorithm, use a different performance metric, or try a different ML algorithm for your problem. You will have different versions to compare how the changes impacted model performance.
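The comparison workflow above can be sketched as a tiny version log: each run records its parameters and metric under a new version id, and you compare versions to see how changes affected performance. The parameter names and metric values are made up for illustration.

```python
# Sketch of model version tracking: log each model version with its
# parameters and metric, then compare versions across experiments.
history = []

def log_version(params, metric_name, metric_value):
    version = f"v{len(history) + 1}"
    history.append({"version": version, "params": params,
                    metric_name: metric_value})
    return version

log_version({"algorithm": "logreg", "C": 1.0}, "accuracy", 0.86)
log_version({"algorithm": "logreg", "C": 0.1}, "accuracy", 0.88)
log_version({"algorithm": "random_forest", "n_estimators": 100}, "accuracy", 0.91)

best = max(history, key=lambda run: run["accuracy"])
```

Because every parameter change produces a new logged version, the whole journey of the model is visible, and the best configuration can be promoted to the model store.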
General MLOps best practices
These best practices do not apply to any single stage of the process, but they are good guidelines to follow.
Enforce coding standards
When working on a machine learning project, you need your code to be readable and maintainable. If you follow the coding standards related to your technology or programming languages, then code written by different ML engineers will be uniform, maintainable, and readable. Coding standards not only improve the programming practices, they also increase the programmer’s efficiency.
For example, it is a common for data scientists to write all the code in a single notebook. It might be nice to do some discovery work, but this is definitely not a production-friendly coding style! A good standard might be to define and divide different components of your ML project into separate places such as Data Testing, Data Transformation, Model Development, and Production Jobs.
Continuously integrate and deploy (CI/CD)
Continuous integration in machine learning reruns the machine learning pipeline whenever there is an update in project code or data. It helps developers merge their code changes frequently into a shared code repository for building and testing. This helps ensure that code from various developers is merged seamlessly and any errors are quickly reported back to developers.
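The trigger logic behind that rerun behavior can be sketched as a fingerprint check over code and data: if the combined fingerprint differs from the last successful run, the pipeline reruns. This is a conceptual sketch, not how any particular CI system implements it.

```python
import hashlib

# Sketch of the CI trigger for ML: rerun the pipeline whenever the combined
# fingerprint of project code and data changes.
def fingerprint(code_text, data_text):
    digest = hashlib.sha256()
    digest.update(code_text.encode("utf-8"))
    digest.update(data_text.encode("utf-8"))
    return digest.hexdigest()

def should_rerun(last_fingerprint, code_text, data_text):
    """Compare the current state against the last successful run."""
    return fingerprint(code_text, data_text) != last_fingerprint

last = fingerprint("def train(): ...", "x,y\n1,0\n")
unchanged = should_rerun(last, "def train(): ...", "x,y\n1,0\n")
data_changed = should_rerun(last, "def train(): ...", "x,y\n1,0\n2,1\n")
```

Note that, unlike conventional CI, the data is part of the trigger: a pipeline can need rerunning even when no code changed.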
Continuously test everything
Continuous testing in machine learning operations is a type of machine learning model testing. It ensures that the model is tested at every stage of the development, training, and validation lifecycle. The primary aim of continuous testing is to measure the quality of the machine learning model, and the data fed into it, at every step of the pipeline by testing early and frequently.
Your MLOps platform should continuously test your model and validate your data. You might test by setting a particular ML metric you want to track to measure your model performance, setting a certain threshold for that metric to get alerts from the platform whenever something goes wrong, or writing data quality tests to check the final output of your model.
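The metric-threshold idea can be sketched as a simple gate: the pipeline tracks one metric, and any value below the threshold raises an alert. The metric name and threshold below are illustrative.

```python
# Sketch of a continuous-testing gate: fail the pipeline (or raise an alert)
# whenever a tracked ML metric drops below a chosen threshold.
def metric_gate(metric_name, value, threshold):
    """Return (passed, message) so the pipeline can alert on failures."""
    if value < threshold:
        return False, f"ALERT: {metric_name}={value:.3f} below {threshold}"
    return True, f"OK: {metric_name}={value:.3f}"

ok, ok_msg = metric_gate("accuracy", 0.93, threshold=0.90)
bad, alert_msg = metric_gate("accuracy", 0.72, threshold=0.90)
```

Running the same gate after each pipeline stage, rather than only at the end, is what makes the testing "continuous".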
Build it to scale up
We live in a big data world. You need to make sure that your ML pipeline is scalable and able to process terabyte-scale data. This is crucial, especially given the current trend toward deep learning applications.
If your business truly has big data, then scalability is arguably the first thing you should ask of your MLOps platform. It comes down to an MLOps application being able to run tasks in parallel to increase efficiency and process big data.
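The parallel-tasks idea can be sketched as chunked processing: split the data into chunks and run the same task over all chunks concurrently, as a platform would do across workers. The "task" here is a toy feature computation; a real platform would distribute chunks across machines rather than threads.

```python
from concurrent.futures import ThreadPoolExecutor

# Toy stand-in for real per-chunk feature work.
def process_chunk(chunk):
    return [value * 2.0 for value in chunk]

# Sketch of scale-out processing: split the data into chunks and run the
# same task over all chunks in parallel, preserving order.
def process_in_parallel(values, chunk_size, max_workers=4):
    chunks = [values[i:i + chunk_size]
              for i in range(0, len(values), chunk_size)]
    with ThreadPoolExecutor(max_workers=max_workers) as pool:
        results = pool.map(process_chunk, chunks)
    return [v for chunk in results for v in chunk]

data = list(range(10))
doubled = process_in_parallel(data, chunk_size=3)
```

Because each chunk is independent, throughput scales with the number of workers, which is the property that lets a pipeline grow from megabytes to terabytes.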