Three ways to categorize machine learning platforms

Fredrik Rönnlund

Machine learning (ML) platforms take many forms and usually solve only one or a few parts of the ML problem space. So how do you make sense of the different platforms that all call themselves ML platforms?

Machine learning platforms take many forms, from labeling and visualizing data to training models and monitoring deployed models. All of these capabilities, and more, fall under what people call a machine learning platform. As a technology evaluator, it is therefore your responsibility to understand the core concepts of machine learning so that you know what kind of platform best suits your needs. This blog post introduces three ways of looking at ML platforms.

What are machine learning platforms?

Machine learning (ML) platforms are services or tools that allow you to automate or outsource parts of your data science work. How they do that can, however, vary greatly. A library such as MLflow ( https://mlflow.org/ ) is a platform, in the same way that an analytics platform such as H2O ( https://www.h2o.ai/ ) is a platform. After a few minutes of reading about the two products, you will realize that they solve completely different problems with completely different approaches. How, then, can a layperson compare these two platforms?
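To make the contrast concrete, here is a minimal sketch of what working with a library like MLflow looks like: you call it from your own training code to track parameters and metrics, rather than operating it as a standalone analytics product. The run name, parameters, and metric values below are made up for illustration.

```python
# Minimal MLflow experiment-tracking sketch (illustrative values only).
import mlflow

with mlflow.start_run(run_name="decision-tree-baseline"):
    # Log hyperparameters chosen for this run.
    mlflow.log_param("max_depth", 5)
    mlflow.log_param("criterion", "gini")

    # ... train a model here ...

    # Log evaluation metrics so runs can be compared later in the MLflow UI.
    mlflow.log_metric("accuracy", 0.92)
```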

First, let’s define the standard problems in machine learning. On a higher level, machine learning can be divided into three parts:

  1. Data management
  2. Model training
  3. Prediction serving

Let’s look at each one of these and see what they entail.

Data management

Data itself can be divided into tabular data (e.g. databases with customer information) and unstructured data (e.g. images of a product in different scenarios). Data management in machine learning covers concerns such as collecting, preprocessing (ETL), labeling, annotating, and exploring data. Each of these challenges requires different tools.
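As a small illustration of the preprocessing step, here is a sketch of an ETL-style cleanup of tabular customer data using pandas; the file names and column names are hypothetical.

```python
# Hypothetical ETL-style preprocessing of tabular customer data with pandas.
import pandas as pd

# Extract: load raw customer records (file name is illustrative).
customers = pd.read_csv("customers_raw.csv")

# Transform: drop incomplete rows and standardize a numeric column.
customers = customers.dropna(subset=["age", "country"])
customers["age_scaled"] = (customers["age"] - customers["age"].mean()) / customers["age"].std()

# Load: write the cleaned table where the training pipeline expects it.
customers.to_csv("customers_clean.csv", index=False)
```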

Model training

Model training, in turn, can be divided into feature extraction and training. Training also differs depending on whether you are building AutoML solutions, using pre-existing models, training traditional machine learning models (such as decision trees), or training deep learning models on large amounts of unstructured data. Each case has different needs for infrastructure, frameworks, and collaboration tools.
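To ground the split between feature extraction and training, here is a minimal scikit-learn sketch that trains the kind of traditional model mentioned above; the dataset and feature selection are placeholders for illustration.

```python
# Minimal sketch: feature extraction followed by training a traditional model.
# Dataset and feature choices are placeholders for illustration.
from sklearn.datasets import load_iris
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier

# "Feature extraction": here simply selecting two of the available columns.
X, y = load_iris(return_X_y=True)
X = X[:, :2]

# Training: fit a decision tree and check accuracy on held-out data.
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
model = DecisionTreeClassifier(max_depth=3).fit(X_train, y_train)
print("test accuracy:", model.score(X_test, y_test))
```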

Prediction serving

Prediction serving is mainly categorized by how the model is deployed: either embedded as part of the software itself or exposed as an external endpoint that other services can call. Models can also be served for batch inference (when predictions are needed sporadically) or live inference (when predictions are needed continuously). Other issues to consider in prediction serving are A/B testing of models, canary releases of new models, rollbacks, model staleness, and more.
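As an illustration of the external-endpoint option, here is a sketch of a live-inference service using FastAPI; the model file, input schema, and route name are hypothetical, and a real deployment would add validation, logging, and versioning.

```python
# Sketch of a live-inference endpoint with FastAPI.
# Model path, input schema, and route are hypothetical.
import pickle

from fastapi import FastAPI
from pydantic import BaseModel

app = FastAPI()

# Load a previously trained model at startup (path is illustrative).
with open("model.pkl", "rb") as f:
    model = pickle.load(f)

class PredictionRequest(BaseModel):
    features: list[float]

@app.post("/predict")
def predict(request: PredictionRequest):
    # Wrap the single example in a batch of one, as scikit-learn-style models expect 2D input.
    prediction = model.predict([request.features])
    return {"prediction": prediction.tolist()}
```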

Summary

As you can see from the three main machine learning pipeline categories, the needs, and the tools supporting those needs, vary greatly. Further dimensions come from the machine learning team's background: data scientists with a software engineering background tend to value tools that let them develop models in an IDE, whereas recent graduates in analytics and data science are more accustomed to interactive web interfaces such as Jupyter notebooks. Your company's existing infrastructure, whether a particular cloud provider or on-premises GPU clusters, also affects which tools can be used.

It is therefore fair to conclude that "one tool for everything" is not a trivial solution.

Further reading

Free eBook: Practical MLOps – How to get started with MLOps?