Valohai blog

Insights from the deep learning industry.

Valohai Joins Forces with Twitter and Facebook

Valohai, the MLOps platform company, is collaborating with Twitter and Facebook to launch a competition at the annual Neural Information Processing Systems (NeurIPS) conference, aimed at advancing the optimization of machine learning models towards more accurate AI solutions. The goal is to find better optimization algorithms for machine learning.

Why colabel adopted Valohai instead of hiring their first MLOps engineer

Colabel decided to adopt Valohai to manage their machine learning infrastructure and model serving through Kubernetes instead of hiring an MLOps engineer.

TL;DR: colabel enables companies to automate workflows specific to their business, from recognizing objects in microscopic images to automatically categorizing incoming documents for different internal workflows. They needed a solution that automatically manages their machine learning infrastructure and model serving through their Kubernetes cluster on Google Cloud Platform. colabel started by building their own MLOps solution with Kubeflow but quickly realized that the costs of building and maintaining it, and of hiring the right talent to do so, were not financially sustainable. Today colabel can focus on building their platform and its integrations, leaving the hassle of maintaining the machine learning infrastructure and Kubernetes to Valohai.

colabel helps businesses build custom models for automating image and document processing

Every business has workflows that can be automated, and every business is different. Using out-of-the-box solutions, such as APIs that classify dogs and hot dogs, will take you only so far.

What Did I Learn About CI/CD for Machine Learning

Most software development teams have adopted continuous integration and delivery (CI/CD) to iterate faster. However, a machine learning model depends not only on the code but also on the data and the hyperparameters. Releasing a new machine learning model into production is therefore more complex than traditional software development.

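To make that dependency concrete, here is a minimal Python sketch (not from the post, with purely hypothetical values) of how a release pipeline could fingerprint a model from all three inputs, so that a change in the data or the hyperparameters triggers a new release just like a code change would:

```python
import hashlib
import json

def model_fingerprint(code_commit, data_digest, hyperparameters):
    """Derive a single identifier from everything a trained model depends on."""
    payload = json.dumps(
        {"code": code_commit, "data": data_digest, "hyperparameters": hyperparameters},
        sort_keys=True,
    )
    return hashlib.sha256(payload.encode("utf-8")).hexdigest()

# Hypothetical values for illustration only: if any of the three changes,
# the fingerprint changes, signalling that a new model release is needed.
print(model_fingerprint(
    code_commit="9f2c1ab",
    data_digest="sha256-of-training-data",
    hyperparameters={"learning_rate": 0.001, "batch_size": 64},
))
```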

Bayesian Hyperparameter Optimization with Valohai

Grid search and random search are the best-known methods for hyperparameter tuning, and both are first-class citizens inside the Valohai platform. You define your search space, hit go, and Valohai starts all your machines and searches over the parameter ranges you've defined. It's all automatic: you don't have to launch or shut down machines by hand, and you can't accidentally leave machines running and costing you money. But we've been missing one central method for hyperparameter tuning: Bayesian optimization. Not anymore!
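For readers unfamiliar with the technique, here is a minimal, generic sketch of Bayesian optimization using scikit-optimize; this illustrates the method itself, not Valohai's own API, and the search space and objective function are made up:

```python
from skopt import gp_minimize
from skopt.space import Real, Integer

# Hypothetical search space for two common hyperparameters.
search_space = [
    Real(1e-5, 1e-1, prior="log-uniform", name="learning_rate"),
    Integer(16, 256, name="batch_size"),
]

def objective(params):
    learning_rate, batch_size = params
    # Placeholder: in practice, train a model with these hyperparameters
    # and return the validation loss to be minimized.
    return (learning_rate - 0.01) ** 2 + ((batch_size - 64) ** 2) * 1e-6

# A Gaussian process models the objective and proposes where to evaluate next,
# spending the trial budget far more efficiently than an exhaustive grid.
result = gp_minimize(objective, search_space, n_calls=20, random_state=0)
print("Best hyperparameters:", result.x, "best loss:", result.fun)
```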

Classifying 4M Reddit posts in 4k subreddits: an end-to-end machine learning pipeline

Finding the right subreddit to submit your post to can be tricky, especially for people new to Reddit. There are thousands of active subreddits with overlapping content. If it's no easy task for a human, I didn't expect it to be any easier for a machine. Currently, redditors can ask for suitable subreddits in a dedicated subreddit: r/findareddit.


Machine Learning and Remote Work

A lot of companies and teams are going fully remote for the first time due to the coronavirus. We at Valohai are big believers in remote work. Having practiced it with a distributed team for a good four years, we would like to share some of our thoughts on remote work in machine learning. Many of the major pain points we have seen revolve around tooling.

Using DVC to version control your ML experiment data

In this blog post we will explore how you can use DVC for data version control, and how you can automate it both with and without DVC inside the Valohai platform. DVC (https://dvc.org/) is an open-source command-line tool for version controlling your binary data in the same way as you version control code in Git. You hook it up to your data store (e.g. AWS S3 or Azure Blob Storage) and then use it much like you use Git for pulling and pushing files.
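As a small illustration of the pull side of that workflow, here is a Python sketch using DVC's dvc.api module; the repository URL, file path, and tag are hypothetical and assume the data was previously tracked with `dvc add` and uploaded with `dvc push`:

```python
import dvc.api

# Read one specific, Git-tagged revision of a dataset straight from the
# DVC remote (e.g. an S3 bucket), without cloning the whole data history.
data = dvc.api.read(
    "data/training_set.csv",                              # hypothetical path
    repo="https://github.com/example-org/example-repo",   # hypothetical repo
    rev="v1.0",                                           # Git tag pinning the data version
)
print(f"Loaded {len(data)} characters of training data")
```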

Machine Learning in the cloud vs on-premises

It's a running joke among developers that the cloud is just a word for somebody else's computer. But the fact remains that by leveraging the cloud, you can reap benefits that you couldn't achieve with your own on-premises server farm.

Three ways to categorize machine learning platforms

Machine learning (ML) platforms take many forms and usually solve only one or a few parts of the ML problem space. So how do you make sense of the different platforms that all call themselves ML platforms?
