Insights from the deep learning industry.
Part 2: Tips and tricks for running your deep learning executions on the Valohai CLI

Valohai executions can be triggered directly from the CLI, which lets you roll up your sleeves and fine-tune your options more hands-on than in our web-based UI. In part one, I showed you how to install and get started with Valohai's command-line interface (CLI). Now it's time to take a deeper dive and power up with features that will take your daily productivity to new heights.
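As a quick refresher before the tips: executions are defined as steps in your project's valohai.yaml and launched with the CLI. Here is a minimal sketch of such a step; the step name, Docker image, script, and parameter are illustrative placeholders, so check the Valohai documentation for the current schema.

```yaml
# valohai.yaml - a minimal step definition (names and image are illustrative)
- step:
    name: train-model
    image: tensorflow/tensorflow:2.6.0
    command: python train.py {parameters}
    parameters:
      - name: learning_rate
        type: float
        default: 0.001
```

With a step like this in place, a command along the lines of `vh exec run train-model` queues the execution against your linked project.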
Ville Tuulos, machine learning infrastructure architect, was the first to publicly dissect Netflix's machine learning infrastructure at QCon in San Francisco in November 2018. If you haven't seen the talk yet, read the summary of his talk here! All the pictures used here are from Ville's presentation. The full talk is 49 minutes long and you can watch it in its entirety on YouTube.

From a scattered toolset to a coherent machine learning platform

Ville starts by comparing machine learning infrastructure to an online store: twenty years ago, building an online store was truly a technical problem. Back then you needed to build the whole shop yourself, starting with setting up the servers, because the cloud did not exist. Platforms and technologies have since emerged that let practically anyone set up an online store, and nowadays it is more about knowing your customers than about setting up the webshop.
In our series of machine learning infrastructure blog posts, we recently featured Uber’s Michelangelo. Today we’re happy to be interviewing Ville Tuulos from Netflix. Ville is a machine learning infrastructure architect at Netflix’s Los Gatos, CA office.
Part 1: Getting started

As new Valohai users get acquainted with the platform, many fall in love with our web-based UI, and for good reason. It's responsive, intuitive, and gets the job done with just a few clicks. But don't be fooled into thinking that's the end of the interface conversation. We know it takes different [key]strokes for different folks, so Valohai also includes a command-line interface (CLI) and a REST API.
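Getting started with the CLI takes only a few commands. A minimal first session might look like the sketch below; the package and command names reflect the CLI at the time of writing, and the step name is a placeholder, so consult `vh --help` and the docs if anything has changed.

```shell
# Install the Valohai CLI from PyPI and authenticate (session is illustrative)
pip install valohai-cli
vh login                 # prompts for your Valohai credentials
vh init                  # links the current directory to a Valohai project
vh exec run train-model  # launch an execution defined in valohai.yaml
```

From there, the CLI mirrors most of what the web UI offers, which is exactly what part two digs into.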
One of the core design paradigms of Valohai is technology agnosticism. Building on top of the file system and, in our case, Docker means that we support running very different kinds of applications, scripts, languages, and frameworks on Valohai. Thanks to these common abstractions, most systems are Valohai-ready out of the box, and the same is true for TensorBoard.
Last week we had the pleasure of joining our partner SwiftStack at our joint booth at the NVIDIA GTC 2019 conference in San Jose. GTC touts itself as the premier AI conference, and it sure was.
In this blog post we'll look at the parts a machine learning platform consists of and compare building your own infrastructure from scratch with buying a ready-made service that does everything for you.
Running a local notebook is great for early data exploration and model tinkering; there's no doubt about it. But eventually you'll outgrow it and want to scale up: training the model in the cloud with easy parallel executions, full version control, and robust deployment lets you reproduce your experiments and share them with team members at any time.
SwiftStack and Valohai jointly announce the world's first peta-scale ML solution that covers everything from computation to data management in a multi-cloud environment. The solution provides a global namespace, removing silos and enabling universal access to all your data across all your machine learning use cases. It has built-in support for Azure, Google Cloud, AWS, and SwiftStack.