Valohai blog

Insights from the deep learning industry.


From Zero to Hero with Valohai CLI, Part 2

Part 2: Tips and tricks for running your deep learning executions on Valohai CLI

Valohai executions can be triggered directly from the CLI and let you roll up your sleeves and fine-tune your options a bit more hands-on than our web-based UI. In part one, I showed you how to install and get started with Valohai’s command-line interface (CLI). Now, it’s time to take a deeper dive and power up with features that’ll take your daily productivity to new heights.

Prerequisites & Installation

See part one of the series for pre-reqs and an easy step-by-step installation guide.

Ad Hoc or Not?

First, let's jump right in and look at one notable flag: -a (or --adhoc).

For example:

vh exec run -a mystep

Using this flag will take the contents of your currently active local folder, compress it down to a single tarball, and send it to the server for execution. (Note: any file or folder starting with a dot is considered hidden and won’t be bundled in.)

By default, executions are based on a commit (the latest, or one you specify) in your version control system, such as git. With --adhoc, they are based on your local files instead.

While an ad-hoc execution is not based on any source repository, it’s still version-controlled by Valohai (we save the tarball for you). That said, using a DVCS like git for your code is still highly recommended. Ad-hoc executions are best suited for quick exploration and debugging.
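As a rough mental model (this is an illustration only, not the CLI's actual implementation), the --adhoc bundling behaves like tarring up your working directory while skipping dot-prefixed entries:

```shell
# Illustration only: mimics how --adhoc bundles the working directory.
# Dot-prefixed entries (.git, .env, ...) are treated as hidden and skipped.
mkdir -p demo/src demo/.git
echo "print('hello')" > demo/src/train.py
echo "SECRET=1"       > demo/.env

# The shell glob * already skips top-level dotfiles; --exclude='.*'
# also drops dot-prefixed entries nested deeper in the tree.
(cd demo && tar -czf ../adhoc.tgz --exclude='.*' -- *)

tar -tzf adhoc.tgz   # lists src/ and src/train.py, but no .git or .env
```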

Parameters for deep learning executions

You can feed parameters to your execution via the CLI, too. For example, if you have defined two parameters, learning_rate and dropout, in your valohai.yaml, you can set their values like this:

vh exec run -a mystep --learning_rate 0.001 --dropout 0.1

If you want to list all available parameters for a specific step, you can see them using the --help flag. For example:

vh exec run mystep --help
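For context, parameters like these are declared per step in your valohai.yaml. A minimal sketch, assuming a step named mystep (the image and command here are placeholders):

```yaml
- step:
    name: mystep
    image: python:3.7
    command: python train.py {parameters}
    parameters:
      - name: learning_rate
        type: float
        default: 0.001
      - name: dropout
        type: float
        default: 0.1
```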

Inputs for your DL models

Inputs are predefined “slots” for files that you want Valohai to fetch for you before the execution. Using Valohai’s CLI in combination with inputs is a good strategy when dealing with big data, as you don’t need to fetch huge files onto your local machine. To point an input slot at a file, pass its URL using the slot's name:

vh exec run -a mystep --input1=<url>

If you want to download multiple files per slot, you can re-use the name like this:

vh exec run -a mystep --in1=<url1> --in1=<url2>

You can also see which inputs are available for a specific step, just as you can for parameters, by using the --help flag:

vh exec run mystep --help
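For reference, input slots are likewise declared per step in valohai.yaml. A minimal sketch (the input name and default URL are placeholders):

```yaml
- step:
    name: mystep
    image: python:3.7
    command: python train.py
    inputs:
      - name: input1
        default: s3://mybucket/dataset.csv
```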

I Spy with CLI: Executions

Another cool flag for running an execution is -w (or --watch).

For example:

vh exec run mystep -w

When you use --watch, the CLI will give you a pretty terminal window with the latest log output and the status of the execution. This saves you a few clicks as you don’t need to open the web UI for the same information.

If your execution is already running or finished, you can also stream the logs of the latest execution with:

vh exec logs --stream latest

If you want the logs of a specific execution, say #13, you can write:

vh exec logs 13

If you just want to see the stdout logs, without the stderr or Valohai status logs, type:

vh exec logs latest --no-status --no-stderr

Now, let’s say you’d prefer a third-party tool, such as TensorBoard, for watching execution progress. You can use a similar flag, --sync. It will download the output files of your execution (say, TensorBoard checkpoint files) live to your local machine. (For more info on this, see our TensorBoard tutorial.)

Outputs (Finally!)

After your execution is finished, you might want to download the outputs to your local machine. It’s your lucky day; the Valohai CLI makes this task a breeze. For example:

vh exec outputs 13

This will list all the outputs of execution #13. You can also get the listing as JSON:

vh --table-format json exec outputs 13

If you want to download the outputs, you can call:

vh exec outputs 13 -d ./myfolder

You can also filter the download with a wildcard or select just a specific file with the -f parameter:

vh exec outputs latest -f "*.txt"
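A side note on quoting: left unquoted, your local shell may expand *.txt against files in the current directory before the command ever sees the pattern. A quick demonstration with plain shell:

```shell
# The shell expands an unquoted glob before the command runs.
mkdir -p globdemo
touch globdemo/a.txt globdemo/b.txt
cd globdemo

echo *.txt     # the shell expands the pattern first: a.txt b.txt
echo "*.txt"   # quoting passes the literal pattern through: *.txt
```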

If you want to do something with the output files right away, you can also stack up multiple commands like this:

vh exec outputs 13 -d ./outputs && tensorboard --logdir=./outputs

This concludes your quick introduction to using the Valohai CLI.

Want to learn more? Our blog and documentation have all the answers. Create your free account and dive right in – we’re here to help if you need anything. Feel free to reach out anytime through the app’s chat window or on Slack.


Juha Kiili
Senior Software Developer with a gaming industry background, shape-shifted into a full-stack ninja. I have the biggest monitor.
