
From Zero to Hero with Valohai CLI, Part 2

Part 2: Tips and tricks for running your deep learning executions on Valohai CLI

Valohai executions can be triggered directly from the CLI, letting you roll up your sleeves and fine-tune your options in a more hands-on way than in our web-based UI. In part one, I showed you how to install and get started with Valohai’s command-line interface (CLI). Now it’s time to take a deeper dive and power up with features that’ll take your daily productivity to new heights.

Prerequisites & Installation

See part one of the series for pre-reqs and an easy step-by-step installation guide.
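
If you’d rather not flip back, here’s the short version (a sketch assuming a working Python 3 environment):

pip install valohai-cli   # install the CLI
vh login                  # authenticate with your Valohai account
vh project link           # link the current directory to a Valohai project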

Ad Hoc or Not?

First, let’s jump right in and look at one flag of note: -a (or --adhoc).

For example:

vh exec run -a mystep


Using this flag will take the contents of your currently active local folder, compress it down to a single tarball, and send it to the server for execution. (Note: any file or folder starting with a dot is considered hidden and won’t be bundled in.)
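
If there are other files you’d rather not ship (say, large local datasets or virtual environments), recent CLI versions also honor a .vhignore file with gitignore-style patterns; a minimal sketch (the patterns here are illustrative):

# .vhignore - excluded from the ad-hoc tarball
data/
venv/
*.ckpt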


By default, executions are based on a commit (the latest, or one you specify) in your version control system, such as git. With --adhoc, they are based on your local files instead.



While an ad-hoc execution is not based on any source repository, it’s still version-controlled by Valohai (we save the tarball for you). That said, using a DVCS like git for your code is still highly recommended. Ad-hoc executions are best suited for quick exploration and debugging.
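
In practice, a workflow might look something like this (mystep is a placeholder step name):

# iterate on local, uncommitted changes
vh exec run --adhoc mystep --watch

# once you're happy, commit and push, then run from version control
git add -A && git commit -m "Tune the model" && git push
vh project fetch   # ask Valohai to pull the new commit
vh exec run mystep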

Parameters for deep learning executions

You can feed parameters to your execution via the CLI, too. For example, if you have defined two parameters, learning_rate and dropout, in your valohai.yaml, you can set their values like this:

vh exec run -a mystep --learning_rate 0.001 --dropout 0.1
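
For reference, here’s a minimal sketch of what the corresponding step definition might look like in valohai.yaml (the image and command are illustrative):

- step:
    name: mystep
    image: python:3.9
    command: python train.py {parameters}
    parameters:
      - name: learning_rate
        type: float
        default: 0.001
      - name: dropout
        type: float
        default: 0.1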


If you want to see all the available parameters for a specific step, use the --help flag. For example:

vh exec run mystep --help


Inputs for your DL models

Inputs are predefined “slots” for files that you want Valohai to fetch for you before the execution. Using the Valohai CLI in combination with inputs is a good strategy when dealing with big data, as you don’t need to download huge files to your local machine first.


vh exec run -a mystep --input1=http://t.com/file.gz


If you want to download multiple files per slot, you can re-use the name like this:

vh exec run -a mystep --input1=http://t.com/1 --input1=http://t.com/2
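
As with parameters, the input slots themselves are declared per step in valohai.yaml; a minimal sketch (the slot name and URL are illustrative):

- step:
    name: mystep
    image: python:3.9
    command: python train.py
    inputs:
      - name: input1
        default: http://t.com/file.gz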


You can also see which inputs are available for a specific step, just as you did for parameters, with the --help flag:

vh exec run mystep --help


I Spy with CLI: Executions

Another cool flag for running an execution is -w (or --watch).

For example:

vh exec run mystep -w


When you use --watch, the CLI will give you a pretty terminal window with the latest log output and the status of the execution. This saves you a few clicks as you don’t need to open the web UI for the same information.



If your execution is already running or finished, you can also stream the logs of the latest execution with:

vh exec logs --stream latest


If you want the logs of a specific execution, say #13, you can write:

vh exec logs 13


If you just want to see the stdout logs and none of the stderr or Valohai status logs, type:

vh exec logs latest --no-status --no-stderr
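
These flags compose; for instance, to stream only the stdout of execution #13:

vh exec logs 13 --stream --no-status --no-stderr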



Now, let’s say you’d prefer a third-party tool for watching execution progress, like Tensorboard. You can use a similar flag, --sync, which downloads the output files of your execution (say, Tensorboard checkpoint files) to your local machine live, as they are produced. (For more info, see our Tensorboard tutorial.)
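
For instance, assuming --sync accepts a local target directory and your step writes Tensorboard event files into its outputs, something like this could work (the directory name is a placeholder):

vh exec run mystep --sync ./tb-sync

# in another terminal, point Tensorboard at the synced files
tensorboard --logdir ./tb-sync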

Outputs (Finally!)

After your execution is finished, you might want to download the outputs to your local machine. It’s your lucky day; the Valohai CLI makes this task a breeze. For example:

vh exec outputs 13


This will list all the outputs of execution #13. You can also get the listing as JSON:

vh --table-format json exec outputs 13
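
The JSON form pairs nicely with tools like jq; for example, to list just the output names (the field name here is an assumption, so inspect the JSON to see the actual schema):

vh --table-format json exec outputs 13 | jq -r '.[].name'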


If you want to download the outputs, you can call:

vh exec outputs 141 -d ./myfolder


You can also filter the download with a wildcard, or select just a specific file, using the -f parameter (quote the pattern so your local shell doesn’t expand it):


vh exec outputs latest -f "*.txt"
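
The filter also combines with downloading; for example, to grab only the text files:

vh exec outputs latest -f "*.txt" -d ./texts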



If you want to do something with the output files right away, you can also stack up multiple commands like this:

vh exec outputs 13 -d ./outputs && tensorboard --logdir ./outputs


This concludes your quick introduction to using the Valohai CLI.

Want to learn more? Our blog and documentation have all the answers. Create your free account and dive right in – we’re here to help if you need anything. Feel free to reach out anytime – shoot us a message through the app’s chat window, on Slack, or at info@valohai.com.


Juha Kiili
Senior Software Developer with gaming industry background shape-shifted into full-stack ninja. I have the biggest monitor.
