Hetzner

HetznerCluster([bootstrap, image, location, ...])

Cluster running on Hetzner cloud vServers.

Overview

Authentication

To authenticate with Hetzner you must first generate a personal access token.

Then store it in your Dask configuration under cloudprovider.hetzner.token. This can be done either by adding the token to your YAML configuration or by exporting an environment variable.

# ~/.config/dask/cloudprovider.yaml

cloudprovider:
  hetzner:
    token: "yourtoken"

$ export DASK_CLOUDPROVIDER__HETZNER__TOKEN="yourtoken"
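The token can also be set programmatically at runtime with Dask's standard configuration API; a minimal sketch, using a placeholder token:

```python
import dask

# Store the Hetzner token in Dask's in-memory configuration.
# Replace "yourtoken" with your personal access token.
dask.config.set({"cloudprovider.hetzner.token": "yourtoken"})

# The value is now visible under the same key dask-cloudprovider reads.
print(dask.config.get("cloudprovider.hetzner.token"))
```

Note that values set this way apply only to the current process; use the YAML file or environment variable for a persistent setting.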
class dask_cloudprovider.hetzner.HetznerCluster(bootstrap: Optional[str] = None, image: Optional[str] = None, location: Optional[str] = None, server_type: Optional[str] = None, docker_image: Optional[str] = None, **kwargs)[source]

Cluster running on Hetzner cloud vServers.

VMs in Hetzner are referred to as vServers. This cluster manager constructs a Dask cluster running on VMs.

When configuring your cluster you may find it useful to install the hcloud tool for querying the Hetzner API for available options.

https://github.com/hetznercloud/cli
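As a sketch of basic usage (wrapped in a function here, since actually calling it assumes a valid token in your Dask configuration and creates real, billable vServers):

```python
def launch_cluster():
    # Imports are kept inside the function so this sketch can be
    # defined without side effects; calling it launches real
    # Hetzner vServers that you will be billed for.
    from dask.distributed import Client
    from dask_cloudprovider.hetzner import HetznerCluster

    cluster = HetznerCluster(n_workers=1)
    client = Client(cluster)
    return cluster, client
```

Remember to call `cluster.close()` when you are done, or the vServers will keep running.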

Parameters
image: str

The image to use for the host OS. This should be an Ubuntu variant. You can list available images with hcloud image list | grep Ubuntu.

location: str

The Hetzner location to launch your cluster in. A full list can be obtained with hcloud location list.

server_type: str

The VM server type. You can get a full list with hcloud server-type list. The default is cx11, a vServer with 1 vCPU and 2 GB of RAM.

n_workers: int

Number of workers to initialise the cluster with. Defaults to 0.

worker_module: str

The Python module to run for the worker. Defaults to distributed.cli.dask_worker.

worker_options: dict

Params to be passed to the worker class. See distributed.worker.Worker for the default worker class. If you set worker_module, refer to the docstring of that custom worker class instead.

scheduler_options: dict

Params to be passed to the scheduler class. See distributed.scheduler.Scheduler.

env_vars: dict

Environment variables to be passed to the worker.

extra_bootstrap: list[str] (optional)

Extra commands to be run during the bootstrap phase.
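Putting the parameters above together, a more fully configured cluster might look like the following sketch. The image name, location, and server type shown are illustrative values, not defaults; check hcloud image list, hcloud location list, and hcloud server-type list for what is actually available to your account.

```python
def launch_configured_cluster():
    # Calling this creates real, billable Hetzner resources.
    from dask_cloudprovider.hetzner import HetznerCluster

    cluster = HetznerCluster(
        image="ubuntu-22.04",    # illustrative Ubuntu variant
        location="fsn1",         # illustrative location (Falkenstein)
        server_type="cx31",      # illustrative larger vServer
        n_workers=2,
        env_vars={"MY_SETTING": "value"},  # passed through to the workers
    )
    return cluster
```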

Attributes
asynchronous

Are we running in the event loop?

auto_shutdown
bootstrap
command
dashboard_link
docker_image
gpu_instance
loop
name
observed
plan
requested
scheduler_address
scheduler_class
worker_class

Methods

adapt([Adaptive, minimum, maximum, ...])

Turn on adaptivity

call_async(f, *args, **kwargs)

Run a blocking function in a thread as a coroutine.

from_name(name)

Create an instance of this class to represent an existing cluster by name.

get_client()

Return client for the cluster

get_logs([cluster, scheduler, workers])

Return logs for the cluster, scheduler and workers

get_tags()

Generate tags to be applied to all resources.

new_worker_spec()

Return name and spec for the next worker

scale([n, memory, cores])

Scale cluster to n workers

scale_up([n, memory, cores])

Scale cluster to n workers

sync(func, *args[, asynchronous, ...])

Call func with args synchronously or asynchronously depending on the calling context

close

get_cloud_init

logs

render_cloud_init

render_process_cloud_init

scale_down
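The scaling methods above follow the standard distributed.deploy.Cluster interface. As a sketch, manual and adaptive scaling of a running cluster would look like:

```python
def scale_examples(cluster):
    # Manually scale the cluster to exactly three workers.
    cluster.scale(3)

    # Or hand control to adaptive scaling, which adds and removes
    # workers between the given bounds based on load.
    cluster.adapt(minimum=0, maximum=10)
```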