CLI

gordo CLI

Available CLIs for Gordo:

gordo

The main entry point for the CLI interface

gordo [OPTIONS] COMMAND [ARGS]...

Options

--version

Show the version and exit.

--log-level <log_level>

Run workflow with custom log-level.

Environment variables

GORDO_LOG_LEVEL

Provide a default for --log-level

build

Build a model and deposit it into ‘output_dir’ given the appropriate config settings.

Parameters
machine_config: dict
A dict loadable by gordo.machine.Machine.from_config
output_dir: str
Directory to save model & metadata to.
model_register_dir: path
Path to a directory which will index existing models and their locations, used
for re-using old models instead of rebuilding them. If omitted, models are
always rebuilt
print_cv_scores: bool
Print cross validation scores to stdout
model_parameter: List[Tuple[str, Any]]
List of model key-values, where the values will be injected into the model
config wherever there is a jinja variable with the key.
exceptions_reporter_file: str
JSON output file for exception information
exceptions_report_level: str
Details level for exception reporting
gordo build [OPTIONS] MACHINE_CONFIG [OUTPUT_DIR]

Options

--model-register-dir <model_register_dir>
--print-cv-scores

Prints CV scores to stdout

--model-parameter <model_parameter>

Key-value pair for a model parameter and its value; this option may be used multiple times. Separate key and value by a comma, e.g.: --model-parameter key,val --model-parameter some_key,some_value

--exceptions-reporter-file <exceptions_reporter_file>

JSON output file for exception information

--exceptions-report-level <exceptions_report_level>

Details level for exception reporting

Options

EXIT_CODE | TYPE | MESSAGE | TRACEBACK

Arguments

MACHINE_CONFIG

Required argument

OUTPUT_DIR

Optional argument

Environment variables

MACHINE

Provide a default for MACHINE_CONFIG

OUTPUT_DIR

Provide a default for OUTPUT_DIR

MODEL_REGISTER_DIR

Provide a default for --model-register-dir

EXCEPTIONS_REPORTER_FILE

Provide a default for --exceptions-reporter-file

EXCEPTIONS_REPORT_LEVEL

Provide a default for --exceptions-report-level
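
Putting the above together, a hypothetical build invocation might look like the following (the config file and output directory names are placeholders):

```shell
# Build a model from a machine config, printing CV scores and
# injecting two jinja model parameters (names are illustrative).
gordo build machine-config.yaml ./model-output \
    --print-cv-scores \
    --model-parameter key,val \
    --model-parameter some_key,some_value

# Equivalently, the positional arguments can come from the
# environment variables documented above:
export MACHINE=machine-config.yaml
export OUTPUT_DIR=./model-output
gordo build --print-cv-scores
```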

run-server

Run the gordo server app with Gunicorn

gordo run-server [OPTIONS]

Options

--host <host>

The host to run the server on.

Default

0.0.0.0

--port <port>

The port to run the server on.

Default

5555

--workers <workers>

The number of worker processes for handling requests.

Default

2

--worker-connections <worker_connections>

The maximum number of simultaneous clients per worker process.

Default

50

--threads <threads>

The number of worker threads for handling requests. This argument only has an effect with --worker-class=gthread. Default value is 8 (4 x $(NUM_CORES))

--worker-class <worker_class>

The type of workers to use.

Default

gthread

--log-level <log_level>

The log level for the server.

Default

debug

Options

critical | error | warning | info | debug

--server-app <server_app>

The application to run

Default

gordo.server.server:build_app()

--with-prometheus-config

Run with custom config for prometheus

Environment variables

GORDO_SERVER_HOST

Provide a default for --host

GORDO_SERVER_PORT

Provide a default for --port

GORDO_SERVER_WORKERS

Provide a default for --workers

GORDO_SERVER_WORKER_CONNECTIONS

Provide a default for --worker-connections

GORDO_SERVER_THREADS

Provide a default for --threads

GORDO_SERVER_WORKER_CLASS

Provide a default for --worker-class

GORDO_SERVER_LOG_LEVEL

Provide a default for --log-level

GORDO_SERVER_APP

Provide a default for --server-app
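
As an illustration, the server could be started with explicit options, or with the same settings supplied through the environment variables above (the values here are arbitrary):

```shell
# Start the server on a non-default port with more workers
gordo run-server --port 8080 --workers 4 --log-level info

# Or configure the same thing via environment variables
export GORDO_SERVER_PORT=8080
export GORDO_SERVER_WORKERS=4
export GORDO_SERVER_LOG_LEVEL=info
gordo run-server
```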

workflow

gordo workflow [OPTIONS] COMMAND [ARGS]...
generate

Machine Configuration to Argo Workflow

gordo workflow generate [OPTIONS]

Options

--machine-config <machine_config>

Required Machine configuration file

--workflow-template <workflow_template>

Template to expand

--owner-references <owner_references>

Kubernetes owner references to inject into all created resources. Should be a nonempty yaml/json list of owner-references, each owner-reference a dict containing at least the keys ‘uid’, ‘name’, ‘kind’, and ‘apiVersion’

--gordo-version <gordo_version>

Version of gordo to use, if different than this one

--project-name <project_name>

Required Name of the project which owns the workflow.

--project-revision <project_revision>

Revision of the project which owns the workflow.

--output-file <output_file>

Optional file to render to

--namespace <namespace>

Which namespace to deploy services into

--split-workflows <split_workflows>

Split workflows containing more than this number of models into several workflows, where each workflow contains at most this number of models. The workflows are output sequentially with '---' in between, which allows kubectl to apply them all at once.
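
For instance, a split workflow stream could be piped straight to kubectl (the config file and project name are placeholders):

```shell
# Generate workflows holding at most 40 models each and apply them
# in one go; the '---' separators form a multi-document YAML stream.
gordo workflow generate \
    --machine-config machine-config.yaml \
    --project-name my-project \
    --split-workflows 40 | kubectl apply -f -
```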

--n-servers <n_servers>

Max number of ML Servers to use, defaults to N machines * 10

--docker-repository <docker_repository>

The docker repo to use for pulling component images from

--docker-registry <docker_registry>

The docker registry to use for pulling component images from

--retry-backoff-duration <retry_backoff_duration>

retryStrategy.backoff.duration for workflow steps

--retry-backoff-factor <retry_backoff_factor>

retryStrategy.backoff.factor for workflow steps

--gordo-server-workers <gordo_server_workers>

The number of worker processes for handling Gordo server requests.

--gordo-server-threads <gordo_server_threads>

The number of worker threads for handling requests.

--gordo-server-probe-timeout <gordo_server_probe_timeout>

timeoutSeconds value for livenessProbe and readinessProbe of Gordo server Deployment

--without-prometheus

Do not deploy Prometheus for Gordo servers monitoring

--prometheus-metrics-server-workers <prometheus_metrics_server_workers>

Number of workers for Prometheus metrics servers

--image-pull-policy <image_pull_policy>

Default imagePullPolicy for all gordo’s images

--with-keda

Enable support for the KEDA autoscaler

--ml-server-hpa-type <ml_server_hpa_type>

HPA type for the ML server

Options

none | k8s_cpu | keda

--custom-model-builder-envs <custom_model_builder_envs>

List of custom environment variables for the model builder

--prometheus-server-address <prometheus_server_address>

Prometheus URL. Required for "--ml-server-hpa-type=keda"

--keda-prometheus-metric-name <keda_prometheus_metric_name>

metricName value for the KEDA prometheus scaler

--keda-prometheus-query <keda_prometheus_query>

query value for the KEDA prometheus scaler

--keda-prometheus-threshold <keda_prometheus_threshold>

threshold value for the KEDA prometheus scaler

--resources-labels <resources_labels>

Additional labels for resources. Must be an empty string or a dictionary in JSON format

--server-termination-grace-period <server_termination_grace_period>

terminationGracePeriodSeconds for the gordo server

--server-target-cpu-utilization-percentage <server_target_cpu_utilization_percentage>

targetCPUUtilizationPercentage for gordo-server’s HPA
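
A minimal generate invocation, rendering the expanded workflow to a file (all names below are placeholders):

```shell
# Render an Argo Workflow from a machine config into workflow.yaml;
# --machine-config and --project-name are the required options.
gordo workflow generate \
    --machine-config machine-config.yaml \
    --project-name my-project \
    --namespace my-namespace \
    --output-file workflow.yaml
```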

Environment variables

WORKFLOW_GENERATOR_MACHINE_CONFIG

Provide a default for --machine-config

WORKFLOW_GENERATOR_OWNER_REFERENCES

Provide a default for --owner-references

WORKFLOW_GENERATOR_GORDO_VERSION

Provide a default for --gordo-version

WORKFLOW_GENERATOR_PROJECT_NAME

Provide a default for --project-name

WORKFLOW_GENERATOR_PROJECT_REVISION

Provide a default for --project-revision

WORKFLOW_GENERATOR_OUTPUT_FILE

Provide a default for --output-file

WORKFLOW_GENERATOR_NAMESPACE

Provide a default for --namespace

WORKFLOW_GENERATOR_SPLIT_WORKFLOWS

Provide a default for --split-workflows

WORKFLOW_GENERATOR_N_SERVERS

Provide a default for --n-servers

WORKFLOW_GENERATOR_DOCKER_REPOSITORY

Provide a default for --docker-repository

WORKFLOW_GENERATOR_DOCKER_REGISTRY

Provide a default for --docker-registry

WORKFLOW_GENERATOR_RETRY_BACKOFF_DURATION

Provide a default for --retry-backoff-duration

WORKFLOW_GENERATOR_RETRY_BACKOFF_FACTOR

Provide a default for --retry-backoff-factor

WORKFLOW_GENERATOR_GORDO_SERVER_WORKERS

Provide a default for --gordo-server-workers

WORKFLOW_GENERATOR_GORDO_SERVER_THREADS

Provide a default for --gordo-server-threads

WORKFLOW_GENERATOR_GORDO_SERVER_PROBE_TIMEOUT

Provide a default for --gordo-server-probe-timeout

WORKFLOW_GENERATOR_WITHOUT_PROMETHEUS

Provide a default for --without-prometheus

WORKFLOW_GENERATOR_PROMETHEUS_METRICS_SERVER_WORKERS

Provide a default for --prometheus-metrics-server-workers

WORKFLOW_GENERATOR_IMAGE_PULL_POLICY

Provide a default for --image-pull-policy

WORKFLOW_GENERATOR_WITH_KEDA

Provide a default for --with-keda

WORKFLOW_GENERATOR_ML_SERVER_HPA_TYPE

Provide a default for --ml-server-hpa-type

WORKFLOW_GENERATOR_CUSTOM_MODEL_BUILDER_ENVS

Provide a default for --custom-model-builder-envs

WORKFLOW_GENERATOR_PROMETHEUS_SERVER_ADDRESS

Provide a default for --prometheus-server-address

WORKFLOW_GENERATOR_KEDA_PROMETHEUS_METRIC_NAME

Provide a default for --keda-prometheus-metric-name

WORKFLOW_GENERATOR_KEDA_PROMETHEUS_QUERY

Provide a default for --keda-prometheus-query

WORKFLOW_GENERATOR_KEDA_PROMETHEUS_THRESHOLD

Provide a default for --keda-prometheus-threshold

WORKFLOW_GENERATOR_RESOURCE_LABELS

Provide a default for --resources-labels

WORKFLOW_GENERATOR_SERVER_TERMINATION_GRACE_PERIOD

Provide a default for --server-termination-grace-period

WORKFLOW_GENERATOR_SERVER_TARGET_CPU_UTILIZATION_PERCENTAGE

Provide a default for --server-target-cpu-utilization-percentage