RAMP-workflow commands

The following commands are built with the click package, which provides tab completion for the command options. You do, however, need to activate shell completion yourself by following the instructions given in the click documentation. The ramp-test command also comes with tab completion for the submission name if the submission you are looking for is located in the ./submissions/ folder.
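
For example, assuming a bash shell and a recent click release (the exact variable value depends on your click version; older releases use source_bash instead of bash_source), completion can be activated with:

eval "$(_RAMP_TEST_COMPLETE=bash_source ramp-test)"
eval "$(_RAMP_SHOW_COMPLETE=bash_source ramp-show)"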

ramp-test

Test a submission and/or a notebook before submitting to RAMP studio.

ramp-test [OPTIONS]

Options

--submission <submission>

The submission to test. It should be located in the “submissions” folder of the starting kit. If “ALL”, all submissions in the directory will be tested.

Default

starting_kit

--ramp-kit-dir <ramp_kit_dir>

Root directory of the ramp-kit to test.

Default

.

--ramp-data-dir <ramp_data_dir>

Directory containing the data. This directory should contain a “data” folder.

Default

.

--data-label <data_label>

A label specifying the data in case the same submissions are executed on multiple datasets. If specified, problem.get_train_data and problem.get_test_data should accept a data_label argument. Typically they can handle multiple datasets within the directory specified by --ramp-data-dir (default: ./data), for example using subdirectories ./data/<data_label>/. It is also the subdirectory of submissions/<submission>/training_output where results are saved if --save-output is used.
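
As an illustration (the dataset labels set_A and set_B are hypothetical), a kit whose data directory contains ./data/set_A/ and ./data/set_B/ could be tested on one of the datasets, saving the results under submissions/starting_kit/training_output/set_A/, with:

ramp-test --submission starting_kit --data-label set_A --save-output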

--ramp-submission-dir <ramp_submission_dir>

Directory where the submissions are stored. It is the directory (typically called “submissions” in the ramp-kit) that contains the individual submission subdirectories.

Default

submissions

--notebook

Whether or not to test the notebook.

Default

False

--quick-test

Specify this flag to test the submission on a small subset of the data.

--pickle

Specify this flag to pickle the submission after training.

--partial-train

Specify this flag to partially train an existing trained workflow, previously saved by setting --pickle. The workflow.train_submission needs to accept prev_trained_workflow.
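
As a sketch of the intended sequence (the exact flag combination may vary with your workflow), the submission is first trained and pickled, then training is resumed from the saved workflow:

ramp-test --submission starting_kit --pickle
ramp-test --submission starting_kit --partial-train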

--save-output

Specify this flag to save the predictions, scores, any error trace, and state after training.

--retrain

Specify this flag to retrain the submission on the full training set after the CV loop.

--ignore-warning

Filter out all warnings and avoid printing them.
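
For example, a quick sanity check of the default submission, or a full run over every submission in ./submissions/ with pickling, saved outputs, and a final retraining on the full training set, could look like this (the flag combinations are illustrative):

ramp-test --quick-test
ramp-test --submission ALL --pickle --save-output --retrain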

ramp-show

Command-line tool to show information about local submissions.

ramp-show [OPTIONS] COMMAND [ARGS]...

leaderboard

Display the leaderboard for all the local submissions.

ramp-show leaderboard [OPTIONS]

Options

--ramp-kit-dir <ramp_kit_dir>

Root directory of the ramp-kit from which to retrieve the trained submissions.

Default

.

--data-label <data_label>

A label specifying the data in case the same submissions are executed on multiple datasets. If specified, it is the subdirectory of submissions/<submission>/training_output in which the results to be summarized are searched for.

--metric <metric>

A list of the metrics to report. Example:

--metric "['rmse']"

Default

[]

--step <step>

A list of the processing steps to report. Choices are {"train", "valid", "test"}. Example:

--step "['valid','test']"

Default

[]

--sort-by <sort_by>

Give the metric, step, and stat to use for sorting. Use tuples, for example:

--mean --sort-by "('rmse','test','mean')"

--bagged --sort-by "('test rmse')"

Default

[]

--ascending, --descending

Sort in ascending or descending order.

Default

True

--precision <precision>

The precision for the different metrics reported.

Default

2

--bagged, --mean

Bagged or mean scores.

Default

True
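
For instance, to display the RMSE on the validation and test steps, sorted by the mean test score with three decimal places (rmse is only an example; use the metrics defined by your problem):

ramp-show leaderboard --metric "['rmse']" --step "['valid','test']" --mean --sort-by "('rmse','test','mean')" --precision 3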