Running an acceptance test now requires no configuration. To run assess_min_version.py, use the assess helper script:
cd $GOPATH/src/github.com/juju/juju/acceptancetests
./assess min_version
The script is parameter-rich, and any of its defaults can be tweaked from the command line:
$ ./assess -h
usage: assess [-h] [--juju-home HOME_DIR] [--juju-data DATA_DIR]
              [--juju-repository REPO_DIR] [--log-dir LOG_DIR]
              [--substrate {lxd}] [--debug] [--verbose] [--region REGION]
              [--to TO] [--agent-url AGENT_URL] [--agent-stream AGENT_STREAM]
              [--series SERIES] [--keep-env] [--logging-config LOGGING_CONFIG]
              [--juju JUJU] [--python PYTHON]
              TEST

Sets up an environment for (local) Juju acceptance testing.

positional arguments:
  TEST                  Which acceptance test to run (see below for valid
                        tests)

optional arguments:
  -h, --help            show this help message and exit

main testing environment options:
  --juju-home HOME_DIR  JUJU_HOME environment variable to be used for test
                        [default: create a new directory in /tmp/juju-ci/*
                        (randomly generated)]
  --juju-data DATA_DIR  JUJU_DATA environment variable to be used for test
                        [default: HOME_DIR/data]
  --juju-repository REPO_DIR
                        JUJU_REPOSITORY environment variable to be used for
                        test [default: /home/tsm/Work/src/github.com/juju/juju
                        /acceptancetests/repository]

extra testing environment options:
  --log-dir LOG_DIR     Location to store logs [HOME_DIR/log]
  --substrate {lxd}     Cloud substrate to run the test on [default: lxd].

options to pass through to test script:
  --debug               Pass --debug to Juju.
  --verbose             Verbose test harness output.
  --region REGION       Override environment region.
  --to TO               Place the controller at a location.
  --agent-url AGENT_URL
                        URL for retrieving agent binaries.
  --agent-stream AGENT_STREAM
                        Stream for retrieving agent binaries.
  --series SERIES       Series to use for environment [default: bionic]
  --keep-env            Preserve the testing directories, e.g. HOME_DIR,
                        DATA_DIR, ... after the test completes
  --logging-config LOGGING_CONFIG
                        Override logging configuration for a deployment.
                        [default: "<root>=INFO;unit=INFO"]

executables:
  --juju JUJU           Path to the Juju binary to be used for testing.
                        [default: /home/tsm/Work/bin/juju]
  --python PYTHON       Python executable to call test with [default:
                        /usr/bin/python]

TEST options: add_cloud, add_credentials, agent_metadata,
autoload_credentials, block, bootstrap, bundle_export, caas_deploy_charms,
cloud, cloud_display, constraints, container_networking,
cross_model_relations, deploy_lxd_profile, deploy_lxd_profile_bundle,
deploy_webscale, destroy_model, endpoint_bindings, heterogeneous_control,
juju_output, juju_sync_tools, log_forward, log_rotation, min_version,
mixed_images, model_change_watcher, model_config_tree, model_defaults,
model_migration, model_migration_versions, multi_series_charms, multimodel,
network_health, network_spaces, persistent_storage, primary_sub_relations,
proxy, recovery, resolve, resources, sla, spaces_subnets, ssh_keys, storage,
unregister, upgrade, upgrade_lxd_profile, upgrade_series, user_grant_revoke,
wallet
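To illustrate, here is a hypothetical invocation that combines several of the flags above. The flag values are placeholders chosen for the example, not recommendations:

# Run the min_version test with verbose harness output, keep the testing
# directories around for post-mortem inspection, and pin the Juju binary
# and log location explicitly. Substitute your own paths.
./assess min_version \
    --verbose \
    --keep-env \
    --juju "$HOME/Work/bin/juju" \
    --log-dir /tmp/min-version-logs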
Limitations
Some tests won't work because they use different command-line arguments. Please file issues on Launchpad if you trip up on anything.
We're currently in the process of porting the tests to Python 3. I'm unsure whether assess will work with both sorts of tests, but I'm somewhat optimistic.
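If you want to experiment with a test that has already been ported, the --python option described above should, in principle, let you point assess at a Python 3 interpreter. This is an untested sketch on my part, not something I've verified for every test:

# Assumption: min_version has been ported to Python 3; override the
# default interpreter (/usr/bin/python) accordingly.
./assess min_version --python /usr/bin/python3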