[Call for Testing] Running acceptance tests locally (and easily)

Running an acceptance test now requires no configuration :tada:. For example, to run assess_min_version.py, use the assess helper script:

cd $GOPATH/src/github.com/juju/juju/acceptancetests
./assess min_version

The script exposes many parameters, so you should be able to tweak any of its defaults (a worked example follows the help output below):

$ ./assess -h
usage: assess [-h] [--juju-home HOME_DIR] [--juju-data DATA_DIR]
              [--juju-repository REPO_DIR] [--log-dir LOG_DIR]
              [--substrate {lxd}] [--debug] [--verbose] [--region REGION]
              [--to TO] [--agent-url AGENT_URL] [--agent-stream AGENT_STREAM]
              [--series SERIES] [--keep-env] [--logging-config LOGGING_CONFIG]
              [--juju JUJU] [--python PYTHON]
              TEST

Sets up an environment for (local) Juju acceptance testing.

positional arguments:
  TEST                  Which acceptance test to run (see below for valid
                        tests)

optional arguments:
  -h, --help            show this help message and exit

main testing environment options:
  --juju-home HOME_DIR  JUJU_HOME environment variable to be used for test
                        [default: create a new directory in /tmp/juju-ci/*
                        (randomly generated)]
  --juju-data DATA_DIR  JUJU_DATA environment variable to be used for test
                        [default: HOME_DIR/data]
  --juju-repository REPO_DIR
                        JUJU_REPOSITORY environment variable to be used for
                        test [default: /home/tsm/Work/src/github.com/juju/juju
                        /acceptancetests/repository]

extra testing environment options:
  --log-dir LOG_DIR     Location to store logs [HOME_DIR/log]
  --substrate {lxd}     Cloud substrate to run the test on [default: lxd].

options to pass through to test script:
  --debug               Pass --debug to Juju.
  --verbose             Verbose test harness output.
  --region REGION       Override environment region.
  --to TO               Place the controller at a location.
  --agent-url AGENT_URL
                        URL for retrieving agent binaries.
  --agent-stream AGENT_STREAM
                        Stream for retrieving agent binaries.
  --series SERIES       Series to use for environment [default: bionic]
  --keep-env            Preserve the testing directories, e.g. HOME_DIR,
                        DATA_DIR, ... after the test completes
  --logging-config LOGGING_CONFIG
                        Override logging configuration for a deployment.
                        [default: "<root>=INFO;unit=INFO"]

executables:
  --juju JUJU           Path to the Juju binary to be used for testing.
                        [default: /home/tsm/Work/bin/juju]
  --python PYTHON       Python executable to call test with [default:
                        /usr/bin/python]

TEST options: add_cloud, add_credentials, agent_metadata,
autoload_credentials, block, bootstrap, bundle_export, caas_deploy_charms,
cloud, cloud_display, constraints, container_networking,
cross_model_relations, deploy_lxd_profile, deploy_lxd_profile_bundle,
deploy_webscale, destroy_model, endpoint_bindings, heterogeneous_control,
juju_output, juju_sync_tools, log_forward, log_rotation, min_version,
mixed_images, model_change_watcher, model_config_tree, model_defaults,
model_migration, model_migration_versions, multi_series_charms, multimodel,
network_health, network_spaces, persistent_storage, primary_sub_relations,
proxy, recovery, resolve, resources, sla, spaces_subnets, ssh_keys, storage,
unregister, upgrade, upgrade_lxd_profile, upgrade_series, user_grant_revoke,
wallet
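
To give a sense of how those flags combine, here is a sketch of a run that pins the Juju binary, turns on debug logging, and keeps the test directories around for inspection afterwards. The flags and the test name come from the help output above; the binary path is just a placeholder:

./assess bootstrap --juju=</path/to/juju> --debug --keep-env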

Limitations

Some tests won't work because they use different command-line arguments. Please file issues on Launchpad if you trip up on anything.

We’re currently in the process of porting the tests to Python 3. I’m not certain, but somewhat optimistic, that assess will work with both kinds of test.
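
If you want to experiment with that, the --python flag from the help output above lets you point the harness at a different interpreter. Whether a given test passes under Python 3 yet is another matter, and the interpreter path here is only an example:

./assess min_version --python=/usr/bin/python3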

Update: AWS is now supported out of the box with the --substrate argument. For example:

assess --substrate=aws wallet
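
Since --region is one of the pass-through options in the help output above, it should combine with the new substrate support too; the region name below is only an illustration, so substitute whichever AWS region you actually use:

./assess --substrate=aws --region=us-east-1 wallet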

I’ve been using this for the past couple of days (with lxd) and it’s really nice. So much easier just to fire off a test run without all the mucking around with YAML and env vars, etc.

If you are working on CI tests and haven’t tried this yet, do yourself a favour and give it a go.

Several extra tests will be supported shortly. In the interim, here is some expanded documentation on how to run each of the acceptance tests (incomplete at this stage).

assess_log_rotation.py

Requires an extra --agent argument:

./assess log_rotation --agent=machine
./assess log_rotation --agent=unit

assess_min_version.py

Works out of the box:

./assess min_version

assess_mixed_images.py

Accepts an optional --remote-metadata-source argument. The argument has a different name than in the original file, for consistency with --local-metadata-source as used by assess_bootstrap.py.

./assess mixed_images --remote-metadata-source=<url>

assess_model_change_watcher.py

./assess model_change_watcher

assess_model_config_tree.py

./assess model_config_tree

assess_model_defaults.py

Accepts a --other-region argument that is aliased to --secondary-region for consistency with assess_cross_model_relations.py.

./assess model_defaults --other-region=<region>

assess_model_migrations_versions.py

./assess model_migrations_versions --stable-juju-bin=</path/to/file>

assess_multi_series_charms.py

The --devel-series argument has been aliased to --charm-devel-series for consistency.

./assess multi_series_charms --charm-devel-series=<series>
./assess multi_series_charms --devel-series=<series>