
Avocado Workshop

Topics

Basic usage

Execute binary, ./scripts/avocado run /bin/true, when installed use only "avocado run /bin/true", ./scripts/avocado run /bin/true /bin/false, ./scripts/avocado run passtest failtest warntest skiptest errortest sleeptest

Help and docs, ./scripts/avocado -h, ./scripts/avocado run -h, ..., cd docs && make html

Execute binary and compare outputs, Used for tests which should generate the same stdout/stderr, Possibility to combine with other plugins (run inside GDB,...), ./scripts/avocado run ./DEVCONF/Basic_usage/output_compare.sh, ./scripts/avocado run ./DEVCONF/Basic_usage/output_compare.sh --output-check-record all, ./scripts/avocado run ./DEVCONF/Basic_usage/output_compare.sh, # modify the file, ./scripts/avocado run ./DEVCONF/Basic_usage/output_compare.sh, ./scripts/avocado run ./DEVCONF/Basic_usage/output_compare.sh --disable-output-check, # exit -1, ./scripts/avocado run ./DEVCONF/Basic_usage/output_compare.sh, ./scripts/avocado run ./DEVCONF/Basic_usage/output_compare.sh --disable-output-check
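
The record/compare cycle above can be sketched in plain Python (a hypothetical helper; Avocado itself stores references such as stdout.expected next to the test and diffs them on later runs):

```python
import subprocess
import sys
from pathlib import Path

def check_output(cmd, record=False, expected_file="stdout.expected"):
    """Run cmd; either record its stdout as the reference file
    (--output-check-record all) or compare against the recorded one."""
    result = subprocess.run(cmd, capture_output=True, text=True)
    ref = Path(expected_file)
    if record:
        ref.write_text(result.stdout)        # first run: store the reference
        return True
    return result.stdout == ref.read_text()  # later runs: diff

cmd = [sys.executable, "-c", "print('hello')"]
check_output(cmd, record=True)   # record the expected output
print(check_output(cmd))         # -> True (output unchanged)
```

Modifying the binary (or the reference file) makes the comparison fail, which is exactly what the workshop steps above demonstrate.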

Execute binary and parse results using a wrapper, Used to run the same steps with various wrappers, For example run a complex testsuite with perf/strace/... (see the Wrapper section), Alternatively use it to "translate" existing binaries to Avocado (here), DEVCONF/Basic_usage/testsuite.sh, a nasty testsuite which doesn't return an exit_code, ./scripts/avocado run DEVCONF/Basic_usage/testsuite.sh, ./scripts/avocado run DEVCONF/Basic_usage/testsuite.sh --wrapper DEVCONF/Basic_usage/testsuite-wrapper.sh:*testsuite.sh, # remove FAIL test, ./scripts/avocado run DEVCONF/Basic_usage/testsuite.sh --wrapper DEVCONF/Basic_usage/testsuite-wrapper.sh:*testsuite.sh, # remove WARN test, ./scripts/avocado run DEVCONF/Basic_usage/testsuite.sh --wrapper DEVCONF/Basic_usage/testsuite-wrapper.sh:*testsuite.sh, More details in the Wrapper section
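
A wrapper that "translates" such a testsuite could look roughly like this in Python (the shipped testsuite-wrapper.sh is a shell script; this is only an illustrative equivalent with made-up PASS/FAIL markers):

```python
import re
import subprocess
import sys

def wrap(cmd):
    """Run a testsuite that always exits 0, scan its output for result
    markers, and translate them into a proper exit code for Avocado."""
    proc = subprocess.run(cmd, capture_output=True, text=True)
    sys.stdout.write(proc.stdout)            # pass the output through
    if re.search(r"\bFAIL\b", proc.stdout):
        return 1                             # any FAIL -> non-zero exit
    return proc.returncode

# simulate a nasty testsuite that reports a failure but exits 0
fake_suite = [sys.executable, "-c",
              "print('test1 PASS'); print('test2 FAIL')"]
print(wrap(fake_suite))  # -> 1
```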

Multiplex using ENV, ./scripts/avocado run examples/tests/env_variables.sh, ./scripts/avocado run examples/tests/env_variables.sh -m examples/tests/env_variables.sh.data/env_variables.yaml, ./scripts/avocado multiplex, -c, -d, -t, -h ;-), This is pretty close to QA, right?, More details in Multiplex section
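
Shell tests see multiplexed parameters as environment variables; a stdlib sketch of what a test like env_variables.sh effectively does (the variable name here is made up for illustration):

```python
import os

def get_param(name, default=None):
    """Read a multiplexed parameter the way a shell test would:
    from the environment, with a fallback default."""
    return os.environ.get(name, default)

# Avocado would export one value per multiplex variant; we fake one here
os.environ["SLEEP_LENGTH"] = "3"
print(get_param("SLEEP_LENGTH", "1"))   # -> 3
print(get_param("NOT_SET", "1"))        # -> 1
```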

Grab sysinfo, ./scripts/avocado run passtest --open-browser

Jenkins/CI

Jenkins?, Jenkins

Quick&dirty setup

avocado simple

avocado parametrized

avocado lots of tests

avocado multi, multi scenarios, multi archs, multi workers (OSs)

avocado git poll

Writing tests in Python

Can be executed as, Inside Avocado, ./scripts/avocado run examples/tests/passtest.py, As python scripts, examples/tests/passtest.py

Logging, Possibility to set test as WARN, self.log.warn("In the state of Denmark there was an odor of decay"), See warntest test, Log more important steps, self.log.info("I successfully executed this important step with %s args" % args), self.log.debug("Executing iteration %s" % iteration), self.log.warn("You already know me"), self.log.error("Everyone should check these messages on failure"), stdout, stderr
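
Outside of Avocado the same pattern maps directly onto the stdlib logging module (in a real test, self.log is provided by the framework and feeds the job log):

```python
import logging

# stand-in for the test's self.log
log = logging.getLogger("avocado.test")
log.setLevel(logging.DEBUG)
log.addHandler(logging.StreamHandler())  # echo to the console

iteration, args = 1, "--fast"
log.info("I successfully executed this important step with %s args", args)
log.debug("Executing iteration %s", iteration)
log.warning("You already know me")
log.error("Everyone should check these messages on failure")
```

Passing the arguments separately (instead of interpolating with % inside the string) lets logging skip the formatting entirely when the level is filtered out.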

Whiteboard, ./scripts/avocado run examples/tests/whiteboard.py, cat ~/avocado/job-results/latest/test-results/examples/tests/whiteboard.py/whiteboard
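
The mechanism behind the whiteboard can be sketched with stdlib code only (FakeTest is hypothetical; in Avocado the test just assigns self.whiteboard and the framework persists it into the result directory shown by the cat command above):

```python
import base64
import tempfile
from pathlib import Path

class FakeTest:
    """Hypothetical stand-in for an Avocado test using the whiteboard."""
    def __init__(self, resultsdir):
        self.resultsdir = Path(resultsdir)
        self.whiteboard = ""

    def action(self):
        # the shipped whiteboard example stores base64-encoded data
        self.whiteboard = base64.b64encode(b"My magic value").decode()

    def save_whiteboard(self):
        # what the framework does after the test finishes
        (self.resultsdir / "whiteboard").write_text(self.whiteboard)

with tempfile.TemporaryDirectory() as tmp:
    test = FakeTest(tmp)
    test.action()
    test.save_whiteboard()
    print((Path(tmp) / "whiteboard").read_text())  # base64 payload
```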

Advanced usage of params, See Multiplexer section

Advanced GDB, See GDB section

Avocado-virt, See Avocado-virt section

Test on remote machine

remote, ./scripts/avocado run passtest --remote-hostname 192.168.122.235 --remote-username root --remote-password 123456, Connects, Copies test directories, Runs avocado inside the machine with --json --archive, Grabs and stores results, Reports results locally, --remote-no-copy
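
After running avocado remotely with --json -, the runner has to fish the JSON results document out of mixed console output; a minimal sketch of that step (the captured output below is fake sample data):

```python
import json

def parse_remote_output(stdout):
    """Find and decode the one line that holds the JSON results."""
    for line in stdout.splitlines():
        if line.startswith('{') and line.endswith('}'):
            try:
                return json.loads(line)
            except ValueError:
                pass  # looked like JSON but was not; keep scanning
    raise ValueError("No JSON results found in remote output")

# fake captured output: a log line followed by the results document
remote_stdout = ('JOB LOG: /root/avocado/job-results/job-01/job.log\n'
                 '{"tests": [{"test": "passtest", "status": "PASS"}], '
                 '"pass": 1}')
print(parse_remote_output(remote_stdout)["pass"])  # -> 1
```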

vm, run in a libvirt machine, ./scripts/avocado run examples/tests/passtest.py importerror.py corrupted.py pass.py passtest --vm-domain vm1 --vm-hostname 192.168.122.235 --vm-username root --vm-password 123456 --vm-cleanup, Checks if the domain exists, Starts it if not already started, (--vm-cleanup) Creates a snapshot, calls the remote test, (--vm-cleanup) Restores the snapshot, --vm-cleanup, --vm-no-copy

docker plugin

First think about what we require: docker run -v $LOCAL_PATH:$CONT_PATH $IMAGE avocado run $tests, then grab the results (--json -, --archive + copy). We don't require any remote interaction, only simple execution.

Plugin initialization, 1) name and status:

    class RunDocker(plugin.Plugin):
        """ Run tests inside docker container """
        name = 'run_docker'
        enabled = True

2) configuration hook:

    docker_parser = None

    def configure(self, parser):
        msg = 'run on a remote machine arguments'
        self.docker_parser = parser.runner.add_argument_group(msg)
        self.docker_parser.add_argument('--docker-image', dest='docker_image',
                                        default=None,
                                        help='Specify the docker image')
        self.docker_parser.add_argument('--docker-no-cleanup',
                                        dest='docker_no_cleanup',
                                        action='store_true',
                                        help="Don't cleanup docker after run")
        self.configured = True

3) activation hook:

    def activate(self, app_args):
        if hasattr(app_args, 'docker_image') and app_args.docker_image:
            self.docker_parser.set_defaults(remote_result=DockerTestResult,
                                            test_runner=DockerTestRunner)

The complete RunDocker class simply combines the three hooks above.

Prerequisites, prepare docker cmd:

    self.cmd = ('docker run -v %s:%s %s %s '
                % (self.local_test_dir, self.docker_test_dir,
                   '' if self.args.docker_no_cleanup else '--rm',
                   self.args.docker_image))

Copy tests to a shared location (get unique urls; ignore the nested ones as we copy recursively; do the copying; change the SELinux context):

    def _copy_tests(self):
        """
        Gather test's directories and copy them recursively to
        $docker_test_dir + $test_absolute_path.
        :note: Default tests execution is translated into absolute paths too
        """
        # TODO: Use `avocado.loader.TestLoader` instead
        paths = set()
        for i in xrange(len(self.urls)):
            url = self.urls[i]
            if not os.path.exists(url):     # use test_dir path + py
                url = os.path.join(data_dir.get_test_dir(), '%s.py' % url)
            url = os.path.abspath(url)  # always use abspath; avoid clashes
            # modify url to docker_path + abspath
            paths.add(os.path.dirname(url))
            self.urls[i] = self.docker_test_dir + url
        previous = ' NOT ABSOLUTE PATH'
        for path in sorted(paths):
            if os.path.commonprefix((path, previous)) == previous:
                continue    # already copied
            rpath = self.local_test_dir + path
            os.makedirs(os.path.dirname(rpath))
            shutil.copytree(path, rpath)
            previous = path
        process.system("chcon -Rt svirt_sandbox_file_t %s"
                       % self.local_test_dir, ignore_status=True)

class DockerTestResult:

    class DockerTestResult(HumanTestResult):
        """
        Docker Test Result class.
        """
        def __init__(self, stream, args):
            """
            Creates an instance of DockerTestResult.
            :param stream: an instance of :class:`avocado.core.output.View`.
            :param args: an instance of :class:`argparse.Namespace`.
            """
            HumanTestResult.__init__(self, stream, args)
            self.local_test_dir = tempfile.mkdtemp(
                prefix='avocado-docker-volume-', dir='/var/tmp')
            self.docker_test_dir = '/avocado'
            self.urls = self.args.url
            self.remote = None      # Remote runner initialized during setup
            self.output = '-'
            self.command_line_arg_name = '--docker-image'
            self.cmd = None

        # _copy_tests() as shown above

        def setup(self):
            """ Setup remote environment and copy test's directories """
            self.stream.notify(event='message',
                               msg="REMOTE TESTS: Docker Image '%s'"
                               % self.args.docker_image)
            self._copy_tests()
            self.cmd = ('docker run -v %s:%s %s %s '
                        % (self.local_test_dir, self.docker_test_dir,
                           '' if self.args.docker_no_cleanup else '--rm',
                           self.args.docker_image))

        def tear_down(self):
            """ Cleanup after test execution """
            if not self.args.docker_no_cleanup and self.local_test_dir:
                shutil.rmtree(self.local_test_dir, True)

Actual execution, 1) execute the tests, 2) grab the results (setup; run tests; parse and store results; get the logdir from the results (docker path); change the docker path into the local tmp dir (docker volume); extract the results; cleanup):

    class DockerTestRunner(TestRunner):
        """ Tooled TestRunner to run tests inside a docker container """
        def run_test(self, urls):
            """
            Run tests.
            :param urls: a string with test URLs.
            :return: a dictionary with test results.
            """
            avocado_cmd = ('avocado run --force-job-id %s --json - '
                           '--archive %s' % (self.result.stream.job_unique_id,
                                             " ".join(urls)))
            stdout = process.system_output(self.result.cmd + avocado_cmd,
                                           ignore_status=True, timeout=None)
            for json_output in stdout.splitlines():
                # We expect dictionary:
                if json_output.startswith('{') and json_output.endswith('}'):
                    try:
                        return json.loads(json_output)
                    except ValueError:
                        pass
            raise ValueError("Can't parse json out of remote's avocado "
                             "output:\n%s" % stdout)

        def run_suite(self, test_suite):
            """
            Run one or more tests and report with test result.
            :param test_suite: unused, self.result.urls is used instead.
            :return: a list of test failures.
            """
            del test_suite     # using self.result.urls instead
            failures = []
            self.result.setup()
            results = self.run_test(self.result.urls)
            self.result.start_tests()
            for tst in results['tests']:
                test = RemoteTest(name=tst['test'],
                                  time=tst['time'],
                                  start=tst['start'],
                                  end=tst['end'],
                                  status=tst['status'])
                state = test.get_state()
                self.result.start_test(state)
                self.result.check_test(state)
                if not status.mapping[state['status']]:
                    failures.append(state['tagged_name'])
            local_log_dir = os.path.dirname(self.result.stream.debuglog)
            docker_log_dir = os.path.relpath(
                os.path.dirname(results['debuglog']),
                self.result.docker_test_dir)
            zip_filename = docker_log_dir + '.zip'
            archive.uncompress(os.path.join(self.result.local_test_dir,
                                            zip_filename),
                               local_log_dir)
            self.result.end_tests()
            self.result.tear_down()
            return failures

GDB

./scripts/avocado run DEVCONF/GDB/doublefree*.py

./scripts/avocado run --gdb-run-bin=doublefree: DEVCONF/GDB/doublefree.py, backtrace, ...

./scripts/avocado run --gdb-run-bin=doublefree DEVCONF/GDB/doublefree2.py, without ':' it stops at the beginning, we can use `ddd` or other GDB compatible programs, don't forget to resume the test (FIFO)

./scripts/avocado run --gdb-run-bin=doublefree: DEVCONF/GDB/doublefree3.py --gdb-prerun-commands DEVCONF/GDB/doublefree3.py.data/gdb_pre --multiplex DEVCONF/GDB/doublefree3.py.data/iterations.yaml, OK, we reached the "handle_exception" function we want to investigate, n, n, jump +1, Works fine, hurray

Advanced GDB

vim DEVCONF/Advanced_GDB/modify_variable.py.data/doublefree.c

./scripts/avocado run DEVCONF/Advanced_GDB/modify_variable.py --show-job-log

Wrappers

Valgrind, ./scripts/avocado run DEVCONF/Wrappers/doublefree.py --wrapper examples/wrappers/valgrind.sh:*doublefree --open-browser, cat /home/medic/avocado/job-results/latest/test-results/doublefree.py/valgrind.log.*

Strace, The same way as valgrind, Imagine for example a crashing qemu machine running Windows under a very complex workload defined in the test

Other, ltrace, perf, strace, time, valgrind

Custom wrappers, qemu on PPC, wrap executed programs, grab results

Multiplexer

Motivation, QA Engineer walks into a bar. Orders a beer. Orders 0 beers. Orders 999999999 beers. Orders a lizard. Orders -1 beers. Orders a sfdeljknesv.

Naive variant:

    by_order:
        a_beer:
            msg: a beer
        0_beers:
            msg: 0 beers
        999999999:
            msg: 999999999 beers
        lizard:
            msg: lizard
        negative:
            msg: -1 beers
        noise:
            msg: sfdeljknesv

With !join:

    by_order: !join
        beer:
            suffix: beer
            a_beer:
                msg: a
        beers:
            suffix: beers
            0_beers:
                msg: 0
            999999999:
                msg: 999999999
            negative:
                msg: -1
        no_beers:
            noise:
                msg: sfdeljknesv
            lizard:
                msg: lizard

How would you solve our problem? !multiplex vs. !join:

    virt:
        hw:
            cpu:
                intel:
                amd:
                arm:
            fmt: !join
                qcow:
                qcow2:
                qcow2v3:
                raw:
        os: !join
            linux: !join
                Fedora:
                    19:
                Gentoo:
            windows:
                3.11:

    virt: !multiplex
        hw: !multiplex
            cpu:
                intel:
                amd:
                arm:
            fmt:
                qcow:
                qcow2:
                qcow2v3:
                raw:
        os:
            linux:
                Fedora:
                    19:
                Gentoo:
            windows:
                3.11:

    virt:
        hw:
            cpu: !mux_domain
                intel:
                amd:
                arm:
            fmt: !mux_domain
                qcow:
                qcow2:
                qcow2v3:
                raw:
        os: !mux_domain
            linux:
                Fedora:
                    19:
                Gentoo:
            windows:
                3.11:

map tests to test->files
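
What the multiplexer computes for a tree like the one above is essentially the cartesian product of the multiplexed domains; a quick stdlib illustration (leaf names abbreviated to flat strings):

```python
import itertools

cpu = ["intel", "amd", "arm"]
fmt = ["qcow", "qcow2", "qcow2v3", "raw"]
os_ = ["linux.Fedora.19", "linux.Gentoo", "windows.3.11"]

# one variant per combination of the three domains
variants = list(itertools.product(cpu, fmt, os_))
print(len(variants))   # 3 * 4 * 3 = 36 variants
print(variants[0])     # ('intel', 'qcow', 'linux.Fedora.19')
```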

Usage, ./scripts/avocado multiplex -h, ./scripts/avocado multiplex -t, ./scripts/avocado multiplex -t DEVCONF/Multiplexer/nomux.yaml, ./scripts/avocado multiplex -t DEVCONF/Multiplexer/simple.yaml, ./scripts/avocado multiplex -t DEVCONF/Multiplexer/advanced.yaml, ./scripts/avocado multiplex -c, ./scripts/avocado multiplex -c DEVCONF/Multiplexer/simple.yaml, ./scripts/avocado multiplex -c DEVCONF/Multiplexer/advanced.yaml, ./scripts/avocado multiplex -cd examples/mux-selftest-advanced.yaml, eg. variant: 21, ./scripts/avocado run examples/tests/sleeptenmin.py -m DEVCONF/Multiplexer/simple.yaml, ./scripts/avocado run examples/tests/sleeptenmin.py -m DEVCONF/Multiplexer/advanced.yaml, ./scripts/avocado multiplex --filter-only/filter-out, ./scripts/avocado multiplex -c DEVCONF/Multiplexer/advanced.yaml --filter-only /by_method/shell, 2nd level filters, Additional tags, !using, !include, !remove_node, !remove_value, !join
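
--filter-only and --filter-out prune variants by their tree path; the core idea fits in a few lines (simplified sketch; the real multiplexer filters tree nodes, not flat strings):

```python
variants = ["/by_method/shell/a", "/by_method/shell/b",
            "/by_method/python/c", "/by_order/noise"]

def filter_only(paths, prefixes):
    """Keep only variants whose path starts with one of the prefixes."""
    return [p for p in paths if any(p.startswith(pre) for pre in prefixes)]

def filter_out(paths, prefixes):
    """Drop variants whose path starts with one of the prefixes."""
    return [p for p in paths
            if not any(p.startswith(pre) for pre in prefixes)]

print(filter_only(variants, ["/by_method/shell"]))
print(filter_out(variants, ["/by_order"]))
```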

Let's go crazy, ./scripts/avocado multiplex DEVCONF/Multiplexer/crazy.yaml

Future, Currently we update the environment from the used leaves:

    by_something:
        first:
            foo: bar
    by_whatever:
        something:
            foo: baz

    params.get('foo') => 'baz'

Soon we want to provide the whole layout:

    by_something:
        first:
            hello: world
            foo: bar
    by_whatever:
        something:
            foo: baz

    params.get('foo') => raise Exception
    params.get('hello') => "world"
    params.get('/by_whatever', 'foo') => 'baz'
    params.get('/by_something', 'foo') => 'bar'
    params.get_variants('/by_something') => [TreeNode("/by_whatever/something")]

Automatic multiplexation, 1) without multiplexation, just check it still works, QA while developing tests, Does it work?, Developer (sanity), Will it survive a 1s sleep?, 2) test's file multiplexation, check all variants related to this test, Developer (check it behaves well), Will it survive 1, 60, 3600s sleeps?, 3) global file multiplexation, Create complex test loops, QA - complex scenarios, all possible variants, Intel, AMD, Arm64, PPC64, ..., 1, 60, 3600, smallpages, hugepages, ...
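
The planned behaviour can be mimicked with a small resolver (Params is a hypothetical sketch with a simplified get(key, path) signature; the real API is still being designed):

```python
class Params:
    """Hypothetical sketch of the planned path-aware parameter lookup."""
    def __init__(self, leaves):
        self.leaves = leaves  # mapping: tree path -> {key: value}

    def get(self, key, path=None):
        if path is not None:                  # explicit path: no clash
            return self.leaves[path].get(key)
        hits = [env[key] for env in self.leaves.values() if key in env]
        if len(hits) > 1:                     # same key in several leaves
            raise KeyError("Ambiguous param %r, specify a path" % key)
        return hits[0] if hits else None

params = Params({"/by_something/first": {"hello": "world", "foo": "bar"},
                 "/by_whatever/something": {"foo": "baz"}})
print(params.get("hello"))                          # -> world
print(params.get("foo", "/by_whatever/something"))  # -> baz
# params.get("foo") raises KeyError: ambiguous
```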

Eclipse

Debugging

Remote debugging, download pydev, copy it to $PYTHONPATH, or add the path in code:

    import sys
    sys.path.append('$PATH_TO_YOUR_PYDEV')

    import pydevd
    pydevd.settrace("$ECLIPSE_IP_ADDR", True, True)

1st True = forward stdout (optional), 2nd True = forward stderr (optional)

Install Avocado

Using from sources, # Uninstall previously installed versions, git clone https://github.com/avocado-framework/avocado.git, cd avocado, ./scripts/avocado ...

git, git clone https://github.com/avocado-framework/avocado.git, # pip install -r requirements.txt, python setup.py install, avocado ...

rpm, sudo curl http://copr.fedoraproject.org/coprs/lmr/Autotest/repo/fedora-20/lmr-Autotest-fedora-20.repo -o /etc/yum.repos.d/autotest.repo, sudo yum update, sudo yum install avocado, avocado ...

deb, echo "deb http://ppa.launchpad.net/lmr/avocado/ubuntu trusty main" >> /etc/apt/sources.list, sudo apt-get update, sudo apt-get install avocado, avocado ...

avocado-virt

Currently only a demonstration of Avocado's flexibility

using git version, 1) git clone $avocado, 2) git clone $avocado-virt, 3) git clone $avocado-virt-tests, then symlink the avocado-virt bits into the avocado tree:

    cd avocado/avocado
    ln -s ../../avocado-virt/avocado/virt virt
    cd -

    cd avocado/avocado/plugins
    ln -s ../../avocado-virt/avocado/plugins/virt.py virt.py
    ln -s ../../avocado-virt/avocado/plugins/virt_bootstrap.py
    cd -

using installed version, python setup.py install, yum install avocado-virt

./scripts/avocado virt-bootstrap, Downloads JeOS, Check permissions

./scripts/avocado run ../avocado-virt-tests/qemu/boot.py

./scripts/avocado run ../avocado-virt-tests/qemu/migration/migration.py

./scripts/avocado run ../avocado-virt-tests/qemu/usb_boot.py

Qemu templates, ./scripts/avocado run DEVCONF/avocado-virt/boot_lspci.py --show-job-log, ./scripts/avocado run DEVCONF/avocado-virt/boot_lspci.py --show-job-log --qemu-template DEVCONF/avocado-virt/basic.tpl

About me

Lukáš Doktor

Happy Software Engineer at Red Hat

RHTS (bash)

Autotest/virt-test (python, virtualization)

Avocado (~3 months)


Introduction

Avocado is a next generation testing framework inspired by Autotest and modern development tools such as git. Whether you want to automate a test plan made by your development team, do continuous integration, or develop quick tests for the feature you're working on, Avocado delivers the tools to help you out.

What is Avocado for me


Program which stores results of anything I want

Binary which provides means to help me execute anything using various environments

Set of python* libraries to simplify writing tests

Something easily integrable with Jenkins (or other CI)

Tool which allows me to share the results

Most importantly, a tool which allows me to do all of the above in all possible variants


Avocado structure

Links:

Avocado homepage: http://avocado-framework.github.io/

Data for this presentation: https://www.dropbox.com/sh/1ucmenbf76vnmn5/AAB4pn157uOOn916JDg2yYzKa?dl=0

Recording: https://www.youtube.com/watch?feature=player_detailpage&v=rj3XMTS-fwE