Import Upstream version 0.5

Commit 6e13c8afa5 by su-fang, 2022-09-14 15:35:12 +08:00
23 changed files with 1638 additions and 0 deletions

.gitignore vendored Normal file
@@ -0,0 +1,14 @@
TAGS
tags
lib/testtools
MANIFEST
dist
*~
.eggs
.*.swp
testscenarios.egg-info
*.pyc
__pycache__
*.egg
ChangeLog
AUTHORS

.travis.yml Normal file
@@ -0,0 +1,17 @@
sudo: false
language: python
python:
- "2.7"
- "3.4"
- "3.5"
- "3.6"
- pypy
- pypy3.5
install:
- pip install -U pip
- pip install -U wheel setuptools
- pip install -r requirements.txt
- pip list
- python --version
script:
- python setup.py test

Apache-2.0 Normal file
@@ -0,0 +1,202 @@
Apache License
Version 2.0, January 2004
http://www.apache.org/licenses/
TERMS AND CONDITIONS FOR USE, REPRODUCTION, AND DISTRIBUTION
1. Definitions.
"License" shall mean the terms and conditions for use, reproduction,
and distribution as defined by Sections 1 through 9 of this document.
"Licensor" shall mean the copyright owner or entity authorized by
the copyright owner that is granting the License.
"Legal Entity" shall mean the union of the acting entity and all
other entities that control, are controlled by, or are under common
control with that entity. For the purposes of this definition,
"control" means (i) the power, direct or indirect, to cause the
direction or management of such entity, whether by contract or
otherwise, or (ii) ownership of fifty percent (50%) or more of the
outstanding shares, or (iii) beneficial ownership of such entity.
"You" (or "Your") shall mean an individual or Legal Entity
exercising permissions granted by this License.
"Source" form shall mean the preferred form for making modifications,
including but not limited to software source code, documentation
source, and configuration files.
"Object" form shall mean any form resulting from mechanical
transformation or translation of a Source form, including but
not limited to compiled object code, generated documentation,
and conversions to other media types.
"Work" shall mean the work of authorship, whether in Source or
Object form, made available under the License, as indicated by a
copyright notice that is included in or attached to the work
(an example is provided in the Appendix below).
"Derivative Works" shall mean any work, whether in Source or Object
form, that is based on (or derived from) the Work and for which the
editorial revisions, annotations, elaborations, or other modifications
represent, as a whole, an original work of authorship. For the purposes
of this License, Derivative Works shall not include works that remain
separable from, or merely link (or bind by name) to the interfaces of,
the Work and Derivative Works thereof.
"Contribution" shall mean any work of authorship, including
the original version of the Work and any modifications or additions
to that Work or Derivative Works thereof, that is intentionally
submitted to Licensor for inclusion in the Work by the copyright owner
or by an individual or Legal Entity authorized to submit on behalf of
the copyright owner. For the purposes of this definition, "submitted"
means any form of electronic, verbal, or written communication sent
to the Licensor or its representatives, including but not limited to
communication on electronic mailing lists, source code control systems,
and issue tracking systems that are managed by, or on behalf of, the
Licensor for the purpose of discussing and improving the Work, but
excluding communication that is conspicuously marked or otherwise
designated in writing by the copyright owner as "Not a Contribution."
"Contributor" shall mean Licensor and any individual or Legal Entity
on behalf of whom a Contribution has been received by Licensor and
subsequently incorporated within the Work.
2. Grant of Copyright License. Subject to the terms and conditions of
this License, each Contributor hereby grants to You a perpetual,
worldwide, non-exclusive, no-charge, royalty-free, irrevocable
copyright license to reproduce, prepare Derivative Works of,
publicly display, publicly perform, sublicense, and distribute the
Work and such Derivative Works in Source or Object form.
3. Grant of Patent License. Subject to the terms and conditions of
this License, each Contributor hereby grants to You a perpetual,
worldwide, non-exclusive, no-charge, royalty-free, irrevocable
(except as stated in this section) patent license to make, have made,
use, offer to sell, sell, import, and otherwise transfer the Work,
where such license applies only to those patent claims licensable
by such Contributor that are necessarily infringed by their
Contribution(s) alone or by combination of their Contribution(s)
with the Work to which such Contribution(s) was submitted. If You
institute patent litigation against any entity (including a
cross-claim or counterclaim in a lawsuit) alleging that the Work
or a Contribution incorporated within the Work constitutes direct
or contributory patent infringement, then any patent licenses
granted to You under this License for that Work shall terminate
as of the date such litigation is filed.
4. Redistribution. You may reproduce and distribute copies of the
Work or Derivative Works thereof in any medium, with or without
modifications, and in Source or Object form, provided that You
meet the following conditions:
(a) You must give any other recipients of the Work or
Derivative Works a copy of this License; and
(b) You must cause any modified files to carry prominent notices
stating that You changed the files; and
(c) You must retain, in the Source form of any Derivative Works
that You distribute, all copyright, patent, trademark, and
attribution notices from the Source form of the Work,
excluding those notices that do not pertain to any part of
the Derivative Works; and
(d) If the Work includes a "NOTICE" text file as part of its
distribution, then any Derivative Works that You distribute must
include a readable copy of the attribution notices contained
within such NOTICE file, excluding those notices that do not
pertain to any part of the Derivative Works, in at least one
of the following places: within a NOTICE text file distributed
as part of the Derivative Works; within the Source form or
documentation, if provided along with the Derivative Works; or,
within a display generated by the Derivative Works, if and
wherever such third-party notices normally appear. The contents
of the NOTICE file are for informational purposes only and
do not modify the License. You may add Your own attribution
notices within Derivative Works that You distribute, alongside
or as an addendum to the NOTICE text from the Work, provided
that such additional attribution notices cannot be construed
as modifying the License.
You may add Your own copyright statement to Your modifications and
may provide additional or different license terms and conditions
for use, reproduction, or distribution of Your modifications, or
for any such Derivative Works as a whole, provided Your use,
reproduction, and distribution of the Work otherwise complies with
the conditions stated in this License.
5. Submission of Contributions. Unless You explicitly state otherwise,
any Contribution intentionally submitted for inclusion in the Work
by You to the Licensor shall be under the terms and conditions of
this License, without any additional terms or conditions.
Notwithstanding the above, nothing herein shall supersede or modify
the terms of any separate license agreement you may have executed
with Licensor regarding such Contributions.
6. Trademarks. This License does not grant permission to use the trade
names, trademarks, service marks, or product names of the Licensor,
except as required for reasonable and customary use in describing the
origin of the Work and reproducing the content of the NOTICE file.
7. Disclaimer of Warranty. Unless required by applicable law or
agreed to in writing, Licensor provides the Work (and each
Contributor provides its Contributions) on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
implied, including, without limitation, any warranties or conditions
of TITLE, NON-INFRINGEMENT, MERCHANTABILITY, or FITNESS FOR A
PARTICULAR PURPOSE. You are solely responsible for determining the
appropriateness of using or redistributing the Work and assume any
risks associated with Your exercise of permissions under this License.
8. Limitation of Liability. In no event and under no legal theory,
whether in tort (including negligence), contract, or otherwise,
unless required by applicable law (such as deliberate and grossly
negligent acts) or agreed to in writing, shall any Contributor be
liable to You for damages, including any direct, indirect, special,
incidental, or consequential damages of any character arising as a
result of this License or out of the use or inability to use the
Work (including but not limited to damages for loss of goodwill,
work stoppage, computer failure or malfunction, or any and all
other commercial damages or losses), even if such Contributor
has been advised of the possibility of such damages.
9. Accepting Warranty or Additional Liability. While redistributing
the Work or Derivative Works thereof, You may choose to offer,
and charge a fee for, acceptance of support, warranty, indemnity,
or other liability obligations and/or rights consistent with this
License. However, in accepting such obligations, You may act only
on Your own behalf and on Your sole responsibility, not on behalf
of any other Contributor, and only if You agree to indemnify,
defend, and hold each Contributor harmless for any liability
incurred by, or claims asserted against, such Contributor by reason
of your accepting any such warranty or additional liability.
END OF TERMS AND CONDITIONS
APPENDIX: How to apply the Apache License to your work.
To apply the Apache License to your work, attach the following
boilerplate notice, with the fields enclosed by brackets "[]"
replaced with your own identifying information. (Don't include
the brackets!) The text should be enclosed in the appropriate
comment syntax for the file format. We also recommend that a
file or class name and description of purpose be included on the
same "printed page" as the copyright notice for easier
identification within third-party archives.
Copyright [yyyy] [name of copyright owner]
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.

BSD Normal file
@@ -0,0 +1,26 @@
Copyright (c) Robert Collins and Testscenarios contributors
All rights reserved.
Redistribution and use in source and binary forms, with or without
modification, are permitted provided that the following conditions
are met:
1. Redistributions of source code must retain the above copyright
notice, this list of conditions and the following disclaimer.
2. Redistributions in binary form must reproduce the above copyright
notice, this list of conditions and the following disclaimer in the
documentation and/or other materials provided with the distribution.
3. Neither the name of Robert Collins nor the names of Subunit contributors
may be used to endorse or promote products derived from this software
without specific prior written permission.
THIS SOFTWARE IS PROVIDED BY ROBERT COLLINS AND SUBUNIT CONTRIBUTORS ``AS IS''
AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE
IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE
ARE DISCLAIMED. IN NO EVENT SHALL THE REGENTS OR CONTRIBUTORS BE LIABLE
FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL
DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS
OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION)
HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT
LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY
OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF
SUCH DAMAGE.

COPYING Normal file
@@ -0,0 +1,31 @@
Testscenarios is licensed under two licenses, the Apache License, Version 2.0
or the 3-clause BSD License. You may use this project under either of these
licenses - choose the one that works best for you.
We require contributions to be licensed under both licenses. The primary
difference between them is that the Apache license takes care of potential
issues with Patents and other intellectual property concerns that some users
or contributors may find important.
Generally every source file in Testscenarios needs a license grant under both
these licenses. As the code is shipped as a single unit, a brief form is used:
----
Copyright (c) [yyyy][,yyyy]* [name or 'Testscenarios Contributors']
Licensed under either the Apache License, Version 2.0 or the BSD 3-clause
license at the users choice. A copy of both licenses are available in the
project source as Apache-2.0 and BSD. You may not use this file except in
compliance with one of these two licences.
Unless required by applicable law or agreed to in writing, software
distributed under these licenses is distributed on an "AS IS" BASIS, WITHOUT
WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
license you chose for the specific language governing permissions and
limitations under that license.
----
Code that has been incorporated into Testscenarios from other projects will
naturally be under its own license, and will retain that license.
A known list of such code is maintained here:
* No entries.

GOALS Normal file
@@ -0,0 +1,25 @@
testscenarios goals
===================
* nice, declarative interface for multiplying tests by scenarios.
* plays nice with testresources - when a scenario uses a resource, the
resource ordering logic should be able to group them together.
* (at user discretion) plays nice with $random test discovery
* arbitrary post-load multiplication.
* cross-productable scenarios (for X and for Y)
* extensible scenarios (for X using Y)
* scenarios and the tests that use them are loosely coupled
* tests that use scenarios should be easy to debug
* fast
* usable in trial, bzr, Zope testrunner, nose and the default unittest
TestRunner

HACKING Normal file
@@ -0,0 +1,39 @@
Contributing to testscenarios
=============================
Code access
+++++++++++
The main branch is `master` in `https://github.com/testing-cabal/testscenarios`
Publish your branches wherever you like, though I encourage GitHub hosting,
as you can use a pull request there to get a web UI for code review.
Copyright
+++++++++
Testscenarios is Copyright (C) 2009,2015 Robert Collins. I'd like to be able to
offer it up for stdlib inclusion once it has proved itself, so I am asking for
copyright assignment to me - or for your contributions to be under both the
BSD and Apache-2.0 licences that Testscenarios is released under (which permit
inclusion in Python).
Coding standards
++++++++++++++++
PEP-8 coding style please, though I'm not nitpicky. Make sure that 'make check'
passes before sending in a patch.
Code arrangement
++++++++++++++++
The ``testscenarios`` module should simply import classes and functions from
more specific modules, rather than becoming large and bloated itself. For
instance, TestWithScenarios lives in testscenarios.testcase, and is imported in
the testscenarios __init__.py.
Releases
++++++++
Commit a version to NEWS, tag with a signed tag, run ``make release`` to build
and push to PyPI, then push to GitHub.

MANIFEST.in Normal file
@@ -0,0 +1,10 @@
include .bzrignore
include Apache-2.0
include BSD
include COPYING
include GOALS
include HACKING
include MANIFEST.in
include Makefile
include NEWS
include doc/*.py

Makefile Normal file
@@ -0,0 +1,22 @@
PYTHONPATH:=$(shell pwd):${PYTHONPATH}
PYTHON ?= python

all: check

check:
	PYTHONPATH=$(PYTHONPATH) $(PYTHON) -m testtools.run \
		testscenarios.test_suite

clean:
	find . -name '*.pyc' -print0 | xargs -0 rm -f

TAGS: testscenarios/*.py testscenarios/tests/*.py
	ctags -e -R testscenarios/

tags: testscenarios/*.py testscenarios/tests/*.py
	ctags -R testscenarios/

release:
	python setup.py sdist bdist_wheel upload -s

.PHONY: all check release

NEWS Normal file
@@ -0,0 +1,64 @@
---------------------------
testscenarios release notes
---------------------------
IN DEVELOPMENT
~~~~~~~~~~~~~~
0.5
~~~
CHANGES
-------
* Tests fixed for Python 3.3, 3.4, 3.5. (Robert Collins)
0.4
~~~
IMPROVEMENTS
------------
* Python 3.2 support added. (Robert Collins)
0.3
~~~
CHANGES
-------
* New function ``per_module_scenarios`` for tests that should be applied across
multiple modules providing the same interface, some of which may not be
available at run time. (Martin Pool)
* ``TestWithScenarios`` is now backed by a mixin - WithScenarios - which can be
mixed into different unittest implementations more cleanly (e.g. unittest2).
(James Polley, Robert Collins)
0.2
~~~
CHANGES
-------
* Adjust the cloned tests ``shortDescription`` if one is present. (Ben Finney)
* Provide a load_tests implementation for easy use, and multiply_scenarios to
create the cross product of scenarios. (Martin Pool)
0.1
~~~
CHANGES
-------
* Created project. The primary interfaces are
``testscenarios.TestWithScenarios`` and
``testscenarios.generate_scenarios``. Documentation is primarily in README.
(Robert Collins)
* Make the README documentation doctest compatible, to be sure it works.
Also various presentation and language touchups. (Martin Pool)
(Adjusted to use doctest directly, and to not print the demo runner's
output to stderr during make check - Robert Collins)

README Normal file
@@ -0,0 +1,316 @@
*****************************************************************
testscenarios: extensions to python unittest to support scenarios
*****************************************************************
Copyright (c) 2009, Robert Collins <robertc@robertcollins.net>
Licensed under either the Apache License, Version 2.0 or the BSD 3-clause
license at the users choice. A copy of both licenses are available in the
project source as Apache-2.0 and BSD. You may not use this file except in
compliance with one of these two licences.
Unless required by applicable law or agreed to in writing, software
distributed under these licenses is distributed on an "AS IS" BASIS, WITHOUT
WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
license you chose for the specific language governing permissions and
limitations under that license.
testscenarios provides clean dependency injection for python unittest style
tests. This can be used for interface testing (testing many implementations via
a single test suite) or for classic dependency injection (provide tests with
dependencies externally to the test code itself, allowing easy testing in
different situations).
Dependencies
============
* Python 2.6+
* testtools <https://launchpad.net/testtools>
Why TestScenarios
=================
Standard Python unittest.py provides one obvious method for running a single
test_foo method with two (or more) scenarios: creating a mix-in that
provides the functions, objects or settings that make up the scenario. This
is, however, limited and unsatisfying. Firstly, when two projects are cooperating
on a test suite (for instance, a plugin to a larger project may want to run
the standard tests for a given interface on its implementation), it is
easy for them to get out of sync with each other: when the list of TestCase
classes to mix in with changes, the plugin will either fail to run some tests
or error trying to run deleted tests. Secondly, it's not as easy to work with
runtime-created subclasses (a way of dealing with the aforementioned skew)
because they require more indirection to locate the source of the test, and will
often be ignored by tools such as pyflakes and pylint.
It is the intent of testscenarios to make dynamically running a single test
in multiple scenarios clear, easy to debug and work with even when the list
of scenarios is dynamically generated.
Defining Scenarios
==================
A **scenario** is a tuple of a string name for the scenario, and a dict of
parameters describing the scenario. The name is appended to the test name, and
the parameters are made available to the test instance when it's run.
Scenarios are presented in **scenario lists** which are typically Python lists
but may be any iterable.
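For instance, a minimal scenario list might look like the following (the names and parameters here are purely illustrative):

```python
# An illustrative scenario list: each scenario is a (name, parameters) pair.
# Running test_foo under these scenarios yields test_foo(unix) and
# test_foo(windows), each with a ``newline`` attribute set on the test.
scenarios = [
    ('unix', {'newline': '\n'}),
    ('windows', {'newline': '\r\n'}),
]
```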
Getting Scenarios applied
=========================
At its heart the concept is simple. For a given test object with a list of
scenarios we prepare a new test object for each scenario. This involves:
* Clone the test to a new test with a new id uniquely distinguishing it.
* Apply the scenario to the test by setting each key, value in the scenario
as attributes on the test object.
There are some complicating factors around making this happen seamlessly. These
factors are in two areas:
* Choosing what scenarios to use. (See Setting Scenarios For A Test).
* Getting the multiplication to happen.
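Conceptually, the multiplication step can be sketched as below. This is an illustrative sketch only, not the actual testscenarios implementation (which also handles id uniqueness and test-runner integration):

```python
import copy
import unittest

class _DemoTest(unittest.TestCase):
    def test_param(self):
        pass

def sketch_apply(test, scenarios):
    # For each scenario: clone the test, give the clone an id that
    # distinguishes it, and set each scenario parameter as an attribute.
    for name, params in scenarios:
        clone = copy.copy(test)
        clone.id = lambda _id=test.id(), _name=name: '%s(%s)' % (_id, _name)
        for key, value in params.items():
            setattr(clone, key, value)
        yield clone

clones = list(sketch_apply(_DemoTest('test_param'),
                           [('one', {'param': 1}), ('two', {'param': 2})]))
```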
Subclassing
++++++++++++
If you can subclass TestWithScenarios, then the ``run()`` method in
TestWithScenarios will take care of test multiplication. At test execution
time it acts as a generator, causing multiple tests to execute. For this to
work reliably, TestWithScenarios must be first in the MRO and you cannot
override ``run()`` or ``__call__``. This is the most robust method, in the
sense that any test runner or test loader that obeys the python unittest
protocol will run all your scenarios.
Manual generation
+++++++++++++++++
If you cannot subclass TestWithScenarios (e.g. because you are using
TwistedTestCase, or TestCaseWithResources, or any one of a number of other
useful test base classes, or need to override run() or __call__ yourself) then
you can cause scenario application to happen later by calling
``testscenarios.generate_scenarios()``. For instance::
>>> import unittest
>>> try:
...     from StringIO import StringIO
... except ImportError:
...     from io import StringIO
>>> from testscenarios.scenarios import generate_scenarios
This can work with loaders and runners from the standard library, or possibly other
implementations::
>>> loader = unittest.TestLoader()
>>> test_suite = unittest.TestSuite()
>>> runner = unittest.TextTestRunner(stream=StringIO())
>>> mytests = loader.loadTestsFromNames(['doc.test_sample'])
>>> test_suite.addTests(generate_scenarios(mytests))
>>> runner.run(test_suite)
<unittest...TextTestResult run=1 errors=0 failures=0>
Testloaders
+++++++++++
Some test loaders support hooks like ``load_tests`` and ``test_suite``.
Ensuring your tests have had scenario application done through these hooks can
be a good idea - it means that external test runners that support these hooks
(such as ``nose``, ``trial``, and ``tribunal``) will still run your scenarios.
(Of course, if you are using the subclassing approach this is already assured.)
With ``load_tests``::
>>> def load_tests(standard_tests, module, loader):
...     result = loader.suiteClass()
...     result.addTests(generate_scenarios(standard_tests))
...     return result
As a convenience, this is available as ``load_tests_apply_scenarios``, so a
module using scenario tests need only say ::

>>> from testscenarios import load_tests_apply_scenarios as load_tests

Python 2.7 and greater support a different calling convention for ``load_tests``
<https://bugs.launchpad.net/bzr/+bug/607412>. ``load_tests_apply_scenarios``
copes with both.
With ``test_suite``::
>>> def test_suite():
...     loader = TestLoader()
...     tests = loader.loadTestsFromName(__name__)
...     result = loader.suiteClass()
...     result.addTests(generate_scenarios(tests))
...     return result
Setting Scenarios for a test
============================
A sample test using scenarios can be found in the doc/ folder.
See `pydoc testscenarios` for details.
On the TestCase
+++++++++++++++
You can set a scenarios attribute on the test case::
>>> class MyTest(unittest.TestCase):
...
...     scenarios = [
...         ('scenario1', dict(param=1)),
...         ('scenario2', dict(param=2)),]
This provides the main interface by which scenarios are found for a given test.
Subclasses will inherit the scenarios (unless they override the attribute).
After loading
+++++++++++++
Test scenarios can also be generated arbitrarily later, as long as the test has
not yet run. Simply replace the ``scenarios`` attribute (or alter it, but be
aware that many tests may share a single scenarios list). For instance, in this
example some third-party tests are extended to run with a custom scenario. ::
>>> import testtools
>>> class TestTransport:
...     """Hypothetical test case for bzrlib transport tests"""
...     pass
...
>>> stock_library_tests = unittest.TestLoader().loadTestsFromNames(
...     ['doc.test_sample'])
...
>>> for test in testtools.iterate_tests(stock_library_tests):
...     if isinstance(test, TestTransport):
...         test.scenarios = test.scenarios + [my_vfs_scenario]
...
>>> suite = unittest.TestSuite()
>>> suite.addTests(generate_scenarios(stock_library_tests))
Generated tests don't have a ``scenarios`` list, because they don't normally
require any more expansion. However, you can add a ``scenarios`` list back on
to them, and then run them through ``generate_scenarios`` again to generate the
cross product of tests. ::
>>> class CrossProductDemo(unittest.TestCase):
...     scenarios = [('scenario_0_0', {}),
...                  ('scenario_0_1', {})]
...     def test_foo(self):
...         return
...
>>> suite = unittest.TestSuite()
>>> suite.addTests(generate_scenarios(CrossProductDemo("test_foo")))
>>> for test in testtools.iterate_tests(suite):
...     test.scenarios = [
...         ('scenario_1_0', {}),
...         ('scenario_1_1', {})]
...
>>> suite2 = unittest.TestSuite()
>>> suite2.addTests(generate_scenarios(suite))
>>> print(suite2.countTestCases())
4
Dynamic Scenarios
+++++++++++++++++
A common use case is to have the list of scenarios be dynamic, based on plugins
and available libraries. An easy way to do this is to provide a module-scope
scenario list somewhere relevant to the tests that will use it, which can then
be customised; alternatively, populate your scenarios dynamically from a
registry or similar.
For instance::
>>> hash_scenarios = []
>>> try:
...     from hashlib import md5
... except ImportError:
...     pass
... else:
...     hash_scenarios.append(("md5", dict(hash=md5)))
>>> try:
...     from hashlib import sha1
... except ImportError:
...     pass
... else:
...     hash_scenarios.append(("sha1", dict(hash=sha1)))
...
>>> class TestHashContract(unittest.TestCase):
...
...     scenarios = hash_scenarios
...
>>> class TestHashPerformance(unittest.TestCase):
...
...     scenarios = hash_scenarios
Forcing Scenarios
+++++++++++++++++
The ``apply_scenarios`` function can be useful to apply scenarios to a test
that has none applied. ``apply_scenarios`` is the workhorse for
``generate_scenarios``, except it takes the scenarios passed in rather than
introspecting the test object to determine the scenarios. The
``apply_scenarios`` function does not reset the test scenarios attribute,
allowing it to be used to layer scenarios without affecting existing scenario
selection.
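A short sketch of layering scenarios this way (the scenario names are illustrative, and the import is guarded so the example loads without testscenarios installed):

```python
import unittest

try:
    from testscenarios.scenarios import apply_scenarios
except ImportError:
    apply_scenarios = None  # testscenarios not available

class TestPlain(unittest.TestCase):
    def test_nothing(self):
        pass

if apply_scenarios is not None:
    # Unlike generate_scenarios, the scenarios are passed in explicitly
    # rather than read from the test, and the test's own scenarios
    # attribute is left untouched.
    layered = list(apply_scenarios(
        [('fast', {'timeout': 1}), ('slow', {'timeout': 30})],
        TestPlain('test_nothing')))
```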
Generating Scenarios
====================
Some functions (currently one :-) are available to ease generation of scenario
lists for common situations.
Testing Per Implementation Module
+++++++++++++++++++++++++++++++++
It is reasonably common to have multiple Python modules that provide the same
capabilities and interface, and to want to apply the same tests to all of them.
In some cases, not all of the statically defined implementations will be able
to be used in a particular testing environment. For example, there may be both
a C and a pure-Python implementation of a module. You want to test the C
module if it can be loaded, but also to have the tests pass if the C module has
not been compiled.
The ``per_module_scenarios`` function generates a scenario for each named
module. The module object of the imported module is set in the supplied
attribute name of the resulting scenario.
Modules which raise ``ImportError`` during import will have the
``sys.exc_info()`` of the exception set instead of the module object. Tests
can check for the attribute being a tuple to decide what to do (e.g. to skip).
Note that for the test to be valid, all access to the module under test must go
through the relevant attribute of the test object. If one of the
implementations is also directly imported by the test module or any other,
testscenarios will not magically stop it being used.
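A sketch of the above, assuming ``per_module_scenarios`` takes an attribute name plus (short name, module name) pairs; the module names here are illustrative, and the import is guarded so the example loads without testscenarios installed:

```python
try:
    from testscenarios.scenarios import per_module_scenarios
except ImportError:
    per_module_scenarios = None  # testscenarios not available

if per_module_scenarios is not None:
    # One scenario per candidate module: 'json' imports cleanly, so its
    # scenario carries the module object in the 'impl' parameter; the
    # second name does not exist, so its scenario carries sys.exc_info()
    # instead, and tests can check for a tuple to decide to skip.
    scenarios = per_module_scenarios(
        'impl', [('json', 'json'), ('missing', 'no_such_module_xyz')])
```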
Advice on Writing Scenarios
===========================
If, because of a bug, a parameterised test is run without being parameterised,
it should fail rather than run with default values, because silently running
with defaults can hide bugs.
Producing Scenarios
===================
The `multiply_scenarios` function produces the cross-product of the scenarios
passed in::
>>> from testscenarios.scenarios import multiply_scenarios
>>>
>>> scenarios = multiply_scenarios(
...     [('scenario1', dict(param1=1)), ('scenario2', dict(param1=2))],
...     [('scenario2', dict(param2=1))],
...     )
>>> scenarios == [('scenario1,scenario2', {'param2': 1, 'param1': 1}),
...               ('scenario2,scenario2', {'param2': 1, 'param1': 2})]
True

doc/__init__.py Normal file
@@ -0,0 +1,16 @@
# testscenarios: extensions to python unittest to allow declarative
# dependency injection ('scenarios') by tests.
#
# Copyright (c) 2009, Robert Collins <robertc@robertcollins.net>
#
# Licensed under either the Apache License, Version 2.0 or the BSD 3-clause
# license at the users choice. A copy of both licenses are available in the
# project source as Apache-2.0 and BSD. You may not use this file except in
# compliance with one of these two licences.
#
# Unless required by applicable law or agreed to in writing, software
# distributed under these licenses is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# license you chose for the specific language governing permissions and
# limitations under that license.

doc/example.py Normal file
@@ -0,0 +1,30 @@
# testscenarios: extensions to python unittest to allow declarative
# dependency injection ('scenarios') by tests.
# Copyright (c) 2009, Robert Collins <robertc@robertcollins.net>
#
# Licensed under either the Apache License, Version 2.0 or the BSD 3-clause
# license at the users choice. A copy of both licenses are available in the
# project source as Apache-2.0 and BSD. You may not use this file except in
# compliance with one of these two licences.
#
# Unless required by applicable law or agreed to in writing, software
# distributed under these licenses is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# license you chose for the specific language governing permissions and
# limitations under that license.
"""Example TestScenario."""
from testscenarios import TestWithScenarios
scenario1 = ('basic', {'attribute': 'value'})
scenario2 = ('advanced', {'attribute': 'value2'})
class SampleWithScenarios(TestWithScenarios):

    scenarios = [scenario1, scenario2]

    def test_demo(self):
        self.assertIsInstance(self.attribute, str)

doc/test_sample.py Normal file
@@ -0,0 +1,22 @@
# testscenarios: extensions to python unittest to allow declarative
# dependency injection ('scenarios') by tests.
#
# Copyright (c) 2009, Robert Collins <robertc@robertcollins.net>
#
# Licensed under either the Apache License, Version 2.0 or the BSD 3-clause
# license at the users choice. A copy of both licenses are available in the
# project source as Apache-2.0 and BSD. You may not use this file except in
# compliance with one of these two licences.
#
# Unless required by applicable law or agreed to in writing, software
# distributed under these licenses is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# license you chose for the specific language governing permissions and
# limitations under that license.
import unittest


class TestSample(unittest.TestCase):

    def test_so_easy(self):
        pass

2
requirements.txt Normal file

@@ -0,0 +1,2 @@
pbr >= 0.11
testtools

23
setup.cfg Normal file

@@ -0,0 +1,23 @@
[metadata]
name = testscenarios
summary = Testscenarios, a pyunit extension for dependency injection
description-file = README
author = Testing-cabal
author-email = testing-cabal@lists.launchpad.net
home-page = https://launchpad.net/testscenarios
classifier =
    Development Status :: 6 - Mature
    Intended Audience :: Developers
    License :: OSI Approved :: BSD License
    License :: OSI Approved :: Apache Software License
    Operating System :: OS Independent
    Programming Language :: Python
    Programming Language :: Python :: 3
    Topic :: Software Development :: Quality Assurance
    Topic :: Software Development :: Testing

[test]
test_module = testscenarios.tests

[bdist_wheel]
universal = 1

7
setup.py Executable file

@@ -0,0 +1,7 @@
#!/usr/bin/env python

import setuptools

setuptools.setup(
    setup_requires=['pbr'],
    pbr=True)

75
testscenarios/__init__.py Normal file

@@ -0,0 +1,75 @@
# testscenarios: extensions to python unittest to allow declarative
# dependency injection ('scenarios') by tests.
#
# Copyright (c) 2009, Robert Collins <robertc@robertcollins.net>
#
# Licensed under either the Apache License, Version 2.0 or the BSD 3-clause
# license at the user's choice. Copies of both licenses are available in the
# project source as Apache-2.0 and BSD. You may not use this file except in
# compliance with one of these two licences.
#
# Unless required by applicable law or agreed to in writing, software
# distributed under these licenses is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# license you chose for the specific language governing permissions and
# limitations under that license.
"""Support for running tests with different scenarios declaratively
Testscenarios provides clean dependency injection for python unittest style
tests. This can be used for interface testing (testing many implementations via
a single test suite) or for classic dependency injection (provide tests with
dependencies externally to the test code itself, allowing easy testing in
different situations).
See the README for a manual, and the docstrings on individual functions and
methods for details.
"""
# same format as sys.version_info: "A tuple containing the five components of
# the version number: major, minor, micro, releaselevel, and serial. All
# values except releaselevel are integers; the release level is 'alpha',
# 'beta', 'candidate', or 'final'. The version_info value corresponding to the
# Python version 2.0 is (2, 0, 0, 'final', 0)." Additionally we use a
# releaselevel of 'dev' for unreleased under-development code.
#
# If the releaselevel is 'alpha' then the major/minor/micro components are not
# established at this point, and setup.py will use a version of next-$(revno).
# If the releaselevel is 'final', then the tarball will be major.minor.micro.
# Otherwise it is major.minor.micro~$(revno).
from pbr.version import VersionInfo
_version = VersionInfo('testscenarios')
__version__ = _version.semantic_version().version_tuple()
version = _version.release_string()
__all__ = [
'TestWithScenarios',
'WithScenarios',
'apply_scenario',
'apply_scenarios',
'generate_scenarios',
'load_tests_apply_scenarios',
'multiply_scenarios',
'per_module_scenarios',
]
from testscenarios.scenarios import (
apply_scenario,
generate_scenarios,
load_tests_apply_scenarios,
multiply_scenarios,
per_module_scenarios,
)
from testscenarios.testcase import TestWithScenarios, WithScenarios
def test_suite():
import testscenarios.tests
return testscenarios.tests.test_suite()
def load_tests(standard_tests, module, loader):
standard_tests.addTests(loader.loadTestsFromNames(["testscenarios.tests"]))
return standard_tests
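The module-level `load_tests` above follows the standard unittest hook protocol: the loader hands over the tests it collected and runs whatever suite the hook returns. A stdlib-only sketch of that handshake (`DemoTest` and the trivial hook body are illustrative, not part of testscenarios):

```python
import unittest

class DemoTest(unittest.TestCase):
    def test_ok(self):
        pass

def load_tests(loader, standard_tests, pattern):
    # Standard unittest hook: receive the already-collected tests and
    # return the suite to run; scenario expansion plugs in here.
    suite = loader.suiteClass()
    suite.addTests(standard_tests)
    return suite

loader = unittest.TestLoader()
standard = loader.loadTestsFromTestCase(DemoTest)
suite = load_tests(loader, [standard], None)
print(suite.countTestCases())  # 1
```

Replacing the hook body with a call to `generate_scenarios` is exactly what `load_tests_apply_scenarios` does.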

167
testscenarios/scenarios.py Normal file

@@ -0,0 +1,167 @@
# testscenarios: extensions to python unittest to allow declarative
# dependency injection ('scenarios') by tests.
#
# Copyright (c) 2009, Robert Collins <robertc@robertcollins.net>
# Copyright (c) 2010, 2011 Martin Pool <mbp@sourcefrog.net>
#
# Licensed under either the Apache License, Version 2.0 or the BSD 3-clause
# license at the user's choice. Copies of both licenses are available in the
# project source as Apache-2.0 and BSD. You may not use this file except in
# compliance with one of these two licences.
#
# Unless required by applicable law or agreed to in writing, software
# distributed under these licenses is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# license you chose for the specific language governing permissions and
# limitations under that license.
__all__ = [
    'apply_scenario',
    'apply_scenarios',
    'generate_scenarios',
    'load_tests_apply_scenarios',
    'multiply_scenarios',
    ]

from itertools import (
    chain,
    product,
    )
import sys
import unittest

from testtools.testcase import clone_test_with_new_id
from testtools import iterate_tests


def apply_scenario(scenario, test):
    """Apply scenario to test.

    :param scenario: A tuple (name, parameters) to apply to the test. The test
        is cloned, its id adjusted to have (name) after it, and the parameters
        dict is used to update the new test.
    :param test: The test to apply the scenario to. This test is unaltered.
    :return: A new test cloned from test, with the scenario applied.
    """
    name, parameters = scenario
    scenario_suffix = '(' + name + ')'
    newtest = clone_test_with_new_id(test,
        test.id() + scenario_suffix)
    test_desc = test.shortDescription()
    if test_desc is not None:
        newtest_desc = "%(test_desc)s %(scenario_suffix)s" % vars()
        newtest.shortDescription = (lambda: newtest_desc)
    for key, value in parameters.items():
        setattr(newtest, key, value)
    return newtest


def apply_scenarios(scenarios, test):
    """Apply many scenarios to a test.

    :param scenarios: An iterable of scenarios.
    :param test: A test to apply the scenarios to.
    :return: A generator of tests.
    """
    for scenario in scenarios:
        yield apply_scenario(scenario, test)


def generate_scenarios(test_or_suite):
    """Yield the tests in test_or_suite with scenario multiplication done.

    TestCase objects with no scenarios specified are yielded unaltered. Tests
    with scenarios are not yielded at all; instead, the results of multiplying
    them by their scenarios are yielded.

    :param test_or_suite: A TestCase or TestSuite.
    :return: A generator of tests - objects satisfying the TestCase protocol.
    """
    for test in iterate_tests(test_or_suite):
        scenarios = getattr(test, 'scenarios', None)
        if scenarios:
            for newtest in apply_scenarios(scenarios, test):
                newtest.scenarios = None
                yield newtest
        else:
            yield test


def load_tests_apply_scenarios(*params):
    """Adapt test-runner load hooks to call generate_scenarios.

    If this is referenced by the `load_tests` attribute of a module, then
    testloaders that implement this protocol will automatically arrange for
    the scenarios to be expanded. This can be used instead of using
    TestWithScenarios.

    Two different calling conventions for load_tests have been used, and this
    function should support both. Python 2.7 passes (loader, standard_tests,
    pattern), and bzr used (standard_tests, module, loader).

    :param loader: A TestLoader.
    :param standard_tests: The test objects found in this module before
        multiplication.
    """
    if getattr(params[0], 'suiteClass', None) is not None:
        loader, standard_tests, pattern = params
    else:
        standard_tests, module, loader = params
    result = loader.suiteClass()
    result.addTests(generate_scenarios(standard_tests))
    return result


def multiply_scenarios(*scenarios):
    """Multiply two or more iterables of scenarios.

    It is safe to pass scenario generators or iterators.

    :returns: A list of compound scenarios: the cross-product of all
        scenarios, with the names concatenated and the parameters
        merged together.
    """
    result = []
    scenario_lists = map(list, scenarios)
    for combination in product(*scenario_lists):
        names, parameters = zip(*combination)
        scenario_name = ','.join(names)
        scenario_parameters = {}
        for parameter in parameters:
            scenario_parameters.update(parameter)
        result.append((scenario_name, scenario_parameters))
    return result


def per_module_scenarios(attribute_name, modules):
    """Generate scenarios for available implementation modules.

    This is typically used when there is a subsystem implemented, for
    example, in both Python and C, and we want to apply the same tests to
    both, but the C module may sometimes not be available.

    Note: if the module can't be loaded, the sys.exc_info() tuple for the
    exception raised during import of the module is used instead of the module
    object. A common idiom is to check in setUp for that and raise a skip or
    error for that case. No special helpers are supplied in testscenarios as
    yet.

    :param attribute_name: A name to be set in the scenario parameter
        dictionary (and thence onto the test instance) pointing to the
        implementation module (or import exception) for this scenario.
    :param modules: An iterable of (short_name, module_name), where
        the short name is something like 'python' to put in the
        scenario name, and the long name is a fully-qualified Python module
        name.
    """
    scenarios = []
    for short_name, module_name in modules:
        try:
            mod = __import__(module_name, {}, {}, [''])
        except:
            mod = sys.exc_info()
        scenarios.append((
            short_name,
            {attribute_name: mod}))
    return scenarios
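The cross-product behaviour of multiply_scenarios can be checked against a stdlib-only transcription of the algorithm above (`multiply` is a sketch for illustration, not the installed library):

```python
from itertools import product

def multiply(*scenario_lists):
    # Same algorithm as multiply_scenarios: join names with ',' and
    # merge parameter dicts left to right.
    result = []
    for combination in product(*map(list, scenario_lists)):
        names, parameters = zip(*combination)
        merged = {}
        for parameter in parameters:
            merged.update(parameter)
        result.append((','.join(names), merged))
    return result

pairs = multiply([('a', {'p': 'a'}), ('b', {'p': 'b'})],
                 [('a', {'q': 'a'}), ('b', {'q': 'b'})])
print(pairs)
# [('a,a', {'p': 'a', 'q': 'a'}), ('a,b', {'p': 'a', 'q': 'b'}),
#  ('b,a', {'p': 'b', 'q': 'a'}), ('b,b', {'p': 'b', 'q': 'b'})]
```

Because parameter dicts are merged left to right, a key appearing in several scenario axes is won by the rightmost axis.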

70
testscenarios/testcase.py Normal file

@@ -0,0 +1,70 @@
# testscenarios: extensions to python unittest to allow declarative
# dependency injection ('scenarios') by tests.
#
# Copyright (c) 2009, Robert Collins <robertc@robertcollins.net>
#
# Licensed under either the Apache License, Version 2.0 or the BSD 3-clause
# license at the user's choice. Copies of both licenses are available in the
# project source as Apache-2.0 and BSD. You may not use this file except in
# compliance with one of these two licences.
#
# Unless required by applicable law or agreed to in writing, software
# distributed under these licenses is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# license you chose for the specific language governing permissions and
# limitations under that license.
__all__ = [
    'TestWithScenarios',
    'WithScenarios',
    ]

import unittest

from testtools.testcase import clone_test_with_new_id

from testscenarios.scenarios import generate_scenarios

_doc = """
    When a test object which inherits from WithScenarios is run, and there is a
    non-empty scenarios attribute on the object, the test is multiplied by the
    run method into one test per scenario. For this to work reliably the
    WithScenarios.run method must not be overridden in a subclass (or must be
    overridden compatibly with WithScenarios).
    """


class WithScenarios(object):

    __doc__ = """A mixin for TestCase with support for declarative scenarios.
    """ + _doc

    def _get_scenarios(self):
        return getattr(self, 'scenarios', None)

    def countTestCases(self):
        scenarios = self._get_scenarios()
        if not scenarios:
            return 1
        else:
            return len(scenarios)

    def debug(self):
        scenarios = self._get_scenarios()
        if scenarios:
            for test in generate_scenarios(self):
                test.debug()
        else:
            return super(WithScenarios, self).debug()

    def run(self, result=None):
        scenarios = self._get_scenarios()
        if scenarios:
            for test in generate_scenarios(self):
                test.run(result)
            return
        else:
            return super(WithScenarios, self).run(result)


class TestWithScenarios(WithScenarios, unittest.TestCase):

    __doc__ = """Unittest TestCase with support for declarative scenarios.
    """ + _doc

43
testscenarios/tests/__init__.py Normal file

@@ -0,0 +1,43 @@
# testscenarios: extensions to python unittest to allow declarative
# dependency injection ('scenarios') by tests.
#
# Copyright (c) 2009, Robert Collins <robertc@robertcollins.net>
#
# Licensed under either the Apache License, Version 2.0 or the BSD 3-clause
# license at the user's choice. Copies of both licenses are available in the
# project source as Apache-2.0 and BSD. You may not use this file except in
# compliance with one of these two licences.
#
# Unless required by applicable law or agreed to in writing, software
# distributed under these licenses is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# license you chose for the specific language governing permissions and
# limitations under that license.
import doctest
import sys
import unittest

import testscenarios


def test_suite():
    standard_tests = unittest.TestSuite()
    module = sys.modules['testscenarios.tests']
    loader = unittest.TestLoader()
    return load_tests(standard_tests, module, loader)


def load_tests(standard_tests, module, loader):
    test_modules = [
        'testcase',
        'scenarios',
        ]
    prefix = "testscenarios.tests.test_"
    test_mod_names = [prefix + test_module for test_module in test_modules]
    standard_tests.addTests(loader.loadTestsFromNames(test_mod_names))
    doctest.set_unittest_reportflags(doctest.REPORT_ONLY_FIRST_FAILURE)
    standard_tests.addTest(
        doctest.DocFileSuite("../../README", optionflags=doctest.ELLIPSIS))
    return loader.suiteClass(testscenarios.generate_scenarios(standard_tests))

262
testscenarios/tests/test_scenarios.py Normal file

@@ -0,0 +1,262 @@
# testscenarios: extensions to python unittest to allow declarative
# dependency injection ('scenarios') by tests.
#
# Copyright (c) 2009, Robert Collins <robertc@robertcollins.net>
# Copyright (c) 2010, 2011 Martin Pool <mbp@sourcefrog.net>
#
# Licensed under either the Apache License, Version 2.0 or the BSD 3-clause
# license at the user's choice. Copies of both licenses are available in the
# project source as Apache-2.0 and BSD. You may not use this file except in
# compliance with one of these two licences.
#
# Unless required by applicable law or agreed to in writing, software
# distributed under these licenses is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# license you chose for the specific language governing permissions and
# limitations under that license.
import unittest

import testtools
from testtools.matchers import EndsWith
from testtools.tests.helpers import LoggingResult

import testscenarios
from testscenarios.scenarios import (
    apply_scenario,
    apply_scenarios,
    generate_scenarios,
    load_tests_apply_scenarios,
    multiply_scenarios,
    )


class TestGenerateScenarios(testtools.TestCase):

    def hook_apply_scenarios(self):
        self.addCleanup(setattr, testscenarios.scenarios, 'apply_scenarios',
            apply_scenarios)
        log = []
        def capture(scenarios, test):
            log.append((scenarios, test))
            return apply_scenarios(scenarios, test)
        testscenarios.scenarios.apply_scenarios = capture
        return log

    def test_generate_scenarios_preserves_normal_test(self):
        class ReferenceTest(unittest.TestCase):
            def test_pass(self):
                pass
        test = ReferenceTest("test_pass")
        log = self.hook_apply_scenarios()
        self.assertEqual([test], list(generate_scenarios(test)))
        self.assertEqual([], log)

    def test_tests_with_scenarios_calls_apply_scenarios(self):
        class ReferenceTest(unittest.TestCase):
            scenarios = [('demo', {})]
            def test_pass(self):
                pass
        test = ReferenceTest("test_pass")
        log = self.hook_apply_scenarios()
        tests = list(generate_scenarios(test))
        self.expectThat(
            tests[0].id(), EndsWith('ReferenceTest.test_pass(demo)'))
        self.assertEqual([([('demo', {})], test)], log)

    def test_all_scenarios_yielded(self):
        class ReferenceTest(unittest.TestCase):
            scenarios = [('1', {}), ('2', {})]
            def test_pass(self):
                pass
        test = ReferenceTest("test_pass")
        tests = list(generate_scenarios(test))
        self.expectThat(
            tests[0].id(), EndsWith('ReferenceTest.test_pass(1)'))
        self.expectThat(
            tests[1].id(), EndsWith('ReferenceTest.test_pass(2)'))

    def test_scenarios_attribute_cleared(self):
        class ReferenceTest(unittest.TestCase):
            scenarios = [
                ('1', {'foo': 1, 'bar': 2}),
                ('2', {'foo': 2, 'bar': 4})]
            def test_check_foo(self):
                pass
        test = ReferenceTest("test_check_foo")
        tests = list(generate_scenarios(test))
        for adapted in tests:
            self.assertEqual(None, adapted.scenarios)

    def test_multiple_tests(self):
        class Reference1(unittest.TestCase):
            scenarios = [('1', {}), ('2', {})]
            def test_something(self):
                pass
        class Reference2(unittest.TestCase):
            scenarios = [('3', {}), ('4', {})]
            def test_something(self):
                pass
        suite = unittest.TestSuite()
        suite.addTest(Reference1("test_something"))
        suite.addTest(Reference2("test_something"))
        tests = list(generate_scenarios(suite))
        self.assertEqual(4, len(tests))


class TestApplyScenario(testtools.TestCase):

    def setUp(self):
        super(TestApplyScenario, self).setUp()
        self.scenario_name = 'demo'
        self.scenario_attrs = {'foo': 'bar'}
        self.scenario = (self.scenario_name, self.scenario_attrs)

        class ReferenceTest(unittest.TestCase):
            def test_pass(self):
                pass
            def test_pass_with_docstring(self):
                """ The test that always passes.

                This test case has a PEP 257 conformant docstring,
                with its first line being a brief synopsis and the
                rest of the docstring explaining that this test
                does nothing but pass unconditionally.
                """
                pass

        self.ReferenceTest = ReferenceTest

    def test_sets_specified_id(self):
        raw_test = self.ReferenceTest('test_pass')
        raw_id = "ReferenceTest.test_pass"
        scenario_name = self.scenario_name
        expect_id = "%(raw_id)s(%(scenario_name)s)" % vars()
        modified_test = apply_scenario(self.scenario, raw_test)
        self.expectThat(modified_test.id(), EndsWith(expect_id))

    def test_sets_specified_attributes(self):
        raw_test = self.ReferenceTest('test_pass')
        modified_test = apply_scenario(self.scenario, raw_test)
        self.assertEqual('bar', modified_test.foo)

    def test_appends_scenario_name_to_short_description(self):
        raw_test = self.ReferenceTest('test_pass_with_docstring')
        modified_test = apply_scenario(self.scenario, raw_test)
        raw_doc = self.ReferenceTest.test_pass_with_docstring.__doc__
        raw_desc = raw_doc.split("\n")[0].strip()
        scenario_name = self.scenario_name
        expect_desc = "%(raw_desc)s (%(scenario_name)s)" % vars()
        self.assertEqual(expect_desc, modified_test.shortDescription())


class TestApplyScenarios(testtools.TestCase):

    def test_calls_apply_scenario(self):
        self.addCleanup(setattr, testscenarios.scenarios, 'apply_scenario',
            apply_scenario)
        log = []
        def capture(scenario, test):
            log.append((scenario, test))
        testscenarios.scenarios.apply_scenario = capture
        scenarios = ["foo", "bar"]
        result = list(apply_scenarios(scenarios, "test"))
        self.assertEqual([('foo', 'test'), ('bar', 'test')], log)

    def test_preserves_scenarios_attribute(self):
        class ReferenceTest(unittest.TestCase):
            scenarios = [('demo', {})]
            def test_pass(self):
                pass
        test = ReferenceTest("test_pass")
        tests = list(apply_scenarios(ReferenceTest.scenarios, test))
        self.assertEqual([('demo', {})], ReferenceTest.scenarios)
        self.assertEqual(ReferenceTest.scenarios, tests[0].scenarios)


class TestLoadTests(testtools.TestCase):

    class SampleTest(unittest.TestCase):
        def test_nothing(self):
            pass
        scenarios = [
            ('a', {}),
            ('b', {}),
            ]

    def test_load_tests_apply_scenarios(self):
        suite = load_tests_apply_scenarios(
            unittest.TestLoader(),
            [self.SampleTest('test_nothing')],
            None)
        result_tests = list(testtools.iterate_tests(suite))
        self.assertEqual(
            2,
            len(result_tests),
            result_tests)

    def test_load_tests_apply_scenarios_old_style(self):
        """Call load_tests in the way used by bzr."""
        suite = load_tests_apply_scenarios(
            [self.SampleTest('test_nothing')],
            self.__class__.__module__,
            unittest.TestLoader(),
            )
        result_tests = list(testtools.iterate_tests(suite))
        self.assertEqual(
            2,
            len(result_tests),
            result_tests)


class TestMultiplyScenarios(testtools.TestCase):

    def test_multiply_scenarios(self):
        def factory(name):
            for i in 'ab':
                yield i, {name: i}
        scenarios = multiply_scenarios(factory('p'), factory('q'))
        self.assertEqual([
            ('a,a', dict(p='a', q='a')),
            ('a,b', dict(p='a', q='b')),
            ('b,a', dict(p='b', q='a')),
            ('b,b', dict(p='b', q='b')),
            ],
            scenarios)

    def test_multiply_many_scenarios(self):
        def factory(name):
            for i in 'abc':
                yield i, {name: i}
        scenarios = multiply_scenarios(factory('p'), factory('q'),
            factory('r'), factory('t'))
        self.assertEqual(
            3**4,
            len(scenarios),
            scenarios)
        self.assertEqual(
            'a,a,a,a',
            scenarios[0][0])


class TestPerModuleScenarios(testtools.TestCase):

    def test_per_module_scenarios(self):
        """Generate scenarios for available modules"""
        s = testscenarios.scenarios.per_module_scenarios(
            'the_module', [
                ('Python', 'testscenarios'),
                ('unittest', 'unittest'),
                ('nonexistent', 'nonexistent'),
                ])
        self.assertEqual('nonexistent', s[-1][0])
        self.assertIsInstance(s[-1][1]['the_module'], tuple)
        s[-1][1]['the_module'] = None
        self.assertEqual(s, [
            ('Python', {'the_module': testscenarios}),
            ('unittest', {'the_module': unittest}),
            ('nonexistent', {'the_module': None}),
            ])
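The import-failure fallback exercised by TestPerModuleScenarios can be reproduced with the stdlib alone. `per_module` below mirrors per_module_scenarios as shown earlier, except that the bare `except:` is narrowed to ImportError; the missing module name is made up for the demonstration:

```python
import sys

def per_module(attribute_name, modules):
    # Mirror of per_module_scenarios: store the imported module, or the
    # sys.exc_info() triple when the import fails (narrowed to
    # ImportError in this sketch).
    scenarios = []
    for short_name, module_name in modules:
        try:
            mod = __import__(module_name, {}, {}, [''])
        except ImportError:
            mod = sys.exc_info()
        scenarios.append((short_name, {attribute_name: mod}))
    return scenarios

s = per_module('the_module', [('unittest', 'unittest'),
                              ('missing', 'no_such_module_xyz')])
print(isinstance(s[1][1]['the_module'], tuple))  # True
```

A test's setUp can then inspect the attribute and skip when it holds an exc_info tuple rather than a module.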

155
testscenarios/tests/test_testcase.py Normal file

@@ -0,0 +1,155 @@
# testscenarios: extensions to python unittest to allow declarative
# dependency injection ('scenarios') by tests.
#
# Copyright (c) 2009, Robert Collins <robertc@robertcollins.net>
#
# Licensed under either the Apache License, Version 2.0 or the BSD 3-clause
# license at the user's choice. Copies of both licenses are available in the
# project source as Apache-2.0 and BSD. You may not use this file except in
# compliance with one of these two licences.
#
# Unless required by applicable law or agreed to in writing, software
# distributed under these licenses is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# license you chose for the specific language governing permissions and
# limitations under that license.
import unittest

import testtools
from testtools.matchers import EndsWith
from testtools.tests.helpers import LoggingResult

import testscenarios


class TestTestWithScenarios(testtools.TestCase):

    scenarios = testscenarios.scenarios.per_module_scenarios(
        'impl', (('unittest', 'unittest'), ('unittest2', 'unittest2')))

    @property
    def Implementation(self):
        if isinstance(self.impl, tuple):
            self.skipTest('import failed - module not installed?')
        class Implementation(testscenarios.WithScenarios, self.impl.TestCase):
            pass
        return Implementation

    def test_no_scenarios_no_error(self):
        class ReferenceTest(self.Implementation):
            def test_pass(self):
                pass
        test = ReferenceTest("test_pass")
        result = unittest.TestResult()
        test.run(result)
        self.assertTrue(result.wasSuccessful())
        self.assertEqual(1, result.testsRun)

    def test_with_one_scenario_one_run(self):
        class ReferenceTest(self.Implementation):
            scenarios = [('demo', {})]
            def test_pass(self):
                pass
        test = ReferenceTest("test_pass")
        log = []
        result = LoggingResult(log)
        test.run(result)
        self.assertTrue(result.wasSuccessful())
        self.assertEqual(1, result.testsRun)
        self.expectThat(
            log[0][1].id(), EndsWith('ReferenceTest.test_pass(demo)'))

    def test_with_two_scenarios_two_run(self):
        class ReferenceTest(self.Implementation):
            scenarios = [('1', {}), ('2', {})]
            def test_pass(self):
                pass
        test = ReferenceTest("test_pass")
        log = []
        result = LoggingResult(log)
        test.run(result)
        self.assertTrue(result.wasSuccessful())
        self.assertEqual(2, result.testsRun)
        self.expectThat(
            log[0][1].id(), EndsWith('ReferenceTest.test_pass(1)'))
        self.expectThat(
            log[4][1].id(), EndsWith('ReferenceTest.test_pass(2)'))

    def test_attributes_set(self):
        class ReferenceTest(self.Implementation):
            scenarios = [
                ('1', {'foo': 1, 'bar': 2}),
                ('2', {'foo': 2, 'bar': 4})]
            def test_check_foo(self):
                self.assertEqual(self.foo * 2, self.bar)
        test = ReferenceTest("test_check_foo")
        log = []
        result = LoggingResult(log)
        test.run(result)
        self.assertTrue(result.wasSuccessful())
        self.assertEqual(2, result.testsRun)

    def test_scenarios_attribute_cleared(self):
        class ReferenceTest(self.Implementation):
            scenarios = [
                ('1', {'foo': 1, 'bar': 2}),
                ('2', {'foo': 2, 'bar': 4})]
            def test_check_foo(self):
                self.assertEqual(self.foo * 2, self.bar)
        test = ReferenceTest("test_check_foo")
        log = []
        result = LoggingResult(log)
        test.run(result)
        self.assertTrue(result.wasSuccessful())
        self.assertEqual(2, result.testsRun)
        self.assertNotEqual(None, test.scenarios)
        self.assertEqual(None, log[0][1].scenarios)
        self.assertEqual(None, log[4][1].scenarios)

    def test_countTestCases_no_scenarios(self):
        class ReferenceTest(self.Implementation):
            def test_check_foo(self):
                pass
        test = ReferenceTest("test_check_foo")
        self.assertEqual(1, test.countTestCases())

    def test_countTestCases_empty_scenarios(self):
        class ReferenceTest(self.Implementation):
            scenarios = []
            def test_check_foo(self):
                pass
        test = ReferenceTest("test_check_foo")
        self.assertEqual(1, test.countTestCases())

    def test_countTestCases_1_scenarios(self):
        class ReferenceTest(self.Implementation):
            scenarios = [('1', {'foo': 1, 'bar': 2})]
            def test_check_foo(self):
                pass
        test = ReferenceTest("test_check_foo")
        self.assertEqual(1, test.countTestCases())

    def test_countTestCases_2_scenarios(self):
        class ReferenceTest(self.Implementation):
            scenarios = [
                ('1', {'foo': 1, 'bar': 2}),
                ('2', {'foo': 2, 'bar': 4})]
            def test_check_foo(self):
                pass
        test = ReferenceTest("test_check_foo")
        self.assertEqual(2, test.countTestCases())

    def test_debug_2_scenarios(self):
        log = []
        class ReferenceTest(self.Implementation):
            scenarios = [
                ('1', {'foo': 1, 'bar': 2}),
                ('2', {'foo': 2, 'bar': 4})]
            def test_check_foo(self):
                log.append(self)
        test = ReferenceTest("test_check_foo")
        test.debug()
        self.assertEqual(2, len(log))
        self.assertEqual(None, log[0].scenarios)
        self.assertEqual(None, log[1].scenarios)
        self.assertNotEqual(log[0].id(), log[1].id())