# Testing

## Overview

We maintain three kinds of tests: unit tests, integration tests,
and acceptance tests.

### Unit Tests

* Each test case should be concise: setup, execute, check, and teardown.
If you find yourself writing tests with many steps, consider refactoring
the unit under test into smaller units, and then testing those individually.

* As a rule of thumb, your unit tests should cover every code branch.

* Mock or patch external dependencies.
We use [voidspace mock](http://www.voidspace.org.uk/python/mock/); a short example follows this list.

* We unit test Python code (using [unittest](http://docs.python.org/2/library/unittest.html)) and
Javascript (using [Jasmine](http://pivotal.github.io/jasmine/)).

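Below is a minimal sketch of the setup/execute/check pattern with a mocked
external dependency.  The function under test, `fetch_title`, is defined
inline purely for illustration; it is not part of the codebase:

    import unittest
    from mock import Mock


    def fetch_title(client, url):
        """Hypothetical unit under test: fetch a page and title-case it."""
        return client.get(url).strip().title()


    class FetchTitleTest(unittest.TestCase):
        def test_fetch_title_formats_page_text(self):
            # Setup: mock the HTTP client instead of hitting the network.
            client = Mock()
            client.get.return_value = '  hello world  '
            # Execute.
            result = fetch_title(client, 'http://example.com')
            # Check.
            self.assertEqual(result, 'Hello World')
            client.get.assert_called_once_with('http://example.com')
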
### Integration Tests
* Test several units at the same time.
Note that you can still mock or patch dependencies
that are not under test!  For example, you might test that
`LoncapaProblem`, `NumericalResponse`, and `CorrectMap` in the
`capa` package work together, while still mocking out template rendering.

* Use integration tests to ensure that units are hooked up correctly.
You do not need to test every possible input--that's what unit
tests are for.  Instead, focus on testing the "happy path"
to verify that the components work together correctly.

* Many of our tests use the [Django test client](https://docs.djangoproject.com/en/dev/topics/testing/overview/) to simulate
HTTP requests to the server (see the sketch below).
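
Below is a minimal sketch of an integration test driven through the Django
test client; the URL and the expected response code are illustrative, not
taken from the codebase:

    from django.test import TestCase


    class DashboardAccessTest(TestCase):
        def test_anonymous_user_is_redirected(self):
            # The test client simulates an HTTP GET without a running server.
            response = self.client.get('/dashboard')
            # Expect a redirect (e.g. to the login page) for anonymous users.
            self.assertEqual(response.status_code, 302)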

### UI Acceptance Tests
* Use these to test that major program features are working correctly.

* We use [lettuce](http://lettuce.it/) to write BDD-style tests.  Most of
these tests simulate user interactions through the browser using
[splinter](http://splinter.cobrateam.info/).  A sample step definition appears below.
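
For flavor, here is a minimal sketch of a Lettuce step definition in the
Splinter style.  The step text and URL are invented, and `world.browser`
is assumed to be the splinter browser set up by the test harness:

    from lettuce import step, world


    @step(u'I visit the courseware page')
    def i_visit_the_courseware_page(step):
        # Drive a real browser, just as a user would.
        world.browser.visit('http://localhost:8000/courseware')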

Overall, you want to write the tests that **maximize coverage**
while **minimizing maintenance**.
In practice, this usually means investing heavily
in unit tests, which tend to be the most robust to changes in the code base.

![Test Pyramid](test_pyramid.png)

The pyramid above shows the relative number of unit tests, integration tests,
and acceptance tests.  Most of our tests are unit tests or integration tests.

## Test Locations

* Python unit and integration tests: Located in
subpackages called `tests`.
For example, the tests for the `capa` package are located in
`common/lib/capa/capa/tests`.

* Javascript unit tests: Located in `spec` folders.  For example,
`common/lib/xmodule/xmodule/js/spec` and `{cms,lms}/static/coffee/spec`.
For consistency, you should use the same directory structure for implementation
and test.  For example, the test for `src/views/module.coffee`
should be written in `spec/views/module_spec.coffee`.

* UI acceptance tests:
    - Set up and helper methods: `common/djangoapps/terrain`
    - Tests: located in `features` subpackage within a Django app.
    For example: `lms/djangoapps/courseware/features`


## Factories

Many tests delegate set-up to a "factory" class.  For example,
there are factories for creating courses, problems, and users.
This keeps set-up logic out of individual test cases.

Factories are often implemented using [FactoryBoy](https://readthedocs.org/projects/factoryboy/).

In general, factories should be located close to the code they use.
For example, the factory for creating problem XML definitions
is located in `common/lib/capa/capa/tests/response_xml_factory.py`
because the `capa` package handles problem XML.
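
As an illustration, here is a minimal factory sketch for Django's built-in
`User` model.  The attribute values are invented for the example, and the
`FACTORY_FOR` declaration follows the factory_boy style in use at the time
of writing:

    import factory
    from django.contrib.auth.models import User


    class UserFactory(factory.Factory):
        FACTORY_FOR = User

        username = factory.Sequence(lambda n: 'robot%s' % n)
        email = factory.Sequence(lambda n: 'robot%s@example.com' % n)
        is_active = True

A test can then call `UserFactory.build()` for an unsaved `User`, overriding
any attribute by keyword (for example, `UserFactory.build(username='jane')`).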


# Running Tests

You can run all of the unit-level tests using the command

    rake test

This includes python, javascript, and documentation tests. It does not, however,
run any acceptance tests.

## Running Python Unit Tests

We use [nose](https://nose.readthedocs.org/en/latest/) through
the [django-nose plugin](https://pypi.python.org/pypi/django-nose)
to run the test suite.

You can run all the python tests using `rake` commands.  For example,

    rake test:python

runs all the tests.  It also runs `collectstatic`, which prepares the static files used by the site (for example, compiling Coffeescript to Javascript).

You can re-run all failed python tests by running

    rake test:python[--failed]

You can also run the tests without `collectstatic`, which tends to be faster:

    rake fasttest_lms

or

    rake fasttest_cms

xmodule can be tested independently, with this:

    rake test_common/lib/xmodule

Other module-level tests include:

* `rake test_common/lib/capa`
* `rake test_common/lib/calc`

To run a single django test class:

    rake test_lms[lms/djangoapps/courseware/tests/tests.py:ActivateLoginTest]

To run a single django test:

    rake test_lms[lms/djangoapps/courseware/tests/tests.py:ActivateLoginTest.test_activate_login]

To re-run all failing django tests from lms or cms:

    rake test_lms[--failed]

To run a single nose test file:

    nosetests common/lib/xmodule/xmodule/tests/test_stringify.py

To run a single nose test:

    nosetests common/lib/xmodule/xmodule/tests/test_stringify.py:test_stringify

To run a single test and get stdout, with proper env config:

    python manage.py cms --settings test test contentstore.tests.test_import_nostatic -s

To run a single test and get stdout and coverage:

    python -m coverage run --rcfile=./common/lib/xmodule/.coveragerc `which ./manage.py` cms --settings test test --traceback --logging-clear-handlers --liveserver=localhost:8000-9000 contentstore.tests.test_import_nostatic -s # cms example
    python -m coverage run --rcfile=./lms/.coveragerc `which ./manage.py` lms --settings test test --traceback --logging-clear-handlers --liveserver=localhost:8000-9000  courseware.tests.test_module_render -s # lms example

To generate a coverage report:

    coverage report --rcfile=./common/lib/xmodule/.coveragerc

or, to get an HTML report:

    coverage html --rcfile=./common/lib/xmodule/.coveragerc

Then browse `reports/common/lib/xmodule/cover/index.html`.

Very handy: if you uncomment the `pdb=1` line in `setup.cfg`, it will drop you into pdb on error.  This lets you go up and down the stack and see what the values of the variables are.  Check out [the pdb documentation](http://docs.python.org/library/pdb.html).


## Running Javascript Unit Tests

We use Jasmine to run JavaScript unit tests.  To run all the JavaScript tests:

    rake test:js

To run a specific set of JavaScript tests and print the results to the console:

    rake test:js:run[lms]
    rake test:js:run[cms]
    rake test:js:run[xmodule]
    rake test:js:run[common]

To run JavaScript tests in your default browser:

    rake test:js:dev[lms]
    rake test:js:dev[cms]
    rake test:js:dev[xmodule]
    rake test:js:dev[common]

These rake commands call through to a custom test runner.  For more info, see [js-test-tool](https://github.com/edx/js-test-tool).


## Running Acceptance Tests

We use [Lettuce](http://lettuce.it/) for acceptance testing.
Most of our tests use [Splinter](http://splinter.cobrateam.info/)
to simulate UI browser interactions.  Splinter, in turn,
uses [Selenium](http://docs.seleniumhq.org/) to control the Chrome browser.

**Prerequisite**: You must have [ChromeDriver](https://code.google.com/p/selenium/wiki/ChromeDriver)
installed to run the tests in Chrome.  The tests are confirmed to run
with Chrome (not Chromium) version 28.0.1500.71 and ChromeDriver
version 2.1.210398.

To run all the acceptance tests:

    rake test:acceptance

To run only the lms or cms tests:

    rake test:acceptance:lms
    rake test:acceptance:cms

To test only a specific feature:

    rake test:acceptance:lms["lms/djangoapps/courseware/features/problems.feature"]

To test only a specific scenario:

    rake test:acceptance:lms["lms/djangoapps/courseware/features/problems.feature -s 3"]

To start the debugger on failure, add the `--pdb` option:

    rake test:acceptance:lms["lms/djangoapps/courseware/features/problems.feature --pdb"]

To run tests faster by not collecting static files, you can use
`rake test:acceptance:lms:fast` and `rake test:acceptance:cms:fast`.

Acceptance tests run on a randomized port, so they can be run in the background alongside `rake cms`, `rake lms`, or the unit tests.
To specify the port, change the `LETTUCE_SERVER_PORT` constant in `cms/envs/acceptance.py` and `lms/envs/acceptance.py`,
as well as the port listed in `cms/djangoapps/contentstore/feature/upload.py`.

**Note**: The acceptance tests can *not* currently run in parallel.

## Viewing Test Coverage

We currently collect test coverage information for Python unit/integration tests.

To view test coverage:

1. Run the test suite:

        rake test

2. Generate reports:

        rake coverage

3. Reports are located in the `reports` folder.  The command
generates HTML and XML (Cobertura format) reports.


## Testing Using Queue Servers

When testing problems that use a queue server on AWS (e.g. sandbox-xqueue.edx.org), you'll need to run your server on your public IP, like so:

    ./manage.py lms runserver 0.0.0.0:8000

When you connect to the LMS, you need to use the public IP.  Use `ifconfig` to figure out the address, and connect to e.g. `http://18.3.4.5:8000/`.


## Acceptance Test Techniques

1. Where possible, avoid bare `assert not` checks for CSS.  Use `world.is_css_present` and `world.is_css_not_present` instead.
    Errors can arise if checks for the CSS are performed before the page finishes loading.
    To get around this, these functions wait a period of time for the CSS to appear
    before returning, and return immediately if it is already there.  The reverse function
    waits for the CSS to not appear and returns immediately if it isn't there.

    All CSS functions can utilize this timeout to ensure that the page is fully loaded.

2. Dealing with alerts
    Chrome can hang on javascript alerts.  If a javascript alert/prompt/confirmation is expected, use the step
    'I will confirm all alerts', 'I will cancel all alerts' or 'I will answer all prompts with "(.*)"' before the step
    that causes the alert in order to properly deal with it.

3. Dealing with stale element reference exceptions
    These exceptions happen if any part of the page is refreshed between finding an element and accessing it.
    When possible, use any of the CSS functions in `common/djangoapps/terrain/ui_helpers.py`, as they will retry the action
    in case of this exception.  If the functionality is not there, wrap the call with `world.retry_on_exception`, which takes in a function, retries it if an exception occurs, and returns its result.

4. Scenario Level Constants
    If you want an object to be available for the entire scenario, it can be stored in `world.scenario_dict`.  This object
    is a dictionary that gets refreshed at the beginning of each scenario.  Currently, the logged-in user and the created course are stored under the keys 'USER' and 'COURSE'.  This helps prevent hard-coded strings and makes the
    acceptance tests more flexible.  (A combined sketch of these techniques follows this list.)
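
The sketch below pulls several of these techniques together in one
hypothetical Lettuce step.  The step text, the CSS selector, and the
`display_name` attribute are invented for illustration:

    from lettuce import step, world


    @step(u'I see the problem for the current course')
    def i_see_the_problem(step):
        # Technique 1: a wait-aware check instead of a bare assert.
        assert world.is_css_present('section.problem')
        # Technique 3: retry the lookup if the element goes stale mid-read.
        text = world.retry_on_exception(
            lambda: world.browser.find_by_css('section.problem').first.text
        )
        # Technique 4: the scenario-level course created during setup.
        course = world.scenario_dict['COURSE']
        assert course.display_name in text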