Commit 5df93d7f by Ned Batchelder

Use nicer .rst for testing.rst

parent 09b235bc
@@ -31,45 +31,51 @@ Unit Tests
- As a rule of thumb, your unit tests should cover every code branch.
- Mock or patch external dependencies. We use the voidspace `Mock Library`_,
  as sketched below.
- We unit test Python code (using `unittest`_) and Javascript (using
  `Jasmine`_).

.. _Mock Library: http://www.voidspace.org.uk/python/mock/
.. _unittest: http://docs.python.org/2/library/unittest.html
.. _Jasmine: http://jasmine.github.io/
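For instance, here is a minimal runnable sketch of mocking an external
dependency. The ``fetch_grade`` helper and its client are hypothetical toy
code for illustration, not platform code::

    import unittest
    from mock import Mock  # on Python 3, unittest.mock provides the same API


    def fetch_grade(client):
        """Toy function whose external HTTP client we want to mock."""
        return client.get('/grade').json()['grade']


    class FetchGradeTest(unittest.TestCase):
        def test_fetch_grade(self):
            # Stub out the client entirely; no network access happens.
            client = Mock()
            client.get.return_value.json.return_value = {'grade': 0.9}
            self.assertEqual(fetch_grade(client), 0.9)
            client.get.assert_called_once_with('/grade')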
Integration Tests
~~~~~~~~~~~~~~~~~
- Test several units at the same time. Note that you can still mock or patch
  dependencies that are not under test! For example, you might test that
  ``LoncapaProblem``, ``NumericalResponse``, and ``CorrectMap`` in the ``capa``
  package work together, while still mocking out template rendering.
- Use integration tests to ensure that units are hooked up correctly. You do
  not need to test every possible input--that's what unit tests are for.
  Instead, focus on testing the "happy path" to verify that the components
  work together correctly.
- Many of our tests use the `Django test client`_ to simulate HTTP requests to
  the server, as in the sketch below.

.. _Django test client: https://docs.djangoproject.com/en/dev/topics/testing/overview/
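As a sketch, a happy-path check with the Django test client might look like
this (the ``/dashboard`` URL is a hypothetical example and a configured Django
test settings module is assumed)::

    from django.test import TestCase


    class DashboardSmokeTest(TestCase):
        """Happy-path check that the view and URL conf are hooked up."""

        def test_dashboard_responds(self):
            # Simulates a GET request without running a real server.
            response = self.client.get('/dashboard')
            self.assertEqual(response.status_code, 200)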
UI Acceptance Tests
~~~~~~~~~~~~~~~~~~~
- Use these to test that major program features are working correctly.
- We use `Bok Choy`_ to write end-user acceptance tests directly in Python,
  using the framework to maximize reliability and maintainability.
- We used to use `lettuce`_ to write BDD-style tests, but it's now deprecated
  in favor of Bok Choy for new tests. Most of these tests simulate user
  interactions through the browser using `splinter`_.

.. _Bok Choy: http://bok-choy.readthedocs.org/en/latest/tutorial.html
.. _lettuce: http://lettuce.it/
.. _splinter: http://splinter.cobrateam.info/
Internationalization
~~~~~~~~~~~~~~~~~~~~
@@ -108,20 +114,20 @@ Many tests delegate set-up to a "factory" class. For example, there are
factories for creating courses, problems, and users. This encapsulates
set-up logic from tests.
Factories are often implemented using `FactoryBoy`_.
In general, factories should be located close to the code they use. For
example, the factory for creating problem XML definitions is located in
``common/lib/capa/capa/tests/response_xml_factory.py`` because the
``capa`` package handles problem XML.
.. _FactoryBoy: https://readthedocs.org/projects/factoryboy/
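As an illustrative sketch, a FactoryBoy factory might look like this (the
fields shown are assumptions for the example, not the platform's actual
factories)::

    import factory
    from django.contrib.auth.models import User


    class UserFactory(factory.django.DjangoModelFactory):
        """Create test users with unique usernames and matching emails."""

        class Meta:
            model = User

        username = factory.Sequence(lambda n: 'user{}'.format(n))
        email = factory.LazyAttribute(lambda u: '{}@example.com'.format(u.username))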
Running Tests
=============
You can run all of the unit-level tests using this command::

    paver test
@@ -129,20 +135,18 @@ This includes python, javascript, and documentation tests. It does not,
however, run any acceptance tests.
Note -
`paver` is a scripting tool. To get information about various options, you can run this command::

    paver -h
Running Python Unit Tests
-------------------------
We use `pytest`_ to run the test suite.

.. _pytest: https://pytest.org/

For example, this command runs all the python test scripts::

    paver test_python
@@ -151,49 +155,34 @@ static files used by the site (for example, compiling CoffeeScript to
JavaScript).
You can re-run all failed python tests by running this command (see note at end of
section)::

    paver test_python --failed
To run the lms python tests, use this command::

    paver test_system -s lms
To run the cms python tests, use this command::

    paver test_system -s cms
To run these tests without ``collectstatic``, which is faster, append the following argument::

    paver test_system -s lms --fasttest
To run cms python tests without ``collectstatic``, use this command::

    paver test_system -s cms --fasttest
For the sake of speed, by default the python unit test database tables
are created directly from apps' models. If you want to run the tests
against a database created by applying the migrations instead, use the
``--enable-migrations`` option::

    paver test_system -s lms --enable-migrations
To run a single django test class, use this command::

    paver test_system -t lms/djangoapps/courseware/tests/tests.py::ActivateLoginTest
@@ -201,16 +190,12 @@ When developing tests, it is often helpful to be able to really just run
one single test without the overhead of PIP installs, UX builds, etc. In
this case, it is helpful to look at the output of paver, and run just
the specific command (optionally, stripping away coverage metrics). At
the time of this writing, the command is the following::

    pytest lms/djangoapps/courseware/tests/test_courses.py
To run a single test, format the command like this::

    paver test_system -t lms/djangoapps/courseware/tests/tests.py::ActivateLoginTest::test_activate_login
@@ -223,9 +208,8 @@ is the number of processes to run tests with, and ``-1`` means one process per
available core). Note, however, that when running concurrently, breakpoints may
not work correctly.
For example::

    # This will run all tests in the order that they appear in their files, serially
    paver test_system -s lms --no-randomize --processes=0
@@ -233,9 +217,7 @@ For example:

    paver test_system -s lms --processes=2
To re-run all failing django tests from lms or cms, use the
``--failed``,\ ``-f`` flag (see note at end of section)::

    paver test_system -s lms --failed
    paver test_system -s cms --failed
@@ -244,38 +226,30 @@ There is also a ``--exitfirst``, ``-x`` option that will stop pytest
after the first failure.
common/lib tests are tested with the ``test_lib`` task, which also
accepts the ``--failed`` and ``--exitfirst`` options::

    paver test_lib -l common/lib/calc
    paver test_lib -l common/lib/xmodule --failed
For example, this command runs a single python unit test file::

    pytest common/lib/xmodule/xmodule/tests/test_stringify.py
To select tests to run based on their name, provide an expression to the
`pytest -k option`_, which performs a substring match on test names::

    pytest common/lib/xmodule/xmodule/tests/test_stringify.py -k test_stringify
.. _pytest -k option: https://docs.pytest.org/en/latest/example/markers.html#using-k-expr-to-select-tests-based-on-their-name
.. _node ID: https://docs.pytest.org/en/latest/example/markers.html#node-id

Alternatively, you can select tests based on their `node ID`_ directly,
which is useful when you need to run only one of multiple tests with the same
name in different classes or files.
This command runs any python unit test method that matches the substring
`test_stringify` within a specified TestCase class within a specified file::

    pytest common/lib/xmodule/xmodule/tests/test_stringify.py::TestCase -k test_stringify
@@ -286,61 +260,48 @@ same test method, pass the prefix name to the pytest `-k` option.
If you need to run only one of the test variations, you can get the
names of all test methods in a class, file, or project, including all ddt.data
variations, by running pytest with `--collectonly`::

    pytest common/lib/xmodule/xmodule/tests/test_stringify.py --collectonly
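For context, a ddt test multiplies one method into several collected
variations. This toy sketch shows the pattern (it is not the real contents of
test_stringify.py)::

    import unittest

    import ddt


    @ddt.ddt
    class StringifyTest(unittest.TestCase):
        """Each value passed to @ddt.data becomes its own test variation."""

        @ddt.data('<problem/>', '<problem></problem>')
        def test_stringify(self, xml):
            # `pytest --collectonly` lists one generated name per data value.
            self.assertIn('problem', xml)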
This is an example of how to run a single test and get stdout shown immediately, with proper env config::

    pytest cms/djangoapps/contentstore/tests/test_import.py -s
These are examples of how to run a single test and get coverage::

    pytest cms/djangoapps/contentstore/tests/test_import.py --cov  # cms example
    pytest lms/djangoapps/courseware/tests/test_module_render.py --cov  # lms example
Use this command to generate a coverage report::

    coverage report
Use this command to generate an HTML report::

    coverage html
The report is then saved in ``reports/common/lib/xmodule/cover/index.html``.
To run tests for stub servers, for example for the `YouTube stub server`_, you can
run one of these commands::

    paver test_system -s cms -t common/djangoapps/terrain/stubs/tests/test_youtube_stub.py
    pytest common/djangoapps/terrain/stubs/tests/test_youtube_stub.py
.. _YouTube stub server: https://github.com/edx/edx-platform/blob/master/common/djangoapps/terrain/stubs/tests/test_youtube_stub.py
.. _the pdb documentation: http://docs.python.org/library/pdb.html
Very handy: if you pass the ``--pdb`` flag to a paver test function, or
uncomment the ``pdb=1`` line in ``setup.cfg``, the test runner will drop you
into pdb on error. This lets you go up and down the stack and see what the
values of the variables are. Check out `the pdb documentation`_. Note that
this only works if you aren't collecting coverage statistics (pdb and
coverage.py use the same mechanism to trace code execution).
Use this command to put a temporary debugging breakpoint in a test.
If you check this in, your tests will hang on Jenkins::

    import pdb; pdb.set_trace()
@@ -368,9 +329,7 @@ tests::

    paver test_js
To run a specific set of JavaScript tests and print the results to the
console, run these commands::

    paver test_js_run -s lms
    paver test_js_run -s lms-coffee
@@ -380,9 +339,7 @@ console, run these commands.

    paver test_js_run -s common
    paver test_js_run -s common-requirejs
To run JavaScript tests in a browser, run these commands::

    paver test_js_dev -s lms
    paver test_js_dev -s lms-coffee
@@ -394,11 +351,11 @@ To run JavaScript tests in a browser, run these commands.
To debug these tests on devstack in a local browser:

* first run the appropriate test_js_dev command from above which will open a browser using XQuartz
* open http://192.168.33.10:9876/debug.html in your host system's browser of choice
* this will run all the tests and show you the results including details of any failures
* you can click on an individually failing test and/or suite to re-run it by itself
* you can now use the browser's developer tools to debug as you would any other JavaScript code

Note: the port is also output to the console that you ran the tests from if you find that easier.
@@ -408,21 +365,17 @@ info, see `karma-runner.github.io <https://karma-runner.github.io/>`__.
Running Bok Choy Acceptance Tests
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
We use `Bok Choy`_ for acceptance testing. Bok Choy is a UI-level acceptance
test framework for writing robust `Selenium`_ tests in `Python`_. Bok Choy
makes your acceptance tests reliable and maintainable by utilizing the Page
Object and Promise design patterns.
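A minimal page object sketch in the Bok Choy style (the URL and CSS selector
are hypothetical, not an actual platform page)::

    from bok_choy.page_object import PageObject


    class DashboardPage(PageObject):
        """Hypothetical page object for a learner dashboard."""

        url = 'http://localhost:8000/dashboard'

        def is_browser_on_page(self):
            # The page counts as loaded once this (assumed) element appears.
            return self.q(css='.my-courses').present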
**Prerequisites**:
These prerequisites are all automatically installed and available in
`Devstack`_, the supported development environment for the Open edX platform.
* Chromedriver and Chrome (see `Running Lettuce Acceptance Tests`_ below for
  the latest tested versions)
* Mongo
@@ -431,49 +384,35 @@ supported development enviornment for the edX Platform.
* mySQL
To run all the bok choy acceptance tests, run this command::

    paver test_bokchoy
Once the database has been set up and the static files collected, you
can use the 'fast' option to skip those tasks. This option can also be
used with any of the test specs below::

    paver test_bokchoy --fasttest
For example, to run a single test, specify the name of the test file::

    paver test_bokchoy -t lms/test_lms.py
Notice the test file location is relative to
``common/test/acceptance/tests``. This is another example::

    paver test_bokchoy -t studio/test_studio_bad_data.py
To run a single test faster by not repeating setup tasks, use the ``--fasttest`` option::

    paver test_bokchoy -t studio/test_studio_bad_data.py --fasttest
To test only a certain feature, specify the file and the testcase class::

    paver test_bokchoy -t studio/test_studio_bad_data.py::BadComponentTest
To execute only a certain test case, specify the file name, class, and
test case method::

    paver test_bokchoy -t lms/test_lms.py::RegistrationTest::test_register
@@ -481,9 +420,7 @@ During acceptance test execution, log files and also screenshots of
failed tests are captured in test\_root/log.
Use this command to put a temporary debugging breakpoint in a test.
If you check this in, your tests will hang on Jenkins::

    import pdb; pdb.set_trace()
@@ -492,33 +429,36 @@ override the modulestore that is used, use the default\_store option.
The currently supported stores are: 'split'
(xmodule.modulestore.split\_mongo.split\_draft.DraftVersioningModuleStore)
and 'draft' (xmodule.modulestore.mongo.DraftMongoModuleStore). This is an example
for the 'draft' store::

    paver test_bokchoy --default_store='draft'
Running Bok Choy Accessibility Tests
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
We use Bok Choy for `automated accessibility testing`_. Bok Choy, a UI-level
acceptance test framework for writing robust `Selenium`_ tests in `Python`_,
includes the ability to perform accessibility audits on web pages using `Google
Accessibility Developer Tools`_ or `Deque's aXe Core`_. For more details about
how to write accessibility tests, please read the `Bok Choy documentation`_ and
the `Automated Accessibility Tests`_ Open edX Confluence page.

.. _automated accessibility testing: http://bok-choy.readthedocs.org/en/latest/accessibility.html
.. _Selenium: http://docs.seleniumhq.org/
.. _Python: https://www.python.org/
.. _Google Accessibility Developer Tools: https://github.com/GoogleChrome/accessibility-developer-tools/
.. _Deque's aXe Core: https://github.com/dequelabs/axe-core/
.. _Bok Choy documentation: http://bok-choy.readthedocs.org/en/latest/accessibility.html
.. _Automated Accessibility Tests: https://openedx.atlassian.net/wiki/display/TE/Automated+Accessibility+Tests
**Prerequisites**:
These prerequisites are all automatically installed and available in
`Devstack`_ (since the Cypress release), the supported development environment
for the Open edX platform.
.. _Devstack: https://github.com/edx/configuration/wiki/edX-Developer-Stack
* Mongo
@@ -526,16 +466,12 @@ These prerequisites are all automatically installed and available in `Devstack
* mySQL
To run all the bok choy accessibility tests, use this command::

    paver test_a11y
To run specific tests, use the ``-t`` flag to specify a pytest-style test spec
relative to the ``common/test/acceptance/tests`` directory. For example::

    paver test_a11y -t lms/test_lms_dashboard.py::LmsDashboardA11yTest::test_dashboard_course_listings_a11y
@@ -585,7 +521,7 @@ teardown and other unmanaged state.

    paver test_bokchoy --serversonly
Note that if setup has already been done, you can run::

    paver test_bokchoy --serversonly --fasttest
@@ -602,47 +538,35 @@ properly clean up.
Running Lettuce Acceptance Tests
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
Although it's deprecated now, `lettuce`_ acceptance tests still exist in the
code base. Most of our tests use `Splinter`_ to simulate UI browser
interactions, as in the sketch below. Splinter, in turn, uses `Selenium`_ to
control the Chrome browser.
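A minimal Splinter sketch of a scripted browser interaction (the URL and field
name are hypothetical; this is not an actual platform test)::

    from splinter import Browser

    with Browser('chrome') as browser:
        browser.visit('http://localhost:8000/login')
        browser.fill('email', 'staff@example.com')
        browser.find_by_css('button[type=submit]').first.click()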
**Prerequisite**: You must have `ChromeDriver`_ installed to run the tests in
Chrome. The tests are confirmed to run with Chrome (not Chromium) version
34.0.1847.116 with ChromeDriver version 2.6.232917.

.. _ChromeDriver: https://code.google.com/p/selenium/wiki/ChromeDriver

To run all the acceptance tests, run this command::

    paver test_acceptance
To run only for lms or cms, run one of these commands::

    paver test_acceptance -s lms
    paver test_acceptance -s cms
For example, this command tests only a specific feature::

    paver test_acceptance -s lms --extra_args="lms/djangoapps/courseware/features/problems.feature"
A command like this tests only a specific scenario::

    paver test_acceptance -s lms --extra_args="lms/djangoapps/courseware/features/problems.feature -s 3"
To start the debugger on failure, pass the ``--pdb`` option to the paver command like this::

    paver test_acceptance -s lms --pdb --extra_args="lms/djangoapps/courseware/features/problems.feature"
@@ -671,7 +595,7 @@ During acceptance test execution, Django log files are written to
**Note**: The acceptance tests can *not* currently run in parallel.
Running Tests on Paver Scripts
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

To run tests on the scripts that power the various Paver commands, use the following command::
@@ -682,9 +606,7 @@ Testing internationalization with dummy translations
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
Any text you add to the platform should be internationalized. To generate
translations for your new strings, run the following command::

    paver i18n_dummy
@@ -694,16 +616,12 @@ You can then preview the dummy languages on your local machine and also in
your sandbox, if and when you create one.
The dummy language files that are generated during this process can be
found in the following locations::

    conf/locale/{LANG_CODE}
There are a few JavaScript files that are generated from this process. You
can find those in the following locations::

    lms/static/js/i18n/{LANG_CODE}
    cms/static/js/i18n/{LANG_CODE}
@@ -725,9 +643,7 @@ the step::
to your scenario. This step can be added anywhere, and will enable
automatic screenshots for all following steps for that scenario only.
You can also use the step::

    Given I disable capturing of screenshots before and after each step
@@ -740,17 +656,13 @@ according to the template string
``{scenario_number}__{step_number}__{step_function_name}__{"1_before"|"2_after"}``.
If you don't want screenshots captured for all steps, but rather want
fine-grained control, you can use this decorator before any Python function in a
``feature_name.py`` file::

    @capture_screenshot_before_after
The decorator will capture two screenshots: one before the decorated function
runs, and one after. Also, this function is available, and can be inserted at
any point in code to capture a screenshot specifically in that place::

    from lettuce import world; world.capture_screenshot("image_name")
@@ -769,15 +681,11 @@ unit/integration tests.
To view test coverage:
1. Run the test suite with this command::

      paver test
2. Generate reports with this command::

      paver coverage
@@ -787,84 +695,68 @@ To view test coverage:
Python Code Style Quality
-------------------------
To view Python code style quality (including pep8 and pylint violations), run this command::

    paver run_quality
More specific options are below.
- These commands run a particular quality report::

      paver run_pep8
      paver run_pylint
- This command runs a report, and sets it to fail if it exceeds a given number
  of violations::

      paver run_pep8 --limit=800
- The ``run_quality`` task uses the underlying diff-quality tool (which is
  packaged with `diff-cover`_). With that, the command can be set to fail if a
  certain diff threshold is not met. For example, to cause the process to fail
  if quality expectations are less than 100% when compared to master (or in
  other words, if style quality is worse than what is already on master)::

      paver run_quality --percentage=100
- Note that 'fixme' violations are not counted with run\_quality. To
  see all 'TODO' lines, use this command::

      paver find_fixme --system=lms
  ``system`` is an optional argument here. It defaults to
  ``cms,lms,common``.
.. _diff-cover: https://github.com/Bachmann1234/diff-cover
JavaScript Code Style Quality
-----------------------------

To view JavaScript code style quality, run this command::

    paver run_eslint
- This command also comes with a ``--limit`` switch; this is an example of that switch::

      paver run_eslint --limit=50000
Code Complexity Tools
---------------------

Two tools are available for evaluating complexity of edx-platform code:
- `radon <https://radon.readthedocs.org/en/latest/>`__ for Python code
  complexity. To obtain complexity, run::

      paver run_complexity
- `plato <https://github.com/es-analysis/plato>`__ for JavaScript code
  complexity. Several options are available on the command line; see the
  documentation. For example, the following command will produce an HTML
  report in a subdirectory called "jscomplexity"::

      plato -q -x common/static/js/vendor/ -t common -e .eslintrc.json -r -d jscomplexity common/static/js/
@@ -875,9 +767,9 @@ Testing using queue servers
When testing problems that use a queue server on AWS (e.g.
sandbox-xqueue.edx.org), you'll need to run your server on your public
IP, like so::

    ./manage.py lms runserver 0.0.0.0:8000
When you connect to the LMS, you need to use the public IP. Use
``ifconfig`` to figure out the number, and connect e.g. to
...