Commit f3900b23 by Vedran Karačić

Merge pull request #10458 from edx/vkaracic/testing.rst-update

Changed command description texts to be sentences
parents b0236318 731d8a4e
@@ -112,7 +112,7 @@ example, the factory for creating problem XML definitions is located in
Running Tests
=============
You can run all of the unit-level tests using this command.
::
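paver test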
@@ -122,7 +122,7 @@ This includes python, javascript, and documentation tests. It does not,
however, run any acceptance tests.
Note -
`paver` is a scripting tool. To get information about various options, you can run this command.
::
paver -h
Running Python Unit tests
@@ -132,44 +132,50 @@ We use `nose <https://nose.readthedocs.org/en/latest/>`__ through the
`django-nose plugin <https://pypi.python.org/pypi/django-nose>`__ to run
the test suite.
For example, this command runs all the python tests.
::
paver test_python
It also runs ``collectstatic``, which prepares the
static files used by the site (for example, compiling CoffeeScript to
JavaScript).
You can re-run all failed python tests by running this command (see note
at end of section).
::
paver test_python --failed
To run the lms python tests, use this command.
::
paver test_system -s lms
To run the cms python tests, use this command.
::
paver test_system -s cms
To run these tests without ``collectstatic``, which is faster, append the ``--fasttest`` argument.
::
paver test_system -s lms --fasttest
To run the cms python tests without ``collectstatic``, use this command.
::
paver test_system -s cms --fasttest
To run a single Django test class, use this command.
::
paver test_system -t lms/djangoapps/courseware/tests/tests.py:ActivateLoginTest
@@ -177,16 +183,21 @@ When developing tests, it is often helpful to be able to really just run
one single test without the overhead of PIP installs, UX builds, etc. In
this case, it is helpful to look at the output of paver, and run just
the specific command (optionally, stripping away coverage metrics). At
the time of this writing, the command is the following.
::
python ./manage.py lms test --verbosity=1 lms/djangoapps/courseware/tests/test_courses.py --traceback --settings=test
To run a single Django test, format the command like this.
::
paver test_system -t lms/djangoapps/courseware/tests/tests.py:ActivateLoginTest.test_activate_login
To re-run all failing Django tests from lms or cms, use the
``--failed``,\ ``-f`` flag (see note at end of section).
::
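paver test_system -s lms --failed  # assumed example; substitute cms as needed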
@@ -197,46 +208,58 @@ There is also a ``--fail_fast``, ``-x`` option that will stop nosetests
after the first failure.
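For example, this command (an assumed combination, analogous to ``--failed`` above) stops the lms python tests at the first failure.
::
paver test_system -s lms --fail_fast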
common/lib tests are tested with the ``test_lib`` task, which also
accepts the ``--failed`` and ``--fail_fast`` options. For example, run
one of these commands.
::
paver test_lib -l common/lib/calc
paver test_lib -l common/lib/xmodule --failed
For example, this command runs a single nose test file.
::
nosetests common/lib/xmodule/xmodule/tests/test_stringify.py
This command runs a single nose test within a specified file.
::
nosetests common/lib/xmodule/xmodule/tests/test_stringify.py:test_stringify
This is an example of how to run a single test and get stdout, with the proper env config.
::
python manage.py cms --settings test test contentstore.tests.test_import_nostatic -s
These are examples of how to run a single test, get stdout, and collect coverage.
::
python -m coverage run --rcfile=./common/lib/xmodule/.coveragerc `which ./manage.py` cms --settings test test --traceback --logging-clear-handlers --liveserver=localhost:8000-9000 contentstore.tests.test_import_nostatic -s # cms example
python -m coverage run --rcfile=./lms/.coveragerc `which ./manage.py` lms --settings test test --traceback --logging-clear-handlers --liveserver=localhost:8000-9000 courseware.tests.test_module_render -s # lms example
Use this command to generate a coverage report.
::
coverage report --rcfile=./common/lib/xmodule/.coveragerc
Use this command to generate an HTML report.
::
coverage html --rcfile=./common/lib/xmodule/.coveragerc
The report is then saved in ``reports/common/lib/xmodule/cover/index.html``.
To run tests for stub servers, for example for `YouTube stub
server <https://github.com/edx/edx-platform/blob/master/common/djangoapps/terrain/stubs/tests/test_youtube_stub.py>`__,
you can run one of these commands.
::
paver test_system -s cms -t common/djangoapps/terrain/stubs/tests/test_youtube_stub.py
python -m coverage run --rcfile=cms/.coveragerc `which ./manage.py` cms --settings test test --traceback common/djangoapps/terrain/stubs/tests/test_youtube_stub.py
@@ -271,7 +294,9 @@ tests::
paver test_js
To run a specific set of JavaScript tests and print the results to the
console, run these commands.
::
paver test_js_run -s lms
paver test_js_run -s lms-coffee
@@ -281,7 +306,9 @@ console::
paver test_js_run -s common
paver test_js_run -s common-requirejs
To run JavaScript tests in a browser, run these commands.
::
paver test_js_dev -s lms
paver test_js_dev -s lms-coffee
@@ -331,42 +358,58 @@ supported development environment for the edX Platform.
* MySQL
To run all the bok choy acceptance tests, run this command.
::
paver test_bokchoy
Once the database has been set up and the static files collected, you
can use the 'fast' option to skip those tasks. This option can also be
used with any of the test specs below.
::
paver test_bokchoy --fasttest
For example, to run a single test, specify the name of the test file.
::
paver test_bokchoy -t lms/test_lms.py
Notice the test file location is relative to
common/test/acceptance/tests. This is another example.
::
paver test_bokchoy -t studio/test_studio_bad_data.py
To run a single test faster by not repeating setup tasks, use the ``--fasttest`` option.
::
paver test_bokchoy -t studio/test_studio_bad_data.py --fasttest
To test only a certain feature, specify the file and the test case class.
::
paver test_bokchoy -t studio/test_studio_bad_data.py:BadComponentTest
To execute only a certain test case, specify the file name, class, and
test case method.
::
paver test_bokchoy -t lms/test_lms.py:RegistrationTest.test_register
During acceptance test execution, log files and screenshots of
failed tests are captured in test\_root/log.
Insert this line to set a debugging breakpoint in a test.
::
from nose.tools import set_trace; set_trace()
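For illustration only, this hypothetical sketch shows where such a breakpoint might be placed inside a test method (the class and test names are borrowed from the examples above; the body is an assumption).
::
from django.test import TestCase

class RegistrationTest(TestCase):  # hypothetical test class
    def test_register(self):
        from nose.tools import set_trace; set_trace()  # execution pauses here
        ...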
@@ -374,8 +417,10 @@ By default, all bokchoy tests are run with the 'split' ModuleStore. To
override the modulestore that is used, use the default\_store option.
The currently supported stores are: 'split'
(xmodule.modulestore.split\_mongo.split\_draft.DraftVersioningModuleStore)
and 'draft' (xmodule.modulestore.mongo.DraftMongoModuleStore). This is an
example for the 'draft' store.
::
paver test_bokchoy --default_store='draft'
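Similarly, to be explicit about the default 'split' store, an assumed but straightforward variant of the command above is the following.
::
paver test_bokchoy --default_store='split'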
@@ -407,12 +452,16 @@ These prerequisites are all automatically installed and available in `Devstack
* MySQL
To run all the bok choy accessibility tests, use this command.
::
paver test_bokchoy --extra_args="-a 'a11y'"
To run specific tests, use the ``-t`` flag to specify a nose-style test spec
relative to the ``common/test/acceptance/tests`` directory. This is an example.
::
paver test_bokchoy --extra_args="-a 'a11y'" -t test_lms_dashboard.py:LmsDashboardA11yTest.test_dashboard_course_listings_a11y
@@ -431,26 +480,34 @@ installed to run the tests in Chrome. The tests are confirmed to run
with Chrome (not Chromium) version 34.0.1847.116 with ChromeDriver
version 2.6.232917.
To run all the acceptance tests, run this command.
::
paver test_acceptance
To run tests for only lms or cms, run one of these commands.
::
paver test_acceptance -s lms
paver test_acceptance -s cms
For example, this command tests only a specific feature.
::
paver test_acceptance -s lms --extra_args="lms/djangoapps/courseware/features/problems.feature"
A command like this tests only a specific scenario.
::
paver test_acceptance -s lms --extra_args="lms/djangoapps/courseware/features/problems.feature -s 3"
To start the debugger on failure, pass the ``--pdb`` option to the paver command like this.
::
paver test_acceptance -s lms --pdb --extra_args="lms/djangoapps/courseware/features/problems.feature"
@@ -506,24 +563,21 @@ according to the template string
``{scenario_number}__{step_number}__{step_function_name}__{"1_before"|"2_after"}``.
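For example, a screenshot taken before the third step of the second scenario, in a hypothetical step function named ``i_submit_the_form``, would be named along these lines (assuming PNG output).
::
2__3__i_submit_the_form__1_before.png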
If you don't want screenshots captured for all steps, but
rather want fine-grained control, you can use this decorator before any
Python function in the ``feature_name.py`` file.
::
@capture_screenshot_before_after
The decorator will capture two screenshots: one before the decorated function runs,
and one after. Also, this function is available, and can be inserted at any
point in code to capture a screenshot specifically in that place.
::
from lettuce import world; world.capture_screenshot("image_name")
In both cases the captured screenshots will go to the same folder as when
using the step method: ``{TEST_ROOT}/log/auto_screenshot``.
A totally different approach to visually seeing acceptance tests run in
Vagrant is to redirect the Vagrant X11 session to your local machine. Please
@@ -538,11 +592,15 @@ unit/integration tests.
To view test coverage:
1. Run the test suite with this command.
::
paver test
2. Generate reports with this command.
::
paver coverage
@@ -550,21 +608,27 @@ To view test coverage:
HTML and XML (Cobertura format) reports.
Python Code Style Quality
-------------------------
To view Python code style quality (including pep8 and pylint violations), run this command.
::
paver run_quality
More specific options are below.
- These commands run a particular quality report.
::
paver run_pep8
paver run_pylint
- This command runs a report, and sets it to fail if it exceeds a given number
of violations.
::
paver run_pep8 --limit=800
@@ -574,12 +638,16 @@ More specific options are below.
that, the command can be set to fail if a certain diff threshold is
not met. For example, to cause the process to fail if quality
expectations are less than 100% when compared to master (or in other
words, if style quality is worse than what is already on master).
::
paver run_quality --percentage=100
- Note that 'fixme' violations are not counted with run\_quality. To
see all 'TODO' lines, use this command.
::
paver find_fixme --system=lms
@@ -590,11 +658,15 @@ More specific options are below.
JavaScript Code Style Quality
-----------------------------
To view JavaScript code style quality, run this command.
::
paver run_jshint
- This command also comes with a ``--limit`` switch. This is an example of that switch.
::
paver run_jshint --limit=700