Commit 5df93d7f by Ned Batchelder

Use nicer .rst for testing.rst

parent 09b235bc

Unit Tests
~~~~~~~~~~

- As a rule of thumb, your unit tests should cover every code branch.
- Mock or patch external dependencies. We use the voidspace `Mock Library`_
  (see the sketch below this list).
- We unit test Python code (using `unittest`_) and JavaScript (using
  `Jasmine`_).

.. _Mock Library: http://www.voidspace.org.uk/python/mock/
.. _unittest: http://docs.python.org/2/library/unittest.html
.. _Jasmine: http://jasmine.github.io/
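
As a minimal sketch of these two rules (the function under test and all names
here are hypothetical, not real edx-platform code), a unit test can patch an
external dependency so that only the unit's own logic is exercised::

    import smtplib
    import unittest

    from mock import patch


    def notify_user(address):
        """Code under test: sends a one-line notification email."""
        server = smtplib.SMTP('localhost')
        server.sendmail('noreply@example.com', [address], 'Your course starts today.')
        server.quit()


    class NotifyUserTest(unittest.TestCase):
        @patch('smtplib.SMTP')  # mock the external dependency; no real SMTP server is used
        def test_notify_user_sends_one_message(self, mock_smtp_class):
            notify_user('student@example.com')
            server = mock_smtp_class.return_value
            self.assertEqual(server.sendmail.call_count, 1)


    if __name__ == '__main__':
        unittest.main()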

Integration Tests
~~~~~~~~~~~~~~~~~

- Test several units at the same time. Note that you can still mock or patch
  dependencies that are not under test! For example, you might test that
  ``LoncapaProblem``, ``NumericalResponse``, and ``CorrectMap`` in the ``capa``
  package work together, while still mocking out template rendering.
- Use integration tests to ensure that units are hooked up correctly. You do
  not need to test every possible input--that's what unit tests are for.
  Instead, focus on testing the "happy path" to verify that the components work
  together correctly.
- Many of our tests use the `Django test client`_ to simulate HTTP requests to
  the server (see the sketch after this list).

.. _Django test client: https://docs.djangoproject.com/en/dev/topics/testing/overview/
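
As a sketch of such an integration-style test (the URL and assertion are
hypothetical, for illustration only), the Django test client exercises URL
routing, the view, and the template together::

    from django.test import TestCase


    class DashboardIntegrationTest(TestCase):
        """Exercise routing, view logic, and template rendering together."""

        def test_dashboard_renders(self):
            # '/dashboard' is a made-up URL for this example.
            response = self.client.get('/dashboard')
            self.assertEqual(response.status_code, 200)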

UI Acceptance Tests
~~~~~~~~~~~~~~~~~~~

- Use these to test that major program features are working correctly.
- We use `Bok Choy`_ to write end-user acceptance tests directly in Python,
  using the framework to maximize reliability and maintainability.
- We used to use `lettuce`_ to write BDD-style tests but it's now deprecated
  in favor of Bok Choy for new tests. Most of these tests simulate user
  interactions through the browser using `splinter`_.

.. _Bok Choy: http://bok-choy.readthedocs.org/en/latest/tutorial.html
.. _lettuce: http://lettuce.it/
.. _splinter: http://splinter.cobrateam.info/

Internationalization
~~~~~~~~~~~~~~~~~~~~

…

Many tests delegate set-up to a "factory" class. For example, there are
factories for creating courses, problems, and users. This encapsulates
set-up logic from tests.

Factories are often implemented using `FactoryBoy`_.

In general, factories should be located close to the code they use. For
example, the factory for creating problem XML definitions is located in
``common/lib/capa/capa/tests/response_xml_factory.py`` because the
``capa`` package handles problem XML. A sketch of a FactoryBoy factory
appears below.

.. _FactoryBoy: https://readthedocs.org/projects/factoryboy/
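
As a minimal sketch of the pattern (the ``User`` class and its attributes are
hypothetical stand-ins, not real edx-platform models), a FactoryBoy factory
encapsulates default values so each test only overrides what it cares about::

    import factory


    class User(object):
        """Stand-in model so the example is self-contained."""
        def __init__(self, username, email, is_staff):
            self.username = username
            self.email = email
            self.is_staff = is_staff


    class UserFactory(factory.Factory):
        class Meta:
            model = User

        username = factory.Sequence(lambda n: 'user_%d' % n)
        email = factory.LazyAttribute(lambda obj: '%s@example.com' % obj.username)
        is_staff = False


    # A test overrides only the attribute it cares about:
    staff_user = UserFactory(is_staff=True)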

Running Tests
=============

You can run all of the unit-level tests using this command::

    paver test

This includes python, javascript, and documentation tests. It does not,
however, run any acceptance tests.

Note: `paver` is a scripting tool. To get information about various options,
you can run this command::

    paver -h

Running Python Unit Tests
-------------------------

We use `pytest`_ to run the test suite.

.. _pytest: https://pytest.org/

For example, this command runs all the python test scripts::

    paver test_python

… static files used by the site (for example, compiling CoffeeScript to
JavaScript).

You can re-run all failed python tests by running this command (see note at
end of section)::

    paver test_python --failed

To run the lms python tests, use this command::

    paver test_system -s lms

To run the cms python tests, use this command::

    paver test_system -s cms

To run these tests without ``collectstatic``, which is faster, append the
following argument::

    paver test_system -s lms --fasttest

To run cms python tests without ``collectstatic``, use this command::

    paver test_system -s cms --fasttest

For the sake of speed, by default the python unit test database tables
are created directly from apps' models. If you want to run the tests
against a database created by applying the migrations instead, use the
``--enable-migrations`` option::

    paver test_system -s lms --enable-migrations

To run a single Django test class, use this command::

    paver test_system -t lms/djangoapps/courseware/tests/tests.py::ActivateLoginTest

When developing tests, it is often helpful to run just one single test without
the overhead of PIP installs, UX builds, etc. In this case, look at the output
of paver and run just the specific command (optionally, stripping away
coverage metrics). At the time of this writing, the command is the following::

    pytest lms/djangoapps/courseware/tests/test_courses.py

To run a single test, format the command like this::

    paver test_system -t lms/djangoapps/courseware/tests/tests.py::ActivateLoginTest::test_activate_login

… is the number of processes to run tests with, and ``-1`` means one process
per available core). Note, however, that when running concurrently,
breakpoints may not work correctly.

For example::

    # This will run all tests in the order that they appear in their files, serially
    paver test_system -s lms --no-randomize --processes=0

    paver test_system -s lms --processes=2

To re-run all failing django tests from lms or cms, use the
``--failed``,\ ``-f`` flag (see note at end of section)::

    paver test_system -s lms --failed
    paver test_system -s cms --failed

There is also a ``--exitfirst``, ``-x`` option that will stop pytest
after the first failure.

common/lib tests are tested with the ``test_lib`` task, which also
accepts the ``--failed`` and ``--exitfirst`` options::

    paver test_lib -l common/lib/calc
    paver test_lib -l common/lib/xmodule --failed

For example, this command runs a single python unit test file::

    pytest common/lib/xmodule/xmodule/tests/test_stringify.py

To select tests to run based on their name, provide an expression to the
`pytest -k option`_, which performs a substring match on test names::

    pytest common/lib/xmodule/xmodule/tests/test_stringify.py -k test_stringify

Alternatively, you can select tests based on their `node ID`_ directly,
which is useful when you need to run only one of multiple tests with the same
name in different classes or files.

.. _pytest -k option: https://docs.pytest.org/en/latest/example/markers.html#using-k-expr-to-select-tests-based-on-their-name
.. _node ID: https://docs.pytest.org/en/latest/example/markers.html#node-id

This command runs any python unit test method that matches the substring
`test_stringify` within a specified TestCase class in a specified file::

    pytest common/lib/xmodule/xmodule/tests/test_stringify.py::TestCase -k test_stringify

… same test method, pass the prefix name to the pytest `-k` option.

If you need to run only one of the test variations, you can get the
name of all test methods in a class, file, or project, including all ddt.data
variations, by running pytest with `--collectonly`::

    pytest common/lib/xmodule/xmodule/tests/test_stringify.py --collectonly
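
For context, this is roughly what a test with ``ddt.data`` variations looks
like (a self-contained sketch; the class and data here are made up, not real
edx-platform tests). Each ``@ddt.data`` entry becomes a separately named test
method, which is why the generated names show up in ``--collectonly`` output::

    import unittest

    import ddt


    @ddt.ddt
    class AdditionTest(unittest.TestCase):
        @ddt.data((1, 2, 3), (2, 2, 4))
        @ddt.unpack
        def test_add(self, first, second, expected):
            # ddt generates one test method per data tuple, with the data
            # encoded as a suffix on the method name.
            self.assertEqual(first + second, expected)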

This is an example of how to run a single test and get stdout shown
immediately, with proper env config::

    pytest cms/djangoapps/contentstore/tests/test_import.py -s

These are examples of how to run a single test and get coverage::

    pytest cms/djangoapps/contentstore/tests/test_import.py --cov # cms example
    pytest lms/djangoapps/courseware/tests/test_module_render.py --cov # lms example

Use this command to generate a coverage report::

    coverage report

Use this command to generate an HTML report::

    coverage html

The report is then saved in ``reports/common/lib/xmodule/cover/index.html``.

To run tests for stub servers, for example for the `YouTube stub server`_, you
can run one of these commands::

    paver test_system -s cms -t common/djangoapps/terrain/stubs/tests/test_youtube_stub.py
    pytest common/djangoapps/terrain/stubs/tests/test_youtube_stub.py

.. _YouTube stub server: https://github.com/edx/edx-platform/blob/master/common/djangoapps/terrain/stubs/tests/test_youtube_stub.py
.. _the pdb documentation: http://docs.python.org/library/pdb.html

Very handy: if you pass the ``--pdb`` flag to a paver test function, or
uncomment the ``pdb=1`` line in ``setup.cfg``, the test runner will drop you
into pdb on error. This lets you go up and down the stack and see what the
values of the variables are. Check out `the pdb documentation`_. Note that
this only works if you aren't collecting coverage statistics (pdb and
coverage.py use the same mechanism to trace code execution).

Use this command to put a temporary debugging breakpoint in a test.
If you check this in, your tests will hang on Jenkins::

    import pdb; pdb.set_trace()

… tests::

    paver test_js

To run a specific set of JavaScript tests and print the results to the
console, run these commands::

    paver test_js_run -s lms
    paver test_js_run -s lms-coffee

    paver test_js_run -s common
    paver test_js_run -s common-requirejs

To run JavaScript tests in a browser, run these commands::

    paver test_js_dev -s lms
    paver test_js_dev -s lms-coffee

To debug these tests on devstack in a local browser:

* First run the appropriate ``test_js_dev`` command from above, which will open a browser using XQuartz.
* Open http://192.168.33.10:9876/debug.html in your host system's browser of choice.
* This will run all the tests and show you the results, including details of any failures.
* You can click on an individual failing test and/or suite to re-run it by itself.
* You can now use the browser's developer tools to debug as you would any other JavaScript code.

Note: the port is also printed to the console from which you ran the tests, if
you find that easier.

… info, see `karma-runner.github.io <https://karma-runner.github.io/>`__.

Running Bok Choy Acceptance Tests
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

We use `Bok Choy`_ for acceptance testing. Bok Choy is a UI-level acceptance
test framework for writing robust `Selenium`_ tests in `Python`_. Bok Choy
makes your acceptance tests reliable and maintainable by utilizing the Page
Object and Promise design patterns, as in the sketch below.
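
As a minimal sketch of the Page Object pattern (the page, URL, and CSS
selectors are hypothetical, for illustration only)::

    from bok_choy.page_object import PageObject


    class DashboardPage(PageObject):
        """Page object wrapping a hypothetical dashboard page."""

        url = 'http://localhost:8000/dashboard'

        def is_browser_on_page(self):
            # Queries are lazy and retried, which is where the Promise
            # pattern comes in: the framework waits until the page is ready.
            return self.q(css='.my-courses').present

        def course_titles(self):
            return self.q(css='.course-title').text

A test then drives the page through this interface, e.g.
``DashboardPage(browser).visit()``, rather than through raw Selenium calls.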

**Prerequisites**:

These prerequisites are all automatically installed and available in
`Devstack`_, the supported development environment for the Open edX platform.

* Chromedriver and Chrome (see `Running Lettuce Acceptance Tests`_ below for
  the latest tested versions)
* Mongo
* mySQL

To run all the bok choy acceptance tests, run this command::

    paver test_bokchoy

Once the database has been set up and the static files collected, you
can use the 'fast' option to skip those tasks. This option can also be
used with any of the test specs below::

    paver test_bokchoy --fasttest

For example, to run a single test, specify the name of the test file::

    paver test_bokchoy -t lms/test_lms.py

Notice that the test file location is relative to
``common/test/acceptance/tests``. This is another example::

    paver test_bokchoy -t studio/test_studio_bad_data.py

To run a single test faster by not repeating setup tasks, use the
``--fasttest`` option::

    paver test_bokchoy -t studio/test_studio_bad_data.py --fasttest

To test only a certain feature, specify the file and the testcase class::

    paver test_bokchoy -t studio/test_studio_bad_data.py::BadComponentTest

To execute only a certain test case, specify the file name, class, and
test case method::

    paver test_bokchoy -t lms/test_lms.py::RegistrationTest::test_register

During acceptance test execution, log files and screenshots of failed tests
are captured in ``test_root/log``.

Use this command to put a temporary debugging breakpoint in a test.
If you check this in, your tests will hang on Jenkins::

    import pdb; pdb.set_trace()

… override the modulestore that is used, use the ``default_store`` option.
The currently supported stores are: 'split'
(``xmodule.modulestore.split_mongo.split_draft.DraftVersioningModuleStore``)
and 'draft' (``xmodule.modulestore.mongo.DraftMongoModuleStore``). This is an
example for the 'draft' store::

    paver test_bokchoy --default_store='draft'

Running Bok Choy Accessibility Tests
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

We use Bok Choy for `automated accessibility testing`_. Bok Choy, a UI-level
acceptance test framework for writing robust `Selenium`_ tests in `Python`_,
includes the ability to perform accessibility audits on web pages using `Google
Accessibility Developer Tools`_ or `Deque's aXe Core`_. For more details about
how to write accessibility tests, please read the `Bok Choy documentation`_ and
the `Automated Accessibility Tests`_ Open edX Confluence page. A sketch of such
a test appears after the link targets below.

.. _automated accessibility testing: http://bok-choy.readthedocs.org/en/latest/accessibility.html
.. _Selenium: http://docs.seleniumhq.org/
.. _Python: https://www.python.org/
.. _Google Accessibility Developer Tools: https://github.com/GoogleChrome/accessibility-developer-tools/
.. _Deque's aXe Core: https://github.com/dequelabs/axe-core/
.. _Bok Choy documentation: http://bok-choy.readthedocs.org/en/latest/accessibility.html
.. _Automated Accessibility Tests: https://openedx.atlassian.net/wiki/display/TE/Automated+Accessibility+Tests
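
As a sketch of what an accessibility check looks like in a Bok Choy test
(reusing a hypothetical page object; the audit API shown follows the Bok Choy
accessibility docs, but treat the details as an assumption)::

    from bok_choy.page_object import PageObject


    class DashboardPage(PageObject):
        url = 'http://localhost:8000/dashboard'  # hypothetical URL

        def is_browser_on_page(self):
            return self.q(css='.my-courses').present


    def check_dashboard_a11y(browser):
        page = DashboardPage(browser)
        page.visit()
        # Runs the configured audit engine and raises if violations are found.
        page.a11y_audit.check_for_accessibility_errors()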

**Prerequisites**:

These prerequisites are all automatically installed and available in
`Devstack`_ (since the Cypress release), the supported development environment
for the Open edX platform.

.. _Devstack: https://github.com/edx/configuration/wiki/edX-Developer-Stack

* Mongo
* mySQL

To run all the bok choy accessibility tests, use this command::

    paver test_a11y

To run specific tests, use the ``-t`` flag to specify a pytest-style test spec
relative to the ``common/test/acceptance/tests`` directory. For example::

    paver test_a11y -t lms/test_lms_dashboard.py::LmsDashboardA11yTest::test_dashboard_course_listings_a11y

… teardown and other unmanaged state.

::

    paver test_bokchoy --serversonly

Note: if setup has already been done, you can run::

    paver test_bokchoy --serversonly --fasttest

… properly clean up.

Running Lettuce Acceptance Tests
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

Although it is now deprecated, `lettuce`_ acceptance tests still exist in the
code base. Most of our tests use `splinter`_ to simulate UI browser
interactions. Splinter, in turn, uses `Selenium`_ to control the Chrome
browser.

**Prerequisite**: You must have `ChromeDriver`_ installed to run the tests in
Chrome. The tests are confirmed to run with Chrome (not Chromium) version
34.0.1847.116 with ChromeDriver version 2.6.232917.

.. _ChromeDriver: https://code.google.com/p/selenium/wiki/ChromeDriver

To run all the acceptance tests, run this command::

    paver test_acceptance

To run the tests for only lms or cms, run one of these commands::

    paver test_acceptance -s lms
    paver test_acceptance -s cms

For example, this command tests only a specific feature::

    paver test_acceptance -s lms --extra_args="lms/djangoapps/courseware/features/problems.feature"

A command like this tests only a specific scenario::

    paver test_acceptance -s lms --extra_args="lms/djangoapps/courseware/features/problems.feature -s 3"

To start the debugger on failure, pass the ``--pdb`` option to the paver
command like this::

    paver test_acceptance -s lms --pdb --extra_args="lms/djangoapps/courseware/features/problems.feature"

During acceptance test execution, Django log files are written to …

**Note**: The acceptance tests can *not* currently run in parallel.

Running Tests on Paver Scripts
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

To run tests on the scripts that power the various Paver commands, use the
following command::

    …

Testing internationalization with dummy translations
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

Any text you add to the platform should be internationalized. To generate
translations for your new strings, run the following command::

    paver i18n_dummy
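
For instance, marking a new string for translation typically looks like this
(a minimal sketch; the function and message text are made up)::

    from django.utils.translation import ugettext as _

    def greeting(course_name):
        # Wrapping the literal in _() marks it for extraction into the
        # .po files that the dummy-translation step then processes.
        return _("Welcome to {course_name}!").format(course_name=course_name)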

You can then preview the dummy languages on your local machine and also in
your sandbox, if and when you create one.

The dummy language files that are generated during this process can be
found in the following locations::

    conf/locale/{LANG_CODE}

There are a few JavaScript files that are generated from this process. You
can find those in the following locations::

    lms/static/js/i18n/{LANG_CODE}
    cms/static/js/i18n/{LANG_CODE}

… the step::

    Given I enable capturing of screenshots before and after each step

to your scenario. This step can be added anywhere, and will enable
automatic screenshots for all following steps for that scenario only.

You can also use the step::

    Given I disable capturing of screenshots before and after each step

… according to the template string
``{scenario_number}__{step_number}__{step_function_name}__{"1_before"|"2_after"}``.

If you don't want screenshots captured for all steps, but rather want
fine-grained control, you can use this decorator before any Python function in
a ``feature_name.py`` file::

    @capture_screenshot_before_after

The decorator will capture two screenshots: one before the decorated function
runs, and one after. Also, this function is available and can be inserted at
any point in code to capture a screenshot specifically in that place::

    from lettuce import world; world.capture_screenshot("image_name")
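
Putting the two together, a step definition might look like this (a sketch;
the import path for the decorator and the CSS selector are assumptions, so
adjust them to match the real helpers)::

    from lettuce import step, world

    # Assumed location of the decorator; adjust to wherever it is defined.
    from terrain.steps import capture_screenshot_before_after


    @capture_screenshot_before_after
    @step(u'I submit the problem')
    def submit_the_problem(step):
        # world.browser is the Splinter browser set up by the test harness.
        world.browser.find_by_css('button.submit').first.click()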

… unit/integration tests.

To view test coverage:

1. Run the test suite with this command::

     paver test

2. Generate reports with this command::

     paver coverage

Python Code Style Quality
-------------------------

To view Python code style quality (including pep8 and pylint violations), run
this command::

    paver run_quality

More specific options are below.

- These commands run a particular quality report::

      paver run_pep8
      paver run_pylint

- This command runs a report, and sets it to fail if it exceeds a given number
  of violations::

      paver run_pep8 --limit=800

- The ``run_quality`` task uses the underlying diff-quality tool (which is
  packaged with `diff-cover`_). With that, the command can be set to fail if a
  certain diff threshold is not met. For example, to cause the process to fail
  if quality expectations are less than 100% when compared to master (or in
  other words, if style quality is worse than what is already on master)::

      paver run_quality --percentage=100

- Note that 'fixme' violations are not counted with run\_quality. To
  see all 'TODO' lines, use this command::

      paver find_fixme --system=lms

  ``system`` is an optional argument here. It defaults to
  ``cms,lms,common``.

.. _diff-cover: https://github.com/Bachmann1234/diff-cover

JavaScript Code Style Quality
-----------------------------

To view JavaScript code style quality, run this command::

    paver run_eslint

- This command also comes with a ``--limit`` switch; this is an example of
  that switch::

      paver run_eslint --limit=50000

Code Complexity Tools
---------------------

Two tools are available for evaluating complexity of edx-platform code:

- `radon <https://radon.readthedocs.org/en/latest/>`__ for Python code
  complexity. To obtain complexity, run::

      paver run_complexity

- `plato <https://github.com/es-analysis/plato>`__ for JavaScript code
  complexity. Several options are available on the command line; see the
  documentation. The following command will produce an HTML report in a
  subdirectory called "jscomplexity"::

      plato -q -x common/static/js/vendor/ -t common -e .eslintrc.json -r -d jscomplexity common/static/js/

Testing using queue servers
---------------------------

When testing problems that use a queue server on AWS (e.g.
sandbox-xqueue.edx.org), you'll need to run your server on your public
IP, like so::

    ./manage.py lms runserver 0.0.0.0:8000

When you connect to the LMS, you need to use the public IP. Use ``ifconfig``
to figure out the number, and connect e.g. to …