Commit 0c63343f by David Baumgold

Merge pull request #2172 from edx/db/doc-build-url

Reorganize doc for i18n
parents 157f734b 7928535b
@@ -12,6 +12,8 @@
# See the License for the specific language governing permissions and
# limitations under the License.
import ConfigParser
from django.conf import settings
from django.template import RequestContext
from util.request import safe_get_host
requestcontext = None
@@ -24,3 +26,21 @@ class MakoMiddleware(object):
        requestcontext = RequestContext(request)
        requestcontext['is_secure'] = request.is_secure()
        requestcontext['site'] = safe_get_host(request)
        requestcontext['doc_url'] = self.get_doc_url_func(request)
    def get_doc_url_func(self, request):
        config_file = open(settings.REPO_ROOT / "docs" / "config.ini")
        config = ConfigParser.ConfigParser()
        config.readfp(config_file)

        # in the future, we will detect the locale; for now, we will
        # hardcode en_us, since we only have English documentation
        locale = "en_us"

        def doc_url(token):
            try:
                return config.get(locale, token)
            except ConfigParser.NoOptionError:
                return config.get(locale, "default")

        return doc_url
[en_us]
default=default
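The new `get_doc_url_func` method resolves a documentation token to a value from
`docs/config.ini`, falling back to the locale's `default` entry when the token has
no entry of its own. A minimal standalone sketch of that lookup (the `grading`
entry here is a hypothetical example, not part of the real config file):
::

    import ConfigParser
    from StringIO import StringIO

    # Hypothetical config contents; the real file lives at docs/config.ini.
    config = ConfigParser.ConfigParser()
    config.readfp(StringIO("[en_us]\ndefault=default\ngrading=grading.html\n"))

    def doc_url(token, locale="en_us"):
        try:
            return config.get(locale, token)
        except ConfigParser.NoOptionError:
            return config.get(locale, "default")

    print doc_url("grading")   # -> "grading.html"
    print doc_url("missing")   # -> "default" (the fallback entry)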
.. _ORA for Students:
Open Response Assessments for Students
======================================
.. _ORA Introduction:
Introduction to Open Response Assessments
-----------------------------------------
.. note::
Modify this section according to your course. For example, you
can delete sentences such as "For more information, see :ref:`ORA Peer Assessment`"
and "For more information, see :ref:`ORA AI Assessment`" if your ORA problem doesn't
contain peer assessments or AI assessments and you want to delete these sections from
this document.
Open response assessments allow you to submit a short written answer,
an essay, or a file such as an image or computer code file.
When you come to an open response assessment problem, you see the name of the
problem, the assessment types, the text of the question, the field where you'll
enter your response, and the **Save** and **Submit** buttons.
.. image:: /Images/ExampleORA.gif
If an open response assessment asks you to submit a file, you'll also see a button
that you'll click to upload your file.
.. image:: /Images/ExampleORA_File.gif
The *assessment types* can include *self assessment*, *peer assessment*, and *artificial intelligence (AI) assessment*. The
assessment types run in the order in which they appear in the problem.
- In a self assessment, you assess your response according to a rubric that the
instructor has created. For more information, see :ref:`ORA Self Assessment`.
- In a peer assessment, you grade
responses that your peers have submitted while several of your peers
grade your response. For more information, see
:ref:`ORA Peer Assessment`.
- In an AI assessment, a computer algorithm grades your response. For more information,
see :ref:`ORA AI Assessment`.
An open response assessment problem doesn't have to use all assessment types. For example, one problem
may use self assessment and AI assessment, while another problem may use self assessment
and peer assessment, and another problem may use only peer assessment.
You'll answer open response assessment problems in much the same way that you answer other
problems. For more information about how to submit responses, see :ref:`ORA Submit a Response`.
When you submit a response to an open response assessment, the next step
depends on the type of assessment that the problem uses. For more information,
see :ref:`ORA Self Assessment`, :ref:`ORA Peer Assessment`, and :ref:`ORA AI Assessment`.
After you submit your response, your score will be available shortly, sometimes within
a few minutes. For information about how to access your score after your response has
been graded, see :ref:`ORA Access Scores`.
If you want to experiment with open response assessments, you can try out the open
response assessment problems in the `EdX Demo <https://courses.edx.org/courses/edX/DemoX/Demo_Course/info>`_
course. To get started, go
to the `Self-Assessed Essay <https://courses.edx.org/courses/edX/DemoX/Demo_Course/courseware/graded_interactions/machine_grading/2>`_
unit, and then enter a response in the **Response** field under the
question. You can enter your own response, or you can use one of the sample
responses in the `Sample Answers <https://courses.edx.org/courses/edX/DemoX/Demo_Course/courseware/graded_interactions/machine_grading/6/>`_
unit.
.. _ORA Submit a Response:
Submit a Response
-----------------
Submitting a response is slightly different if you're submitting a written response
or uploading a file.
#. Enter the response that you want to submit.
- If you're submitting a written response, type your response in the
**Response** field.
- If you're uploading a file, click **Choose File** under the **Response**
field. In the dialog box that opens, select the file that you want to upload,
and then click **Open**.
#. Click **Submit**, and then click **OK** in the dialog box to continue.
.. note:: If you want to save your response and work on it again later, click **Save**.
An "Answer saved, but not yet submitted" message appears directly under the **Save** and
**Submit** buttons.
After you submit your response, the assessment types start running in the order in which they
appear in the problem. For more information,
see :ref:`ORA Self Assessment`, :ref:`ORA Peer Assessment`, or :ref:`ORA AI Assessment`.
.. _ORA Self Assessment:
Self Assessment
---------------
.. note::
You can delete this section if your ORA problem doesn't use self assessments.
In a self assessment, the rubric for the problem appears below your response immediately
after you submit the response. You then assess your response based on the rubric.
Perform a Self Assessment
~~~~~~~~~~~~~~~~~~~~~~~~~
#. Submit a response to a self-assessed ORA problem.
#. When the rubric appears, compare your response with the rubric, and select the
option that you think is appropriate for each category.
.. image:: /Images/Rubric1.gif
#. Click **Submit assessment**.
Your response appears, and you can see the scores that you gave
yourself.
.. _ORA Peer Assessment:
Peer Assessment
---------------
.. note::
You can delete this section if your ORA problem doesn't use peer assessments.
In a peer assessment, several students in the course grade your response while you grade
other students' responses. You have to grade a number of your peers' responses before
you receive your score. (After you grade the minimum number of responses required to
receive your score, you can grade as many additional responses as you want.)
After you submit your response for grading, the following
message appears under your response.
**Your response has been submitted. Please check back later for your grade.**
.. warning:: In peer assessments, the **due date** is the date by which you must not only submit your own response, but also finish grading the required number of your peers' responses.
Peer Grading Interface
~~~~~~~~~~~~~~~~~~~~~~
The area where you'll grade responses is the *peer
grading interface*. Each course that has peer assessments has at least
one peer grading interface. There may be just one peer grading interface
for the whole course, or each individual problem may have its own
separate peer grading interface.
.. image:: /Images/PGI_FromOEC_2Problems.gif
Perform a Peer Assessment
~~~~~~~~~~~~~~~~~~~~~~~~~
.. warning:: In peer assessments, the **due date** is the date by which you must not only submit your own response, but also finish grading the required number of your peers' responses.
Performing a peer assessment has several steps. You can find detailed instructions for each step
below.
#. :ref:`Access Responses`, either in the body of the
course or from the **Open Ended Console** page.
#. :ref:`Learn to Grade` (this process is called
*calibration*).
#. :ref:`Grade Responses` from other students.
.. _Access Responses:
Step 1: Access responses from other students
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
.. note::
Modify the content in this section according to
your course. For example, if your students can only grade by using the **Open
Ended Console** page, change the introductory sentence below, and delete the
second and third bullets.
**Note** *You can only grade a response if you've submitted a response to the
question, an instructor has already graded at least 20 responses, and
there are more responses from other students left to grade. If you haven't submitted
a response or no responses are available for grading, you see a yellow message in the
interface.*
.. image:: /Images/PAStudent_NoSubmissions.gif
There are several ways to access other students' responses, depending on
the way that the course is set up.
- Through the **Open Ended Console** page. This option is always
available for every course. To access the **Open Ended Console** page,
click the **Open Ended Panel** tab at the top of any page in the course.
When you see the list of problems that have responses available to grade,
click the name of the problem that you want to open.
.. image:: /Images/PGI_FromOEC_2Problems.gif
- Through the courseware, in a specific unit. This option is only available if the
instructor has included a peer grading interface for the problem in the body of
the course. To access responses in the courseware, go to the unit that contains
the open response assessment problem. Scroll down past the response that you
submitted until you see the peer grading interface that appears below the problem.
.. image:: /Images/PGI_InUnitComposite.gif
- Through the courseware, in a separate section. This option may not be available
for your course. If it is, you'll see the section for peer grading in the
course accordion on the left side of your screen. For example, MIT's 6.00x:
Introduction to Computer Science and Programming course has a separate section
that holds all the course peer grading interfaces. To access peer grading for
a problem, you click the problem name.
.. image:: /Images/PGI_Multiple-600x.gif
.. _Learn to Grade:
Step 2: Learn to grade
^^^^^^^^^^^^^^^^^^^^^^
Before you grade your peers' responses, you must learn to grade
the same way that an instructor would. In this process, called
*calibration*, you'll grade several responses that an instructor has already
graded. If your grading is similar to the instructor's, you can begin grading
other students' responses to the question.
#. Click the name of the problem. When the **Learning to grade** page
opens, click **Start learning to grade**.
#. When the problem opens, compare the student's response with the
rubric. Select the options that best apply to the response, and then
click **Submit**.
#. Review the **How did I do?** message that you receive, and then click
**Continue**.
.. image:: /Images/PG_Calibration_Correct.gif
.. image:: /Images/PG_Calibration_Incorrect.gif
When you click **Continue**, the next student response appears for
you to grade, and you see a yellow **Calibration essay saved** message in
the top left corner of the page.
#. Continue to grade responses. After you grade the required number of
responses correctly, you receive a **Ready to grade!** message. You
can then start to grade responses for other students.
.. _Grade Responses:
Step 3: Grade responses
^^^^^^^^^^^^^^^^^^^^^^^
When you grade a peer assessment response, you can not only select
options in the rubric, but also provide additional feedback for the
student who submitted the response.
#. When the response opens, select the options in the rubric that you
feel best apply to the response, as you did in the calibration process.
If you have concerns about the response, you can select other
options to flag the response for instructor review. You don't have to fill
out the rubric before you select these options.
- If you aren't sure how to grade the response, select the **I am unsure about
the scores I have given above** check box.
- If the response is offensive, or if you suspect that it contains plagiarized
material, select the **This submission has explicit, offensive, or (I suspect)
plagiarized content** check box.
#. Under **Written Feedback**, write a comment about the score that you
gave the response.
#. Click **Submit**. You see a **Successfully saved your feedback**
message at the top of the screen, and the next response opens.
#. Continue to grade until you've graded the required number of
responses (usually 3). When you've graded enough responses, you
receive the following message.
.. image:: /Images/DoneGrading.gif
When you see this message, you can access the score for your own
response. For more information, see :ref:`ORA Access Scores`.
If you want to grade additional responses at any time, you can go back
to the **Peer Grading** page and click the name of the problem that you want
to continue grading.
.. note:: When a response opens for you to grade, it leaves the current "grading pool"
that other instructors or students are grading from, so that no one else can grade
the response while you are working on it. If you do not submit a score
for this response within 30 minutes, the response returns to the grading pool
(so that it again becomes available for others to grade), even if you still have
the response open on your screen.
If the response returns to the grading pool (because the 30 minutes have passed),
but the response is still open on your screen, you can still submit feedback for
that response. If another instructor or student grades the response after it returns to the
grading pool but before you submit your feedback, the response receives two grades.
If you click your browser's **Back** button to return to the problem list before you
click **Submit** to submit your feedback for a response, the response stays outside
the grading pool until 30 minutes have passed. When the response returns to the
grading pool, you can grade it.
.. _ORA AI Assessment:
Artificial Intelligence (AI) Assessment
---------------------------------------
.. note::
You can delete this section if your ORA problem doesn't use AI assessments.
In an AI assessment, an instructor grades a sample set of student responses to the
open response assessment problem. A machine learning algorithm then creates a model
based on the instructor's scores and grades the remaining students' responses.
After you submit your response to an AI assessment, the following message appears under your
response.
**Your response has been submitted. Please check back later for your grade.**
Depending on the time that it takes for the instructor to grade a sample set of
responses, you may receive your grade within minutes, or you may have to wait
a few days. You won't receive a notification when your score is ready, so keep
checking back.
For more information about accessing your scores, see :ref:`ORA Access Scores`.
.. _ORA Access Scores:
Access Scores and Feedback
--------------------------
.. note::
Modify the text in this section to apply to your course.
For *self assessments*, the score that you give yourself appears as soon as you
submit your assessment.
For *peer assessments* and *AI assessments*, you'll access your scores through the **Open Ended Console** page.
#. In the EdX Demo course, click the **Open Ended Panel** tab at the top
of the page.
#. On the **Open Ended Console** page, click **Problems You Have
Submitted**.
#. On the **Open Ended Problems** page, check the **Status** column to
see whether your responses have been graded. The status for each problem is
either **Waiting to be Graded** or **Finished**.
#. If **Finished** appears in the **Status** column for the problem you want,
click the name of the problem to see your score for that problem. When you
click the name of the problem, the problem opens in the courseware.
For both AI and peer assessments, the score appears below your response
in an abbreviated version of the rubric.
.. image:: /Images/AIScoredResponse.gif
For peer assessments, you can
also see the written feedback that your response received from different
graders.
.. image:: /Images/PeerScoredResponse.gif
If you want to see the full rubric for either an AI or peer assessment,
click **Toggle Full Rubric**.
.. note:: For a peer assessment, if you haven't yet graded enough
problems to see your score, you receive a message that lets you know how
many problems you still need to grade.
.. image:: /Images/FeedbackNotAvailable.gif
For more information about grading peer assessments, see :ref:`ORA Peer Assessment`.
Resubmitting a Response
-----------------------
.. note::
You can delete this section if you don't allow students to submit multiple responses.
Some open response assessments allow multiple attempts. For these
problems, a **New Submission** button appears below your original
response.
If you want to answer the question again, click **New Submission** to
clear your former response, and click **OK** in the dialog box that
appears. You can then enter a new response for the problem.
.. _Tools:
#############################
Working with Tools
#############################
***************************
Overview of Tools in Studio
***************************
In addition to text, images, and different types of problems, Studio allows you
to add customized learning tools such as word clouds to your course.
- :ref:`LTI Component`: LTI components allow you to add an external learning application
or textbook to Studio.
- :ref:`Word Cloud`: Word clouds arrange text that students enter (for example, in
response to a question) into a colorful graphic that students can see.
- :ref:`Zooming image`: Zooming images allow you to enlarge sections of an image so
that students can see the section in detail.
.. _LTI Component:
**************
LTI Components
**************
You may have discovered or developed an external learning application
that you want to add to your online course. Or, you may have a digital
copy of your textbook that uses a format other than PDF. You can add
external learning applications or textbooks to Studio by using a
Learning Tools Interoperability (LTI) component. The LTI component is
based on the `IMS Global Learning Tools
Interoperability <http://www.imsglobal.org/LTI/v1p1p1/ltiIMGv1p1p1.html>`_
version 1.1.1 specifications.
You can use an LTI component in two ways.
- You can add external LTI content that is displayed only, such as
textbook content that doesn’t require a student response.
- You can add external LTI content that requires a student response. An
external provider will grade student responses.
Before you create an LTI component from an external LTI provider in a
unit, you need the following information.
- The **LTI ID**. This is a value that you create to refer to the external LTI
provider. You should create an LTI ID that you can remember easily.
The LTI ID can contain uppercase and lowercase alphanumeric
characters, as well as underscore characters (_). It can contain any
number of characters. For example, you may create an LTI ID that is
as simple as **test_lti_id**, or your LTI ID may be a string of
numbers and letters such as **id_21441** or
**book_lti_provider_from_new_york**.
- The **client key**. This value is a sequence of characters that you
obtain from the LTI provider. The client key is used for
authentication and can contain any number of characters. For example,
your client key may be **b289378-f88d-2929-ctools.umich.edu**.
- The **client secret**. This value is a sequence of characters that
you obtain from the LTI provider. The client secret is used for
authentication and can contain any number of characters. For example,
your client secret may be something as simple as **secret**, or it
may be a string of numbers and letters such as **23746387264** or
**yt4984yr8**.
- The **launch URL** (if the LTI component requires a student response
that will be graded). You obtain the launch URL from the LTI
provider. The launch URL is the URL that Studio sends to the external
LTI provider so that the provider can send back students’ grades.
Create an LTI Component
-----------------------
Creating an LTI component in your course has three steps.
#. Add LTI to the **advanced_modules** policy key.
#. Register the LTI provider.
#. Create the LTI component in an individual unit.
Step 1. Add LTI to the Advanced Modules Policy Key
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
#. On the **Settings** menu, click **Advanced Settings**.
#. On the **Advanced Settings** page, locate the **Manual Policy
Definition** section, and then locate the **advanced_modules**
policy key (this key is at the top of the list).
.. image:: Images/AdvancedModulesEmpty.gif
:alt: Image of the advanced_modules key in the Advanced Settings page
#. Under **Policy Value**, place your cursor between the brackets, and
then enter **"lti"**. Make sure to include the quotation marks, but
not the period.
.. image:: Images/LTI_Policy_Key.gif
:alt: Image of the advanced_modules key in the Advanced Settings page, with the lti value added
**Note** If the **Policy Value** field already contains text, place your
cursor directly after the closing quotation mark for the final item, and
then enter a comma followed by **"lti"** (make sure that you include the
quotation marks), as shown in the example below.
#. At the bottom of the page, click **Save Changes**.
The page refreshes automatically. At the top of the page,
you see a notification that your changes have been saved.
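For example, in a hypothetical course that already uses the word cloud tool, the
finished **Policy Value** would read the following.
::

["word_cloud", "lti"]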
Step 2. Register the External LTI Provider
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
To register the external LTI provider, you'll add the LTI ID, the client
key, and the client secret to the **lti_passports** policy key.
#. On the **Advanced Settings** page, locate the **lti_passports**
policy key.
#. Under **Policy Value**, place your cursor between the brackets, and
then enter the LTI ID, client key, and client secret in the following
format (make sure to include the quotation marks and the colons).
::
"lti_id:client_key:client_secret"
For example, the value in the **lti_passports** field may be the following.
::
"test_lti_id:b289378-f88d-2929-ctools.umich.edu:secret"
If you have multiple LTI providers, separate the values with a comma.
Make sure to surround each entry with quotation marks.
::
"test_lti_id:b289378-f88d-2929-ctools.umich.edu:secret",
"id_21441:b289378-f88d-2929-ctools.school.edu:23746387264",
"book_lti_provider_from_new_york:b289378-f88d-2929-ctools.company.com:yt4984yr8"
#. At the bottom of the page, click **Save Changes**.
The page refreshes automatically. At the top of the page,
you see a notification that your changes have been saved, and you can
see your entries in the **lti_passports** policy key.
Step 3. Add the LTI Component to a Unit
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
#. In the unit where you want to create the problem, click **Advanced**
under **Add New Component**, and then click **LTI**.
#. In the component that appears, click **Edit**.
#. In the component editor, set the options that you want. See the table
below for a description of each option.
#. Click **Save**.
.. list-table::
:widths: 10 80
:header-rows: 1
* - `Setting`
- Description
* - `Display Name`
- Specifies the name of the problem. This name appears above the problem and in
the course ribbon at the top of the page in the courseware.
* - `custom_parameters`
- Enables you to add one or more custom parameters. For example, if you've added an
e-book, a custom parameter may include the page that your e-book should open to.
You could also use a custom parameter to set the background color of the LTI component.
Every custom parameter has a key and a value. You must add the key and value in the following format.
::
key=value
For example, a custom parameter may resemble the following.
::
bgcolor=red
page=144
To add a custom parameter, click **Add**.
* - `graded`
- Indicates whether the grade for the problem counts toward the student's total grade. By
default, this value is set to **False**.
* - `has_score`
- Specifies whether the problem has a numerical score. By default, this value
is set to **False**.
* - `launch_url`
- Lists the URL that Studio sends to the external LTI provider so that the provider
can send back students' grades. This setting is only used if **graded** is set to
**True**.
* - `lti_id`
- Specifies the LTI ID for the external LTI provider. This value must be the same
LTI ID that you entered on the **Advanced Settings** page.
* - `open_in_a_new_page`
- Indicates whether the problem opens in a new page. If you set this value to **True**,
the student clicks a link that opens the LTI content in a new window. If you set
this value to **False**, the LTI content opens in an IFrame in the current page.
* - `weight`
- Specifies the number of points possible for the problem. By default, if an
external LTI provider grades the problem, the problem is worth 1 point, and
a student’s score can be any value between 0 and 1.
For more information about problem weights and computing point scores, see :ref:`Problem Weight`.
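For example, assuming the standard weight scaling, if **weight** is set to 10 and the
external LTI provider reports a score of 0.8, the student earns 0.8 × 10 = 8 points.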
.. _Word Cloud:
**********
Word Cloud
**********
In a word cloud exercise, students enter words into a field in response
to a question or prompt. The words all the students have entered then
appear instantly as a colorful graphic, with the most popular responses
appearing largest. The graphic becomes larger as more students answer.
Students can both see the way their peers have answered and contribute
their thoughts to the group.
For example, the following word cloud was created from students'
responses to a question in a HarvardX course.
.. image:: Images/WordCloudExample.gif
:alt: Image of a word cloud problem
Create a Word Cloud Exercise
----------------------------
To create a word cloud exercise:
#. Add the Word Cloud advanced component. To do this, add the
"word_cloud" key value to the **Advanced Settings** page. (For more
information, see the instructions in :ref:`Specialized Problems`.)
#. In the unit where you want to create the problem, click **Advanced**
under **Add New Component**.
#. In the list of problem types, click **Word Cloud**.
#. In the component that appears, click **Edit**.
#. In the component editor, specify the settings that you want. You can
leave the default value for everything except **Display Name**.
- **Display Name**: The name that appears in the course ribbon and
as a heading above the problem.
- **Inputs**: The number of text boxes into which students can enter
words, phrases, or sentences.
- **Maximum Words**: The maximum number of words that the word cloud
displays. If students enter 300 different words but the maximum is
set to 250, only the 250 most commonly entered words appear in the
word cloud.
- **Show Percents**: Specifies whether the percentage of all entered words
that a given word represents appears near that word.
#. Click **Save**.
For more information, see `Xml Format of "Word Cloud" Module
<https://edx.readthedocs.org/en/latest/course_data_formats/word_cloud/word_cloud.html#>`_.
.. _Zooming Image:
******************
Zooming Image Tool
******************
Some edX courses use extremely large, extremely detailed graphics. To make these
graphics easier to understand, you can offer two versions: a main view, and a zoomed
view of a section that appears when a student clicks an area of the main view.
The example below is from 7.00x: Introduction to Biology and shows a subset of the
biochemical reactions that cells carry out.
.. image:: Images/Zooming_Image.gif
:alt: Image of a zooming image
Create a Zooming Image Tool
---------------------------
#. Under **Add New Component**, click **html**, and then click **Zooming Image**.
#. In the empty component that appears, click **Edit**.
#. When the component editor opens, replace the example content with your own content.
#. Click **Save** to save the HTML component.
<problem display_name="Drag and drop demos: drag and drop icons or labels
to proper positions." >
<customresponse>
<text>
<h4>[Anyof rule example]</h4><br/>
<h4>Please label hydrogen atoms connected with left carbon atom.</h4>
<br/>
</text>
<drag_and_drop_input img="/static/images/images_list/ethglycol.jpg" target_outline="true"
one_per_target="true" no_labels="true" label_bg_color="rgb(222, 139, 238)">
<draggable id="1" label="Hydrogen" />
<draggable id="2" label="Hydrogen" />
<target id="t1_o" x="10" y="67" w="100" h="100"/>
<target id="t2" x="133" y="3" w="70" h="70"/>
<target id="t3" x="2" y="384" w="70" h="70"/>
<target id="t4" x="95" y="386" w="70" h="70"/>
<target id="t5_c" x="94" y="293" w="91" h="91"/>
<target id="t6_c" x="328" y="294" w="91" h="91"/>
<target id="t7" x="393" y="463" w="70" h="70"/>
<target id="t8" x="344" y="214" w="70" h="70"/>
<target id="t9_o" x="445" y="162" w="100" h="100"/>
<target id="t10" x="591" y="132" w="70" h="70"/>
</drag_and_drop_input>
<answer type="loncapa/python"><![CDATA[
correct_answer = [
{'draggables': ['1', '2'],
'targets': ['t2', 't3', 't4' ],
'rule':'anyof'
}]
if draganddrop.grade(submission[0], correct_answer):
    correct = ['correct']
else:
    correct = ['incorrect']
]]></answer>
</customresponse>
<customresponse>
<text>
<h4>[Complex grading example]</h4><br/>
<h4>Describe carbon molecule in LCAO-MO.</h4>
<br/>
</text>
<drag_and_drop_input img="/static/images/images_list/lcao-mo/lcao-mo.jpg" target_outline="true" >
<!-- filled bond -->
<draggable id="1" icon="/static/images/images_list/lcao-mo/u_d.png" />
<draggable id="2" icon="/static/images/images_list/lcao-mo/u_d.png" />
<draggable id="3" icon="/static/images/images_list/lcao-mo/u_d.png" />
<draggable id="4" icon="/static/images/images_list/lcao-mo/u_d.png" />
<draggable id="5" icon="/static/images/images_list/lcao-mo/u_d.png" />
<draggable id="6" icon="/static/images/images_list/lcao-mo/u_d.png" />
<!-- up bond -->
<draggable id="7" icon="/static/images/images_list/lcao-mo/up.png"/>
<draggable id="8" icon="/static/images/images_list/lcao-mo/up.png"/>
<draggable id="9" icon="/static/images/images_list/lcao-mo/up.png"/>
<draggable id="10" icon="/static/images/images_list/lcao-mo/up.png"/>
<!-- sigma -->
<draggable id="11" icon="/static/images/images_list/lcao-mo/sigma.png"/>
<draggable id="12" icon="/static/images/images_list/lcao-mo/sigma.png"/>
<!-- sigma* -->
<draggable id="13" icon="/static/images/images_list/lcao-mo/sigma_s.png"/>
<draggable id="14" icon="/static/images/images_list/lcao-mo/sigma_s.png"/>
<!-- pi -->
<draggable id="15" icon="/static/images/images_list/lcao-mo/pi.png" />
<!-- pi* -->
<draggable id="16" icon="/static/images/images_list/lcao-mo/pi_s.png" />
<!-- images that should not be dragged -->
<draggable id="17" icon="/static/images/images_list/lcao-mo/d.png" />
<draggable id="18" icon="/static/images/images_list/lcao-mo/d.png" />
<!-- positions of electrons and electron pairs -->
<target id="s_left" x="130" y="360" w="32" h="32"/>
<target id="s_right" x="505" y="360" w="32" h="32"/>
<target id="s_sigma" x="320" y="425" w="32" h="32"/>
<target id="s_sigma_star" x="320" y="290" w="32" h="32"/>
<target id="p_left_1" x="80" y="100" w="32" h="32"/>
<target id="p_left_2" x="125" y="100" w="32" h="32"/>
<target id="p_left_3" x="175" y="100" w="32" h="32"/>
<target id="p_right_1" x="465" y="100" w="32" h="32"/>
<target id="p_right_2" x="515" y="100" w="32" h="32"/>
<target id="p_right_3" x="560" y="100" w="32" h="32"/>
<target id="p_pi_1" x="290" y="220" w="32" h="32"/>
<target id="p_pi_2" x="335" y="220" w="32" h="32"/>
<target id="p_sigma" x="315" y="170" w="32" h="32"/>
<target id="p_pi_star_1" x="290" y="40" w="32" h="32"/>
<target id="p_pi_star_2" x="340" y="40" w="32" h="32"/>
<target id="p_sigma_star" x="315" y="0" w="32" h="32"/>
<!-- positions of names of energy levels -->
<target id="s_sigma_name" x="400" y="425" w="32" h="32"/>
<target id="s_sigma_star_name" x="400" y="290" w="32" h="32"/>
<target id="p_pi_name" x="400" y="220" w="32" h="32"/>
<target id="p_sigma_name" x="400" y="170" w="32" h="32"/>
<target id="p_pi_star_name" x="400" y="40" w="32" h="32"/>
<target id="p_sigma_star_name" x="400" y="0" w="32" h="32"/>
</drag_and_drop_input>
<answer type="loncapa/python"><![CDATA[
correct_answer = [
{
'draggables': ['1', '2', '3', '4', '5', '6'],
'targets': [
's_left', 's_right', 's_sigma', 's_sigma_star', 'p_pi_1', 'p_pi_2'
],
'rule': 'unordered_equal'
}, {
'draggables': ['7','8', '9', '10'],
'targets': ['p_left_1', 'p_left_2', 'p_right_1','p_right_2'],
'rule': 'unordered_equal'
}, {
'draggables': ['11', '12'],
'targets': ['s_sigma_name', 'p_sigma_name'],
'rule': 'unordered_equal'
}, {
'draggables': ['13', '14'],
'targets': ['s_sigma_star_name', 'p_sigma_star_name'],
'rule': 'unordered_equal'
}, {
'draggables': ['15'],
'targets': ['p_pi_name'],
'rule': 'unordered_equal'
}, {
'draggables': ['16'],
'targets': ['p_pi_star_name'],
'rule': 'unordered_equal'
}]
if draganddrop.grade(submission[0], correct_answer):
    correct = ['correct']
else:
    correct = ['incorrect']
]]></answer>
</customresponse>
<customresponse>
<text>
<h4>[Another complex grading example]</h4><br/>
<h4>Describe oxygen molecule in LCAO-MO</h4>
<br/>
</text>
<drag_and_drop_input img="/static/images/images_list/lcao-mo/lcao-mo.jpg" target_outline="true" one_per_target="true">
<!-- filled bond -->
<draggable id="1" icon="/static/images/images_list/lcao-mo/u_d.png" />
<draggable id="2" icon="/static/images/images_list/lcao-mo/u_d.png" />
<draggable id="3" icon="/static/images/images_list/lcao-mo/u_d.png" />
<draggable id="4" icon="/static/images/images_list/lcao-mo/u_d.png" />
<draggable id="5" icon="/static/images/images_list/lcao-mo/u_d.png" />
<draggable id="6" icon="/static/images/images_list/lcao-mo/u_d.png" />
<draggable id="v_fb_1" icon="/static/images/images_list/lcao-mo/u_d.png" />
<draggable id="v_fb_2" icon="/static/images/images_list/lcao-mo/u_d.png" />
<draggable id="v_fb_3" icon="/static/images/images_list/lcao-mo/u_d.png" />
<!-- up bond -->
<draggable id="7" icon="/static/images/images_list/lcao-mo/up.png"/>
<draggable id="8" icon="/static/images/images_list/lcao-mo/up.png"/>
<draggable id="9" icon="/static/images/images_list/lcao-mo/up.png"/>
<draggable id="10" icon="/static/images/images_list/lcao-mo/up.png"/>
<draggable id="v_ub_1" icon="/static/images/images_list/lcao-mo/up.png"/>
<draggable id="v_ub_2" icon="/static/images/images_list/lcao-mo/up.png"/>
<!-- sigma -->
<draggable id="11" icon="/static/images/images_list/lcao-mo/sigma.png"/>
<draggable id="12" icon="/static/images/images_list/lcao-mo/sigma.png"/>
<!-- sigma* -->
<draggable id="13" icon="/static/images/images_list/lcao-mo/sigma_s.png"/>
<draggable id="14" icon="/static/images/images_list/lcao-mo/sigma_s.png"/>
<!-- pi -->
<draggable id="15" icon="/static/images/images_list/lcao-mo/pi.png" />
<!-- pi* -->
<draggable id="16" icon="/static/images/images_list/lcao-mo/pi_s.png" />
<!-- images that should not be dragged -->
<draggable id="17" icon="/static/images/images_list/lcao-mo/d.png" />
<draggable id="18" icon="/static/images/images_list/lcao-mo/d.png" />
<!-- positions of electrons and electron pairs -->
<target id="s_left" x="130" y="360" w="32" h="32"/>
<target id="s_right" x="505" y="360" w="32" h="32"/>
<target id="s_sigma" x="320" y="425" w="32" h="32"/>
<target id="s_sigma_star" x="320" y="290" w="32" h="32"/>
<target id="p_left_1" x="80" y="100" w="32" h="32"/>
<target id="p_left_2" x="125" y="100" w="32" h="32"/>
<target id="p_left_3" x="175" y="100" w="32" h="32"/>
<target id="p_right_1" x="465" y="100" w="32" h="32"/>
<target id="p_right_2" x="515" y="100" w="32" h="32"/>
<target id="p_right_3" x="560" y="100" w="32" h="32"/>
<target id="p_pi_1" x="290" y="220" w="32" h="32"/>
<target id="p_pi_2" x="335" y="220" w="32" h="32"/>
<target id="p_sigma" x="315" y="170" w="32" h="32"/>
<target id="p_pi_star_1" x="290" y="40" w="32" h="32"/>
<target id="p_pi_star_2" x="340" y="40" w="32" h="32"/>
<target id="p_sigma_star" x="315" y="0" w="32" h="32"/>
<!-- positions of names of energy levels -->
<target id="s_sigma_name" x="400" y="425" w="32" h="32"/>
<target id="s_sigma_star_name" x="400" y="290" w="32" h="32"/>
<target id="p_pi_name" x="400" y="220" w="32" h="32"/>
<target id="p_pi_star_name" x="400" y="40" w="32" h="32"/>
<target id="p_sigma_name" x="400" y="170" w="32" h="32"/>
<target id="p_sigma_star_name" x="400" y="0" w="32" h="32"/>
</drag_and_drop_input>
<answer type="loncapa/python"><![CDATA[
correct_answer = [{
'draggables': ['1', '2', '3', '4', '5', '6', 'v_fb_1', 'v_fb_2', 'v_fb_3'],
'targets': [
's_left', 's_right', 's_sigma', 's_sigma_star', 'p_pi_1', 'p_pi_2',
'p_sigma', 'p_left_1', 'p_right_3'
],
'rule': 'anyof'
}, {
'draggables': ['7', '8', '9', '10', 'v_ub_1', 'v_ub_2'],
'targets': [
'p_left_2', 'p_left_3', 'p_right_1', 'p_right_2', 'p_pi_star_1',
'p_pi_star_2'
],
'rule': 'anyof'
}, {
'draggables': ['11', '12'],
'targets': ['s_sigma_name', 'p_sigma_name'],
'rule': 'anyof'
}, {
'draggables': ['13', '14'],
'targets': ['s_sigma_star_name', 'p_sigma_star_name'],
'rule': 'anyof'
}, {
'draggables': ['15'],
'targets': ['p_pi_name'],
'rule': 'anyof'
}, {
'draggables': ['16'],
'targets': ['p_pi_star_name'],
'rule': 'anyof'
}]
if draganddrop.grade(submission[0], correct_answer):
    correct = ['correct']
else:
    correct = ['incorrect']
]]></answer>
</customresponse>
<customresponse>
<text>
<h4>[Individual targets with outlines, One draggable per target]</h4><br/>
<h4>
Drag -Ant- to first position and -Star- to third position </h4><br/>
</text>
<drag_and_drop_input img="/static/images/cow.png" target_outline="true">
<draggable id="1" label="Label 1"/>
<draggable id="name_with_icon" label="Ant" icon="/static/images/images_list/ant.jpg"/>
<draggable id="with_icon" label="Cloud" icon="/static/images/images_list/cloud.jpg" />
<draggable id="5" label="Label2" />
<draggable id="2" label="Drop" icon="/static/images/images_list/drop.jpg" />
<draggable id="name_label_icon3" label="Grass" icon="/static/images/images_list/grass.jpg" />
<draggable id="name4" label="Star" icon="/static/images/images_list/star.png" />
<draggable id="7" label="Label3" />
<target id="t1" x="20" y="20" w="90" h="90"/>
<target id="t2" x="300" y="100" w="90" h="90"/>
<target id="t3" x="150" y="40" w="50" h="50"/>
</drag_and_drop_input>
<answer type="loncapa/python"><![CDATA[
correct_answer = {'name_with_icon': 't1', 'name4': 't2'}
if draganddrop.grade(submission[0], correct_answer):
    correct = ['correct']
else:
    correct = ['incorrect']
]]></answer>
</customresponse>
<customresponse>
<text>
<h4>[SMALL IMAGE, Individual targets WITHOUT outlines, One draggable
per target]</h4><br/>
<h4>
Move -Star- to the volcano opening, and -Label3- onto
the right ear of the cow.
</h4><br/>
</text>
<drag_and_drop_input img="/static/images/cow3.png" target_outline="false">
<draggable id="1" label="Label 1"/>
<draggable id="name_with_icon" label="Ant" icon="/static/images/images_list/ant.jpg"/>
<draggable id="with_icon" label="Cloud" icon="/static/images/images_list/cloud.jpg" />
<draggable id="5" label="Label2" />
<draggable id="2" label="Drop" icon="/static/images/images_list/drop.jpg" />
<draggable id="name_label_icon3" label="Grass" icon="/static/images/images_list/grass.jpg" />
<draggable id="name4" label="Star" icon="/static/images/images_list/star.png" />
<draggable id="7" label="Label3" />
<target id="t1" x="111" y="58" w="90" h="90"/>
<target id="t2" x="212" y="90" w="90" h="90"/>
</drag_and_drop_input>
<answer type="loncapa/python"><![CDATA[
correct_answer = {'name4': 't1',
                  '7': 't2'}
if draganddrop.grade(submission[0], correct_answer):
    correct = ['correct']
else:
    correct = ['incorrect']
]]></answer>
</customresponse>
<customresponse>
<text>
<h4>[Many draggables per target]</h4><br/>
<h4>Move -Star- and -Ant- to the leftmost target
and -Label3- and -Label2- to the rightmost target.</h4><br/>
</text>
<drag_and_drop_input img="/static/images/cow.png" target_outline="true" one_per_target="false">
<draggable id="1" label="Label 1"/>
<draggable id="name_with_icon" label="Ant" icon="/static/images/images_list/ant.jpg"/>
<draggable id="with_icon" label="Cloud" icon="/static/images/images_list/cloud.jpg" />
<draggable id="5" label="Label2" />
<draggable id="2" label="Drop" icon="/static/images/images_list/drop.jpg" />
<draggable id="name_label_icon3" label="Grass" icon="/static/images/images_list/grass.jpg" />
<draggable id="name4" label="Star" icon="/static/images/images_list/star.png" />
<draggable id="7" label="Label3" />
<target id="t1" x="20" y="20" w="90" h="90"/>
<target id="t2" x="300" y="100" w="90" h="90"/>
<target id="t3" x="150" y="40" w="50" h="50"/>
</drag_and_drop_input>
<answer type="loncapa/python"><![CDATA[
correct_answer = {'name4': 't1',
                  'name_with_icon': 't1',
                  '5': 't2',
                  '7': 't2'}
if draganddrop.grade(submission[0], correct_answer):
    correct = ['correct']
else:
    correct = ['incorrect']
]]></answer>
</customresponse>
<customresponse>
<text>
<h4>[Draggables can be placed anywhere on base image]</h4><br/>
<h4>
Place -Grass- in the middle of the image and -Ant- in the
right upper corner.</h4><br/>
</text>
<drag_and_drop_input img="/static/images/cow.png" >
<draggable id="1" label="Label 1"/>
<draggable id="ant" label="Ant" icon="/static/images/images_list/ant.jpg"/>
<draggable id="with_icon" label="Cloud" icon="/static/images/images_list/cloud.jpg" />
<draggable id="5" label="Label2" />
<draggable id="2" label="Drop" icon="/static/images/images_list/drop.jpg" />
<draggable id="grass" label="Grass" icon="/static/images/images_list/grass.jpg" />
<draggable id="name4" label="Star" icon="/static/images/images_list/star.png" />
<draggable id="7" label="Label3" />
</drag_and_drop_input>
<answer type="loncapa/python"><![CDATA[
correct_answer = {'grass': [[300, 200], 200],
                  'ant': [[500, 0], 200]}
if draganddrop.grade(submission[0], correct_answer):
    correct = ['correct']
else:
    correct = ['incorrect']
]]></answer>
</customresponse>
<customresponse>
<text>
<h4>[Another anyof example]</h4><br/>
<h4>Please identify the Carbon and Oxygen atoms in the molecule.</h4><br/>
</text>
<drag_and_drop_input img="/static/images/images_list/ethglycol.jpg" target_outline="true" one_per_target="true">
<draggable id="l1_c" label="Carbon" />
<draggable id="l2" label="Methane"/>
<draggable id="l3_o" label="Oxygen" />
<draggable id="l4" label="Calcium" />
<draggable id="l5" label="Methane"/>
<draggable id="l6" label="Calcium" />
<draggable id="l7" label="Hydrogen" />
<draggable id="l8_c" label="Carbon" />
<draggable id="l9" label="Hydrogen" />
<draggable id="l10_o" label="Oxygen" />
<target id="t1_o" x="10" y="67" w="100" h="100"/>
<target id="t2" x="133" y="3" w="70" h="70"/>
<target id="t3" x="2" y="384" w="70" h="70"/>
<target id="t4" x="95" y="386" w="70" h="70"/>
<target id="t5_c" x="94" y="293" w="91" h="91"/>
<target id="t6_c" x="328" y="294" w="91" h="91"/>
<target id="t7" x="393" y="463" w="70" h="70"/>
<target id="t8" x="344" y="214" w="70" h="70"/>
<target id="t9_o" x="445" y="162" w="100" h="100"/>
<target id="t10" x="591" y="132" w="70" h="70"/>
</drag_and_drop_input>
<answer type="loncapa/python"><![CDATA[
correct_answer = [
{
'draggables': ['l3_o', 'l10_o'],
'targets': ['t1_o', 't9_o'],
'rule': 'anyof'
},
{
'draggables': ['l1_c','l8_c'],
'targets': ['t5_c','t6_c'],
'rule': 'anyof'
}
]
if draganddrop.grade(submission[0], correct_answer):
    correct = ['correct']
else:
    correct = ['incorrect']
]]></answer>
</customresponse>
<customresponse>
<text>
<h4>[Again another anyof example]</h4><br/>
<h4>If the element appears in this molecule, drag the label onto it</h4>
<br/>
</text>
<drag_and_drop_input img="/static/images/images_list/ethglycol.jpg" target_outline="true"
one_per_target="true" no_labels="true" label_bg_color="rgb(222, 139, 238)">
<draggable id="1" label="Hydrogen" />
<draggable id="2" label="Hydrogen" />
<draggable id="3" label="Nytrogen" />
<draggable id="4" label="Nytrogen" />
<draggable id="5" label="Boron" />
<draggable id="6" label="Boron" />
<draggable id="7" label="Carbon" />
<draggable id="8" label="Carbon" />
<target id="t1_o" x="10" y="67" w="100" h="100"/>
<target id="t2_h" x="133" y="3" w="70" h="70"/>
<target id="t3_h" x="2" y="384" w="70" h="70"/>
<target id="t4_h" x="95" y="386" w="70" h="70"/>
<target id="t5_c" x="94" y="293" w="91" h="91"/>
<target id="t6_c" x="328" y="294" w="91" h="91"/>
<target id="t7_h" x="393" y="463" w="70" h="70"/>
<target id="t8_h" x="344" y="214" w="70" h="70"/>
<target id="t9_o" x="445" y="162" w="100" h="100"/>
<target id="t10_h" x="591" y="132" w="70" h="70"/>
</drag_and_drop_input>
<answer type="loncapa/python"><![CDATA[
correct_answer = [
{
'draggables': ['7', '8'],
'targets': ['t5_c', 't6_c'],
'rule': 'anyof'
},
{
'draggables': ['1', '2'],
'targets': ['t2_h', 't3_h', 't4_h', 't7_h', 't8_h', 't10_h'],
'rule': 'anyof'
}]
if draganddrop.grade(submission[0], correct_answer):
    correct = ['correct']
else:
    correct = ['incorrect']
]]></answer>
</customresponse>
<customresponse>
<text>
<h4>[Wrong base image url example]
</h4><br/>
</text>
<drag_and_drop_input img="/static/images/cow3_bad.png" target_outline="false">
<draggable id="1" label="Label 1"/>
<draggable id="name_with_icon" label="Ant" icon="/static/images/images_list/ant.jpg"/>
<draggable id="with_icon" label="Cloud" icon="/static/images/images_list/cloud.jpg" />
<draggable id="5" label="Label2" />
<draggable id="2" label="Drop" icon="/static/images/images_list/drop.jpg" />
<draggable id="name_label_icon3" label="Grass" icon="/static/images/images_list/grass.jpg" />
<draggable id="name4" label="Star" icon="/static/images/images_list/star.png" />
<draggable id="7" label="Label3" />
<target id="t1" x="111" y="58" w="90" h="90"/>
<target id="t2" x="212" y="90" w="90" h="90"/>
</drag_and_drop_input>
<answer type="loncapa/python"><![CDATA[
correct_answer = {'name4': 't1',
                  '7': 't2'}
if draganddrop.grade(submission[0], correct_answer):
    correct = ['correct']
else:
    correct = ['incorrect']
]]></answer>
</customresponse>
</problem>
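The problem above relies mainly on the 'anyof' and 'unordered_equal' rules. As a rough
sketch of how those two rules differ (a toy illustration for intuition only, not the
real capa draganddrop grader, which also handles reusable draggables and nested
targets):
::

    def matches(placed_targets, listed_targets, rule):
        # placed_targets: the targets that the listed draggables landed on
        if rule == 'anyof':
            # each placement only needs to be on SOME listed target
            return all(t in listed_targets for t in placed_targets)
        if rule == 'unordered_equal':
            # every listed target must be covered, in any order
            return sorted(placed_targets) == sorted(listed_targets)
        raise ValueError('unknown rule: %s' % rule)

    # Both Hydrogen labels on any of t2/t3/t4 satisfies 'anyof' ...
    print matches(['t2', 't4'], ['t2', 't3', 't4'], 'anyof')            # True
    # ... but not 'unordered_equal', which wants all three targets filled.
    print matches(['t2', 't4'], ['t2', 't3', 't4'], 'unordered_equal')  # False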
<problem display_name="Drag and drop demos: drag and drop icons or labels
to proper positions." >
<customresponse>
<text>
<h4>[Draggable is reusable example]</h4>
<br/>
<h4>Please label all hydrogen atoms.</h4>
<br/>
</text>
<drag_and_drop_input
img="/static/images/images_list/ethglycol.jpg"
target_outline="true"
one_per_target="true"
no_labels="true"
label_bg_color="rgb(222, 139, 238)"
>
<draggable id="1" label="Hydrogen" can_reuse='true' />
<target id="t1_o" x="10" y="67" w="100" h="100" />
<target id="t2" x="133" y="3" w="70" h="70" />
<target id="t3" x="2" y="384" w="70" h="70" />
<target id="t4" x="95" y="386" w="70" h="70" />
<target id="t5_c" x="94" y="293" w="91" h="91" />
<target id="t6_c" x="328" y="294" w="91" h="91" />
<target id="t7" x="393" y="463" w="70" h="70" />
<target id="t8" x="344" y="214" w="70" h="70" />
<target id="t9_o" x="445" y="162" w="100" h="100" />
<target id="t10" x="591" y="132" w="70" h="70" />
</drag_and_drop_input>
<answer type="loncapa/python">
<![CDATA[
correct_answer = [{
'draggables': ['1'],
'targets': ['t2', 't3', 't4', 't7', 't8', 't10'],
'rule': 'exact'
}]
if draganddrop.grade(submission[0], correct_answer):
correct = ['correct']
else:
correct = ['incorrect']
]]>
</answer>
</customresponse>
<customresponse>
<text>
<h4>[Complex grading example]</h4><br/>
<h4>Describe carbon molecule in LCAO-MO.</h4>
<br/>
</text>
<drag_and_drop_input img="/static/images/images_list/lcao-mo/lcao-mo.jpg" target_outline="true" >
<!-- filled bond -->
<draggable id="1" icon="/static/images/images_list/lcao-mo/u_d.png" can_reuse="true" />
<!-- up bond -->
<draggable id="7" icon="/static/images/images_list/lcao-mo/up.png" can_reuse="true" />
<!-- sigma -->
<draggable id="11" icon="/static/images/images_list/lcao-mo/sigma.png" can_reuse="true" />
<!-- sigma* -->
<draggable id="13" icon="/static/images/images_list/lcao-mo/sigma_s.png" can_reuse="true" />
<!-- pi -->
<draggable id="15" icon="/static/images/images_list/lcao-mo/pi.png" can_reuse="true" />
<!-- pi* -->
<draggable id="16" icon="/static/images/images_list/lcao-mo/pi_s.png" can_reuse="true" />
<!-- images that should not be dragged -->
<draggable id="17" icon="/static/images/images_list/lcao-mo/d.png" can_reuse="true" />
<!-- positions of electrons and electron pairs -->
<target id="s_left" x="130" y="360" w="32" h="32"/>
<target id="s_right" x="505" y="360" w="32" h="32"/>
<target id="s_sigma" x="320" y="425" w="32" h="32"/>
<target id="s_sigma_star" x="320" y="290" w="32" h="32"/>
<target id="p_left_1" x="80" y="100" w="32" h="32"/>
<target id="p_left_2" x="125" y="100" w="32" h="32"/>
<target id="p_left_3" x="175" y="100" w="32" h="32"/>
<target id="p_right_1" x="465" y="100" w="32" h="32"/>
<target id="p_right_2" x="515" y="100" w="32" h="32"/>
<target id="p_right_3" x="560" y="100" w="32" h="32"/>
<target id="p_pi_1" x="290" y="220" w="32" h="32"/>
<target id="p_pi_2" x="335" y="220" w="32" h="32"/>
<target id="p_sigma" x="315" y="170" w="32" h="32"/>
<target id="p_pi_star_1" x="290" y="40" w="32" h="32"/>
<target id="p_pi_star_2" x="340" y="40" w="32" h="32"/>
<target id="p_sigma_star" x="315" y="0" w="32" h="32"/>
<!-- positions of names of energy levels -->
<target id="s_sigma_name" x="400" y="425" w="32" h="32"/>
<target id="s_sigma_star_name" x="400" y="290" w="32" h="32"/>
<target id="p_pi_name" x="400" y="220" w="32" h="32"/>
<target id="p_sigma_name" x="400" y="170" w="32" h="32"/>
<target id="p_pi_star_name" x="400" y="40" w="32" h="32"/>
<target id="p_sigma_star_name" x="400" y="0" w="32" h="32"/>
</drag_and_drop_input>
<answer type="loncapa/python"><![CDATA[
correct_answer = [
{
'draggables': ['1'],
'targets': [
's_left', 's_right', 's_sigma', 's_sigma_star', 'p_pi_1', 'p_pi_2'
],
'rule': 'exact'
}, {
'draggables': ['7'],
'targets': ['p_left_1', 'p_left_2', 'p_right_1','p_right_2'],
'rule': 'exact'
}, {
'draggables': ['11'],
'targets': ['s_sigma_name', 'p_sigma_name'],
'rule': 'exact'
}, {
'draggables': ['13'],
'targets': ['s_sigma_star_name', 'p_sigma_star_name'],
'rule': 'exact'
}, {
'draggables': ['15'],
'targets': ['p_pi_name'],
'rule': 'exact'
}, {
'draggables': ['16'],
'targets': ['p_pi_star_name'],
'rule': 'exact'
}]
if draganddrop.grade(submission[0], correct_answer):
    correct = ['correct']
else:
    correct = ['incorrect']
]]></answer>
</customresponse>
<customresponse>
<text>
<h4>[Many draggables per target]</h4><br/>
<h4>Move two Stars and three Ants to the leftmost target
and one Label3 and four Label2 to the rightmost target.</h4><br/>
</text>
<drag_and_drop_input img="/static/images/cow.png" target_outline="true" one_per_target="false">
<draggable id="1" label="Label 1" can_reuse="true" />
<draggable id="name_with_icon" label="Ant" icon="/static/images/images_list/ant.jpg" can_reuse="true" />
<draggable id="with_icon" label="Cloud" icon="/static/images/images_list/cloud.jpg" can_reuse="true" />
<draggable id="5" label="Label2" can_reuse="true" />
<draggable id="2" label="Drop" icon="/static/images/images_list/drop.jpg" can_reuse="true" />
<draggable id="name_label_icon3" label="Grass" icon="/static/images/images_list/grass.jpg" can_reuse="true" />
<draggable id="name4" label="Star" icon="/static/images/images_list/star.png" can_reuse="true" />
<draggable id="7" label="Label3" can_reuse="true" />
<target id="t1" x="20" y="20" w="90" h="90"/>
<target id="t2" x="300" y="100" w="90" h="90"/>
<target id="t3" x="150" y="40" w="50" h="50"/>
</drag_and_drop_input>
<answer type="loncapa/python"><![CDATA[
correct_answer = [
{
'draggables': ['name4'],
'targets': [
't1', 't1'
],
'rule': 'exact'
},
{
'draggables': ['name_with_icon'],
'targets': [
't1', 't1', 't1'
],
'rule': 'exact'
},
{
'draggables': ['5'],
'targets': [
't2', 't2', 't2', 't2'
],
'rule': 'exact'
},
{
'draggables': ['7'],
'targets': [
't2'
],
'rule': 'exact'
}
]
if draganddrop.grade(submission[0], correct_answer):
    correct = ['correct']
else:
    correct = ['incorrect']
]]></answer>
</customresponse>
<customresponse>
<text>
<h4>[Draggables can be placed anywhere on base image]</h4><br/>
<h4>
Place -Grass- in the middle of the image and -Ant- in the
right upper corner.</h4><br/>
</text>
<drag_and_drop_input img="/static/images/cow.png" >
<draggable id="1" label="Label 1" can_reuse="true" />
<draggable id="ant" label="Ant" icon="/static/images/images_list/ant.jpg" can_reuse="true" />
<draggable id="with_icon" label="Cloud" icon="/static/images/images_list/cloud.jpg" can_reuse="true" />
<draggable id="5" label="Label2" can_reuse="true" />
<draggable id="2" label="Drop" icon="/static/images/images_list/drop.jpg" can_reuse="true" />
<draggable id="grass" label="Grass" icon="/static/images/images_list/grass.jpg" can_reuse="true" />
<draggable id="name4" label="Star" icon="/static/images/images_list/star.png" can_reuse="true" />
<draggable id="7" label="Label3" can_reuse="true" />
</drag_and_drop_input>
<answer type="loncapa/python"><![CDATA[
correct_answer = {
'grass': [[300, 200], 200],
'ant': [[500, 0], 200]
}
if draganddrop.grade(submission[0], correct_answer):
    correct = ['correct']
else:
    correct = ['incorrect']
]]></answer>
</customresponse>
<customresponse>
<text>
<h4>[Another anyof example]</h4><br/>
<h4>Please identify the Carbon and Oxygen atoms in the molecule.</h4><br/>
</text>
<drag_and_drop_input img="/static/images/images_list/ethglycol.jpg" target_outline="true" one_per_target="true">
<draggable id="l1_c" label="Carbon" can_reuse="true" />
<draggable id="l2" label="Methane" can_reuse="true" />
<draggable id="l3_o" label="Oxygen" can_reuse="true" />
<draggable id="l4" label="Calcium" can_reuse="true" />
<draggable id="l7" label="Hydrogen" can_reuse="true" />
<target id="t1_o" x="10" y="67" w="100" h="100"/>
<target id="t2" x="133" y="3" w="70" h="70"/>
<target id="t3" x="2" y="384" w="70" h="70"/>
<target id="t4" x="95" y="386" w="70" h="70"/>
<target id="t5_c" x="94" y="293" w="91" h="91"/>
<target id="t6_c" x="328" y="294" w="91" h="91"/>
<target id="t7" x="393" y="463" w="70" h="70"/>
<target id="t8" x="344" y="214" w="70" h="70"/>
<target id="t9_o" x="445" y="162" w="100" h="100"/>
<target id="t10" x="591" y="132" w="70" h="70"/>
</drag_and_drop_input>
<answer type="loncapa/python"><![CDATA[
correct_answer = [
{
'draggables': ['l3_o'],
'targets': ['t1_o', 't9_o'],
'rule': 'exact'
},
{
'draggables': ['l1_c'],
'targets': ['t5_c', 't6_c'],
'rule': 'exact'
}
]
if draganddrop.grade(submission[0], correct_answer):
    correct = ['correct']
else:
    correct = ['incorrect']
]]></answer>
</customresponse>
<customresponse>
<text>
<h4>[Exact number of draggables for a set of targets.]</h4><br/>
<h4>Drag two Grass and one Star to the first or second positions, and three Cloud to any of the three positions.</h4>
<br/>
</text>
<drag_and_drop_input img="/static/images/cow.png" target_outline="true" one_per_target="false">
<draggable id="1" label="Label 1" can_reuse="true" />
<draggable id="name_with_icon" label="Ant" icon="/static/images/images_list/ant.jpg" can_reuse="true" />
<draggable id="with_icon" label="Cloud" icon="/static/images/images_list/cloud.jpg" can_reuse="true" />
<draggable id="5" label="Label2" can_reuse="true" />
<draggable id="2" label="Drop" icon="/static/images/images_list/drop.jpg" can_reuse="true" />
<draggable id="name_label_icon3" label="Grass" icon="/static/images/images_list/grass.jpg" can_reuse="true" />
<draggable id="name4" label="Star" icon="/static/images/images_list/star.png" can_reuse="true" />
<draggable id="7" label="Label3" can_reuse="true" />
<target id="t1" x="20" y="20" w="90" h="90"/>
<target id="t2" x="300" y="100" w="90" h="90"/>
<target id="t3" x="150" y="40" w="50" h="50"/>
</drag_and_drop_input>
<answer type="loncapa/python"><![CDATA[
correct_answer = [
{
'draggables': ['name_label_icon3', 'name_label_icon3'],
'targets': ['t1', 't3'],
'rule': 'unordered_equal+number'
},
{
'draggables': ['name4'],
'targets': ['t1', 't3'],
'rule': 'anyof+number'
},
{
'draggables': ['with_icon', 'with_icon', 'with_icon'],
'targets': ['t1', 't2', 't3'],
'rule': 'anyof+number'
}
]
if draganddrop.grade(submission[0], correct_answer):
    correct = ['correct']
else:
    correct = ['incorrect']
]]></answer>
</customresponse>
<customresponse>
<text>
<h4>[Any number of draggables for a set of targets.]</h4><br/>
<h4>Drag some Grass to any of the targets, and some Stars to either the first or the last target.</h4>
<br/>
</text>
<drag_and_drop_input img="/static/images/cow.png" target_outline="true" one_per_target="false">
<draggable id="1" label="Label 1" can_reuse="true" />
<draggable id="name_with_icon" label="Ant" icon="/static/images/images_list/ant.jpg" can_reuse="true" />
<draggable id="with_icon" label="Cloud" icon="/static/images/images_list/cloud.jpg" can_reuse="true" />
<draggable id="5" label="Label2" can_reuse="true" />
<draggable id="2" label="Drop" icon="/static/images/images_list/drop.jpg" can_reuse="true" />
<draggable id="name_label_icon3" label="Grass" icon="/static/images/images_list/grass.jpg" can_reuse="true" />
<draggable id="name4" label="Star" icon="/static/images/images_list/star.png" can_reuse="true" />
<draggable id="7" label="Label3" can_reuse="true" />
<target id="t1" x="20" y="20" w="90" h="90"/>
<target id="t2" x="300" y="100" w="90" h="90"/>
<target id="t3" x="150" y="40" w="50" h="50"/>
</drag_and_drop_input>
<answer type="loncapa/python"><![CDATA[
correct_answer = [
{
'draggables': ['name_label_icon3'],
'targets': ['t1', 't2', 't3'],
'rule': 'anyof'
},
{
'draggables': ['name4'],
'targets': ['t1', 't2'],
'rule': 'anyof'
}
]
if draganddrop.grade(submission[0], correct_answer):
correct = ['correct']
else:
correct = ['incorrect']
]]></answer>
</customresponse>
</problem>
<problem display_name="Drag and drop demos chem features: drag and drop icons or labels
to proper positions." attempts="10">
<customresponse>
<text>
<h4>[Simple grading example: draggables on draggables]</h4><br/>
<h4>Describe carbon molecule in LCAO-MO.</h4><br/>
<br/>
</text>
<drag_and_drop_input img="/static/images/images_list/lcao-mo/lcao-mo.jpg" target_outline="true" >
<!-- filled bond -->
<draggable id="up_and_down" icon="/static/images/images_list/lcao-mo/u_d.png" can_reuse="true" />
<!-- up bond -->
<draggable id="up" icon="/static/images/images_list/lcao-mo/up.png" can_reuse="true" />
<draggable id="s" icon="/static/images/images_list/lcao-mo/orbital_single.png" label="s orbital" can_reuse="true" >
<target id="1" x="0" y="0" w="32" h="32"/>
</draggable>
<draggable id="p" icon="/static/images/images_list/lcao-mo/orbital_triple.png" can_reuse="true" >
<target id="1" x="0" y="0" w="32" h="32"/>
<target id="2" x="34" y="0" w="32" h="32"/>
<target id="3" x="68" y="0" w="32" h="32"/>
</draggable>
<!-- positions of electrons and electron pairs -->
<target id="s_l" x="130" y="360" w="32" h="32"/>
<target id="s_r" x="505" y="360" w="32" h="32"/>
<target id="p_l" x="80" y="100" w="100" h="32"/>
<target id="p_r" x="465" y="100" w="100" h="32"/>
</drag_and_drop_input>
<answer type="loncapa/python"><![CDATA[
correct_answer = [
{
'draggables': ['p'],
'targets': ['p_l', 'p_r'],
'rule': 'unordered_equal'
},
{
'draggables': ['s'],
'targets': ['s_l', 's_r'],
'rule': 'unordered_equal'
},
{
'draggables': ['up_and_down'],
'targets': [
's_l[s][1]', 's_r[s][1]'
],
'rule': 'unordered_equal'
},
{
'draggables': ['up'],
'targets': [
'p_l[p][1]', 'p_l[p][3]', 'p_r[p][1]', 'p_r[p][3]'
],
'rule': 'unordered_equal'
}
]
if draganddrop.grade(submission[0], correct_answer):
correct = ['correct']
else:
correct = ['incorrect']
]]></answer>
</customresponse>
<customresponse>
<text>
<h4>[Complex grading example: draggables on draggables]</h4><br/>
<h4>Describe carbon molecule in LCAO-MO.</h4>
<br/>
</text>
<drag_and_drop_input img="/static/images/images_list/lcao-mo/lcao-mo-clean.jpg" target_outline="true" >
<!-- filled bond -->
<draggable id="up_and_down" icon="/static/images/images_list/lcao-mo/u_d.png" can_reuse="true" />
<!-- up bond -->
<draggable id="up" icon="/static/images/images_list/lcao-mo/up.png" can_reuse="true" />
<!-- images that should not be dragged -->
<draggable id="down" icon="/static/images/images_list/lcao-mo/d.png" can_reuse="true" />
<draggable id="s" icon="/static/images/images_list/lcao-mo/orbital_single.png" label="s orbital" can_reuse="true" >
<target id="1" x="0" y="0" w="32" h="32"/>
</draggable>
<draggable id="p" icon="/static/images/images_list/lcao-mo/orbital_triple.png" can_reuse="true" label="p orbital" >
<target id="1" x="0" y="0" w="32" h="32"/>
<target id="2" x="34" y="0" w="32" h="32"/>
<target id="3" x="68" y="0" w="32" h="32"/>
</draggable>
<draggable id="s-sigma" icon="/static/images/images_list/lcao-mo/orbital_single.png" label="s-sigma orbital" can_reuse="true" >
<target id="1" x="0" y="0" w="32" h="32"/>
</draggable>
<draggable id="s-sigma*" icon="/static/images/images_list/lcao-mo/orbital_single.png" label="s-sigma* orbital" can_reuse="true" >
<target id="1" x="0" y="0" w="32" h="32"/>
</draggable>
<draggable id="p-pi" icon="/static/images/images_list/lcao-mo/orbital_double.png" label="p-pi orbital" can_reuse="true" >
<target id="1" x="0" y="0" w="32" h="32"/>
<target id="2" x="34" y="0" w="32" h="32"/>
</draggable>
<draggable id="p-sigma" icon="/static/images/images_list/lcao-mo/orbital_single.png" label="p-sigma orbital" can_reuse="true" >
<target id="1" x="0" y="0" w="32" h="32"/>
</draggable>
<draggable id="p-pi*" icon="/static/images/images_list/lcao-mo/orbital_double.png" label="p-pi* orbital" can_reuse="true" >
<target id="1" x="0" y="0" w="32" h="32"/>
<target id="2" x="34" y="0" w="32" h="32"/>
</draggable>
<draggable id="p-sigma*" icon="/static/images/images_list/lcao-mo/orbital_single.png" label="p-sigma* orbital" can_reuse="true" >
<target id="1" x="0" y="0" w="32" h="32"/>
</draggable>
<!-- positions of electrons and electron pairs -->
<target id="s-left-target" x="130" y="360" w="32" h="32"/>
<target id="s-right-target" x="505" y="360" w="32" h="32"/>
<target id="s-sigma-target" x="315" y="425" w="32" h="32"/>
<target id="s-sigma*-target" x="315" y="290" w="32" h="32"/>
<target id="p-left-target" x="80" y="100" w="100" h="32"/>
<target id="p-right-target" x="480" y="100" w="100" h="32"/>
<target id="p-pi-target" x="300" y="220" w="66" h="32"/>
<target id="p-sigma-target" x="315" y="170" w="32" h="32"/>
<target id="p-pi*-target" x="300" y="40" w="66" h="32"/>
<target id="p-sigma*-target" x="315" y="0" w="32" h="32"/>
</drag_and_drop_input>
<answer type="loncapa/python"><![CDATA[
correct_answer = [
{'draggables': ['p'], 'targets': ['p-left-target', 'p-right-target'], 'rule': 'unordered_equal'},
{'draggables': ['s'], 'targets': ['s-left-target', 's-right-target'], 'rule': 'unordered_equal'},
{'draggables': ['s-sigma'], 'targets': ['s-sigma-target'], 'rule': 'exact'},
{'draggables': ['s-sigma*'], 'targets': ['s-sigma*-target'], 'rule': 'exact'},
{'draggables': ['p-pi'], 'targets': ['p-pi-target'], 'rule': 'exact'},
{'draggables': ['p-sigma'], 'targets': ['p-sigma-target'], 'rule': 'exact'},
{'draggables': ['p-pi*'], 'targets': ['p-pi*-target'], 'rule': 'exact'},
{'draggables': ['p-sigma*'], 'targets': ['p-sigma*-target'], 'rule': 'exact'},
{
'draggables': ['up_and_down'],
'targets': ['s-left-target[s][1]', 's-right-target[s][1]', 's-sigma-target[s-sigma][1]', 's-sigma*-target[s-sigma*][1]', 'p-pi-target[p-pi][1]', 'p-pi-target[p-pi][2]'],
'rule': 'unordered_equal'
},
{
'draggables': ['up'],
'targets': ['p-left-target[p][1]', 'p-left-target[p][2]', 'p-right-target[p][2]', 'p-right-target[p][3]',],
'rule': 'unordered_equal'
}
]
if draganddrop.grade(submission[0], correct_answer):
correct = ['correct']
else:
correct = ['incorrect']
]]></answer>
</customresponse>
<customresponse>
<text>
<h4>[Complex grading example: no draggables on draggables]</h4><br/>
<h4>Describe carbon molecule in LCAO-MO.</h4>
<br/>
</text>
<drag_and_drop_input img="/static/images/images_list/lcao-mo/lcao-mo.jpg" target_outline="true">
<!-- filled bond -->
<draggable id="1" icon="/static/images/images_list/lcao-mo/u_d.png" can_reuse="true" />
<!-- up bond -->
<draggable id="7" icon="/static/images/images_list/lcao-mo/up.png" can_reuse="true" />
<!-- sigma -->
<draggable id="11" icon="/static/images/images_list/lcao-mo/sigma.png" can_reuse="true" />
<!-- sigma* -->
<draggable id="13" icon="/static/images/images_list/lcao-mo/sigma_s.png" can_reuse="true" />
<!-- pi -->
<draggable id="15" icon="/static/images/images_list/lcao-mo/pi.png" can_reuse="true" />
<!-- pi* -->
<draggable id="16" icon="/static/images/images_list/lcao-mo/pi_s.png" can_reuse="true" />
<!-- images that should not be dragged -->
<draggable id="17" icon="/static/images/images_list/lcao-mo/d.png" can_reuse="true" />
<!-- positions of electrons and electron pairs -->
<target id="s_left" x="130" y="360" w="32" h="32"/>
<target id="s_right" x="505" y="360" w="32" h="32"/>
<target id="s_sigma" x="320" y="425" w="32" h="32"/>
<target id="s_sigma_star" x="320" y="290" w="32" h="32"/>
<target id="p_left_1" x="80" y="100" w="32" h="32"/>
<target id="p_left_2" x="125" y="100" w="32" h="32"/>
<target id="p_left_3" x="175" y="100" w="32" h="32"/>
<target id="p_right_1" x="465" y="100" w="32" h="32"/>
<target id="p_right_2" x="515" y="100" w="32" h="32"/>
<target id="p_right_3" x="560" y="100" w="32" h="32"/>
<target id="p_pi_1" x="290" y="220" w="32" h="32"/>
<target id="p_pi_2" x="335" y="220" w="32" h="32"/>
<target id="p_sigma" x="315" y="170" w="32" h="32"/>
<target id="p_pi_star_1" x="290" y="40" w="32" h="32"/>
<target id="p_pi_star_2" x="340" y="40" w="32" h="32"/>
<target id="p_sigma_star" x="315" y="0" w="32" h="32"/>
<!-- positions of names of energy levels -->
<target id="s_sigma_name" x="400" y="425" w="32" h="32"/>
<target id="s_sigma_star_name" x="400" y="290" w="32" h="32"/>
<target id="p_pi_name" x="400" y="220" w="32" h="32"/>
<target id="p_sigma_name" x="400" y="170" w="32" h="32"/>
<target id="p_pi_star_name" x="400" y="40" w="32" h="32"/>
<target id="p_sigma_star_name" x="400" y="0" w="32" h="32"/>
</drag_and_drop_input>
<answer type="loncapa/python"><![CDATA[
correct_answer = [
{
'draggables': ['1'],
'targets': [
's_left', 's_right', 's_sigma', 's_sigma_star', 'p_pi_1', 'p_pi_2'
],
'rule': 'exact'
}, {
'draggables': ['7'],
'targets': ['p_left_1', 'p_left_2', 'p_right_2','p_right_3'],
'rule': 'exact'
}, {
'draggables': ['11'],
'targets': ['s_sigma_name', 'p_sigma_name'],
'rule': 'exact'
}, {
'draggables': ['13'],
'targets': ['s_sigma_star_name', 'p_sigma_star_name'],
'rule': 'exact'
}, {
'draggables': ['15'],
'targets': ['p_pi_name'],
'rule': 'exact'
}, {
'draggables': ['16'],
'targets': ['p_pi_star_name'],
'rule': 'exact'
}]
if draganddrop.grade(submission[0], correct_answer):
correct = ['correct']
else:
correct = ['incorrect']
]]></answer>
</customresponse>
</problem>
**********************************************
XML format of drag and drop input [inputtypes]
**********************************************
.. module:: drag_and_drop_input
Format description
==================
The main tag of Drag and Drop (DnD) input is::
<drag_and_drop_input> ... </drag_and_drop_input>
``drag_and_drop_input`` can include any number of the following two tags:
``draggable`` and ``target``.
drag_and_drop_input tag
-----------------------
The main container for a single instance of DnD. The following attributes can
be specified for this tag::
img - Relative path to an image that will be the base image. All draggables
can be dragged onto it.
target_outline - Specify whether an outline (gray dashed line) should be
drawn around targets (if they are specified). It can be either
'true' or 'false'. If not specified, the default value is
'false'.
one_per_target - Specify whether to allow more than one draggable to be
placed onto a single target. It can be either 'true' or 'false'. If
not specified, the default value is 'true'.
no_labels - 'true' or 'false'; the default is 'false'. By default, if a label
    is not set, the label is taken from the id. If no_labels is 'true',
    labels are not automatically populated from the id, labels cannot be
    set, and draggables show only their icons.
draggable tag
-------------
Draggable tag specifies a single draggable object which has the following
attributes::
id - Unique identifier of the draggable object.
label - Human readable label that will be shown to the user.
icon - Relative path to an image that will be shown to the user.
can_reuse - true or false, default is false. If true, the same draggable can
    be used multiple times.
A draggable is what the user must drag out of the slider and place onto the
base image. After a drag operation, if the center of the draggable ends up
outside the rectangular dimensions of the image, it is returned to the
slider.
In order for the grader to work, it is essential that a unique ID is
provided; otherwise, there is no way to tell which draggable is at which
coordinate, or over which target. The label and icon attributes are
optional. If they are provided, they will be used; otherwise, you can have
an empty draggable. The icon path is relative to the 'course_folder' folder,
for example /static/images/img1.png.
target tag
----------
Target tag specifies a single target object which has the following required
attributes::
id - Unique identifier of the target object.
x - X-coordinate on the base image where the top left corner of the target
will be positioned.
y - Y-coordinate on the base image where the top left corner of the target
will be positioned.
w - Width of the target.
h - Height of the target.
A target specifies a place on the base image where a draggable can be
positioned. By design, if the center of a draggable lies within the target
(i.e., in the rectangle defined by [[x, y], [x + w, y + h]]), then it is on
the target. Otherwise, it is outside.
If at least one target is provided, the behavior of the client-side logic
changes: if a draggable is not dragged onto a target, it is returned to the
slider.
If no targets are provided, then a draggable can be dragged and placed anywhere
on the base image.
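As a minimal Python sketch of this center-in-rectangle test (a hypothetical
helper for illustration only; the platform's grader implements its own
check):

.. code-block:: python

    def is_on_target(center, target):
        """True if a draggable's center lies inside the target rectangle
        [[x, y], [x + w, y + h]]."""
        cx, cy = center
        return (target['x'] <= cx <= target['x'] + target['w']
                and target['y'] <= cy <= target['y'] + target['h'])

    # Target t1 from the demo above: x=20, y=20, w=90, h=90.
    t1 = {'x': 20, 'y': 20, 'w': 90, 'h': 90}
    print(is_on_target((60, 60), t1))   # True: center inside the rectangle
    print(is_on_target((150, 60), t1))  # False: returned to the slider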
Targets on draggables
---------------------
Sometimes it is not enough to have targets only on the base image, with all
of the draggables placed on those targets. For complex problems where a
draggable must itself become a target (or several targets), the following
extended syntax can be used: ::
<draggable {attribute list}>
<target {attribute list} />
<target {attribute list} />
<target {attribute list} />
...
</draggable>
The attribute list in the tags above ('draggable' and 'target') is the same
as for normal 'draggable' and 'target' tags. The only difference is in how
the inner target's position coordinates are specified: the 'x' and 'y'
attributes set the offset of the inner target from the upper-left corner of
the parent draggable (the draggable that contains the inner target).
Limitations of targets on draggables
------------------------------------
1.) Currently there is a limit on the level of nesting of targets.
Even though you can pile up a large number of draggables on targets that are
themselves on draggables, the Drag and Drop instance will be graded only if
there are at most two levels of targets. The first level is the "base"
targets, which are attached to the base image. The second level is the
targets defined on draggables.
2.) Another limitation is that target bounds are not checked against other
targets.
For now, it is the responsibility of the person constructing the course
material to make sure that targets do not overlap. It is also preferable
that targets on draggables are smaller than the parent draggable itself.
Technically this is not necessary, but it is desirable from a usability
perspective.
3.) You can have targets on draggables only when base targets are defined
(base targets are attached to the base image).
If you do not have base targets, then you can have only a single level of
nesting (draggables on the base image). In this case, the client side
reports the (x, y) position of each draggable on the base image.
Correct answer format
---------------------
(NOTE: For specifying answers for targets on draggables please see next section.)
There are two correct answer formats: short and long.
In the short form, the correct answer is a mapping of 'draggable_id' to a
'target_id' or to coordinates::
correct_answer = {'grass': [[300, 200], 200], 'ant': [[500, 0], 200]}
correct_answer = {'name4': 't1', '7': 't2'}
In the long form, the correct answer is a list of dicts. Every dict has
three keys: 'draggables', 'targets', and 'rule'. For example::
correct_answer = [
{
'draggables': ['7', '8'],
'targets': ['t5_c', 't6_c'],
'rule': 'anyof'
},
{
'draggables': ['1', '2'],
'targets': ['t2_h', 't3_h', 't4_h', 't7_h', 't8_h', 't10_h'],
'rule': 'anyof'
}]
'draggables' is a list of draggable ids. 'targets' is a list of target ids
that the draggables must be dragged to, subject to the rule. 'rule' is a
string.
The draggable ids in the dicts inside the correct_answer list must not
intersect!
Wrong (draggable id 7 appears in two dicts)::
correct_answer = [
{
'draggables': ['7', '8'],
'targets': ['t5_c', 't6_c'],
'rule': 'anyof'
},
{
'draggables': ['7', '2'],
'targets': ['t2_h', 't3_h', 't4_h', 't7_h', 't8_h', 't10_h'],
'rule': 'anyof'
}]
Rules are: exact, anyof, unordered_equal, anyof+number, unordered_equal+number
.. such long lines are needed for sphinx to display lists correctly
- The exact rule means that the targets for the draggable ids in user_answer are exactly the targets given in the correct answer. For example, with the correct_answer below, the user must drag 7 to target1 and 8 to target2::
correct_answer = [
{
'draggables': ['7', '8'],
'targets': ['target1', 'target2'],
'rule': 'exact'
}]
- The unordered_equal rule allows draggables to be dragged to the targets in any order. If you want to allow the student to drag 7 to target1 or target2 and 8 to the other (7 and 8 must end up on different targets), then the correct answer must be::
correct_answer = [
{
'draggables': ['7', '8'],
'targets': ['target1', 'target2'],
'rule': 'unordered_equal'
}]
- The anyof rule allows draggables to be dragged to any of the targets. If you want to allow the student to drag 7 and 8 to target1 or target2 (7 and 8 both on target1, both on target2, or one on each), then any of these placements is correct with the anyof rule::
correct_answer = [
{
'draggables': ['7', '8'],
'targets': ['target1', 'target2'],
'rule': 'anyof'
}]
- If can_reuse is true, suppose you have draggables a, b, and c, and 10 targets. The following allows four 'a' draggables to be dragged to ['target1', 'target4', 'target7', 'target10'] without writing 'a' four times, and allows any number of 'b' draggables to be dragged to target2, target5, or target8, etc.::
correct_answer = [
{
'draggables': ['a'],
'targets': ['target1', 'target4', 'target7', 'target10'],
'rule': 'unordered_equal'
},
{
'draggables': ['b'],
'targets': ['target2', 'target5', 'target8'],
'rule': 'anyof'
},
{
'draggables': ['c'],
'targets': ['target3', 'target6', 'target9'],
'rule': 'unordered_equal'
}]
- Sometimes you want to allow only two 'b' draggables to be dragged; in that case, you should use the 'anyof+number' or 'unordered_equal+number' rule::
correct_answer = [
{
'draggables': ['a', 'a', 'a'],
'targets': ['target1', 'target4', 'target7'],
'rule': 'unordered_equal+number'
},
{
'draggables': ['b', 'b'],
'targets': ['target2', 'target5', 'target8'],
'rule': 'anyof+number'
},
{
'draggables': ['c'],
'targets': ['target3', 'target6', 'target9'],
'rule': 'unordered_equal'
}]
If multiple draggables per target are not allowed (one_per_target="true"),
then for the same number of draggables, anyof is equivalent to
unordered_equal.
If can_reuse is true, then only the long form of the correct answer can be
used.
Answer format for targets on draggables
---------------------------------------
As with the cases described above, an answer must provide precise
positioning for each draggable (which targets it must sit on). When a
draggable must be placed on a target that is itself on a draggable, the
answer must contain the chain target-draggable-target. This is best
understood through an example. Suppose we have three draggables: 'up', 's',
and 'p'. Draggables 's' and 'p' have targets on themselves; specifically,
'p' has three targets: '1', '2', and '3'. The first requirement is that 's'
and 'p' are positioned on specific targets on the base image. The second
requirement is that draggable 'up' is positioned on specific targets of
draggable 'p'. Below is an excerpt from a problem::
<draggable id="up" icon="/static/images/images_list/lcao-mo/up.png" can_reuse="true" />
<draggable id="s" icon="/static/images/images_list/lcao-mo/orbital_single.png" label="s orbital" can_reuse="true" >
<target id="1" x="0" y="0" w="32" h="32"/>
</draggable>
<draggable id="p" icon="/static/images/images_list/lcao-mo/orbital_triple.png" can_reuse="true" label="p orbital" >
<target id="1" x="0" y="0" w="32" h="32"/>
<target id="2" x="34" y="0" w="32" h="32"/>
<target id="3" x="68" y="0" w="32" h="32"/>
</draggable>
...
correct_answer = [
{
'draggables': ['p'],
'targets': ['p-left-target', 'p-right-target'],
'rule': 'unordered_equal'
},
{
'draggables': ['s'],
'targets': ['s-left-target', 's-right-target'],
'rule': 'unordered_equal'
},
{
'draggables': ['up'],
'targets': ['p-left-target[p][1]', 'p-left-target[p][2]', 'p-right-target[p][2]', 'p-right-target[p][3]',],
'rule': 'unordered_equal'
}
]
Note that it is a requirement to specify rules for all draggables, even if some draggable gets included
in more than one chain.
Grading logic
-------------
1. The user answer (which comes from the browser) and the correct answer
(from the XML) are parsed into the same format::

    group_id: group_draggables, group_targets, group_rule

The group_id is an ordinal number; for every dict in the correct answer, an
incremental group_id is assigned: 0, 1, 2, ...
Draggables from the user answer are added to the same group_id as the
identical draggables from the correct answer, for example::

    If correct_draggables[group_0] = [t1, t2] then
    user_draggables[group_0] are all draggables t1 and t2 from the user
    answer: [t1] or [t1, t2] or [t1, t2, t2] etc.
2. For every group from the user answer, set() is applied to that group's
draggables if 'number' is not in the group rule; if 'number' is in the rule,
set() is not applied::

    set() : [t1, t2, t3, t3] -> [t1, t2, t3]

After this step, the draggables lists of corresponding groups are equal.
3. For every group, the lists of targets are compared using that group's rule.
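A minimal Python sketch of steps 2 and 3 for a single group (hypothetical
helper names; the platform's ``draganddrop.grade`` implements the real
logic):

.. code-block:: python

    def compare_targets(user_targets, correct_targets, rule):
        """Sketch of the per-group comparison described in steps 2 and 3."""
        if 'number' in rule:
            # '+number': multiplicity matters, so compare sorted lists.
            user_targets = sorted(user_targets)
            correct_targets = sorted(correct_targets)
        else:
            # No '+number': set() collapses duplicate placements.
            user_targets = set(user_targets)
            correct_targets = set(correct_targets)
        if rule.startswith('anyof'):
            # Each placement must land on one of the allowed targets.
            return all(t in correct_targets for t in user_targets)
        # 'exact' / 'unordered_equal': the same targets must be used.
        return user_targets == correct_targets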
Set and '+number' cases
.......................
Set() and '+number' are needed only in the case of reusable draggables; in
other cases there are no duplicate draggables in the list, so set() does
nothing.
.. such long lines needed for sphinx to display nicely
* The set() operation makes it easy to create a rule for the case "any number of the same draggable can be dragged to some targets"::
{
'draggables': ['draggable_1'],
'targets': ['target3', 'target6', 'target9'],
'rule': 'anyof'
}
* The 'number' rule is used with reusable draggables when you want to fix the number of draggables to be dragged. In this example, only two instances of draggable_1 are allowed to be dragged::
{
'draggables': ['draggable_1', 'draggable_1'],
'targets': ['target3', 'target6', 'target9'],
'rule': 'anyof+number'
}
* Note that with the 'exact' rule you do not need 'number', because there is no way to tell from the user interface which reusable draggable is on which target. Absurd example::
{
'draggables': ['draggable_1', 'draggable_1', 'draggable_2'],
'targets': ['target3', 'target6', 'target9'],
'rule': 'exact'
}
The correct handling of this example is to create different rules for
draggable_1 and draggable_2.
* For 'unordered_equal' (and 'exact' too) you don't need 'number' if the group contains only one distinct draggable, since the length of the targets list constrains the number of draggables::
{
'draggables': ['draggable_1'],
'targets': ['target3', 'target6', 'target9'],
'rule': 'unordered_equal'
}
This means that exactly three 'draggable_1' draggables can be dragged.
* But if you have more than one distinct reusable draggable in the list, you may use the 'number' rule::
{
'draggables': ['draggable_1', 'draggable_1', 'draggable_2'],
'targets': ['target3', 'target6', 'target9'],
'rule': 'unordered_equal+number'
}
Without 'number', the draggables list would be reduced by set() to ['draggable_1', 'draggable_2'].
Logic flow
----------
(Click on image to see full size version.)
.. image:: draganddrop_logic_flow.png
:width: 100%
:target: _images/draganddrop_logic_flow.png
Example
=======
Examples of draggables that can't be reused
-------------------------------------------
.. literalinclude:: drag-n-drop-demo.xml
Examples of draggables that can be reused
-----------------------------------------
.. literalinclude:: drag-n-drop-demo2.xml
Examples of targets on draggables
---------------------------------
.. literalinclude:: drag-n-drop-demo3.xml
##############
Course Grading
##############
This document is written to help professors understand how a final grade for a
course is computed.
Course grading is the process of taking all of the problem scores for a
student in a course and generating a final score (and corresponding letter
grade). This grading process can be split into two phases: totaling sections
and section weighting.
*****************
Totaling sections
*****************
The process of totaling sections is to get a percentage score (between 0.0 and
1.0) for every section in the course. A section is any module that is a direct
child of a chapter. For example, psets, labs, and sequences are all common
sections. Only the *percentage* on the section will be available to compute the
final grade, *not* the final number of points earned / possible.
.. important::
For a section to be included in the final grade, the policies file must set
`graded = True` for the section.
For each section, the grading function retrieves all problems within the
section. The section percentage is computed as (total points earned) / (total
points possible).
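A sketch of this computation (hypothetical names, not the platform's code):

.. code-block:: python

    def section_percentage(problems):
        """problems: list of (points_earned, points_possible) pairs."""
        earned = sum(e for e, _ in problems)
        possible = sum(p for _, p in problems)
        return float(earned) / possible if possible else 0.0

    # Three problems worth 10 points each, scored 7, 10, and 4:
    print(section_percentage([(7, 10), (10, 10), (4, 10)]))  # 0.7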
******************
Weighting Problems
******************
In some cases, one might want to give weights to problems within a section. For
example, a final exam might contain four questions each worth 1 point by default.
This means each question would by default have the same weight. If one wanted
the first problem to be worth 50% of the final exam, the policy file could specify
weights of 30, 10, 10, and 10 to the four problems, respectively.
Note that the default weight of a problem **is not 1**. The default weight of a
problem is the module's `max_grade`.
If weighting is set, each problem is worth the number of points assigned, regardless of the number of responses it contains.
Consider a Homework section that contains two problems.
.. code-block:: xml
<problem display_name="Problem 1">
<numericalresponse> ... </numericalresponse>
</problem>
.. code-block:: xml
<problem display_name="Problem 2">
<numericalresponse> ... </numericalresponse>
<numericalresponse> ... </numericalresponse>
<numericalresponse> ... </numericalresponse>
</problem>
Without weighting, Problem 1 is worth 25% of the assignment, and Problem 2 is worth 75% of the assignment.
Weighting for the problems can be set in the policy.json file.
.. code-block:: json
"problem/problem1": {
"weight": 2
},
"problem/problem2": {
"weight": 2
},
With the above weighting, Problems 1 and 2 are each worth 50% of the assignment.
Please note: when problems have weight, the point value is automatically included in the display name *except* when `"weight": 1`. When the weight is 1, no visual change occurs in the display name, leaving the point value open to interpretation by the student.
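With weighting in effect, each problem contributes its weight scaled by the
fraction of points earned. A sketch, under the assumption that the weighted
computation works as described above (hypothetical helper, not the
platform's code):

.. code-block:: python

    def weighted_section_percentage(problems):
        """problems: list of (earned, possible, weight) triples; each
        problem contributes weight * (earned / possible)."""
        achieved = sum(w * float(e) / p for e, p, w in problems)
        total = sum(w for _, _, w in problems)
        return achieved / total

    # Problems 1 and 2 above, each weighted 2: a perfect Problem 1 and
    # 2 of 3 responses correct on Problem 2.
    print(weighted_section_percentage([(1, 1, 2), (2, 3, 2)]))  # ~0.833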
******************
Weighting Sections
******************
Once each section has a percentage score, we must total those sections into a
final grade. Of course, not every section has equal weight in the final grade.
The policies for weighting sections into a final grade are specified in the
grading_policy.json file.
The `grading_policy.json` file specifies several sub-graders that are each given
a weight and factored into the final grade. There are currently two types of
sub-graders, section format graders and single section graders.
We will use this simple example of a grader with one section format grader and
one single section grader.
.. code-block:: json
"GRADER" : [
{
"type" : "Homework",
"min_count" : 12,
"drop_count" : 2,
"short_label" : "HW",
"weight" : 0.4
},
{
"type" : "Final",
"name" : "Final Exam",
"short_label" : "Final",
"weight" : 0.6
}
]
Section Format Graders
======================
A section format grader grades a set of sections with the same format, as
defined in the course policy file. To make a vertical named Homework1 be graded
by the Homework section format grader, the following definition would be in the
course policy file.
.. code-block:: json
"vertical/Homework1": {
"display_name": "Homework 1",
"graded": true,
"format": "Homework"
},
In the example above, the section format grader declares that it will expect to
find at least 12 sections with the format "Homework". It will drop the lowest 2.
All of the homework assignments will have equal weight, relative to each other
(except, of course, for the assignments that are dropped).
This format supports forecasting the number of homework assignments. For
example, if the course only has 3 homeworks written, but the section format
grader has been told to expect 12, the missing 9 will have an assumed 0% and
will still show up in the grade breakdown.
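The min_count/drop_count behavior can be sketched as follows (hypothetical
helper; the scores are section percentages):

.. code-block:: python

    def homework_average(scores, min_count, drop_count):
        """Pad missing assignments with 0%, drop the lowest, average."""
        padded = list(scores) + [0.0] * max(0, min_count - len(scores))
        kept = sorted(padded)[drop_count:]
        return sum(kept) / len(kept)

    # Only 3 of the expected 12 homeworks exist; the other 9 count as 0%.
    print(homework_average([0.9, 0.8, 1.0], min_count=12, drop_count=2))
    # (0.8 + 0.9 + 1.0 + seven zeros) / 10 = 0.27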
A section format grader will also show the average of that section in the grade
breakdown (shown on the Progress page, gradebook, etc.).
Single Section Graders
======================
A single section grader grades exactly that - a single section. If a section
is found with a matching format and display name then the score of that section
is used. If not, a score of 0% is assumed.
Combining sub-graders
=====================
The final grade is computed by taking the score and weight of each sub grader.
In the above example, homework will be 40% of the final grade. The final exam
will be 60% of the final grade.
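A sketch of the combination step, using the grader example above
(hypothetical helper names):

.. code-block:: python

    def combine(subscores):
        """subscores: (percentage, weight) pairs, one per sub-grader."""
        return sum(score * weight for score, weight in subscores)

    # Homework average of 75% (weight 0.4) and a final exam score of 90%
    # (weight 0.6):
    print(combine([(0.75, 0.4), (0.90, 0.6)]))  # 0.84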
**************************
Displaying the final grade
**************************
The final grade is then rounded up to the nearest percentage point. This is so
the system can consistently display a percentage without worrying whether the
displayed percentage has been rounded up or down (potentially misleading the
student). The formula for the rounding is::
rounded_percent = round(computed_percent * 100 + 0.05) / 100
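For example (a quick check of the formula, assuming round-half-up behavior
for the `round` call):

.. code-block:: python

    computed_percent = 0.8749
    rounded_percent = round(computed_percent * 100 + 0.05) / 100
    print(rounded_percent)  # 0.88, displayed as 88%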
The grading policy file also specifies the cutoffs for the grade levels. A
grade is either A, B, or C. If the student does not reach the cutoff threshold
for a C grade then the student has not earned a grade and will not be eligible
for a certificate. Letter grades are only awarded to students who have
completed the course. There is no notion of a failing letter grade.
*********************************************
XML format of graphical slider tool [xmodule]
*********************************************
.. module:: xml_format_gst
Format description
==================
Graphical slider tool (GST) main tag is::
<graphical_slider_tool> BODY </graphical_slider_tool>
The ``graphical_slider_tool`` tag must have two child tags: ``render``
and ``configuration``.
Render tag
----------
The render tag can contain the usual HTML tags mixed with some GST-specific tags::
<slider/> - represents jQuery slider for changing a parameter's value
<textbox/> - represents a text input field for changing a parameter's value
<plot/> - represents Flot JS plot element
Also GST will track all elements inside ``<render></render>`` where ``id``
attribute is set, and a corresponding parameter referencing that ``id`` is present
in the configuration section below. These will be referred to as dynamic elements.
The contents of the <render> section will be shown to the user after
all occurrences of::
<slider var="{parameter name}" [style="{CSS statements}"] />
<textbox var="{parameter name}" [style="{CSS statements}"] />
<plot [style="{CSS statements}"] />
have been converted to actual sliders, text inputs, and a plot graph.
Everything in square brackets is optional. After initialization, all
text input fields, sliders, and dynamic elements will be set to the initial
values of the parameters that they are assigned to.
``{parameter name}`` specifies the parameter to which the slider or text
input will be attached.
[style="{CSS statements}"] specifies valid CSS styling. It will be passed
directly to the browser without any parsing.
There is a one-to-one relationship between a slider and a parameter.
I.e. for one parameter you can put only one ``<slider>`` in the
``<render>`` section. However, you don't have to specify a slider - they
are optional.
There is a many-to-one relationship between text inputs and a
parameter. I.e. for one parameter you can put many '<textbox>' elements in
the ``<render>`` section. However, you don't have to specify a text
input - they are optional.
You can put only one ``<plot>`` in the ``<render>`` section. It is not
required.
Slider tag
..........
The slider tag must have a ``var`` attribute and an optional ``style`` attribute::
<slider var='a' style="width:400px;float:left;" />
After processing, slider tags are replaced by jQuery UI sliders with the
``style`` attribute applied.
The ``var`` attribute must correspond to a parameter. Parameters can be used
in any of the ``function`` tags in the ``functions`` tag. Moving the slider
changes the value of parameter ``a``, and so the result of any function that
depends on parameter ``a`` also changes.
Textbox tag
...........
The textbox tag must have a ``var`` attribute and an optional ``style`` attribute::
<textbox var="b" style="width:50px; float:left; margin-left:10px;" />
After processing, textbox tags are replaced by HTML text inputs with the
``style`` attribute applied. If you want a read-only text input, use a
dynamic element instead (see the section "HTML tags with ID" below).
The ``var`` attribute must correspond to a parameter. Parameters can be used
in any of the ``function`` tags in the ``functions`` tag. Changing the value
in the text input changes the value of parameter ``b``, and so the result of
any function that depends on parameter ``b`` also changes.
Plot tag
........
The plot tag may have an optional ``style`` attribute::
<plot style="width:50px; float:left; margin-left:10px;" />
After processing, plot tags are replaced by a Flot JS plot with the
``style`` attribute applied.
HTML tags with ID (dynamic elements)
....................................
Any HTML tag with an ID, e.g. ``<span id="answer_span_1">``, can be used as
a place where the result of a function is inserted. To insert a function's
result into an element, the element ID must be included in a ``function``
tag as the ``el_id`` attribute, and the ``output`` value must be
``"element"``::
<function output="element" el_id="answer_span_1">
function add(a, b, precision) {
var x = Math.pow(10, precision || 2);
return (Math.round(a * x) + Math.round(b * x)) / x;
}
return add(a, b, 5);
</function>
Configuration tag
-----------------
The configuration tag contains parameter settings, graph settings, and the
definitions of the functions that are plotted on the graph and that use the
specified parameters.
The configuration tag contains two mandatory tags, ``functions`` and
``parameters``, and may also contain a ``plot`` tag.
Parameters tag
..............
The ``parameters`` tag contains ``param`` tags. Each ``param`` tag must have
``var``, ``max``, ``min``, ``step``, and ``initial`` attributes::
<parameters>
<param var="a" min="-10.0" max="10.0" step="0.1" initial="0" />
<param var="b" min="-10.0" max="10.0" step="0.1" initial="0" />
</parameters>
The ``var`` attribute links the min, max, step, and initial values to a
parameter name.
The ``min`` attribute is the minimal value that the parameter can take.
Slider and input values cannot go below it.
The ``max`` attribute is the maximal value that the parameter can take.
Slider and input values cannot go above it.
The ``step`` attribute is the slider step value. When a slider increases or
decreases the specified parameter, it does so by the amount specified by
'step'.
The ``initial`` attribute is the initial value that the specified parameter
is set to. Sliders and inputs initially show this value.
The parameter's name is specified by the ``var`` property. All occurrences
of sliders and/or text inputs that specify a ``var`` property, will be
connected to this parameter - i.e. they will reflect the current
value of the parameter, and will be updated when the parameter
changes.
If at least one of these attributes is not set, then the parameter will not
be used, the slider and/or text input elements that specify this parameter
will not be activated, and functions that use this parameter will not return
a numeric value. In other words, neglecting to specify any of the attributes
for some parameter will result in the whole GST instance not working
properly.
Functions tag
.............
For the GST to do something, you must define at least one function, which
can use any of the specified parameter values. The function is expected to
take the ``x`` value, do some calculations, and return the ``y`` value;
i.e., this is a 2D plot in Cartesian coordinates. This is how the default
function is meant to be used for the graph.
There are other, special kinds of functions. They are used mainly for output
to elements, plot labels, or custom output. Because they return a single
value, and that value is meant for a single element, these functions are
invoked with only the set of all parameters; i.e., no ``x`` value is
available inside them. They are useful for showing the current value of a
parameter, showing complex static formulas where some parameter's value must
change, and other useful things.
The different style of function is specified by the ``output`` attribute.
Each function must be defined inside ``function`` tag in ``functions`` tag::
<functions>
<function output="element" el_id="answer_span_1">
function add(a, b, precision) {
var x = Math.pow(10, precision || 2);
return (Math.round(a * x) + Math.round(b * x)) / x;
}
return add(a, b, 5);
</function>
</functions>
The parameter names (along with their values, as provided from text
inputs and/or sliders), will be available inside all defined
functions. A defined function body string will be parsed internally
by the browser's JavaScript engine and converted to a true JS
function.
The function's parameter list will automatically be created and
populated, and will include the ``x`` (when ``output`` is not specified or
is set to ``"graph"``), and all of the specified parameter values (from sliders
and text inputs). This means that each of the defined functions will have
access to all of the parameter values. You don't have to use them, but
they will be there.
Examples::
<function>
return x;
</function>
<function dot="true" label="\(y_2\)">
return (x + a) * Math.sin(x * b);
</function>
<function color="green">
function helperFunc(c1) {
return c1 * c1 - a;
}
return helperFunc(x + 10 * a * b) + Math.sin(a - x);
</function>
Required parameters::
function body:
A string composing a normal JavaScript function
except that there is no function declaration
(along with parameters), and no closing bracket.
So if you normally would have written your
JavaScript function like this:
function myFunc(x, a, b) {
return x * a + b;
}
here you must specify just the function body
(everything that goes between '{' and '}'). So,
you would specify the above function like so (the
bare-bone minimum):
<function>return x * a + b;</function>
VERY IMPORTANT: Because the function will be passed
to the browser as a single string, depending on implementation
specifics, the end-of-line characters can be stripped. This
means that single line JavaScript comments (starting with "//")
can lead to the effect that everything after the first such comment
will be treated as a comment. Therefore, it is absolutely
necessary that such single line comments are not used when
defining functions for GST. You can safely use the alternative
multiple line JavaScript comments (such comments start with "/*"
and end with "*/").
VERY IMPORTANT: If you have a large function body, and decide to
split it into several lines, then you must wrap it in "CDATA" like
so:
<function>
<![CDATA[
var dNew;
dNew = 0.3;
return x * a + b - dNew;
]]>
</function>
Optional parameters::
color: Color name ('red', 'green', etc.) or in the form of
'#FFFF00'. If not specified, a default color (different
one for each graphed function) will be given by Flot JS.
line: A string - 'true' or 'false'. Should the data points be
connected by a line on the graph? Default is 'true'.
dot: A string - 'true' or 'false'. Should points be shown for
each data point on the graph? Default is 'false'.
bar: A string - 'true' or 'false'. When set to 'true', points
will be plotted as bars.
label: A string. If provided, will be shown in the legend, along
with the color that was used to plot the function.
output: 'element', 'none', 'plot_label', or 'graph'. If not defined,
        the function will be plotted (same as setting 'output' to
        'graph'). If defined, and other than 'graph', the function will
        not be plotted; instead, its output will be inserted into the
        element with the ID specified by the 'el_id' attribute.
el_id: Id of HTML element, defined in '<render>' section. Value of
function will be inserted as content of this element.
disable_auto_return: By default, if JavaScript function string is written
without a "return" statement, the "return" will be
prepended to it. Set to "true" to disable this
functionality. This is done so that simple functions
can be defined in an easy fashion (for example, "a",
which will be translated into "return a").
update_on: A string - 'change', or 'slide'. Default (if not set) is
'slide'. This defines the event on which a given function is
called, and its result is inserted into an element. This
setting is relevant only when "output" is other than "graph".
When specifying ``el_id``, it is essential to set "output" to one of:

element - GST will invoke the function, and its return value will be
inserted into the HTML element whose ID is specified by ``el_id``.

none - GST will simply invoke the function. It is left to the instructor
who writes the JavaScript function body to update all necessary
HTML elements inside the function, before it exits. This is done
so that extra steps can be performed after an HTML element has
been updated with a value. Note that because the return value
from this function is not actually used, it is tempting to
omit the "return" statement. However, in this case, the attribute
"disable_auto_return" must be set to "true" in order to prevent
GST from inserting a "return" statement automatically.

plot_label - GST will process all plot labels (which are strings), and
will replace all instances of the substring specified by
``el_id`` with the returned value of the function. This is
necessary if you want a label in the graph to contain some
changing number. Because of the nature of Flot JS, it is
impossible to achieve the same effect by setting the "output"
attribute to "element" and including an HTML element in the label.
The above values for "output" tell GST that the function is meant for an
HTML element (not for the graph), and that it should not be given an 'x'
parameter (along with some value).
[Note on MathJax and labels]
............................
Independently of this module, MathJax will render all TeX code
within the ``<render>`` section into nice mathematical formulas. Just
remember to wrap it in one of::
\( and \) - for inline formulas (formulas surrounded by
standard text)
\[ and \] - if you want the formula to be a separate line
It is possible to define a label in standard TeX notation. The JS
library MathJax will work on these labels also because they are
inserted on top of the plot as standard HTML (text within a DIV).
If the label is dynamic, i.e. it contains some text (numeric or other) that
has to be updated when a parameter changes, then you can define a special
function to handle this. The "output" of such a function must be set to
"none", and the JavaScript code inside the function must update the MathJax
element by itself. Before exiting, the MathJax typeset function should be
called so that the new text is re-rendered by MathJax. For example::
<render>
...
<span id="dynamic_mathjax"></span>
</render>
...
<function output="none" el_id="dynamic_mathjax">
<![CDATA[
var out_text;
out_text = "\\[\\mathrm{Percent \\space of \\space treated \\space with \\space YSS=\\frac{"
+(treated_men*10)+"\\space men *"
+(your_m_tx_yss/100)+"\\space prev. +\\space "
+((100-treated_men)*10)+"\\space women *"
+(your_f_tx_yss/100)+"\\space prev.}"
+"{1000\\space total\\space treated\\space patients}"
+"="+drummond_combined[0][1]+"\\%}\\]";
out_text+="\\[\\mathrm{Percent \\space of \\space untreated \\space with \\space YSS=\\frac{"
+(untreated_men*10)+"\\space men *"
+(your_m_utx_yss/100)+"\\space prev. +\\space "
+((100-untreated_men)*10)+"\\space women *"
+(your_f_utx_yss/100)+"\\space prev.}"
+"{1000\\space total\\space untreated\\space patients}"
+"="+drummond_combined[1][1]+"\\%}\\]";
$("#dynamic_mathjax").html(out_text);
MathJax.Hub.Queue(["Typeset",MathJax.Hub,"dynamic_mathjax"]);
]]>
</function>
...
Plot tag
........
``Plot`` tag inside ``configuration`` tag defines settings for plot output.
Required parameters::
xrange: two functions that must each return a value. The value can be a
    constant (3.1415) or depend on a parameter from the parameters section:
<xrange>
<min>return 0;</min>
<max>return 30;</max>
</xrange>
or
<xrange>
<min>return -a;</min>
<max>return a;</max>
</xrange>
All functions will be calculated over the domain between xrange:min
and xrange:max. An xrange that depends on a parameter is extremely
useful when the domain(s) of your function(s) depend on a parameter
(like a circle, where the parameter is the radius and you want to
allow it to change).
Optional parameters::
num_points: Number of data points to generate for the plot. If
    this is not set, the number of points will be
    calculated as width / 5.
bar_width: If functions are present which are to be plotted as bars,
then this parameter specifies the width of the bars. A
numeric value for this parameter is expected.
bar_align: If functions are present which are to be plotted as bars,
then this parameter specifies how to align the bars relative
to the tick. Available values are "left" and "center".
xticks,
yticks: 3 floating point numbers separated by commas. This
specifies how many ticks are created, what number they
start at, and what number they end at. This is different
from the 'xrange' setting in that it has nothing to do
with the data points - it controls what area of the
Cartesian space you will see. The first number is the
first tick's value, the second number is the step
between each tick, the third number is the value of the
last tick. If these configurations are not specified,
Flot will choose them for you based on the data point
set that it is currently plotting. Usually, this results
in a nice graph; however, sometimes you need finer
control - for example, when you want to show
a fixed area of the Cartesian space, even when the data
set changes. On its own, Flot will recalculate the
ticks, which will result in a different graph each time.
By specifying the xticks, yticks configurations, only
the plotted data will change - the axes (ticks) will
remain as you have defined them.
xticks_names, yticks_names:
A JSON string which represents a mapping of xticks, yticks
values to some defined strings. If specified, the graph will
not have any xticks, yticks except those for which a string
value has been defined in the JSON string. Note that the
matching will be string-based and not numeric. I.e. if a tick
value was "3.70" before, then inside the JSON there should be
a mapping like {..., "3.70": "Some string", ...}. Example:
<xticks_names>
<![CDATA[
{
"1": "Treated", "2": "Not Treated",
"4": "Treated", "5": "Not Treated",
"7": "Treated", "8": "Not Treated"
}
]]>
</xticks_names>
<yticks_names>
<![CDATA[
{"0": "0%", "10": "10%", "20": "20%", "30": "30%", "40": "40%", "50": "50%"}
]]>
</yticks_names>
xunits,
yunits: Units values to be set on axes. Use MathJax. Example:
<xunits>\(cm\)</xunits>
<yunits>\(m\)</yunits>
moving_label:
A way to specify a label that should be positioned dynamically,
based on the values of some parameters, or some other factors.
It is similar to a <function>, but it is only valid for a plot
because it is drawn relative to the plot coordinate system.
Multiple "moving_label" configurations can be provided, each one
with a unique text and a unique set of functions that determine
it's dynamic positioning.
Each "moving_label" can have a "color" attribute (CSS color notation),
and a "weight" attribute. "weight" can be one of "normal" or "bold",
and determines the styling of moving label's text.
Each "moving_label" function should return an object with a 'x'
and 'y properties. Within those functions, all of the parameter
names along with their value are available.
Example (note that "return" statement is missing; it will be automatically
inserted by GST):
<moving_label text="Co" weight="bold" color="red>
<![CDATA[ {'x': -50, 'y': c0};]]>
</moving_label>
asymptote:
Add a vertical or horizontal asymptote to the graph which will
be dynamically repositioned based on the specified function.
It is similar to a function in that it provides a JavaScript function body
string. This function will be used to calculate the position of the
asymptote relative to the axis specified by the "type" parameter.
Required parameters:
type:
Which axis should the asymptote be plotted against. Available values
are "x" and "y".
Optional parameters:
color:
The color of the line. A valid CSS color string is expected.
Example
=======
Plotting, sliders and inputs
----------------------------
.. literalinclude:: gst_example_with_documentation.xml
Update of html elements, no plotting
------------------------------------
.. literalinclude:: gst_example_html_element_output.xml
Circle with dynamic radius
--------------------------
.. literalinclude:: gst_example_dynamic_range.xml
Example of a bar graph
----------------------
.. literalinclude:: gst_example_bars.xml
Example of moving labels of graph
---------------------------------
.. literalinclude:: gst_example_dynamic_labels.xml
##############################################################################
JS Input
##############################################################################
This document explains how to write a JSInput input type. JSInput is meant
to allow problem authors to easily turn working standalone HTML files into
problems that can be integrated into the edX platform. Since its aim is
flexibility, it can be seen as the input and client-side equivalent of
CustomResponse.
A JSInput input creates an iframe into a static HTML page and passes the
return value of author-specified functions to the enclosing response type
(generally CustomResponse). JSInput can also store and retrieve state.
******************************************************************************
Format
******************************************************************************
A jsinput problem looks like this:
.. code-block:: xml
<problem>
<script type="loncapa/python">
def all_true(exp, ans): return ans == "hi"
</script>
<customresponse cfn="all_true">
<jsinput gradefn="gradefn"
height="500"
get_statefn="getstate"
set_statefn="setstate"
html_file="/static/jsinput.html"/>
</customresponse>
</problem>
The accepted attributes are:
============== ============== ========= ==========
Attribute Name Value Type Required? Default
============== ============== ========= ==========
html_file Url string Yes None
gradefn Function name Yes `gradefn`
set_statefn Function name No None
get_statefn Function name No None
height Integer No `500`
width Integer No `400`
============== ============== ========= ==========
******************************************************************************
Required Attributes
******************************************************************************
==============================================================================
html_file
==============================================================================
The `html_file` attribute specifies what html file the iframe will point to. This
should be located in the content directory.
The iframe is created using the sandbox attribute; while popups, scripts, and
pointer locks are allowed, the iframe cannot access its parent's attributes.
The html file should contain an accessible gradefn function. To check
whether the gradefn will be accessible to JSInput, check that, in the
console,::

    `gradefn`

returns the right thing. When used by JSInput, `gradefn` is called with::
`gradefn`.call(`obj`)
where `obj` is the object-part of `gradefn`. For example, if `gradefn` is
`myprog.myfn`, JSInput will call `myprog.myfn.call(myprog)`. (This is to
ensure "`this`" continues to refer to what `gradefn` expects.)
Aside from that, more or less anything goes. Note that currently there is no
support for inheriting css or javascript from the parent (aside from the
Chrome-only `seamless` attribute, which is set to true by default).
==============================================================================
gradefn
==============================================================================
The `gradefn` attribute specifies the name of the function that will be called
when a user clicks on the "Check" button, and which should return the student's
answer. This answer will (unless both the get_statefn and set_statefn
attributes are also used) be passed as a string to the enclosing response type.
In the customresponse example above, this means cfn will be passed this answer
as `ans`.
If the `gradefn` function throws an exception when a student attempts to
submit a problem, the submission is aborted, and the student receives a generic
alert. The alert can be customised by making the exception name `Waitfor
Exception`; in that case, the alert message will be the exception message.
**IMPORTANT** : the `gradefn` function should not be at all asynchronous, since
this could result in the student's latest answer not being passed correctly.
Moreover, the function should also return promptly, since currently the student
has no indication that her answer is being calculated/produced.
******************************************************************************
Optional Attributes
******************************************************************************
The `height` and `width` attributes are straightforward: they specify the
height and width of the iframe. Both are limited by the enclosing DOM elements,
so for instance there is an implicit max-width of around 900.
In the future, JSInput may attempt to make these dimensions match the html
file's dimensions (up to the aforementioned limits), but currently it defaults
to `500` and `400` for `height` and `width`, respectively.
==============================================================================
set_statefn
==============================================================================
Sometimes a problem author will want information about a student's previous
answers ("state") to be saved and reloaded. If the attribute `set_statefn` is
used, the function given as its value will be passed the state as a string
argument whenever there is a state, and the student returns to a problem. It is
the responsibility of the function to then use this state appropriately.
The state that is passed is:
1. The previous output of `gradefn` (i.e., the previous answer) if
`get_statefn` is not defined.
2. The previous output of `get_statefn` (see below) otherwise.
It is the responsibility of the iframe to do proper verification of the
argument that it receives via `set_statefn`.
==============================================================================
get_statefn
==============================================================================
Sometimes the state and the answer are quite different. For instance, a
problem that involves a JavaScript program that lets the student alter a
molecule may grade based on the molecule's hydrophobicity, but the
hydrophobicity alone might not be enough to restore the state. In that
case, a *separate* state may be stored and loaded by `set_statefn`. Note
that if `get_statefn` is defined, the answer (i.e., what is passed to the
enclosing response type) will be a json string with the following format::
{
answer: `[answer string]`
state: `[state string]`
}
It is the responsibility of the enclosing response type to then parse this as
json.
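For instance, a check function in the enclosing customresponse might parse
it like this (a hypothetical sketch, assuming the answer string arrives as
valid JSON and that "hi" is the expected answer):

.. code-block:: python

    import json

    def check(expect, ans):
        # 'ans' is the JSON string produced when get_statefn is defined.
        parsed = json.loads(ans)
        answer = parsed["answer"]  # what gradefn returned
        state = parsed["state"]    # what get_statefn returned; unused here
        return answer == "hi"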
######################
Discussion Forums Data
######################
Discussions in edX are stored in a MongoDB database as collections of JSON documents.
The primary collection holding all posts and comments written by users is `contents`. There are two types of objects stored here, though they share much of the same structure. A `CommentThread` represents a comment that opens a new thread -- usually a student question of some sort. A `Comment` is a reply in the conversation started by a `CommentThread`.
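For example, with pymongo you might walk a course's threads like this (a
sketch: the connection details and database name are assumptions, as is the
`comment_thread_id` field linking a reply to its thread):

.. code-block:: python

    from pymongo import MongoClient

    # Connection details and database name are illustrative assumptions.
    db = MongoClient('localhost', 27017)['forum']

    course_id = 'BerkeleyX/Stat2.1x/2013_Spring'
    for thread in db.contents.find({'_type': 'CommentThread',
                                    'course_id': course_id}):
        print(thread['title'])
        # Replies are assumed to reference their thread via
        # comment_thread_id.
        for reply in db.contents.find({'_type': 'Comment',
                                       'comment_thread_id': thread['_id']}):
            print('  reply by user', reply['author_id'])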
*****************
Shared Attributes
*****************
The attributes that `Comment` and `CommentThread` objects share are listed below.
`_id`
-----
The 12-byte MongoDB unique ID for this document. Like all MongoDB IDs, they are monotonically increasing and the first four bytes are a timestamp.
`_type`
-------
`CommentThread` or `Comment` depending on the type of object.
`anonymous`
-----------
If true, this `Comment` or `CommentThread` will show up as written by anonymous, even to those who have moderator privileges in the forums.
`anonymous_to_peers`
--------------------
The idea behind this field was that `anonymous_to_peers = true` would make the comment appear anonymous to fellow students, while still allowing the course staff to see who wrote it. However, that was never implemented in the UI, and only `anonymous` is actually used. The `anonymous_to_peers` field is always false.
`at_position_list`
------------------
No longer used. Child comments (replies) are just sorted by their `created_at` timestamp instead.
`author_id`
-----------
The user who wrote this. Corresponds to the user IDs we store in our MySQL database as `auth_user.id`.
`body`
------
Text of the comment in Markdown. UTF-8 encoded.
`course_id`
-----------
The full course_id of the course that this comment was made in, including org and run. This value can be seen in the URL when browsing the courseware section. Example: `BerkeleyX/Stat2.1x/2013_Spring`
`created_at`
------------
Timestamp in UTC. Example: `ISODate("2013-02-21T03:03:04.587Z")`
`updated_at`
------------
Timestamp in UTC. Example: `ISODate("2013-02-21T03:03:04.587Z")`
`votes`
-------
Both `CommentThread` and `Comment` objects support voting. `Comment` objects that are replies to other comments still have this attribute, even though there is no way to actually vote on them in the UI. This attribute is a dictionary that has the following inside:
* `up` = list of User IDs that up-voted this comment or thread.
* `down` = list of User IDs that down-voted this comment or thread (no longer used).
* `up_count` = total upvotes received.
* `down_count` = total downvotes received (no longer used).
* `count` = total votes cast.
* `point` = net vote, now always equal to `up_count`.
A user only has one vote per `Comment` or `CommentThread`. Though it's still written to the database, the UI no longer displays an option to downvote anything.
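For example, a thread that received two upvotes and no downvotes might carry a `votes` dictionary like this (illustrative values)::

    {
        "up" : ["1", "42"],
        "down" : [],
        "up_count" : 2,
        "down_count" : 0,
        "count" : 2,
        "point" : 2
    }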
*************
CommentThread
*************
The following fields are specific to `CommentThread` objects. Each thread in the forums is represented by one `CommentThread`.
`closed`
--------
If true, this thread was closed by a forum moderator/admin.
`comment_count`
---------------
The number of comment replies in this thread. This includes all replies to replies, but does not include the original comment that started the thread. So if we had::
CommentThread: "What's a good breakfast?"
* Comment: "Just eat cereal!"
* Comment: "Try a Loco Moco, it's amazing!"
* Comment: "A Loco Moco? Only if you want a heart attack!"
* Comment: "But it's worth it! Just get a spam musubi on the side."
In that exchange, the `comment_count` for the `CommentThread` is `4`.
`commentable_id`
----------------
We can attach a discussion to any piece of content in the course, or to top level categories like "General" and "Troubleshooting". When the `commentable_id` is a high level category, it's specified in the course's policy file. When it's a specific content piece (e.g. `600x_l5_p8`, meaning 6.00x, Lecture Sequence 5, Problem 8), it's taken from a discussion module in the course.
`last_activity_at`
------------------
Timestamp in UTC indicating the last time there was activity in the thread (new posts, edits, etc). Closing the thread does not affect the value in this field.
`tags_array`
------------
Meant to be a list of tags that were user definable, but no longer used.
`title`
-------
Title of the thread, UTF-8 string.
*******
Comment
*******
The following fields are specific to `Comment` objects. A `Comment` is a reply to a `CommentThread` (so an answer to the question), or a reply to another `Comment` (a comment about somebody's answer). It used to be the case that `Comment` replies could nest much more deeply, but we later capped it at just these three levels (question, answer, comment) much in the way that StackOverflow does.
`endorsed`
----------
Boolean value, true if a forum moderator or instructor has marked that this `Comment` is a correct answer for whatever question the thread was asking. Exists for `Comments` that are replies to other `Comments`, but in that case `endorsed` is always false because there's no way to endorse such comments through the UI.
`comment_thread_id`
-------------------
What `CommentThread` are we a part of? All `Comment` objects have this.
`parent_id`
-----------
The `parent_id` is the `_id` of the `Comment` that this comment was made in reply to. Note that this only occurs in a `Comment` that is a reply to another `Comment`; it does not appear in a `Comment` that is a reply to a `CommentThread`.
`parent_ids`
------------
The `parent_ids` attribute appears in all `Comment` objects, and contains the `_id` of all ancestor comments. Since the UI now prevents comments from being nested more than one layer deep, it will only ever have at most one element in it. If a `Comment` has no parent, it's an empty list.
##############################
Student Info and Progress Data
##############################
The following sections detail how edX stores student state data internally; they are useful for developers and researchers who are examining database exports. This information includes demographic information collected at signup, course enrollment, course progress, and certificate status.
Conventions to keep in mind:
* We currently use MySQL 5.1 with InnoDB tables.
* All strings are stored as UTF-8.
* All datetimes are stored as UTC.
* Tables that are built into the Django framework are not documented here unless we use them in unconventional ways.
All of our tables will be described below, first in summary form with field types and constraints, and then with a detailed explanation of each field. For those not familiar with the MySQL schema terminology in the table summaries:
`Type`
This is the kind of data it is, along with the size of the field. When a numeric field has a length specified, it just means that's how many digits we want displayed; it has no effect on the number of bytes used.
.. list-table::
:widths: 10 80
:header-rows: 1
* - Value
- Meaning
* - `int`
- 4 byte integer.
* - `smallint`
- 2 byte integer, sometimes used for enumerated values.
* - `tinyint`
- 1 byte integer, but usually just used to indicate a boolean field with 0 = False and 1 = True.
* - `varchar`
- String, typically short and indexable. The length is the number of chars, not bytes (so unicode friendly).
* - `longtext`
- A long block of text, usually not indexed.
* - `date`
- Date
* - `datetime`
- Datetime in UTC, precision in seconds.
`Null`
.. list-table::
:widths: 10 80
:header-rows: 1
* - Value
- Meaning
* - `YES`
- `NULL` values are allowed
* - `NO`
- `NULL` values are not allowed
.. note::
Django often just places blank strings instead of NULL when it wants to indicate that a text value is optional. The distinction is more meaningful for numeric and date fields.
`Key`
.. list-table::
:widths: 10 80
:header-rows: 1
* - Value
- Meaning
* - `PRI`
- Primary key for the table, usually named `id`, unique
* - `UNI`
- Unique
* - `MUL`
- Indexed for fast lookup, but the same value can appear multiple times. A Unique index that allows `NULL` can also show up as `MUL`.
****************
User Information
****************
`auth_user`
===========
The `auth_user` table is built into the Django web framework that we use. It holds generic information necessary for basic login and permissions information. It has the following fields::
+------------------------------+--------------+------+-----+
| Field | Type | Null | Key |
+------------------------------+--------------+------+-----+
| id | int(11) | NO | PRI |
| username | varchar(30) | NO | UNI |
| first_name | varchar(30) | NO | | # Never used
| last_name | varchar(30) | NO | | # Never used
| email | varchar(75) | NO | UNI |
| password | varchar(128) | NO | |
| is_staff | tinyint(1) | NO | |
| is_active | tinyint(1) | NO | |
| is_superuser | tinyint(1) | NO | |
| last_login | datetime | NO | |
| date_joined | datetime | NO | |
| status | varchar(2) | NO | | # No longer used
| email_key | varchar(32) | YES | | # No longer used
| avatar_type | varchar(1) | NO | | # No longer used
| country | varchar(2) | NO | | # No longer used
| show_country | tinyint(1) | NO | | # No longer used
| date_of_birth | date | YES | | # No longer used
| interesting_tags | longtext | NO | | # No longer used
| ignored_tags | longtext | NO | | # No longer used
| email_tag_filter_strategy | smallint(6) | NO | | # No longer used
| display_tag_filter_strategy | smallint(6) | NO | | # No longer used
| consecutive_days_visit_count | int(11) | NO | | # No longer used
+------------------------------+--------------+------+-----+
`id`
----
Primary key, and the value typically used in URLs that reference the user. A user has the same value for `id` here as they do in the MongoDB database's users collection. Foreign keys referencing `auth_user.id` will often be named `user_id`, but are sometimes named `student_id`.
`username`
----------
The unique username for a user in our system. It may contain alphanumeric, _, @, +, . and - characters. The username is the only information that the students give about themselves that we currently expose to other students. We have never allowed people to change their usernames so far, but that's not something we guarantee going forward.
`first_name`
------------
.. note::
Not used; we store a user's full name in `auth_userprofile.name` instead.
`last_name`
-----------
.. note::
Not used; we store a user's full name in `auth_userprofile.name` instead.
`email`
-------
Their email address. While Django by default makes this optional, we make it required, since it's the primary mechanism through which people log in. Must be unique to each user. Never shown to other users.
`password`
----------
A hashed version of the user's password. Depending on when the password was last set, this will either be a SHA1 hash or PBKDF2 with SHA256 (Django 1.3 uses the former and 1.4 the latter).
`is_staff`
----------
This value is `1` if the user is a staff member *of edX* with corresponding elevated privileges that cut across courses. It does not indicate that the person is a member of the course staff for any given course. Generally, users with this flag set to 1 are either edX program managers responsible for course delivery, or edX developers who need access for testing and debugging purposes. People who have `is_staff = 1` get instructor privileges on all courses, along with having additional debug information show up in the instructor tab.
Note that this designation has no bearing on a user's role in the forums, and confers no elevated privileges there.
Most users have a `0` for this value.
`is_active`
-----------
This value is `1` if the user has clicked on the activation link that was sent to them when they created their account, and `0` otherwise. Users who have `is_active = 0` generally cannot log into the system. However, when users first create their account, they are automatically logged in even though they are not active. This is to let them experience the site immediately without having to check their email. They just get a little banner at the top of their dashboard reminding them to check their email and activate their account when they have time. If they log out, they won't be able to log back in again until they've activated. However, because our sessions last a long time, it is theoretically possible for someone to use the site as a student for days without being "active".
Once `is_active` is set to `1`, the only circumstance where it would be set back to `0` is if we decide to ban the user (a very rare, manual operation).
`is_superuser`
--------------
Value is `1` if the user has admin privileges. Only the earliest developers of the system have this set to `1`, and it's no longer really used in the codebase. Set to 0 for almost everybody.
`last_login`
------------
A datetime of the user's last login. Should not be used as a proxy for activity, since people can use the site all the time and go days between logging in and out.
`date_joined`
-------------
Date that the account was created (NOT when it was activated).
`(obsolete fields)`
-------------------
All the following fields were added by an application called Askbot, a discussion forum package that is no longer part of the system:
* `status`
* `email_key`
* `avatar_type`
* `country`
* `show_country`
* `date_of_birth`
* `interesting_tags`
* `ignored_tags`
* `email_tag_filter_strategy`
* `display_tag_filter_strategy`
* `consecutive_days_visit_count`
Only users who were part of the prototype 6.002x course run in the Spring of 2012 would have any information in these fields. Even with those users, most of this information was never collected. Only the fields that are automatically generated have any values in them, such as tag settings.
These fields are completely unrelated to the discussion forums we currently use, and will eventually be dropped from this table.
`auth_userprofile`
==================
The `auth_userprofile` table is mostly used to store user demographic information collected during the signup process. We also use it to store certain additional metadata relating to certificates. Every row in this table corresponds to one row in `auth_user`::
+--------------------+--------------+------+-----+
| Field | Type | Null | Key |
+--------------------+--------------+------+-----+
| id | int(11) | NO | PRI |
| user_id | int(11) | NO | UNI |
| name | varchar(255) | NO | MUL |
| language | varchar(255) | NO | MUL | # Prototype course users only
| location | varchar(255) | NO | MUL | # Prototype course users only
| meta | longtext | NO | |
| courseware | varchar(255) | NO | | # No longer used
| gender | varchar(6) | YES | MUL | # Only users signed up after prototype
| mailing_address | longtext | YES | | # Only users signed up after prototype
| year_of_birth | int(11) | YES | MUL | # Only users signed up after prototype
| level_of_education | varchar(6) | YES | MUL | # Only users signed up after prototype
| goals | longtext | YES | | # Only users signed up after prototype
| allow_certificate | tinyint(1) | NO | |
+--------------------+--------------+------+-----+
There is an important split in the demographic data gathered between the students who signed up during the MITx prototype phase in the spring of 2012 and those who signed up afterwards.
`id`
----
Primary key, not referenced anywhere else.
`user_id`
---------
A foreign key that maps to `auth_user.id`.
`name`
------
String for a user's full name. We make no constraints on language or breakdown into first/last name. The names are never shown to other students. Foreign students usually enter a romanized version of their names, but not always.
It used to be our policy to require manual approval of name changes to guard the integrity of the certificates. Students would submit a name change request and someone from the team would approve or reject as appropriate. Later, we decided to allow the name changes to take place automatically, but to log previous names in the `meta` field.
`language`
----------
User's preferred language, asked during the sign up process for the 6.002x prototype course given in the Spring of 2012. This information stopped being collected after the transition from MITx to edX, but we never removed the values for our first group of students. The values are sometimes written in the languages themselves.
`location`
----------
User's location, asked during the sign up process for the 6.002x prototype course given in the Spring of 2012. We weren't specific, so people tended to put the city they were in, though some just specified their country and some got as specific as their street address. Again, sometimes romanized and sometimes written in their native language. Like `language`, we stopped collecting this field when we transitioned from MITx to edX, so it's only available for our first batch of students.
`meta`
------
An optional, freeform text field that stores JSON data. This was a hack to allow us to associate arbitrary metadata with a user. An example of the JSON that can be stored here is::
{
"old_names" : [
["Mike Smith", "Mike's too informal for a certificate.", "2012-11-15T17:28:12.658126"],
["Michael Smith", "I want to add a middle name as well.", "2013-02-07T11:15:46.524331"]
],
"old_emails" : [["mr_mike@email.com", "2012-10-18T15:21:41.916389"]],
"6002x_exit_response" : {
"rating": ["6"],
"teach_ee": ["I do not teach EE."],
"improvement_textbook": ["I'd like to get the full PDF."],
"future_offerings": ["true"],
"university_comparison":
["This course was <strong>on the same level</strong> as the university class."],
"improvement_lectures": ["More PowerPoint!"],
"highest_degree": ["Bachelor's degree."],
"future_classes": ["true"],
"future_updates": ["true"],
"favorite_parts": ["Releases, bug fixes, and askbot."]
}
}
The following are details about this metadata. Please note that the fields described below are found as JSON attributes *inside* the `meta` field, and are *not* separate database fields of their own.
`old_names`
A list of the previous names this user had, and the timestamps at which they submitted requests to change those names. These name change requests used to require approval from a staff member before the change took effect. This is no longer the case, though we still record previous names.
Note that the value stored for each entry is the name they had, not the name they requested to get changed to. People often changed their names as the time for certificate generation approached, to replace nicknames with their actual names or correct spelling/punctuation errors.
The timestamps are UTC, like all datetimes stored in our system.
`old_emails`
A list of previous emails this user had, with timestamps of when they changed them, in a format similar to `old_names`. There was never an approval process for this.
The timestamps are UTC, like all datetimes stored in our system.
`6002x_exit_response`
Answers to a survey that was sent to students after the prototype 6.002x course in the Spring of 2012. The questions and number of questions were randomly selected to measure how much survey length affected response rate. Only students from this course have this field.
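As a sketch of working with this field, the previous names can be pulled out with a few lines of Python (the function name is ours, not part of the codebase)::

    import json

    def previous_names(meta_text):
        """Return prior names recorded in an auth_userprofile.meta value."""
        if not meta_text:
            return []
        meta = json.loads(meta_text)
        # Each old_names entry is [name, reason, UTC timestamp].
        return [entry[0] for entry in meta.get("old_names", [])]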
`courseware`
------------
This can be ignored. At one point, it was part of a way to do A/B tests, but it has not been used for anything meaningful since the conclusion of the prototype course in the spring of 2012.
`gender`
--------
Dropdown field collected during student signup. We only started collecting this information after the transition from MITx to edX, so prototype course students will have `NULL` for this field.
.. list-table::
:widths: 10 80
:header-rows: 1
* - Value
- Meaning
* - `NULL`
- This student signed up before this information was collected
* - `''` (blank)
- User did not specify gender
* - `'f'`
- Female
* - `'m'`
- Male
* - `'o'`
- Other
`mailing_address`
-----------------
Text field collected during student signup. We only started collecting this information after the transition from MITx to edX, so prototype course students will have `NULL` for this field. Students who elected not to enter anything will have a blank string.
`year_of_birth`
---------------
Dropdown field collected during student signup. We only started collecting this information after the transition from MITx to edX, so prototype course students will have `NULL` for this field. Students who decided not to fill this in will also have NULL.
`level_of_education`
--------------------
Dropdown field collected during student signup. We only started collecting this information after the transition from MITx to edX, so prototype course students will have `NULL` for this field.
.. list-table::
:widths: 10 80
:header-rows: 1
* - Value
- Meaning
* - `NULL`
- This student signed up before this information was collected
* - `''` (blank)
- User did not specify level of education.
* - `'p'`
- Doctorate
* - `'p_se'`
- Doctorate in science or engineering (no longer used)
* - `'p_oth'`
- Doctorate in another field (no longer used)
* - `'m'`
- Master's or professional degree
* - `'b'`
- Bachelor's degree
* - `'a'`
- Associate's degree
* - `'hs'`
- Secondary/high school
* - `'jhs'`
- Junior secondary/junior high/middle school
* - `'el'`
- Elementary/primary school
* - `'none'`
- None
* - `'other'`
- Other
`goals`
-------
Text field collected during student signup in response to the prompt, "Goals in signing up for edX". We only started collecting this information after the transition from MITx to edX, so prototype course students will have `NULL` for this field. Students who elected not to enter anything will have a blank string.
`allow_certificate`
-------------------
Set to `1` for most students. This field is set to `0` if log analysis has revealed that this student is accessing our site from a country that the US has an embargo against. At this time, we do not issue certificates to students from those countries.
`student_courseenrollment`
==========================
A row in this table represents a student's enrollment for a particular course run. If they decide to unenroll from the course, we set `is_active` to `False`. We still leave all their state in `courseware_studentmodule` untouched, so they will not lose courseware state if they unenroll and reenroll.
`id`
----
Primary key.
`user_id`
---------
Student's ID in `auth_user.id`
`course_id`
-----------
The ID of the course run they're enrolling in (e.g. `MITx/6.002x/2012_Fall`). You can get this from the URL when you're viewing courseware in your browser.
`created`
---------
Datetime of enrollment, UTC.
`is_active`
-----------
Boolean indicating whether this enrollment is active. If an enrollment is not active, a student is not enrolled in that course. This lets us unenroll students without losing a record of what courses they were enrolled in previously. This was introduced in the 2013-08-20 release. Before this release, unenrolling a student simply deleted the row in `student_courseenrollment`.
`mode`
------
String indicating what kind of enrollment this was. The default is "honor" (honor certificate) and all enrollments prior to 2013-08-20 will be of that type. Other types being considered are "audit" and "verified_id".
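As an illustration, a minimal sketch (connection parameters are placeholders) that counts active enrollments per mode for one course run::

    import MySQLdb  # any MySQL client works; the parameters are placeholders

    db = MySQLdb.connect(host="localhost", user="reader", db="edxapp")
    cursor = db.cursor()
    cursor.execute(
        "SELECT mode, COUNT(*) FROM student_courseenrollment "
        "WHERE course_id = %s AND is_active = 1 GROUP BY mode",
        ("MITx/6.002x/2012_Fall",),
    )
    for mode, count in cursor.fetchall():
        print(mode, count)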
`user_id_map`
==========================
A row in this table maps a student's real user ID to an anonymous ID generated to obfuscate the student's identity.
.. list-table::
:widths: 15 15 15 15
:header-rows: 1
* - Field
- Type
- Null
- Key
* - hashid
- int(11)
- NO
- PRI
* - id
- int(11)
- NO
-
* - username
- varchar(30)
- NO
-
`hashid`
--------
The anonymized ID generated to obfuscate the student's identity.
`id`
----
The student's ID in `auth_user.id`.
`username`
----------
The student's username, matching `auth_user.username`.
*******************
Courseware Progress
*******************
Any piece of content in the courseware can store state and score in the `courseware_studentmodule` table. Grades and the user Progress page are generated by doing a walk of the course contents, searching for graded items, looking up a student's entries for those items in `courseware_studentmodule` via `(course_id, student_id, module_id)`, and then applying the grade weighting found in the course policy and grading policy files. Course policy files determine how much weight one problem has relative to another, and grading policy files determine how much categories of problems are weighted (e.g. HW=50%, Final=25%, etc.).
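A rough sketch of that per-item lookup (raw SQL against the table documented below; the policy-driven weighting is omitted, and the connection parameters are placeholders)::

    import MySQLdb

    db = MySQLdb.connect(host="localhost", user="reader", db="edxapp")
    cursor = db.cursor()
    cursor.execute(
        "SELECT grade, max_grade FROM courseware_studentmodule "
        "WHERE course_id = %s AND student_id = %s AND module_id = %s",
        ("MITx/6.002x/2012_Fall", 42, "i4x://MITx/6.002x/problem/Sample_Problem"),
    )
    row = cursor.fetchone()  # None if the student never touched this item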
.. warning::
**Modules might not be what you expect!**
It's important to understand what "modules" are in the context of our system, as the terminology can be confusing. For the conventions of this table and many parts of our code, a "module" is a content piece that appears in the courseware. This can be nearly anything that appears when users are in the courseware tab: a video, a piece of HTML, a problem, etc. Modules can also be collections of other modules, such as sequences, verticals (modules stacked together on the same page), weeks, chapters, etc. In fact, the course itself is a top level module that contains all the other contents of the course as children. You can imagine the entire course as a tree with modules at every node.
Modules can store state, but whether and how they do so is up to the implementation for that particular kind of module. When a user loads a page, we look up all the modules needed to render it, and then we ask the database for those modules' state for that user. If there is no corresponding entry for that user for a given module, we create a new row and set the state to an empty JSON dictionary.
`courseware_studentmodule`
==========================
The `courseware_studentmodule` table holds all courseware state for a given user. Every student has a separate row for every piece of content in the course, making this by far our largest table::
+-------------+--------------+------+-----+
| Field | Type | Null | Key |
+-------------+--------------+------+-----+
| id | int(11) | NO | PRI |
| module_type | varchar(32) | NO | MUL |
| module_id | varchar(255) | NO | MUL |
| student_id | int(11) | NO | MUL |
| state | longtext | YES | |
| grade | double | YES | MUL | # problem, selfassessment, and combinedopenended use this
| created | datetime | NO | MUL |
| modified | datetime | NO | MUL |
| max_grade | double | YES | | # problem, selfassessment, and combinedopenended use this
| done | varchar(8) | NO | MUL | # ignore this
| course_id | varchar(255) | NO | MUL |
+-------------+--------------+------+-----+
`id`
----
Primary key. Rarely used though, since most lookups on this table are searches on the three tuple of `(course_id, student_id, module_id)`.
`module_type`
-------------
.. list-table::
:widths: 10 80
:header-rows: 0
* - `chapter`
- The top level categories for a course. Each of these is usually labeled as a Week in the courseware, but this is just convention.
* - `combinedopenended`
- A new module type developed for grading open ended questions via self assessment, peer assessment, and machine learning.
* - `conditional`
- A new module type recently developed for 8.02x, this allows you to prevent access to certain parts of the courseware if other parts have not been completed first.
* - `course`
- The top level course module of which all course content is descended.
* - `problem`
- A problem that the user can submit solutions for. We have many different varieties.
* - `problemset`
- A collection of problems and supplementary materials, typically used for homeworks and rendered as a horizontal icon bar in the courseware. Use is inconsistent, and some courses use a `sequential` instead.
* - `selfassessment`
- Self assessment problems. An early test of the open ended grading system that is not in widespread use yet. Recently deprecated in favor of `combinedopenended`.
* - `sequential`
- A collection of videos, problems, and other materials, rendered as a horizontal icon bar in the courseware.
* - `videosequence`
- A collection of videos, exercise problems, and other materials, rendered as a horizontal icon bar in the courseware. Use is inconsistent, and some courses use a `sequential` instead.
There's been substantial muddling of our container types, particularly between sequentials, problemsets, and videosequences. In the beginning we only had sequentials, and these ended up being used primarily for two purposes: creating a sequence of lecture videos and exercises for instruction, and creating homework problem sets. The `problemset` and `videosequence` types were created with the hope that our system would have a better semantic understanding of what a sequence actually represented, and could at a later point choose to render them differently to the user if appropriate. For a variety of reasons, migration over to these types has been spotty. They all render the same way at the moment.
`module_id`
-----------
Unique ID for a distinct piece of content in a course. These are recorded as URLs of the form `i4x://{org}/{course_num}/{module_type}/{module_name}`. Having URLs of this form allows us to give content a canonical representation even as we are in a state of transition between backend data stores.
.. list-table:: Breakdown of example `module_id`: `i4x://MITx/3.091x/problemset/Sample_Problems`
:widths: 10 20 70
:header-rows: 1
* - Part
- Example
- Definition
* - `i4x://`
-
- Just a convention we ran with. We had plans for the domain `i4x.org` at one point.
* - `org`
- `MITx`
- The organization part of the ID, indicating what organization created this piece of content.
* - `course_num`
- `3.091x`
- The course number this content was created for. Note that there is no run information here, so you can't know what runs of the course this content is being used for from the `module_id` alone; you have to look at the `courseware_studentmodule.course_id` field.
* - `module_type`
- `problemset`
- The module type, same value as what's in the `courseware_studentmodule.module_type` field.
* - `module_name`
- `Sample_Problems`
- The name given for this module by the content creators. If the module was not named, the system will generate a name based on the type and a hash of its contents (ex: `selfassessment_03c483062389`).
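A short sketch of splitting a `module_id` into the parts above (the function name is ours)::

    def parse_module_id(module_id):
        """Split an i4x:// module_id into its documented parts."""
        prefix = "i4x://"
        assert module_id.startswith(prefix)
        org, course_num, module_type, module_name = \
            module_id[len(prefix):].split("/")
        return {"org": org, "course_num": course_num,
                "module_type": module_type, "module_name": module_name}

    parse_module_id("i4x://MITx/3.091x/problemset/Sample_Problems")
    # {'org': 'MITx', 'course_num': '3.091x',
    #  'module_type': 'problemset', 'module_name': 'Sample_Problems'}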
`student_id`
------------
A reference to `auth_user.id`, this is the student that this module state row belongs to.
`state`
-------
This is a JSON text field where different module types are free to store their state however they wish.
Container Modules: `course`, `chapter`, `problemset`, `sequential`, `videosequence`
The state for all of these is a JSON dictionary indicating the user's last known position within this container. This is 1-indexed, not 0-indexed, mostly because it went out that way at one point and we didn't want to later break saved navigation state for users.
Example: `{"position" : 3}`
When this user last interacted with this course/chapter/etc., they had clicked on the third child element. Note that the position is a simple index and not a `module_id`, so if you rearranged the order of the contents, it would not be smart enough to accommodate the changes and would point users to the wrong place.
The hierarchy goes: `course > chapter > (problemset | sequential | videosequence)`
`combinedopenended`
TODO: More details to come.
`conditional`
Conditionals don't actually store any state, so this value is always an empty JSON dictionary (`'{}'`). We should probably remove these entries altogether.
`problem`
There are many kinds of problems supported by the system, and they all have different state requirements. Note that one problem can have many different response fields. If a problem generates a random circuit and asks five questions about it, then all of that is stored in one row in `courseware_studentmodule`.
TODO: Write out different problem types and their state.
`selfassessment`
TODO: More details to come.
`grade`
-------
Floating point value indicating the total unweighted grade for this problem that the student has scored. Basically how many responses they got right within the problem.
Only `problem` and `selfassessment` types use this field. All other modules set this to `NULL`. Due to a quirk in how rendering is done, `grade` can also be `NULL` for a tenth of a second or so the first time that a user loads a problem. The initial load will trigger two writes, the first of which will set the `grade` to `NULL`, and the second of which will set it to `0`.
`created`
---------
Datetime when this row was created (i.e. when the student first accessed this piece of content).
`modified`
----------
Datetime when we last updated this row. Set to be equal to `created` at first. A change in `modified` implies that there was a state change, usually in response to a user action like saving or submitting a problem, or clicking on a navigational element that records its state. However it can also be triggered if the module writes multiple times on its first load, like problems do (see note in `grade`).
`max_grade`
-----------
Floating point value indicating the total possible unweighted grade for this problem, or basically the number of responses that are in this problem. Though in practice it's the same for every entry with the same `module_id`, it is technically possible for it to be anything. The problems are dynamic enough that you could create a random number of responses if you wanted. This is a bad idea and will probably cause grading errors, but it is possible.
Another way in which `max_grade` can differ between entries with the same `module_id` is if the problem was modified after the `max_grade` was written and the user never went back to the problem after it was updated. This might happen if a member of the course staff puts out a problem with five parts, realizes that the last part doesn't make sense, and decides to remove it. People who saw and answered it when it had five parts and never came back to it after the changes had been made will have a `max_grade` of `5`, while people who saw it later will have a `max_grade` of `4`.
These complexities in our grading system are a high priority target for refactoring in the near future.
Only `problem` and `selfassessment` types use this field. All other modules set this to `NULL`.
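As an illustration of how `grade` and `max_grade` combine, a small pure-Python sketch of an unweighted percent score, treating `NULL` as `None` per the quirks noted above::

    def percent_score(grade, max_grade):
        """Unweighted percent for one courseware_studentmodule row."""
        # Either field can be NULL (None): grade briefly on first load,
        # and both fields for non-problem module types.
        if grade is None or not max_grade:
            return None
        return float(grade) / max_grade

    percent_score(3.0, 4.0)   # 0.75
    percent_score(None, 4.0)  # None -- a first-load quirk, not a zero score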
`done`
------
Ignore this field. It was supposed to be an indication whether something was finished, but was never properly used and is just `'na'` in every row.
`course_id`
-----------
The course that this row applies to, represented in the form org/course/run (ex: `MITx/6.002x/2012_Fall`). The same course content (same `module_id`) can be used in different courses, and a student's state needs to be tracked separately for each course.
************
Certificates
************
`certificates_generatedcertificate`
===================================
The `certificates_generatedcertificate` table tracks certificate state for students who have been graded after a course completes. Currently, the table is only populated when a course ends and a script is run to grade students who have completed the course::
+---------------+--------------+------+-----+---------+----------------+
| Field | Type | Null | Key | Default | Extra |
+---------------+--------------+------+-----+---------+----------------+
| id | int(11) | NO | PRI | NULL | auto_increment |
| user_id | int(11) | NO | MUL | NULL | |
| download_url | varchar(128) | NO | | NULL | |
| grade | varchar(5) | NO | | NULL | |
| course_id | varchar(255) | NO | MUL | NULL | |
| key | varchar(32) | NO | | NULL | |
| distinction | tinyint(1) | NO | | NULL | |
| status | varchar(32) | NO | | NULL | |
| verify_uuid | varchar(32) | NO | | NULL | |
| download_uuid | varchar(32) | NO | | NULL | |
| name | varchar(255) | NO | | NULL | |
| created_date | datetime | NO | | NULL | |
| modified_date | datetime | NO | | NULL | |
| error_reason | varchar(512) | NO | | NULL | |
+---------------+--------------+------+-----+---------+----------------+
`user_id`, `course_id`
----------------------
The table is indexed by user and course.
`status`
--------
Status may be one of these states:
* `unavailable`
* `generating`
* `regenerating`
* `deleting`
* `deleted`
* `downloadable`
* `notpassing`
* `restricted`
* `error`
After a course has been graded and certificates have been issued, the status will be one of:
* `downloadable`
* `notpassing`
* `restricted`
If the status is `downloadable` then the student passed the course and there will be a certificate available for download.
`download_url`
--------------
The `download_url` field contains the full URL to the certificate.
`download_uuid`, `verify_uuid`
------------------------------
These two UUIDs uniquely identify the certificate: `download_uuid` forms part of the URL used to download the certificate, and `verify_uuid` forms part of the URL used to verify it.
`distinction`
-------------
This was used for letters of distinction for 188.1x and is not being used for any current courses.
`name`
------
This field records the name of the student that was set at the time the student was graded and the certificate was generated.
`grade`
-------
The grade of the student recorded at the time the certificate was generated. This may differ from the current grade, since grading is only done once for a course when it ends.
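Tying these fields together, a minimal sketch (connection parameters are placeholders) that lists passing students for one course run::

    import MySQLdb

    db = MySQLdb.connect(host="localhost", user="reader", db="edxapp")
    cursor = db.cursor()
    cursor.execute(
        "SELECT user_id, name, grade, download_url "
        "FROM certificates_generatedcertificate "
        "WHERE course_id = %s AND status = 'downloadable'",
        ("MITx/6.002x/2012_Fall",),
    )
    for user_id, name, grade, url in cursor.fetchall():
        print(user_id, name, grade, url)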
# Makefile for Sphinx documentation
#
# You can set these variables from the command line.
SPHINXOPTS =
SPHINXBUILD = sphinx-build
PAPER =
BUILDDIR = build
# User-friendly check for sphinx-build
ifeq ($(shell which $(SPHINXBUILD) >/dev/null 2>&1; echo $$?), 1)
$(error The '$(SPHINXBUILD)' command was not found. Make sure you have Sphinx installed, then set the SPHINXBUILD environment variable to point to the full path of the '$(SPHINXBUILD)' executable. Alternatively you can add the directory with the executable to your PATH. If you don't have Sphinx installed, grab it from http://sphinx-doc.org/)
endif
Q_FLAG =
ifeq ($(quiet), true)
Q_FLAG = -Q
endif
# Internal variables.
PAPEROPT_a4 = -D latex_paper_size=a4
PAPEROPT_letter = -D latex_paper_size=letter
ALLSPHINXOPTS = $(Q_FLAG) -d $(BUILDDIR)/doctrees -c source $(PAPEROPT_$(PAPER)) $(SPHINXOPTS) source
# the i18n builder cannot share the environment and doctrees with the others
I18NSPHINXOPTS = $(PAPEROPT_$(PAPER)) $(SPHINXOPTS) source
.PHONY: help clean html dirhtml singlehtml pickle json htmlhelp qthelp devhelp epub latex latexpdf text man changes linkcheck doctest gettext
help:
@echo "Please use \`make <target>' where <target> is one of"
@echo " html to make standalone HTML files"
@echo " dirhtml to make HTML files named index.html in directories"
@echo " singlehtml to make a single large HTML file"
@echo " pickle to make pickle files"
@echo " json to make JSON files"
@echo " htmlhelp to make HTML files and a HTML help project"
@echo " qthelp to make HTML files and a qthelp project"
@echo " devhelp to make HTML files and a Devhelp project"
@echo " epub to make an epub"
@echo " latex to make LaTeX files, you can set PAPER=a4 or PAPER=letter"
@echo " latexpdf to make LaTeX files and run them through pdflatex"
@echo " text to make text files"
@echo " man to make manual pages"
@echo " texinfo to make Texinfo files"
@echo " info to make Texinfo files and run them through makeinfo"
@echo " gettext to make PO message catalogs"
@echo " changes to make an overview of all changed/added/deprecated items"
@echo " linkcheck to check all external links for integrity"
@echo " doctest to run all doctests embedded in the documentation (if enabled)"
clean:
-rm -rf $(BUILDDIR)/*
html:
$(SPHINXBUILD) -b html $(ALLSPHINXOPTS) $(BUILDDIR)/html
@echo
@echo "Build finished. The HTML pages are in $(BUILDDIR)/html."
dirhtml:
$(SPHINXBUILD) -b dirhtml $(ALLSPHINXOPTS) $(BUILDDIR)/dirhtml
@echo
@echo "Build finished. The HTML pages are in $(BUILDDIR)/dirhtml."
singlehtml:
$(SPHINXBUILD) -b singlehtml $(ALLSPHINXOPTS) $(BUILDDIR)/singlehtml
@echo
@echo "Build finished. The HTML page is in $(BUILDDIR)/singlehtml."
pickle:
$(SPHINXBUILD) -b pickle $(ALLSPHINXOPTS) $(BUILDDIR)/pickle
@echo
@echo "Build finished; now you can process the pickle files."
json:
$(SPHINXBUILD) -b json $(ALLSPHINXOPTS) $(BUILDDIR)/json
@echo
@echo "Build finished; now you can process the JSON files."
htmlhelp:
$(SPHINXBUILD) -b htmlhelp $(ALLSPHINXOPTS) $(BUILDDIR)/htmlhelp
@echo
@echo "Build finished; now you can run HTML Help Workshop with the" \
".hhp project file in $(BUILDDIR)/htmlhelp."
qthelp:
$(SPHINXBUILD) -b qthelp $(ALLSPHINXOPTS) $(BUILDDIR)/qthelp
@echo
@echo "Build finished; now you can run "qcollectiongenerator" with the" \
".qhcp project file in $(BUILDDIR)/qthelp, like this:"
@echo "# qcollectiongenerator $(BUILDDIR)/qthelp/edX.qhcp"
@echo "To view the help file:"
@echo "# assistant -collectionFile $(BUILDDIR)/qthelp/edX.qhc"
devhelp:
$(SPHINXBUILD) -b devhelp $(ALLSPHINXOPTS) $(BUILDDIR)/devhelp
@echo
@echo "Build finished."
@echo "To view the help file:"
@echo "# mkdir -p $$HOME/.local/share/devhelp/edX"
@echo "# ln -s $(BUILDDIR)/devhelp $$HOME/.local/share/devhelp/edX"
@echo "# devhelp"
epub:
$(SPHINXBUILD) -b epub $(ALLSPHINXOPTS) $(BUILDDIR)/epub
@echo
@echo "Build finished. The epub file is in $(BUILDDIR)/epub."
latex:
$(SPHINXBUILD) -b latex $(ALLSPHINXOPTS) $(BUILDDIR)/latex
@echo
@echo "Build finished; the LaTeX files are in $(BUILDDIR)/latex."
@echo "Run \`make' in that directory to run these through (pdf)latex" \
"(use \`make latexpdf' here to do that automatically)."
latexpdf:
$(SPHINXBUILD) -b latex $(ALLSPHINXOPTS) $(BUILDDIR)/latex
@echo "Running LaTeX files through pdflatex..."
$(MAKE) -C $(BUILDDIR)/latex all-pdf
@echo "pdflatex finished; the PDF files are in $(BUILDDIR)/latex."
text:
$(SPHINXBUILD) -b text $(ALLSPHINXOPTS) $(BUILDDIR)/text
@echo
@echo "Build finished. The text files are in $(BUILDDIR)/text."
man:
$(SPHINXBUILD) -b man $(ALLSPHINXOPTS) $(BUILDDIR)/man
@echo
@echo "Build finished. The manual pages are in $(BUILDDIR)/man."
texinfo:
$(SPHINXBUILD) -b texinfo $(ALLSPHINXOPTS) $(BUILDDIR)/texinfo
@echo
@echo "Build finished. The Texinfo files are in $(BUILDDIR)/texinfo."
@echo "Run \`make' in that directory to run these through makeinfo" \
"(use \`make info' here to do that automatically)."
info:
$(SPHINXBUILD) -b texinfo $(ALLSPHINXOPTS) $(BUILDDIR)/texinfo
@echo "Running Texinfo files through makeinfo..."
make -C $(BUILDDIR)/texinfo info
@echo "makeinfo finished; the Info files are in $(BUILDDIR)/texinfo."
gettext:
$(SPHINXBUILD) -b gettext $(I18NSPHINXOPTS) $(BUILDDIR)/locale
@echo
@echo "Build finished. The message catalogs are in $(BUILDDIR)/locale."
changes:
$(SPHINXBUILD) -b changes $(ALLSPHINXOPTS) $(BUILDDIR)/changes
@echo
@echo "The overview file is in $(BUILDDIR)/changes."
linkcheck:
$(SPHINXBUILD) -b linkcheck $(ALLSPHINXOPTS) $(BUILDDIR)/linkcheck
@echo
@echo "Link check complete; look for any errors in the above output " \
"or in $(BUILDDIR)/linkcheck/output.txt."
doctest:
$(SPHINXBUILD) -b doctest $(ALLSPHINXOPTS) $(BUILDDIR)/doctest
@echo "Testing of doctests in the sources finished, look at the " \
"results in $(BUILDDIR)/doctest/output.txt."
# -*- coding: utf-8 -*-
#pylint: disable=C0103
#pylint: disable=W0622
#pylint: disable=W0212
#pylint: disable=W0613
import sys, os
on_rtd = os.environ.get('READTHEDOCS', None) == 'True'
sys.path.append('../../../')
from docs.shared.conf import *
# Add any paths that contain templates here, relative to this directory.
templates_path.append('source/_templates')
# Add any paths that contain custom static files (such as style sheets) here,
# relative to this directory. They are copied after the builtin static files,
# so a file named "default.css" will overwrite the builtin "default.css".
html_static_path.append('source/_static')
# If extensions (or modules to document with autodoc) are in another directory,
# add these directories to sys.path here. If the directory is relative to the
# documentation root, use os.path.abspath to make it absolute, like shown here.
sys.path.insert(0, os.path.abspath('../../..'))
root = os.path.abspath('../../..')
sys.path.append(root)
sys.path.append(os.path.join(root, "common/djangoapps"))
sys.path.append(os.path.join(root, "common/lib"))
sys.path.append(os.path.join(root, "common/lib/sandbox-packages"))
sys.path.append(os.path.join(root, "lms/djangoapps"))
sys.path.append(os.path.join(root, "lms/lib"))
sys.path.append(os.path.join(root, "cms/djangoapps"))
sys.path.append(os.path.join(root, "cms/lib"))
sys.path.insert(0, os.path.abspath(os.path.normpath(os.path.dirname(__file__)
+ '/../../')))
sys.path.append('.')
# django configuration - careful here
if on_rtd:
os.environ['DJANGO_SETTINGS_MODULE'] = 'lms'
else:
os.environ['DJANGO_SETTINGS_MODULE'] = 'lms.envs.test'
# -- General configuration -----------------------------------------------------
# Add any Sphinx extension module names here, as strings. They can be extensions
# coming with Sphinx (named 'sphinx.ext.*') or your custom ones.
extensions = [
'sphinx.ext.autodoc', 'sphinx.ext.doctest', 'sphinx.ext.intersphinx',
'sphinx.ext.todo', 'sphinx.ext.coverage', 'sphinx.ext.pngmath',
'sphinx.ext.mathjax', 'sphinx.ext.viewcode']
# List of patterns, relative to source directory, that match files and
# directories to ignore when looking for source files.
exclude_patterns = ['build']
# Output file base name for HTML help builder.
htmlhelp_basename = 'edXDocs'
# --- Mock modules ------------------------------------------------------------
# Mock all the modules that the readthedocs build can't import
import mock
class Mock(object):
def __init__(self, *args, **kwargs):
pass
def __call__(self, *args, **kwargs):
return Mock()
@classmethod
def __getattr__(cls, name):
if name in ('__file__', '__path__'):
return '/dev/null'
elif name[0] == name[0].upper():
mockType = type(name, (), {})
mockType.__module__ = __name__
return mockType
else:
return Mock()
# The list of modules and submodules that we know give RTD trouble.
# Make sure you've tried including the relevant package in
# docs/share/requirements.txt before adding to this list.
MOCK_MODULES = [
'numpy',
'matplotlib',
'matplotlib.pyplot',
'scipy.interpolate',
'scipy.constants',
'scipy.optimize',
]
if on_rtd:
for mod_name in MOCK_MODULES:
sys.modules[mod_name] = Mock()
# -----------------------------------------------------------------------------
# from http://djangosnippets.org/snippets/2533/
# autogenerate models definitions
import inspect
import types
from HTMLParser import HTMLParser
def force_unicode(s, encoding='utf-8', strings_only=False, errors='strict'):
"""
Similar to smart_unicode, except that lazy instances are resolved to
strings, rather than kept as lazy objects.
If strings_only is True, don't convert (some) non-string-like objects.
"""
if strings_only and isinstance(s, (types.NoneType, int)):
return s
if not isinstance(s, basestring,):
if hasattr(s, '__unicode__'):
s = unicode(s)
else:
s = unicode(str(s), encoding, errors)
elif not isinstance(s, unicode):
s = unicode(s, encoding, errors)
return s
class MLStripper(HTMLParser):
def __init__(self):
self.reset()
self.fed = []
def handle_data(self, d):
self.fed.append(d)
def get_data(self):
return ''.join(self.fed)
def strip_tags(html):
s = MLStripper()
s.feed(html)
return s.get_data()
def process_docstring(app, what, name, obj, options, lines):
"""Autodoc django models"""
# This causes import errors if left outside the function
from django.db import models
# If you want extract docs from django forms:
# from django import forms
# from django.forms.models import BaseInlineFormSet
# Only look at objects that inherit from Django's base MODEL class
if inspect.isclass(obj) and issubclass(obj, models.Model):
# Grab the field list from the meta class
fields = obj._meta._fields()
for field in fields:
# Decode and strip any html out of the field's help text
help_text = strip_tags(force_unicode(field.help_text))
# Decode and capitalize the verbose name, for use if there isn't
# any help text
verbose_name = force_unicode(field.verbose_name).capitalize()
if help_text:
# Add the model field to the end of the docstring as a param
# using the help text as the description
lines.append(u':param %s: %s' % (field.attname, help_text))
else:
# Add the model field to the end of the docstring as a param
# using the verbose name as the description
lines.append(u':param %s: %s' % (field.attname, verbose_name))
# Add the field's type to the docstring
lines.append(u':type %s: %s' % (field.attname, type(field).__name__))
return lines
def setup(app):
"""Setup docsting processors"""
#Register the docstring processor with sphinx
app.connect('autodoc-process-docstring', process_docstring)
.. module:: transcripts
======================================================
Developer’s workflow for the timed transcripts in CMS.
======================================================
:download:`Multipage pdf version of Timed Transcripts workflow. <transcripts_workflow.pdf>`
:download:`Open office graph version (source for pdf). <transcripts_workflow.odg>`
:download:`List of implemented acceptance tests. <transcripts_acceptance_tests.odt>`
Description
===========
Timed Transcripts functionality is added in a separate tab of the Video module editor, which is active by default. This tab is called `Basic`; the other tab is called `Advanced` and contains the default metadata fields.
The `Basic` tab is a simplified representation of the `Advanced` tab that speeds up adding a Video module with transcripts to the course.
For more precise adjustments, use the `Advanced` tab.
The front-end part of the `Basic` tab has 4 editors/views:
* Display name
* 3 editors for inserting Video URLs.
Video URL fields might contain 3 kinds of URLs:
* **YouTube** link. The following formats are supported:
* http://www.youtube.com/watch?v=OEoXaMPEzfM&feature=feedrec_grec_index ;
* http://www.youtube.com/user/IngridMichaelsonVEVO#p/a/u/1/OEoXaMPEzfM ;
* http://www.youtube.com/v/OEoXaMPEzfM?fs=1&amp;hl=en_US&amp;rel=0 ;
* http://www.youtube.com/watch?v=OEoXaMPEzfM#t=0m10s ;
* http://www.youtube.com/embed/OEoXaMPEzfM?rel=0 ;
* http://www.youtube.com/watch?v=OEoXaMPEzfM ;
* http://youtu.be/OEoXaMPEzfM ;
* **MP4** video source;
* **WEBM** video source.
Each of these kinds of URL can be specified just **ONCE**; otherwise, an error message appears on the front end.
After an editor is filled in, the **transcripts/check** method is invoked with the parameters described below (see `API`_). Depending on conditions that are also described below (see `Commands`_), this method responds with a *command*, and the front end renders the appropriate view.
Each view can have specific actions. The supported actions are:
* Download Timed Transcripts;
* Upload Timed Transcripts;
* Import Timed Transcripts from YouTube;
* Replace edX Timed Transcripts by Timed Transcripts from YouTube;
* Choose Timed Transcripts;
* Use existing Timed Transcripts.
All of these actions are handled by 7 API methods described below (see `API`_).
Because rollback functionality isn't implemented yet, the user cannot revert some of these actions by clicking the `Cancel` button.
To remove the timed transcripts file from the video, go to the `Advanced` tab, clear the `sub` field, and then save the changes.
Commands
========
From the front-end point of view, a command is just a reference to the needed view, together with the actions the user can take, depending on the conditions described below (see edx-platform/cms/static/js/views/transcripts/message_manager.js:21-29). A condensed sketch of these rules in code follows the list.
So,
* **IF** YouTube transcripts are present locally **AND** on the YouTube server **AND** the two transcript files are **DIFFERENT**, we respond with the `replace` command: ask the user to replace the local transcript file with YouTube's.
* **IF** YouTube transcripts are present **ONLY** locally, we respond with the `found` command.
* **IF** YouTube transcripts are present **ONLY** on the YouTube server, we respond with the `import` command: ask the user to import the transcript file from the YouTube server.
* **IF** the player is in HTML5 video mode, meaning that **ONLY** HTML5 sources have been added:

  * **IF** just 1 HTML5 source was added, or both HTML5 sources have **EQUAL** transcript files, we respond with the `found` command.
  * **OTHERWISE**, when 2 HTML5 sources were added and the transcript files found are **DIFFERENT**, we respond with the `choose` command. In this case, the user must choose which transcript file to use.

* **IF** we are working with just 1 field **AND** the item.sub field **HAS** a value **AND** the user fills the editor/view with a new value/video source that has no transcript file, we respond with the `use_existing` command. In this case, the user can keep using the transcript file from the previous video.
* **OTHERWISE**, we respond with the `not_found` command.
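The sketch below condenses those rules in Python; the argument names are illustrative, not the actual variables used in message_manager.js:

.. code::

    def choose_command(youtube_local, youtube_server, youtube_diff,
                       html5_with_subs, html5_subs_differ,
                       current_item_subs, new_source_without_subs):
        # Condensed restatement of the rules in the list above.
        if youtube_local and youtube_server and youtube_diff:
            return "replace"    # ask user to take YouTube's transcripts
        if youtube_local and not youtube_server:
            return "found"
        if youtube_server and not youtube_local:
            return "import"
        if html5_with_subs == 1 or (html5_with_subs == 2
                                    and not html5_subs_differ):
            return "found"
        if html5_with_subs == 2 and html5_subs_differ:
            return "choose"     # user picks which transcript to keep
        if current_item_subs and new_source_without_subs:
            return "use_existing"
        return "not_found"

    choose_command(youtube_local=False, youtube_server=True,
                   youtube_diff=False, html5_with_subs=0,
                   html5_subs_differ=False, current_item_subs="",
                   new_source_without_subs=False)
    # -> 'import'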
Synchronization and Saving workflow
====================================
For now, the saving mechanism works as follows:
When the `Save` button is clicked, the **ModuleEdit** class (see edx-platform/cms/static/coffee/src/views/module_edit.coffee:83-101) grabs the values from all modified metadata fields and sends all of this data to the server.
Because Timed Transcripts is module-specific functionality, the ModuleEdit class is not extended. Instead, to apply all the changes the user made in the `Basic` tab, we use the synchronization mechanism of the TabsEditingDescriptor class. That mechanism lets us perform the needed actions on tab switching and on save (see edx-platform/cms/templates/widgets/video/transcripts.html).
On tab switching and when the save action is invoked, JavaScript code synchronizes the two collections (the Metadata collection and the Transcripts collection). You can see the synchronization logic in edx-platform/cms/static/js/views/transcripts/editor.js:72-219. As a result, the Metadata fields always hold the actual data.
Special cases
=============
1. Status message `Timed Transcript Conflict` (Choose), where one of 2 transcript files should be chosen **-->** click the `Save` button without choosing **-->** open the editor **-->** the status message `Timed Transcript Found` is shown, and a transcript file is chosen at random.
2. Status message `Timed Transcript Conflict` (Choose), where one of 2 transcript files should be chosen **-->** open the `Advanced` tab without choosing **-->** return to the `Basic` tab **-->** the status message `Timed Transcript Found` is shown, and a transcript file is chosen at random.
3. The same issues occur with `Timed Transcript Not Updated` (Use existing).
API
===
We provide 7 API methods to work with timed transcripts
(edx-platform/cms/urls.py:23-29):
* transcripts/upload
* transcripts/download
* transcripts/check
* transcripts/choose
* transcripts/replace
* transcripts/rename
* transcripts/save
**"transcripts/upload"** method is used for uploading SRT transcripts for the
HTML5 and YouTube video modules.
*Method:*
POST
*Parameters:*
- id - location ID of the Xmodule
- video_list - list with information about the links currently passed in the editor/view.
- file - BLOB file
*Response:*
HTTP 400
or
HTTP 200 + JSON:
.. code::
{
status: 'Success' or 'Error',
subs: value of uploaded and saved sub field in the video item.
}
**"transcripts/download"** method is used for downloading SRT transcripts for the
HTML5 and YouTube video modules.
*Method:*
GET
*Parameters:*
- id - location ID of the Xmodule
- subs_id - file name that is used to find transcripts file in the storage.
*Response:*
HTTP 404
or
HTTP 200 + BLOB of SRT file
**"transcripts/check"** method is used for checking availability of timed transcripts
for the video module.
*Method:*
GET
*Parameters:*
- id - location ID of the Xmodule
*Response:*
HTTP 400
or
HTTP 200 + JSON:
.. code::
{
command: string telling the front end what to do and what to show to the user,
subs: file name of transcripts file that was found in the storage,
html5_local: [] or [True] or [True, True],
is_youtube_mode: True/False,
youtube_local: True/False,
youtube_server: True/False,
youtube_diff: True/False,
current_item_subs: string with value of item.sub field,
status: 'Error' or 'Success'
}
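For example, a script might call this method as follows (the host, port, and URL prefix are assumptions for a local Studio instance; authentication is omitted for brevity):

.. code::

    import requests

    # Host/port are placeholders; the location ID is a hypothetical example.
    resp = requests.get(
        "http://localhost:8001/transcripts/check",
        params={"id": "i4x://edX/DemoX/video/Welcome"},
    )
    if resp.status_code == 200:
        data = resp.json()
        print(data["command"], data["status"])
    else:
        print("Bad request:", resp.status_code)  # HTTP 400 per the spec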
**"transcripts/choose"** method is used for choosing which transcripts file should be used.
*Method:*
GET
*Parameters:*
- id - location ID of the Xmodule
- video_list - list with information about the links currently passed in the editor/view.
- html5_id - file name of chosen transcripts file.
*Response:*
HTTP 200 + JSON:
.. code::
{
status: 'Success' or 'Error',
subs: value of uploaded and saved sub field in the video item.
}
**"transcripts/replace"** method is used for handling `import` and `replace` commands.
Invoking this method starts downloading a new transcripts file from the YouTube server.
*Method:*
GET
*Parameters:*
- id - location ID of the Xmodule
- video_list - list with information about the links currently passed in the editor/view.
*Response:*
HTTP 400
or
HTTP 200 + JSON:
.. code::
{
status: 'Success' or 'Error',
subs: value of uploaded and saved sub field in the video item.
}
**"transcripts/rename"** method is used for handling `use_existing` command.
After this method is invoked, the current transcripts file is copied and renamed to match the name of the current video passed in the editor/view.
*Method:*
GET
*Parameters:*
- id - location ID of the Xmodule
- video_list - list with information about the links currently passed in the editor/view.
*Response:*
HTTP 400
or
HTTP 200 + JSON:
.. code::
{
status: 'Success' or 'Error',
subs: value of uploaded and saved sub field in the video item.
}
**"transcripts/save"** method is used for handling `save` command.
After this method is invoked, all changes made up to that point are saved.
*Method:*
GET
*Parameters:*
- id - location ID of the Xmodule
- metadata - new values for the metadata fields.
- currents_subs - list with the file names of videos passed in the editor/view.
*Response:*
HTTP 400
or
HTTP 200 + JSON:
.. code::
{
status: 'Success' or 'Error'
}
Transcripts modules:
====================
.. automodule:: contentstore.views.transcripts_ajax
:members:
:show-inheritance:
.. automodule:: contentstore.transcripts_utils
:members:
:show-inheritance:
......@@ -5,8 +5,8 @@ import sys, os
on_rtd = os.environ.get('READTHEDOCS', None) == 'True'
sys.path.append(os.path.abspath('../../../../'))
sys.path.append(os.path.abspath('../../../'))
sys.path.append(os.path.abspath('../../'))
from docs.shared.conf import *
......@@ -30,4 +30,4 @@ copyright = u'2013, edX Documentation Team'
# The short X.Y version.
version = ''
# The full version, including alpha/beta/rc tags.
release = ''
\ No newline at end of file
release = ''
After you submit your response, your score will be available shortly - sometimes within a few
minutes. For information about how to access your score after your response has been graded,
see :ref:`ORA Access Scores`.
If you want to experiment with open response assessments, you can try out the open
assessment problems in the `EdX Demo <https://courses.edx.org/courses/edX/DemoX/Demo_Course/info>`_
course. To get started, go
to the `Self-Assessed Essay <https://courses.edx.org/courses/edX/DemoX/Demo_Course/courseware/graded_interactions/machine_grading/2>`_
unit, and then enter a response in the **Response** field under the
question. You can enter your own response, or you can use one of the sample
responses in the `Sample Answers <https://courses.edx.org/courses/edX/DemoX/Demo_Course/courseware/graded_interactions/machine_grading/6/>`_
unit.
.. _ORA Submit a Response:
Submit a Response
-----------------
Submitting a response is slightly different depending on whether you're submitting
a written response or uploading a file.
#. Enter the response that you want to submit.
- If you're submitting a written response, type your response in the
**Response** field.
- If you're uploading a file, click **Choose File** under the **Response**
field. In the dialog box that opens, select the file that you want to upload,
and then click **Open**.
#. Click **Submit**, and then click **OK** in the dialog box to continue.
.. note:: If you want to save your response and work on it again later, click **Save**.
An "Answer saved, but not yet submitted" message appears directly under the **Save** and
**Submit** buttons.
After you submit your response, the assessment types start running in the order in which they
appear in the problem. For more information,
see :ref:`ORA Self Assessment`, :ref:`ORA Peer Assessment`, or :ref:`ORA AI Assessment`.
.. _ORA Self Assessment:
Self Assessment
---------------
.. note::
You can delete this section if your ORA problem doesn't use self assessments.
In a self assessment, the rubric for the problem appears below your response immediately
after you submit the response. You then assess your response based on the rubric.
Perform a Self Assessment
~~~~~~~~~~~~~~~~~~~~~~~~~
#. Submit a response to a self-assessed ORA problem.
#. When the rubric appears, compare your response with the rubric, and select the
option that you think is appropriate for each category.
.. image:: /Images/Rubric1.gif
#. Click **Submit assessment**.
Your response appears, and you can see the scores that you gave
yourself.
.. _ORA Peer Assessment:
Peer Assessment
---------------
.. note::
You can delete this section if your ORA problem doesn't use peer assessments.
In a peer assessment, several students in the course grade your response while you grade
other students' responses. You have to grade a number of your peers' responses before
you receive your score. (After you grade the minimum number of responses required to
receive your score, you can grade as many additional responses as you want.)
After you submit your response for grading, the following
message appears under your response.
**Your response has been submitted. Please check back later for your grade.**
.. warning:: In peer assessments, the **due date** is the date by which you must not only submit your own response, but finish grading the required number of your peers' responses.
Peer Grading Interface
~~~~~~~~~~~~~~~~~~~~~~
The area where you'll grade responses is the *peer
grading interface*. Each course that has peer assessments has at least
one peer grading interface. There may be just one peer grading interface
for the whole course, or each individual problem may have its own
separate peer grading interface.
.. image:: /Images/PGI_FromOEC_2Problems.gif
Perform a Peer Assessment
~~~~~~~~~~~~~~~~~~~~~~~~~
.. warning:: In peer assessments, the **due date** is the date by which you must not only submit your own response, but finish grading the required number of your peers' responses.
Performing a peer assessment has several steps. You can find detailed instructions for each step
below.
#. :ref:`Access Responses`, either in the body of the
course or from the **Open Ended Console** page.
#. :ref:`Learn to Grade` (this process is called
*calibration*).
#. :ref:`Grade Responses` from other students.
.. _Access Responses:
Step 1: Access responses from other students
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
.. note::
Modify the content in this section according to
your course. For example, if your students can only grade by using the **Open
Ended Console** page, change the introductory sentence below, and delete the
second and third bullets.
**Note** *You can only grade a response if you've submitted a response to the
question, an instructor has already graded at least 20 responses, and
there are more essays from other students left to grade. If you haven't submitted
a response or no responses are available for grading, you see a yellow message in the
interface.*
.. image:: /Images/PAStudent_NoSubmissions.gif
There are several ways to access other students' responses, depending on
the way that the course is set up.
- Through the **Open Ended Console** page. This option is always
available for every course. To access the **Open Ended Console** page,
click the **Open Ended Panel** tab at the top of any page in the course.
When you see the list of problems that have responses available to grade,
click the name of the problem that you want to open.
.. image:: /Images/PGI_FromOEC_2Problems.gif
- Through the courseware, in a specific unit. This option is only available if the
instructor has included a peer grading interface for the problem in the body of
the course. To access responses in the courseware, go to the unit that contains
the open response assessment problem. Scroll down past the response that you
submitted until you see the peer grading interface that appears below the problem.
.. image:: /Images/PGI_InUnitComposite.gif
- Through the courseware, in a separate section. This option may not be available
for your course. If it is, you'll see the section for peer grading in the
course accordion on the left side of your screen. For example, MIT's 6.00x:
Introduction to Computer Science and Programming course has a separate section
that holds all the course peer grading interfaces. To access peer grading for
a problem, click the problem name.
.. image:: /Images/PGI_Multiple-600x.gif
.. _Learn to Grade:
Step 2: Learn to grade
^^^^^^^^^^^^^^^^^^^^^^
Before you grade your peers' responses, you must learn to grade
the same way that an instructor would. In this process, called
*calibration*, you'll grade several responses that an instructor has already
graded. If your grading is similar to the instructor's, you can begin grading
other students' responses to the question.
#. Click the name of the problem. When the **Learning to grade** page
opens, click **Start learning to grade**.
#. When the problem opens, compare the student's response with the
rubric. Select the options that best apply to the response, and then
click **Submit**.
#. Review the **How did I do?** message that you receive, and then click
**Continue**.
.. image:: /Images/PG_Calibration_Correct.gif
.. image:: /Images/PG_Calibration_Incorrect.gif
When you click **Continue**, the next student response appears for
you to grade, and you see a yellow **Calibration essay saved** message in
the top left corner of the page.
#. Continue to grade responses. After you grade the required number of
responses correctly, you receive a **Ready to grade!** message. You
can then start to grade responses for other students.
.. _Grade Responses:
Step 3: Grade responses
^^^^^^^^^^^^^^^^^^^^^^^
When you grade a peer assessment response, you can not only select
options in the rubric, but also provide additional feedback for the
student who submitted the response.
#. When the response opens, select the options in the rubric that you
feel best apply to the response, as you did in the calibration process.
If you have concerns about the response, you can select other
options to flag the response for instructor review. You don't have to fill
out the rubric before you select these options.
- If you aren't sure how to grade the response, select the **I am unsure about
the scores I have given above** check box.
- If the response is offensive, or if you suspect that it contains plagiarized
material, select the **This submission has explicit, offensive, or (I suspect)
plagiarized content** check box.
#. Under **Written Feedback**, write a comment about the score that you
gave the response.
#. Click **Submit**. You see a **Successfully saved your feedback**
message at the top of the screen, and the next response opens.
#. Continue to grade until you've graded the required number of
responses (usually 3). When you've graded enough responses, you
receive the following message.
.. image:: /Images/DoneGrading.gif
When you see this message, you can access the score for your own
response. For more information, see :ref:`ORA Access Scores`.
If you want to grade additional responses at any time, you can go back
to the **Peer Grading** page and click the name of the problem that you want
to continue grading.
.. note:: When a response opens for you to grade, it leaves the current "grading pool"
that other instructors or students are grading from, which prevents other
instructors or students from
grading the response while you are working on it. If you do not submit a score
for this response within 30 minutes, the response returns to the grading pool
(so that it again becomes available for others to grade), even if you still have
the response open on your screen.
If the response returns to the grading pool (because the 30 minutes have passed),
but the response is still open on your screen, you can still submit feedback for
that response. If another instructor or student grades the response after it returns to the
grading pool but before you submit your feedback, the response receives two grades.
If you click your browser's **Back** button to return to the problem list before you
click **Submit** to submit your feedback for a response, the response stays outside
the grading pool until 30 minutes have passed. When the response returns to the
grading pool, you can grade it.
.. _ORA AI Assessment:
Artificial Intelligence (AI) Assessment
---------------------------------------
.. note::
You can delete this section if your ORA problem doesn't use AI assessments.
In an AI assessment, an instructor grades a sample set of student responses to the
open response assessment problem. A machine learning algorithm then creates a model
based on the instructor's scores and grades the remaining students' responses.
After you submit your response to an AI assessment, the following message appears under your
response.
**Your response has been submitted. Please check back later for your grade.**
Depending on the time that it takes for the instructor to grade a sample set of
responses, you may receive your grade within minutes, or you may have to wait
a few days. You won't receive a notification when your score is ready, so keep
checking back.
For more information about accessing your scores, see :ref:`ORA Access Scores`.
.. _ORA Access Scores:
Access Scores and Feedback
--------------------------
.. note::
Modify the text in this section to apply to your course.
For *self assessments*, the score that you give yourself appears as soon as you submit
the score.
For *peer assessments* and *AI assessments*, you'll access your scores through the **Open Ended Console** page.
#. In the EdX Demo course, click the **Open Ended Panel** tab at the top
of the page.
#. On the **Open Ended Console** page, click **Problems You Have
Submitted**.
#. On the **Open Ended Problems** page, check the **Status** column to
see whether your responses have been graded. The status for each problem is
either **Waiting to be Graded** or **Finished**.
#. If **Finished** appears in the **Status** column for the problem you want,
click the name of the problem to see your score for that problem. When you
click the name of the problem, the problem opens in the courseware.
For both AI and peer assessments, the score appears below your response
in an abbreviated version of the rubric.
.. image:: /Images/AIScoredResponse.gif
For peer assessments, you can
also see the written feedback that your response received from different
graders.
.. image:: /Images/PeerScoredResponse.gif
If you want to see the full rubric for either an AI or peer assessment,
click **Toggle Full Rubric**.
.. note:: For a peer assessment, if you haven't yet graded enough
problems to see your score, you receive a message that lets you know how
many problems you still need to grade.
.. image:: /Images/FeedbackNotAvailable.gif
For more information about grading peer assessments, see :ref:`ORA Peer Assessment`.
Resubmitting a Response
-----------------------
.. note::
You can delete this section if you don't allow students to submit multiple responses.
Some open response assessments allow multiple attempts. For these
problems, a **New Submission** button appears below your original
response.
If you want to answer the question again, click **New Submission** to
clear your former response, and click **OK** in the dialog box that
appears. You can then enter a new response for the problem.
.. _Tools:
#############################
Working with Tools
#############################
***************************
Overview of Tools in Studio
***************************
In addition to text, images, and different types of problems, Studio allows you
to add customized learning tools such as word clouds to your course.
- :ref:`LTI Component`: LTI components allow you to add an external learning application
or textbook to Studio.
- :ref:`Word Cloud`: Word clouds arrange text that students enter - for example, in
response to a question - into a colorful graphic that students can see.
- :ref:`Zooming image`: Zooming images allow you to enlarge sections of an image so
that students can see the section in detail.
.. _LTI Component:
**************
LTI Components
**************
You may have discovered or developed an external learning application
that you want to add to your online course. Or, you may have a digital
copy of your textbook that uses a format other than PDF. You can add
external learning applications or textbooks to Studio by using a
Learning Tools Interoperability (LTI) component. The LTI component is
based on the `IMS Global Learning Tools
Interoperability <http://www.imsglobal.org/LTI/v1p1p1/ltiIMGv1p1p1.html>`_
version 1.1.1 specifications.
You can use an LTI component in two ways.
- You can add external LTI content that is displayed only, such as
textbook content that doesn’t require a student response.
- You can add external LTI content that requires a student response. An
external provider will grade student responses.
Before you create an LTI component from an external LTI provider in a
unit, you need the following information.
- The **LTI ID**. This is a value that you create to refer to the external LTI
provider. You should create an LTI ID that you can remember easily.
The LTI ID can contain uppercase and lowercase alphanumeric
characters, as well as underscore characters (_). It can contain any
number of characters. For example, you may create an LTI ID that is
as simple as **test_lti_id**, or your LTI ID may be a string of
numbers and letters such as **id_21441** or
**book_lti_provider_from_new_york**.
- The **client key**. This value is a sequence of characters that you
obtain from the LTI provider. The client key is used for
authentication and can contain any number of characters. For example,
your client key may be **b289378-f88d-2929-ctools.umich.edu**.
- The **client secret**. This value is a sequence of characters that
you obtain from the LTI provider. The client secret is used for
authentication and can contain any number of characters. For example,
your client secret may be something as simple as **secret**, or it
may be a string of numbers and letters such as **23746387264** or
**yt4984yr8**.
- The **launch URL** (if the LTI component requires a student response
that will be graded). You obtain the launch URL from the LTI
provider. The launch URL is the URL that Studio sends to the external
LTI provider so that the provider can send back students’ grades.
Create an LTI Component
-----------------------
Creating an LTI component in your course has three steps.
#. Add LTI to the **advanced_modules** policy key.
#. Register the LTI provider.
#. Create the LTI component in an individual unit.
Step 1. Add LTI to the Advanced Modules Policy Key
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
#. On the **Settings** menu, click **Advanced Settings**.
#. On the **Advanced Settings** page, locate the **Manual Policy
Definition** section, and then locate the **advanced_modules**
policy key (this key is at the top of the list).
.. image:: Images/AdvancedModulesEmpty.gif
:alt: Image of the advanced_modules key in the Advanced Settings page
#. Under **Policy Value**, place your cursor between the brackets, and
then enter **"lti"**. Make sure to include the quotation marks, but
not the period.
.. image:: Images/LTI_Policy_Key.gif
:alt: Image of the advanced_modules key in the Advanced Settings page, with the lti value added
**Note** If the **Policy Value** field already contains text, place your
cursor directly after the closing quotation mark for the final item, and
then enter a comma followed by **"lti"** (make sure that you include the
quotation marks). An example of the resulting value appears after these steps.
#. At the bottom of the page, click **Save Changes**.
The page refreshes automatically. At the top of the page,
you see a notification that your changes have been saved.
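When you save, the **advanced_modules** value is a JSON-style list of module
names. A minimal sketch of the saved value, assuming no other advanced modules
were enabled before you added LTI, is the following.

::

   ["lti"]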
Step 2. Register the External LTI Provider
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
To register the external LTI provider, you'll add the LTI ID, the client
key, and the client secret in the **lti_passports** policy key.
#. On the **Advanced Settings** page, locate the **lti_passports**
policy key.
#. Under **Policy Value**, place your cursor between the brackets, and
then enter the LTI ID, client key, and client secret in the following
format (make sure to include the quotation marks and the colons).
::
"lti_id:client_key:client_secret"
For example, the value in the **lti_passports** field may be the following.
::
"test_lti_id:b289378-f88d-2929-ctools.umich.edu:secret"
If you have multiple LTI providers, separate the values with a comma.
Make sure to surround each entry with quotation marks.
::
"test_lti_id:b289378-f88d-2929-ctools.umich.edu:secret",
"id_21441:b289378-f88d-2929-ctools.school.edu:23746387264",
"book_lti_provider_from_new_york:b289378-f88d-2929-ctools.company.com:yt4984yr8"
#. At the bottom of the page, click **Save Changes**.
The page refreshes automatically. At the top of the page,
you see a notification that your changes have been saved, and you can
see your entries in the **lti_passports** policy key.
Step 3. Add the LTI Component to a Unit
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
#. In the unit where you want to create the problem, click **Advanced**
under **Add New Component**, and then click **LTI**.
#. In the component that appears, click **Edit**.
#. In the component editor, set the options that you want. See the table
below for a description of each option.
#. Click **Save**.
.. list-table::
:widths: 10 80
:header-rows: 1
* - `Setting`
- Description
* - `Display Name`
- Specifies the name of the problem. This name appears above the problem and in
the course ribbon at the top of the page in the courseware.
* - `custom_parameters`
- Enables you to add one or more custom parameters. For example, if you've added an
e-book, a custom parameter may include the page that your e-book should open to.
You could also use a custom parameter to set the background color of the LTI component.
Every custom parameter has a key and a value. You must add the key and value in the following format.
::
key=value
For example, a custom parameter may resemble the following.
::
bgcolor=red
page=144
To add a custom parameter, click **Add**.
* - `graded`
- Indicates whether the grade for the problem counts towards the student's total grade. By
default, this value is set to **False**.
* - `has_score`
- Specifies whether the problem has a numerical score. By default, this value
is set to **False**.
* - `launch_url`
- Lists the URL that Studio sends to the external LTI provider so that the provider
can send back students' grades. This setting is only used if **graded** is set to
**True**.
* - `lti_id`
- Specifies the LTI ID for the external LTI provider. This value must be the same
LTI ID that you entered on the **Advanced Settings** page.
* - `open_in_a_new_page`
- Indicates whether the problem opens in a new page. If you set this value to **True**,
the student clicks a link that opens the LTI content in a new window. If you set
this value to **False**, the LTI content opens in an IFrame in the current page.
* - `weight`
- Specifies the number of points possible for the problem. By default, if an
external LTI provider grades the problem, the problem is worth 1 point, and
a student’s score can be any value between 0 and 1.
For more information about problem weights and computing point scores, see :ref:`Problem Weight`.
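As a worked example of the arithmetic (assuming, as the default case above
suggests, that the provider's 0-1 score is multiplied by the problem's
**weight**): if **weight** is set to 10 and the external provider returns a
score of 0.75 for a response, the student receives 0.75 × 10 = 7.5 points.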
.. _Word Cloud:
**********
Word Cloud
**********
In a word cloud exercise, students enter words into a field in response
to a question or prompt. The words all the students have entered then
appear instantly as a colorful graphic, with the most popular responses
appearing largest. The graphic becomes larger as more students answer.
Students can both see the way their peers have answered and contribute
their thoughts to the group.
For example, the following word cloud was created from students'
responses to a question in a HarvardX course.
.. image:: Images/WordCloudExample.gif
:alt: Image of a word cloud problem
Create a Word Cloud Exercise
----------------------------
To create a word cloud exercise:
#. Add the Word Cloud advanced component. To do this, add the
"word_cloud" key value to the **Advanced Settings** page. (For more
information, see the instructions in :ref:`Specialized Problems`.)
#. In the unit where you want to create the problem, click **Advanced**
under **Add New Component**.
#. In the list of problem types, click **Word Cloud**.
#. In the component that appears, click **Edit**.
#. In the component editor, specify the settings that you want. You can
leave the default value for everything except **Display Name**.
- **Display Name**: The name that appears in the course ribbon and
as a heading above the problem.
- **Inputs**: The number of text boxes into which students can enter
words, phrases, or sentences.
- **Maximum Words**: The maximum number of words that the word cloud
displays. If students enter 300 different words but the maximum is
set to 250, only the 250 most commonly entered words appear in the
word cloud.
- **Show Percents**: Whether the number of times that students have entered
a given word, as a percentage of all words entered, appears near
that word.
#. Click **Save**.
For more information, see `Xml Format of "Word Cloud" Module
<https://edx.readthedocs.org/en/latest/course_data_formats/word_cloud/word_cloud.html#>`_.
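For reference, a minimal word cloud component in XML might look like the
following sketch. The attribute names correspond to the settings above
(``num_inputs`` to **Inputs**, ``num_top_words`` to **Maximum Words**,
``display_student_percents`` to **Show Percents**) per the XML format guide
linked above; the values are illustrative only.

::

   <word_cloud display_name="Course reflections"
               num_inputs="1"
               num_top_words="250"
               display_student_percents="true"/>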
.. _Zooming Image:
******************
Zooming Image Tool
******************
Some edX courses use extremely large, extremely detailed graphics. To make these
graphics easier to understand, you can offer two versions: a main view, and a
zoomed view that appears when students click a section of the main view.
The example below is from 7.00x: Introduction to Biology and shows a subset of the
biochemical reactions that cells carry out.
.. image:: Images/Zooming_Image.gif
:alt: Image of a zooming image
Create a Zooming Image Tool
---------------------------
#. Under **Add New Component**, click **html**, and then click **Zooming Image**.
#. In the empty component that appears, click **Edit**.
#. When the component editor opens, replace the example content with your own content.
#. Click **Save** to save the HTML component.
......@@ -5,8 +5,8 @@ import sys, os
on_rtd = os.environ.get('READTHEDOCS', None) == 'True'
sys.path.append(os.path.abspath('../../../../'))
sys.path.append(os.path.abspath('../../../'))
sys.path.append(os.path.abspath('../../'))
from docs.shared.conf import *
......@@ -31,4 +31,4 @@ version = ''
release = ''
#Added to turn off smart quotes so users can copy JSON values without problems.
html_use_smartypants = False
\ No newline at end of file
html_use_smartypants = False
<problem display_name="Drag and drop demos: drag and drop icons or labels
to proper positions." >
<customresponse>
<text>
<h4>[Anyof rule example]</h4><br/>
<h4>Please label hydrogen atoms connected with left carbon atom.</h4>
<br/>
</text>
<drag_and_drop_input img="/static/images/images_list/ethglycol.jpg" target_outline="true"
one_per_target="true" no_labels="true" label_bg_color="rgb(222, 139, 238)">
<draggable id="1" label="Hydrogen" />
<draggable id="2" label="Hydrogen" />
<target id="t1_o" x="10" y="67" w="100" h="100"/>
<target id="t2" x="133" y="3" w="70" h="70"/>
<target id="t3" x="2" y="384" w="70" h="70"/>
<target id="t4" x="95" y="386" w="70" h="70"/>
<target id="t5_c" x="94" y="293" w="91" h="91"/>
<target id="t6_c" x="328" y="294" w="91" h="91"/>
<target id="t7" x="393" y="463" w="70" h="70"/>
<target id="t8" x="344" y="214" w="70" h="70"/>
<target id="t9_o" x="445" y="162" w="100" h="100"/>
<target id="t10" x="591" y="132" w="70" h="70"/>
</drag_and_drop_input>
<answer type="loncapa/python"><![CDATA[
correct_answer = [
{'draggables': ['1', '2'],
'targets': ['t2', 't3', 't4' ],
'rule':'anyof'
}]
if draganddrop.grade(submission[0], correct_answer):
correct = ['correct']
else:
correct = ['incorrect']
]]></answer>
</customresponse>
<customresponse>
<text>
<h4>[Complex grading example]</h4><br/>
<h4>Describe carbon molecule in LCAO-MO.</h4>
<br/>
</text>
<drag_and_drop_input img="/static/images/images_list/lcao-mo/lcao-mo.jpg" target_outline="true" >
<!-- filled bond -->
<draggable id="1" icon="/static/images/images_list/lcao-mo/u_d.png" />
<draggable id="2" icon="/static/images/images_list/lcao-mo/u_d.png" />
<draggable id="3" icon="/static/images/images_list/lcao-mo/u_d.png" />
<draggable id="4" icon="/static/images/images_list/lcao-mo/u_d.png" />
<draggable id="5" icon="/static/images/images_list/lcao-mo/u_d.png" />
<draggable id="6" icon="/static/images/images_list/lcao-mo/u_d.png" />
<!-- up bond -->
<draggable id="7" icon="/static/images/images_list/lcao-mo/up.png"/>
<draggable id="8" icon="/static/images/images_list/lcao-mo/up.png"/>
<draggable id="9" icon="/static/images/images_list/lcao-mo/up.png"/>
<draggable id="10" icon="/static/images/images_list/lcao-mo/up.png"/>
<!-- sigma -->
<draggable id="11" icon="/static/images/images_list/lcao-mo/sigma.png"/>
<draggable id="12" icon="/static/images/images_list/lcao-mo/sigma.png"/>
<!-- sigma* -->
<draggable id="13" icon="/static/images/images_list/lcao-mo/sigma_s.png"/>
<draggable id="14" icon="/static/images/images_list/lcao-mo/sigma_s.png"/>
<!-- pi -->
<draggable id="15" icon="/static/images/images_list/lcao-mo/pi.png" />
<!-- pi* -->
<draggable id="16" icon="/static/images/images_list/lcao-mo/pi_s.png" />
<!-- images that should not be dragged -->
<draggable id="17" icon="/static/images/images_list/lcao-mo/d.png" />
<draggable id="18" icon="/static/images/images_list/lcao-mo/d.png" />
<!-- positions of electrons and electron pairs -->
<target id="s_left" x="130" y="360" w="32" h="32"/>
<target id="s_right" x="505" y="360" w="32" h="32"/>
<target id="s_sigma" x="320" y="425" w="32" h="32"/>
<target id="s_sigma_star" x="320" y="290" w="32" h="32"/>
<target id="p_left_1" x="80" y="100" w="32" h="32"/>
<target id="p_left_2" x="125" y="100" w="32" h="32"/>
<target id="p_left_3" x="175" y="100" w="32" h="32"/>
<target id="p_right_1" x="465" y="100" w="32" h="32"/>
<target id="p_right_2" x="515" y="100" w="32" h="32"/>
<target id="p_right_3" x="560" y="100" w="32" h="32"/>
<target id="p_pi_1" x="290" y="220" w="32" h="32"/>
<target id="p_pi_2" x="335" y="220" w="32" h="32"/>
<target id="p_sigma" x="315" y="170" w="32" h="32"/>
<target id="p_pi_star_1" x="290" y="40" w="32" h="32"/>
<target id="p_pi_star_2" x="340" y="40" w="32" h="32"/>
<target id="p_sigma_star" x="315" y="0" w="32" h="32"/>
<!-- positions of names of energy levels -->
<target id="s_sigma_name" x="400" y="425" w="32" h="32"/>
<target id="s_sigma_star_name" x="400" y="290" w="32" h="32"/>
<target id="p_pi_name" x="400" y="220" w="32" h="32"/>
<target id="p_sigma_name" x="400" y="170" w="32" h="32"/>
<target id="p_pi_star_name" x="400" y="40" w="32" h="32"/>
<target id="p_sigma_star_name" x="400" y="0" w="32" h="32"/>
</drag_and_drop_input>
<answer type="loncapa/python"><![CDATA[
correct_answer = [
{
'draggables': ['1', '2', '3', '4', '5', '6'],
'targets': [
's_left', 's_right', 's_sigma', 's_sigma_star', 'p_pi_1', 'p_pi_2'
],
'rule': 'unordered_equal'
}, {
'draggables': ['7','8', '9', '10'],
'targets': ['p_left_1', 'p_left_2', 'p_right_1','p_right_2'],
'rule': 'unordered_equal'
}, {
'draggables': ['11', '12'],
'targets': ['s_sigma_name', 'p_sigma_name'],
'rule': 'unordered_equal'
}, {
'draggables': ['13', '14'],
'targets': ['s_sigma_star_name', 'p_sigma_star_name'],
'rule': 'unordered_equal'
}, {
'draggables': ['15'],
'targets': ['p_pi_name'],
'rule': 'unordered_equal'
}, {
'draggables': ['16'],
'targets': ['p_pi_star_name'],
'rule': 'unordered_equal'
}]
if draganddrop.grade(submission[0], correct_answer):
correct = ['correct']
else:
correct = ['incorrect']
]]></answer>
</customresponse>
<customresponse>
<text>
<h4>[Another complex grading example]</h4><br/>
<h4>Describe oxygen molecule in LCAO-MO</h4>
<br/>
</text>
<drag_and_drop_input img="/static/images/images_list/lcao-mo/lcao-mo.jpg" target_outline="true" one_per_target="true">
<!-- filled bond -->
<draggable id="1" icon="/static/images/images_list/lcao-mo/u_d.png" />
<draggable id="2" icon="/static/images/images_list/lcao-mo/u_d.png" />
<draggable id="3" icon="/static/images/images_list/lcao-mo/u_d.png" />
<draggable id="4" icon="/static/images/images_list/lcao-mo/u_d.png" />
<draggable id="5" icon="/static/images/images_list/lcao-mo/u_d.png" />
<draggable id="6" icon="/static/images/images_list/lcao-mo/u_d.png" />
<draggable id="v_fb_1" icon="/static/images/images_list/lcao-mo/u_d.png" />
<draggable id="v_fb_2" icon="/static/images/images_list/lcao-mo/u_d.png" />
<draggable id="v_fb_3" icon="/static/images/images_list/lcao-mo/u_d.png" />
<!-- up bond -->
<draggable id="7" icon="/static/images/images_list/lcao-mo/up.png"/>
<draggable id="8" icon="/static/images/images_list/lcao-mo/up.png"/>
<draggable id="9" icon="/static/images/images_list/lcao-mo/up.png"/>
<draggable id="10" icon="/static/images/images_list/lcao-mo/up.png"/>
<draggable id="v_ub_1" icon="/static/images/images_list/lcao-mo/up.png"/>
<draggable id="v_ub_2" icon="/static/images/images_list/lcao-mo/up.png"/>
<!-- sigma -->
<draggable id="11" icon="/static/images/images_list/lcao-mo/sigma.png"/>
<draggable id="12" icon="/static/images/images_list/lcao-mo/sigma.png"/>
<!-- sigma* -->
<draggable id="13" icon="/static/images/images_list/lcao-mo/sigma_s.png"/>
<draggable id="14" icon="/static/images/images_list/lcao-mo/sigma_s.png"/>
<!-- pi -->
<draggable id="15" icon="/static/images/images_list/lcao-mo/pi.png" />
<!-- pi* -->
<draggable id="16" icon="/static/images/images_list/lcao-mo/pi_s.png" />
<!-- images that should not be dragged -->
<draggable id="17" icon="/static/images/images_list/lcao-mo/d.png" />
<draggable id="18" icon="/static/images/images_list/lcao-mo/d.png" />
<!-- positions of electrons and electron pairs -->
<target id="s_left" x="130" y="360" w="32" h="32"/>
<target id="s_right" x="505" y="360" w="32" h="32"/>
<target id="s_sigma" x="320" y="425" w="32" h="32"/>
<target id="s_sigma_star" x="320" y="290" w="32" h="32"/>
<target id="p_left_1" x="80" y="100" w="32" h="32"/>
<target id="p_left_2" x="125" y="100" w="32" h="32"/>
<target id="p_left_3" x="175" y="100" w="32" h="32"/>
<target id="p_right_1" x="465" y="100" w="32" h="32"/>
<target id="p_right_2" x="515" y="100" w="32" h="32"/>
<target id="p_right_3" x="560" y="100" w="32" h="32"/>
<target id="p_pi_1" x="290" y="220" w="32" h="32"/>
<target id="p_pi_2" x="335" y="220" w="32" h="32"/>
<target id="p_sigma" x="315" y="170" w="32" h="32"/>
<target id="p_pi_star_1" x="290" y="40" w="32" h="32"/>
<target id="p_pi_star_2" x="340" y="40" w="32" h="32"/>
<target id="p_sigma_star" x="315" y="0" w="32" h="32"/>
<!-- positions of names of energy levels -->
<target id="s_sigma_name" x="400" y="425" w="32" h="32"/>
<target id="s_sigma_star_name" x="400" y="290" w="32" h="32"/>
<target id="p_pi_name" x="400" y="220" w="32" h="32"/>
<target id="p_pi_star_name" x="400" y="40" w="32" h="32"/>
<target id="p_sigma_name" x="400" y="170" w="32" h="32"/>
<target id="p_sigma_star_name" x="400" y="0" w="32" h="32"/>
</drag_and_drop_input>
<answer type="loncapa/python"><![CDATA[
correct_answer = [{
'draggables': ['1', '2', '3', '4', '5', '6', 'v_fb_1', 'v_fb_2', 'v_fb_3'],
'targets': [
's_left', 's_right', 's_sigma', 's_sigma_star', 'p_pi_1', 'p_pi_2',
'p_sigma', 'p_left_1', 'p_right_3'
],
'rule': 'anyof'
}, {
'draggables': ['7', '8', '9', '10', 'v_ub_1', 'v_ub_2'],
'targets': [
'p_left_2', 'p_left_3', 'p_right_1', 'p_right_2', 'p_pi_star_1',
'p_pi_star_2'
],
'rule': 'anyof'
}, {
'draggables': ['11', '12'],
'targets': ['s_sigma_name', 'p_sigma_name'],
'rule': 'anyof'
}, {
'draggables': ['13', '14'],
'targets': ['s_sigma_star_name', 'p_sigma_star_name'],
'rule': 'anyof'
}, {
'draggables': ['15'],
'targets': ['p_pi_name'],
'rule': 'anyof'
}, {
'draggables': ['16'],
'targets': ['p_pi_star_name'],
'rule': 'anyof'
}]
if draganddrop.grade(submission[0], correct_answer):
correct = ['correct']
else:
correct = ['incorrect']
]]></answer>
</customresponse>
<customresponse>
<text>
<h4>[Individual targets with outlines, One draggable per target]</h4><br/>
<h4>
Drag -Ant- to first position and -Star- to third position </h4><br/>
</text>
<drag_and_drop_input img="/static/images/cow.png" target_outline="true">
<draggable id="1" label="Label 1"/>
<draggable id="name_with_icon" label="Ant" icon="/static/images/images_list/ant.jpg"/>
<draggable id="with_icon" label="Cloud" icon="/static/images/images_list/cloud.jpg" />
<draggable id="5" label="Label2" />
<draggable id="2" label="Drop" icon="/static/images/images_list/drop.jpg" />
<draggable id="name_label_icon3" label="Grass" icon="/static/images/images_list/grass.jpg" />
<draggable id="name4" label="Star" icon="/static/images/images_list/star.png" />
<draggable id="7" label="Label3" />
<target id="t1" x="20" y="20" w="90" h="90"/>
<target id="t2" x="300" y="100" w="90" h="90"/>
<target id="t3" x="150" y="40" w="50" h="50"/>
</drag_and_drop_input>
<answer type="loncapa/python"><![CDATA[
correct_answer = {'name_with_icon': 't1', 'name4': 't2'}
if draganddrop.grade(submission[0], correct_answer):
correct = ['correct']
else:
correct = ['incorrect']
]]></answer>
</customresponse>
<customresponse>
<text>
<h4>[SMALL IMAGE, Individual targets WITHOUT outlines, One draggable
per target]</h4><br/>
<h4>
Move -Star- to the volcano opening, and -Label3- on to
the right ear of the cow.
</h4><br/>
</text>
<drag_and_drop_input img="/static/images/cow3.png" target_outline="false">
<draggable id="1" label="Label 1"/>
<draggable id="name_with_icon" label="Ant" icon="/static/images/images_list/ant.jpg"/>
<draggable id="with_icon" label="Cloud" icon="/static/images/images_list/cloud.jpg" />
<draggable id="5" label="Label2" />
<draggable id="2" label="Drop" icon="/static/images/images_list/drop.jpg" />
<draggable id="name_label_icon3" label="Grass" icon="/static/images/images_list/grass.jpg" />
<draggable id="name4" label="Star" icon="/static/images/images_list/star.png" />
<draggable id="7" label="Label3" />
<target id="t1" x="111" y="58" w="90" h="90"/>
<target id="t2" x="212" y="90" w="90" h="90"/>
</drag_and_drop_input>
<answer type="loncapa/python"><![CDATA[
correct_answer = {'name4': 't1',
'7': 't2'}
if draganddrop.grade(submission[0], correct_answer):
correct = ['correct']
else:
correct = ['incorrect']
]]></answer>
</customresponse>
<customresponse>
<text>
<h4>[Many draggables per target]</h4><br/>
<h4>Move -Star- and -Ant- to most left target
and -Label3- and -Label2- to most right target.</h4><br/>
</text>
<drag_and_drop_input img="/static/images/cow.png" target_outline="true" one_per_target="false">
<draggable id="1" label="Label 1"/>
<draggable id="name_with_icon" label="Ant" icon="/static/images/images_list/ant.jpg"/>
<draggable id="with_icon" label="Cloud" icon="/static/images/images_list/cloud.jpg" />
<draggable id="5" label="Label2" />
<draggable id="2" label="Drop" icon="/static/images/images_list/drop.jpg" />
<draggable id="name_label_icon3" label="Grass" icon="/static/images/images_list/grass.jpg" />
<draggable id="name4" label="Star" icon="/static/images/images_list/star.png" />
<draggable id="7" label="Label3" />
<target id="t1" x="20" y="20" w="90" h="90"/>
<target id="t2" x="300" y="100" w="90" h="90"/>
<target id="t3" x="150" y="40" w="50" h="50"/>
</drag_and_drop_input>
<answer type="loncapa/python"><![CDATA[
correct_answer = {'name4': 't1',
'name_with_icon': 't1',
'5': 't2',
'7':'t2'}
if draganddrop.grade(submission[0], correct_answer):
correct = ['correct']
else:
correct = ['incorrect']
]]></answer>
</customresponse>
<customresponse>
<text>
<h4>[Draggables can be placed anywhere on base image]</h4><br/>
<h4>
Place -Grass- in the middle of the image and -Ant- in the
right upper corner.</h4><br/>
</text>
<drag_and_drop_input img="/static/images/cow.png" >
<draggable id="1" label="Label 1"/>
<draggable id="ant" label="Ant" icon="/static/images/images_list/ant.jpg"/>
<draggable id="with_icon" label="Cloud" icon="/static/images/images_list/cloud.jpg" />
<draggable id="5" label="Label2" />
<draggable id="2" label="Drop" icon="/static/images/images_list/drop.jpg" />
<draggable id="grass" label="Grass" icon="/static/images/images_list/grass.jpg" />
<draggable id="name4" label="Star" icon="/static/images/images_list/star.png" />
<draggable id="7" label="Label3" />
</drag_and_drop_input>
<answer type="loncapa/python"><![CDATA[
correct_answer = {'grass': [[300, 200], 200],
'ant': [[500, 0], 200]}
if draganddrop.grade(submission[0], correct_answer):
correct = ['correct']
else:
correct = ['incorrect']
]]></answer>
</customresponse>
<customresponse>
<text>
<h4>[Another anyof example]</h4><br/>
<h4>Please identify the Carbon and Oxygen atoms in the molecule.</h4><br/>
</text>
<drag_and_drop_input img="/static/images/images_list/ethglycol.jpg" target_outline="true" one_per_target="true">
<draggable id="l1_c" label="Carbon" />
<draggable id="l2" label="Methane"/>
<draggable id="l3_o" label="Oxygen" />
<draggable id="l4" label="Calcium" />
<draggable id="l5" label="Methane"/>
<draggable id="l6" label="Calcium" />
<draggable id="l7" label="Hydrogen" />
<draggable id="l8_c" label="Carbon" />
<draggable id="l9" label="Hydrogen" />
<draggable id="l10_o" label="Oxygen" />
<target id="t1_o" x="10" y="67" w="100" h="100"/>
<target id="t2" x="133" y="3" w="70" h="70"/>
<target id="t3" x="2" y="384" w="70" h="70"/>
<target id="t4" x="95" y="386" w="70" h="70"/>
<target id="t5_c" x="94" y="293" w="91" h="91"/>
<target id="t6_c" x="328" y="294" w="91" h="91"/>
<target id="t7" x="393" y="463" w="70" h="70"/>
<target id="t8" x="344" y="214" w="70" h="70"/>
<target id="t9_o" x="445" y="162" w="100" h="100"/>
<target id="t10" x="591" y="132" w="70" h="70"/>
</drag_and_drop_input>
<answer type="loncapa/python"><![CDATA[
correct_answer = [
{
'draggables': ['l3_o', 'l10_o'],
'targets': ['t1_o', 't9_o'],
'rule': 'anyof'
},
{
'draggables': ['l1_c','l8_c'],
'targets': ['t5_c','t6_c'],
'rule': 'anyof'
}
]
if draganddrop.grade(submission[0], correct_answer):
correct = ['correct']
else:
correct = ['incorrect']
]]></answer>
</customresponse>
<customresponse>
<text>
<h4>[Again another anyof example]</h4><br/>
<h4>If the element appears in this molecule, drag the label onto it</h4>
<br/>
</text>
<drag_and_drop_input img="/static/images/images_list/ethglycol.jpg" target_outline="true"
one_per_target="true" no_labels="true" label_bg_color="rgb(222, 139, 238)">
<draggable id="1" label="Hydrogen" />
<draggable id="2" label="Hydrogen" />
<draggable id="3" label="Nytrogen" />
<draggable id="4" label="Nytrogen" />
<draggable id="5" label="Boron" />
<draggable id="6" label="Boron" />
<draggable id="7" label="Carbon" />
<draggable id="8" label="Carbon" />
<target id="t1_o" x="10" y="67" w="100" h="100"/>
<target id="t2_h" x="133" y="3" w="70" h="70"/>
<target id="t3_h" x="2" y="384" w="70" h="70"/>
<target id="t4_h" x="95" y="386" w="70" h="70"/>
<target id="t5_c" x="94" y="293" w="91" h="91"/>
<target id="t6_c" x="328" y="294" w="91" h="91"/>
<target id="t7_h" x="393" y="463" w="70" h="70"/>
<target id="t8_h" x="344" y="214" w="70" h="70"/>
<target id="t9_o" x="445" y="162" w="100" h="100"/>
<target id="t10_h" x="591" y="132" w="70" h="70"/>
</drag_and_drop_input>
<answer type="loncapa/python"><![CDATA[
correct_answer = [
{
'draggables': ['7', '8'],
'targets': ['t5_c', 't6_c'],
'rule': 'anyof'
},
{
'draggables': ['1', '2'],
'targets': ['t2_h', 't3_h', 't4_h', 't7_h', 't8_h', 't10_h'],
'rule': 'anyof'
}]
if draganddrop.grade(submission[0], correct_answer):
correct = ['correct']
else:
correct = ['incorrect']
]]></answer>
</customresponse>
<customresponse>
<text>
<h4>[Wrong base image url example]
</h4><br/>
</text>
<drag_and_drop_input img="/static/images/cow3_bad.png" target_outline="false">
<draggable id="1" label="Label 1"/>
<draggable id="name_with_icon" label="Ant" icon="/static/images/images_list/ant.jpg"/>
<draggable id="with_icon" label="Cloud" icon="/static/images/images_list/cloud.jpg" />
<draggable id="5" label="Label2" />
<draggable id="2" label="Drop" icon="/static/images/images_list/drop.jpg" />
<draggable id="name_label_icon3" label="Grass" icon="/static/images/images_list/grass.jpg" />
<draggable id="name4" label="Star" icon="/static/images/images_list/star.png" />
<draggable id="7" label="Label3" />
<target id="t1" x="111" y="58" w="90" h="90"/>
<target id="t2" x="212" y="90" w="90" h="90"/>
</drag_and_drop_input>
<answer type="loncapa/python"><![CDATA[
correct_answer = {'name4': 't1',
'7': 't2'}
if draganddrop.grade(submission[0], correct_answer):
correct = ['correct']
else:
correct = ['incorrect']
]]></answer>
</customresponse>
</problem>
<problem display_name="Drag and drop demos: drag and drop icons or labels
to proper positions." >
<customresponse>
<text>
<h4>[Draggable is reusable example]</h4>
<br/>
<h4>Please label all hydrogen atoms.</h4>
<br/>
</text>
<drag_and_drop_input
img="/static/images/images_list/ethglycol.jpg"
target_outline="true"
one_per_target="true"
no_labels="true"
label_bg_color="rgb(222, 139, 238)"
>
<draggable id="1" label="Hydrogen" can_reuse='true' />
<target id="t1_o" x="10" y="67" w="100" h="100" />
<target id="t2" x="133" y="3" w="70" h="70" />
<target id="t3" x="2" y="384" w="70" h="70" />
<target id="t4" x="95" y="386" w="70" h="70" />
<target id="t5_c" x="94" y="293" w="91" h="91" />
<target id="t6_c" x="328" y="294" w="91" h="91" />
<target id="t7" x="393" y="463" w="70" h="70" />
<target id="t8" x="344" y="214" w="70" h="70" />
<target id="t9_o" x="445" y="162" w="100" h="100" />
<target id="t10" x="591" y="132" w="70" h="70" />
</drag_and_drop_input>
<answer type="loncapa/python">
<![CDATA[
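# Draggable '1' has can_reuse='true', so students place multiple copies of
# the same Hydrogen label; the 'exact' rule requires the copies to cover
# exactly the six listed hydrogen targets and nothing else.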
correct_answer = [{
'draggables': ['1'],
'targets': ['t2', 't3', 't4', 't7', 't8', 't10'],
'rule': 'exact'
}]
if draganddrop.grade(submission[0], correct_answer):
correct = ['correct']
else:
correct = ['incorrect']
]]>
</answer>
</customresponse>
<customresponse>
<text>
<h4>[Complex grading example]</h4><br/>
<h4>Describe carbon molecule in LCAO-MO.</h4>
<br/>
</text>
<drag_and_drop_input img="/static/images/images_list/lcao-mo/lcao-mo.jpg" target_outline="true" >
<!-- filled bond -->
<draggable id="1" icon="/static/images/images_list/lcao-mo/u_d.png" can_reuse="true" />
<!-- up bond -->
<draggable id="7" icon="/static/images/images_list/lcao-mo/up.png" can_reuse="true" />
<!-- sigma -->
<draggable id="11" icon="/static/images/images_list/lcao-mo/sigma.png" can_reuse="true" />
<!-- sigma* -->
<draggable id="13" icon="/static/images/images_list/lcao-mo/sigma_s.png" can_reuse="true" />
<!-- pi -->
<draggable id="15" icon="/static/images/images_list/lcao-mo/pi.png" can_reuse="true" />
<!-- pi* -->
<draggable id="16" icon="/static/images/images_list/lcao-mo/pi_s.png" can_reuse="true" />
<!-- images that should not be dragged -->
<draggable id="17" icon="/static/images/images_list/lcao-mo/d.png" can_reuse="true" />
<!-- positions of electrons and electron pairs -->
<target id="s_left" x="130" y="360" w="32" h="32"/>
<target id="s_right" x="505" y="360" w="32" h="32"/>
<target id="s_sigma" x="320" y="425" w="32" h="32"/>
<target id="s_sigma_star" x="320" y="290" w="32" h="32"/>
<target id="p_left_1" x="80" y="100" w="32" h="32"/>
<target id="p_left_2" x="125" y="100" w="32" h="32"/>
<target id="p_left_3" x="175" y="100" w="32" h="32"/>
<target id="p_right_1" x="465" y="100" w="32" h="32"/>
<target id="p_right_2" x="515" y="100" w="32" h="32"/>
<target id="p_right_3" x="560" y="100" w="32" h="32"/>
<target id="p_pi_1" x="290" y="220" w="32" h="32"/>
<target id="p_pi_2" x="335" y="220" w="32" h="32"/>
<target id="p_sigma" x="315" y="170" w="32" h="32"/>
<target id="p_pi_star_1" x="290" y="40" w="32" h="32"/>
<target id="p_pi_star_2" x="340" y="40" w="32" h="32"/>
<target id="p_sigma_star" x="315" y="0" w="32" h="32"/>
<!-- positions of names of energy levels -->
<target id="s_sigma_name" x="400" y="425" w="32" h="32"/>
<target id="s_sigma_star_name" x="400" y="290" w="32" h="32"/>
<target id="p_pi_name" x="400" y="220" w="32" h="32"/>
<target id="p_sigma_name" x="400" y="170" w="32" h="32"/>
<target id="p_pi_star_name" x="400" y="40" w="32" h="32"/>
<target id="p_sigma_star_name" x="400" y="0" w="32" h="32"/>
</drag_and_drop_input>
<answer type="loncapa/python"><![CDATA[
correct_answer = [
{
'draggables': ['1'],
'targets': [
's_left', 's_right', 's_sigma', 's_sigma_star', 'p_pi_1', 'p_pi_2'
],
'rule': 'exact'
}, {
'draggables': ['7'],
'targets': ['p_left_1', 'p_left_2', 'p_right_1','p_right_2'],
'rule': 'exact'
}, {
'draggables': ['11'],
'targets': ['s_sigma_name', 'p_sigma_name'],
'rule': 'exact'
}, {
'draggables': ['13'],
'targets': ['s_sigma_star_name', 'p_sigma_star_name'],
'rule': 'exact'
}, {
'draggables': ['15'],
'targets': ['p_pi_name'],
'rule': 'exact'
}, {
'draggables': ['16'],
'targets': ['p_pi_star_name'],
'rule': 'exact'
}]
if draganddrop.grade(submission[0], correct_answer):
correct = ['correct']
else:
correct = ['incorrect']
]]></answer>
</customresponse>
<customresponse>
<text>
<h4>[Many draggables per target]</h4><br/>
<h4>Move two Stars and three Ants to most left target
and one Label3 and four Label2 to most right target.</h4><br/>
</text>
<drag_and_drop_input img="/static/images/cow.png" target_outline="true" one_per_target="false">
<draggable id="1" label="Label 1" can_reuse="true" />
<draggable id="name_with_icon" label="Ant" icon="/static/images/images_list/ant.jpg" can_reuse="true" />
<draggable id="with_icon" label="Cloud" icon="/static/images/images_list/cloud.jpg" can_reuse="true" />
<draggable id="5" label="Label2" can_reuse="true" />
<draggable id="2" label="Drop" icon="/static/images/images_list/drop.jpg" can_reuse="true" />
<draggable id="name_label_icon3" label="Grass" icon="/static/images/images_list/grass.jpg" can_reuse="true" />
<draggable id="name4" label="Star" icon="/static/images/images_list/star.png" can_reuse="true" />
<draggable id="7" label="Label3" can_reuse="true" />
<target id="t1" x="20" y="20" w="90" h="90"/>
<target id="t2" x="300" y="100" w="90" h="90"/>
<target id="t3" x="150" y="40" w="50" h="50"/>
</drag_and_drop_input>
<answer type="loncapa/python"><![CDATA[
correct_answer = [
{
'draggables': ['name4'],
'targets': [
't1', 't1'
],
'rule': 'exact'
},
{
'draggables': ['name_with_icon'],
'targets': [
't1', 't1', 't1'
],
'rule': 'exact'
},
{
'draggables': ['5'],
'targets': [
't2', 't2', 't2', 't2'
],
'rule': 'exact'
},
{
'draggables': ['7'],
'targets': [
't2'
],
'rule': 'exact'
}
]
if draganddrop.grade(submission[0], correct_answer):
correct = ['correct']
else:
correct = ['incorrect']
]]></answer>
</customresponse>
<customresponse>
<text>
<h4>[Draggables can be placed anywhere on base image]</h4><br/>
<h4>
Place -Grass- in the middle of the image and -Ant- in the
right upper corner.</h4><br/>
</text>
<drag_and_drop_input img="/static/images/cow.png" >
<draggable id="1" label="Label 1" can_reuse="true" />
<draggable id="ant" label="Ant" icon="/static/images/images_list/ant.jpg" can_reuse="true" />
<draggable id="with_icon" label="Cloud" icon="/static/images/images_list/cloud.jpg" can_reuse="true" />
<draggable id="5" label="Label2" can_reuse="true" />
<draggable id="2" label="Drop" icon="/static/images/images_list/drop.jpg" can_reuse="true" />
<draggable id="grass" label="Grass" icon="/static/images/images_list/grass.jpg" can_reuse="true" />
<draggable id="name4" label="Star" icon="/static/images/images_list/star.png" can_reuse="true" />
<draggable id="7" label="Label3" can_reuse="true" />
</drag_and_drop_input>
<answer type="loncapa/python"><![CDATA[
correct_answer = {
'grass': [[300, 200], 200],
'ant': [[500, 0], 200]
}
if draganddrop.grade(submission[0], correct_answer):
correct = ['correct']
else:
correct = ['incorrect']
]]></answer>
</customresponse>
<customresponse>
<text>
<h4>[Another anyof example]</h4><br/>
<h4>Please identify the Carbon and Oxygen atoms in the molecule.</h4><br/>
</text>
<drag_and_drop_input img="/static/images/images_list/ethglycol.jpg" target_outline="true" one_per_target="true">
<draggable id="l1_c" label="Carbon" can_reuse="true" />
<draggable id="l2" label="Methane" can_reuse="true" />
<draggable id="l3_o" label="Oxygen" can_reuse="true" />
<draggable id="l4" label="Calcium" can_reuse="true" />
<draggable id="l7" label="Hydrogen" can_reuse="true" />
<target id="t1_o" x="10" y="67" w="100" h="100"/>
<target id="t2" x="133" y="3" w="70" h="70"/>
<target id="t3" x="2" y="384" w="70" h="70"/>
<target id="t4" x="95" y="386" w="70" h="70"/>
<target id="t5_c" x="94" y="293" w="91" h="91"/>
<target id="t6_c" x="328" y="294" w="91" h="91"/>
<target id="t7" x="393" y="463" w="70" h="70"/>
<target id="t8" x="344" y="214" w="70" h="70"/>
<target id="t9_o" x="445" y="162" w="100" h="100"/>
<target id="t10" x="591" y="132" w="70" h="70"/>
</drag_and_drop_input>
<answer type="loncapa/python"><![CDATA[
correct_answer = [
{
'draggables': ['l3_o'],
'targets': ['t1_o', 't9_o'],
'rule': 'exact'
},
{
'draggables': ['l1_c'],
'targets': ['t5_c', 't6_c'],
'rule': 'exact'
}
]
if draganddrop.grade(submission[0], correct_answer):
correct = ['correct']
else:
correct = ['incorrect']
]]></answer>
</customresponse>
<customresponse>
<text>
<h4>[Exact number of draggables for a set of targets.]</h4><br/>
<h4>Drag two Grass and one Star to first or second positions, and three Cloud to any of the three positions.</h4>
<br/>
</text>
<drag_and_drop_input img="/static/images/cow.png" target_outline="true" one_per_target="false">
<draggable id="1" label="Label 1" can_reuse="true" />
<draggable id="name_with_icon" label="Ant" icon="/static/images/images_list/ant.jpg" can_reuse="true" />
<draggable id="with_icon" label="Cloud" icon="/static/images/images_list/cloud.jpg" can_reuse="true" />
<draggable id="5" label="Label2" can_reuse="true" />
<draggable id="2" label="Drop" icon="/static/images/images_list/drop.jpg" can_reuse="true" />
<draggable id="name_label_icon3" label="Grass" icon="/static/images/images_list/grass.jpg" can_reuse="true" />
<draggable id="name4" label="Star" icon="/static/images/images_list/star.png" can_reuse="true" />
<draggable id="7" label="Label3" can_reuse="true" />
<target id="t1" x="20" y="20" w="90" h="90"/>
<target id="t2" x="300" y="100" w="90" h="90"/>
<target id="t3" x="150" y="40" w="50" h="50"/>
</drag_and_drop_input>
<answer type="loncapa/python"><![CDATA[
correct_answer = [
{
'draggables': ['name_label_icon3', 'name_label_icon3'],
'targets': ['t1', 't3'],
'rule': 'unordered_equal+number'
},
{
'draggables': ['name4'],
'targets': ['t1', 't3'],
'rule': 'anyof+number'
},
{
'draggables': ['with_icon', 'with_icon', 'with_icon'],
'targets': ['t1', 't2', 't3'],
'rule': 'anyof+number'
}
]
if draganddrop.grade(submission[0], correct_answer):
correct = ['correct']
else:
correct = ['incorrect']
]]></answer>
</customresponse>
<customresponse>
<text>
<h4>[As many as you like draggables for a set of targets.]</h4><br/>
<h4>Drag some Grass to any of the targets, and some Stars to either first or last target.</h4>
<br/>
</text>
<drag_and_drop_input img="/static/images/cow.png" target_outline="true" one_per_target="false">
<draggable id="1" label="Label 1" can_reuse="true" />
<draggable id="name_with_icon" label="Ant" icon="/static/images/images_list/ant.jpg" can_reuse="true" />
<draggable id="with_icon" label="Cloud" icon="/static/images/images_list/cloud.jpg" can_reuse="true" />
<draggable id="5" label="Label2" can_reuse="true" />
<draggable id="2" label="Drop" icon="/static/images/images_list/drop.jpg" can_reuse="true" />
<draggable id="name_label_icon3" label="Grass" icon="/static/images/images_list/grass.jpg" can_reuse="true" />
<draggable id="name4" label="Star" icon="/static/images/images_list/star.png" can_reuse="true" />
<draggable id="7" label="Label3" can_reuse="true" />
<target id="t1" x="20" y="20" w="90" h="90"/>
<target id="t2" x="300" y="100" w="90" h="90"/>
<target id="t3" x="150" y="40" w="50" h="50"/>
</drag_and_drop_input>
<answer type="loncapa/python"><![CDATA[
correct_answer = [
{
'draggables': ['name_label_icon3'],
'targets': ['t1', 't2', 't3'],
'rule': 'anyof'
},
{
'draggables': ['name4'],
'targets': ['t1', 't2'],
'rule': 'anyof'
}
]
if draganddrop.grade(submission[0], correct_answer):
correct = ['correct']
else:
correct = ['incorrect']
]]></answer>
</customresponse>
</problem>
<problem display_name="Drag and drop demos chem features: drag and drop icons or labels
to proper positions." attempts="10">
<customresponse>
<text>
<h4>[Simple grading example: draggables on draggables]</h4><br/>
<h4>Describe the carbon molecule in LCAO-MO.</h4><br/>
<br/>
</text>
<drag_and_drop_input img="/static/images/images_list/lcao-mo/lcao-mo.jpg" target_outline="true" >
<!-- filled bond -->
<draggable id="up_and_down" icon="/static/images/images_list/lcao-mo/u_d.png" can_reuse="true" />
<!-- up bond -->
<draggable id="up" icon="/static/images/images_list/lcao-mo/up.png" can_reuse="true" />
<draggable id="s" icon="/static/images/images_list/lcao-mo/orbital_single.png" label="s orbital" can_reuse="true" >
<target id="1" x="0" y="0" w="32" h="32"/>
</draggable>
<draggable id="p" icon="/static/images/images_list/lcao-mo/orbital_triple.png" can_reuse="true" >
<target id="1" x="0" y="0" w="32" h="32"/>
<target id="2" x="34" y="0" w="32" h="32"/>
<target id="3" x="68" y="0" w="32" h="32"/>
</draggable>
<!-- positions of electrons and electron pairs -->
<target id="s_l" x="130" y="360" w="32" h="32"/>
<target id="s_r" x="505" y="360" w="32" h="32"/>
<target id="p_l" x="80" y="100" w="100" h="32"/>
<target id="p_r" x="465" y="100" w="100" h="32"/>
</drag_and_drop_input>
<answer type="loncapa/python"><![CDATA[
correct_answer = [
{
'draggables': ['p'],
'targets': ['p_l', 'p_r'],
'rule': 'unordered_equal'
},
{
'draggables': ['s'],
'targets': ['s_l', 's_r'],
'rule': 'unordered_equal'
},
{
'draggables': ['up_and_down'],
'targets': [
's_l[s][1]', 's_r[s][1]'
],
'rule': 'unordered_equal'
},
{
'draggables': ['up'],
'targets': [
'p_l[p][1]', 'p_l[p][3]', 'p_r[p][1]', 'p_r[p][3]'
],
'rule': 'unordered_equal'
}
]
if draganddrop.grade(submission[0], correct_answer):
correct = ['correct']
else:
correct = ['incorrect']
]]></answer>
</customresponse>
<customresponse>
<text>
<h4>[Complex grading example: draggables on draggables]</h4><br/>
<h4>Describe the carbon molecule in LCAO-MO.</h4>
<br/>
</text>
<drag_and_drop_input img="/static/images/images_list/lcao-mo/lcao-mo-clean.jpg" target_outline="true" >
<!-- filled bond -->
<draggable id="up_and_down" icon="/static/images/images_list/lcao-mo/u_d.png" can_reuse="true" />
<!-- up bond -->
<draggable id="up" icon="/static/images/images_list/lcao-mo/up.png" can_reuse="true" />
<!-- images that should not be dragged -->
<draggable id="down" icon="/static/images/images_list/lcao-mo/d.png" can_reuse="true" />
<draggable id="s" icon="/static/images/images_list/lcao-mo/orbital_single.png" label="s orbital" can_reuse="true" >
<target id="1" x="0" y="0" w="32" h="32"/>
</draggable>
<draggable id="p" icon="/static/images/images_list/lcao-mo/orbital_triple.png" can_reuse="true" label="p orbital" >
<target id="1" x="0" y="0" w="32" h="32"/>
<target id="2" x="34" y="0" w="32" h="32"/>
<target id="3" x="68" y="0" w="32" h="32"/>
</draggable>
<draggable id="s-sigma" icon="/static/images/images_list/lcao-mo/orbital_single.png" label="s-sigma orbital" can_reuse="true" >
<target id="1" x="0" y="0" w="32" h="32"/>
</draggable>
<draggable id="s-sigma*" icon="/static/images/images_list/lcao-mo/orbital_single.png" label="s-sigma* orbital" can_reuse="true" >
<target id="1" x="0" y="0" w="32" h="32"/>
</draggable>
<draggable id="p-pi" icon="/static/images/images_list/lcao-mo/orbital_double.png" label="p-pi orbital" can_reuse="true" >
<target id="1" x="0" y="0" w="32" h="32"/>
<target id="2" x="34" y="0" w="32" h="32"/>
</draggable>
<draggable id="p-sigma" icon="/static/images/images_list/lcao-mo/orbital_single.png" label="p-sigma orbital" can_reuse="true" >
<target id="1" x="0" y="0" w="32" h="32"/>
</draggable>
<draggable id="p-pi*" icon="/static/images/images_list/lcao-mo/orbital_double.png" label="p-pi* orbital" can_reuse="true" >
<target id="1" x="0" y="0" w="32" h="32"/>
<target id="2" x="34" y="0" w="32" h="32"/>
</draggable>
<draggable id="p-sigma*" icon="/static/images/images_list/lcao-mo/orbital_single.png" label="p-sigma* orbital" can_reuse="true" >
<target id="1" x="0" y="0" w="32" h="32"/>
</draggable>
<!-- positions of electrons and electron pairs -->
<target id="s-left-target" x="130" y="360" w="32" h="32"/>
<target id="s-right-target" x="505" y="360" w="32" h="32"/>
<target id="s-sigma-target" x="315" y="425" w="32" h="32"/>
<target id="s-sigma*-target" x="315" y="290" w="32" h="32"/>
<target id="p-left-target" x="80" y="100" w="100" h="32"/>
<target id="p-right-target" x="480" y="100" w="100" h="32"/>
<target id="p-pi-target" x="300" y="220" w="66" h="32"/>
<target id="p-sigma-target" x="315" y="170" w="32" h="32"/>
<target id="p-pi*-target" x="300" y="40" w="66" h="32"/>
<target id="p-sigma*-target" x="315" y="0" w="32" h="32"/>
</drag_and_drop_input>
<answer type="loncapa/python"><![CDATA[
correct_answer = [
{'draggables': ['p'], 'targets': ['p-left-target', 'p-right-target'], 'rule': 'unordered_equal'},
{'draggables': ['s'], 'targets': ['s-left-target', 's-right-target'], 'rule': 'unordered_equal'},
{'draggables': ['s-sigma'], 'targets': ['s-sigma-target'], 'rule': 'exact'},
{'draggables': ['s-sigma*'], 'targets': ['s-sigma*-target'], 'rule': 'exact'},
{'draggables': ['p-pi'], 'targets': ['p-pi-target'], 'rule': 'exact'},
{'draggables': ['p-sigma'], 'targets': ['p-sigma-target'], 'rule': 'exact'},
{'draggables': ['p-pi*'], 'targets': ['p-pi*-target'], 'rule': 'exact'},
{'draggables': ['p-sigma*'], 'targets': ['p-sigma*-target'], 'rule': 'exact'},
{
'draggables': ['up_and_down'],
'targets': ['s-left-target[s][1]', 's-right-target[s][1]', 's-sigma-target[s-sigma][1]', 's-sigma*-target[s-sigma*][1]', 'p-pi-target[p-pi][1]', 'p-pi-target[p-pi][2]'],
'rule': 'unordered_equal'
},
{
'draggables': ['up'],
'targets': ['p-left-target[p][1]', 'p-left-target[p][2]', 'p-right-target[p][2]', 'p-right-target[p][3]',],
'rule': 'unordered_equal'
}
]
if draganddrop.grade(submission[0], correct_answer):
correct = ['correct']
else:
correct = ['incorrect']
]]></answer>
</customresponse>
<customresponse>
<text>
<h4>[Complex grading example: no draggables on draggables]</h4><br/>
<h4>Describe the carbon molecule in LCAO-MO.</h4>
<br/>
</text>
<drag_and_drop_input img="/static/images/images_list/lcao-mo/lcao-mo.jpg" target_outline="true">
<!-- filled bond -->
<draggable id="1" icon="/static/images/images_list/lcao-mo/u_d.png" can_reuse="true" />
<!-- up bond -->
<draggable id="7" icon="/static/images/images_list/lcao-mo/up.png" can_reuse="true" />
<!-- sigma -->
<draggable id="11" icon="/static/images/images_list/lcao-mo/sigma.png" can_reuse="true" />
<!-- sigma* -->
<draggable id="13" icon="/static/images/images_list/lcao-mo/sigma_s.png" can_reuse="true" />
<!-- pi -->
<draggable id="15" icon="/static/images/images_list/lcao-mo/pi.png" can_reuse="true" />
<!-- pi* -->
<draggable id="16" icon="/static/images/images_list/lcao-mo/pi_s.png" can_reuse="true" />
<!-- images that should not be dragged -->
<draggable id="17" icon="/static/images/images_list/lcao-mo/d.png" can_reuse="true" />
<!-- positions of electrons and electron pairs -->
<target id="s_left" x="130" y="360" w="32" h="32"/>
<target id="s_right" x="505" y="360" w="32" h="32"/>
<target id="s_sigma" x="320" y="425" w="32" h="32"/>
<target id="s_sigma_star" x="320" y="290" w="32" h="32"/>
<target id="p_left_1" x="80" y="100" w="32" h="32"/>
<target id="p_left_2" x="125" y="100" w="32" h="32"/>
<target id="p_left_3" x="175" y="100" w="32" h="32"/>
<target id="p_right_1" x="465" y="100" w="32" h="32"/>
<target id="p_right_2" x="515" y="100" w="32" h="32"/>
<target id="p_right_3" x="560" y="100" w="32" h="32"/>
<target id="p_pi_1" x="290" y="220" w="32" h="32"/>
<target id="p_pi_2" x="335" y="220" w="32" h="32"/>
<target id="p_sigma" x="315" y="170" w="32" h="32"/>
<target id="p_pi_star_1" x="290" y="40" w="32" h="32"/>
<target id="p_pi_star_2" x="340" y="40" w="32" h="32"/>
<target id="p_sigma_star" x="315" y="0" w="32" h="32"/>
<!-- positions of names of energy levels -->
<target id="s_sigma_name" x="400" y="425" w="32" h="32"/>
<target id="s_sigma_star_name" x="400" y="290" w="32" h="32"/>
<target id="p_pi_name" x="400" y="220" w="32" h="32"/>
<target id="p_sigma_name" x="400" y="170" w="32" h="32"/>
<target id="p_pi_star_name" x="400" y="40" w="32" h="32"/>
<target id="p_sigma_star_name" x="400" y="0" w="32" h="32"/>
</drag_and_drop_input>
<answer type="loncapa/python"><![CDATA[
correct_answer = [
{
'draggables': ['1'],
'targets': [
's_left', 's_right', 's_sigma', 's_sigma_star', 'p_pi_1', 'p_pi_2'
],
'rule': 'exact'
}, {
'draggables': ['7'],
'targets': ['p_left_1', 'p_left_2', 'p_right_2','p_right_3'],
'rule': 'exact'
}, {
'draggables': ['11'],
'targets': ['s_sigma_name', 'p_sigma_name'],
'rule': 'exact'
}, {
'draggables': ['13'],
'targets': ['s_sigma_star_name', 'p_sigma_star_name'],
'rule': 'exact'
}, {
'draggables': ['15'],
'targets': ['p_pi_name'],
'rule': 'exact'
}, {
'draggables': ['16'],
'targets': ['p_pi_star_name'],
'rule': 'exact'
}]
if draganddrop.grade(submission[0], correct_answer):
correct = ['correct']
else:
correct = ['incorrect']
]]></answer>
</customresponse>
</problem>
**********************************************
XML format of drag and drop input [inputtypes]
**********************************************
.. module:: drag_and_drop_input
Format description
==================
The main tag of Drag and Drop (DnD) input is::
<drag_and_drop_input> ... </drag_and_drop_input>
``drag_and_drop_input`` can include any number of the following two tags:
``draggable`` and ``target``.
drag_and_drop_input tag
-----------------------
The main container for a single instance of DnD. The following attributes can
be specified for this tag::
img - Relative path to an image that will be the base image. All draggables
can be dragged onto it.
target_outline - Specify whether an outline (gray dashed line) should be
drawn around targets (if they are specified). It can be either
'true' or 'false'. If not specified, the default value is
'false'.
one_per_target - Specify whether to allow more than one draggable to be
placed onto a single target. It can be either 'true' or 'false'. If
not specified, the default value is 'true'.
no_labels - default is 'false'. In the default behaviour, if a label is not
    set, the label is obtained from the id. If no_labels is 'true', labels are
    not automatically populated from the id, and you cannot set labels; only
    icons are shown.
draggable tag
-------------
Draggable tag specifies a single draggable object which has the following
attributes::
id - Unique identifier of the draggable object.
label - Human readable label that will be shown to the user.
icon - Relative path to an image that will be shown to the user.
can_reuse - 'true' or 'false', default is 'false'. If 'true', the same
    draggable can be used multiple times.
A draggable is what the user must drag out of the slider and place onto the
base image. After a drag operation, if the center of the draggable ends up
outside the rectangular dimensions of the image, it will be returned back
to the slider.
In order for the grader to work, it is essential that a unique ID
is provided. Otherwise, there is no way to tell which draggable is at which
coordinate, or over which target. The label and icon attributes are optional;
if they are provided they will be used, otherwise you can have an empty
draggable. The path is relative to the 'course_folder' folder, for example,
/static/images/img1.png.
target tag
----------
Target tag specifies a single target object which has the following required
attributes::
id - Unique identifier of the target object.
x - X-coordinate on the base image where the top left corner of the target
will be positioned.
y - Y-coordinate on the base image where the top left corner of the target
will be positioned.
w - Width of the target.
h - Height of the target.
A target specifies a place on the base image where a draggable can be
positioned. By design, if the center of a draggable lies within the target
(i.e. in the rectangle defined by [[x, y], [x + w, y + h]]), then it is within
the target. Otherwise, it is outside.
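A minimal sketch of this containment test (illustrative only; ``is_on_target`` is a hypothetical helper, not the platform's actual code):

.. code-block:: python

    def is_on_target(center, target):
        """Check whether a draggable's center (cx, cy) lies inside the
        rectangle given by a target's x, y, w, h attributes."""
        cx, cy = center
        x, y, w, h = target
        return x <= cx <= x + w and y <= cy <= y + h

    # A draggable centered at (160, 65) lands on a target declared as
    # <target id="t3" x="150" y="40" w="50" h="50"/>:
    assert is_on_target((160, 65), (150, 40, 50, 50))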
If at least one target is provided, the behavior of the client side logic
changes. If a draggable is not dragged onto a target, it is returned to
the slider.
If no targets are provided, then a draggable can be dragged and placed anywhere
on the base image.
Targets on draggables
---------------------
Sometimes it is not enough to have targets only on the base image, with all of
the draggables placed on those targets. For complex problems where a draggable
must itself become a target (or several targets), the following extended syntax
can be used::
<draggable {attribute list}>
<target {attribute list} />
<target {attribute list} />
<target {attribute list} />
...
</draggable>
The attribute list in the tags above ('draggable' and 'target') is the same as for
normal 'draggable' and 'target' tags, with one difference: the 'x' and 'y'
attributes of an inner target set its offset from the upper-left corner of the
parent draggable (the draggable that contains the inner target).
Limitations of targets on draggables
------------------------------------
1.) Currently there is a limitation on the level of nesting of targets.
Even though you can pile up a large number of draggables on targets that themselves
are on draggables, a Drag and Drop instance will be graded only if there are at
most two levels of targets. The first level is the "base" targets, which are
attached to the base image. The second level is the targets defined on
draggables.
2.) Another limitation is that target bounds are not checked against
other targets.
For now, it is the responsibility of the course author to make sure that
targets do not overlap. It is also preferable that targets on draggables are
smaller than the parent draggable itself. Technically this is not necessary,
but it is desirable from a usability perspective.
3.) You can have targets on draggables only when base targets are defined
(base targets are attached to the base image).
If you do not have base targets, then you can only have a single level of nesting
(draggables on the base image). In this case the client side reports the (x, y)
position of each draggable on the base image.
Correct answer format
---------------------
(NOTE: For specifying answers for targets on draggables please see next section.)
There are two correct answer formats: short and long.
The short form of the correct answer is a mapping of 'draggable_id' to
'target_id' or to coordinates::
correct_answer = {'grass': [[300, 200], 200], 'ant': [[500, 0], 200]}
correct_answer = {'name4': 't1', '7': 't2'}
In the long form, the correct answer is a list of dicts. Every dict has 3 keys:
'draggables', 'targets', and 'rule'. For example::
correct_answer = [
{
'draggables': ['7', '8'],
'targets': ['t5_c', 't6_c'],
'rule': 'anyof'
},
{
'draggables': ['1', '2'],
'targets': ['t2_h', 't3_h', 't4_h', 't7_h', 't8_h', 't10_h'],
'rule': 'anyof'
}]
'draggables' is a list of draggable ids. 'targets' is a list of target ids
that the draggables must be dragged to, subject to the rule. 'rule' is a string.
The 'draggables' lists in the dicts inside the correct_answer list must not intersect!
Wrong (for draggable id 7)::
correct_answer = [
{
'draggables': ['7', '8'],
'targets': ['t5_c', 't6_c'],
'rule': 'anyof'
},
{
'draggables': ['7', '2'],
'targets': ['t2_h', 't3_h', 't4_h', 't7_h', 't8_h', 't10_h'],
'rule': 'anyof'
}]
The rules are: exact, anyof, unordered_equal, anyof+number, and unordered_equal+number.
.. such long lines are needed for sphinx to display lists correctly
- The exact rule means that the targets for the draggable ids in user_answer are the same as the targets in the correct answer. For example, for draggables 7 and 8, the user must drag 7 to target1 and 8 to target2 if correct_answer is::
correct_answer = [
{
'draggables': ['7', '8'],
'targets': ['target1', 'target2'],
'rule': 'exact'
}]
- The unordered_equal rule allows draggables to be dragged to the targets in any order. If you want to allow the student to drag 7 to target1 or target2 and 8 to target2 or target1, with 7 and 8 in different targets, then the correct answer must be::
correct_answer = [
{
'draggables': ['7', '8'],
'targets': ['target1', 'target2'],
'rule': 'unordered_equal'
}]
- The anyof rule allows draggables to be dragged to any of the targets. If you want to allow the student to drag 7 and 8 to target1 or target2 in any combination (7 on target1 and 8 on target1, 7 on target2 and 8 on target2, or 7 on target1 and 8 on target2), any of these is correct with the anyof rule::
correct_answer = [
{
'draggables': ['7', '8'],
'targets': ['target1', 'target2'],
'rule': 'anyof'
}]
- If can_reuse is true and you have, for example, draggables a, b, c and 10 targets, the following allows you to drag four 'a' draggables to ['target1', 'target4', 'target7', 'target10'] without writing 'a' four times. It also allows you to drag the 'b' draggable to target2 or target5, and so on::
correct_answer = [
{
'draggables': ['a'],
'targets': ['target1', 'target4', 'target7', 'target10'],
'rule': 'unordered_equal'
},
{
'draggables': ['b'],
'targets': ['target2', 'target5', 'target8'],
'rule': 'anyof'
},
{
'draggables': ['c'],
'targets': ['target3', 'target6', 'target9'],
'rule': 'unordered_equal'
}]
- Sometimes you want to allow only two 'b' draggables to be dragged; in that case you should use the 'anyof+number' or 'unordered_equal+number' rule::
correct_answer = [
{
'draggables': ['a', 'a', 'a'],
'targets': ['target1', 'target4', 'target7'],
'rule': 'unordered_equal+number'
},
{
'draggables': ['b', 'b'],
'targets': ['target2', 'target5', 'target8'],
'rule': 'anyof+number'
},
{
'draggables': ['c'],
'targets': ['target3', 'target6', 'target9'],
'rule': 'unordered_equal'
}]
If multiple draggables per target are not allowed (one_per_target="true"),
then for the same number of draggables, anyof is equivalent to unordered_equal.
If can_reuse is true, then you must use only the long form of the correct answer.
Answer format for targets on draggables
---------------------------------------
As with the cases described above, an answer must provide precise positioning for
each draggable (which targets it must reside on). When a draggable must be placed
on a target that is itself on a draggable, the answer must contain the chain
target-draggable-target. This is best understood with an example.
Suppose we have three draggables: 'up', 's', and 'p'. Draggables 's' and 'p' have targets
on themselves; more specifically, 'p' has three targets: '1', '2', and '3'. The first
requirement is that 's' and 'p' are positioned on specific targets on the base image.
The second requirement is that draggable 'up' is positioned on specific targets of
draggable 'p'. Below is an excerpt from a problem::
<draggable id="up" icon="/static/images/images_list/lcao-mo/up.png" can_reuse="true" />
<draggable id="s" icon="/static/images/images_list/lcao-mo/orbital_single.png" label="s orbital" can_reuse="true" >
<target id="1" x="0" y="0" w="32" h="32"/>
</draggable>
<draggable id="p" icon="/static/images/images_list/lcao-mo/orbital_triple.png" can_reuse="true" label="p orbital" >
<target id="1" x="0" y="0" w="32" h="32"/>
<target id="2" x="34" y="0" w="32" h="32"/>
<target id="3" x="68" y="0" w="32" h="32"/>
</draggable>
...
correct_answer = [
{
'draggables': ['p'],
'targets': ['p-left-target', 'p-right-target'],
'rule': 'unordered_equal'
},
{
'draggables': ['s'],
'targets': ['s-left-target', 's-right-target'],
'rule': 'unordered_equal'
},
{
'draggables': ['up'],
'targets': ['p-left-target[p][1]', 'p-left-target[p][2]', 'p-right-target[p][2]', 'p-right-target[p][3]',],
'rule': 'unordered_equal'
}
]
Note that it is a requirement to specify rules for all draggables, even if some draggable gets included
in more than one chain.
Grading logic
-------------
1. The user answer (which comes from the browser) and the correct answer (from the XML) are parsed into the same format::
group_id: group_draggables, group_targets, group_rule
group_id is an ordinal number; for every dict in the correct answer an
incremental group_id is assigned: 0, 1, 2, ...
Draggables from the user answer are added to the same group_id that identical
draggables from the correct answer belong to. For example::
If correct_draggables[group_0] = [t1, t2] then
user_draggables[group_0] are all draggables t1 and t2 from user answer:
[t1] or [t1, t2] or [t1, t2, t2] etc..
2. For every group from the user answer, set() is applied to that group's draggables if 'number' is *not* in the group rule;
if 'number' is in the rule, set() is not applied::
    set() : [t1, t2, t3, t3] -> [t1, t2, t3]
At this step, for every group, the user's draggables list must equal the correct answer's draggables list.
3. For every group, lists of targets are compared using rule for that group.
Set and '+number' cases
.......................
set() and '+number' are needed only for the case of reusable draggables;
in other cases there are no equal draggables in the list, so set() does nothing.
.. such long lines needed for sphinx to display nicely
* Using the set() operation makes it easy to create a rule for the case of "any number of the same draggable can be dragged to some targets"::
{
'draggables': ['draggable_1'],
'targets': ['target3', 'target6', 'target9'],
'rule': 'anyof'
}
* The 'number' rule is used for the case of reusable draggables when you want to fix the number of draggables to drag. In this example, only two instances of draggable_1 are allowed to be dragged::
{
'draggables': ['draggable_1', 'draggable_1'],
'targets': ['target3', 'target6', 'target9'],
'rule': 'anyof+number'
}
* Note that when using the 'exact' rule, you do not need 'number', because you cannot tell from the user interface which reusable draggable is on which target. Absurd example::
{
'draggables': ['draggable_1', 'draggable_1', 'draggable_2'],
'targets': ['target3', 'target6', 'target9'],
'rule': 'exact'
}
The correct handling of this example is to create different rules for draggable_1 and
draggable_2.
* For 'unordered_equal' (and 'exact' too) we don't need 'number' if the group contains only the same draggable, as the length of the targets list constrains the number of draggables::
{
'draggables': ['draggable_1'],
'targets': ['target3', 'target6', 'target9'],
'rule': 'unordered_equal'
}
This means that only three 'draggable_1' draggables can be dragged.
* But if you have more than one distinct reusable draggable in the list, you may use the 'number' rule::
{
'draggables': ['draggable_1', 'draggable_1', 'draggable_2'],
'targets': ['target3', 'target6', 'target9'],
'rule': 'unordered_equal+number'
}
If 'number' is not used, the draggables list will be reduced by set() to ['draggable_1', 'draggable_2'].
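The set() step can be sketched in a few lines of Python (a simplified illustration, not the platform's grader; ``normalize_group`` is a hypothetical helper):

.. code-block:: python

    def normalize_group(draggables, rule):
        """Deduplicate a group's draggables unless the rule carries the
        '+number' suffix, in which case counts must match exactly."""
        if 'number' in rule:
            return sorted(draggables)        # keep duplicates: counts matter
        return sorted(set(draggables))       # collapse duplicates

    # Without '+number' the duplicates collapse:
    assert normalize_group(['draggable_1', 'draggable_1', 'draggable_2'],
                           'unordered_equal') == ['draggable_1', 'draggable_2']
    # With '+number' they are preserved:
    assert normalize_group(['draggable_1', 'draggable_1', 'draggable_2'],
                           'unordered_equal+number') == ['draggable_1', 'draggable_1', 'draggable_2']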
Logic flow
----------
(Click on the image to see the full-size version.)
.. image:: draganddrop_logic_flow.png
:width: 100%
:target: _images/draganddrop_logic_flow.png
Example
=======
Examples of draggables that can't be reused
-------------------------------------------
.. literalinclude:: drag-n-drop-demo.xml
Draggables can be reused
------------------------
.. literalinclude:: drag-n-drop-demo2.xml
Examples of targets on draggables
---------------------------------
.. literalinclude:: drag-n-drop-demo3.xml
##############
Course Grading
##############
This document is written to help professors understand how a final grade for a
course is computed.
Course grading is the process of taking all of the problem scores for a student
in a course and generating a final score (and corresponding letter grade). This
grading process can be split into two phases: totaling sections and section
weighting.
*****************
Totaling sections
*****************
The process of totaling sections is to get a percentage score (between 0.0 and
1.0) for every section in the course. A section is any module that is a direct
child of a chapter. For example, psets, labs, and sequences are all common
sections. Only the *percentage* on the section will be available to compute the
final grade, *not* the final number of points earned / possible.
.. important::
For a section to be included in the final grade, the policies file must set
`graded = True` for the section.
For each section, the grading function retrieves all problems within the
section. The section percentage is computed as (total points earned) / (total
points possible).
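As a rough sketch of that computation (a hypothetical helper, not the actual grading code):

.. code-block:: python

    def section_percentage(problem_scores):
        """problem_scores: list of (points_earned, points_possible) pairs."""
        earned = sum(earned for earned, _ in problem_scores)
        possible = sum(possible for _, possible in problem_scores)
        return earned / float(possible) if possible else 0.0

    # Three problems worth 10 points each, 24 points earned in total:
    assert section_percentage([(8, 10), (6, 10), (10, 10)]) == 0.8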
******************
Weighting Problems
******************
In some cases, one might want to give weights to problems within a section. For
example, a final exam might contain four questions each worth 1 point by default.
This means each question would by default have the same weight. If one wanted
the first problem to be worth 50% of the final exam, the policy file could specify
weights of 30, 10, 10, and 10 to the four problems, respectively.
Note that the default weight of a problem **is not 1**. The default weight of a
problem is the module's `max_grade`.
If weighting is set, each problem is worth the number of points assigned, regardless of the number of responses it contains.
Consider a Homework section that contains two problems.
.. code-block:: xml
<problem display_name="Problem 1">
<numericalresponse> ... </numericalresponse>
</problem>
.. code-block:: xml
<problem display_name="Problem 2">
<numericalresponse> ... </numericalresponse>
<numericalresponse> ... </numericalresponse>
<numericalresponse> ... </numericalresponse>
</problem>
Without weighting, Problem 1 is worth 25% of the assignment, and Problem 2 is worth 75% of the assignment.
Weighting for the problems can be set in the policy.json file.
.. code-block:: json
"problem/problem1": {
"weight": 2
},
"problem/problem2": {
"weight": 2
},
With the above weighting, Problems 1 and 2 are each worth 50% of the assignment.
Please note: When problems have a weight, the point value is automatically included in the display name *except* when `"weight": 1`. When the weight is 1, no visual change occurs in the display name, leaving the point value open to interpretation by the student.
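To illustrate how weighted problems combine (a hypothetical sketch, not the platform's implementation):

.. code-block:: python

    def weighted_section_percentage(problems):
        """problems: list of (points_earned, points_possible, weight) triples.
        Each problem contributes weight * (earned / possible)."""
        total_weight = sum(weight for _, _, weight in problems)
        earned = sum(weight * earned / float(possible)
                     for earned, possible, weight in problems)
        return earned / total_weight

    # Problem 1 (1 response, perfect) and Problem 2 (3 responses, 2 correct),
    # both weighted 2: (2 * 1/1 + 2 * 2/3) / 4 = 0.8333...
    print(round(weighted_section_percentage([(1, 1, 2), (2, 3, 2)]), 2))  # 0.83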
******************
Weighting Sections
******************
Once each section has a percentage score, we must total those sections into a
final grade. Of course, not every section has equal weight in the final grade.
The policies for weighting sections into a final grade are specified in the
grading_policy.json file.
The `grading_policy.json` file specifies several sub-graders that are each given
a weight and factored into the final grade. There are currently two types of
sub-graders, section format graders and single section graders.
We will use this simple example of a grader with one section format grader and
one single section grader.
.. code-block:: json
"GRADER" : [
{
"type" : "Homework",
"min_count" : 12,
"drop_count" : 2,
"short_label" : "HW",
"weight" : 0.4
},
{
"type" : "Final",
"name" : "Final Exam",
"short_label" : "Final",
"weight" : 0.6
}
]
Section Format Graders
======================
A section format grader grades a set of sections with the same format, as
defined in the course policy file. To make a vertical named Homework1 be graded
by the Homework section format grader, the following definition would be in the
course policy file.
.. code-block:: json
"vertical/Homework1": {
"display_name": "Homework 1",
"graded": true,
"format": "Homework"
},
In the example above, the section format grader declares that it will expect to
find at least 12 sections with the format "Homework". It will drop the lowest 2.
All of the homework assignments will have equal weight, relative to each other
(except, of course, for the assignments that are dropped).
This format supports forecasting the number of homework assignments. For
example, if the course only has 3 homeworks written, but the section format
grader has been told to expect 12, the missing 9 will have an assumed 0% and
will still show up in the grade breakdown.
A section format grader will also show the average of that section in the grade
breakdown (shown on the Progress page, gradebook, etc.).
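The expect/drop behaviour described above can be sketched as follows (illustrative only, assuming a plain list of section percentages; not the actual grader):

.. code-block:: python

    def homework_average(percentages, min_count=12, drop_count=2):
        """Pad missing assignments with 0.0 up to min_count, drop the
        lowest drop_count scores, and average the rest."""
        scores = list(percentages)
        scores += [0.0] * max(0, min_count - len(scores))
        scores.sort()
        kept = scores[drop_count:]
        return sum(kept) / len(kept)

    # Only 3 of 12 homeworks exist; the 9 missing ones count as 0%, and
    # the two lowest scores (both 0.0) are dropped before averaging.
    print(round(homework_average([1.0, 0.9, 0.8]), 3))  # 0.27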
Single Section Graders
======================
A single section grader grades exactly that - a single section. If a section
is found with a matching format and display name then the score of that section
is used. If not, a score of 0% is assumed.
Combining sub-graders
=====================
The final grade is computed by taking the score and weight of each sub-grader.
In the above example, homework will be 40% of the final grade. The final exam
will be 60% of the final grade.
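With the example grader above, the combination is simply a weighted sum (a sketch with made-up sub-grader scores):

.. code-block:: python

    homework_score, final_exam_score = 0.75, 0.90   # hypothetical sub-grader scores
    final_grade = 0.4 * homework_score + 0.6 * final_exam_score
    print(final_grade)  # 0.84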
**************************
Displaying the final grade
**************************
The final grade is then rounded up to the nearest percentage point. This is so
the system can consistently display a percentage without worrying whether the
displayed percentage has been rounded up or down (potentially misleading the
student). The formula for the rounding is::
rounded_percent = round(computed_percent * 100 + 0.05) / 100
The grading policy file also specifies the cutoffs for the grade levels. A
grade is either A, B, or C. If the student does not reach the cutoff threshold
for a C grade then the student has not earned a grade and will not be eligible
for a certificate. Letter grades are only awarded to students who have
completed the course. There is no notion of a failing letter grade.
*********************************************
XML format of graphical slider tool [xmodule]
*********************************************
.. module:: xml_format_gst
Format description
==================
Graphical slider tool (GST) main tag is::
<graphical_slider_tool> BODY </graphical_slider_tool>
The ``graphical_slider_tool`` tag must have two child tags: ``render``
and ``configuration``.
Render tag
----------
The render tag can contain usual HTML tags mixed with some GST-specific tags::
<slider/> - represents jQuery slider for changing a parameter's value
<textbox/> - represents a text input field for changing a parameter's value
<plot/> - represents Flot JS plot element
Also GST will track all elements inside ``<render></render>`` where ``id``
attribute is set, and a corresponding parameter referencing that ``id`` is present
in the configuration section below. These will be referred to as dynamic elements.
The contents of the <render> section will be shown to the user after
all occurrences of::
<slider var="{parameter name}" [style="{CSS statements}"] />
<textbox var="{parameter name}" [style="{CSS statements}"] />
<plot [style="{CSS statements}"] />
have been converted to actual sliders, text inputs, and a plot graph.
Everything in square brackets is optional. After initialization, all
text input fields, sliders, and dynamic elements will be set to the initial
values of the parameters that they are assigned to.
``{parameter name}`` specifies the parameter to which the slider or text
input will be attached.
[style="{CSS statements}"] specifies valid CSS styling. It will be passed
directly to the browser without any parsing.
There is a one-to-one relationship between a slider and a parameter.
I.e. for one parameter you can put only one ``<slider>`` in the
``<render>`` section. However, you don't have to specify a slider - they
are optional.
There is a many-to-one relationship between text inputs and a
parameter. I.e. for one parameter you can put many ``<textbox>`` elements in
the ``<render>`` section. However, you don't have to specify a text
input - they are optional.
You can put only one ``<plot>`` in the ``<render>`` section. It is not
required.
Slider tag
..........
The slider tag must have a ``var`` attribute and an optional ``style`` attribute::
<slider var='a' style="width:400px;float:left;" />
After processing, slider tags will be replaced by jQuery UI sliders with applied
``style`` attribute.
``var`` attribute must correspond to a parameter. Parameters can be used in any
of the ``function`` tags in the ``functions`` tag. By moving the slider, the
value of parameter ``a`` changes, and so the result of any function that
depends on parameter ``a`` also changes.
Textbox tag
...........
The textbox tag must have a ``var`` attribute and an optional ``style`` attribute::
<textbox var="b" style="width:50px; float:left; margin-left:10px;" />
After processing, textbox tags will be replaced by HTML text inputs with the
``style`` attribute applied. If you want a read-only text input, then you should
use a dynamic element instead (see the section "HTML tags with ID (dynamic
elements)" below).
``var`` attribute must correspond to a parameter. Parameters can be used in any
of the ``function`` tags in the ``functions`` tag. By changing the value in the
text input, the value of parameter ``b`` changes, and so the result of any
function that depends on parameter ``b`` also changes.
Plot tag
........
The plot tag may have an optional ``style`` attribute::
<plot style="width:50px; float:left; margin-left:10px;" />
After processing, plot tags will be replaced by a Flot JS plot with the
``style`` attribute applied.
HTML tags with ID (dynamic elements)
....................................
Any HTML tag with an ID, e.g. ``<span id="answer_span_1">``, can be used as a
place where the result of a function can be inserted. To insert a function's
result into an element, the element's ID must be included in the ``function``
tag as the ``el_id`` attribute, and the ``output`` value must be ``"element"``::
<function output="element" el_id="answer_span_1">
function add(a, b, precision) {
var x = Math.pow(10, precision || 2);
return (Math.round(a * x) + Math.round(b * x)) / x;
}
return add(a, b, 5);
</function>
Configuration tag
-----------------
The configuration tag contains parameter settings, graph
settings, and definitions of the functions that are to be plotted on the
graph and that use the specified parameters.
The configuration tag contains two mandatory tags, ``functions`` and
``parameters``, and may contain an additional ``plot`` tag.
Parameters tag
..............
The ``parameters`` tag contains ``param`` tags. Each ``param`` tag must have
``var``, ``max``, ``min``, ``step``, and ``initial`` attributes::
<parameters>
<param var="a" min="-10.0" max="10.0" step="0.1" initial="0" />
<param var="b" min="-10.0" max="10.0" step="0.1" initial="0" />
</parameters>
``var`` attribute links the min, max, step, and initial values to the parameter name.
``min`` attribute is the minimal value that a parameter can take. Slider and input
values cannot go below it.
``max`` attribute is the maximal value that a parameter can take. Slider and input
values cannot go over it.
``step`` attribute is the value of the slider step. When a slider increases or
decreases the specified parameter, it does so by the amount specified by 'step'.
``initial`` attribute is the initial value that the specified parameter should be
set to. Sliders and inputs will initially show this value.
The parameter's name is specified by the ``var`` property. All occurrences
of sliders and/or text inputs that specify a ``var`` property, will be
connected to this parameter - i.e. they will reflect the current
value of the parameter, and will be updated when the parameter
changes.
If at least one of these attributes is not set, then the parameter
will not be used, the slider and/or text input elements that specify
this parameter will not be activated, and the specified functions
that use this parameter will not return a numeric value. This means
that neglecting to specify at least one of the attributes for some
parameter will result in the whole GST instance not working
properly.
Functions tag
.............
For the GST to do something, you must define at least one
function, which can use any of the specified parameter values. The
function is expected to take the ``x`` value, do some calculations, and
return the ``y`` value. I.e. this is a 2D plot in Cartesian
coordinates. This is how the default function is meant to be used for
the graph.
There are other special cases of functions. They are used mainly for
outputting to elements, plot labels, or for custom output. Because
they return a single value, and that value is meant for a single element,
these functions are invoked only with the set of all of the parameters.
I.e. no ``x`` value is available inside them. They are useful for
showing the current value of a parameter, showing complex static
formulas where some parameter's value must change, and other useful
things.
The different style of function is specified by the ``output`` attribute.
Each function must be defined inside ``function`` tag in ``functions`` tag::
<functions>
<function output="element" el_id="answer_span_1">
function add(a, b, precision) {
var x = Math.pow(10, precision || 2);
return (Math.round(a * x) + Math.round(b * x)) / x;
}
return add(a, b, 5);
</function>
</functions>
The parameter names (along with their values, as provided from text
inputs and/or sliders), will be available inside all defined
functions. A defined function body string will be parsed internally
by the browser's JavaScript engine and converted to a true JS
function.
The function's parameter list will automatically be created and
populated, and will include the ``x`` (when ``output`` is not specified or
is set to ``"graph"``), and all of the specified parameter values (from sliders
and text inputs). This means that each of the defined functions will have
access to all of the parameter values. You don't have to use them, but
they will be there.
Examples::
<function>
return x;
</function>
<function dot="true" label="\(y_2\)">
return (x + a) * Math.sin(x * b);
</function>
<function color="green">
function helperFunc(c1) {
return c1 * c1 - a;
}
return helperFunc(x + 10 * a * b) + Math.sin(a - x);
</function>
Required parameters::
function body:
A string composing a normal JavaScript function
except that there is no function declaration
(along with parameters), and no closing bracket.
So if you normally would have written your
JavaScript function like this:
function myFunc(x, a, b) {
return x * a + b;
}
here you must specify just the function body
(everything that goes between '{' and '}'). So,
you would specify the above function like so (the
bare-bone minimum):
<function>return x * a + b;</function>
VERY IMPORTANT: Because the function will be passed
to the browser as a single string, depending on implementation
specifics, the end-of-line characters can be stripped. This
means that single line JavaScript comments (starting with "//")
can lead to the effect that everything after the first such comment
will be treated as a comment. Therefore, it is absolutely
necessary that such single line comments are not used when
defining functions for GST. You can safely use the alternative
multiple line JavaScript comments (such comments start with "/*"
and end with "*/").
VERY IMPORTANT: If you have a large function body and decide to
split it into several lines, then you must wrap it in "CDATA" like
so:
<function>
<![CDATA[
var dNew;
dNew = 0.3;
return x * a + b - dNew;
]]>
</function>
Optional parameters::
color: Color name ('red', 'green', etc.) or in the form of
'#FFFF00'. If not specified, a default color (different
one for each graphed function) will be given by Flot JS.
line: A string - 'true' or 'false'. Should the data points be
connected by a line on the graph? Default is 'true'.
dot: A string - 'true' or 'false'. Should points be shown for
each data point on the graph? Default is 'false'.
bar: A string - 'true' or 'false'. When set to 'true', points
will be plotted as bars.
label: A string. If provided, will be shown in the legend, along
with the color that was used to plot the function.
output: 'element', 'none', 'plot_label', or 'graph'. If not defined,
function will be plotted (same as setting 'output' to 'graph').
If defined, and other than 'graph', function will not be
plotted, but its output will be inserted into the element
with ID specified by 'el_id' attribute.
el_id: Id of HTML element, defined in '<render>' section. Value of
function will be inserted as content of this element.
disable_auto_return: By default, if JavaScript function string is written
without a "return" statement, the "return" will be
prepended to it. Set to "true" to disable this
functionality. This is done so that simple functions
can be defined in an easy fashion (for example, "a",
which will be translated into "return a").
update_on: A string - 'change', or 'slide'. Default (if not set) is
'slide'. This defines the event on which a given function is
called, and its result is inserted into an element. This
setting is relevant only when "output" is other than "graph".
When specifying ``el_id``, it is essential to set "output" to one of
element - GST will invoke the function, and the return of it will be
inserted into a HTML element with id specified by ``el_id``.
none - GST will simply invoke the function. It is left to the instructor
    who writes the JavaScript function body to update all necessary
    HTML elements inside the function, before it exits. This is done
    so that extra steps can be performed after an HTML element has
been updated with a value. Note, that because the return value
from this function is not actually used, it will be tempting to
omit the "return" statement. However, in this case, the attribute
"disable_auto_return" must be set to "true" in order to prevent
GST from inserting a "return" statement automatically.
plot_label - GST will process all plot labels (which are strings), and
will replace all instances of substrings specified by
``el_id`` with the returned value of the function. This is
necessary if you want a label in the graph to have some changing
number. Because of the nature of Flot JS, it is impossible to
achieve the same effect by setting the "output" attribute
to "element", and including a HTML element in the label.
The above values for "output" will tell GST that the function is meant for an
HTML element (not for graph), and that it should not get an 'x' parameter (along
with some value).
[Note on MathJax and labels]
............................
Independently of this module, MathJax will render all TeX code
within the ``<render>`` section into nice mathematical formulas. Just
remember to wrap it in one of::
\( and \) - for inline formulas (formulas surrounded by
standard text)
\[ and \] - if you want the formula to be a separate line
It is possible to define a label in standard TeX notation. The JS
library MathJax will work on these labels also because they are
inserted on top of the plot as standard HTML (text within a DIV).
If the label is dynamic, i.e. it contains some text (numeric or other)
that has to be updated when a parameter changes, then one can define
a special function to handle this. The "output" of such a function must be
set to "none", and the JavaScript code inside this function must update the
MathJax element by itself. Before exiting, the MathJax typeset function should
be called so that the new text is re-rendered by MathJax. For example::
<render>
...
<span id="dynamic_mathjax"></span>
</render>
...
<function output="none" el_id="dynamic_mathjax">
<![CDATA[
var out_text;
out_text = "\\[\\mathrm{Percent \\space of \\space treated \\space with \\space YSS=\\frac{"
+(treated_men*10)+"\\space men *"
+(your_m_tx_yss/100)+"\\space prev. +\\space "
+((100-treated_men)*10)+"\\space women *"
+(your_f_tx_yss/100)+"\\space prev.}"
+"{1000\\space total\\space treated\\space patients}"
+"="+drummond_combined[0][1]+"\\%}\\]";
mathjax_for_prevalence_calcs+="\\[\\mathrm{Percent \\space of \\space untreated \\space with \\space YSS=\\frac{"
+(untreated_men*10)+"\\space men *"
+(your_m_utx_yss/100)+"\\space prev. +\\space "
+((100-untreated_men)*10)+"\\space women *"
+(your_f_utx_yss/100)+"\\space prev.}"
+"{1000\\space total\\space untreated\\space patients}"
+"="+drummond_combined[1][1]+"\\%}\\]";
$("#dynamic_mathjax").html(out_text);
MathJax.Hub.Queue(["Typeset",MathJax.Hub,"dynamic_mathjax"]);
]]>
</function>
...
Plot tag
........
``Plot`` tag inside ``configuration`` tag defines settings for plot output.
Required parameters::
xrange: Two functions that must return a value. The value is either constant
    (e.g. 3.1415) or depends on a parameter from the parameters section:
<xrange>
<min>return 0;</min>
<max>return 30;</max>
</xrange>
or
<xrange>
<min>return -a;</min>
<max>return a;</max>
</xrange>
All functions will be calculated over the domain between xrange:min
and xrange:max. An xrange that depends on a parameter is extremely
useful when the domain(s) of your function(s) depend on a parameter
(like a circle, when the parameter is the radius and you want to
allow it to change).
Optional parameters::
num_points: Number of data points to generate for the plot. If
    this is not set, the number of points will be
    calculated as width / 5.
bar_width: If functions are present which are to be plotted as bars,
then this parameter specifies the width of the bars. A
numeric value for this parameter is expected.
bar_align: If functions are present which are to be plotted as bars,
then this parameter specifies how to align the bars relative
to the tick. Available values are "left" and "center".
xticks,
yticks: 3 floating point numbers separated by commas. This
    specifies how many ticks are created, what number they
    start at, and what number they end at. This is different
    from the 'xrange' setting in that it has nothing to do
    with the data points - it controls what area of the
    Cartesian space you will see. The first number is the
    first tick's value, the second number is the step
    between each tick, and the third number is the value of the
    last tick. If these configurations are not specified,
    Flot will choose them for you based on the data point
    set that it is currently plotting. Usually this results
    in a nice graph; however, sometimes you need finer
    control. For example, when you want to show
    a fixed area of the Cartesian space, even when the data
    set changes. On its own, Flot will recalculate the
    ticks, which will result in a different graph each time.
    By specifying the xticks, yticks configurations, only
    the plotted data will change - the axes (ticks) will
    remain as you have defined them.
xticks_names, yticks_names:
A JSON string which represents a mapping of xticks, yticks
values to some defined strings. If specified, the graph will
not have any xticks, yticks except those for which a string
value has been defined in the JSON string. Note that the
matching will be string-based and not numeric. I.e. if a tick
value was "3.70" before, then inside the JSON there should be
a mapping like {..., "3.70": "Some string", ...}. Example:
<xticks_names>
<![CDATA[
{
"1": "Treated", "2": "Not Treated",
"4": "Treated", "5": "Not Treated",
"7": "Treated", "8": "Not Treated"
}
]]>
</xticks_names>
<yticks_names>
<![CDATA[
{"0": "0%", "10": "10%", "20": "20%", "30": "30%", "40": "40%", "50": "50%"}
]]>
</yticks_names>
xunits,
yunits: Units values to be set on axes. Use MathJax. Example:
<xunits>\(cm\)</xunits>
<yunits>\(m\)</yunits>
moving_label:
A way to specify a label that should be positioned dynamically,
based on the values of some parameters, or some other factors.
It is similar to a <function>, but it is only valid for a plot
because it is drawn relative to the plot coordinate system.
Multiple "moving_label" configurations can be provided, each one
with a unique text and a unique set of functions that determine
it's dynamic positioning.
Each "moving_label" can have a "color" attribute (CSS color notation),
and a "weight" attribute. "weight" can be one of "normal" or "bold",
and determines the styling of moving label's text.
Each "moving_label" function should return an object with a 'x'
and 'y properties. Within those functions, all of the parameter
names along with their value are available.
Example (note that "return" statement is missing; it will be automatically
inserted by GST):
<moving_label text="Co" weight="bold" color="red>
<![CDATA[ {'x': -50, 'y': c0};]]>
</moving_label>
asymptote:
Add a vertical or horizontal asymptote to the graph which will
be dynamically repositioned based on the specified function.
It is similar to the function in that it provides a JavaScript body function
string. This function will be used to calculate the position of the asymptote
relative to the axis specified by the "type" parameter.
Required parameters:
type:
The axis against which the asymptote should be plotted. Available values
are "x" and "y".
Optional parameters:
color:
The color of the line. A valid CSS color string is expected.
Example
=======
Plotting, sliders and inputs
----------------------------
.. literalinclude:: gst_example_with_documentation.xml
Update of html elements, no plotting
------------------------------------
.. literalinclude:: gst_example_html_element_output.xml
Circle with dynamic radius
--------------------------
.. literalinclude:: gst_example_dynamic_range.xml
Example of a bar graph
----------------------
.. literalinclude:: gst_example_bars.xml
Example of moving labels of graph
---------------------------------
.. literalinclude:: gst_example_dynamic_labels.xml
##############################################################################
JS Input
##############################################################################
This document explains how to write a JSInput input type. JSInput is meant to
allow problem authors to easily turn working standalone HTML files into
problems that can be integrated into the edX platform. Since its aim is
flexibility, it can be seen as the input and client-side equivalent of
CustomResponse.
A JSInput input creates an iframe into a static HTML page, and passes the
return value of author-specified functions to the enclosing response type
(generally CustomResponse). JSInput can also store and retrieve state.
******************************************************************************
Format
******************************************************************************
A jsinput problem looks like this:
.. code-block:: xml
<problem>
<script type="loncapa/python">
def all_true(exp, ans): return ans == "hi"
</script>
<customresponse cfn="all_true">
<jsinput gradefn="gradefn"
height="500"
get_statefn="getstate"
set_statefn="setstate"
html_file="/static/jsinput.html"/>
</customresponse>
</problem>
The accepted attributes are:
============== ============== ========= ==========
Attribute Name Value Type Required? Default
============== ============== ========= ==========
html_file Url string Yes None
gradefn Function name Yes `gradefn`
set_statefn Function name No None
get_statefn Function name No None
height Integer No `500`
width Integer No `400`
============== ============== ========= ==========
******************************************************************************
Required Attributes
******************************************************************************
==============================================================================
html_file
==============================================================================
The `html_file` attribute specifies what html file the iframe will point to. This
should be located in the content directory.
The iframe is created using the sandbox attribute; while popups, scripts, and
pointer locks are allowed, the iframe cannot access its parent's attributes.
The html file should contain an accessible gradefn function. To check whether
the gradefn will be accessible to JSInput, check that, in the browser console,::
    `gradefn`
returns the right thing. When used by JSInput, `gradefn` is called with::
`gradefn`.call(`obj`)
Where `obj` is the object-part of `gradefn`. For example, if `gradefn` is
`myprog.myfn`, JSInput will call `myprog.myfn.call(myprog)`. (This is to
ensure "`this`" continues to refer to what `gradefn` expects.)
Aside from that, more or less anything goes. Note that currently there is no
support for inheriting css or javascript from the parent (aside from the
Chrome-only `seamless` attribute, which is set to true by default).
==============================================================================
gradefn
==============================================================================
The `gradefn` attribute specifies the name of the function that will be called
when a user clicks on the "Check" button, and which should return the student's
answer. This answer will (unless both the get_statefn and set_statefn
attributes are also used) be passed as a string to the enclosing response type.
In the customresponse example above, this means cfn will be passed this answer
as `ans`.
If the `gradefn` function throws an exception when a student attempts to
submit a problem, the submission is aborted, and the student receives a generic
alert. The alert can be customised by making the exception name `Waitfor
Exception`; in that case, the alert message will be the exception message.
**IMPORTANT** : the `gradefn` function should not be at all asynchronous, since
this could result in the student's latest answer not being passed correctly.
Moreover, the function should also return promptly, since currently the student
has no indication that her answer is being calculated/produced.
******************************************************************************
Optional Attributes
******************************************************************************
The `height` and `width` attributes are straightforward: they specify the
height and width of the iframe. Both are limited by the enclosing DOM elements,
so for instance there is an implicit max-width of around 900.
In the future, JSInput may attempt to make these dimensions match the html
file's dimensions (up to the aforementioned limits), but currently it defaults
to `500` and `400` for `height` and `width`, respectively.
==============================================================================
set_statefn
==============================================================================
Sometimes a problem author will want information about a student's previous
answers ("state") to be saved and reloaded. If the attribute `set_statefn` is
used, the function given as its value will be passed the state as a string
argument whenever there is a state, and the student returns to a problem. It is
the responsibility of the function to then use this state appropriately.
The state that is passed is:
1. The previous output of `gradefn` (i.e., the previous answer) if
`get_statefn` is not defined.
2. The previous output of `get_statefn` (see below) otherwise.
It is the responsibility of the iframe to do proper verification of the
argument that it receives via `set_statefn`.
==============================================================================
get_statefn
==============================================================================
Sometimes the state and the answer are quite different. For instance, a problem
that involves using a javascript program that allows the student to alter a
molecule may grade based on the molecule's hydrophobicity, but from the
hydrophobicity it might be incapable of restoring the state. In that case, a
*separate* state may be stored via `get_statefn` and reloaded via `set_statefn`. Note that if
`get_statefn` is defined, the answer (i.e., what is passed to the enclosing
response type) will be a json string with the following format::
{
answer: `[answer string]`
state: `[state string]`
}
It is the responsibility of the enclosing response type to then parse this as
json.
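For example, a check function might unpack that string like this (a minimal sketch; `grade_with_state` and the expected answer are hypothetical):

.. code-block:: python

    import json

    def grade_with_state(exp, ans):
        """cfn for a jsinput problem that defines get_statefn: `ans`
        arrives as a JSON string holding both answer and state."""
        parsed = json.loads(ans)
        answer = parsed["answer"]   # what gradefn returned
        state = parsed["state"]     # what get_statefn returned (unused here)
        return answer == "hi"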
######################
Discussion Forums Data
######################
Discussions in edX are stored in a MongoDB database as collections of JSON documents.
The primary collection holding all posts and comments written by users is `contents`. There are two types of objects stored here, though they share much of the same structure. A `CommentThread` represents a comment that opens a new thread -- usually a student question of some sort. A `Comment` is a reply in the conversation started by a `CommentThread`.
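For example, fetching every thread (but not the replies) for one course might look like this with pymongo (the database name here is an assumption):

.. code-block:: python

    from pymongo import MongoClient

    db = MongoClient()["forum"]              # database name is an assumption
    threads = db.contents.find(
        {"_type": "CommentThread",
         "course_id": "BerkeleyX/Stat2.1x/2013_Spring"}
    ).sort("created_at", -1)                 # newest threads first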
*****************
Shared Attributes
*****************
The attributes that `Comment` and `CommentThread` objects share are listed below.
`_id`
-----
The 12-byte MongoDB unique ID for this collection. Like all MongoDB IDs, they are monotonically increasing and the first four bytes are a timestamp.
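The embedded timestamp can be read back with the bson library (the ID below is just an example value, not a real document):

.. code-block:: python

    from bson.objectid import ObjectId

    oid = ObjectId("512585b9e407b20743000075")   # example ID
    print(oid.generation_time)                   # creation time as a UTC datetime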
`_type`
-------
`CommentThread` or `Comment` depending on the type of object.
`anonymous`
-----------
If true, this `Comment` or `CommentThread` will show up as written by anonymous, even to those who have moderator privileges in the forums.
`anonymous_to_peers`
--------------------
The idea behind this field was that `anonymous_to_peers = true` would make the comment appear anonymous to your fellow students, but would allow the course staff to see who you were. However, that was never implemented in the UI, and only `anonymous` is actually used. The `anonymous_to_peers` field is always false.
`at_position_list`
------------------
No longer used. Child comments (replies) are just sorted by their `created_at` timestamp instead.
`author_id`
-----------
The user who wrote this. Corresponds to the user IDs we store in our MySQL database as `auth_user.id`.
`body`
------
Text of the comment in Markdown. UTF-8 encoded.
`course_id`
-----------
The full course_id of the course that this comment was made in, including org and run. This value can be seen in the URL when browsing the courseware section. Example: `BerkeleyX/Stat2.1x/2013_Spring`
`created_at`
------------
Timestamp in UTC. Example: `ISODate("2013-02-21T03:03:04.587Z")`
`updated_at`
------------
Timestamp in UTC. Example: `ISODate("2013-02-21T03:03:04.587Z")`
`votes`
-------
Both `CommentThread` and `Comment` objects support voting. `Comment` objects that are replies to other comments still have this attribute, even though there is no way to actually vote on them in the UI. This attribute is a dictionary that has the following inside:
* `up` = list of User IDs that up-voted this comment or thread.
* `down` = list of User IDs that down-voted this comment or thread (no longer used).
* `up_count` = total upvotes received.
* `down_count` = total downvotes received (no longer used).
* `count` = total votes cast.
* `point` = net vote, now always equal to `up_count`.
A user only has one vote per `Comment` or `CommentThread`. Though it's still written to the database, the UI no longer displays an option to downvote anything.
*************
CommentThread
*************
The following fields are specific to `CommentThread` objects. Each thread in the forums is represented by one `CommentThread`.
`closed`
--------
If true, this thread was closed by a forum moderator/admin.
`comment_count`
---------------
The number of comment replies in this thread. This includes all replies to replies, but does not include the original comment that started the thread. So if we had::
CommentThread: "What's a good breakfast?"
* Comment: "Just eat cereal!"
* Comment: "Try a Loco Moco, it's amazing!"
* Comment: "A Loco Moco? Only if you want a heart attack!"
* Comment: "But it's worth it! Just get a spam musubi on the side."
In that exchange, the `comment_count` for the `CommentThread` is `4`.
`commentable_id`
----------------
We can attach a discussion to any piece of content in the course, or to top level categories like "General" and "Troubleshooting". When the `commentable_id` is a high level category, it's specified in the course's policy file. When it's a specific content piece (e.g. `600x_l5_p8`, meaning 6.00x, Lecture Sequence 5, Problem 8), it's taken from a discussion module in the course.
`last_activity_at`
------------------
Timestamp in UTC indicating the last time there was activity in the thread (new posts, edits, etc). Closing the thread does not affect the value in this field.
`tags_array`
------------
Meant to be a list of tags that were user definable, but no longer used.
`title`
-------
Title of the thread, UTF-8 string.
*******
Comment
*******
The following fields are specific to `Comment` objects. A `Comment` is a reply to a `CommentThread` (so an answer to the question), or a reply to another `Comment` (a comment about somebody's answer). It used to be the case that `Comment` replies could nest much more deeply, but we later capped it at just these three levels (question, answer, comment) much in the way that StackOverflow does.
`endorsed`
----------
Boolean value, true if a forum moderator or instructor has marked that this `Comment` is a correct answer for whatever question the thread was asking. Exists for `Comments` that are replies to other `Comments`, but in that case `endorsed` is always false because there's no way to endorse such comments through the UI.
`comment_thread_id`
-------------------
What `CommentThread` are we a part of? All `Comment` objects have this.
`parent_id`
-----------
The `parent_id` is the `_id` of the `Comment` that this comment was made in reply to. Note that this only occurs in a `Comment` that is a reply to another `Comment`; it does not appear in a `Comment` that is a reply to a `CommentThread`.
`parent_ids`
------------
The `parent_ids` attribute appears in all `Comment` objects, and contains the `_id` of all ancestor comments. Since the UI now prevents comments from being nested more than one layer deep, it will only ever have at most one element in it. If a `Comment` has no parent, it's an empty list.
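As a hedged sketch, the replies to a given thread can be reconstructed by
sorting on `created_at` (per the `at_position_list` note above). The helper
below reuses the `contents` collection handle from the earlier sketch and
assumes `comment_thread_id` is stored as an ObjectId::

    from bson.objectid import ObjectId

    def thread_replies(contents, thread_id):
        """Return a thread's replies in display order.

        `thread_id` is the thread's `_id` as a hex string.
        """
        return list(
            contents.find({"_type": "Comment",
                           "comment_thread_id": ObjectId(thread_id)})
                    .sort("created_at", 1))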
##############################
Student Info and Progress Data
##############################
The following sections detail how edX stores student state data internally, and are useful for developers and researchers who are examining database exports. This information includes demographic information collected at signup, course enrollment, course progress, and certificate status.
Conventions to keep in mind:
* We currently use MySQL 5.1 with InnoDB tables.
* All strings are stored as UTF-8.
* All datetimes are stored as UTC.
* Tables that are built into the Django framework are not documented here unless we use them in unconventional ways.
All of our tables will be described below, first in summary form with field types and constraints, and then with a detailed explanation of each field. For those not familiar with the MySQL schema terminology in the table summaries:
`Type`
This is the kind of data it is, along with the size of the field. When a numeric field has a length specified, it just means that's how many digits we want displayed -- it has no effect on the number of bytes used.
.. list-table::
:widths: 10 80
:header-rows: 1
* - Value
- Meaning
* - `int`
- 4 byte integer.
* - `smallint`
- 2 byte integer, sometimes used for enumerated values.
* - `tinyint`
- 1 byte integer, but usually just used to indicate a boolean field with 0 = False and 1 = True.
* - `varchar`
- String, typically short and indexable. The length is the number of chars, not bytes (so unicode friendly).
* - `longtext`
- A long block of text, usually not indexed.
* - `date`
- Date
* - `datetime`
- Datetime in UTC, precision in seconds.
`Null`
.. list-table::
:widths: 10 80
:header-rows: 1
* - Value
- Meaning
* - `YES`
- `NULL` values are allowed
* - `NO`
- `NULL` values are not allowed
.. note::
Django often just places blank strings instead of NULL when it wants to indicate that a text value is optional. The distinction is more meaningful for numeric and date fields.
`Key`
.. list-table::
:widths: 10 80
:header-rows: 1
* - Value
- Meaning
* - `PRI`
- Primary key for the table, usually named `id`, unique
* - `UNI`
- Unique
* - `MUL`
- Indexed for fast lookup, but the same value can appear multiple times. A Unique index that allows `NULL` can also show up as `MUL`.
****************
User Information
****************
`auth_user`
===========
The `auth_user` table is built into the Django web framework that we use. It holds generic information necessary for basic login and permissions information. It has the following fields::
+------------------------------+--------------+------+-----+
| Field | Type | Null | Key |
+------------------------------+--------------+------+-----+
| id | int(11) | NO | PRI |
| username | varchar(30) | NO | UNI |
| first_name | varchar(30) | NO | | # Never used
| last_name | varchar(30) | NO | | # Never used
| email | varchar(75) | NO | UNI |
| password | varchar(128) | NO | |
| is_staff | tinyint(1) | NO | |
| is_active | tinyint(1) | NO | |
| is_superuser | tinyint(1) | NO | |
| last_login | datetime | NO | |
| date_joined | datetime | NO | |
| status | varchar(2) | NO | | # No longer used
| email_key | varchar(32) | YES | | # No longer used
| avatar_type | varchar(1) | NO | | # No longer used
| country | varchar(2) | NO | | # No longer used
| show_country | tinyint(1) | NO | | # No longer used
| date_of_birth | date | YES | | # No longer used
| interesting_tags | longtext | NO | | # No longer used
| ignored_tags | longtext | NO | | # No longer used
| email_tag_filter_strategy | smallint(6) | NO | | # No longer used
| display_tag_filter_strategy | smallint(6) | NO | | # No longer used
| consecutive_days_visit_count | int(11) | NO | | # No longer used
+------------------------------+--------------+------+-----+
`id`
----
Primary key, and the value typically used in URLs that reference the user. A user has the same value for `id` here as they do in the MongoDB database's users collection. Foreign keys referencing `auth_user.id` will often be named `user_id`, but are sometimes named `student_id`.
`username`
----------
The unique username for a user in our system. It may contain alphanumeric, _, @, +, . and - characters. The username is the only information that the students give about themselves that we currently expose to other students. We have never allowed people to change their usernames so far, but that's not something we guarantee going forward.
`first_name`
------------
.. note::
Not used; we store a user's full name in `auth_userprofile.name` instead.
`last_name`
-----------
.. note::
Not used; we store a user's full name in `auth_userprofile.name` instead.
`email`
-------
Their email address. While Django by default makes this optional, we make it required, since it's the primary mechanism through which people log in. Must be unique to each user. Never shown to other users.
`password`
----------
A hashed version of the user's password. Depending on when the password was last set, this will either be a SHA1 hash or PBKDF2 with SHA256 (Django 1.3 uses the former and 1.4 the latter).
`is_staff`
----------
This value is `1` if the user is a staff member *of edX* with corresponding elevated privileges that cut across courses. It does not indicate that the person is a member of the course staff for any given course. Generally, users with this flag set to 1 are either edX program managers responsible for course delivery, or edX developers who need access for testing and debugging purposes. People who have `is_staff = 1` get instructor privileges on all courses, along with having additional debug information show up in the instructor tab.
Note that this designation has no bearing on a user's role in the forums, and confers no elevated privileges there.
Most users have a `0` for this value.
`is_active`
-----------
This value is `1` if the user has clicked on the activation link that was sent to them when they created their account, and `0` otherwise. Users who have `is_active = 0` generally cannot log into the system. However, when users first create their account, they are automatically logged in even though they are not active. This is to let them experience the site immediately without having to check their email. They just get a little banner at the top of their dashboard reminding them to check their email and activate their account when they have time. If they log out, they won't be able to log back in again until they've activated. However, because our sessions last a long time, it is theoretically possible for someone to use the site as a student for days without being "active".
Once `is_active` is set to `1`, the only circumstance where it would be set back to `0` would be if we decide to ban the user (a very rare, manual operation).
`is_superuser`
--------------
Value is `1` if the user has admin privileges. Only the earliest developers of the system have this set to `1`, and it's no longer really used in the codebase. Set to 0 for almost everybody.
`last_login`
------------
A datetime of the user's last login. Should not be used as a proxy for activity, since people can use the site all the time and go days between logging in and out.
`date_joined`
-------------
Date that the account was created (NOT when it was activated).
`(obsolete fields)`
-------------------
All the following fields were added by an application called Askbot, a discussion forum package that is no longer part of the system:
* `status`
* `email_key`
* `avatar_type`
* `country`
* `show_country`
* `date_of_birth`
* `interesting_tags`
* `ignored_tags`
* `email_tag_filter_strategy`
* `display_tag_filter_strategy`
* `consecutive_days_visit_count`
Only users who were part of the prototype 6.002x course run in the Spring of 2012 would have any information in these fields. Even with those users, most of this information was never collected. Only the fields that are automatically generated have any values in them, such as tag settings.
These fields are completely unrelated to the discussion forums we currently use, and will eventually be dropped from this table.
`auth_userprofile`
==================
The `auth_userprofile` table is mostly used to store user demographic information collected during the signup process. We also use it to store certain additional metadata relating to certificates. Every row in this table corresponds to one row in `auth_user`::
+--------------------+--------------+------+-----+
| Field | Type | Null | Key |
+--------------------+--------------+------+-----+
| id | int(11) | NO | PRI |
| user_id | int(11) | NO | UNI |
| name | varchar(255) | NO | MUL |
| language | varchar(255) | NO | MUL | # Prototype course users only
| location | varchar(255) | NO | MUL | # Prototype course users only
| meta | longtext | NO | |
| courseware | varchar(255) | NO | | # No longer used
| gender | varchar(6) | YES | MUL | # Only users signed up after prototype
| mailing_address | longtext | YES | | # Only users signed up after prototype
| year_of_birth | int(11) | YES | MUL | # Only users signed up after prototype
| level_of_education | varchar(6) | YES | MUL | # Only users signed up after prototype
| goals | longtext | YES | | # Only users signed up after prototype
| allow_certificate | tinyint(1) | NO | |
+--------------------+--------------+------+-----+
There is an important split in the demographic data gathered between the students who signed up during the MITx prototype phase in the spring of 2012 and those who signed up afterwards.
`id`
----
Primary key, not referenced anywhere else.
`user_id`
---------
A foreign key that maps to `auth_user.id`.
`name`
------
String for a user's full name. We make no constraints on language or breakdown into first/last name. The names are never shown to other students. Foreign students usually enter a romanized version of their names, but not always.
It used to be our policy to require manual approval of name changes to guard the integrity of the certificates. Students would submit a name change request and someone from the team would approve or reject as appropriate. Later, we decided to allow the name changes to take place automatically, but to log previous names in the `meta` field.
`language`
----------
User's preferred language, asked during the sign up process for the 6.002x prototype course given in the Spring of 2012. We stopped collecting this information after the transition from MITx to edX, but we never removed the values from our first group of students. Responses were sometimes written in the user's own language.
`location`
----------
User's location, asked during the sign up process for the 6.002x prototype course given in the Spring of 2012. We weren't specific, so people tended to put the city they were in, though some just specified their country and some got as specific as their street address. Again, sometimes romanized and sometimes written in their native language. Like `language`, we stopped collecting this field when we transitioned from MITx to edX, so it's only available for our first batch of students.
`meta`
------
An optional, freeform text field that stores JSON data. This was a hack to allow us to associate arbitrary metadata with a user. An example of the JSON that can be stored here is::
{
"old_names" : [
["Mike Smith", "Mike's too informal for a certificate.", "2012-11-15T17:28:12.658126"],
["Michael Smith", "I want to add a middle name as well.", "2013-02-07T11:15:46.524331"]
],
"old_emails" : [["mr_mike@email.com", "2012-10-18T15:21:41.916389"]],
"6002x_exit_response" : {
"rating": ["6"],
"teach_ee": ["I do not teach EE."],
"improvement_textbook": ["I'd like to get the full PDF."],
"future_offerings": ["true"],
"university_comparison":
["This course was <strong>on the same level</strong> as the university class."],
"improvement_lectures": ["More PowerPoint!"],
"highest_degree": ["Bachelor's degree."],
"future_classes": ["true"],
"future_updates": ["true"],
"favorite_parts": ["Releases, bug fixes, and askbot."]
}
}
The following are details about this metadata. Please note that the fields described below are found as JSON attributes *inside* the `meta` field, and are *not* separate database fields of their own.
`old_names`
A list of the previous names this user had, and the timestamps at which they submitted a request to change those names. These name change requests used to require a staff member's approval before they took effect. This is no longer the case, though we still record their previous names.
Note that the value stored for each entry is the name they had, not the name they requested to get changed to. People often changed their names as the time for certificate generation approached, to replace nicknames with their actual names or correct spelling/punctuation errors.
The timestamps are UTC, like all datetimes stored in our system.
`old_emails`
A list of previous emails this user had, with timestamps of when they changed them, in a format similar to `old_names`. There was never an approval process for this.
The timestamps are UTC, like all datetimes stored in our system.
`6002x_exit_response`
Answers to a survey that was sent to students after the prototype 6.002x course in the Spring of 2012. The questions and number of questions were randomly selected to measure how much survey length affected response rate. Only students from this course have this field.
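Since `meta` is stored as raw text, consumers must decode the JSON
themselves. A minimal sketch, assuming only the field layout documented
above::

    import json

    def previous_names(meta_text):
        """List (old_name, reason, timestamp) tuples from a `meta` value.

        `meta_text` is the raw longtext value from `auth_userprofile.meta`;
        an empty field means no metadata was ever recorded.
        """
        if not meta_text:
            return []
        meta = json.loads(meta_text)
        return [tuple(entry) for entry in meta.get("old_names", [])]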
`courseware`
------------
This can be ignored. At one point, it was part of a way to do A/B tests, but it has not been used for anything meaningful since the conclusion of the prototype course in the spring of 2012.
`gender`
--------
Dropdown field collected during student signup. We only started collecting this information after the transition from MITx to edX, so prototype course students will have `NULL` for this field.
.. list-table::
:widths: 10 80
:header-rows: 1
* - Value
- Meaning
* - `NULL`
- This student signed up before this information was collected
* - `''` (blank)
- User did not specify gender
* - `'f'`
- Female
* - `'m'`
- Male
* - `'o'`
- Other
`mailing_address`
-----------------
Text field collected during student signup. We only started collecting this information after the transition from MITx to edX, so prototype course students will have `NULL` for this field. Students who elected not to enter anything will have a blank string.
`year_of_birth`
---------------
Dropdown field collected during student signup. We only started collecting this information after the transition from MITx to edX, so prototype course students will have `NULL` for this field. Students who decided not to fill this in will also have NULL.
`level_of_education`
--------------------
Dropdown field collected during student signup. We only started collecting this information after the transition from MITx to edX, so prototype course students will have `NULL` for this field.
.. list-table::
:widths: 10 80
:header-rows: 1
* - Value
- Meaning
* - `NULL`
- This student signed up before this information was collected
* - `''` (blank)
- User did not specify level of education.
* - `'p'`
- Doctorate
* - `'p_se'`
- Doctorate in science or engineering (no longer used)
* - `'p_oth'`
- Doctorate in another field (no longer used)
* - `'m'`
- Master's or professional degree
* - `'b'`
- Bachelor's degree
* - `'a'`
- Associate's degree
* - `'hs'`
- Secondary/high school
* - `'jhs'`
- Junior secondary/junior high/middle school
* - `'el'`
- Elementary/primary school
* - `'none'`
- None
* - `'other'`
- Other
`goals`
-------
Text field collected during student signup in response to the prompt, "Goals in signing up for edX". We only started collecting this information after the transition from MITx to edX, so prototype course students will have `NULL` for this field. Students who elected not to enter anything will have a blank string.
`allow_certificate`
-------------------
Set to `1` for most students. This field is set to `0` if log analysis has revealed that this student is accessing our site from a country that the US has an embargo against. At this time, we do not issue certificates to students from those countries.
`student_courseenrollment`
==========================
A row in this table represents a student's enrollment for a particular course run. If they decide to unenroll from the course, we set `is_active` to `False`. We still leave all their state in `courseware_studentmodule` untouched, so they will not lose courseware state if they unenroll and reenroll.
`id`
----
Primary key.
`user_id`
---------
Student's ID in `auth_user.id`.
`course_id`
-----------
The ID of the course run they're enrolling in (e.g. `MITx/6.002x/2012_Fall`). You can get this from the URL when you're viewing courseware on your browser.
`created`
---------
Datetime of enrollment, UTC.
`is_active`
-----------
Boolean indicating whether this enrollment is active. If an enrollment is not active, a student is not enrolled in that course. This lets us unenroll students without losing a record of what courses they were enrolled in previously. This was introduced in the 2013-08-20 release. Before this release, unenrolling a student simply deleted the row in `student_courseenrollment`.
`mode`
------
String indicating what kind of enrollment this was. The default is "honor" (honor certificate) and all enrollments prior to 2013-08-20 will be of that type. Other types being considered are "audit" and "verified_id".
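As a hedged sketch only (the real model lives in the edx-platform codebase
and may differ), the documented columns correspond to a Django model along
these lines, with unenrollment as an `is_active` flip rather than a delete::

    from django.contrib.auth.models import User
    from django.db import models

    class CourseEnrollment(models.Model):
        """Illustrative mirror of `student_courseenrollment`."""
        user = models.ForeignKey(User)                     # auth_user.id
        course_id = models.CharField(max_length=255)       # e.g. MITx/6.002x/2012_Fall
        created = models.DateTimeField(auto_now_add=True)  # enrollment time, UTC
        is_active = models.BooleanField(default=True)      # False == unenrolled
        mode = models.CharField(max_length=100, default="honor")

    def unenroll(user, course_id):
        """Deactivate an enrollment without touching courseware state."""
        CourseEnrollment.objects.filter(
            user=user, course_id=course_id).update(is_active=False)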
`user_id_map`
==========================
A row in this table maps a student's real user ID to an anonymous ID generated to obfuscate the student's identity.
.. list-table::
:widths: 15 15 15 15
:header-rows: 1
* - Field
- Type
- Null
- Key
* - hashid
- int(11)
- NO
- PRI
* - id
- int(11)
- NO
-
* - username
- varchar(30)
- NO
-
`hash_id`
---------
The user ID generated to obfuscate the student's identity; stored in the `hashid` column above.
`user_id`
---------
The student's ID in `auth_user.id`; stored in the `id` column above.
`username`
----------
The student's username, as in `auth_user.username`.
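For researchers with database access, resolving an anonymized ID is a single
lookup. A sketch using Python's DB-API (the connection object is assumed; any
DB-API 2.0 MySQL driver works)::

    def user_for_hash(conn, hash_id):
        """Resolve an anonymized ID to (user_id, username) via user_id_map."""
        cursor = conn.cursor()
        cursor.execute(
            "SELECT id, username FROM user_id_map WHERE hashid = %s",
            (hash_id,))
        return cursor.fetchone()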
*******************
Courseware Progress
*******************
Any piece of content in the courseware can store state and score in the `courseware_studentmodule` table. Grades and the user Progress page are generated by doing a walk of the course contents, searching for graded items, looking up a student's entries for those items in `courseware_studentmodule` via `(course_id, student_id, module_id)`, and then applying the grade weighting found in the course policy and grading policy files. Course policy files determine how much weight one problem has relative to another, and grading policy files determine how much categories of problems are weighted (e.g. HW=50%, Final=25%, etc.).
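A hedged sketch of the per-item lookup described above, again using a plain
DB-API connection and only the columns documented for
`courseware_studentmodule` below::

    def lookup_state(conn, course_id, student_id, module_id):
        """Fetch one user's saved state and score for one piece of content.

        The three-tuple matches the lookup the grader walks:
        (course_id, student_id, module_id).
        """
        cursor = conn.cursor()
        cursor.execute(
            "SELECT state, grade, max_grade FROM courseware_studentmodule"
            " WHERE course_id = %s AND student_id = %s AND module_id = %s",
            (course_id, student_id, module_id))
        return cursor.fetchone()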
.. warning::
**Modules might not be what you expect!**
It's important to understand what "modules" are in the context of our system, as the terminology can be confusing. For the conventions of this table and many parts of our code, a "module" is a content piece that appears in the courseware. This can be nearly anything that appears when users are in the courseware tab: a video, a piece of HTML, a problem, etc. Modules can also be collections of other modules, such as sequences, verticals (modules stacked together on the same page), weeks, chapters, etc. In fact, the course itself is a top level module that contains all the other contents of the course as children. You can imagine the entire course as a tree with modules at every node.
Modules can store state, but whether and how they do so is up to the implementation for that particular kind of module. When a user loads a page, we look up all the modules they need to render in order to display it, and then we ask the database to look up state for those modules for that user. If there is no corresponding entry for that user for a given module, we create a new row and set the state to an empty JSON dictionary.
`courseware_studentmodule`
==========================
The `courseware_studentmodule` table holds all courseware state for a given user. Every student has a separate row for every piece of content in the course, making this by far our largest table::
+-------------+--------------+------+-----+
| Field | Type | Null | Key |
+-------------+--------------+------+-----+
| id | int(11) | NO | PRI |
| module_type | varchar(32) | NO | MUL |
| module_id | varchar(255) | NO | MUL |
| student_id | int(11) | NO | MUL |
| state | longtext | YES | |
| grade | double | YES | MUL | # problem, selfassessment, and combinedopenended use this
| created | datetime | NO | MUL |
| modified | datetime | NO | MUL |
| max_grade | double | YES | | # problem, selfassessment, and combinedopenended use this
| done | varchar(8) | NO | MUL | # ignore this
| course_id | varchar(255) | NO | MUL |
+-------------+--------------+------+-----+
`id`
----
Primary key. Rarely used though, since most lookups on this table are searches on the three tuple of `(course_id, student_id, module_id)`.
`module_type`
-------------
.. list-table::
:widths: 10 80
:header-rows: 0
* - `chapter`
- The top level categories for a course. Each of these is usually labeled as a Week in the courseware, but this is just convention.
* - `combinedopenended`
- A new module type developed for grading open ended questions via self assessment, peer assessment, and machine learning.
* - `conditional`
- A new module type recently developed for 8.02x, this allows you to prevent access to certain parts of the courseware if other parts have not been completed first.
* - `course`
- The top level course module of which all course content is descended.
* - `problem`
- A problem that the user can submit solutions for. We have many different varieties.
* - `problemset`
- A collection of problems and supplementary materials, typically used for homeworks and rendered as a horizontal icon bar in the courseware. Use is inconsistent, and some courses use a `sequential` instead.
* - `selfassessment`
- Self assessment problems. An early test of the open ended grading system that is not in widespread use yet. Recently deprecated in favor of `combinedopenended`.
* - `sequential`
- A collection of videos, problems, and other materials, rendered as a horizontal icon bar in the courseware.
* - `videosequence`
- A collection of videos, exercise problems, and other materials, rendered as a horizontal icon bar in the courseware. Use is inconsistent, and some courses use a `sequential` instead.
There's been substantial muddling of our container types, particularly between sequentials, problemsets, and videosequences. In the beginning we only had sequentials, and these ended up being used primarily for two purposes: creating a sequence of lecture videos and exercises for instruction, and creating homework problem sets. The `problemset` and `videosequence` types were created with the hope that our system would have a better semantic understanding of what a sequence actually represented, and could at a later point choose to render them differently to the user if it was appropriate. Due to a variety of reasons, migration over to this has been spotty. They all render the same way at the moment.
`module_id`
-----------
Unique ID for a distinct piece of content in a course. These are recorded as URLs of the form `i4x://{org}/{course_num}/{module_type}/{module_name}`. Having URLs of this form allows us to give content a canonical representation even as we are in a state of transition between backend data stores. A small parsing sketch follows the table below.
.. list-table:: Breakdown of example `module_id`: `i4x://MITx/3.091x/problemset/Sample_Problems`
:widths: 10 20 70
:header-rows: 1
* - Part
- Example
- Definition
* - `i4x://`
-
- Just a convention we ran with. We had plans for the domain `i4x.org` at one point.
* - `org`
- `MITx`
- The organization part of the ID, indicating what organization created this piece of content.
* - `course_num`
- `3.091x`
- The course number this content was created for. Note that there is no run information here, so you can't know what runs of the course this content is being used for from the `module_id` alone; you have to look at the `courseware_studentmodule.course_id` field.
* - `module_type`
- `problemset`
- The module type, same value as what's in the `courseware_studentmodule.module_type` field.
* - `module_name`
- `Sample_Problems`
- The name given for this module by the content creators. If the module was not named, the system will generate a name based on the type and a hash of its contents (ex: `selfassessment_03c483062389`).
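A hypothetical helper, shown only to make the four-part form above concrete::

    def parse_module_id(module_id):
        """Split an i4x:// module ID into its documented parts."""
        assert module_id.startswith("i4x://")
        org, course_num, module_type, module_name = (
            module_id[len("i4x://"):].split("/"))
        return {"org": org, "course_num": course_num,
                "module_type": module_type, "module_name": module_name}

    # parse_module_id("i4x://MITx/3.091x/problemset/Sample_Problems")
    # => {'org': 'MITx', 'course_num': '3.091x',
    #     'module_type': 'problemset', 'module_name': 'Sample_Problems'}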
`student_id`
------------
A reference to `auth_user.id`, this is the student that this module state row belongs to.
`state`
-------
This is a JSON text field where different module types are free to store their state however they wish.
Container Modules: `course`, `chapter`, `problemset`, `sequential`, `videosequence`
The state for all of these is a JSON dictionary indicating the user's last known position within this container. This is 1-indexed, not 0-indexed, mostly because it went out that way at one point and we didn't want to later break saved navigation state for users.
Example: `{"position" : 3}`
When this user last interacted with this course/chapter/etc., they had clicked on the third child element. Note that the position is a simple index and not a `module_id`, so if you rearranged the order of the contents, it would not be smart enough to accommodate the changes and would point users to the wrong place.
The hierarchy goes: `course > chapter > (problemset | sequential | videosequence)`
`combinedopenended`
TODO: More details to come.
`conditional`
Conditionals don't actually store any state, so this value is always an empty JSON dictionary (`'{}'`). We should probably remove these entries altogether.
`problem`
There are many kinds of problems supported by the system, and they all have different state requirements. Note that one problem can have many different response fields. If a problem generates a random circuit and asks five questions about it, then all of that is stored in one row in `courseware_studentmodule`.
TODO: Write out different problem types and their state.
`selfassessment`
TODO: More details to come.
`grade`
-------
Floating point value indicating the total unweighted grade for this problem that the student has scored. Basically how many responses they got right within the problem.
Only `problem` and `selfassessment` types use this field. All other modules set this to `NULL`. Due to a quirk in how rendering is done, `grade` can also be `NULL` for a tenth of a second or so the first time that a user loads a problem. The initial load will trigger two writes, the first of which will set the `grade` to `NULL`, and the second of which will set it to `0`.
`created`
---------
Datetime when this row was created (i.e. when the student first accessed this piece of content).
`modified`
----------
Datetime when we last updated this row. Set to be equal to `created` at first. A change in `modified` implies that there was a state change, usually in response to a user action like saving or submitting a problem, or clicking on a navigational element that records its state. However, it can also be triggered if the module writes multiple times on its first load, like problems do (see note in `grade`).
`max_grade`
-----------
Floating point value indicating the total possible unweighted grade for this problem, or basically the number of responses that are in this problem. Though in practice it's the same for every entry with the same `module_id`, it is technically possible for it to be anything. The problems are dynamic enough that you could create a random number of responses if you wanted. This is a bad idea and will probably cause grading errors, but it is possible.
Another way in which `max_grade` can differ between entries with the same `module_id` is if the problem was modified after the `max_grade` was written and the user never went back to the problem after it was updated. This might happen if a member of the course staff puts out a problem with five parts, realizes that the last part doesn't make sense, and decides to remove it. People who saw and answered it when it had five parts and never came back to it after the changes had been made will have a `max_grade` of `5`, while people who saw it later will have a `max_grade` of `4`.
These complexities in our grading system are a high priority target for refactoring in the near future.
Only `problem` and `selfassessment` types use this field. All other modules set this to `NULL`.
`done`
------
Ignore this field. It was supposed to be an indication whether something was finished, but was never properly used and is just `'na'` in every row.
`course_id`
-----------
The course that this row applies to, represented in the form org/course/run (ex: `MITx/6.002x/2012_Fall`). The same course content (same `module_id`) can be used in different courses, and a student's state needs to be tracked separately for each course.
************
Certificates
************
`certificates_generatedcertificate`
===================================
The generatedcertificate table tracks certificate state for students who have been graded after a course completes. Currently the table is only populated when a course ends and a script is run to grade students who have completed the course::
+---------------+--------------+------+-----+---------+----------------+
| Field | Type | Null | Key | Default | Extra |
+---------------+--------------+------+-----+---------+----------------+
| id | int(11) | NO | PRI | NULL | auto_increment |
| user_id | int(11) | NO | MUL | NULL | |
| download_url | varchar(128) | NO | | NULL | |
| grade | varchar(5) | NO | | NULL | |
| course_id | varchar(255) | NO | MUL | NULL | |
| key | varchar(32) | NO | | NULL | |
| distinction | tinyint(1) | NO | | NULL | |
| status | varchar(32) | NO | | NULL | |
| verify_uuid | varchar(32) | NO | | NULL | |
| download_uuid | varchar(32) | NO | | NULL | |
| name | varchar(255) | NO | | NULL | |
| created_date | datetime | NO | | NULL | |
| modified_date | datetime | NO | | NULL | |
| error_reason | varchar(512) | NO | | NULL | |
+---------------+--------------+------+-----+---------+----------------+
`user_id`, `course_id`
----------------------
The table is indexed by user and course.
`status`
--------
Status may be one of these states:
* `unavailable`
* `generating`
* `regenerating`
* `deleting`
* `deleted`
* `downloadable`
* `notpassing`
* `restricted`
* `error`
After a course has been graded and certificates have been issued, status will be one of:
* `downloadable`
* `notpassing`
* `restricted`
If the status is `downloadable` then the student passed the course and there will be a certificate available for download.
`download_url`
--------------
The `download_url` field contains the full URL to the certificate.
`download_uuid`, `verify_uuid`
------------------------------
These two UUIDs uniquely identify the certificate download URL and the certificate verification URL, respectively.
`distinction`
-------------
This was used for letters of distinction for 188.1x and is not being used for any current courses.
`name`
------
This field records the name of the student that was set at the time the student was graded and the certificate was generated.
`grade`
-------
The grade of the student recorded at the time the certificate was generated. This may be different from the current grade, since grading is only done once for a course when it ends.
# Makefile for Sphinx documentation
#
# You can set these variables from the command line.
SPHINXOPTS =
SPHINXBUILD = sphinx-build
PAPER =
BUILDDIR = build
# User-friendly check for sphinx-build
ifeq ($(shell which $(SPHINXBUILD) >/dev/null 2>&1; echo $$?), 1)
$(error The '$(SPHINXBUILD)' command was not found. Make sure you have Sphinx installed, then set the SPHINXBUILD environment variable to point to the full path of the '$(SPHINXBUILD)' executable. Alternatively you can add the directory with the executable to your PATH. If you don't have Sphinx installed, grab it from http://sphinx-doc.org/)
endif
Q_FLAG =
ifeq ($(quiet), true)
Q_FLAG = -Q
endif
# Internal variables.
PAPEROPT_a4 = -D latex_paper_size=a4
PAPEROPT_letter = -D latex_paper_size=letter
ALLSPHINXOPTS = $(Q_FLAG) -d $(BUILDDIR)/doctrees -c source $(PAPEROPT_$(PAPER)) $(SPHINXOPTS) source
# the i18n builder cannot share the environment and doctrees with the others
I18NSPHINXOPTS = $(PAPEROPT_$(PAPER)) $(SPHINXOPTS) source
.PHONY: help clean html dirhtml singlehtml pickle json htmlhelp qthelp devhelp epub latex latexpdf text man changes linkcheck doctest gettext
help:
@echo "Please use \`make <target>' where <target> is one of"
@echo " html to make standalone HTML files"
@echo " dirhtml to make HTML files named index.html in directories"
@echo " singlehtml to make a single large HTML file"
@echo " pickle to make pickle files"
@echo " json to make JSON files"
@echo " htmlhelp to make HTML files and a HTML help project"
@echo " qthelp to make HTML files and a qthelp project"
@echo " devhelp to make HTML files and a Devhelp project"
@echo " epub to make an epub"
@echo " latex to make LaTeX files, you can set PAPER=a4 or PAPER=letter"
@echo " latexpdf to make LaTeX files and run them through pdflatex"
@echo " text to make text files"
@echo " man to make manual pages"
@echo " texinfo to make Texinfo files"
@echo " info to make Texinfo files and run them through makeinfo"
@echo " gettext to make PO message catalogs"
@echo " changes to make an overview of all changed/added/deprecated items"
@echo " linkcheck to check all external links for integrity"
@echo " doctest to run all doctests embedded in the documentation (if enabled)"
clean:
-rm -rf $(BUILDDIR)/*
html:
$(SPHINXBUILD) -b html $(ALLSPHINXOPTS) $(BUILDDIR)/html
@echo
@echo "Build finished. The HTML pages are in $(BUILDDIR)/html."
dirhtml:
$(SPHINXBUILD) -b dirhtml $(ALLSPHINXOPTS) $(BUILDDIR)/dirhtml
@echo
@echo "Build finished. The HTML pages are in $(BUILDDIR)/dirhtml."
singlehtml:
$(SPHINXBUILD) -b singlehtml $(ALLSPHINXOPTS) $(BUILDDIR)/singlehtml
@echo
@echo "Build finished. The HTML page is in $(BUILDDIR)/singlehtml."
pickle:
$(SPHINXBUILD) -b pickle $(ALLSPHINXOPTS) $(BUILDDIR)/pickle
@echo
@echo "Build finished; now you can process the pickle files."
json:
$(SPHINXBUILD) -b json $(ALLSPHINXOPTS) $(BUILDDIR)/json
@echo
@echo "Build finished; now you can process the JSON files."
htmlhelp:
$(SPHINXBUILD) -b htmlhelp $(ALLSPHINXOPTS) $(BUILDDIR)/htmlhelp
@echo
@echo "Build finished; now you can run HTML Help Workshop with the" \
".hhp project file in $(BUILDDIR)/htmlhelp."
qthelp:
$(SPHINXBUILD) -b qthelp $(ALLSPHINXOPTS) $(BUILDDIR)/qthelp
@echo
@echo "Build finished; now you can run "qcollectiongenerator" with the" \
".qhcp project file in $(BUILDDIR)/qthelp, like this:"
@echo "# qcollectiongenerator $(BUILDDIR)/qthelp/edX.qhcp"
@echo "To view the help file:"
@echo "# assistant -collectionFile $(BUILDDIR)/qthelp/edX.qhc"
devhelp:
$(SPHINXBUILD) -b devhelp $(ALLSPHINXOPTS) $(BUILDDIR)/devhelp
@echo
@echo "Build finished."
@echo "To view the help file:"
@echo "# mkdir -p $$HOME/.local/share/devhelp/edX"
@echo "# ln -s $(BUILDDIR)/devhelp $$HOME/.local/share/devhelp/edX"
@echo "# devhelp"
epub:
$(SPHINXBUILD) -b epub $(ALLSPHINXOPTS) $(BUILDDIR)/epub
@echo
@echo "Build finished. The epub file is in $(BUILDDIR)/epub."
latex:
$(SPHINXBUILD) -b latex $(ALLSPHINXOPTS) $(BUILDDIR)/latex
@echo
@echo "Build finished; the LaTeX files are in $(BUILDDIR)/latex."
@echo "Run \`make' in that directory to run these through (pdf)latex" \
"(use \`make latexpdf' here to do that automatically)."
latexpdf:
$(SPHINXBUILD) -b latex $(ALLSPHINXOPTS) $(BUILDDIR)/latex
@echo "Running LaTeX files through pdflatex..."
$(MAKE) -C $(BUILDDIR)/latex all-pdf
@echo "pdflatex finished; the PDF files are in $(BUILDDIR)/latex."
text:
$(SPHINXBUILD) -b text $(ALLSPHINXOPTS) $(BUILDDIR)/text
@echo
@echo "Build finished. The text files are in $(BUILDDIR)/text."
man:
$(SPHINXBUILD) -b man $(ALLSPHINXOPTS) $(BUILDDIR)/man
@echo
@echo "Build finished. The manual pages are in $(BUILDDIR)/man."
texinfo:
$(SPHINXBUILD) -b texinfo $(ALLSPHINXOPTS) $(BUILDDIR)/texinfo
@echo
@echo "Build finished. The Texinfo files are in $(BUILDDIR)/texinfo."
@echo "Run \`make' in that directory to run these through makeinfo" \
"(use \`make info' here to do that automatically)."
info:
$(SPHINXBUILD) -b texinfo $(ALLSPHINXOPTS) $(BUILDDIR)/texinfo
@echo "Running Texinfo files through makeinfo..."
make -C $(BUILDDIR)/texinfo info
@echo "makeinfo finished; the Info files are in $(BUILDDIR)/texinfo."
gettext:
$(SPHINXBUILD) -b gettext $(I18NSPHINXOPTS) $(BUILDDIR)/locale
@echo
@echo "Build finished. The message catalogs are in $(BUILDDIR)/locale."
changes:
$(SPHINXBUILD) -b changes $(ALLSPHINXOPTS) $(BUILDDIR)/changes
@echo
@echo "The overview file is in $(BUILDDIR)/changes."
linkcheck:
$(SPHINXBUILD) -b linkcheck $(ALLSPHINXOPTS) $(BUILDDIR)/linkcheck
@echo
@echo "Link check complete; look for any errors in the above output " \
"or in $(BUILDDIR)/linkcheck/output.txt."
doctest:
$(SPHINXBUILD) -b doctest $(ALLSPHINXOPTS) $(BUILDDIR)/doctest
@echo "Testing of doctests in the sources finished, look at the " \
"results in $(BUILDDIR)/doctest/output.txt."
# -*- coding: utf-8 -*-
#pylint: disable=C0103
#pylint: disable=W0622
#pylint: disable=W0212
#pylint: disable=W0613
import sys, os
from path import path
on_rtd = os.environ.get('READTHEDOCS', None) == 'True'
sys.path.append('../../../../')
from docs.shared.conf import *
# Add any paths that contain templates here, relative to this directory.
templates_path.append('source/_templates')
# Add any paths that contain custom static files (such as style sheets) here,
# relative to this directory. They are copied after the builtin static files,
# so a file named "default.css" will overwrite the builtin "default.css".
html_static_path.append('source/_static')
# If extensions (or modules to document with autodoc) are in another directory,
# add these directories to sys.path here. If the directory is relative to the
# documentation root, use os.path.abspath to make it absolute, like shown here.
root = path('../../../..').abspath()
sys.path.insert(0, root)
sys.path.append(root / "common/djangoapps")
sys.path.append(root / "common/lib")
sys.path.append(root / "common/lib/sandbox-packages")
sys.path.append(root / "lms/djangoapps")
sys.path.append(root / "lms/lib")
sys.path.append(root / "cms/djangoapps")
sys.path.append(root / "cms/lib")
sys.path.insert(0, os.path.abspath(os.path.normpath(os.path.dirname(__file__)
+ '/../../../')))
sys.path.append('.')
# django configuration - careful here
if on_rtd:
os.environ['DJANGO_SETTINGS_MODULE'] = 'lms'
else:
os.environ['DJANGO_SETTINGS_MODULE'] = 'lms.envs.test'
# -- General configuration -----------------------------------------------------
# Add any Sphinx extension module names here, as strings. They can be extensions
# coming with Sphinx (named 'sphinx.ext.*') or your custom ones.
extensions = [
'sphinx.ext.autodoc', 'sphinx.ext.doctest', 'sphinx.ext.intersphinx',
'sphinx.ext.todo', 'sphinx.ext.coverage', 'sphinx.ext.pngmath',
'sphinx.ext.mathjax', 'sphinx.ext.viewcode']
# List of patterns, relative to source directory, that match files and
# directories to ignore when looking for source files.
exclude_patterns = ['build']
# Output file base name for HTML help builder.
htmlhelp_basename = 'edXDocs'
# --- Mock modules ------------------------------------------------------------
# Mock all the modules that the readthedocs build can't import
import mock
class Mock(object):
    """Stand-in object that absorbs any call or attribute access."""
    def __init__(self, *args, **kwargs):
        pass
    def __call__(self, *args, **kwargs):
        return Mock()
    @classmethod
    def __getattr__(cls, name):
        if name in ('__file__', '__path__'):
            # Pretend to be a module that exists on disk.
            return '/dev/null'
        elif name[0] == name[0].upper():
            # CamelCase attributes are treated as classes.
            mockType = type(name, (), {})
            mockType.__module__ = __name__
            return mockType
        else:
            return Mock()
# The list of modules and submodules that we know give RTD trouble.
# Make sure you've tried including the relevant package in
# docs/share/requirements.txt before adding to this list.
MOCK_MODULES = [
'numpy',
'matplotlib',
'matplotlib.pyplot',
'scipy.interpolate',
'scipy.constants',
'scipy.optimize',
]
if on_rtd:
for mod_name in MOCK_MODULES:
sys.modules[mod_name] = Mock()
# -----------------------------------------------------------------------------
# from http://djangosnippets.org/snippets/2533/
# autogenerate models definitions
import inspect
import types
from HTMLParser import HTMLParser
def force_unicode(s, encoding='utf-8', strings_only=False, errors='strict'):
"""
Similar to smart_unicode, except that lazy instances are resolved to
strings, rather than kept as lazy objects.
If strings_only is True, don't convert (some) non-string-like objects.
"""
if strings_only and isinstance(s, (types.NoneType, int)):
return s
    if not isinstance(s, basestring):
if hasattr(s, '__unicode__'):
s = unicode(s)
else:
s = unicode(str(s), encoding, errors)
elif not isinstance(s, unicode):
s = unicode(s, encoding, errors)
return s
class MLStripper(HTMLParser):
def __init__(self):
self.reset()
self.fed = []
def handle_data(self, d):
self.fed.append(d)
def get_data(self):
return ''.join(self.fed)
def strip_tags(html):
s = MLStripper()
s.feed(html)
return s.get_data()
def process_docstring(app, what, name, obj, options, lines):
"""Autodoc django models"""
# This causes import errors if left outside the function
from django.db import models
# If you want extract docs from django forms:
# from django import forms
# from django.forms.models import BaseInlineFormSet
# Only look at objects that inherit from Django's base MODEL class
if inspect.isclass(obj) and issubclass(obj, models.Model):
# Grab the field list from the meta class
fields = obj._meta._fields()
for field in fields:
# Decode and strip any html out of the field's help text
help_text = strip_tags(force_unicode(field.help_text))
# Decode and capitalize the verbose name, for use if there isn't
# any help text
verbose_name = force_unicode(field.verbose_name).capitalize()
if help_text:
# Add the model field to the end of the docstring as a param
# using the help text as the description
lines.append(u':param %s: %s' % (field.attname, help_text))
else:
# Add the model field to the end of the docstring as a param
# using the verbose name as the description
lines.append(u':param %s: %s' % (field.attname, verbose_name))
# Add the field's type to the docstring
lines.append(u':type %s: %s' % (field.attname, type(field).__name__))
return lines
def setup(app):
"""Setup docsting processors"""
#Register the docstring processor with sphinx
app.connect('autodoc-process-docstring', process_docstring)
.. module:: transcripts
======================================================
Developer’s workflow for the timed transcripts in CMS.
======================================================
:download:`Multipage pdf version of Timed Transcripts workflow. <transcripts_workflow.pdf>`
:download:`Open office graph version (source for pdf). <transcripts_workflow.odg>`
:download:`List of implemented acceptance tests. <transcripts_acceptance_tests.odt>`
Description
===========
Timed Transcripts functionality is added in a separate tab of the Video module editor, which is active by default. This tab is called `Basic`; the other tab is called `Advanced` and contains the default metadata fields.
The `Basic` tab is a simplified representation of the `Advanced` tab that speeds up adding a Video module with transcripts to the course.
For more precise adjustments, use the `Advanced` tab.
The front-end part of the `Basic` tab has 4 editors/views:
* Display name
* 3 editors for inserting Video URLs.
Video URL fields might contain 3 kinds of URLs:
* **YouTube** link. The following formats are supported:
* http://www.youtube.com/watch?v=OEoXaMPEzfM&feature=feedrec_grec_index ;
* http://www.youtube.com/user/IngridMichaelsonVEVO#p/a/u/1/OEoXaMPEzfM ;
* http://www.youtube.com/v/OEoXaMPEzfM?fs=1&amp;hl=en_US&amp;rel=0 ;
* http://www.youtube.com/watch?v=OEoXaMPEzfM#t=0m10s ;
* http://www.youtube.com/embed/OEoXaMPEzfM?rel=0 ;
* http://www.youtube.com/watch?v=OEoXaMPEzfM ;
* http://youtu.be/OEoXaMPEzfM ;
* **MP4** video source;
* **WEBM** video source.
Each of these kinds of URLs can be specified just **ONCE**; otherwise, an error message appears on the front-end.
After the editor is filled in, the **transcripts/check** method is invoked with the parameters described below (see `API`_). Depending on conditions, also described below (see `Commands`_), this method responds with a *command*, and the front-end renders the appropriate view.
Each view can have specific actions. The supported actions are:
* Download Timed Transcripts;
* Upload Timed Transcripts;
* Import Timed Transcripts from YouTube;
* Replace edX Timed Transcripts by Timed Transcripts from YouTube;
* Choose Timed Transcripts;
* Use existing Timed Transcripts.
All of these actions are handled by 7 API methods described below (see `API`_).
Because rollback functionality isn't implemented yet, changes made by some of these actions cannot be reverted by clicking the `Cancel` button.
To remove a timed transcripts file from the video, go to the `Advanced` tab, clear the `sub` field, and then save changes.
Commands
========
From the front-end point of view, a command is just a reference to the needed view, together with the possible actions the user can take under the conditions described below (see edx-platform/cms/static/js/views/transcripts/message_manager.js:21-29). A sketch of this decision logic follows the list.
So,
* **IF** YouTube transcripts are present locally **AND** on the YouTube server **AND** these two transcripts files are **DIFFERENT**, we respond with the `replace` command, asking the user to replace the local transcripts file with YouTube's.
* **IF** YouTube transcripts are present **ONLY** locally, we respond with the `found` command.
* **IF** YouTube transcripts are present **ONLY** on the YouTube server, we respond with the `import` command, asking the user to import the transcripts file from the YouTube server.
* **IF** the player is in HTML5 video mode, meaning that **ONLY** html5 sources were added:
   * **IF** just 1 html5 source was added, or both html5 sources have **EQUAL** transcripts files, we respond with the `found` command.
   * **OTHERWISE**, when 2 html5 sources were added and the transcripts files found are **DIFFERENT**, we respond with the `choose` command. In this case, the user must choose which transcripts file to use.
* **IF** we are working with just 1 field **AND** the item.sub field **HAS** a value **AND** the user fills the editor/view with a new value/video source that has no transcripts file, we respond with the `use_existing` command. In this case, the user can keep using the transcripts file from the previous video.
* **OTHERWISE**, we respond with the `not_found` command.
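The following is a hedged sketch of that decision logic in Python. The flag names mirror the **transcripts/check** response fields documented under `API`_; this is illustrative, not the real message_manager.js implementation.
.. code:: python

    def resolve_command(is_youtube_mode, youtube_local, youtube_server,
                        youtube_diff, html5_subs, has_current_subs):
        """Map the documented conditions onto a front-end command.

        `html5_subs` is the list of transcripts file names found for
        the html5 sources.
        """
        if is_youtube_mode:
            if youtube_local and youtube_server:
                return "replace" if youtube_diff else "found"
            if youtube_local:
                return "found"
            if youtube_server:
                return "import"
        elif html5_subs:
            # Two different local files force the user to choose.
            return "choose" if len(set(html5_subs)) > 1 else "found"
        if has_current_subs:
            return "use_existing"
        return "not_found"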
Synchronization and Saving workflow
====================================
For now, the saving mechanism works as follows:
When the `Save` button is clicked, the **ModuleEdit** class (see edx-platform/cms/static/coffee/src/views/module_edit.coffee:83-101) grabs the values from all modified metadata fields and sends all this data to the server.
Because Timed Transcripts are module-specific functionality, the ModuleEdit class is not extended. Instead, to apply all the changes the user made in the `Basic` tab, we use the synchronization mechanism of the TabsEditingDescriptor class. That mechanism lets us perform the needed actions on tab switching and on save (see edx-platform/cms/templates/widgets/video/transcripts.html).
On tab switching and when the save action is invoked, JavaScript code synchronizes the collections (Metadata Collection and Transcripts Collection). You can see the synchronization logic in edx-platform/cms/static/js/views/transcripts/editor.js:72-219. As a result, the Metadata fields always hold the actual data.
Special cases
=============
1. Status message `Timed Transcript Conflict` (Choose), where one of 2 transcripts files should be chosen **-->** click the `Save` button without choosing **-->** open the editor **-->** the status message `Timed Transcript Found` is shown, and a transcripts file is chosen arbitrarily.
2. Status message `Timed Transcript Conflict` (Choose), where one of 2 transcripts files should be chosen **-->** open the `Advanced` tab without choosing **-->** get back to the `Basic` tab **-->** the status message `Timed Transcript Found` is shown, and a transcripts file is chosen arbitrarily.
3. The same issues occur with `Timed Transcript Not Updated` (Use existing).
API
===
We provide 7 API methods to work with timed transcripts
(edx-platform/cms/urls.py:23-29):
* transcripts/upload
* transcripts/download
* transcripts/check
* transcripts/choose
* transcripts/replace
* transcripts/rename
* transcripts/save
**"transcripts/upload"** method is used for uploading SRT transcripts for the
HTML5 and YouTube video modules.
*Method:*
POST
*Parameters:*
- id - location ID of the Xmodule
- video_list - list with information about the links currently passed in the editor/view.
- file - BLOB file
*Response:*
HTTP 400
or
HTTP 200 + JSON:
.. code::
{
status: 'Success' or 'Error',
subs: value of uploaded and saved sub field in the video item.
}
**"transcripts/download"** method is used for downloading SRT transcripts for the
HTML5 and YouTube video modules.
*Method:*
GET
*Parameters:*
- id - location ID of the Xmodule
- subs_id - file name that is used to find transcripts file in the storage.
*Response:*
HTTP 404
or
HTTP 200 + BLOB of SRT file
**"transcripts/check"** method is used for checking availability of timed transcripts
for the video module.
*Method:*
GET
*Parameters:*
- id - location ID of the Xmodule
*Response:*
HTTP 400
or
HTTP 200 + JSON:
.. code::
{
    command: string telling the front-end what to do and what to show to the user,
subs: file name of transcripts file that was found in the storage,
html5_local: [] or [True] or [True, True],
is_youtube_mode: True/False,
youtube_local: True/False,
youtube_server: True/False,
youtube_diff: True/False,
current_item_subs: string with value of item.sub field,
status: 'Error' or 'Success'
}
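A hedged sketch of calling this endpoint from Python; the Studio host, the location ID, and unauthenticated access are assumptions for illustration (a real call needs an authenticated CMS session).
.. code:: python

    import requests

    CMS = "http://localhost:8001"  # hypothetical Studio host
    location = "i4x://MITx/6.002x/video/Welcome"  # hypothetical Xmodule ID

    resp = requests.get(CMS + "/transcripts/check", params={"id": location})
    if resp.status_code == 200:
        data = resp.json()
        print("command=%s status=%s" % (data["command"], data["status"]))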
**"transcripts/choose"** method is used for choosing which transcripts file should be used.
*Method:*
GET
*Parameters:*
- id - location ID of the Xmodule
- video_list - list with information about the links currently passed in the editor/view.
- html5_id - file name of chosen transcripts file.
*Response:*
HTTP 200 + JSON:
.. code::
{
status: 'Success' or 'Error',
subs: value of uploaded and saved sub field in the video item.
}
**"transcripts/replace"** method is used for handling `import` and `replace` commands.
Invoking this method starts downloading a new transcripts file from the YouTube server.
*Method:*
GET
*Parameters:*
- id - location ID of the Xmodule
- video_list - list with information about the links currently passed in the editor/view.
*Response:*
HTTP 400
or
HTTP 200 + JSON:
.. code::
{
status: 'Success' or 'Error',
subs: value of uploaded and saved sub field in the video item.
}
**"transcripts/rename"** method is used for handling `use_existing` command.
After invoking this method, the current transcripts file is copied and renamed to match the name of the current video passed in the editor/view.
*Method:*
GET
*Parameters:*
- id - location ID of the Xmodule
- video_list - list with information about the links currently passed in the editor/view.
*Response:*
HTTP 400
or
HTTP 200 + JSON:
.. code::
{
status: 'Success' or 'Error',
subs: value of uploaded and saved sub field in the video item.
}
**"transcripts/save"** method is used for handling `save` command.
After invoking this method, all changes made up to this moment are saved.
*Method:*
GET
*Parameters:*
- id - location ID of the Xmodule
- metadata - new values for the metadata fields.
- currents_subs - list with the file names of videos passed in the editor/view.
*Response:*
HTTP 400
or
HTTP 200 + JSON:
.. code::
{
status: 'Success' or 'Error'
}
Transcripts modules:
====================
.. automodule:: contentstore.views.transcripts_ajax
:members:
:show-inheritance:
.. automodule:: contentstore.transcripts_utils
:members:
:show-inheritance:
Scope
This document describes code quality standards for the i4x
system.
1. Coding Standards
Code falls into four categories:
* Deployed. Running on a live server.
* Production. Intended for deployment.
* Scaffolding. Intended to define interfaces for future work, and
minimal implementations to support further development.
* Prototype. Experimental new features.
1.1 Deployed
The standards for deployed code are identical to production. In
general, we tend to do either:
1) Perform a final verification QA cycle on changed parts of code
before deploying.
2) Use code on a staging or internal server for a week before
deploying.
1.2 Production
All production code must be peer-reviewed. The code must meet the
following standards:
1) Test Suite. Code must have reasonable, although not complete, test
coverage.
2) Consistent. Code must follow PEP8.
3) Clean Abstractions.
4) Future-Compatible. Code must not be incompatible with the
long-term vision of either the codebase or of edX.
5) Properly Documented
6) Maintainable and deployable
7) Robust.
All code paths must be manually or automatically verified.
1.3 Scaffolding
All scaffolding code should be peer-reviewed. The code must meet the
following standards:
1) Testable. We do not require test coverage, but we do require the
code to be structured such that it is possible to build tests.
2) Consistent. Code must follow PEP8.
3) Clean abstractions or obvious throw-away code. One of the goals
of scaffolding is to define proper abstractions.
4) Future-Compatible. Code must not be incompatible with the
long-term vision of either the codebase or of edX.
5) Somewhat documented
6) Unpluggable. There should be a setting to disable scaffolding code.
By default, and by policy, it should never be enabled on production
servers.
7) Purpose. The scaffolding must provide a clean reason for existence
(e.g. define a specific interface, etc.)
1.4 Prototype
Prototype code should live in a separate branch. It should strive
to follow PEP8, be readable, testable, and future-proof, but we have
no hard standards.
2. Process Standards
* Code should be integrated in small pull requests. Large commits
should be broken down into small commits for integration.
* Every piece of production and deployed code must be reviewed prior
to integration.
* Anyone on the edX team competent to review a piece of code may
review it (this may change as the team grows).
* Each contributor is responsible for finding a person to review their
code. If it is not clear to the contributor who is appropriate, each
project has an owner who is the default go-to.
2.1 Rapid pull
Unmerged code can lead to merge conflicts, and slow down
development. We have an experimental procedure for handling rapid
pulls and merges. To qualify:
* A piece of code must only have minor issues remaining (nothing which
we would be uncomfortable placing on a server).
* Either the requester or the puller takes ownership for guaranteeing
that those issues are resolved within a short timeframe.
* Both the requester and the puller must be comfortable with it.
* Both the requester and the owner must have a history of/ability to
resolve remaining issues quickly.
If code qualifies:
* It can be merged, and repaired in master.
* The pull message should specify '## pending fixes/OWNER' where ## is
the pull request number, and OWNER is the owner.
* All required fixes are documented in github in the (now closed) pull
request, and should be marked off there when applied (potentially,
directly to master).
* Once all fixes are applied, the final commit should specify
'## closed'.
3. Documentation Standards
* Whenever possible, documentation should live in code.
* When impossible, it should live in the github repo.
* Discussion should live on github, Basecamp or Pivotal, depending on
context.
* Notes for later fixes should in general be put into Pivotal as stories.
If they are left in the code, they should be prefixed by
# TODO (<name>)
# Development Tasks
## Prerequisites
### Ruby
To install all of the libraries needed for our rake commands, run `bundle install`.
This will read the `Gemfile` and install all of the gems specified there.
### Python
Run the following::
pip install -r requirements.txt
### Binaries
Install the following:
* Mongodb (http://www.mongodb.org/)
### Databases
First start up the mongo daemon. E.g. to start it up in the background
using a config file:
mongod --config /usr/local/etc/mongod.conf &
Check out the course data directories that you want to work with into the
`GITHUB_REPO_ROOT` (by default, `../data`). Then run the following command:
rake resetdb
## Installing
To create your development environment, run the shell script in the root of
the repo:
scripts/create-dev-env.sh
## Starting development servers
Both the LMS and Studio can be started using the following shortcut tasks:
rake lms # Start the LMS
rake cms # Start studio
rake lms[cms.dev] # Start LMS to run alongside Studio
rake lms[cms.dev_preview] # Start LMS to run alongside Studio in preview mode
Under the hood, this executes `./manage.py {lms|cms} --settings $ENV runserver`,
which starts a local development server.
Both of these commands take arguments to start the servers in different environments
or with additional options:
# Start the LMS using the test configuration, on port 5000
rake lms[test,5000] # Executes ./manage.py lms --settings test runserver 5000
*N.B.* You may have to escape the `[` characters, depending on your shell: `rake "lms[test,5000]"`
To get a full list of available rake tasks, use:
rake -T
### Troubleshooting
#### Reference Error: XModule is not defined (javascript)
This means that the javascript defining an xmodule hasn't loaded correctly. There are a number
of different things that could be causing this:
1. See `Error: watch EMFILE`
#### Error: watch EMFILE (coffee)
When running a development server, we also start a watcher process alongside to recompile coffeescript
and sass as changes are made. On Mac OSX systems, the coffee watcher process takes more file handles
than are allowed by default. This will result in `EMFILE` errors when coffeescript is running, and
will prevent javascript from compiling, leading to the error 'XModule is not defined'
To work around this issue, we use `Process::setrlimit` to set the number of allowed open files.
Coffee watches both directories and files, so you will need to set this fairly high (anecdotally,
8000 seems to do the trick on OSX 10.7.5, 10.8.3, and 10.8.4).
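For illustration only, raising such a limit looks conceptually like this Python
sketch using the standard `resource` module (the rake task itself does this in
Ruby via `Process::setrlimit`):

    import resource

    # raise the soft limit on open file descriptors to 8000,
    # capped at the existing hard limit
    soft, hard = resource.getrlimit(resource.RLIMIT_NOFILE)
    resource.setrlimit(resource.RLIMIT_NOFILE, (min(8000, hard), hard))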
## Running Tests
See `testing.md` for instructions on running the test suite.
## Content development
If you change course content while running the LMS in dev mode, you do not need to restart to refresh the modulestore.
Instead, hit /migrate/modules to see a list of all modules loaded, and click on links (e.g. /migrate/reload/edx4edx) to reload a course.
### Gitreload-based workflow
GitHub (or an equivalent git-based repository system) used for
course content can be set up to trigger an automatic reload when changes are pushed. Here is how:
1. Each content directory in edx_all/data should be a clone of a git repo
2. The user running the edx gunicorn process should have its ssh key registered with the git repo
3. The list settings.ALLOWED_GITRELOAD_IPS should contain the IP address of the git repo originating the gitreload request.
By default, this list is ['207.97.227.253', '50.57.128.197', '108.171.174.178'] (the github IPs).
The list can be overridden in the startup file used, eg lms/envs/dev*.py
4. The git post-receive-hook should POST to /gitreload with a JSON payload. This payload should define at least
{ "repository" : { "name" : reload_dir } }
where reload_dir is the directory name of the content to reload (i.e. edx_all/data/reload_dir should exist)
The edx server will then do "git reset --hard HEAD; git clean -f -d; git pull origin" in that directory. After the pull,
it will reload the modulestore for that course.
Note that the gitreload-based workflow is not meant for deployments on AWS (or elsewhere) which use collectstatic, since collectstatic is not run by a gitreload event.
Also, the gitreload feature needs FEATURES['ENABLE_LMS_MIGRATION'] = True in the django settings.
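As a sketch, a post-receive hook could send that payload like this (the repo
name and server address are illustrative, not prescribed):

    # hypothetical post-receive hook: POST the repo name to /gitreload
    import json
    import urllib2

    payload = {"repository": {"name": "edx4edx"}}  # edx_all/data/edx4edx must exist
    req = urllib2.Request("http://localhost:8000/gitreload",
                          json.dumps(payload),
                          {"Content-Type": "application/json"})
    urllib2.urlopen(req)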
# Running the discussion service
## Instructions for Mac
## Installing Mongodb
If you haven't done so already:
brew install mongodb
Make sure that you have mongodb running. You can simply open a new terminal tab and type:
mongod
## Installing elasticsearch
brew install elasticsearch
For debugging, it's often more convenient to have elasticsearch running in a terminal tab instead of in background. To do so, simply open a new terminal tab and then type:
elasticsearch -f
## Setting up the discussion service
You can retrieve the source code from the [github repository](https://github.com/edx/cs_comments_service).
First go into the edx_all directory. Then type
git clone https://github.com/edx/cs_comments_service.git
cd cs_comments_service/
If you see a prompt asking "Do you wish to trust this .rvmrc file?", type "y"
Now if you see this error "Gemset 'cs_comments_service' does not exist," run the following command to create the gemset and then use the rvm environment manually:
rvm gemset create 'cs_comments_service'
rvm use 1.9.3@cs_comments_service
Now use the following command to install required packages:
bundle install
The following command creates database indexes:
bundle exec rake db:init
Now use the following command to generate seeds (basically some random comments in Latin):
bundle exec rake db:seed
It's done! Launch the app now:
ruby app.rb
## Integrating with the edx platform
The API key must match on both sides. It is configured here:
* edx-platform: COMMENTS_SERVICE_KEY in your dev.py file (dev environment) or ENV_TOKENS (prod environment)
* cs_comments_service: api_key in the application.yml file (dev environment) or ENV variable (prod environment)
## Running the delayed job worker
In the discussion service, notifications are handled asynchronously using a third party gem called delayed_job. If you want to test this functionality, run the following command in a separate tab:
bundle exec rake jobs:work
## From the edx-platform django app, initialize roles and permissions
To fully test the discussion forum, you might want to act as a moderator or an administrator. Currently, the roles are:
* moderators can manage everything in the forum, and
* administrators can manage everything plus assigning and revoking moderator status of other users.
First make sure that the database is up-to-date:
rake resetdb
If you have created users in the edx-platform django apps when the comment service was not running, you will need to one-way sync the users into the comment service back end database:
./manage.py lms sync_user_info
Now initialize roles and permissions, providing a course id. See the example below. Note that you do not need to do this for Studio-created courses, as the Studio application does this for you.
./manage.py lms seed_permissions_roles "MITx/6.002x/2012_Fall"
To assign yourself as a moderator, use the following command (assuming your username is "test", and the course id is "MITx/6.002x/2012_Fall"):
./manage.py lms assign_role test Moderator "MITx/6.002x/2012_Fall"
To assign yourself as an administrator, use the following command:
./manage.py lms assign_role test Administrator "MITx/6.002x/2012_Fall"
## Some other useful commands
### generate seeds for a specific forum
The seed generating command above assumes that you have the following discussion tags somewhere in the course data:
<discussion for="Welcome Video" id="video_1" discussion_category="Video"/>
<discussion for="Lab 0: Using the Tools" id="lab_1" discussion_category="Lab"/>
<discussion for="Lab Circuit Sandbox" id="lab_2" discussion_category="Lab"/>
For example, you can insert them into the overview section as follows:
<chapter name="Overview">
<section format="Video" name="Welcome">
<vertical>
<video youtube="0.75:izygArpw-Qo,1.0:p2Q6BrNhdh8,1.25:1EeWXzPdhSA,1.50:rABDYkeK0x8"/>
<discussion for="Welcome Video" id="video_1" discussion_category="Video"/>
</vertical>
</section>
<section format="Lecture Sequence" name="System Usage Sequence">
<%include file="sections/introseq.xml"/>
</section>
<section format="Lab" name="Lab0: Using the tools">
<vertical>
<html> See the <a href="/section/labintro"> Lab Introduction </a> or <a href="/static/handouts/schematic_tutorial.pdf">Interactive Lab Usage Handout </a> for information on how to do the lab </html>
<problem name="Lab 0: Using the Tools" filename="Lab0" rerandomize="false"/>
<discussion for="Lab 0: Using the Tools" id="lab_1" discussion_category="Lab"/>
</vertical>
</section>
<section format="Lab" name="Circuit Sandbox">
<vertical>
<problem name="Circuit Sandbox" filename="Lab_sandbox" rerandomize="false"/>
<discussion for="Lab Circuit Sandbox" id="lab_2" discussion_category="Lab"/>
</vertical>
</section>
</chapter>
Currently, only the attribute "id" is actually used; it identifies the discussion forum. In the code for the data generator, the corresponding lines are:
generate_comments_for("video_1")
generate_comments_for("lab_1")
generate_comments_for("lab_2")
We also have a command for generating comments within a forum with the specified id:
bundle exec rake db:generate_comments[type_the_discussion_id_here]
For instance, if you want to generate comments for a new discussion tab named "lab_3", use the following command:
bundle exec rake db:generate_comments[lab_3]
### Running tests for the service
bundle exec rspec
Warning: the development and test environments share the same elasticsearch index. After running tests, search may not work in the development environment. You simply need to reindex:
bundle exec rake db:reindex_search
### debugging the service
You can use the following command to launch a console within the service environment:
bundle exec rake console
### show user roles and permissions
Use the following command to see the roles and permissions of a user in a given course (assuming, again, that the username is "test"):
./manage.py lms show_permissions test
Make sure the relevant environment variables are exported before running this; otherwise the command will not pick up the right settings.
# Notes on using mongodb backed LMS and CMS
These are some random notes for developers on how things are stored in mongodb and how to debug mongodb data.
## Databases
Two mongodb databases are used:
- xmodule: stores module definitions and metadata (modulestore)
- xcontent: stores filesystem content, like PDF files
modulestore documents are stored with an _id which has fields like this:
{"_id": {"tag":"i4x","org":"HarvardX","course":"CS50x","category":"chapter","name":"Week_1","revision":null}}
## Document fields
### Problems
Here is an example showing the fields available in problem documents:
{
"_id" : {
"tag" : "i4x",
"org" : "MITx",
"course" : "6.00x",
"category" : "problem",
"name" : "ps03:ps03-Hangman_part_2_The_Game",
"revision" : null
},
"definition" : {
"data" : " ..."
},
"metadata" : {
"display_name" : "Hangman Part 2: The Game",
"attempts" : "30",
"title" : "Hangman, Part 2",
"data_dir" : "6.00x",
"type" : "lecture"
}
}
## Sample interaction with mongodb
1. "mongo"
2. "use xmodule"
3. "show collections" should give "modulestore" and "system.indexes"
4. 'db.modulestore.find( {"_id.org": "MITx"} )' will produce a list of all MITx course documents
5. 'db.modulestore.find( {"_id.org": "MITx", "_id.category": "problem"} )' will produce a list of all problems in MITx courses
Example query for finding all files with "image" in the filename:
- use xcontent
- db.fs.files.find({'filename': /image/ } )
- db.fs.files.find({'filename': /image/ } ).count()
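The same queries can be run from Python with pymongo; a minimal sketch, assuming
a local mongod and the database/collection names used above:

    import re

    from pymongo import MongoClient

    client = MongoClient('localhost', 27017)

    # all problems in MITx courses (same as the shell query above)
    modulestore = client['xmodule']['modulestore']
    for doc in modulestore.find({'_id.org': 'MITx', '_id.category': 'problem'}):
        print doc['_id']['name']

    # count files with "image" in the filename
    fs_files = client['xcontent']['fs.files']
    print fs_files.find({'filename': re.compile('image')}).count()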
## Debugging the mongodb contents
A convenient tool is http://phpmoadmin.com/ (needs php)
Under ubuntu, do:
- apt-get install php5-fpm php-pear
- pecl install mongo
- edit /etc/php5/fpm/php.ini to add "extension=mongo.so"
- /etc/init.d/php5-fpm restart
and also set up nginx to run php through fastcgi.
## Backing up mongodb
- mongodump (dumps all dbs)
- mongodump --collection modulestore --db xmodule (dumps just xmodule/modulestore)
- mongodump -d xmodule -q '{"_id.org": "MITx"}' (dumps just MITx documents in xmodule)
- mongodump -q '{"_id.org": "MITx"}' (dumps all MITx documents)
## Deleting course content
Use "remove" instead of "find":
- db.modulestore.remove( {"_id.course": "8.01greytak"})
## Finding useful information from the mongodb modulestore
- Organizations
> db.modulestore.distinct( "_id.org")
[ "HarvardX", "MITx", "edX", "edx" ]
- Courses
> db.modulestore.distinct( "_id.course")
[
"CS50x",
"PH207x",
"3.091x",
"6.002x",
"6.00x",
"8.01esg",
"8.01rq_MW",
"8.02teal",
"8.02x",
"edx4edx",
"toy",
"templates"
]
- Find a problem which has the word "quantum" in its definition
db.modulestore.findOne( {"definition.data":/quantum/})
- Find the Location of all problems with the word "quantum" in their definition
db.modulestore.find( {"definition.data":/quantum/}, {'_id':1})
- Number of problems in each course
db.runCommand({
mapreduce: "modulestore",
query: { '_id.category': 'problem' },
map: function(){ emit(this._id.course, {count:1}); },
reduce: function(key, values){
var result = {count:0};
values.forEach(function(value) {
result.count += value.count;
});
return result;
},
out: 'pbyc',
verbose: true
});
produces:
> db.pbyc.find()
{ "_id" : "3.091x", "value" : { "count" : 184 } }
{ "_id" : "6.002x", "value" : { "count" : 176 } }
{ "_id" : "6.00x", "value" : { "count" : 147 } }
{ "_id" : "8.01esg", "value" : { "count" : 184 } }
{ "_id" : "8.01rq_MW", "value" : { "count" : 73 } }
{ "_id" : "8.02teal", "value" : { "count" : 5 } }
{ "_id" : "8.02x", "value" : { "count" : 99 } }
{ "_id" : "PH207x", "value" : { "count" : 25 } }
{ "_id" : "edx4edx", "value" : { "count" : 50 } }
{ "_id" : "templates", "value" : { "count" : 11 } }
# Documentation for edX code (edx-platform repo)
This document explains the general structure of the edX platform, and defines some of the acronyms and terms you'll see flying around in the code.
## Assumptions:
You should be familiar with the following. If you're not, go read some docs...
- python
- django
- javascript
- html, xml -- xpath, xslt
- css
- git
- mako templates -- we use these instead of django templates, because they support embedding real python.
## Other relevant terms
- CAPA -- lon-capa.org -- content management system that has defined a standard for online learning and assessment materials. Many of our materials follow this standard.
- TODO: add more details / link to relevant docs. lon-capa.org is not immediately intuitive.
- lcp = loncapa problem
## Parts of the system
- LMS -- Learning Management System. The student-facing parts of the system. Handles student accounts, displaying videos, tutorials, exercises, problems, etc.
- CMS -- Course Management System. The instructor-facing parts of the system. Allows instructors to see and modify their course, add lectures, problems, reorder things, etc.
- Forums -- this is a ruby on rails service that runs on Heroku. Contributed by Berkeley folks. The LMS has a wrapper lib that talks to it.
- Data. In the data/ dir. There is currently a single `course.xml` file that describes an entire course. Speaking of which...
- Courses. A course is broken up into Chapters ("week 1", "week 2", etc). A chapter is broken up into Sections ("Lecture 1", "Simple Circuits Exercises", "HW1", etc). A section can contain modules: Problems, Html, Videos, Verticals, or Sequences.
- Problems: specified in problem files. May have python scripts embedded to both generate random parameters and check answers. Also allows specifying things like tolerance or precision in answers
- Html: any html - often description, or links to outside resources
- Videos: links to youtube or elsewhere
- Verticals: a nesting tag: collect several videos, problems, html modules and display them vertically.
- Sequences: a sequence of modules, displayed with a horizontal navigation bar, displaying one component at a time.
- see `data/course.xml` for more examples
## High Level Entities in the code
### Common libraries
- xmodule: generic learning modules. *x* can be sequence, video, template, html,
vertical, capa, etc. These are the things that one puts inside sections
in the course structure.
- XModuleDescriptor: This defines the problem and all data and UI needed to edit
that problem. It is unaware of any student data, but can be used to retrieve
an XModule, which is aware of that student state.
- XModule: The XModule is a problem instance that is particular to a student. It knows
how to render itself to html to display the problem, how to score itself,
and how to handle ajax calls from the front end. (A schematic sketch follows this list.)
- Both XModule and XModuleDescriptor take system context parameters. These are named
ModuleSystem and DescriptorSystem respectively. These help isolate the XModules
from any interactions with external resources that they require.
For instance, the DescriptorSystem has a function to load an XModuleDescriptor
from a Location object, and the ModuleSystem knows how to render things,
track events, and complain about 404s
- XModules and XModuleDescriptors are uniquely identified by a Location object, encoding the organization, course, category, name, and possibly revision of the module.
- XModule initialization: XModules are instantiated by the `XModuleDescriptor.xmodule` method, and given a ModuleSystem, the descriptor which instantiated it, and their relevant model data.
- XModuleDescriptor initialization: If an XModuleDescriptor is loaded from an XML-based course, the XML data is passed into its `from_xml` method, which is responsible for instantiating a descriptor with the correct attributes. If it's in Mongo, the descriptor is instantiated directly. The module's attributes will be present in the `model_data` dict.
- `course.xml` format. We use python setuptools to connect supported tags with the descriptors that handle them. See `common/lib/xmodule/setup.py`. There are checking and validation tools in `common/validate`.
- the xml import+export functionality is in `xml_module.py:XmlDescriptor`, which is a mixin class that's used by the actual descriptor classes.
- There is a distinction between descriptor _definitions_ that stay the same for any use of that descriptor (e.g. here is what a particular problem is), and _metadata_ describing how that descriptor is used (e.g. whether to allow checking of answers, due date, etc). When reading in `from_xml`, the code pulls out the metadata attributes into a separate structure, and puts it back on export.
- in `common/lib/xmodule`
- capa modules -- defines `LoncapaProblem` and many related things.
- in `common/lib/capa`
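The descriptor/module split can be pictured with this schematic sketch (these
are not the real xmodule classes or signatures, just the shape of the relationship):

    # schematic illustration only -- not the actual XModule/XModuleDescriptor API
    class DemoDescriptor(object):
        """Knows the definition and metadata; holds no student state."""
        def __init__(self, definition, metadata):
            self.definition = definition
            self.metadata = metadata

        def xmodule(self, system, model_data):
            # analogous to XModuleDescriptor.xmodule: hand out a
            # student-specific instance
            return DemoModule(system, self, model_data)

    class DemoModule(object):
        """Student-specific instance: renders itself and handles ajax."""
        def __init__(self, system, descriptor, model_data):
            self.system = system
            self.descriptor = descriptor
            self.model_data = model_data  # includes student state

        def get_html(self):
            return "<div>%s</div>" % self.descriptor.definition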
### LMS
The LMS is a django site, with root in `lms/`. It runs in many different environments--the settings files are in `lms/envs`.
- We use the Django Auth system, including the is_staff and is_superuser flags. User profiles and related code lives in `lms/djangoapps/student/`. There is support for groups of students (e.g. 'want emails about future courses', 'have unenrolled', etc) in `lms/djangoapps/student/models.py`.
- `StudentModule` -- keeps track of where a particular student is in a module (problem, video, html)--what's their grade, have they started, are they done, etc. [This is only partly implemented so far.]
- `lms/djangoapps/courseware/models.py`
- Core rendering path:
- `lms/urls.py` points to `courseware.views.index`, which gets module info from the course xml file, and pulls the list of `StudentModule` objects for this user (to avoid multiple db hits).
- Calls `render_accordion` to render the "accordion"--the display of the course structure.
- To render the current module, calls `module_render.py:render_x_module()`, which gets the `StudentModule` instance, and passes the `StudentModule` state and other system context to the module constructor to get an instance of the appropriate module class for this user.
- calls the module's `.get_html()` method. If the module has nested submodules, render_x_module() will be called again for each.
- ajax calls go to `module_render.py:handle_xblock_callback()`, which passes it to one of the `XBlock`s handler functions
- See `lms/urls.py` for the wirings of urls to views.
- Tracking: there is support for basic tracking of client-side events in `lms/djangoapps/track`.
### CMS
The CMS is a django site, with root in `cms`. It can run in a number of different
environments, defined in `cms/envs`.
- Core rendering path: Still TBD
### Static file processing
- CSS -- we use a superset of CSS called SASS. It supports nice things like includes and variables, and compiles to CSS. The compiler is called `sass`.
- javascript -- we use coffeescript, which compiles to js, and is much nicer to work with. Look for `*.coffee` files. We use _jasmine_ for testing js.
- _mako_ -- we use this for templates, and have a wrapper called edxmako that makes mako look like the django templating calls.
We use a fork of django-pipeline to make sure that the js and css always reflect the latest `*.coffee` and `*.sass` files (We're hoping to get our changes merged in the official version soon). This works differently in development and production. Test uses the production settings.
In production, the django `collectstatic` command recompiles everything and puts all the generated static files in a static/ dir. A starting point in the code is `django-pipeline/pipeline/packager.py:pack`.
In development, we don't use collectstatic, instead accessing the files in place. The auto-compilation is run via `common/djangoapps/pipeline_mako/templates/static_content.html`. Details: templates include `<%namespace name='static' file='static_content.html'/>`, then something like `<%static:css group='application'/>` to call the functions in `common/djangoapps/pipeline_mako/__init__.py`, which call the `django-pipeline` compilers.
## Testing
See `testing.md`.
## TODO:
- describe our production environment
- describe the front-end architecture, tools, etc. Starting point: `lms/static`
---
Note: this file uses markdown. To convert to html, run:
markdown2 overview.md > overview.html
This document describes the split mongostore representation, which
separates course structure from content so that each course run can have
its own structure. It does not describe the original mongostore
representation which combined structure and content and used the key
to distinguish draft from published elements.
This document does not describe mongo nor its operations. See
`http://www.mongodb.org/`_ for information on Mongo.
Product Goals and Discussion
----------------------------
(Mark Chang)
This work was instigated by the studio team's need to correctly do
metadata inheritance. As we moved from an on-startup load of the
courseware, the system was able to inflate and perform an inheritance
calculation step such that the intended properties of children could
be set through inheritance. While not strictly a requirement from the
studio authoring approach, where inheritance really rears its head is
on import of existing courseware that was designed assuming
inheritance.
A short term patch was applied that allowed inheritance to act
correctly, but it was felt that it was insufficient and this would be
an opportunity to make a more clean datastore representation. After
much difficulty with how draft objects would work, Calen Pennington
worked through a split data store model à la the FAT filesystem (Mark's
metaphor, not Cale's) to split the structure from the content. The
goal would be a sea of content documents that would not know about the
structure they were utilized within. Cale began the work and handed it
off to Don Mitchell.
In the interim, great discussion was had at the Architect's Council
that firmed up the design and strategy for implementation, adding
great richness and completeness to the new data structure.
The immediate
needs are two, and only two.
#. functioning metadata inheritance
#. good groundwork for versioning
While the discussions of the atomic unit of courseware available for
sharing, how these are shared, and how they refer back to the parent
definition are all valuable, they will not be built in the near term. I
understand and expect there to be many refactorings, improvements, and
migrations in the future.
I fully anticipate much more detail to be uncovered even in this first
thin implementation. When that happens, we will need as much advice
from those watching this page to make sure we move in the right
direction. We also must have the right design artifacts to document
where we stand relative to the overall design that has loftier goals.
Representation
--------------
The xmodule collections:
+ `modulestore.active_versions`: this collection maps the org, course,
and run to the current draft and published versions of the course.
+ `modulestore.structures`: this collection has one entry per course
run and one for the template.
+ `modulestore.definitions`: this collection has one entry per
"module" or "block" version.
modulestore.active_versions: 2 simple maps for dereferencing the
correct course from the structures collection. Every course run will
have a draft version. Not every course run will have a published
version. No course run will have more than one of each of these.
::
{ '_id' : uniqueid,
'versions' : { <versionName> : versionGuid, ..},
'creator' : user_id,
'created' : date (native mongo rep)
}
+ `id` is a unique id for finding this course run. It's a
location-reference string, like 'edu.mit.eng.eecs.6002x.industry.spring2013'.
+ `versions`: These are references to `modulestore.structures`. A
location-reference like
`edu.mit.eng.eecs.6002x.industry.spring2013;draft` refers to the value
associated with `draft` for this document.
+ `versionName` is `draft`, `published`, or another user-defined
string.
+ `versionGuid` is a system generated globally unique id (hash). It
points to the entry in `modulestore.structures`.
`draftVersion`: the design will try to generate a new draft version
for each change to the course object: that is, for each move,
deletion, node creation, or metadata change. Cloning a course
(creating a new run of a course or such) will create a new entry in
this table with just a `draftVersion` and will cause a copy of the
corresponding entry in `modulestore.structures`. The entry in
`structures` will point to its version parent in the source course.
modulestore.structures : the entries in this collection follow this
definition:
::
{ '_id' : course_guid,
'blocks' : {
block_guid : // an arbitrary id representing this node in the course tree
{ 'children' : [ block_guid* ],
'metadata' : { property map },
'definition' : definition_guid,
'category' : 'section' | 'sequence' | ... },
... // more guids
},
'root' : block_guid,
'original' : course_guid, // the first version of this course from which all others were derived
'previous' : course_guid | null, // the previous revision of this course (null if this is the original)
'version_entry' : uniqueid, // from the active_versions collection
'creator' : user_id
}
+ `blocks`: each block is a node in the course such as the course, a
section, a subsection, a unit, or a component. The block ids remain
the same over edits (they're not versioned).
+ `root`: the true top of the course. Not all nodes without parents
are truly roots. Some are orphans.
+ `course_guid, block_guid, definition_guid` are not those specific
strings but instead some system generated globally unique id.
+ The one which gets passed around and pointed to by urls is the
`block_guid`; so, it will be the one the system ensures is readable.
Unlike the other guids, this one stays the same over revisions and can
even be the same between course runs (although the course run
contextualizes it to distinguish its instantiated version).
+ `definition` points to the specific revision of the given element in
`modulestore.definitions` which this version of the course includes.
+ `children` lists the block_guids which are the children of this node
in the course tree. It's an error if the guid in the `children` list
does not occur in the `blocks` dictionary.
+ `metadata` is the node's explicitly defined metadata, some of which
may be inherited by its children.
For debugging purposes, there may be value in adding a courseId field
(org, course, run) for use via db browsers.
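For concreteness, here is a sketch of dereferencing a course run to its current
draft structure using the collections and fields described above (the database
name and the absence of error handling are assumptions):

.. code:: python

    # sketch: resolve a course run id to its draft structure document
    from pymongo import MongoClient

    db = MongoClient()['modulestore']

    entry = db.active_versions.find_one(
        {'_id': 'edu.mit.eng.eecs.6002x.industry.spring2013'})
    draft_guid = entry['versions']['draft']

    structure = db.structures.find_one({'_id': draft_guid})
    root_block = structure['blocks'][structure['root']]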
modulestore.definitions : the data associated with each version of
each node in the structures. Many courses may point to the same
definition or may point to different versions derived from the same
original definition.
::
{ '_id' : guid,
'data' : ..,
'default_settings' : {'display_name':..,..}, // a starting point for new uses of this definition
'category' : xblocktype, // the xmodule/xblock type such as course, problem, html, video, about
'original' : guid, // the first kept version of this definition from which all others were derived
'previous' : guid | null, // the previous revision of this definition (null if this is the original)
'creator' : user_id // the id of whomever pressed the draft or publish button
}
+ `_id`: a guid to uniquely identify the definition.
+ `data` is the payload used by the xmodule and following the
xmodule's data representation.
+ `category` is the xmodule type and used to figure out which xmodule
to instantiate.
There may be some debugging value to adding a courseId field, but it
may also be misleading if the element is used in more than one course.
Templates
~~~~~~~~~
(I'm refactoring templates quite a bit from their representation prior
to this design)
All field defaults will be defined through the xblock field.default
mechanism. Templates, on the other hand, are for representing optional boilerplate
usually for examples such as a multiple-choice problem or a video
component with the fields all filled in. Templates are stored in yaml
files which provide a template name, sorting and filtering information
(e.g., requires advanced editor v allows simple editor), and then
field: value pairs for setting xblocks' fields upon template
selection.
Most of the pre-existing templates including all of the 'empty' ones
will go away. The ones which will stay are the ones truly just giving
examples or starting points for variants. This change will require
that the template choice code provide a default 'blank' choice to the
user which just instantiates the model w/ its defaults versus a choice
of the boilerplates. The client can therefore populate its own model
of the xblock and then send a create-item request to the server when
the user says he/she's ready to save it.
Import/export
~~~~~~~~~~~~~
Export should allow the user to select the version of the course to
export which can be any of the draft or published versions. At a
minimum, the user should choose between draft or published.
Import should import the course as a draft course regardless of
whether it was exported as a published or draft one, I believe. If
there's already a draft for the same course, in the best of all
worlds, it would have the guid to see if the guid exists in the
structures collection, and, if so, just make that the current
draftVersion (don't do any actual data changes). If there's no guid or
the guid doesn't exist in the structures collection, then we'll need
to work out the logic for how to decide what definitions to create v
update v point to.
Course ID
~~~~~~~~~
Currently, we use a triple to identify a run of a course. The triple
is organization, course name, and run identity (e.g., 2013Q1). The
system does not care what the id consists of, only that it uniquely
identifies an edition of the course. The system uses this id to organize
the course composition and find the course elements. It distinguishes
between a current being-edited version (aka, draft) and publicly
viewable version (published). Not every course has a published
version, but every course will have a draft version. The application
specifies whether it wants the draft or published version. This system
allows the application to easily switch between the 2; however, it
will have a configuration in which it's impossible to access the draft
so that we can add access optimizations and extraction filtering later
if needed.
Location
~~~~~~~~
The purpose of `Location` is to identify content. That is, to be able
to locate content by providing sufficient addressing. The `Location`
object is ubiquitous throughout the current code and thus will be
difficult to adapt and make more flexible. Right now, it's a very
simple `namedtuple` and a lot of code presumes this. This refactoring
generalizes and subclasses it to handle various addressing schemes and
remove direct manipulations.
Our code needs to locate several types of things and should probably
use several different types of locators for these. These are the types
of things we need to address. Some of these can be the same as others,
but I wanted to lay them out fairly fine grained here before proposing
my distinctions:
#. Courses: an object representing a course as an offering but not any
of its content. Used for dashboards and other such navigators. These
may specify a version or merely reference the idea of the course's
existence.
#. Course structures: the names (and other metadata), `Locations`, and
children pointers but not definitions for all the blocks in a course
or a subtree of a course. Our applications often display contextual,
outline, or other such structural information which do not need to
include definitions but need to show display names, graded as, and
other status info. This document's design makes fetching these a
single document fetch; however, if it has to fetch the full course, it
will require far more work (getting all definitions too) than the apps
need.
#. Blocks (uses of definitions within a version of a course including
metadata, pointers to children, and type specific content)
#. Definitions: use independent definitions of content without
metadata (and currently w/o pointers to children).
#. Version trees Fetching the time history portrayal of a definition,
course, or block including branching.
#. Collections of courses, definitions, or blocks matching some
partial descriptors (e.g., all courses for org x, all definitions of
type foo, all blocks in course y of type x, all currently accessible
courses (published with startdate < today and enddate > today)).
#. Fetching of courses, blocks, or definitions via "human readable"
urls.
#. (partial descriptors) may suffice for this as human readable
does not guarantee uniqueness.
Some of these differ not so much in how to address them but in what
should be returned. The content should be up to the functions not the
addressing scheme. So, I think the addressable things are:
#. Course as in #1 above: usually a specific offering of a course.
Often used as a context for the other queries.
#. Blocks (aka usages) as in #3 above: a specific block contextualized
in a course
#. Definitions (#4): a specific definition
#. Collections of courses, blocks within a specific course, or
definitions matching a partial descriptor
Course locator (course_loc)
```````````````````````````
There are 3 ways to locate a course:
#. By its unique id in the `active_versions` collection with an
implied or specified selection of draft or published version.
#. By its unique id in the `structures` collection.
Block locator (block_loc)
`````````````````````````
A block locator finds a specific node in a specific version of a
course. Thus, it needs a course locator plus a `usage_id`.
Definition locator (definition_loc)
```````````````````````````````````
Just a `guid`.
Partial descriptor collections locators (partial)
`````````````````````````````````````````````````
In the most general case, and to simplify implementation, these can be
any payload passable to mongo for doing the lookup. The specification
of which collection to look into can be implied by which lookup
function your code calls (get_courses, get_blocks, get_definitions) or
we could add it as another property. For now, I will leave this as
merely a search string. Thus, to find all courses for org = mitx,
`{"org": "mitx"}`. To find all blocks in a course whose display name
contains "circuit example", call `get_blocks` with the course locator
plus `{"metadata.display_name" : /circuit example/i}` (the i makes it
case insensitive and is just an example). To find if a definition is
used in a course, call get_blocks with the course locator plus
`{definition : definition_guid}`. Note, this looks for a specific
version of the definition. If you wanted to see if it used any of a
set of versions, use `{definition : {"$in" : [definition_guid*]}}`.
i4x locator
```````````
To support existing xml based courses and any urls, we need to
support i4x locators. These are tuples of `(org course category id
['draft'])`. The trouble with these is that they don't uniquely
identify a course run from which to dereference the element. There's
also no requirement that `id` have any uniqueness outside the scope of
the other elements. There's some debate as to whether these address
blocks or definitions. To me, they seem to address blocks; however,
in the current system there is no distinction between blocks and
definitions; so, either could be argued.
This version will define an `i4x_location` class for representing
these and using them for xml based courses if necessary.
Current code munges strings to make them 'acceptable' by replacing
'illegal' chars with underscores. I'd like to suggest leaving strings
as is and using url escaping to make acceptable urls. As to making
human readable names from display strings, that should be the
responsibility of the naming module not the Location representation,
imo.
Use cases (expository)
~~~~~~~~~~~~~~~~~~~~~~
There's a section below walking through a specific use case. This one
just tries to review potential functionality.
Inheritance
```````````
Our system has the notion of policies which should control the
behavior of whole courses or subtrees within courses. Such policies
include graceperiods, discussion forum controls, dates, whether to
show answers, how to randomize, etc. It's important that the course
authors' intent propagates to all relevant course sections. The
desired behavior is that (some? all?) metadata attributes on modules
flow down to all children unless overridden.
This design addresses inheritance by making course structure and
metadata separate from content thus enabling a single or small number
of db queries to get these and then compute the inheritance.
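As a sketch of that computation (the field names come from the structures
schema above; the set of inheritable attributes is illustrative):

.. code:: python

    # sketch: push inheritable metadata down the course tree held in a
    # single structures document
    def compute_inheritance(structure, inheritable=('graceperiod', 'showanswer')):
        blocks = structure['blocks']

        def descend(guid, inherited):
            block = blocks[guid]
            effective = dict(inherited)
            effective.update(block['metadata'])  # explicit settings win
            block['_inherited'] = effective
            for child in block.get('children', []):
                descend(child, dict((k, v) for k, v in effective.items()
                                    if k in inheritable))

        descend(structure['root'], {})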
Separating editing from live production
```````````````````````````````````````
Course authors should be able to make changes in isolation from
production and then push out consistent chunks of changes for all
students to see as atomic and consistent. The current system allows
authors to change text and content without affecting production but
not metadata nor course structure. This design separates all changes
from production until pushed.
Sharing of content, part 1
``````````````````````````
Authors want to share content between course runs and even between
different courses. The current system requires copying all such
content and losing the provenance information which could be used to
take advantage of other peoples' changes. This design allows multiple
courses and multiple places within a course to point to the same
definitions and thus potentially, at some day, see other changes to
the content.
Sharing of content, part 2: course structure
````````````````````````````````````````````
Because course structures are separate from their identities, courses
can share structure and track changes in the same way as definitions.
That is, a new course run can point to an existing course instance
with its version history and then branch it from there.
Sharing of content, part 3: modules
```````````````````````````````````
Suppose a course includes a soldering tutorial (or a required lab
safety lesson). Other courses want to use the same tutorial and
possibly allow the student to skip it if the student succeeded at it
in another course. As the tutorial updates, other courses may want to
track the updates or choose to move to the updates without having to
copy the modules from the module's authoritative parent course.
This design enables sharing of composed modules but it does not track
the revisions of those modules separately from their courses. It does
not adequately address this but may be extendible enough to do so.
That is, we could represent these shared units as separate "courses"
and allow ids in block.children[] to point to courses as well as other
blocks in the same course.
We should decide on the behaviors we want. For example: does the
student sometimes have to repeat the content, or never? Should progress
be tracked by the owning course or as a stand-alone minicourse-type
element? Because it's a safety lesson, should all courses track the
current published head, or should they have their own heads and choose
when to promote them?
Are these shared elements rare and large grained enough to make the
indirection not expensive or will it result in devolving to the
current one entry per module design for deducing course structure?
Functional differences from existing modulestore:
-------------------------------------------------
+ Courses and definitions support trees of versions knowing from where
they were derived. For now, I will not implement the server functions
for retrieving and manipulating these version trees and will leave
those for a future effort. I will only implement functions which
extend the trees.
+ Changes to course structure don't immediately affect production:
note, we need to figure out the granularity of the user's publish
behavior for pushing out these actions. That is, do they publish a
whole subtree which may include new children in order to make these
effective, do they publish all structural (deletion, move) changes
under a subtree but not insertions as an action, do they publish each
action individually, or what? How do they know that any of these are
not yet published? Do we have phantom placeholders for deleted nodes
w/ "publish deletion" buttons?
+ Element deletion
+ Element move
+ metadata changes
+ No location objects used as ids! This implementation will use guids
instead. There's a reasonable objection to guids as being too ugly,
long, and indecipherable. I will check mongo, pymongo, and python guid
generation mechanisms to find out if there's a way to make ones which
include a prepended string (such as course and run or an explicitly
stated prepend string) and minimize guid length (e.g., by using
sequential serial # from a global or local pool).
Use case walkthrough:
---------------------
Simple course creation with no precursor course. Note: this shows that
publishing creates subsets and side copies, not in-line versions of
nodes. Each user action below is followed by the resulting db changes.
create course for org, course id, run id

+ active_versions.draftVersion: add entry
+ definitions: add entry C w/ category = 'course', no data
+ structures: add entry w/ 1 child C, original = self, no previous, author = user

add section S

+ copy structures entry; new one points to old as original and previous
+ active_versions.draftVersion points to new
+ definitions: add entry S w/ category = 'section'
+ structures entry: add S to children of the course block; add S to blocks w/ no children

add subsection T

+ copy structures entry; new one points to old as original and previous
+ active_versions.draftVersion points to new
+ definitions: add entry T w/ category = 'sequential'
+ structures entry: add T to children of the S block entry; add T to blocks w/ no children

add unit U

+ copy structures entry; new one points to old as original and previous
+ active_versions.draftVersion points to new
+ definitions: add entry U w/ category = 'vertical'
+ structures entry: add U to children of the T block entry; add U to blocks w/ no children

publish U

+ create structures entry; new one points to self as original (no pointer to the draft course b/c it's not really a clone)
+ active_versions.publishedVersion points to new
+ blocks: add U, T, S, C pointers with each as respective child (regardless of other children they may have in draft), and their metadata

add units V, W, X under T

+ copy structures entry of the draftVersion; new one points to old as original and previous
+ active_versions.draftVersion points to new
+ definitions: add entries V, W, X w/ category = 'vertical'
+ structures entry: add V, W, X to children of the T block entry; add V, W, X to blocks w/ no children

edit U

+ copy structures entry; new one points to old as original and previous
+ active_versions.draftVersion points to new
+ definitions: copy entry U to U_2 w/ updates; U_2 points to U as original and previous
+ structures entry: replace U w/ U_2 in children of the T block entry; copy entry U in blocks to entry U_2 and remove U

add subsection Z under S

+ copy structures entry; new one points to old as original and previous
+ active_versions.draftVersion points to new
+ definitions: add entry Z w/ category = 'sequential'
+ structures entry: add Z to children of the S block entry; add Z to blocks w/ no children

edit S's name (metadata)

+ copy structures entry; new one points to old as original and previous
+ active_versions.draftVersion points to new
+ structures entry: update S's metadata w/ new name

publish U, V

+ copy publishedCourse structures entry; new one points to old published as original and previous
+ active_versions.publishedVersion points to new
+ blocks: update T to point to new U & V and not old U
+ Note: does not update S's name

publish C

+ copy publishedCourse structures entry; new one points to old published as original and previous
+ active_versions.publishedVersion points to new
+ blocks: note that C child S == published(S) but metadata !=; update metadata
+ note that S has unpublished children: publish them (recurse on this)
+ note that Z is unpublished: add pointer to blocks and children of S
+ note that W, X unpublished: add to blocks, add to children of T

edit C metadata (e.g., graceperiod)

+ copy draft structures entry; new one points to old as original and previous
+ active_versions.draftVersion points to new
+ structures entry: update C's metadata

add Y under Z

+ ...

publish C's metadata change

+ copy publishedCourse structures entry; new one points to old published as original and previous
+ active_versions.publishedVersion points to new
+ blocks: update C's metadata
+ Note: no copying of Y or any other changes to published

move X under Z

+ copy draft structures entry; new one points to old as original and previous
+ active_versions.draftVersion points to new
+ structures entry: remove X from T's children and add to Z's
+ Note: making it persistently clear to the user that X still exists under T in the published version will be crucial

delete W

+ copy draft structures entry; new one points to old as original and previous
+ active_versions.draftVersion points to new
+ structures entry: remove W from T's children and remove W from blocks
+ Note: no actual deletion of W, just no longer reachable w/in the draft course, but still in published; so, need to keep user aware of that

publish Z

+ Note: the interesting thing here is that X cannot occur under both Z and T, but the user's not publishing T; here's where having a consistent definition of original may help. If the original of a new element == the original of an existing one, then it's an update?
+ copy publishedCourse entry...
+ definitions: add Y; copy/update Z and X if either has any data changes (they don't)
+ blocks: remove X from T's children and add to Z's; add Y to Z's children; add Y

publish deletion of W

+ copy publishedCourse entry...
+ structures entry: remove W from T's children and remove W from blocks
Conflict detection:
We need scenarios where 2 authors make edits to different parts of the
course, to parts whose parents are being moved, to parts whose parents
are being deleted, to the same parts, ...
.. _http://www.mongodb.org/: http://www.mongodb.org/
# Testing
## Overview
We maintain three kinds of tests: unit tests, integration tests,
and acceptance tests.
### Unit Tests
* Each test case should be concise: setup, execute, check, and teardown.
If you find yourself writing tests with many steps, consider refactoring
the unit under test into smaller units, and then testing those individually.
* As a rule of thumb, your unit tests should cover every code branch.
* Mock or patch external dependencies.
We use [voidspace mock](http://www.voidspace.org.uk/python/mock/).
* We unit test Python code (using [unittest](http://docs.python.org/2/library/unittest.html)) and
Javascript (using [Jasmine](http://pivotal.github.io/jasmine/))
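For example, a minimal sketch in that style (the `myapp.grading` module and its
functions are hypothetical):

    import unittest

    from mock import patch

    from myapp.grading import grade_submission  # hypothetical unit under test

    class GradeSubmissionTest(unittest.TestCase):
        @patch('myapp.grading.send_score_to_queue')
        def test_correct_answer_gets_full_credit(self, mock_send):
            score = grade_submission(answer='42', expected='42')
            self.assertEqual(score, 1.0)
            mock_send.assert_called_once_with(1.0)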
### Integration Tests
* Test several units at the same time.
Note that you can still mock or patch dependencies
that are not under test! For example, you might test that
`LoncapaProblem`, `NumericalResponse`, and `CorrectMap` in the
`capa` package work together, while still mocking out template rendering.
* Use integration tests to ensure that units are hooked up correctly.
You do not need to test every possible input--that's what unit
tests are for. Instead, focus on testing the "happy path"
to verify that the components work together correctly.
* Many of our tests use the [Django test client](https://docs.djangoproject.com/en/dev/topics/testing/overview/) to simulate
HTTP requests to the server.
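For example, a minimal sketch using the test client (the URL and the expected
redirect are illustrative):

    from django.test import TestCase

    class DashboardIntegrationTest(TestCase):
        def test_dashboard_requires_login(self):
            # anonymous users should be redirected to the login page
            response = self.client.get('/dashboard')
            self.assertEqual(response.status_code, 302)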
### UI Acceptance Tests
* Use these to test that major program features are working correctly.
* We use [lettuce](http://lettuce.it/) to write BDD-style tests. Most of
these tests simulate user interactions through the browser using
[splinter](http://splinter.cobrateam.info/).
Overall, you want to write the tests that **maximize coverage**
while **minimizing maintenance**.
In practice, this usually means investing heavily
in unit tests, which tend to be the most robust to changes in the code base.
![Test Pyramid](test_pyramid.png)
The pyramid above shows the relative number of unit tests, integration tests,
and acceptance tests. Most of our tests are unit tests or integration tests.
## Test Locations
* Python unit and integration tests: Located in
subpackages called `tests`.
For example, the tests for the `capa` package are located in
`common/lib/capa/capa/tests`.
* Javascript unit tests: Located in `spec` folders. For example,
`common/lib/xmodule/xmodule/js/spec` and `{cms,lms}/static/coffee/spec`
For consistency, you should use the same directory structure for implementation
and test. For example, the test for `src/views/module.coffee`
should be written in `spec/views/module_spec.coffee`.
* UI acceptance tests:
- Set up and helper methods: `common/djangoapps/terrain`
- Tests: located in `features` subpackage within a Django app.
For example: `lms/djangoapps/courseware/features`
## Factories
Many tests delegate set-up to a "factory" class. For example,
there are factories for creating courses, problems, and users.
This encapsulates set-up logic from tests.
Factories are often implemented using [FactoryBoy](https://readthedocs.org/projects/factoryboy/)
In general, factories should be located close to the code they use.
For example, the factory for creating problem XML definitions
is located in `common/lib/capa/capa/tests/response_xml_factory.py`
because the `capa` package handles problem XML.
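As a sketch, a user factory might look like this (the fields are illustrative;
check the factory_boy docs for the exact base-class spelling in your version):

    import factory
    from django.contrib.auth.models import User

    class UserFactory(factory.DjangoModelFactory):
        FACTORY_FOR = User

        username = factory.Sequence(lambda n: 'user{0}'.format(n))
        email = factory.LazyAttribute(lambda u: '{0}@example.com'.format(u.username))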
# Running Tests
You can run all of the unit-level tests using the command
rake test
This includes python, javascript, and documentation tests. It does not, however,
run any acceptance tests.
## Running Python Unit tests
We use [nose](https://nose.readthedocs.org/en/latest/) through
the [django-nose plugin](https://pypi.python.org/pypi/django-nose)
to run the test suite.
You can run all the python tests using `rake` commands. For example,
rake test:python
runs all the tests. It also runs `collectstatic`, which prepares the static files used by the site (for example, compiling Coffeescript to Javascript).
You can re-run all failed python tests by running
rake test:python[--failed]
You can also run the tests without `collectstatic`, which tends to be faster:
rake fasttest_lms
or
rake fasttest_cms
xmodule can be tested independently, with this:
rake test_common/lib/xmodule
Other module-level tests include:
* `rake test_common/lib/capa`
* `rake test_common/lib/calc`
To run a single django test class:
rake test_lms[lms/djangoapps/courseware/tests/tests.py:ActivateLoginTest]
To run a single django test:
rake test_lms[lms/djangoapps/courseware/tests/tests.py:ActivateLoginTest.test_activate_login]
To re-run all failing django tests from lms or cms:
rake test_lms[--failed]
To run a single nose test file:
nosetests common/lib/xmodule/xmodule/tests/test_stringify.py
To run a single nose test:
nosetests common/lib/xmodule/xmodule/tests/test_stringify.py:test_stringify
To run a single test and get stdout, with proper env config:
python manage.py cms --settings test test contentstore.tests.test_import_nostatic -s
To run a single test and get stdout and get coverage:
python -m coverage run --rcfile=./common/lib/xmodule/.coveragerc `which ./manage.py` cms --settings test test --traceback --logging-clear-handlers --liveserver=localhost:8000-9000 contentstore.tests.test_import_nostatic -s # cms example
python -m coverage run --rcfile=./lms/.coveragerc `which ./manage.py` lms --settings test test --traceback --logging-clear-handlers --liveserver=localhost:8000-9000 courseware.tests.test_module_render -s # lms example
To generate a coverage report:
coverage report --rcfile=./common/lib/xmodule/.coveragerc
or to get an html report:
coverage html --rcfile=./common/lib/xmodule/.coveragerc
then browse reports/common/lib/xmodule/cover/index.html
Very handy: if you uncomment the `pdb=1` line in `setup.cfg`, it will drop you into pdb on error. This lets you go up and down the stack and see the values of the variables. Check out [the pdb documentation](http://docs.python.org/library/pdb.html).
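For reference, the relevant stanza of `setup.cfg` looks roughly like this (a sketch; the surrounding contents of the file vary):

    [nosetests]
    # Uncomment to drop into pdb on errors and failures:
    # pdb = 1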
### Running JavaScript Unit Tests
We use Jasmine to run JavaScript unit tests. To run all the JavaScript tests:
rake test:js
To run a specific set of JavaScript tests and print the results to the console:
rake test:js:run[lms]
rake test:js:run[cms]
rake test:js:run[xmodule]
rake test:js:run[common]
To run JavaScript tests in your default browser:
rake test:js:dev[lms]
rake test:js:dev[cms]
rake test:js:dev[xmodule]
rake test:js:dev[common]
These rake commands call through to a custom test runner. For more info, see [js-test-tool](https://github.com/edx/js-test-tool).
### Running Acceptance Tests
We use [Lettuce](http://lettuce.it/) for acceptance testing.
Most of our tests use [Splinter](http://splinter.cobrateam.info/)
to simulate UI browser interactions. Splinter, in turn,
uses [Selenium](http://docs.seleniumhq.org/) to control the Chrome browser.
**Prerequisite**: You must have [ChromeDriver](https://code.google.com/p/selenium/wiki/ChromeDriver)
installed to run the tests in Chrome. The tests are confirmed to run
with Chrome (not Chromium) version 28.0.1500.71 with ChromeDriver
version 2.1.210398.
To run all the acceptance tests:
rake test:acceptance
To run only for lms or cms:
rake test:acceptance:lms
rake test:acceptance:cms
To test only a specific feature:
rake test:acceptance:lms["lms/djangoapps/courseware/features/problems.feature"]
To test only a specific scenario:
rake test:acceptance:lms["lms/djangoapps/courseware/features/problems.feature -s 3"]
To start the debugger on failure, add the `--pdb` option:
rake test:acceptance:lms["lms/djangoapps/courseware/features/problems.feature --pdb"]
To run tests faster by not collecting static files, you can use
`rake test:acceptance:lms:fast` and `rake test:acceptance:cms:fast`.
Acceptance tests run on a randomized port, so they can run in the background alongside `rake cms`, `rake lms`, or the unit tests.
To specify the port, change the `LETTUCE_SERVER_PORT` constant in `cms/envs/acceptance.py` and `lms/envs/acceptance.py`,
as well as the port listed in `cms/djangoapps/contentstore/feature/upload.py`.
During acceptance test execution, Django log files are written to `test_root/log/lms_acceptance.log` and `test_root/log/cms_acceptance.log`.
**Note**: The acceptance tests can *not* currently run in parallel.
## Viewing Test Coverage
We currently collect test coverage information for Python unit/integration tests.
To view test coverage:
1. Run the test suite:
rake test
2. Generate reports:
rake coverage
3. Reports are located in the `reports` folder. The command
generates HTML and XML (Cobertura format) reports.
## Testing using queue servers
When testing problems that use a queue server on AWS (e.g. sandbox-xqueue.edx.org), you'll need to run your server on your public IP address, like so:
`./manage.py lms runserver 0.0.0.0:8000`
When you connect to the LMS, you need to use the public IP address. Use `ifconfig` to find it, and connect to e.g. `http://18.3.4.5:8000/`.
## Acceptance Test Techniques
1. Element existence on the page<br />
Do not use Splinter's built-in browser methods directly to determine whether elements exist.
Use the `world.is_css_present` and `world.is_css_not_present` wrapper functions instead
(see the sketch after this list).
Otherwise, errors can arise if checks for the CSS are performed before the page finishes loading.
These wrapper functions are also optimized for the amount of wait time spent in both the positive
and negative expectation cases.
2. Dealing with alerts<br />
Chrome can hang on JavaScript alerts. If a JavaScript alert/prompt/confirmation is expected, use the step
'I will confirm all alerts', 'I will cancel all alerts' or 'I will answer all prompts with "(.*)"' before the step
that causes the alert, in order to deal with it properly.
3. Dealing with stale element reference exceptions<br />
These exceptions happen if any part of the page is refreshed between finding an element and accessing it.
When possible, use the CSS functions in common/djangoapps/terrain/ui_helpers.py, as they will retry the action
if this exception occurs. If the functionality you need is not there, wrap the call with world.retry_on_exception, which takes a function, retries it if an exception is raised, and returns its result.
4. Scenario Level Constants<br />
If you want an object to be available for the entire scenario, store it in world.scenario_dict. This object
is a dictionary that is refreshed at the beginning of each scenario. Currently, the logged-in user and the created course are stored under 'USER' and 'COURSE'. This helps prevent hard-coded strings, so the
acceptance tests can be more flexible.
5. Internal edX Jenkins considerations<br />
Acceptance tests run in Jenkins as part of the edX development workflow. They are broken into shards and split across
workers. Therefore, if you add a new .feature file, you need to define which shard it should run in, or else it
will not be executed. Ask someone from TestEng to help you determine where it should go.
Also, the test results are rolled up in Jenkins for ease of understanding, with the acceptance tests under the top level
of "CMS" and "LMS" when they follow this convention: name the feature in the .feature file "CMS" or "LMS", followed by a single
period and then no other periods in the name. The name can contain spaces, e.g. "CMS.Sign Up".
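To illustrate items 1 and 3, here is a hedged sketch of a step definition. The step text and selectors are made up, and it assumes the Splinter browser is exposed as `world.browser`; `world.is_css_present` and `world.retry_on_exception` are the terrain helpers described above:

    from lettuce import step, world

    @step(u'I should see the problem "([^"]*)"')
    def i_should_see_the_problem(step, name):
        # The wrapper waits for the page to finish loading before deciding
        # whether the element exists, unlike raw Splinter checks.
        assert world.is_css_present('section.problem')

        # Retry reads that may hit a stale element reference mid-refresh.
        text = world.retry_on_exception(
            lambda: world.browser.find_by_css('section.problem h2').first.text
        )
        assert name in text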
# Makefile for Sphinx documentation
#
# You can set these variables from the command line.
SPHINXOPTS =
SPHINXBUILD = sphinx-build
PAPER =
BUILDDIR = build
# User-friendly check for sphinx-build
ifeq ($(shell which $(SPHINXBUILD) >/dev/null 2>&1; echo $$?), 1)
$(error The '$(SPHINXBUILD)' command was not found. Make sure you have Sphinx installed, then set the SPHINXBUILD environment variable to point to the full path of the '$(SPHINXBUILD)' executable. Alternatively you can add the directory with the executable to your PATH. If you don't have Sphinx installed, grab it from http://sphinx-doc.org/)
endif
Q_FLAG =
ifeq ($(quiet), true)
Q_FLAG = -Q
endif
# Internal variables.
PAPEROPT_a4 = -D latex_paper_size=a4
PAPEROPT_letter = -D latex_paper_size=letter
ALLSPHINXOPTS = $(Q_FLAG) -d $(BUILDDIR)/doctrees -c source $(PAPEROPT_$(PAPER)) $(SPHINXOPTS) source
# the i18n builder cannot share the environment and doctrees with the others
I18NSPHINXOPTS = $(PAPEROPT_$(PAPER)) $(SPHINXOPTS) source
.PHONY: help clean html dirhtml singlehtml pickle json htmlhelp qthelp devhelp epub latex latexpdf text man changes linkcheck doctest gettext
help:
@echo "Please use \`make <target>' where <target> is one of"
@echo " html to make standalone HTML files"
@echo " dirhtml to make HTML files named index.html in directories"
@echo " singlehtml to make a single large HTML file"
@echo " pickle to make pickle files"
@echo " json to make JSON files"
@echo " htmlhelp to make HTML files and a HTML help project"
@echo " qthelp to make HTML files and a qthelp project"
@echo " devhelp to make HTML files and a Devhelp project"
@echo " epub to make an epub"
@echo " latex to make LaTeX files, you can set PAPER=a4 or PAPER=letter"
@echo " latexpdf to make LaTeX files and run them through pdflatex"
@echo " latexpdfja to make LaTeX files and run them through platex/dvipdfmx"
@echo " text to make text files"
@echo " man to make manual pages"
@echo " texinfo to make Texinfo files"
@echo " info to make Texinfo files and run them through makeinfo"
@echo " gettext to make PO message catalogs"
@echo " changes to make an overview of all changed/added/deprecated items"
@echo " xml to make Docutils-native XML files"
@echo " pseudoxml to make pseudoxml-XML files for display purposes"
@echo " linkcheck to check all external links for integrity"
@echo " doctest to run all doctests embedded in the documentation (if enabled)"
clean:
rm -rf $(BUILDDIR)/*
html:
$(SPHINXBUILD) -b html $(ALLSPHINXOPTS) $(BUILDDIR)/html
@echo
@echo "Build finished. The HTML pages are in $(BUILDDIR)/html."
dirhtml:
$(SPHINXBUILD) -b dirhtml $(ALLSPHINXOPTS) $(BUILDDIR)/dirhtml
@echo
@echo "Build finished. The HTML pages are in $(BUILDDIR)/dirhtml."
singlehtml:
$(SPHINXBUILD) -b singlehtml $(ALLSPHINXOPTS) $(BUILDDIR)/singlehtml
@echo
@echo "Build finished. The HTML page is in $(BUILDDIR)/singlehtml."
pickle:
$(SPHINXBUILD) -b pickle $(ALLSPHINXOPTS) $(BUILDDIR)/pickle
@echo
@echo "Build finished; now you can process the pickle files."
json:
$(SPHINXBUILD) -b json $(ALLSPHINXOPTS) $(BUILDDIR)/json
@echo
@echo "Build finished; now you can process the JSON files."
htmlhelp:
$(SPHINXBUILD) -b htmlhelp $(ALLSPHINXOPTS) $(BUILDDIR)/htmlhelp
@echo
@echo "Build finished; now you can run HTML Help Workshop with the" \
".hhp project file in $(BUILDDIR)/htmlhelp."
qthelp:
$(SPHINXBUILD) -b qthelp $(ALLSPHINXOPTS) $(BUILDDIR)/qthelp
@echo
@echo "Build finished; now you can run "qcollectiongenerator" with the" \
".qhcp project file in $(BUILDDIR)/qthelp, like this:"
@echo "# qcollectiongenerator $(BUILDDIR)/qthelp/getting_started.qhcp"
@echo "To view the help file:"
@echo "# assistant -collectionFile $(BUILDDIR)/qthelp/getting_started.qhc"
devhelp:
$(SPHINXBUILD) -b devhelp $(ALLSPHINXOPTS) $(BUILDDIR)/devhelp
@echo
@echo "Build finished."
@echo "To view the help file:"
@echo "# mkdir -p $$HOME/.local/share/devhelp/getting_started"
@echo "# ln -s $(BUILDDIR)/devhelp $$HOME/.local/share/devhelp/getting_started"
@echo "# devhelp"
epub:
$(SPHINXBUILD) -b epub $(ALLSPHINXOPTS) $(BUILDDIR)/epub
@echo
@echo "Build finished. The epub file is in $(BUILDDIR)/epub."
latex:
$(SPHINXBUILD) -b latex $(ALLSPHINXOPTS) $(BUILDDIR)/latex
@echo
@echo "Build finished; the LaTeX files are in $(BUILDDIR)/latex."
@echo "Run \`make' in that directory to run these through (pdf)latex" \
"(use \`make latexpdf' here to do that automatically)."
latexpdf:
$(SPHINXBUILD) -b latex $(ALLSPHINXOPTS) $(BUILDDIR)/latex
@echo "Running LaTeX files through pdflatex..."
$(MAKE) -C $(BUILDDIR)/latex all-pdf
@echo "pdflatex finished; the PDF files are in $(BUILDDIR)/latex."
latexpdfja:
$(SPHINXBUILD) -b latex $(ALLSPHINXOPTS) $(BUILDDIR)/latex
@echo "Running LaTeX files through platex and dvipdfmx..."
$(MAKE) -C $(BUILDDIR)/latex all-pdf-ja
@echo "pdflatex finished; the PDF files are in $(BUILDDIR)/latex."
text:
$(SPHINXBUILD) -b text $(ALLSPHINXOPTS) $(BUILDDIR)/text
@echo
@echo "Build finished. The text files are in $(BUILDDIR)/text."
man:
$(SPHINXBUILD) -b man $(ALLSPHINXOPTS) $(BUILDDIR)/man
@echo
@echo "Build finished. The manual pages are in $(BUILDDIR)/man."
texinfo:
$(SPHINXBUILD) -b texinfo $(ALLSPHINXOPTS) $(BUILDDIR)/texinfo
@echo
@echo "Build finished. The Texinfo files are in $(BUILDDIR)/texinfo."
@echo "Run \`make' in that directory to run these through makeinfo" \
"(use \`make info' here to do that automatically)."
info:
$(SPHINXBUILD) -b texinfo $(ALLSPHINXOPTS) $(BUILDDIR)/texinfo
@echo "Running Texinfo files through makeinfo..."
make -C $(BUILDDIR)/texinfo info
@echo "makeinfo finished; the Info files are in $(BUILDDIR)/texinfo."
gettext:
$(SPHINXBUILD) -b gettext $(I18NSPHINXOPTS) $(BUILDDIR)/locale
@echo
@echo "Build finished. The message catalogs are in $(BUILDDIR)/locale."
changes:
$(SPHINXBUILD) -b changes $(ALLSPHINXOPTS) $(BUILDDIR)/changes
@echo
@echo "The overview file is in $(BUILDDIR)/changes."
linkcheck:
$(SPHINXBUILD) -b linkcheck $(ALLSPHINXOPTS) $(BUILDDIR)/linkcheck
@echo
@echo "Link check complete; look for any errors in the above output " \
"or in $(BUILDDIR)/linkcheck/output.txt."
doctest:
$(SPHINXBUILD) -b doctest $(ALLSPHINXOPTS) $(BUILDDIR)/doctest
@echo "Testing of doctests in the sources finished, look at the " \
"results in $(BUILDDIR)/doctest/output.txt."
xml:
$(SPHINXBUILD) -b xml $(ALLSPHINXOPTS) $(BUILDDIR)/xml
@echo
@echo "Build finished. The XML files are in $(BUILDDIR)/xml."
pseudoxml:
$(SPHINXBUILD) -b pseudoxml $(ALLSPHINXOPTS) $(BUILDDIR)/pseudoxml
@echo
@echo "Build finished. The pseudo-XML files are in $(BUILDDIR)/pseudoxml."
###################################
January 7, 2014
###################################
You can now access the public edX roadmap_ for details about the currently planned product direction.
.. _roadmap: https://edx-wiki.atlassian.net/wiki/display/OPENPROD/OpenEdX+Public+Product+Roadmap
*************
edX Studio
*************
New documentation, *Building a Course with edX Studio*, is available online_. You can also download the new guide as a PDF from the edX Studio user interface.
.. _online: http://edx.readthedocs.org/projects/ca/en/latest/
=============
New Features
=============
* The **Files & Uploads** page has been updated so that a maximum of 50 files now appear on a single page. If your course has more than 50 files, additional files are listed in separate pages. You can navigate to other pages through pagination controls at the top and bottom of the file list. This change improves the page performance for courses with a large number of files.
For more information, see the `updated documentation for adding files <http://edx.readthedocs.org/projects/ca/en/latest/create_new_course.html#add-files-to-a-course>`_.
.. note:: The :ref:`October 29 2013` release notes describe a workaround to limit the number of files that appear on a single page. With the January 7, 2014 release, this method is not necessary and no longer works.
* The **Course Outline** page is updated to include several design improvements. The new Course Outline appears as in the following example:
.. image:: images/course_outline.png
:alt: The Course Outline
To see the changes, view your course in Studio or see the `updated documentation for organizing your course content <http://edx.readthedocs.org/projects/ca/en/latest/organizing_course.html>`_.
* A template for custom JavaScript display and grading problems (also called JSInput problems) is now available. For more information, see the `updated documentation for Custom JavaScript display and grading problems <http://edx.readthedocs.org/projects/ca/en/latest/advanced_problems.html#custom-javascript-display-and-grading>`_. (BLD-523, BLD-556)
* A template for the Zooming Image tool is now available. For more information, see the `updated documentation for the zooming image tool <http://edx.readthedocs.org/projects/ca/en/latest/tools.html#zooming-image>`_. (BLD-206)
==========================
Changes and Updates
==========================
* The Course Export tool now supports non-ASCII characters. (STUD-868)
* In the course outline, you can now drag a section to the end of the list of sections when the last section is collapsed. (STUD-879)
* In Video components, when you click inside the **Start Time** or **End Time** field, you can enter a time in HH:MM:SS format as normal text. After you click out of the field, Studio adds zeros and performs unit conversions so that the field contains six digits that correspond to hours, minutes, and seconds.
For example, if you enter 1:35, the text in the field changes to 00:01:35. If you enter 2:71:35, the text changes to 3:11:35. (BLD-506 and BLD-581)
* The **Save** button for JSInput Problem components now works as expected. (BLD-568)
***************************************
edX Learning Management System
***************************************
* When you download grades by clicking **Download CSV of answer distributions** on the Instructor Dashboard, the LMS no longer returns an empty CSV for small Studio-created courses. Instead, the LMS returns a CSV that is sorted by url_name and that includes responses from students who have unenrolled from the course.
Note that errors occur if you try to download grades for a large Studio-based course or an XML-based course.
* In the course wiki, the **Preview this Revision** and the **Merge selected with Current** dialog boxes are now keyboard accessible in Internet Explorer. (LMS-1539)
* On the Instructor Dashboard, when you click the **Datadump** tab and then click **Download CSV of all student profile data**, you no longer receive a 500 error message. (LMS-1675)
* For Image Response problems, the correct answer now appears when a student clicks **Show Answer**. (BLD-21)
* On iPads, the video player uses edX controls that appear after you click the video or the Play button. On iPhones, the video player uses native controls. (BLD-541)
###################################
October 23, 2013
###################################
*************
edX Studio
*************
=============
New Features
=============
* **Improved import experience** (STUD-595)
When you import a course, the Import screen now provides real-time status updates. The Import screen tells you when the import is in the
following stages:
* Uploading
* Unpacking
* Verifying
* Updated Course
* Success (Complete)
* **Improved drag and drop experience in course outlines** (STUD-575)
The ability to drag and drop sections, subsections, and units in the course outline is enhanced in the following ways:
* The visual representation of where you are moving the course element to is improved, with a pointer and blue line indicating the
new position.
* You can more easily drag units from one subsection to another.
* When you cancel a drop, the course element returns to its original position.
* You can no longer drag a course element below the New Unit button, which was causing confusion.
* **Text customization capability**
You can now customize some of the UI text that your students see. You do this through the text_customization key in the Advanced
Settings for the course. (However, edX recommends that you contact your Program Manager before you modify the text_customization key.)
* **JavaScript loading performance**
JavaScript loading is changed in ways that should improve the performance on some pages.
Code contributors should note that JavaScript is now loaded through require.js.
==========================
Changes and Updates
==========================
The following changes are included in this release:
* In a course outline, you can no longer drag and drop a unit below the New Unit button. (STUD-152)
* Course update content outside of HTML tags is no longer erroneously removed. (STUD-590)
* In a course outline, dragging a unit over the Units label no longer causes the unit to be removed. (STUD-755)
* Support text in the Assignment Types section of the Grading page is updated to clarify that you enter an integer, not a percent, in the Weight of Total Grade field. (STUD-771)
* Certain component errors no longer prevent the course from saving correctly. (STUD-786)
* When you delete a Discussion component, the discussion is completely removed from the course. (STUD-811, STUD-817)
* When you are editing a course update, the update no longer disappears if you click outside of the Edit window. (STUD-822)
* When you enter the integer 7 in the Total Weight of Grade field, the value is no longer changed to a decimal. (STUD-826)
***************************************
edX Learning Management System
***************************************
=============
New Features
=============
The following changes are included in this release:
* **Fixed views in Internet Explorer 9.x**
Problems with pages in Internet Explorer 9.x are resolved.
* **Disabled downloading data for large courses**
For courses with over 200 students, downloading large data sets could fail. The Download Data button on the Instructor Dashboard is now
temporarily disabled to avoid this problem.
* **Improved Beta Instructor Dashboard**
You can access the Beta Instructor Dashboard from the current Instructor Dashboard by clicking Try the New Beta Dashboard. The Beta
version continues to evolve, with a streamlined design and improved architecture. Both dashboards are currently available.
* **Improved video and transcript experience** (BLD-420)
When you are playing a video with the transcript hidden, you can display the transcript by hovering the mouse pointer over the CC button.
You can then click a paragraph in the displayed transcript to move to that point in the video. When you move the pointer off the CC button,
the transcript is hidden.
* **Improved Learning Tools Interoperability (LTI)** (BLD-330, BLD-347)
You can now use multiple LTI tools per page. You can also have an LTI module load external content in a new window.
==========================
Changes and Updates
==========================
* The link to open and close the calculator has additional aria attributes for accessibility. (BLD-164)
* The Hints panel for the calculator is now accessible to screen readers. (BLD-165)
* Students can now download video subtitles. (BLD-245)
* Multi-speed video playback now works in Firefox browsers as expected. (BLD-287)
* Window resizing no longer cuts off videos. (BLD-289)
* Video HD control is now handicap accessible. (BLD-387)
* You can now export courses that have LTI modules. (BLD-389)
* A malformed custom parameter in an LTI component no longer permanently breaks the unit. (BLD-390)
* The styles and text for the download links for videos and transcripts are updated for clarity and accessibility. (BLD-403)
* The CC button in the video player now includes explanatory text that is accessible with a screen reader. (BLD-404)
* LTI with the Piazza platform now works as expected. (BLD-405)
* The **Close** button on dialog boxes is now defined as an HTML button and is accessible to screen readers. (LMS-582)
******************
Discussion Forums
******************
The following changes are included in this release:
* The color contrast of the Report Misuse link is updated for accessibility. (FOR-200)
* The Report Misuse link now includes a tooltip that is accessible to screen readers. (FOR-201)
* The Report Misuse link is now included in the page tab order, for keyboard accessibility. (FOR-209)
.. _October 29 2013:
###################################
October 29, 2013
###################################
*************
edX Studio
*************
=============
New Features
=============
* **New video editing interface, enabling an enhanced workflow for adding timed transcripts to videos** (BLD-238)
When you enter a video URL in the Editing: Video dialog box, the system checks if a timed transcript for that video exists on edX, and if
so, automatically associates the transcript with the video. If no transcript is found, you click Upload New Timed Transcript to locate and
upload the .SRT file for the transcript.
When there is an associated timed transcript, you can click Download to Edit to download a local copy of the .SRT file. You can then
modify the transcript and upload the new file.
For YouTube videos, you can also import a timed transcript from YouTube, overwriting the version of the transcript on edX with the version
from YouTube.
Backwards compatibility with the other transcript workflow is maintained with a tabbed interface.
====================================================
Known Issues and Workarounds
====================================================
* **Uploading a large number of files** (STUD-813, STUD-837)
When you go to the Files & Uploads page, if your course has a large number of files, the Files & Uploads page can time out before it lists
all the files. The page becomes unresponsive, and you cannot upload more files.
**Workaround**: To upload new files when the Files & Uploads page is timing out, limit the number of files that appear on the Files &
Uploads page by adding start and max parameters to the URL. For example, you can append the following parameters to the URL in your
browser:
`https://studio.edge.edx.org/assets/organization.course-number.course-name/branch/block/course-name?start=5&max=15`
This example tells the page to load a maximum of 10 files, starting with the 6th file. You can use other values as needed, as long as the list
is not so long that the page does not load successfully. Note that file counts begin at 0, not 1, and that files are listed chronologically, with
the most recent first.
==========================
Changes and Updates
==========================
The following changes are included in this release:
* Because Course IDs are not case sensitive, all Course IDs must be unique regardless of capitalization. For example, you cannot have
both edX101 and EdX101 as course IDs. (STUD-873)
***************************************
edX Learning Management System
***************************************
The following changes are included in this release:
* The cheatsheet available when you are adding a new Wiki article is now accessible to screen readers. (LMS-1303)
* In the Wiki, active links are now displayed as bold, and have additional text labels, to be accessible to screen readers. (LMS-1306)
* In the Wiki, when you navigate through links with the Tab key, the active link is updated in the same way as when you hover over it with
the mouse pointer. (LMS-1336)
* Default Wiki permissions are updated so that only course staff can delete Wiki pages. (LMS-1355)
* The Reset Password and Password Reset Confirmation pages are updated to use styles consistent with the system. (LMS-1357)
* In certain situations, students received a 500 error when viewing the Progress page. This problem was resolved in a patch on October 23, 2013. (LMS-1367)
* A visual indicator has been added to the video player to indicate which part of the video will play, when it is not the default. (BLD-391)
* Forum views are updated to improve performance. (FOR-250)
******************
Analytics
******************
The following changes are included in this release:
* Course exports are included with weekly data dumps delivered to university data representatives. (AN-57)
###################################
November 6, 2013
###################################
*************
edX Studio
*************
=============
New Features
=============
* **Improved Course Export page**
The Course Export page has a new layout, with enhanced help text.
==========================
Changes and Updates
==========================
The following changes are included in this release:
* The Forgot Password link on the Studio Sign In page now works correctly. (STUD-689)
* In the Create a New Course page Organization field, text now suggests the generic UniversityX, instead of MITx. (STUD-885)
* In the Create a New Course page Course Run field, text now suggests using the year and trimester (for example, 2014_T1), instead
of the year and season. (STUD-916)
* Studio now continues working correctly when a YouTube video in the page fails to load. (STUD-472)
* To avoid potential problems with browser security, you can no longer enter video URLs with http://. You must use https://. (BLD-408)
* By default, the options to add a Problem Written in LaTeX and a Problem with Adaptive Hint in LaTeX are no longer included in the
Advanced tab of the Problems component. In addition, the option to add E-text Written in LaTeX is no longer included in the HTML component.
To enable these options, open the Advanced Settings page and set the value of the use_latex_compiler policy key to true. (BLD-426)
==========================
Technical Changes
==========================
Contributors to the open source edX Platform should note the following change:
The Course Export page is updated to use a RESTful interface. (STUD-846)
***************************************
edX Learning Management System
***************************************
==========================
New Features
==========================
* **Upgrading the Course Track**
A student in the Honor Code or Audit track can now upgrade to the Verified Certificate track. (LMS-1127)
==========================
Changes and Updates
==========================
The following changes are included in this release:
* After registering for a course, a student could use the browser's Back button to return to the Registration page and change the
registration type. Now, if the user tries to go back to the Registration page, the Learning Management System redirects the student to the
courseware, where the student must unregister first to change the course mode. (LMS-1062)
* When a student fails verification for the Verified track, the Learning Management System now notifies the user through the Student
Dashboard, and prompts them to retry. The student must send another set of photos, but does not have to pay again. If the student does
not retry, they can get a refund. (LMS-1133)
* In the Beta Instructor Dashboard, the layout is improved and the Pending Instructor Tasks section now functions correctly. (LMS-1242)
* You can now tab through the Wiki pages without getting stuck in the content area. (LMS-1307)
* Default settings now include the generic email address @example.com instead of @edx.org. (LMS-1363)
* The user experience and help text during course registration and upgrading are enhanced, and the last day to register for verified certificates is more clear. (LMS-1384)
* Keyboard navigation is now updated in the Wiki to skip repetitive content, allowing users to go directly to unique content on a page. (LMS-1387)
* Errors that prevented the Progress page from successfully loading are fixed. (LMS-1388)
* In the Sign Up page, the Public Display Name field was renamed to Public Username. (LMS-1393)
* Errors when creating a new account are resolved. (LMS-1418)
* A typo was removed from the course registration email template. (LMS-1419)
* Saving a Word Cloud component generated an error message. This problem no longer occurs. (BLD-205)
* When viewing a video, if a student clicked on the video timeline before or after the specified end time, the video jumped to the beginning. This problem is resolved. (BLD-392)
* Students can now change the video speed when the video is paused. (BLD-424)
* Several problems with the video player are resolved:
* The Start time did not work in Flash mode.
* Students could not change the speed before the video started.
* The end point in the video slider was inaccurate for short videos.
* The video slider showed the incorrect position after the video stopped. (BLD-468)
* Sorting of the forums thread list now works correctly when a topic is selected from the drop-down menu. (FOR-224)
* Forum follow buttons are now accessible to screen readers, have the ARIA checkbox role, and activate with the space or Enter key. (FOR-240)
******************
Analytics
******************
The following changes are included in this release:
* The user_id field is added to tracking events. (AN-213)
......@@ -5,8 +5,8 @@ import sys, os
on_rtd = os.environ.get('READTHEDOCS', None) == 'True'
sys.path.append(os.path.abspath('../../../../'))
sys.path.append(os.path.abspath('../../../'))
sys.path.append(os.path.abspath('../../'))
from docs.shared.conf import *
......@@ -34,4 +34,4 @@ copyright = u'2014, edX'
# The short X.Y version.
version = ''
# The full version, including alpha/beta/rc tags.
release = ''
\ No newline at end of file
release = ''
Scope
This document describes code quality standards for the i4x
system.
1. Coding Standards
Code falls into four categories:
* Deployed. Running on a live server.
* Production. Intended for deployment.
* Scaffolding. Intended to define interfaces for future work, and
minimal implementations to support further development.
* Prototype. Experimental new features.
1.1 Deployed
The standards for deployed code are identical to production. In
general, we tend to do either:
1) Perform a final verification QA cycle on changed parts of code
before deploying.
2) Use code on a staging or internal server for a week before
deploying.
1.2 Production
All production code must be peer-reviewed. The code must meet the
following standards:
1) Test Suite. Code must have reasonable, although not complete, test
coverage.
2) Consistent. Code must follow PEP8.
3) Clean Abstractions.
4) Future-Compatible. Code must not be incompatible with the
long-term vision of either the codebase or of edX.
5) Properly Documented
6) Maintainable and deployable
7) Robust.
All code paths must be manually or automatically verified.
1.3 Scaffolding
All scaffolding code should be peer-reviewed. The code must meet the
following standards:
1) Testable. We do not require test coverage, but we do require the
code to be structured such that it is possible to build tests.
2) Consistent. Code must follow PEP8.
3) Clean abstractions or obvious throw-away code. One of the goals
of scaffolding is to define proper abstractions.
4) Future-Compatible. Code must not be incompatible with the
long-term vision of either the codebase or of edX.
5) Somewhat documented
6) Unpluggable. There should be a setting to disable scaffolding code.
By default, and by policy, it should never be enabled on production
servers.
7) Purpose. The scaffolding must provide a clear reason for existence
(e.g. define a specific interface, etc.)
1.4 Prototype
Prototype code should live in a separate branch. It should strive
to follow PEP8, be readable, testable, and future-proof, but we have
no hard standards.
2. Process Standards
* Code should be integrated in small pull requests. Large commits
should be broken down into small commits for integration.
* Every piece of production and deployed code must be reviewed prior
to integration.
* Anyone on the edX team competent to review a piece of code may
review it (this may change as the team grows).
* Each contributor is responsible for finding a person to review their
code. If it is not clear to the contributor who is appropriate, each
project has an owner who is the default go-to.
2.1 Rapid pull
Unmerged code can lead to merge conflicts, and slow down
development. We have an experimental procedure for handling rapid
pulls and merges. To qualify:
* A piece of code must only have minor issues remaining (nothing which
we would be uncomfortable placing on a server).
* Either the requester or the puller takes ownership for guaranteeing
that those issues are resolved within a short timeframe.
* Both the requester and the puller must be comfortable with it.
* Both the requester and the owner must have a history of/ability to
resolve remaining issues quickly.
If code qualifies:
* It can be merged, and repaired in master.
* The pull message should specify '## pending fixes/OWNER' where ## is
the pull request number, and OWNER is the owner.
* All required fixes are documented in github in the (now closed) pull
request, and should be marked off there when applied (potentially,
directly to master).
* Once all fixes are applied, the final commit should specify
'## closed'.
3. Documentation Standards
* Whenever possible, documentation should live in code.
* When impossible, it should live in the github repo.
* Discussion should live on github, Basecamp or Pivotal, depending on
context.
* Notes for later fixes should in general be put into Pivotal as stories.
If they are left in the code, they should be prefixed by
# TODO (<name>)
# Development Tasks
## Prerequisites
### Ruby
To install all of the libraries needed for our rake commands, run `bundle install`.
This will read the `Gemfile` and install all of the gems specified there.
### Python
Run the following:
pip install -r requirements.txt
### Binaries
Install the following:
* MongoDB (http://www.mongodb.org/)
### Databases
First start up the mongo daemon. E.g. to start it up in the background
using a config file:
mongod --config /usr/local/etc/mongod.conf &
Check out the course data directories that you want to work with into the
`GITHUB_REPO_ROOT` (by default, `../data`). Then run the following command:
rake resetdb
## Installing
To create your development environment, run the shell script in the root of
the repo:
scripts/create-dev-env.sh
## Starting development servers
Both the LMS and Studio can be started using the following shortcut tasks:
rake lms # Start the LMS
rake cms # Start studio
rake lms[cms.dev] # Start LMS to run alongside Studio
rake lms[cms.dev_preview] # Start LMS to run alongside Studio in preview mode
Under the hood, this executes `./manage.py {lms|cms} --settings $ENV runserver`,
which starts a local development server.
Both of these commands take arguments to start the servers in different environments
or with additional options:
# Start the LMS using the test configuration, on port 5000
rake lms[test,5000] # Executes ./manage.py lms --settings test runserver 5000
*N.B.* You may have to escape the `[` characters, depending on your shell: `rake "lms[test,5000]"`
To get a full list of available rake tasks, use:
rake -T
### Troubleshooting
#### Reference Error: XModule is not defined (javascript)
This means that the JavaScript defining an xmodule hasn't loaded correctly. There are a number
of different things that could be causing this:
1. See `Error: watch EMFILE`
#### Error: watch EMFILE (coffee)
When running a development server, we also start a watcher process alongside it to recompile CoffeeScript
and Sass as changes are made. On Mac OS X systems, the coffee watcher process takes more file handles
than are allowed by default. This results in `EMFILE` errors when CoffeeScript is running, and
prevents JavaScript from compiling, leading to the error 'XModule is not defined'.
To work around this issue, we use `Process::setrlimit` to raise the number of allowed open files.
Coffee watches both directories and files, so you will need to set this fairly high (anecdotally,
8000 seems to do the trick on OS X 10.7.5, 10.8.3, and 10.8.4).
## Running Tests
See `testing.md` for instructions on running the test suite.
## Content development
If you change course content while running the LMS in dev mode, you do not need to restart the server to refresh the modulestore.
Instead, hit /migrate/modules to see a list of all loaded modules, and click the links (e.g. /migrate/reload/edx4edx) to reload a course.
### Gitreload-based workflow
GitHub (or an equivalent git-based repository system) used for
course content can be set up to trigger an automatic reload when changes are pushed. Here is how:
1. Each content directory in edx_all/data should be a clone of a git repo.
2. The user running the edx gunicorn process should have its ssh key registered with the git repo.
3. The list settings.ALLOWED_GITRELOAD_IPS should contain the IP address of the git repo originating the gitreload request.
By default, this list is ['207.97.227.253', '50.57.128.197', '108.171.174.178'] (the github IPs).
The list can be overridden in the startup file used, e.g. lms/envs/dev*.py.
4. The git post-receive hook should POST to /gitreload with a JSON payload. This payload should define at least
{ "repository" : { "name" : reload_dir } }
where reload_dir is the directory name of the content to reload (i.e. edx_all/data/reload_dir should exist).
A sketch of such a hook follows this list.
The edx server will then do "git reset --hard HEAD; git clean -f -d; git pull origin" in that directory. After the pull,
it will reload the modulestore for that course.
Note that the gitreload-based workflow is not meant for deployments on AWS (or elsewhere) which use collectstatic, since collectstatic is not run by a gitreload event.
Also, the gitreload feature needs FEATURES['ENABLE_LMS_MIGRATION'] = True in the django settings.
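As a concrete illustration, the post-receive hook could be a small script along these lines (a hedged sketch: the host, port, and use of urllib2 are assumptions; only the /gitreload path and the payload shape come from the steps above):

    #!/usr/bin/env python
    import json
    import urllib2

    # The repository name must match a checkout under edx_all/data.
    payload = json.dumps({"repository": {"name": "edx4edx"}})
    request = urllib2.Request(
        "http://localhost:8000/gitreload",  # assumed host/port
        payload,
        {"Content-Type": "application/json"},
    )
    urllib2.urlopen(request)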
# Running the discussion service
## Instructions for Mac
## Installing Mongodb
If you haven't done so already:
brew install mongodb
Make sure that you have mongodb running. You can simply open a new terminal tab and type:
mongod
## Installing elasticsearch
brew install elasticsearch
For debugging, it's often more convenient to have elasticsearch running in a terminal tab instead of in the background. To do so, simply open a new terminal tab and then type:
elasticsearch -f
## Setting up the discussion service
You can retrieve the source code from the [github repository](https://github.com/edx/cs_comments_service).
First, go into the edx_all directory. Then type:
git clone https://github.com/edx/cs_comments_service.git
cd cs_comments_service/
If you see a prompt asking "Do you wish to trust this .rvmrc file?", type "y"
Now, if you see the error "Gemset 'cs_comments_service' does not exist," run the following commands to create the gemset and then use the rvm environment manually:
rvm gemset create 'cs_comments_service'
rvm use 1.9.3@cs_comments_service
Now use the following command to install required packages:
bundle install
The following command creates database indexes:
bundle exec rake db:init
Now use the following command to generate seeds (basically some random comments in Latin):
bundle exec rake db:seed
It's done! Launch the app now:
ruby app.rb
## Integrating with the edx platform
The API key must match on both sides. It is configured here:
* edx-platform: COMMENTS_SERVICE_KEY in your dev.py file (dev environment) or ENV_TOKENS (prod environment)
* cs_comments_service: api_key in the application.yml file (dev environment) or ENV variable (prod environment)
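For example, a minimal sketch of the edx-platform side (the value here is an arbitrary placeholder; it just has to match the comments service exactly):

    # In lms/envs/dev.py (or your private settings override):
    # must equal api_key in cs_comments_service's application.yml
    COMMENTS_SERVICE_KEY = "some-shared-secret"  # placeholder value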
## Running the delayed job worker
In the discussion service, notifications are handled asynchronously using a third party gem called delayed_job. If you want to test this functionality, run the following command in a separate tab:
bundle exec rake jobs:work
## From the edx-platform django app, initialize roles and permissions
To fully test the discussion forum, you might want to act as a moderator or an administrator. Currently, the roles are:
* moderators can manage everything in the forum, and
* administrators can manage everything plus assigning and revoking moderator status of other users.
First make sure that the database is up-to-date:
rake resetdb
If you have created users in the edx-platform django apps when the comment service was not running, you will need to one-way sync the users into the comment service back end database:
./manage.py lms sync_user_info
Now initialize roles and permissions, providing a course id. See the example below. Note that you do not need to do this for Studio-created courses, as the Studio application does this for you.
./manage.py lms seed_permissions_roles "MITx/6.002x/2012_Fall"
To assign yourself as a moderator, use the following command (assuming your username is "test", and the course id is "MITx/6.002x/2012_Fall"):
./manage.py lms assign_role test Moderator "MITx/6.002x/2012_Fall"
To assign yourself as an administrator, use the following command:
./manage.py lms assign_role test Administrator "MITx/6.002x/2012_Fall"
## Some other useful commands
### Generate seeds for a specific forum
The seed-generating command above assumes that you have the following discussion tags somewhere in the course data:
<discussion for="Welcome Video" id="video_1" discussion_category="Video"/>
<discussion for="Lab 0: Using the Tools" id="lab_1" discussion_category="Lab"/>
<discussion for="Lab Circuit Sandbox" id="lab_2" discussion_category="Lab"/>
For example, you can insert them into the overview section as follows:
<chapter name="Overview">
<section format="Video" name="Welcome">
<vertical>
<video youtube="0.75:izygArpw-Qo,1.0:p2Q6BrNhdh8,1.25:1EeWXzPdhSA,1.50:rABDYkeK0x8"/>
<discussion for="Welcome Video" id="video_1" discussion_category="Video"/>
</vertical>
</section>
<section format="Lecture Sequence" name="System Usage Sequence">
<%include file="sections/introseq.xml"/>
</section>
<section format="Lab" name="Lab0: Using the tools">
<vertical>
<html> See the <a href="/section/labintro"> Lab Introduction </a> or <a href="/static/handouts/schematic_tutorial.pdf">Interactive Lab Usage Handout </a> for information on how to do the lab </html>
<problem name="Lab 0: Using the Tools" filename="Lab0" rerandomize="false"/>
<discussion for="Lab 0: Using the Tools" id="lab_1" discussion_category="Lab"/>
</vertical>
</section>
<section format="Lab" name="Circuit Sandbox">
<vertical>
<problem name="Circuit Sandbox" filename="Lab_sandbox" rerandomize="false"/>
<discussion for="Lab Circuit Sandbox" id="lab_2" discussion_category="Lab"/>
</vertical>
</section>
</chapter>
Currently, only the attribute "id" is actually used, which identifies the discussion forum. In the code for the data generator, the corresponding lines are:
generate_comments_for("video_1")
generate_comments_for("lab_1")
generate_comments_for("lab_2")
We also have a command for generating comments within a forum with the specified id:
bundle exec rake db:generate_comments[type_the_discussion_id_here]
For instance, if you want to generate comments for a new discussion tab named "lab_3", then use the following command:
bundle exec rake db:generate_comments[lab_3]
### Running tests for the service
bundle exec rspec
Warning: the development and test environments share the same elasticsearch index. After running tests, search may not work in the development environment. You simply need to reindex:
bundle exec rake db:reindex_search
### Debugging the service
You can use the following command to launch a console within the service environment:
bundle exec rake console
### Show user roles and permissions
Use the following command to see the roles and permissions of a user in a given course:
./manage.py lms show_permissions moderator
You need to make sure that the environment variables are exported. Otherwise you would need to do
./manage.py lms show_permissions moderator
# Notes on using mongodb-backed LMS and CMS
These are some random notes for developers on how things are stored in mongodb, and how to debug mongodb data.
## Databases
Two mongodb databases are used:
- xmodule: stores module definitions and metadata (modulestore)
- xcontent: stores filesystem content, like PDF files
modulestore documents are stored with an _id which has fields like this:
{"_id": {"tag":"i4x","org":"HarvardX","course":"CS50x","category":"chapter","name":"Week_1","revision":null}}
## Document fields
### Problems
Here is an example showing the fields available in problem documents:
{
"_id" : {
"tag" : "i4x",
"org" : "MITx",
"course" : "6.00x",
"category" : "problem",
"name" : "ps03:ps03-Hangman_part_2_The_Game",
"revision" : null
},
"definition" : {
"data" : " ..."
},
"metadata" : {
"display_name" : "Hangman Part 2: The Game",
"attempts" : "30",
"title" : "Hangman, Part 2",
"data_dir" : "6.00x",
"type" : "lecture"
}
}
## Sample interaction with mongodb
1. "mongo"
2. "use xmodule"
3. "show collections" should give "modulestore" and "system.indexes"
4. 'db.modulestore.find( {"_id.org": "MITx"} )' will produce a list of all MITx course documents
5. 'db.modulestore.find( {"_id.org": "MITx", "_id.category": "problem"} )' will produce a list of all problems in MITx courses
Example query for finding all files with "image" in the filename:
- use xcontent
- db.fs.files.find({'filename': /image/ } )
- db.fs.files.find({'filename': /image/ } ).count()
## Debugging the mongodb contents
A convenient tool is http://phpmoadmin.com/ (needs PHP).
Under ubuntu, do:
- apt-get install php5-fpm php-pear
- pecl install mongo
- edit /etc/php5/fpm/php.ini to add "extension=mongo.so"
- /etc/init.d/php5-fpm restart
and also set up nginx to run PHP through FastCGI.
## Backing up mongodb
- mongodump (dumps all dbs)
- mongodump --collection modulestore --db xmodule (dumps just xmodule/modulestore)
- mongodump -d xmodule -q '{"_id.org": "MITx"}' (dumps just MITx documents in xmodule)
- mongodump -q '{"_id.org": "MITx"}' (dumps all MITx documents)
## Deleting course content
Use "remove" instead of "find":
- db.modulestore.remove( {"_id.course": "8.01greytak"})
## Finding useful information from the mongodb modulestore
- Organizations
> db.modulestore.distinct( "_id.org")
[ "HarvardX", "MITx", "edX", "edx" ]
- Courses
> db.modulestore.distinct( "_id.course")
[
"CS50x",
"PH207x",
"3.091x",
"6.002x",
"6.00x",
"8.01esg",
"8.01rq_MW",
"8.02teal",
"8.02x",
"edx4edx",
"toy",
"templates"
]
- Find a problem which has the word "quantum" in its definition
db.modulestore.findOne( {"definition.data":/quantum/})
- Find the Location for all problems with the word "quantum" in their definition
db.modulestore.find( {"definition.data":/quantum/}, {'_id':1})
- Number of problems in each course
db.runCommand({
mapreduce: "modulestore",
query: { '_id.category': 'problem' },
map: function(){ emit(this._id.course, {count:1}); },
reduce: function(key, values){
var result = {count:0};
values.forEach(function(value) {
result.count += value.count;
});
return result;
},
out: 'pbyc',
verbose: true
});
produces:
> db.pbyc.find()
{ "_id" : "3.091x", "value" : { "count" : 184 } }
{ "_id" : "6.002x", "value" : { "count" : 176 } }
{ "_id" : "6.00x", "value" : { "count" : 147 } }
{ "_id" : "8.01esg", "value" : { "count" : 184 } }
{ "_id" : "8.01rq_MW", "value" : { "count" : 73 } }
{ "_id" : "8.02teal", "value" : { "count" : 5 } }
{ "_id" : "8.02x", "value" : { "count" : 99 } }
{ "_id" : "PH207x", "value" : { "count" : 25 } }
{ "_id" : "edx4edx", "value" : { "count" : 50 } }
{ "_id" : "templates", "value" : { "count" : 11 } }
# Documentation for edX code (edx-platform repo)
This document explains the general structure of the edX platform, and defines some of the acronyms and terms you'll see flying around in the code.
## Assumptions:
You should be familiar with the following. If you're not, go read some docs...
- python
- django
- javascript
- html, xml -- xpath, xslt
- css
- git
- mako templates -- we use these instead of django templates, because they support embedding real python.
## Other relevant terms
- CAPA -- lon-capa.org -- content management system that has defined a standard for online learning and assessment materials. Many of our materials follow this standard.
- TODO: add more details / link to relevant docs. lon-capa.org is not immediately intuitive.
- lcp = loncapa problem
## Parts of the system
- LMS -- Learning Management System. The student-facing parts of the system. Handles student accounts, displaying videos, tutorials, exercises, problems, etc.
- CMS -- Course Management System. The instructor-facing parts of the system. Allows instructors to see and modify their course, add lectures, problems, reorder things, etc.
- Forums -- this is a Ruby on Rails service that runs on Heroku. Contributed by Berkeley folks. The LMS has a wrapper lib that talks to it.
- Data. In the data/ dir. There is currently a single `course.xml` file that describes an entire course. Speaking of which...
- Courses. A course is broken up into Chapters ("week 1", "week 2", etc). A chapter is broken up into Sections ("Lecture 1", "Simple Circuits Exercises", "HW1", etc). A section can contain modules: Problems, Html, Videos, Verticals, or Sequences.
- Problems: specified in problem files. May have python scripts embedded to both generate random parameters and check answers. Also allows specifying things like tolerance or precision in answers
- Html: any html - often description, or links to outside resources
- Videos: links to youtube or elsewhere
- Verticals: a nesting tag: collect several videos, problems, html modules and display them vertically.
- Sequences: a sequence of modules, displayed with a horizontal navigation bar, displaying one component at a time.
- see `data/course.xml` for more examples
## High Level Entities in the code
### Common libraries
- xmodule: generic learning modules. *x* can be sequence, video, template, html,
vertical, capa, etc. These are the things that one puts inside sections
in the course structure.
- XModuleDescriptor: This defines the problem and all data and UI needed to edit
that problem. It is unaware of any student data, but can be used to retrieve
an XModule, which is aware of that student state.
- XModule: The XModule is a problem instance that is particular to a student. It knows
how to render itself to html to display the problem, how to score itself,
and how to handle ajax calls from the front end.
- Both XModule and XModuleDescriptor take system context parameters. These are named
ModuleSystem and DescriptorSystem respectively. These help isolate the XModules
from any interactions with external resources that they require.
For instance, the DescriptorSystem has a function to load an XModuleDescriptor
from a Location object, and the ModuleSystem knows how to render things,
track events, and complain about 404s
- XModules and XModuleDescriptors are uniquely identified by a Location object, encoding the organization, course, category, name, and possibly revision of the module.
- XModule initialization: XModules are instantiated by the `XModuleDescriptor.xmodule` method, and given a ModuleSystem, the descriptor which instantiated it, and their relevant model data.
- XModuleDescriptor initialization: If an XModuleDescriptor is loaded from an XML-based course, the XML data is passed into its `from_xml` method, which is responsible for instantiating a descriptor with the correct attributes. If it's in Mongo, the descriptor is instantiated directly. The module's attributes will be present in the `model_data` dict.
- `course.xml` format. We use python setuptools to connect supported tags with the descriptors that handle them. See `common/lib/xmodule/setup.py`. There are checking and validation tools in `common/validate`.
- the xml import+export functionality is in `xml_module.py:XmlDescriptor`, which is a mixin class that's used by the actual descriptor classes.
- There is a distinction between descriptor _definitions_ that stay the same for any use of that descriptor (e.g. here is what a particular problem is), and _metadata_ describing how that descriptor is used (e.g. whether to allow checking of answers, due date, etc). When reading in `from_xml`, the code pulls out the metadata attributes into a separate structure, and puts them back on export. A sketch of this split follows this list.
- in `common/lib/xmodule`
- capa modules -- defines `LoncapaProblem` and many related things.
- in `common/lib/capa`
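To make the definition/metadata split concrete, here is a rough sketch only; the attribute names are illustrative and this is not the real `XmlDescriptor` code:

    import xml.etree.ElementTree as ET

    # Attributes treated as usage metadata rather than part of the
    # definition (an assumed, illustrative subset).
    METADATA_ATTRIBUTES = ('display_name', 'attempts', 'due')

    def split_definition_and_metadata(xml_text):
        node = ET.fromstring(xml_text)
        metadata = dict((key, value) for key, value in node.attrib.items()
                        if key in METADATA_ATTRIBUTES)
        definition = {'data': ET.tostring(node)}
        return definition, metadata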
### LMS
The LMS is a django site, with root in `lms/`. It runs in many different environments--the settings files are in `lms/envs`.
- We use the Django Auth system, including the is_staff and is_superuser flags. User profiles and related code lives in `lms/djangoapps/student/`. There is support for groups of students (e.g. 'want emails about future courses', 'have unenrolled', etc) in `lms/djangoapps/student/models.py`.
- `StudentModule` -- keeps track of where a particular student is in a module (problem, video, html)--what's their grade, have they started, are they done, etc. [This is only partly implemented so far.]
- `lms/djangoapps/courseware/models.py`
- Core rendering path:
- `lms/urls.py` points to `courseware.views.index`, which gets module info from the course xml file, and pulls the list of `StudentModule` objects for this user (to avoid multiple db hits).
- Calls `render_accordion` to render the "accordion"--the display of the course structure.
- To render the current module, calls `module_render.py:render_x_module()`, which gets the `StudentModule` instance, and passes the `StudentModule` state and other system context to the module constructor to get an instance of the appropriate module class for this user.
- calls the module's `.get_html()` method. If the module has nested submodules, render_x_module() will be called again for each.
- ajax calls go to `module_render.py:handle_xblock_callback()`, which passes them to one of the `XBlock`'s handler functions.
- See `lms/urls.py` for the wirings of urls to views.
- Tracking: there is support for basic tracking of client-side events in `lms/djangoapps/track`.
### CMS
The CMS is a django site, with root in `cms`. It can run in a number of different
environments, defined in `cms/envs`.
- Core rendering path: Still TBD
### Static file processing
- CSS -- we use a superset of CSS called Sass. It supports nice things like includes and variables, and compiles to CSS. The compiler is called `sass`.
- javascript -- we use coffeescript, which compiles to js, and is much nicer to work with. Look for `*.coffee` files. We use _jasmine_ for testing js.
- _mako_ -- we use this for templates, and have a wrapper called edxmako that makes mako look like the django templating calls.
We use a fork of django-pipeline to make sure that the js and css always reflect the latest `*.coffee` and `*.sass` files (We're hoping to get our changes merged in the official version soon). This works differently in development and production. Test uses the production settings.
In production, the django `collectstatic` command recompiles everything and puts all the generated static files in a static/ dir. A starting point in the code is `django-pipeline/pipeline/packager.py:pack`.
In development, we don't use collectstatic, instead accessing the files in place. The auto-compilation is run via `common/djangoapps/pipeline_mako/templates/static_content.html`. Details: templates include `<%namespace name='static' file='static_content.html'/>`, then something like `<%static:css group='application'/>` to call the functions in `common/djangoapps/pipeline_mako/__init__.py`, which call the `django-pipeline` compilers.
## Testing
See `testing.md`.
## TODO:
- describe our production environment
- describe the front-end architecture, tools, etc. Starting point: `lms/static`
---
Note: this file uses markdown. To convert to html, run:
markdown2 overview.md > overview.html
# Testing
## Overview
We maintain three kinds of tests: unit tests, integration tests,
and acceptance tests.
### Unit Tests
* Each test case should be concise: setup, execute, check, and teardown.
If you find yourself writing tests with many steps, consider refactoring
the unit under test into smaller units, and then testing those individually.
* As a rule of thumb, your unit tests should cover every code branch.
* Mock or patch external dependencies.
We use [voidspace mock](http://www.voidspace.org.uk/python/mock/).
* We unit test Python code (using [unittest](http://docs.python.org/2/library/unittest.html)) and
Javascript (using [Jasmine](http://pivotal.github.io/jasmine/)).
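For instance, a minimal test in that style might look like this (the
function under test is a stand-in, not real platform code):

    import unittest
    from mock import Mock

    def format_grade(grader):
        """The 'unit' under test -- a stand-in for real application code."""
        return '{:.0%}'.format(grader.grade())

    class FormatGradeTest(unittest.TestCase):
        def setUp(self):
            # mock the external grading dependency
            self.grader = Mock()
            self.grader.grade.return_value = 0.5

        def test_format_grade(self):
            # execute and check in one short test
            self.assertEqual(format_grade(self.grader), '50%')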
### Integration Tests
* Test several units at the same time.
Note that you can still mock or patch dependencies
that are not under test! For example, you might test that
`LoncapaProblem`, `NumericalResponse`, and `CorrectMap` in the
`capa` package work together, while still mocking out template rendering.
* Use integration tests to ensure that units are hooked up correctly.
You do not need to test every possible input--that's what unit
tests are for. Instead, focus on testing the "happy path"
to verify that the components work together correctly.
* Many of our tests use the [Django test client](https://docs.djangoproject.com/en/dev/topics/testing/overview/) to simulate
HTTP requests to the server.
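For example, a minimal sketch (the url and expected status are illustrative):

    from django.test import TestCase

    class DashboardTest(TestCase):
        def test_dashboard_loads(self):
            # the built-in test client simulates an HTTP GET
            response = self.client.get('/dashboard')
            self.assertEqual(response.status_code, 200)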
### UI Acceptance Tests
* Use these to test that major program features are working correctly.
* We use [lettuce](http://lettuce.it/) to write BDD-style tests. Most of
these tests simulate user interactions through the browser using
[splinter](http://splinter.cobrateam.info/).
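A typical step definition pairs a lettuce step with a splinter browser
action, roughly like this (the step text and selector are illustrative,
and we assume the splinter browser is available as `world.browser`):

    from lettuce import step, world

    @step(u'I press the "(.*)" button')
    def press_button(step, text):
        # drive the real browser through splinter
        world.browser.find_by_value(text).first.click()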
Overall, you want to write the tests that **maximize coverage**
while **minimizing maintenance**.
In practice, this usually means investing heavily
in unit tests, which tend to be the most robust to changes in the code base.
![Test Pyramid](test_pyramid.png)
The pyramid above shows the relative number of unit tests, integration tests,
and acceptance tests. Most of our tests are unit tests or integration tests.
## Test Locations
* Python unit and integration tests: Located in
subpackages called `tests`.
For example, the tests for the `capa` package are located in
`common/lib/capa/capa/tests`.
* Javascript unit tests: Located in `spec` folders. For example,
`common/lib/xmodule/xmodule/js/spec` and `{cms,lms}/static/coffee/spec`.
For consistency, you should use the same directory structure for implementation
and test. For example, the test for `src/views/module.coffee`
should be written in `spec/views/module_spec.coffee`.
* UI acceptance tests:
- Set up and helper methods: `common/djangoapps/terrain`
- Tests: located in `features` subpackage within a Django app.
For example: `lms/djangoapps/courseware/features`
## Factories
Many tests delegate set-up to a "factory" class. For example,
there are factories for creating courses, problems, and users.
This keeps set-up logic out of the tests themselves.
Factories are often implemented using [FactoryBoy](https://readthedocs.org/projects/factoryboy/).
In general, factories should be located close to the code they use.
For example, the factory for creating problem XML definitions
is located in `common/lib/capa/capa/tests/response_xml_factory.py`
because the `capa` package handles problem XML.
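As an illustration, a user factory might look roughly like this (the
fields are made up; `FACTORY_FOR` is the factory_boy 2.x declaration style):

    import factory
    from django.contrib.auth.models import User

    class UserFactory(factory.django.DjangoModelFactory):
        FACTORY_FOR = User

        # each created user gets a unique username and a matching email
        username = factory.Sequence(lambda n: 'user_%s' % n)
        email = factory.LazyAttribute(lambda u: '%s@example.com' % u.username)

Tests can then call `UserFactory.create()` to get a saved user without
repeating set-up code.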
# Running Tests
You can run all of the unit-level tests using the command
rake test
This includes python, javascript, and documentation tests. It does not, however,
run any acceptance tests.
## Running Python Unit tests
We use [nose](https://nose.readthedocs.org/en/latest/) through
the [django-nose plugin](https://pypi.python.org/pypi/django-nose)
to run the test suite.
You can run all the python tests using `rake` commands. For example,
rake test:python
runs all the tests. It also runs `collectstatic`, which prepares the static files used by the site (for example, compiling Coffeescript to Javascript).
You can re-run all failed python tests by running
rake test:python[--failed]
You can also run the tests without `collectstatic`, which tends to be faster:
rake fasttest_lms
or
rake fasttest_cms
xmodule can be tested independently, with this:
rake test_common/lib/xmodule
Other module-level tests include:
* `rake test_common/lib/capa`
* `rake test_common/lib/calc`
To run a single django test class:
rake test_lms[lms/djangoapps/courseware/tests/tests.py:ActivateLoginTest]
To run a single django test:
rake test_lms[lms/djangoapps/courseware/tests/tests.py:ActivateLoginTest.test_activate_login]
To re-run all failing django tests from lms or cms:
rake test_lms[--failed]
To run a single nose test file:
nosetests common/lib/xmodule/xmodule/tests/test_stringify.py
To run a single nose test:
nosetests common/lib/xmodule/xmodule/tests/test_stringify.py:test_stringify
To run a single test and get stdout, with proper env config:
python manage.py cms --settings test test contentstore.tests.test_import_nostatic -s
To run a single test and get stdout and get coverage:
python -m coverage run --rcfile=./common/lib/xmodule/.coveragerc `which ./manage.py` cms --settings test test --traceback --logging-clear-handlers --liveserver=localhost:8000-9000 contentstore.tests.test_import_nostatic -s # cms example
python -m coverage run --rcfile=./lms/.coveragerc `which ./manage.py` lms --settings test test --traceback --logging-clear-handlers --liveserver=localhost:8000-9000 courseware.tests.test_module_render -s # lms example
To generate a coverage report:
coverage report --rcfile=./common/lib/xmodule/.coveragerc
or, to get an html report:
coverage html --rcfile=./common/lib/xmodule/.coveragerc
then browse `reports/common/lib/xmodule/cover/index.html`.
Very handy: if you uncomment the `pdb=1` line in `setup.cfg`, it will drop you into pdb on error. This lets you go up and down the stack and see what the values of the variables are. Check out [the pdb documentation](http://docs.python.org/library/pdb.html).
### Running Javascript Unit Tests
We use Jasmine to run JavaScript unit tests. To run all the JavaScript tests:
rake test:js
To run a specific set of JavaScript tests and print the results to the console:
rake test:js:run[lms]
rake test:js:run[cms]
rake test:js:run[xmodule]
rake test:js:run[common]
To run JavaScript tests in your default browser:
rake test:js:dev[lms]
rake test:js:dev[cms]
rake test:js:dev[xmodule]
rake test:js:dev[common]
These rake commands call through to a custom test runner. For more info, see [js-test-tool](https://github.com/edx/js-test-tool).
### Running Acceptance Tests
We use [Lettuce](http://lettuce.it/) for acceptance testing.
Most of our tests use [Splinter](http://splinter.cobrateam.info/)
to simulate UI browser interactions. Splinter, in turn,
uses [Selenium](http://docs.seleniumhq.org/) to control the Chrome browser.
**Prerequisite**: You must have [ChromeDriver](https://code.google.com/p/selenium/wiki/ChromeDriver)
installed to run the tests in Chrome. The tests are confirmed to run
with Chrome (not Chromium) version 28.0.1500.71 with ChromeDriver
version 2.1.210398.
To run all the acceptance tests:
rake test:acceptance
To run only for lms or cms:
rake test:acceptance:lms
rake test:acceptance:cms
To test only a specific feature:
rake test:acceptance:lms["lms/djangoapps/courseware/features/problems.feature"]
To test only a specific scenario
rake test:acceptance:lms["lms/djangoapps/courseware/features/problems.feature -s 3"]
To start the debugger on failure, add the `--pdb` option:
rake test:acceptance:lms["lms/djangoapps/courseware/features/problems.feature --pdb"]
To run tests faster by not collecting static files, you can use
`rake test:acceptance:lms:fast` and `rake test:acceptance:cms:fast`.
Acceptance tests will run on a randomized port and can be run in the background of rake cms and lms or unit tests.
To specify the port, change the `LETTUCE_SERVER_PORT` constant in `cms/envs/acceptance.py` and `lms/envs/acceptance.py`,
as well as the port listed in `cms/djangoapps/contentstore/feature/upload.py`.
During acceptance test execution, Django log files are written to `test_root/log/lms_acceptance.log` and `test_root/log/cms_acceptance.log`.
**Note**: The acceptance tests can *not* currently run in parallel.
## Viewing Test Coverage
We currently collect test coverage information for Python unit/integration tests.
To view test coverage:
1. Run the test suite:
rake test
2. Generate reports:
rake coverage
3. Reports are located in the `reports` folder. The command
generates HTML and XML (Cobertura format) reports.
## Testing using queue servers
When testing problems that use a queue server on AWS (e.g. sandbox-xqueue.edx.org), you'll need to run your server on your public IP address, like so:
`./manage.py lms runserver 0.0.0.0:8000`
When you connect to the LMS, you need to use the public IP. Use `ifconfig` to find the address, and connect to e.g. `http://18.3.4.5:8000/`.
## Acceptance Test Techniques
1. Element existence on the page<br />
Do not use splinter's built-in browser methods directly to determine whether elements exist.
Use the world.is_css_present and world.is_css_not_present wrapper functions instead.
Otherwise, errors can arise if checks for the css are performed before the page finishes loading.
These wrapper functions are also optimized for the amount of wait time spent in both the positive
and negative expectation cases.
2. Dealing with alerts<br />
Chrome can hang on javascript alerts. If a javascript alert/prompt/confirmation is expected, use the step
'I will confirm all alerts', 'I will cancel all alerts' or 'I will answer all prompts with "(.*)"' before the step
that causes the alert in order to properly deal with it.
3. Dealing with stale element reference exceptions<br />
These exceptions happen if any part of the page is refreshed between finding an element and accessing it.
When possible, use any of the css functions in common/djangoapps/terrain/ui_helpers.py, as they will retry the action
if this exception occurs. If the functionality is not there, wrap the call with world.retry_on_exception, which takes in a function and will retry it, returning its result, if there was an exception (see the sketch after this list).
4. Scenario Level Constants<br />
If you want an object to be available for the entire scenario, it can be stored in world.scenario_dict. This object
is a dictionary that gets refreshed at the beginning of the scenario. Currently, the current created course and the current logged in user are stored under 'COURSE' and 'USER'. This helps prevent strings from being hard coded, so the
acceptance tests can become more flexible.
5. Internal edX Jenkins considerations<br />
Acceptance tests are run in Jenkins as part of the edX development workflow. They are broken into shards and split across
workers. Therefore, if you add a new .feature file, you need to define which shard it should run in, or else it
will not get executed. See someone from TestEng to help you determine where it should go.
Also, the test results are rolled up in Jenkins for ease of understanding, with the acceptance tests under the top levels
of "CMS" and "LMS" when they follow this convention: name your feature in the .feature file CMS or LMS with a single
period and then no other periods in the name. The name can contain spaces, e.g. "CMS.Sign Up".
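Putting a few of these techniques together, a step definition might look
roughly like this (selectors are illustrative; the helper signatures are
assumptions based on the descriptions above):

    from lettuce import step, world

    @step(u'I see the course title')
    def see_course_title(step):
        # technique 1: wait-aware existence check instead of raw splinter calls
        assert world.is_css_present('.course-title')

    @step(u'I open the scenario course')
    def open_scenario_course(step):
        # technique 4: per-scenario objects live in world.scenario_dict
        course = world.scenario_dict['COURSE']
        assert course is not None
        # technique 3: retry if the element goes stale mid-action
        world.retry_on_exception(
            lambda: world.browser.find_by_css('.course').first.click())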
# Makefile for Sphinx documentation
#
# You can set these variables from the command line.
SPHINXOPTS =
SPHINXBUILD = sphinx-build
PAPER =
BUILDDIR = build
# User-friendly check for sphinx-build
ifeq ($(shell which $(SPHINXBUILD) >/dev/null 2>&1; echo $$?), 1)
$(error The '$(SPHINXBUILD)' command was not found. Make sure you have Sphinx installed, then set the SPHINXBUILD environment variable to point to the full path of the '$(SPHINXBUILD)' executable. Alternatively you can add the directory with the executable to your PATH. If you don't have Sphinx installed, grab it from http://sphinx-doc.org/)
endif
Q_FLAG =
ifeq ($(quiet), true)
Q_FLAG = -Q
endif
# Internal variables.
PAPEROPT_a4 = -D latex_paper_size=a4
PAPEROPT_letter = -D latex_paper_size=letter
ALLSPHINXOPTS = $(Q_FLAG) -d $(BUILDDIR)/doctrees -c source $(PAPEROPT_$(PAPER)) $(SPHINXOPTS) source
# the i18n builder cannot share the environment and doctrees with the others
I18NSPHINXOPTS = $(PAPEROPT_$(PAPER)) $(SPHINXOPTS) source
.PHONY: help clean html dirhtml singlehtml pickle json htmlhelp qthelp devhelp epub latex latexpdf latexpdfja text man texinfo info gettext changes xml pseudoxml linkcheck doctest
help:
@echo "Please use \`make <target>' where <target> is one of"
@echo " html to make standalone HTML files"
@echo " dirhtml to make HTML files named index.html in directories"
@echo " singlehtml to make a single large HTML file"
@echo " pickle to make pickle files"
@echo " json to make JSON files"
@echo " htmlhelp to make HTML files and a HTML help project"
@echo " qthelp to make HTML files and a qthelp project"
@echo " devhelp to make HTML files and a Devhelp project"
@echo " epub to make an epub"
@echo " latex to make LaTeX files, you can set PAPER=a4 or PAPER=letter"
@echo " latexpdf to make LaTeX files and run them through pdflatex"
@echo " latexpdfja to make LaTeX files and run them through platex/dvipdfmx"
@echo " text to make text files"
@echo " man to make manual pages"
@echo " texinfo to make Texinfo files"
@echo " info to make Texinfo files and run them through makeinfo"
@echo " gettext to make PO message catalogs"
@echo " changes to make an overview of all changed/added/deprecated items"
@echo " xml to make Docutils-native XML files"
@echo " pseudoxml to make pseudoxml-XML files for display purposes"
@echo " linkcheck to check all external links for integrity"
@echo " doctest to run all doctests embedded in the documentation (if enabled)"
clean:
rm -rf $(BUILDDIR)/*
html:
$(SPHINXBUILD) -b html $(ALLSPHINXOPTS) $(BUILDDIR)/html
@echo
@echo "Build finished. The HTML pages are in $(BUILDDIR)/html."
dirhtml:
$(SPHINXBUILD) -b dirhtml $(ALLSPHINXOPTS) $(BUILDDIR)/dirhtml
@echo
@echo "Build finished. The HTML pages are in $(BUILDDIR)/dirhtml."
singlehtml:
$(SPHINXBUILD) -b singlehtml $(ALLSPHINXOPTS) $(BUILDDIR)/singlehtml
@echo
@echo "Build finished. The HTML page is in $(BUILDDIR)/singlehtml."
pickle:
$(SPHINXBUILD) -b pickle $(ALLSPHINXOPTS) $(BUILDDIR)/pickle
@echo
@echo "Build finished; now you can process the pickle files."
json:
$(SPHINXBUILD) -b json $(ALLSPHINXOPTS) $(BUILDDIR)/json
@echo
@echo "Build finished; now you can process the JSON files."
htmlhelp:
$(SPHINXBUILD) -b htmlhelp $(ALLSPHINXOPTS) $(BUILDDIR)/htmlhelp
@echo
@echo "Build finished; now you can run HTML Help Workshop with the" \
".hhp project file in $(BUILDDIR)/htmlhelp."
qthelp:
$(SPHINXBUILD) -b qthelp $(ALLSPHINXOPTS) $(BUILDDIR)/qthelp
@echo
@echo "Build finished; now you can run "qcollectiongenerator" with the" \
".qhcp project file in $(BUILDDIR)/qthelp, like this:"
@echo "# qcollectiongenerator $(BUILDDIR)/qthelp/getting_started.qhcp"
@echo "To view the help file:"
@echo "# assistant -collectionFile $(BUILDDIR)/qthelp/getting_started.qhc"
devhelp:
$(SPHINXBUILD) -b devhelp $(ALLSPHINXOPTS) $(BUILDDIR)/devhelp
@echo
@echo "Build finished."
@echo "To view the help file:"
@echo "# mkdir -p $$HOME/.local/share/devhelp/getting_started"
@echo "# ln -s $(BUILDDIR)/devhelp $$HOME/.local/share/devhelp/getting_started"
@echo "# devhelp"
epub:
$(SPHINXBUILD) -b epub $(ALLSPHINXOPTS) $(BUILDDIR)/epub
@echo
@echo "Build finished. The epub file is in $(BUILDDIR)/epub."
latex:
$(SPHINXBUILD) -b latex $(ALLSPHINXOPTS) $(BUILDDIR)/latex
@echo
@echo "Build finished; the LaTeX files are in $(BUILDDIR)/latex."
@echo "Run \`make' in that directory to run these through (pdf)latex" \
"(use \`make latexpdf' here to do that automatically)."
latexpdf:
$(SPHINXBUILD) -b latex $(ALLSPHINXOPTS) $(BUILDDIR)/latex
@echo "Running LaTeX files through pdflatex..."
$(MAKE) -C $(BUILDDIR)/latex all-pdf
@echo "pdflatex finished; the PDF files are in $(BUILDDIR)/latex."
latexpdfja:
$(SPHINXBUILD) -b latex $(ALLSPHINXOPTS) $(BUILDDIR)/latex
@echo "Running LaTeX files through platex and dvipdfmx..."
$(MAKE) -C $(BUILDDIR)/latex all-pdf-ja
@echo "pdflatex finished; the PDF files are in $(BUILDDIR)/latex."
text:
$(SPHINXBUILD) -b text $(ALLSPHINXOPTS) $(BUILDDIR)/text
@echo
@echo "Build finished. The text files are in $(BUILDDIR)/text."
man:
$(SPHINXBUILD) -b man $(ALLSPHINXOPTS) $(BUILDDIR)/man
@echo
@echo "Build finished. The manual pages are in $(BUILDDIR)/man."
texinfo:
$(SPHINXBUILD) -b texinfo $(ALLSPHINXOPTS) $(BUILDDIR)/texinfo
@echo
@echo "Build finished. The Texinfo files are in $(BUILDDIR)/texinfo."
@echo "Run \`make' in that directory to run these through makeinfo" \
"(use \`make info' here to do that automatically)."
info:
$(SPHINXBUILD) -b texinfo $(ALLSPHINXOPTS) $(BUILDDIR)/texinfo
@echo "Running Texinfo files through makeinfo..."
make -C $(BUILDDIR)/texinfo info
@echo "makeinfo finished; the Info files are in $(BUILDDIR)/texinfo."
gettext:
$(SPHINXBUILD) -b gettext $(I18NSPHINXOPTS) $(BUILDDIR)/locale
@echo
@echo "Build finished. The message catalogs are in $(BUILDDIR)/locale."
changes:
$(SPHINXBUILD) -b changes $(ALLSPHINXOPTS) $(BUILDDIR)/changes
@echo
@echo "The overview file is in $(BUILDDIR)/changes."
linkcheck:
$(SPHINXBUILD) -b linkcheck $(ALLSPHINXOPTS) $(BUILDDIR)/linkcheck
@echo
@echo "Link check complete; look for any errors in the above output " \
"or in $(BUILDDIR)/linkcheck/output.txt."
doctest:
$(SPHINXBUILD) -b doctest $(ALLSPHINXOPTS) $(BUILDDIR)/doctest
@echo "Testing of doctests in the sources finished, look at the " \
"results in $(BUILDDIR)/doctest/output.txt."
xml:
$(SPHINXBUILD) -b xml $(ALLSPHINXOPTS) $(BUILDDIR)/xml
@echo
@echo "Build finished. The XML files are in $(BUILDDIR)/xml."
pseudoxml:
$(SPHINXBUILD) -b pseudoxml $(ALLSPHINXOPTS) $(BUILDDIR)/pseudoxml
@echo
@echo "Build finished. The pseudo-XML files are in $(BUILDDIR)/pseudoxml."
###################################
January 7, 2014
###################################
You can now access the public edX roadmap_ for details about the currently planned product direction.
.. _roadmap: https://edx-wiki.atlassian.net/wiki/display/OPENPROD/OpenEdX+Public+Product+Roadmap
*************
edX Studio
*************
New documentation, *Building a Course with edX Studio*, is available online_. You can also download the new guide as a PDF from the edX Studio user interface.
.. _online: http://edx.readthedocs.org/projects/ca/en/latest/
=============
New Features
=============
* The **Files & Uploads** page has been updated so that a maximum of 50 files now appear on a single page. If your course has more than 50 files, additional files are listed in separate pages. You can navigate to other pages through pagination controls at the top and bottom of the file list. This change improves the page performance for courses with a large number of files.
For more information, see the `updated documentation for adding files <http://edx.readthedocs.org/projects/ca/en/latest/create_new_course.html#add-files-to-a-course>`_.
.. note:: The :ref:`October 29 2013` release notes describe a workaround to limit the number of files that appear on a single page. With the January 7, 2014 release, this method is not necessary and no longer works.
* The **Course Outline** page is updated to include several design improvements. The new Course Outline appears as in the following example:
.. image:: images/course_outline.png
:alt: The Course Outline
To see the changes, view your course in Studio or see the `updated documentation for organizing your course content <http://edx.readthedocs.org/projects/ca/en/latest/organizing_course.html>`_.
* A template for custom JavaScript display and grading problems (also called JSInput problems) is now available. For more information, see the `updated documentation for Custom JavaScript display and grading problems <http://edx.readthedocs.org/projects/ca/en/latest/advanced_problems.html#custom-javascript-display-and-grading>`_. (BLD-523) (BLD-556)
* A template for the Zooming Image tool is now available. For more information, see the `updated documentation for the zooming image tool <http://edx.readthedocs.org/projects/ca/en/latest/tools.html#zooming-image>`_. (BLD-206)
==========================
Changes and Updates
==========================
* The Course Export tool now supports non-ASCII characters. (STUD-868)
* In the course outline, you can now drag a section to the end of the list of sections when the last section is collapsed. (STUD-879)
* In Video components, when you click inside the **Start Time** or **End Time** field, you can enter a time in HH:MM:SS format as normal text. After you click out of the field, Studio adds zeros and performs unit conversions so that the field contains six digits that correspond to hours, minutes, and seconds.
For example, if you enter 1:35, the text in the field changes to 00:01:35. If you enter 2:71:35, the text changes to 3:11:35. (BLD-506 and BLD-581)
* The **Save** button for JSInput Problem components now works as expected. (BLD-568)
***************************************
edX Learning Management System
***************************************
* When you download grades by clicking **Download CSV of answer distributions** on the Instructor Dashboard, the LMS no longer returns an empty CSV for small Studio-created courses. Instead, the LMS returns a CSV that is sorted by url_name and that includes responses from students who have unenrolled from the course.
Note that errors occur if you try to download grades for a large Studio-based course or an XML-based course.
* In the course wiki, the **Preview this Revision** and the **Merge selected with Current** dialog boxes are now keyboard accessible in Internet Explorer. (LMS-1539)
* On the Instructor Dashboard, when you click the Datadump tab and then click Download CSV of all student profile data, you no longer receive a 500 error message. (LMS-1675)
* For Image Response problems, the correct answer now appears when a student clicks **Show Answer**. (BLD-21)
* On iPads, the video player uses edX controls that appear after you click the video or the Play button. On iPhones, the video player uses native controls. (BLD-541)
###################################
October 23, 2013
###################################
*************
edX Studio
*************
=============
New Features
=============
* **Improved import experience** (STUD-595)
When you import a course, the Import screen now provides real-time status updates. The Import screen tells you when the import is in the
following stages:
* Uploading
* Unpacking
* Verifying
* Updated Course
* Success (Complete)
* **Improved drag and drop experience in course outlines** (STUD-575)
The ability to drag and drop sections, subsections, and units in the course outline is enhanced in the following ways:
* The visual representation of where you are moving the course element to is improved, with a pointer and blue line indicating the
new position.
* You can more easily drag units from one subsection to another.
* When you cancel a drop, the course element returns to its original position.
* You can no longer drag a course element below the New Unit button, which was causing confusion.
* **Text customization capability**
You can now customize some of the UI text that your students see. You do this through the text_customization key in the Advanced
Settings for the course. (However, edX recommends that you contact your Program Manager before you modify the text_customization key.)
* **JavaScript loading performance**
JavaScript loading is changed in ways that should improve the performance on some pages.
Code contributors should note that JavaScript is now loaded through require.js.
==========================
Changes and Updates
==========================
The following changes are included in this release:
* In a course outline, you can no longer drag and drop a unit below the New Unit button. (STUD-152)
* Course update content outside of HTML tags is no longer erroneously removed. (STUD-590)
* In a course outline, dragging a unit over the Units label no longer causes the unit to be removed. (STUD-755)
* Support text in the Assignment Types section of the Grading page is updated to clarify that you enter an integer, not a percent, in the Weight of Total Grade field. (STUD-771)
* Certain component errors no longer prevent the course from saving correctly. (STUD-786)
* When you delete a Discussion component, the discussion is completely removed from the course. (STUD-811, STUD-817)
* When you are editing a course update, the update no longer disappears if you click outside of the Edit window. (STUD-822)
* When you enter the integer 7 in the Total Weight of Grade field, the value is no longer changed to a decimal. (STUD-826)
***************************************
edX Learning Management System
***************************************
=============
New Features
=============
The following changes are included in this release:
* **Fixed views in Internet Explorer 9.x**
Problems with pages in Internet Explorer 9.x are resolved.
* **Disabled downloading data for large courses**
For courses with over 200 students, downloading large data sets could fail. The Download Data button on the Instructor Dashboard is now
temporarily disabled to avoid this problem.
* **Improved Beta Instructor Dashboard**
You can access the Beta Instructor Dashboard from the current Instructor Dashboard by clicking Try the New Beta Dashboard. The Beta
version continues to evolve, with a streamlined design and improved architecture. Both dashboards are currently available.
* **Improved video and transcript experience** (BLD-420)
When you are playing a video with the transcript hidden, you can display the transcript by hovering the mouse pointer over the CC button.
You can then click a paragraph in the displayed transcript to move to that point in the video. When you move the pointer off the CC button,
the transcript is hidden.
* **Improved Learning Tools Interoperability (LTI)** (BLD-330, BLD-347)
You can now use multiple LTI tools per page. You can also have an LTI module load external content in a new window.
==========================
Changes and Updates
==========================
* The link to open and close the calculator has additional aria attributes for accessibility. (BLD-164)
* The Hints panel for the calculator is now accessible to screen readers. (BLD-165)
* Students can now download video subtitles. (BLD-245)
* Multi-speed video playback now works in Firefox browsers as expected. (BLD-287)
* Window resizing no longer cuts off videos. (BLD-289)
* Video HD control is now handicap accessible. (BLD-387)
* You can now export courses that have LTI modules. (BLD-389)
* A malformed custom parameter in an LTI component no longer permanently breaks the unit. (BLD-390)
* The styles and text for the download links for videos and transcripts are updated for clarity and accessibility. (BLD-403)
* The CC button in the video player now includes explanatory text that is accessible with a screen reader. (BLD-404)
* LTI with the Piazza platform now works as expected. (BLD-405)
* The **Close** button on dialog boxes is now defined as an HTML button and is accessible to screen readers. (LMS-582)
******************
Discussion Forums
******************
The following changes are included in this release:
* The color contrast of the Report Misuse link is updated for accessibility. (FOR-200)
* The Report Misuse link now includes a tooltip that is accessible to screen readers. (FOR-201)
* The Report Misuse link is now included in the page tab order, for keyboard accessibility. (FOR-209)
.. _October 29 2013:
###################################
October 29, 2013
###################################
*************
edX Studio
*************
=============
New Features
=============
* **New video editing interface, enabling an enhanced workflow for adding timed transcripts to videos** (BLD-238)
When you enter a video URL in the Editing: Video dialog box, the system checks if a timed transcript for that video exists on edX, and if
so, automatically associates the transcript with the video. If no transcript is found, you click Upload New Timed Transcript to locate and
upload the .SRT file for the transcript.
When there is an associated timed transcript, you can click Download to Edit to download a local copy of the .SRT file. You can then
modify the transcript and upload the new file.
For YouTube videos, you can also import a timed transcript from YouTube, overwriting the version of the transcript on edX with the version
from YouTube.
Backwards compatibility with the other transcript workflow is maintained with a tabbed interface.
====================================================
Known Issues and Workarounds
====================================================
* **Uploading a large number of files** (STUD-813, STUD-837)
When you go to the Files & Uploads page, if your course has a large number of files, the Files & Uploads page can time out before it lists
all the files. The page becomes unresponsive, and you cannot upload more files.
**Workaround**: To upload new files when the Files & Uploads page is timing out, limit the number of files that appear on the Files &
Uploads page by adding start and max parameters to the URL. For example, you can append the following parameters to the URL in your
browser:
`https://studio.edge.edx.org/assets/organization.course-number.course-name/branch/block/course-name?start=5&max=15`
This example tells the page to load a maximum of 10 files, starting with the 6th file. You can use other values as needed, as long as the list
is not so long that the page does not load successfully. Note that file counts begin at 0, not 1, and that files are listed chronologically, with
the most recent first.
==========================
Changes and Updates
==========================
The following changes are included in this release:
* Because Course IDs are not case sensitive, all Course IDs must be unique regardless of capitalization. For example, you cannot have
both edX101 and EdX101 as course IDs. (STUD-873)
***************************************
edX Learning Management System
***************************************
The following changes are included in this release:
* The cheatsheet available when you are adding a new Wiki article is now accessible to screen readers. (LMS-1303)
* In the Wiki, active links are now displayed as bold, and have additional text labels, to be accessible to screen readers. (LMS-1306)
* In the Wiki, when you navigate through links with the Tab key, the active link is updated in the same way as when you hover over it with
the mouse pointer. (LMS-1336)
* Default Wiki permissions are updated so that only course staff can delete Wiki pages. (LMS-1355)
* The Reset Password and Password Reset Confirmation pages are updated to use styles consistent with the system. (LMS-1357)
* In certain situations, students received a 500 error when viewing the Progress page. This problem was resolved in a patch on October 23, 2013. (LMS-1367)
* A visual indicator has been added to the video player to indicate which part of the video will play, when it is not the default. (BLD-391)
* Forum views are updated to improve performance. (FOR-250)
******************
Analytics
******************
The following changes are included in this release:
* Course exports are included with weekly data dumps delivered to university data representatives. (AN-57)
###################################
November 6, 2013
###################################
*************
edX Studio
*************
=============
New Features
=============
* **Improved Course Export page**
The Course Export page has a new layout, with enhanced help text.
==========================
Changes and Updates
==========================
The following changes are included in this release:
* The Forgot Password link on the Studio Sign In page now works correctly. (STUD-689)
* In the Create a New Course page Organization field, text now suggests the generic UniversityX, instead of MITx. (STUD-885)
* In the Create a New Course page Course Run field, text now suggests using the year and trimester (for example, 2014_T1), instead
of the year and season. (STUD-916)
* Studio now continues working correctly when a YouTube video in the page fails to load. (STUD-472)
* To avoid potential problems with browser security, you can no longer enter video URLs with http://. You must use https://. (BLD-408)
* By default, the options to add a Problem Written in LaTeX and a Problem with Adaptive Hint in LaTeX are no longer included in the
Advanced tab of the Problems component. In addition, the option to add E-text Written in LaTeX is no longer included in the HTML component.
To enable these options, open the Advanced Settings page and set the value of the use_latex_compiler policy key to true. (BLD-426)
==========================
Technical Changes
==========================
Contributors to the open source edX Platform should note the following change:
The Course Export page is updated to use a RESTful interface. (STUD-846)
***************************************
edX Learning Management System
***************************************
==========================
New Features
==========================
* **Upgrading the Course Track**
A student in the Honor Code or Audit track can now upgrade to the Verified Certificate track. (LMS-1127)
==========================
Changes and Updates
==========================
The following changes are included in this release:
* After registering for a course, a student could use the browser's Back button to return to the Registration page and change the
registration type. Now, if the user tries to go back to the Registration page, the Learning Management System redirects the student to the
courseware, where the student must unregister first to change the course mode. (LMS-1062)
* When a student fails verification for the Verified track, the Learning Management System now notifies the user through the Student
Dashboard, and prompts them to retry. The student must send another set of photos, but does not have to pay again. If the student does
not retry, they can get a refund. (LMS-1133)
* In the Beta Instructor Dashboard, the layout is improved and the Pending Instructor Tasks section now functions correctly. (LMS-1242)
* You can now tab through the Wiki pages without getting stuck in the content area. (LMS-1307)
* Default settings now include the generic email address @example.com instead of @edx.org. (LMS-1363)
* The user experience and help text during course registration and upgrading are enhanced, and the last day to register for verified certificates is clearer. (LMS-1384)
* Keyboard navigation is now updated in the Wiki to skip repetitive content, allowing users to go directly to unique content on a page. (LMS-1387)
* Errors that prevented the Progress page from successfully loading are fixed. (LMS-1388)
* In the Sign Up page, the Public Display Name field was renamed to Public Username. (LMS-1393)
* Errors when creating a new account are resolved. (LMS-1418)
* A typo was removed from the course registration email template. (LMS-1419)
* Saving a Word Cloud component generated an error message. This problem no longer occurs. (BLD-205)
* When viewing a video, if a student clicked on the video timeline before or after the specified end time, the video jumped to the beginning. This problem is resolved. (BLD-392)
* Students can now change the video speed when the video is paused. (BLD-424)
* Several problems with the video player are resolved:
* The Start time did not work in Flash mode.
* Students could not change the speed before the video started.
* The end point in the video slider was inaccurate for short videos.
* The video slider showed the incorrect position after the video stopped. (BLD-468)
* Sorting of the forums thread list now works correctly when a topic is selected from the drop-down menu. (FOR-224)
* Forum follow buttons are now accessible to screen readers, have the ARIA checkbox role, and activate with the space or Enter key. (FOR-240)
******************
Analytics
******************
The following changes are included in this release:
* The user_id field is added to tracking events. (AN-213)
# -*- coding: utf-8 -*-
"""
EdX documentation build configuration file
"""
#pylint: disable=C0103
#pylint: disable=W0622
#pylint: disable=W0212
#pylint: disable=W0613
import sys
import os
# If extensions (or modules to document with autodoc) are in another directory,
# add these directories to sys.path here. If the directory is relative to the
# documentation root, use os.path.abspath to make it absolute, like shown here.
# sys.path.insert(0, os.path.abspath('.'))
root = os.path.abspath('../..')
sys.path.append(root)
sys.path.append(os.path.join(root, "common/djangoapps"))
sys.path.append(os.path.join(root, "common/lib"))
sys.path.append(os.path.join(root, "common/lib/sandbox-packages"))
sys.path.append(os.path.join(root, "lms/djangoapps"))
sys.path.append(os.path.join(root, "lms/lib"))
sys.path.append(os.path.join(root, "cms/djangoapps"))
sys.path.append(os.path.join(root, "cms/lib"))
# django configuration - careful here
os.environ['DJANGO_SETTINGS_MODULE'] = 'lms.envs.test'
# -- General configuration -----------------------------------------------------
# If your documentation needs a minimal Sphinx version, state it here.
#needs_sphinx = '1.0'
# Add any Sphinx extension module names here, as strings. They can be extensions
# coming with Sphinx (named 'sphinx.ext.*') or your custom ones.
extensions = [
'sphinx.ext.autodoc', 'sphinx.ext.doctest', 'sphinx.ext.intersphinx', 'sphinx.ext.todo', 'sphinx.ext.coverage',
'sphinx.ext.pngmath', 'sphinx.ext.mathjax', 'sphinx.ext.viewcode']
# Add any paths that contain templates here, relative to this directory.
templates_path = ['_templates']
# The suffix of source filenames.
source_suffix = '.rst'
# The encoding of source files.
#source_encoding = 'utf-8-sig'
# The master toctree document.
master_doc = 'index'
# General information about the project.
project = u'EdX Dev Data'
copyright = u'2012-13, EdX team'
# The version info for the project you're documenting, acts as replacement for
# |version| and |release|, also used in various other places throughout the
# built documents.
#
# The short X.Y version.
version = '0.2'
# The full version, including alpha/beta/rc tags.
release = '0.2'
# The language for content autogenerated by Sphinx. Refer to documentation
# for a list of supported languages.
#language = None
# There are two options for replacing |today|: either, you set today to some
# non-false value, then it is used:
#today = ''
# Else, today_fmt is used as the format for a strftime call.
#today_fmt = '%B %d, %Y'
# List of patterns, relative to source directory, that match files and
# directories to ignore when looking for source files.
exclude_patterns = ['build']
# The reST default role (used for this markup: `text`) to use for all documents.
#default_role = None
# If true, '()' will be appended to :func: etc. cross-reference text.
#add_function_parentheses = True
# If true, the current module name will be prepended to all description
# unit titles (such as .. function::).
#add_module_names = True
# If true, sectionauthor and moduleauthor directives will be shown in the
# output. They are ignored by default.
#show_authors = False
# The name of the Pygments (syntax highlighting) style to use.
pygments_style = 'sphinx'
# A list of ignored prefixes for module index sorting.
#modindex_common_prefix = []
# When auto-doc'ing a class, write the class' docstring and the __init__ docstring
# into the class docs.
autoclass_content = "both"
# -- Options for HTML output ---------------------------------------------------
# The theme to use for HTML and HTML Help pages. See the documentation for
# a list of builtin themes.
html_theme = 'sphinxdoc'
# Theme options are theme-specific and customize the look and feel of a theme
# further. For a list of options available for each theme, see the
# documentation.
#html_theme_options = {}
# Add any paths that contain custom themes here, relative to this directory.
#html_theme_path = []
# The name for this set of Sphinx documents. If None, it defaults to
# "<project> v<release> documentation".
#html_title = None
# A shorter title for the navigation bar. Default is the same as html_title.
#html_short_title = None
# The name of an image file (relative to this directory) to place at the top
# of the sidebar.
#html_logo = None
# The name of an image file (within the static path) to use as favicon of the
# docs. This file should be a Windows icon file (.ico) being 16x16 or 32x32
# pixels large.
#html_favicon = None
# Add any paths that contain custom static files (such as style sheets) here,
# relative to this directory. They are copied after the builtin static files,
# so a file named "default.css" will overwrite the builtin "default.css".
html_static_path = ['_static']
# If not '', a 'Last updated on:' timestamp is inserted at every page bottom,
# using the given strftime format.
#html_last_updated_fmt = '%b %d, %Y'
# If true, SmartyPants will be used to convert quotes and dashes to
# typographically correct entities.
#html_use_smartypants = True
# Custom sidebar templates, maps document names to template names.
#html_sidebars = {}
# Additional templates that should be rendered to pages, maps page names to
# template names.
#html_additional_pages = {}
# If false, no module index is generated.
#html_domain_indices = True
# If false, no index is generated.
#html_use_index = True
# If true, the index is split into individual pages for each letter.
#html_split_index = False
# If true, links to the reST sources are added to the pages.
#html_show_sourcelink = True
# If true, "Created using Sphinx" is shown in the HTML footer. Default is True.
#html_show_sphinx = True
# If true, "(C) Copyright ..." is shown in the HTML footer. Default is True.
#html_show_copyright = True
# If true, an OpenSearch description file will be output, and all pages will
# contain a <link> tag referring to it. The value of this option must be the
# base URL from which the finished HTML is served.
#html_use_opensearch = ''
# This is the file name suffix for HTML files (e.g. ".xhtml").
#html_file_suffix = None
# Output file base name for HTML help builder.
htmlhelp_basename = 'edXDocs'
# -- Options for LaTeX output --------------------------------------------------
latex_elements = {
# The paper size ('letterpaper' or 'a4paper').
#'papersize': 'letterpaper',
# The font size ('10pt', '11pt' or '12pt').
#'pointsize': '10pt',
# Additional stuff for the LaTeX preamble.
#'preamble': '',
}
# Grouping the document tree into LaTeX files. List of tuples
# (source start file, target name, title, author, documentclass [howto/manual]).
latex_documents = [
('index', 'edXDocs.tex', u'EdX Dev Data Documentation',
u'EdX Team', 'manual'),
]
# The name of an image file (relative to this directory) to place at the top of
# the title page.
#latex_logo = None
# For "manual" documents, if this is true, then toplevel headings are parts,
# not chapters.
#latex_use_parts = False
# If true, show page references after internal links.
#latex_show_pagerefs = False
# If true, show URL addresses after external links.
#latex_show_urls = False
# Documents to append as an appendix to all manuals.
#latex_appendices = []
# If false, no module index is generated.
#latex_domain_indices = True
# -- Options for manual page output --------------------------------------------
# One entry per manual page. List of tuples
# (source start file, name, description, authors, manual section).
man_pages = [
('index', 'edxdocs', u'EdX Dev Data Documentation',
[u'EdX Team'], 1)
]
# If true, show URL addresses after external links.
#man_show_urls = False
# -- Options for Texinfo output ------------------------------------------------
# Grouping the document tree into Texinfo files. List of tuples
# (source start file, target name, title, author,
# dir menu entry, description, category)
texinfo_documents = [
('index', 'EdXDocs', u'EdX Dev Data Documentation',
u'EdX Team', 'EdXDocs', 'One line description of project.',
'Miscellaneous'),
]
# Documents to append as an appendix to all manuals.
#texinfo_appendices = []
# If false, no module index is generated.
#texinfo_domain_indices = True
# How to display URL addresses: 'footnote', 'no', or 'inline'.
#texinfo_show_urls = 'footnote'
# Example configuration for intersphinx: refer to the Python standard library.
intersphinx_mapping = {'http://docs.python.org/': None}
# from http://djangosnippets.org/snippets/2533/
# autogenerate models definitions
import inspect
from django.utils.html import strip_tags
from django.utils.encoding import force_unicode
def process_docstring(app, what, name, obj, options, lines):
"""Autodoc django models"""
# This causes import errors if left outside the function
from django.db import models
# If you want extract docs from django forms:
# from django import forms
# from django.forms.models import BaseInlineFormSet
# Only look at objects that inherit from Django's base MODEL class
if inspect.isclass(obj) and issubclass(obj, models.Model):
# Grab the field list from the meta class
fields = obj._meta._fields()
for field in fields:
# Decode and strip any html out of the field's help text
help_text = strip_tags(force_unicode(field.help_text))
# Decode and capitalize the verbose name, for use if there isn't
# any help text
verbose_name = force_unicode(field.verbose_name).capitalize()
if help_text:
# Add the model field to the end of the docstring as a param
# using the help text as the description
lines.append(u':param %s: %s' % (field.attname, help_text))
else:
# Add the model field to the end of the docstring as a param
# using the verbose name as the description
lines.append(u':param %s: %s' % (field.attname, verbose_name))
# Add the field's type to the docstring
lines.append(u':type %s: %s' % (field.attname, type(field).__name__))
# Only look at objects that inherit from Django's base FORM class
# elif (inspect.isclass(obj) and issubclass(obj, forms.ModelForm) or issubclass(obj, forms.ModelForm) or issubclass(obj, BaseInlineFormSet)):
# pass
# # Grab the field list from the meta class
# import ipdb; ipdb.set_trace()
# fields = obj._meta._fields()
# import ipdb; ipdb.set_trace()
# for field in fields:
# import ipdb; ipdb.set_trace()
# # Decode and strip any html out of the field's help text
# help_text = strip_tags(force_unicode(field.help_text))
# # Decode and capitalize the verbose name, for use if there isn't
# # any help text
# verbose_name = force_unicode(field.verbose_name).capitalize()
# if help_text:
# # Add the model field to the end of the docstring as a param
# # using the help text as the description
# lines.append(u':param %s: %s' % (field.attname, help_text))
# else:
# # Add the model field to the end of the docstring as a param
# # using the verbose name as the description
# lines.append(u':param %s: %s' % (field.attname, verbose_name))
# # Add the field's type to the docstring
# lines.append(u':type %s: %s' % (field.attname, type(field).__name__))
# Return the extended docstring
return lines
def setup(app):
"""Setup docsting processors"""
#Register the docstring processor with sphinx
app.connect('autodoc-process-docstring', process_docstring)
This document describes the split mongostore representation which
separates course structure from content where each course run can have
its own structure. It does not describe the original mongostore
representation which combined structure and content and used the key
to distinguish draft from published elements.
This document does not describe Mongo itself or its operations. See
`http://www.mongodb.org/`_ for information on Mongo.
Product Goals and Discussion
----------------------------
(Mark Chang)
This work was instigated by the studio team's need to correctly do
metadata inheritance. As we moved from an on-startup load of the
courseware, the system was able to inflate and perform an inheritance
calculation step such that the intended properties of children could
be set through inheritance. While not strictly a requirement from the
studio authoring approach, where inheritance really rears its head is
on import of existing courseware that was designed assuming
inheritance.
A short term patch was applied that allowed inheritance to act
correctly, but it was felt that it was insufficient and this would be
an opportunity to make a more clean datastore representation. After
much difficulty with how draft objects would work, Calen Pennington
worked through a split data store model a la FAT filesystem (Mark's
metaphor, not Cale's) to split the structure from the content. The
goal would be a sea of content documents that would not know about the
structure they were utilized within. Cale began the work and handed it
off to Don Mitchell.
In the interim, great discussion was had at the Architect's Council
that firmed up the design and strategy for implementation, adding
great richness and completeness to the new data structure.
The immediate
needs are two, and only two:
#. functioning metadata inheritance
#. good groundwork for versioning
While the discussions of the atomic unit of courseware available for
sharing, how these are shared, and how they refer back to the parent
definition are all valuable, they will not be built in the near term. I
understand and expect there to be many refactorings, improvements, and
migrations in the future.
I fully anticipate much more detail to be uncovered even in this first
thin implementation. When that happens, we will need as much advice
from those watching this page to make sure we move in the right
direction. We also must have the right design artifacts to document
where we stand relative to the overall design that has loftier goals.
Representation
--------------
The xmodule collections:
+ `modulestore.active_versions`: this collection maps the org, course,
and run to the current draft and published versions of the course.
+ `modulestore.structures`: this collection has one entry per course
run and one for the template.
+ `modulestore.definitions`: this collection has one entry per
"module" or "block" version.
modulestore.active_versions: 2 simple maps for dereferencing the
correct course from the structures collection. Every course run will
have a draft version. Not every course run will have a published
version. No course run will have more than one of each of these.
::
{ '_id' : uniqueid,
'versions' : { <versionName> : versionGuid, ..}
'creator' : user_id,
'created' : date (native mongo rep)
}
+ `_id` is a unique id for finding this course run. It's a
location-reference string, like 'edu.mit.eng.eecs.6002x.industry.spring2013'.
+ `versions`: These are references to `modulestore.structures`. A
location-reference like
`edu.mit.eng.eecs.6002x.industry.spring2013;draft` refers to the value
associated with `draft` for this document.
+ `versionName` is `draft`, `published`, or another user-defined
string.
+ `versionGuid` is a system generated globally unique id (hash). It
points to the entry in `modulestore.structures`.
`draftVersion`: the design will try to generate a new draft version
for each change to the course object: that is, for each move,
deletion, node creation, or metadata change. Cloning a course
(creating a new run of a course or such) will create a new entry in
this table with just a `draftVersion` and will cause a copy of the
corresponding entry in `modulestore.structures`. The entry in
`structures` will point to its version parent in the source course.
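To make that dereferencing concrete, here is a minimal pymongo sketch
(the collection and field names come from this document; everything else
is illustrative):

::

    from pymongo import MongoClient

    db = MongoClient()['modulestore']

    def get_structure(course_id, version_name='draft'):
        # map the org/course/run id to the requested version guid...
        entry = db.active_versions.find_one({'_id': course_id})
        version_guid = entry['versions'][version_name]
        # ...then fetch that version's structure document
        return db.structures.find_one({'_id': version_guid})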
modulestore.structures : the entries in this collection follow this
definition:
::
{ '_id' : course_guid,
  'blocks' :
    // each guid is an arbitrary id to represent this node in the course tree
    { block_guid :
        { 'children' : [ block_guid* ],
          'metadata' : { property map },
          'definition' : definition_guid,
          'category' : 'section' | 'sequence' | ... },
      ... // more guids
    },
  'root' : block_guid,
  'original' : course_guid, // the first version of this course from which all others were derived
  'previous' : course_guid | null, // the previous revision of this course (null if this is the original)
  'version_entry' : uniqueid, // from the active_versions collection
  'creator' : user_id
}
+ `blocks`: each block is a node in the course such as the course, a
section, a subsection, a unit, or a component. The block ids remain
the same over edits (they're not versioned).
+ `root`: the true top of the course. Not all nodes without parents
are truly roots. Some are orphans.
+ `course_guid, block_guid, definition_guid` are not those specific
strings but instead some system generated globally unique id.
+ The one which gets passed around and pointed to by urls is the
`block_guid`; so, it will be the one the system ensures is readable.
Unlike the other guids, this one stays the same over revisions and can
even be the same between course runs (although the course run
contextualizes it to distinguish its instantiated version).
+ `definition` points to the specific revision of the given element in
`modulestore.definitions` which this version of the course includes.
+ `children` lists the block_guids which are the children of this node
in the course tree. It's an error if the guid in the `children` list
does not occur in the `blocks` dictionary.
+ `metadata` is the node's explicitly defined metadata, some of which
may be inherited by its children.
For debugging purposes, there may be value in adding a courseId field
(org, course, run) for use via db browsers.
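Continuing the sketch above, resolving a block to its definition is one
more document fetch (again, only the names come from this document):

::

    def get_block_definition(db, structure, block_guid):
        # each block points at the specific definition version it uses
        block = structure['blocks'][block_guid]
        return db.definitions.find_one({'_id': block['definition']})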
modulestore.definitions : the data associated with each version of
each node in the structures. Many courses may point to the same
definition or may point to different versions derived from the same
original definition.
::
{ '_id' : guid,
'data' : ..,
'default_settings' : {'display_name':..,..}, // a starting point for new uses of this definition
'category' : xblocktype, // the xmodule/xblock type such as course, problem, html, video, about
'original' : guid, // the first kept version of this definition from which all others were derived
'previous' : guid | null, // the previous revision of this definition (null if this is the original)
'creator' : user_id // the id of whomever pressed the draft or publish button
}
+ `_id`: a guid to uniquely identify the definition.
+ `data` is the payload used by the xmodule and following the
xmodule's data representation.
+ `category` is the xmodule type and used to figure out which xmodule
to instantiate.
There may be some debugging value to adding a courseId field, but it
may also be misleading if the element is used in more than one course.
Templates
~~~~~~~~~
(I'm refactoring templates quite a bit from their representation prior
to this design)
All field defaults will be defined through the xblock field.default
mechanism. Templates, on the other hand, are for representing optional boilerplate
usually for examples such as a multiple-choice problem or a video
component with the fields all filled in. Templates are stored in yaml
files which provide a template name, sorting and filtering information
(e.g., requires advanced editor v allows simple editor), and then
field: value pairs for setting xblocks' fields upon template
selection.
Most of the pre-existing templates including all of the 'empty' ones
will go away. The ones which will stay are the ones truly just giving
examples or starting points for variants. This change will require
that the template choice code provide a default 'blank' choice to the
user which just instantiates the model with its defaults versus a choice
of the boilerplates. The client can therefore populate its own model
of the xblock and then send a create-item request to the server when
the user says he/she's ready to save it.
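As a sketch, applying a chosen boilerplate could be as simple as setting
the listed fields on the freshly created block (the yaml layout here is
hypothetical; only the field: value idea comes from this design):

::

    import yaml

    def apply_boilerplate(block, path):
        # fields not listed in the boilerplate keep their xblock defaults
        with open(path) as f:
            boilerplate = yaml.safe_load(f)
        for field, value in boilerplate.get('fields', {}).items():
            setattr(block, field, value)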
Import/export
~~~~~~~~~~~~~
Export should allow the user to select the version of the course to
export which can be any of the draft or published versions. At a
minimum, the user should choose between draft or published.
Import should import the course as a draft course regardless of
whether it was exported as a published or draft one, I believe. If
there's already a draft for the same course, in the best of all
worlds, import would use the guid to see whether it already exists in the
structures collection, and, if so, just make that the current
draftVersion (don't do any actual data changes). If there's no guid or
the guid doesn't exist in the structures collection, then we'll need
to work out the logic for how to decide what definitions to create v
update v point to.
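A hedged sketch of that import rule (collection names are from this
document; everything else, including the function itself, is
hypothetical)::

    def import_course_as_draft(course_id, exported_structure,
                               active_versions, structures):
        # imports always land on the draft branch, per the rule above
        guid = exported_structure.get('_id')
        if guid and structures.find_one({'_id': guid}):
            # the exported version already exists here: just repoint
            # the draft, with no actual data changes
            active_versions.update_one(
                {'_id': course_id},
                {'$set': {'draftVersion': guid}})
        else:
            # unknown guid: the create-vs-update-vs-point-to logic for
            # definitions still needs to be worked out
            raise NotImplementedError('definition reconciliation TBD')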
Course ID
~~~~~~~~~
Currently, we use a triple to identify a run of a course. The triple
is organization, course name, and run identity (e.g., 2013Q1). The
system does not care what the id consists of, only that it uniquely
identifies an edition of the course. The system uses this id to
organize the course composition and find the course elements. It
distinguishes between the currently-being-edited version (aka draft)
and the publicly viewable version (published). Not every course has a
published version, but every course will have a draft version. The
application specifies whether it wants the draft or the published
version. This system allows the application to switch easily between
the two; however, it will also support a configuration in which it's
impossible to access the draft, so that we can add access
optimizations and extraction filtering later if needed.
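For illustration, resolving the triple to a version might look like
this (a sketch only; the `active_versions` document shape is inferred
from this document, and the function name is invented)::

    def get_course_version(active_versions, org, course, run, published=False):
        # resolve a course-run triple to a structure guid on one branch
        entry = active_versions.find_one(
            {'org': org, 'course': course, 'run': run})
        if entry is None:
            raise LookupError('no such course run')
        branch = 'publishedVersion' if published else 'draftVersion'
        if entry.get(branch) is None:
            # every course has a draft version, but not necessarily a
            # published one
            raise LookupError('course has no ' + branch)
        return entry[branch]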
Location
~~~~~~~~
The purpose of `Location` is to identify content. That is, to be able
to locate content by providing sufficient addressing. The `Location`
object is ubiquitous throughout the current code and thus will be
difficult to adapt and make more flexible. Right now, it's a very
simple `namedtuple`, and a lot of code presumes this. This refactoring
generalizes and subclasses it to handle various addressing schemes and
removes direct manipulations.
Our code needs to locate several types of things and should probably
use several different types of locators for these. These are the types
of things we need to address. Some of these can be the same as others,
but I wanted to lay them out in fairly fine-grained form here before
proposing my distinctions:
#. Courses: an object representing a course as an offering but not any
of its content. Used for dashboards and other such navigators. These
may specify a version or merely reference the idea of the course's
existence.
#. Course structures: the names (and other metadata), `Locations`, and
children pointers but not definitions for all the blocks in a course
or a subtree of a course. Our applications often display contextual,
outline, or other such structural information, which does not need to
include definitions but does need display names, graded-as status, and
other status info. This document's design makes fetching these a
single document fetch; fetching the full course instead (all
definitions too) would require far more work than the apps need.
#. Blocks (uses of definitions within a version of a course including
metadata, pointers to children, and type specific content)
#. Definitions: use-independent definitions of content, without
metadata (and currently w/o pointers to children).
#. Version trees: fetching the time-history portrayal of a definition,
course, or block, including branching.
#. Collections of courses, definitions, or blocks matching some
partial descriptors (e.g., all courses for org x, all definitions of
type foo, all blocks in course y of type x, all currently accessible
courses (published with startdate < today and enddate > today)).
#. Fetching of courses, blocks, or definitions via "human readable"
urls. Partial descriptors (#6) may suffice for this, since
human-readable names do not guarantee uniqueness.
Some of these differ not so much in how to address them as in what
should be returned; the content returned should be up to the
functions, not the addressing scheme. So, I think the addressable
things are:
#. Course as in #1 above: usually a specific offering of a course.
Often used as a context for the other queries.
#. Blocks (aka usages) as in #3 above: a specific block contextualized
in a course
#. Definitions (#4): a specific definition
#. Collections of courses, blocks within a specific course, or
definitions matching a partial descriptor
Course locator (course_loc)
```````````````````````````
There are two ways to locate a course:
#. By its unique id in the `active_versions` collection with an
implied or specified selection of draft or published version.
#. By its unique id in the `structures` collection.
Block locator (block_loc)
`````````````````````````
A block locator finds a specific node in a specific version of a
course. Thus, it needs a course locator plus a `usage_id`.
Definition locator (definition_loc)
```````````````````````````````````
Just a `guid`.
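A minimal sketch of these three locator kinds (class and field names
are placeholders, not the eventual API)::

    from collections import namedtuple

    # a course: its id in active_versions plus a draft/published branch,
    # or alternatively a direct structures guid
    CourseLocator = namedtuple('CourseLocator',
                               ['course_id', 'branch', 'structure_guid'])

    # a block: a course locator plus the usage id within that structure
    BlockLocator = namedtuple('BlockLocator', ['course_locator', 'usage_id'])

    # a definition: just a guid
    DefinitionLocator = namedtuple('DefinitionLocator', ['definition_guid'])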
Partial descriptor collections locators (partial)
`````````````````````````````````````````````````
In the most general case, and to simplify implementation, these can be
any payload passable to mongo for doing the lookup. The specification
of which collection to look into can be implied by which lookup
function your code calls (get_courses, get_blocks, get_definitions) or
we could add it as another property. For now, I will leave this as
merely a search string. Thus, to find all courses for org = mitx,
`{"org": "mitx"}`. To find all blocks in a course whose display name
contains "circuit example", call `get_blocks` with the course locator
plus `{"metadata.display_name" : /circuit example/i}` (the i makes it
case insensitive and is just an example). To find whether a definition
is used in a course, call `get_blocks` with the course locator plus
`{definition : definition_guid}`. Note, this looks for a specific
version of the definition; to check whether any of a set of versions
is used, pass `{definition : {"$in" : [definition_guid*]}}`.
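Concretely, these example descriptors are just mongo query payloads;
in Python, the regex case uses a compiled pattern, which pymongo
passes through as a `$regex` (guids here are placeholders)::

    import re

    # all courses for org = mitx
    courses_query = {'org': 'mitx'}

    # all blocks whose display name contains "circuit example",
    # case-insensitively (the /.../i of the example above)
    blocks_query = {'metadata.display_name': re.compile('circuit example', re.I)}

    # is this specific definition version used in the course?
    use_query = {'definition': 'some_definition_guid'}

    # ...or any of a set of versions?
    any_use_query = {'definition': {'$in': ['guid_1', 'guid_2']}}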
i4x locator
```````````
To support existing xml based courses and any urls, we need to
support i4x locators. These are tuples of `(org course category id
['draft'])`. The trouble with these is that they don't uniquely
identify a course run from which to dereference the element. There's
also no requirement that `id` have any uniqueness outside the scope of
the other elements. There's some debate as to whether these address
blocks or definitions. To me, they seem to address blocks; however,
in the current system there is no distinction between blocks and
definitions, so either could be argued.
This version will define an `i4x_location` class for representing
these and using them for xml based courses if necessary.
Current code munges strings to make them 'acceptable' by replacing
'illegal' chars with underscores. I'd like to suggest leaving strings
as-is and using url escaping to produce acceptable urls. As for making
human-readable names from display strings, that should be the
responsibility of the naming module, not the Location representation,
in my opinion.
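The difference, sketched (the munging function is a caricature of the
current behavior, not the actual code; `urllib.quote` matches this
codebase's Python 2 vintage)::

    import re
    import urllib

    def munged(name):
        # current style: replace 'illegal' chars with underscores,
        # irreversibly losing information
        return re.sub(r'[^A-Za-z0-9.\-~]', '_', name)

    def escaped(name):
        # suggested style: store the string as-is and escape only at
        # the url boundary, so the original is always recoverable
        return urllib.quote(name, safe='')

    name = "Lab 2: Ohm's Law?"
    munged(name)   # -> 'Lab_2__Ohm_s_Law_'
    escaped(name)  # -> 'Lab%202%3A%20Ohm%27s%20Law%3F'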
Use cases (expository)
~~~~~~~~~~~~~~~~~~~~~~
There's a section below walking through a specific use case. This one
just tries to review potential functionality.
Inheritance
```````````
Our system has the notion of policies which should control the
behavior of whole courses or subtrees within courses. Such policies
include graceperiods, discussion forum controls, dates, whether to
show answers, how to randomize, etc. It's important that the course
authors' intent propagates to all relevant course sections. The
desired behavior is that (some? all?) metadata attributes on modules
flow down to all children unless overridden.
This design addresses inheritance by keeping course structure and
metadata separate from content, thus enabling a single db query (or a
small number of them) to fetch these and then compute the inheritance.
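A minimal sketch of that computation (which attributes actually
inherit is still an open question above, so the inheritable set here
is an assumption)::

    INHERITABLE = {'graceperiod', 'showanswer', 'start'}  # assumed set

    def compute_inherited(blocks, usage_id, inherited=None):
        # blocks: the 'blocks' dict of a single structures document
        # usage_id: root of the subtree to decorate
        block = blocks[usage_id]
        effective = dict(inherited or {})
        effective.update(block['metadata'])   # explicit settings win
        block['_effective_metadata'] = effective
        # only the inheritable subset flows down to children
        flows_down = dict((k, v) for k, v in effective.items()
                          if k in INHERITABLE)
        for child in block['children']:
            compute_inherited(blocks, child, flows_down)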
Separating editing from live production
```````````````````````````````````````
Course authors should be able to make changes in isolation from
production and then push out consistent chunks of changes for all
students to see as atomic and consistent. The current system allows
authors to change text and content without affecting production but
not metadata nor course structure. This design separates all changes
from production until pushed.
Sharing of content, part 1
``````````````````````````
Authors want to share content between course runs and even between
different courses. The current system requires copying all such
content, losing the provenance information that could be used to take
advantage of other people's changes. This design allows multiple
courses, and multiple places within a course, to point to the same
definitions and thus potentially, some day, see others' changes to
the content.
Sharing of content, part 2: course structure
````````````````````````````````````````````
Because course structures are separate from course identities, courses
can share structure and track changes in the same way as definitions.
That is, a new course run can point to an existing course instance
with its version history and then branch from there.
Sharing of content, part 3: modules
```````````````````````````````````
Suppose a course includes a soldering tutorial (or a required lab
safety lesson). Other courses want to use the same tutorial and
possibly allow the student to skip it if the student succeeded at it
in another course. As the tutorial updates, other courses may want to
track the updates or choose to move to the updates without having to
copy the modules from the module's authoritative parent course.
This design enables sharing of composed modules, but it does not track
the revisions of those modules separately from their courses. It does
not adequately address this case but may be extensible enough to do so.
That is, we could represent these shared units as separate "courses"
and allow ids in block.children[] to point to courses as well as other
blocks in the same course.
We should decide on the behaviors we want. For example: does the
student sometimes have to repeat the content, or never? Should
progress be tracked by the owning course or by a stand-alone
minicourse-type element? For something like a safety lesson, should
all courses track the current published head rather than having their
own heads, or should each choose when to promote the head?
Are these shared elements rare and large-grained enough to make the
indirection affordable, or will it devolve into the current
one-entry-per-module design for deducing course structure?
Functional differences from existing modulestore:
-------------------------------------------------
+ Courses and definitions support trees of versions knowing from where
they were derived. For now, I will not implement the server functions
for retrieving and manipulating these version trees and will leave
those for a future effort. I will only implement functions which
extend the trees.
+ Changes to course structure don't immediately affect production.
Note, we need to figure out the granularity of the user's publish
behavior for pushing out these actions. That is, do they publish a
whole subtree (which may include new children) in order to make these
effective; do they publish all structural changes (deletion, move)
under a subtree but not insertions as one action; do they publish each
action individually; or what? How do they know that any of these are
not yet published? Do we have phantom placeholders for deleted nodes
w/ "publish deletion" buttons?
+ Element deletion
+ Element move
+ Metadata changes
+ No location objects used as ids! This implementation will use guids
instead. There's a reasonable objection to guids as being too ugly,
long, and indecipherable. I will check mongo, pymongo, and Python guid
generation mechanisms to find out whether there's a way to make ones
which include a prepended string (such as course and run, or an
explicitly stated prepend string) and minimize guid length (e.g., by
using a sequential serial # from a global or local pool); see the
sketch below.
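One possible shape for such ids, pending that investigation (pure
speculation, not a committed design)::

    import uuid

    def prefixed_guid(prefix, length=8):
        # an explicit, human-chosen prefix plus a truncated uuid4;
        # truncation trades global uniqueness for brevity, so inserts
        # would need a collision check against the local pool
        return '%s-%s' % (prefix, uuid.uuid4().hex[:length])

    # e.g., prefixed_guid('mitx.6002x.2013q1') -> 'mitx.6002x.2013q1-3f9a1c2b'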
Use case walkthrough:
---------------------
Simple course creation with no precursor course. Note, this walkthrough
shows that publishing creates subsets and side copies, not in-line
versions of nodes. Each step below names the user action followed by
the resulting db effects.
create course for org, course id, run id
    + active_versions.draftVersion: add entry
    + definitions: add entry C w/ category = 'course', no data
    + structures: add entry w/ 1 child C, original = self, no previous,
      author = user

add section S
    + copy structures entry; new one points to old as original and previous
    + active_versions.draftVersion points to new
    + definitions: add entry S w/ category = 'section'
    + structures entry: add S to children of the course block; add S to
      blocks w/ no children

add subsection T
    + copy structures entry; new one points to old as original and previous
    + active_versions.draftVersion points to new
    + definitions: add entry T w/ category = 'sequential'
    + structures entry: add T to children of the S block entry; add T to
      blocks w/ no children

add unit U
    + copy structures entry; new one points to old as original and previous
    + active_versions.draftVersion points to new
    + definitions: add entry U w/ category = 'vertical'
    + structures entry: add U to children of the T block entry; add U to
      blocks w/ no children

publish U
    + create structures entry; new one points to self as original (no
      pointer to the draft course b/c it's not really a clone)
    + active_versions.publishedVersion points to new
    + blocks: add U, T, S, C pointers with each as respective child
      (regardless of other children they may have in draft), and their
      metadata

add units V, W, X under T
    + copy structures entry of the draftVersion; new one points to old
      as original and previous
    + active_versions.draftVersion points to new
    + definitions: add entries V, W, X w/ category = 'vertical'
    + structures entry: add V, W, X to children of the T block entry;
      add V, W, X to blocks w/ no children

edit U
    + copy structures entry; new one points to old as original and previous
    + active_versions.draftVersion points to new
    + definitions: copy entry U to U_2 w/ updates; U_2 points to U as
      original and previous
    + structures entry: replace U w/ U_2 in children of the T block
      entry; copy entry U in blocks to entry U_2 and remove U

add subsection Z under S
    + copy structures entry; new one points to old as original and previous
    + active_versions.draftVersion points to new
    + definitions: add entry Z w/ category = 'sequential'
    + structures entry: add Z to children of the S block entry; add Z to
      blocks w/ no children

edit S's name (metadata)
    + copy structures entry; new one points to old as original and previous
    + active_versions.draftVersion points to new
    + structures entry: update S's metadata w/ the new name

publish U, V
    + copy publishedCourse structures entry; new one points to old
      published as original and previous
    + active_versions.publishedVersion points to new
    + blocks: update T to point to new U & V and not old U
    + Note: does not update S's name

publish C
    + copy publishedCourse structures entry; new one points to old
      published as original and previous
    + active_versions.publishedVersion points to new
    + blocks: note that C's child S == published(S) but metadata !=;
      update metadata
    + note that S has unpublished children: publish them (recurse on this)
    + note that Z is unpublished: add pointer to blocks and children of S
    + note that W, X unpublished: add to blocks, add to children of T

edit C metadata (e.g., graceperiod)
    + copy draft structures entry; new one points to old as original and
      previous
    + active_versions.draftVersion points to new
    + structures entry: update C's metadata

add Y under Z
    + ...

publish C's metadata change
    + copy publishedCourse structures entry; new one points to old
      published as original and previous
    + active_versions.publishedVersion points to new
    + blocks: update C's metadata
    + Note: no copying of Y or any other changes to published

move X under Z
    + copy draft structures entry; new one points to old as original and
      previous
    + active_versions.draftVersion points to new
    + structures entry: remove X from T's children and add to Z's
    + Note: making it persistently clear to the user that X still exists
      under T in the published version will be crucial

delete W
    + copy draft structures entry; new one points to old as original and
      previous
    + active_versions.draftVersion points to new
    + structures entry: remove W from T's children and remove W from blocks
    + Note: no actual deletion of W; it's just no longer reachable w/in
      the draft course but is still in published, so we need to keep the
      user aware of that

publish Z
    + Note: the interesting thing here is that X cannot occur under both
      Z and T, but the user's not publishing T. Here's where having a
      consistent definition of original may help: if the original of a
      new element == the original of an existing one, then it's an
      update?
    + copy publishedCourse entry...
    + definitions: add Y; copy/update Z, X if either has any data changes
      (they don't)
    + blocks: remove X from T's children and add to Z's; add Y to Z's
      children; add Y to blocks

publish deletion of W
    + copy publishedCourse entry...
    + structures entry: remove W from T's children and remove W from blocks
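The recurring pattern above, condensed into a sketch (a hypothetical
helper covering the recursive 'publish C' case; the selective
'publish U' case would copy only the ancestor chain, and conflict
handling is ignored)::

    def publish_subtree(usage_id, draft_blocks, published_blocks):
        # copy one draft block into the new published structures entry;
        # the caller has already copied the old published entry and
        # pointed active_versions.publishedVersion at the copy
        block = draft_blocks[usage_id]
        published_blocks[usage_id] = {
            'definition': block['definition'],
            'children': list(block['children']),
            'metadata': dict(block['metadata']),
        }
        for child in block['children']:
            if child not in published_blocks:
                # unpublished child: recurse, as in the 'publish C' step
                publish_subtree(child, draft_blocks, published_blocks)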
Conflict detection:
We need scenarios in which two authors make edits to different parts
of the course, to parts whose parents are being moved, to parts whose
parents are being deleted, to the same parts, ...
......@@ -5,13 +5,13 @@ desc "Invoke sphinx 'make build' to generate docs."
task :builddocs, [:type, :quiet] do |t, args|
args.with_defaults(:quiet => "quiet")
if args.type == 'dev'
path = "docs/developers"
path = "docs/en_us/developers"
elsif args.type == 'author'
path = "docs/course_authors"
path = "docs/en_us/course_authors"
elsif args.type == 'data'
path = "docs/data"
path = "docs/en_us/data"
else
path = "docs"
path = "docs/en_us"
end
Dir.chdir(path) do
......@@ -26,13 +26,13 @@ end
desc "Show docs in browser (mac and ubuntu)."
task :showdocs, [:options] do |t, args|
if args.options == 'dev'
path = "docs/developers"
path = "docs/en_us/developers"
elsif args.options == 'author'
path = "docs/course_authors"
path = "docs/en_us/course_authors"
elsif args.options == 'data'
path = "docs/data"
path = "docs/en_us/data"
else
path = "docs/developers"
path = "docs/en_us/developers"
end
Launchy.open("#{path}/build/html/index.html")