Commit ef79b27b by Andrew Dekker

Sync fork

parents d06f1545 640eb0ef
......@@ -8,4 +8,5 @@ Mark Hoeber <hoeber@edx.org>
Sylvia Pearce <spearce@edx.org>
Ned Batchelder <ned@nedbatchelder.com>
David Baumgold <david@davidbaumgold.com>
Grady Ward <gward@brandeis.edu>
Andrew Dekker <a.dekker@uq.edu.au>
......@@ -33,7 +33,8 @@ install-nltk-data:
STATIC_JS = openassessment/xblock/static/js
javascript:
node_modules/.bin/uglifyjs $(STATIC_JS)/src/oa_shared.js $(STATIC_JS)/src/*.js > "$(STATIC_JS)/openassessment.min.js"
node_modules/.bin/uglifyjs $(STATIC_JS)/src/oa_shared.js $(STATIC_JS)/src/*.js $(STATIC_JS)/src/lms/*.js > "$(STATIC_JS)/openassessment-lms.min.js"
node_modules/.bin/uglifyjs $(STATIC_JS)/src/oa_shared.js $(STATIC_JS)/src/*.js $(STATIC_JS)/src/studio/*.js > "$(STATIC_JS)/openassessment-studio.min.js"
install-test:
......
.. image:: https://travis-ci.org/edx/edx-ora2.png?branch=master
:alt: Travis build status
.. image:: https://coveralls.io/repos/edx/edx-ora2/badge.png?branch=master
:target: https://coveralls.io/r/edx/edx-ora2?branch=master
:alt: Coverage badge
Open Response Assessment |build-status| |coverage-status|
=========================================================
`User documentation available on ReadTheDocs`__.
__ http://edx.readthedocs.org/projects/edx-open-response-assessments
User docs: |user-docs| Developer docs: |dev-docs|
Installation
......@@ -32,13 +24,6 @@ Running the Development Server
./scripts/workbench.sh
By default, the XBlock JavaScript will be combined and minified. To
preserve indentation and line breaks in JavaScript source files:
.. code:: bash
DEBUG_JS=1 ./scripts/workbench.sh
Additional arguments are passed to ``runserver``. For example,
to start the server on port 8001:
......@@ -142,3 +127,16 @@ Mailing List and IRC Channel
You can discuss this code on the
`edx-code Google Group <https://groups.google.com/forum/#!forum/edx-code>`_ or
in the `edx-code` IRC channel on Freenode.
.. |build-status| image:: https://travis-ci.org/edx/edx-ora2.png?branch=master
:target: https://travis-ci.org/edx/edx-ora2
:alt: Travis build status
.. |coverage-status| image:: https://coveralls.io/repos/edx/edx-ora2/badge.png?branch=master
:target: https://coveralls.io/r/edx/edx-ora2?branch=master
:alt: Coverage badge
.. |user-docs| image:: https://readthedocs.org/projects/edx-open-response-assessments/badge/?version=latest
:target: http://edx.readthedocs.org/projects/edx-open-response-assessments
:alt: User documentation
.. |dev-docs| image:: https://readthedocs.org/projects/edx-ora-2/badge/?version=latest
:target: http://edx.readthedocs.org/projects/edx-ora-2
:alt: Developer documentation
......@@ -22,6 +22,8 @@ A *rubric* is a list of expectations that a response should meet. Rubrics are ma
When you assess a response, you'll select the option that best describes the response for each of the criteria.
Some instructors create a **Top Responses** section that shows the top-scoring responses for the assignment and the scores that these responses received. If an instructor creates this section, you can see it below your score after you've completed each step of the assignment.
************************
Student Instructions
************************
......@@ -32,7 +34,7 @@ When you come to an open response assessment in the course, you'll see the quest
:alt: Open response assessment example with question, response field, and assessment types and status labeled
:width: 550
Here, we'll walk you through the process of completing an open response assessment that includes a peer assessment and a self assessment:
Here, we'll walk you through the process of completing an open response assessment that includes a student training step, a peer assessment, and a self assessment:
#. Submit your response to a question.
#. Learn to assess responses.
......@@ -77,7 +79,7 @@ Note that you can view your response at any time after you submit it. To do this
Submit an Image with Your Response
***********************************
Some assignments require you to submit an image with your text response. If you have to submit an image, you'll see buttons that you'll use to upload your image.
Some assignments allow you to submit an image with your text response. If you can submit an image, you'll see buttons that you'll use to upload your image.
.. image:: /Images/PA_Upload_ChooseFile.png
:alt: Open response assessment example with Choose File and Upload Your Image buttons circled
......@@ -89,7 +91,7 @@ To upload your image:
#. In the dialog box that opens, select the file that you want, and then click **Open**.
#. When the dialog box closes, click **Upload Your Image**.
Your image appears below the response field, and the name of the image file appears next to the **Choose File** button. If you want to change the image, follow steps 1-3 again.
Your image appears below the response field, and the name of the image file appears next to the **Choose File** button. If you want to change the image, follow steps 1-3 again. You can only upload one image.
.. image:: /Images/PA_Upload_WithImage.png
:alt: Example response with an image of Paris
......@@ -152,16 +154,32 @@ When peer assessment starts, you'll see the original question, another student's
You'll assess these responses by selecting options in the rubric, the same way you assessed the sample responses in the "learn to assess responses" step. Additionally, this step has a field below the rubric where you can provide comments about the student's response.
.. note:: Some assessments may have an additional **Comments** field for one or more of the assessment's individual criteria. You can enter up to 300 characters in these fields. In the following image, the first of the criteria has a separate **Comments** field, but the second does not.
.. note:: Some assessments may have an additional **Comments** field for one or more of the assessment's individual criteria. You can enter up to 300 characters in these fields. In the following image, both criteria have a **Comments** field. There is also a field for overall comments on the response.
.. image:: /Images/PA_S_CommentBoxes.png
:alt: Rubric with call-outs for comment boxes
:width: 500
.. image:: /Images/PA_CriterionAndOverallComments.png
:alt: Rubric with comment fields under each criterion and under overall response
:width: 600
After you've selected options in the rubric and provided additional comments about the response in this field, click **Submit your assessment and move to response #<number>**.
When you submit your assessment of the first student's response, another response opens for you. Assess this response in the same way that you assessed the first response, and then submit your assessment. You'll repeat these steps until you've assessed the required number of responses. The number in the upper-right corner of the step is updated as you assess each response.
Assess Additional Peer Responses
********************************
You can assess more peer responses if you want to. After you assess the required number of responses, the step "collapses" so that just the **Assess Peers** heading is visible.
.. image:: /Images/PA_PAHeadingCollapsed.png
:width: 500
:alt: The peer assessment step with just the heading visible
To assess more responses, click the **Assess Peers** heading to expand the step. Then, click **Continue Assessing Peers**.
.. image:: /Images/PA_ContinueGrading.png
:width: 500
:alt: The peer assessment step expanded so that "Continue Assessing Peers" is visible
=====================
Assess Your Response
=====================
......@@ -201,9 +219,9 @@ If you've assessed the required number of peer responses and completed your self
Peer Assessment Scoring
***********************
Peer assessments are scored by criteria. An individual criterion's score is the median of the scores that each peer assessor gave that criterion. For example, if the Ideas criterion in a peer assessment receives a 10 from one student, a 7 from a second student, and an 8 from a third student, the Ideas criterion's score is 8.
Peer assessments are scored by criteria. An individual criterion's score is the *median*, not average, of the scores that each peer assessor gave that criterion. For example, if the Ideas criterion in a peer assessment receives a 10 from one student, a 7 from a second student, and an 8 from a third student, the Ideas criterion's score is 8.
A student's final score for a peer assessment is the sum of the median scores for each individual criterion.
Your final score for a peer assessment is the sum of the median scores for each individual criterion.
For example, a response may receive the following scores from peer assessors:
......@@ -237,4 +255,18 @@ To calculate the final score, the system adds the median scores for each criteri
**Ideas median (8/10) + Content median (8/10) + Grammar median (4/5) = final score (20/25)**
Note, again, that final scores are calculated by criteria, not by individual assessor. Thus the response's score is not the median of the scores that each individual peer assessor gave the response.
Note, again, that final scores are calculated by criteria, not by assessor. Thus your score is not the median of the scores that each individual peer assessor gave the response.
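If you would like to see the calculation spelled out, the following is a minimal Python sketch of the median-per-criterion scoring described above. The Ideas scores come from the example; the Content and Grammar scores are hypothetical values chosen so that their medians match the example's medians of 8 and 4.

.. code:: python

    # Illustration only: median-per-criterion scoring as described above.
    from statistics import median

    # Scores given by three peer assessors for each criterion.
    # Ideas comes from the example; Content and Grammar are hypothetical.
    peer_scores = {
        "Ideas": [10, 7, 8],
        "Content": [9, 8, 7],
        "Grammar": [4, 3, 5],
    }

    # Each criterion's score is the median of its peer scores.
    criterion_scores = {name: median(scores) for name, scores in peer_scores.items()}

    # The final score is the sum of the per-criterion medians.
    final_score = sum(criterion_scores.values())

    print(criterion_scores)  # {'Ideas': 8, 'Content': 8, 'Grammar': 4}
    print(final_score)       # 20 (out of 25)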
==================================
View Top Responses (optional)
==================================
If the instructor has included a **Top Responses** section, you can see the highest-scoring responses that your peers have submitted. This section only appears after you've completed all the steps of the assignment.
.. image:: /Images/PA_TopResponses.png
:alt: Section that shows the text and scores of the top three responses for the assignment
:width: 500
......@@ -12,5 +12,6 @@ Creating Peer Assessments
:maxdepth: 2
PeerAssessment
PeerAssessment_Students
CreatePeerAssessment
Access_PA_Info
PeerAssessment_Students
\ No newline at end of file
......@@ -36,6 +36,9 @@ configured:
* FILE_UPLOAD_STORAGE_BUCKET_NAME - The name of the S3 bucket configured for uploading and downloading content.
* FILE_UPLOAD_STORAGE_PREFIX (optional) - The file prefix within the bucket for storing all content. Defaults to 'submissions_attachments'.
Note that your S3 bucket must have a DNS-compliant name, which the
File Upload Service uses to generate the upload and download URLs.
In addition, your S3 bucket must have a CORS configuration that allows PUT
and GET requests across request origins. To do so, you must:
......@@ -55,3 +58,34 @@ and GET requests to be performed across request origins. To do so, you must:
<AllowedMethod>GET</AllowedMethod>
</CORSRule>
</CORSConfiguration>
Note that you must also configure an IAM user and group with access to your S3 bucket.
1. From the Amazon AWS console, select Services, then IAM.
2. Select Groups.
3. Create a new 'upload' group.
4. Attach a policy to this new group. The following is a lenient upload
policy for S3:
.. code-block:: json
    {
        "Version": "2012-10-17",
        "Statement": [
            {
                "Sid": "Stmt1403207543000",
                "Effect": "Allow",
                "Action": [
                    "s3:*"
                ],
                "Resource": [
                    "*"
                ]
            }
        ]
    }
5. Create a new user and add the user to the new 'upload' group. Choose to
generate a new access key for this user.
6. Use this new access key in the settings described above:
``AWS_ACCESS_KEY_ID`` and ``AWS_SECRET_ACCESS_KEY``.
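For reference, a corresponding Django settings snippet might look like the minimal sketch below. Only the setting names come from this guide; the bucket name and key values are placeholders that you would replace with your own.

.. code-block:: python

    # Minimal sketch of file upload settings (placeholder values only).
    AWS_ACCESS_KEY_ID = "YOUR-ACCESS-KEY-ID"                 # access key of the IAM user created above
    AWS_SECRET_ACCESS_KEY = "YOUR-SECRET-ACCESS-KEY"         # secret key of the IAM user created above
    FILE_UPLOAD_STORAGE_BUCKET_NAME = "my-ora2-uploads"      # must be a DNS-compliant bucket name
    FILE_UPLOAD_STORAGE_PREFIX = "submissions_attachments"   # optional; shown here with its default value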
......@@ -25,6 +25,15 @@ Developer Documentation
architecture/index
Migrating AI Problems to ORA2
-----------------------------
.. toctree::
:maxdepth: 2
migrate_ai
API Documentation
-----------------
......
.. _migrate_ai:
Migrating AI Problems
---------------------
ORA2 supports AI assessment of student responses, but it does not currently support authoring of AI problems. To migrate an existing AI assessment problem into ORA2, you will need to:
1. Create a problem with example-based assessment enabled.
a. Create an ORA2 problem in a course. See `the user documentation <http://edx.readthedocs.org/projects/edx-open-response-assessments>`__ for directions.
b. `Export the course using Studio <http://ca.readthedocs.org/en/latest/building_course/export_import_course.html>`__.
c. Untar the exported course and find the problem XML. You can search for the XML element ``openassessment``.
d. Add the AI ("example-based") assessment to the XML, including the example essays and scores. The selected criteria and options **must** match the rubric in the XML definition.
.. code:: xml
    <assessment name="example-based-assessment" algorithm_id="ease">
        <example>
            <answer>First essay</answer>
            <select criterion="Ideas" option="Bad" />
            <select criterion="Content" option="Bad" />
        </example>
        <example>
            <answer>Second essay</answer>
            <select criterion="Ideas" option="Good" />
            <select criterion="Content" option="Bad" />
        </example>
        <example>
            <answer>Third essay</answer>
            <select criterion="Ideas" option="Bad" />
            <select criterion="Content" option="Good" />
        </example>
    </assessment>
..
e. Archive the course in ``tar.gz`` format.
f. `Import the course into Studio <http://ca.readthedocs.org/en/latest/building_course/export_import_course.html>`__.
2. Train classifiers.
a. Log in to the LMS as global staff. (If your account does not have global staff permissions, you will need to run a Django management command).
b. Navigate to the ORA2 problem you created.
c. In the "Course Staff Information" section (at the bottom of the problem), click the button "Schedule Example-Based Training"
d. When training completes (should take ~1 hour), the "Course Staff Information" section will show that a classifier has been trained.
.. image:: course_staff_ai.png
3. At this point, students can submit essays and receive grades.
......@@ -25,7 +25,12 @@ module.exports = function(config) {
'lib/*.js',
'src/oa_shared.js',
'src/*.js',
'src/lms/*.js',
'src/studio/*.js',
'spec/test_shared.js',
'spec/*.js',
'spec/lms/*.js',
'spec/studio/*.js',
// fixtures
{
......@@ -44,7 +49,9 @@ module.exports = function(config) {
// preprocess matching files before serving them to the browser
// available preprocessors: https://npmjs.org/browse/keyword/karma-preprocessor
preprocessors: {
'src/*.js': 'coverage'
'src/*.js': 'coverage',
'src/lms/*.js': 'coverage',
'src/studio/*.js': 'coverage'
},
......
......@@ -26,7 +26,7 @@ class RubricAdmin(admin.ModelAdmin):
"""Short description of criteria for presenting in a list."""
rubric_data = RubricSerializer.serialized_from_cache(rubric_obj)
return u", ".join(
u"{}: {}".format(criterion["name"], criterion["points_possible"])
u"{} - {}: {}".format(criterion["name"], criterion['label'], criterion["points_possible"])
for criterion in rubric_data["criteria"]
)
......@@ -88,11 +88,13 @@ class AssessmentAdmin(admin.ModelAdmin):
def parts_summary(self, assessment_obj):
return "<br/>".join(
html.escape(
u"{}/{} - {}: {} - {}".format(
u"{}/{} - {} - {}: {} - {} - {}".format(
part.points_earned,
part.points_possible,
part.criterion.name,
part.criterion.label,
part.option.name if part.option else "None",
part.option.label if part.option else "None",
part.feedback,
)
)
......
......@@ -249,7 +249,7 @@ def create_assessment(
try:
# Retrieve workflow information
scorer_workflow = PeerWorkflow.objects.get(submission_uuid=scorer_submission_uuid)
peer_workflow_item = scorer_workflow.get_latest_open_workflow_item()
peer_workflow_item = scorer_workflow.find_active_assessments()
if peer_workflow_item is None:
message = (
u"There are no open assessments associated with the scorer's "
......@@ -257,7 +257,7 @@ def create_assessment(
).format(scorer_submission_uuid)
logger.warning(message)
raise PeerAssessmentWorkflowError(message)
peer_submission_uuid = peer_workflow_item.author.submission_uuid
peer_submission_uuid = peer_workflow_item.submission_uuid
assessment = _complete_assessment(
rubric_dict,
......@@ -666,7 +666,8 @@ def get_submission_to_assess(submission_uuid, graded_by):
u"A Peer Assessment Workflow does not exist for the student "
u"with submission UUID {}".format(submission_uuid)
)
peer_submission_uuid = workflow.find_active_assessments()
open_item = workflow.find_active_assessments()
peer_submission_uuid = open_item.submission_uuid if open_item else None
# If there is an active assessment for this user, get that submission,
# otherwise, get the first assessment for review, otherwise,
# get the first submission available for over grading ("over-grading").
......
......@@ -189,9 +189,9 @@ def validate_training_examples(rubric, examples):
]
if len(set(criteria_options) - set(criteria_without_options)) == 0:
return [_(
u"When you include a student training assessment, "
u"the rubric for the assessment must contain at least one criterion, "
u"and each criterion must contain at least two options."
"If your assignment includes a student training step, "
"the rubric must have at least one criterion, "
"and that criterion must have at least one option."
)]
# Check each example
......
......@@ -149,7 +149,13 @@ class Criterion(models.Model):
"""
rubric = models.ForeignKey(Rubric, related_name="criteria")
# Backwards compatibility: The "name" field was formerly
# used both as a display name and as a unique identifier.
# Now we're using it only as a unique identifier.
# We include the "label" (which is displayed to the user)
# in the data model so we can include it in analytics data packages.
name = models.CharField(max_length=100, blank=False)
label = models.CharField(max_length=100, blank=True)
# 0-based order in the Rubric
order_num = models.PositiveIntegerField()
......@@ -189,9 +195,13 @@ class CriterionOption(models.Model):
# How many points this option is worth. 0 is allowed.
points = models.PositiveIntegerField()
# Short name of the option. This is visible to the user.
# Examples: "Excellent", "Good", "Fair", "Poor"
# Backwards compatibility: The "name" field was formerly
# used both as a display name and as a unique identifier.
# Now we're using it only as a unique identifier.
# We include the "label" (which is displayed to the user)
# in the data model so we can include it in analytics data packages.
name = models.CharField(max_length=100)
label = models.CharField(max_length=100, blank=True)
# Longer text describing this option and why you should choose it.
# Example: "The response makes 3-5 Monty Python references and at least one
......
......@@ -216,13 +216,29 @@ class PeerWorkflow(models.Model):
for this PeerWorkflow.
Returns:
submission_uuid (str): The submission_uuid for the submission that the
(PeerWorkflowItem) The PeerWorkflowItem for the submission that the
student has open for active assessment.
"""
oldest_acceptable = now() - self.TIME_LIMIT
workflows = self.graded.filter(assessment__isnull=True, started_at__gt=oldest_acceptable) # pylint:disable=E1101
return workflows[0].submission_uuid if workflows else None
items = list(self.graded.all().order_by("-started_at", "-id"))
valid_open_items = []
completed_sub_uuids = []
# First, remove all completed items.
for item in items:
if item.assessment is not None:
completed_sub_uuids.append(item.submission_uuid)
else:
valid_open_items.append(item)
# Remove any open items which have a submission which has been completed.
for item in valid_open_items:
if (item.started_at < oldest_acceptable or
item.submission_uuid in completed_sub_uuids):
valid_open_items.remove(item)
return valid_open_items[0] if valid_open_items else None
def get_submission_for_review(self, graded_by):
"""
......@@ -331,19 +347,6 @@ class PeerWorkflow(models.Model):
logger.exception(error_message)
raise PeerAssessmentInternalError(error_message)
def get_latest_open_workflow_item(self):
"""
Return the latest open workflow item for this workflow.
Returns:
A PeerWorkflowItem that is open for assessment.
None if no item is found.
"""
workflow_query = self.graded.filter(assessment__isnull=True).order_by("-started_at", "-id") # pylint:disable=E1101
items = list(workflow_query[:1])
return items[0] if items else None
def close_active_assessment(self, submission_uuid, assessment, num_required_grades):
"""
Updates a workflow item on the student's workflow with the associated
......
......@@ -63,7 +63,7 @@ class CriterionOptionSerializer(NestedModelSerializer):
"""Serializer for :class:`CriterionOption`"""
class Meta:
model = CriterionOption
fields = ('order_num', 'points', 'name', 'explanation')
fields = ('order_num', 'points', 'name', 'label', 'explanation')
class CriterionSerializer(NestedModelSerializer):
......@@ -73,7 +73,7 @@ class CriterionSerializer(NestedModelSerializer):
class Meta:
model = Criterion
fields = ('order_num', 'name', 'prompt', 'options', 'points_possible')
fields = ('order_num', 'name', 'label', 'prompt', 'options', 'points_possible')
class RubricSerializer(NestedModelSerializer):
......
......@@ -577,7 +577,7 @@
"options_selected": {}
}
],
"errors": ["When you include a student training assessment, the rubric for the assessment must contain at least one criterion, and each criterion must contain at least two options."]
"errors": ["If your assignment includes a student training step, the rubric must have at least one criterion, and that criterion must have at least one option."]
}
}
......@@ -151,7 +151,7 @@ class TestPeerApi(CacheResetTest):
Tests for the peer assessment API functions.
"""
CREATE_ASSESSMENT_NUM_QUERIES = 59
CREATE_ASSESSMENT_NUM_QUERIES = 58
def test_create_assessment_points(self):
self._create_student_and_submission("Tim", "Tim's answer")
......@@ -879,8 +879,8 @@ class TestPeerApi(CacheResetTest):
PeerWorkflow.create_item(buffy_workflow, xander_answer["uuid"])
# Check to see if Buffy is still actively reviewing Xander's submission.
submission_uuid = buffy_workflow.find_active_assessments()
self.assertEqual(xander_answer["uuid"], submission_uuid)
item = buffy_workflow.find_active_assessments()
self.assertEqual(xander_answer["uuid"], item.submission_uuid)
def test_get_workflow_by_uuid(self):
buffy_answer, _ = self._create_student_and_submission("Buffy", "Buffy's answer")
......@@ -1212,6 +1212,70 @@ class TestPeerApi(CacheResetTest):
MONDAY,
)
def test_ignore_duplicate_workflow_items(self):
"""
A race condition may cause two workflow items to be opened for a single
submission. In this case, we want to be defensive in the API, such that
no open workflow item is acknowledged if an assessment has already been
made against the associated submission.
"""
bob_sub, bob = self._create_student_and_submission('Bob', 'Bob submission')
tim_sub, tim = self._create_student_and_submission('Tim', 'Tim submission')
sally_sub, sally = self._create_student_and_submission('Sally', 'Sally submission')
jane_sub, jane = self._create_student_and_submission('Jane', 'Jane submission')
# Create two workflow items.
peer_api.create_peer_workflow_item(bob_sub['uuid'], tim_sub['uuid'])
peer_api.create_peer_workflow_item(bob_sub['uuid'], tim_sub['uuid'])
# Assess the submission, then get the next submission.
peer_api.create_assessment(
bob_sub['uuid'],
bob['student_id'],
ASSESSMENT_DICT['options_selected'],
ASSESSMENT_DICT['criterion_feedback'],
ASSESSMENT_DICT['overall_feedback'],
RUBRIC_DICT,
REQUIRED_GRADED_BY,
MONDAY
)
# Verify the next submission is not Tim again, but Sally.
next_sub = peer_api.get_submission_to_assess(bob_sub['uuid'], REQUIRED_GRADED_BY)
self.assertEqual(next_sub['uuid'], sally_sub['uuid'])
# Request another peer submission. Should pick up Sally again.
next_sub = peer_api.get_submission_to_assess(bob_sub['uuid'], REQUIRED_GRADED_BY)
self.assertEqual(next_sub['uuid'], sally_sub['uuid'])
# Ensure that the next assessment made is against Sally, not Tim.
# Assess the submission, then get the next submission.
peer_api.create_assessment(
bob_sub['uuid'],
bob['student_id'],
ASSESSMENT_DICT['options_selected'],
ASSESSMENT_DICT['criterion_feedback'],
ASSESSMENT_DICT['overall_feedback'],
RUBRIC_DICT,
REQUIRED_GRADED_BY,
MONDAY
)
# Make sure Tim has one assessment.
tim_assessments = peer_api.get_assessments(tim_sub['uuid'], scored_only=False)
self.assertEqual(1, len(tim_assessments))
# Make sure Sally has one assessment.
sally_assessments = peer_api.get_assessments(sally_sub['uuid'], scored_only=False)
self.assertEqual(1, len(sally_assessments))
# Make sure Jane has no assessment.
jane_assessments = peer_api.get_assessments(jane_sub['uuid'], scored_only=False)
self.assertEqual(0, len(jane_assessments))
def test_get_submission_to_assess_no_workflow(self):
# Try to retrieve a submission to assess when the student
# doing the assessment hasn't yet submitted.
......
......@@ -28,7 +28,8 @@ class CsvWriter(object):
],
'assessment_part': [
'assessment_id', 'points_earned',
'criterion_name', 'option_name', 'feedback'
'criterion_name', 'criterion_label',
'option_name', 'option_label', 'feedback'
],
'assessment_feedback': [
'submission_uuid', 'feedback_text', 'options'
......@@ -230,7 +231,9 @@ class CsvWriter(object):
part.assessment.id,
part.points_earned,
part.criterion.name,
part.criterion.label,
part.option.name if part.option is not None else u"",
part.option.label if part.option is not None else u"",
part.feedback
])
......