Commit cb932343 by Will Daly

Merge remote-tracking branch 'origin/master' into authoring

Conflicts:
	Makefile
	openassessment/templates/openassessmentblock/peer/oa_peer_assessment.html
	openassessment/templates/openassessmentblock/self/oa_self_assessment.html
	openassessment/xblock/data_conversion.py
	openassessment/xblock/grade_mixin.py
	openassessment/xblock/peer_assessment_mixin.py
	openassessment/xblock/self_assessment_mixin.py
	openassessment/xblock/static/css/openassessment.css
	openassessment/xblock/static/js/src/oa_server.js
	openassessment/xblock/studio_mixin.py
	openassessment/xblock/xml.py
parents 7b55559d 49280183
@@ -32,7 +32,7 @@ install-nltk-data:

STATIC_JS = openassessment/xblock/static/js

-minimize-js:
+javascript:
	node_modules/.bin/uglifyjs $(STATIC_JS)/src/oa_shared.js $(STATIC_JS)/src/*.js $(STATIC_JS)/src/lms/*.js > "$(STATIC_JS)/openassessment-lms.min.js"
	node_modules/.bin/uglifyjs $(STATIC_JS)/src/oa_shared.js $(STATIC_JS)/src/*.js $(STATIC_JS)/src/studio/*.js > "$(STATIC_JS)/openassessment-studio.min.js"

@@ -41,7 +41,7 @@ install-test:
	pip install -q -r requirements/test.txt

-install: install-system install-node install-wheels install-python install-js install-nltk-data install-test minimize-js
+install: install-system install-node install-wheels install-python install-js install-nltk-data install-test javascript

test:
	./scripts/test.sh
@@ -47,6 +47,27 @@ to start the server on port 8001:

    ./scripts/workbench.sh 8001

Combining and Minifying JavaScript and Sass
============================================

To reduce page size, the OpenAssessment XBlock serves combined/minified
versions of its JavaScript and CSS. These combined/minified files are checked
into the git repository.

If you modify JavaScript or Sass, you MUST regenerate the combined/minified
files:

.. code:: bash

    # Combine/minify JavaScript
    make javascript

    # Combine/minify CSS (from Sass)
    ./scripts/sass.sh

Make sure you commit the combined/minified files to the git repository!

Running Tests
=============
......
@@ -74,6 +74,29 @@ Note that you can view your response at any time after you submit it. To do this

:alt: Image of the Response field collapsed and then expanded
:width: 550
Submit an Image with Your Response
***********************************
Some assignments require you to submit an image with your text response. If you have to submit an image, you'll see buttons that you'll use to upload your image.
.. image:: /Images/PA_Upload_ChooseFile.png
:alt: Open response assessment example with Choose File and Upload Your Image buttons circled
:width: 500
To upload your image:
#. Click **Choose File**.
#. In the dialog box that opens, select the file that you want, and then click **Open**.
#. When the dialog box closes, click **Upload Your Image**.
Your image appears below the response field, and the name of the image file appears next to the **Choose File** button. If you want to change the image, follow steps 1-3 again.
.. image:: /Images/PA_Upload_WithImage.png
:alt: Example response with an image of Paris
:width: 500
.. note:: You must submit text as well as your image in your response. You can't submit a response that doesn't contain text.
============================
Learn to Assess Responses
============================
......
############
Change Log
############
***********
July 2014
***********
.. list-table::
:widths: 10 70
:header-rows: 1
* - Date
- Change
* - 07/15/14
- Added information about uploading an image file in a response to both :ref:`Peer Assessments` and :ref:`PA for Students`.
* -
- Added information about providing a criterion that includes a comment field only to :ref:`Peer Assessments`.
@@ -39,8 +39,14 @@ Student Training

.. automodule:: openassessment.assessment.api.student_training
:members:

-Workflow Assessment
-*******************
+File Upload
+***********
+
+.. automodule:: openassessment.fileupload.api
+:members:
+
+Workflow
+********

.. automodule:: openassessment.workflow
:members:
......
@@ -4,8 +4,6 @@

AI Grading
##########

.. warning:: This is a DRAFT that has not yet been implemented.

Overview
--------

@@ -234,76 +232,10 @@ Recovery from Failure

c. Horizontally scale workers to handle additional load.
Data Model
----------
1. **GradingWorkflow**
a. Submission UUID (varchar)
b. ClassifierSet (Foreign Key, Nullable)
c. Assessment (Foreign Key, Nullable)
d. Rubric (Foreign Key): Used to search for classifier sets if none are available when the workflow is started.
e. Algorithm ID (varchar): Used to search for classifier sets if none are available when the workflow is started.
f. Scheduled at (timestamp): The time the task was placed on the queue.
g. Completed at (timestamp): The time the task was completed. If set, the task is considered complete.
h. Course ID (varchar): The ID of the course associated with the submission. Useful for rescheduling failed grading tasks in a particular course.
i. Item ID (varchar): The ID of the item (problem) associated with the submission. Useful for rescheduling failed grading tasks in a particular item in a course.
2. **TrainingWorkflow**
a. Algorithm ID (varchar)
b. Many-to-many relation with **TrainingExample**. We can re-use examples for multiple workflows.
c. ClassifierSet (Foreign Key)
d. Scheduled at (timestamp): The time the task was placed on the queue.
e. Completed at (timestamp): The time the task was completed. If set, the task is considered complete.
3. **TrainingExample**
a. Response text (text)
b. Options selected (many to many relation with CriterionOption)
4. **ClassifierSet**
a. Rubric (Foreign Key)
b. Created at (timestamp)
c. Algorithm ID (varchar)
5. **Classifier**
a. ClassifierSet (Foreign Key)
b. URL for trained classifier (varchar)
c. Criterion (Foreign Key)
6. **Assessment** (same as current implementation)
a. Submission UUID (varchar)
b. Rubric (Foreign Key)
7. **AssessmentPart** (same as current implementation)
a. Assessment (Foreign Key)
b. Option (Foreign Key to a **CriterionOption**)
8. **Rubric** (same as current implementation)
9. **Criterion** (same as current implementation)
a. Rubric (Foreign Key)
b. Name (varchar)
10. **CriterionOption** (same as current implementation)
a. Criterion (Foreign Key)
b. Points (positive integer)
c. Name (varchar)
Notes:

-* We use a URL to reference the trained classifier so we can avoid storing it in the database.
-  In practice, the URL will almost certainly point to Amazon S3, but in principle we could use
-  other backends.
+* The storage backend is pluggable. In production, we use Amazon S3, but in principle we could use other backends (including the local filesystem in local dev).

* Unfortunately, the ML algorithm we will use for initial release (EASE) requires that we
persist the trained classifiers using Python's ``pickle`` module. This has security implications
......
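The note above describes a pluggable storage seam for trained classifiers: callers persist the serialized classifier somewhere and keep only a URL-like reference in the database. The sketch below illustrates that shape with a toy local-filesystem backend; the class and method names are assumptions for illustration, not part of edx-ora2.

.. code:: python

    # Toy sketch of the pluggable classifier storage described above: store the
    # serialized classifier and keep only a URL-like reference in the database.
    # Names here are illustrative, not the real edx-ora2 interface.
    import os
    import tempfile


    class FileSystemClassifierStorage(object):
        """Local-filesystem stand-in; a production backend might target S3."""

        def __init__(self, root_dir=None):
            self.root_dir = root_dir or tempfile.mkdtemp()

        def put(self, key, data):
            # Write the serialized classifier and return a reference to it.
            path = os.path.join(self.root_dir, key)
            with open(path, 'wb') as handle:
                handle.write(data)
            return 'file://' + path

        def get(self, url):
            # Read the serialized classifier back from its reference.
            with open(url[len('file://'):], 'rb') as handle:
                return handle.read()


    storage = FileSystemClassifierStorage()
    url = storage.put('clarity-classifier.pkl', b'serialized-classifier-bytes')
    assert storage.get(url) == b'serialized-classifier-bytes'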
@@ -4,8 +4,6 @@

Understanding the Workflow
##########################

.. warning:: The following section refers to features that are not yet fully
implemented.

The `openassessment.workflow` application is tasked with managing the overall
life-cycle of a student's submission as it goes through various evaluation steps
@@ -49,7 +47,9 @@ Isolation of Assessment types

a non `None` value has been returned by this function for a given
`submission_uuid`, repeated calls to this function should return the same
thing.

`on_init(submission_uuid)`
    Notification to the API that the student has submitted a response.

`on_start(submission_uuid)`
    Notification to the API that the student has started the assessment step.

In the long run, it could be that `OpenAssessmentBlock` becomes a wrapper
......
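The text above describes the hook-based contract between the workflow and each assessment API: every step is keyed by `submission_uuid` and receives `on_init` when the response is submitted and `on_start` when the student begins the step. A minimal sketch of such a step module follows; only the hook names come from the documentation, and the in-memory state store is purely illustrative.

.. code:: python

    # Minimal sketch of an assessment-step API exposing the notification hooks
    # described above. The dictionary stands in for real persistence.
    _STEP_STATE = {}


    def on_init(submission_uuid):
        """Notification that the student has submitted a response."""
        _STEP_STATE.setdefault(submission_uuid, {})['initialized'] = True


    def on_start(submission_uuid):
        """Notification that the student has started the assessment step."""
        _STEP_STATE.setdefault(submission_uuid, {})['started'] = True


    on_init('74a9d63e8a5fea369fd391d07befbd86ae4dc6e2')
    on_start('74a9d63e8a5fea369fd391d07befbd86ae4dc6e2')
    assert _STEP_STATE['74a9d63e8a5fea369fd391d07befbd86ae4dc6e2']['started']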
@@ -12,9 +12,8 @@ Setup
-----

::

-    pip install -r requirements/dev.txt
-    pip install -e .
-    python manage.py runserver
+See the `README <https://github.com/edx/edx-ora2/blob/master/README.rst>`_

Developer Documentation

@@ -34,4 +33,3 @@ API Documentation

:maxdepth: 2

api
@@ -90,7 +90,7 @@ def on_init(submission_uuid, rubric=None, algorithm_id=None):

Args:
submission_uuid (str): The UUID of the submission to assess.

-Kwargs:
+Keyword Arguments:
rubric (dict): Serialized rubric model.
algorithm_id (unicode): Use only classifiers trained with the specified algorithm.

@@ -104,8 +104,9 @@ def on_init(submission_uuid, rubric=None, algorithm_id=None):

AIGradingRequestError
AIGradingInternalError

-Example usage:
->>> submit('74a9d63e8a5fea369fd391d07befbd86ae4dc6e2', rubric, 'ease')
+Example Usage:
+>>> on_init('74a9d63e8a5fea369fd391d07befbd86ae4dc6e2', rubric, 'ease')
'10df7db776686822e501b05f452dc1e4b9141fe5'

"""

@@ -179,7 +180,8 @@ def get_latest_assessment(submission_uuid):

Raises:
AIGradingInternalError

-Examle usage:
+Example usage:
>>> get_latest_assessment('10df7db776686822e501b05f452dc1e4b9141fe5')
{
'points_earned': 6,

@@ -261,6 +263,7 @@ def train_classifiers(rubric_dict, examples, course_id, item_id, algorithm_id):

AITrainingInternalError

Example usage:
>>> train_classifiers(rubric, examples, 'ease')
'10df7db776686822e501b05f452dc1e4b9141fe5'

@@ -307,7 +310,7 @@ def reschedule_unfinished_tasks(course_id=None, item_id=None, task_type=u"grade"

only reschedule the unfinished grade tasks. Applied use case (with button in
staff mixin) is to call without argument, and to reschedule grades only.

-Kwargs:
+Keyword Arguments:
course_id (unicode): Restrict to unfinished tasks in a particular course.
item_id (unicode): Restrict to unfinished tasks for a particular item in a course.
NOTE: if you specify the item ID, you must also specify the course ID.
......
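Read together, the docstrings above give the calling conventions for the AI grading API. A hedged usage sketch follows; the import path `openassessment.assessment.api.ai`, the already-serialized `rubric` dict, and a configured Django/edx-ora2 environment are assumptions not shown in this diff.

.. code:: python

    # Usage sketch based on the docstrings above; assumes a configured edx-ora2
    # environment and that the module lives at openassessment.assessment.api.ai.
    from openassessment.assessment.api import ai as ai_api


    def grade_with_ai(submission_uuid, rubric):
        # Schedule grading for one submission, restricted to EASE-trained classifiers.
        workflow_uuid = ai_api.on_init(submission_uuid, rubric=rubric, algorithm_id='ease')

        # Later, fetch the most recent AI assessment for the submission.
        latest = ai_api.get_latest_assessment(submission_uuid)
        return workflow_uuid, latest


    # Per the note above, passing an item ID also requires the course ID.
    # ai_api.reschedule_unfinished_tasks(course_id=u'edX/Demo/2014', task_type=u'grade')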
@@ -225,7 +225,7 @@ def create_assessment(

assessments is reached, the grading_completed_at timestamp is set
for the Workflow.

-Kwargs:
+Keyword Args:
scored_at (datetime): Optional argument to override the time in which
the assessment took place. If not specified, scored_at is set to
now.

@@ -358,8 +358,8 @@ def get_assessment_median_scores(submission_uuid):

appropriate median score.

Returns:
-(dict): A dictionary of rubric criterion names, with a median score of
-the peer assessments.
+dict: A dictionary of rubric criterion names,
+with a median score of the peer assessments.

Raises:
PeerAssessmentInternalError: If any error occurs while retrieving

@@ -430,16 +430,19 @@ def get_assessments(submission_uuid, scored_only=True, limit=None):

submission_uuid (str): The submission all the requested assessments are
associated with. Required.

-Kwargs:
+Keyword Arguments:
scored (boolean): Only retrieve the assessments used to generate a score
for this submission.
limit (int): Limit the returned assessments. If None, returns all.

Returns:
-list(dict): A list of dictionaries, where each dictionary represents a
+list: A list of dictionaries, where each dictionary represents a
separate assessment. Each assessment contains points earned, points
possible, time scored, scorer id, score type, and feedback.

Raises:
PeerAssessmentRequestError: Raised when the submission_id is invalid.
PeerAssessmentInternalError: Raised when there is an internal error

@@ -496,7 +499,7 @@ def get_submitted_assessments(submission_uuid, scored_only=True, limit=None):

submission_uuid (str): The submission of the student whose assessments
we are requesting. Required.

-Kwargs:
+Keyword Arguments:
scored (boolean): Only retrieve the assessments used to generate a score
for this submission.
limit (int): Limit the returned assessments. If None, returns all.
......
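The peer API calls documented above can be exercised as shown below; the signatures come from the hunk headers, while the import path and surrounding Django setup are assumptions.

.. code:: python

    # Usage sketch for the peer API calls documented above; assumes a configured
    # edx-ora2 environment and the import path openassessment.assessment.api.peer.
    from openassessment.assessment.api import peer as peer_api


    def summarize_peer_grading(submission_uuid):
        # Only the assessments that counted toward the score, capped at three.
        scored = peer_api.get_assessments(submission_uuid, scored_only=True, limit=3)

        # Median score per rubric criterion, as described above.
        medians = peer_api.get_assessment_median_scores(submission_uuid)
        return scored, medians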
@@ -89,7 +89,15 @@ def get_score(submission_uuid, requirements):

}

-def create_assessment(submission_uuid, user_id, options_selected, rubric_dict, scored_at=None):
+def create_assessment(
+    submission_uuid,
+    user_id,
+    options_selected,
+    criterion_feedback,
+    overall_feedback,
+    rubric_dict,
+    scored_at=None
+):
"""
Create a self-assessment for a submission.

@@ -97,9 +105,14 @@ def create_assessment(submission_uuid, user_id, options_selected, rubric_dict, s

submission_uuid (str): The unique identifier for the submission being assessed.
user_id (str): The ID of the user creating the assessment. This must match the ID of the user who made the submission.
options_selected (dict): Mapping of rubric criterion names to option values selected.
+criterion_feedback (dict): Dictionary mapping criterion names to the
+    free-form text feedback the user gave for the criterion.
+    Since criterion feedback is optional, some criteria may not appear
+    in the dictionary.
+overall_feedback (unicode): Free-form text feedback on the submission overall.
rubric_dict (dict): Serialized Rubric model.

-Kwargs:
+Keyword Arguments:
scored_at (datetime): The timestamp of the assessment; defaults to the current time.

Returns:

@@ -143,15 +156,24 @@ def create_assessment(submission_uuid, user_id, options_selected, rubric_dict, s

rubric = rubric_from_dict(rubric_dict)

# Create the self assessment
-assessment = Assessment.create(rubric, user_id, submission_uuid, SELF_TYPE, scored_at=scored_at)
-AssessmentPart.create_from_option_names(assessment, options_selected)
+assessment = Assessment.create(
+    rubric,
+    user_id,
+    submission_uuid,
+    SELF_TYPE,
+    scored_at=scored_at,
+    feedback=overall_feedback
+)
+
+# This will raise an `InvalidRubricSelection` if the selected options do not match the rubric.
+AssessmentPart.create_from_option_names(assessment, options_selected, feedback=criterion_feedback)

_log_assessment(assessment, submission)
-except InvalidRubric:
-    msg = "Invalid rubric definition"
+except InvalidRubric as ex:
+    msg = "Invalid rubric definition: " + str(ex)
logger.warning(msg, exc_info=True)
raise SelfAssessmentRequestError(msg)
-except InvalidRubricSelection:
-    msg = "Selected options do not match the rubric"
+except InvalidRubricSelection as ex:
+    msg = "Selected options do not match the rubric: " + str(ex)
logger.warning(msg, exc_info=True)
raise SelfAssessmentRequestError(msg)
......
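Callers of the expanded signature now pass per-criterion feedback and overall feedback before the rubric. A usage sketch follows, with a made-up rubric that mirrors the test fixtures later in this commit; a configured edx-ora2 environment is assumed.

.. code:: python

    # Usage sketch of the expanded self-assessment signature shown above.
    # The rubric and option names are made up; a configured edx-ora2
    # environment is assumed.
    from openassessment.assessment.api.self import create_assessment


    def self_assess(submission_uuid, user_id):
        rubric = {
            'criteria': [
                {
                    'name': 'Quality',
                    'prompt': "How 'good' was it?",
                    'options': [
                        {'name': 'Okay', 'points': 1, 'description': 'It was okay I guess.'}
                    ],
                },
            ],
        }
        return create_assessment(
            submission_uuid,
            user_id,                                # must match the submitter
            {'Quality': 'Okay'},                    # options_selected
            {'Quality': 'About average overall.'},  # criterion_feedback (optional per criterion)
            u'Good work overall.',                  # overall_feedback
            rubric,
        )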
@@ -234,7 +234,7 @@ def validate_training_examples(rubric, examples):

errors.append(msg)

# Check for missing criteria
# Ignore options
all_example_criteria = set(options_selected.keys() + criteria_without_options)

for missing_criterion in set(criteria_options.keys()) - all_example_criteria:
msg = _(

@@ -398,7 +398,7 @@ def assess_training_example(submission_uuid, options_selected, update_workflow=T

submission_uuid (str): The UUID of the student's submission.
options_selected (dict): The options the student selected.

-Kwargs:
+Keyword Arguments:
update_workflow (bool): If true, mark the current item complete
if the student has assessed the example correctly.
......
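A corresponding usage sketch for the training call above; the import path matches the API docs section earlier in this commit, and a configured environment is assumed. The structure of the return value is not shown in this diff, so it is passed through untouched.

.. code:: python

    # Usage sketch for assess_training_example as documented above; assumes a
    # configured edx-ora2 environment.
    from openassessment.assessment.api import student_training


    def check_training_attempt(submission_uuid, options_selected):
        # Keyword argument exactly as documented above.
        return student_training.assess_training_example(
            submission_uuid, options_selected, update_workflow=True
        )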
@@ -253,10 +253,19 @@ class RubricIndex(object):

criterion.name: criterion
for criterion in criteria
}

-self._option_index = {
-    (option.criterion.name, option.name): option
-    for option in options
-}
+# Finds the set of all criteria which have options by traversing through the options, and adding all of
+# the options' associated criteria to an expanding set.
+criteria_with_options = set()
+option_index = {}
+for option in options:
+    option_index[(option.criterion.name, option.name)] = option
+    criteria_with_options.add(option.criterion)
+
+# Anything not in the above mentioned set is a zero option criteria, and we save it here for future reference.
+self._criteria_without_options = set(self._criteria_index.values()) - criteria_with_options
+self._option_index = option_index

# By convention, if multiple options in the same criterion have the
# same point value, we return the *first* option.

@@ -389,10 +398,7 @@

set of `Criterion`
"""
-return set(
-    criterion for criterion in self._criteria_index.values()
-    if criterion.options.count() == 0
-)
+return self._criteria_without_options
class Assessment(models.Model):

@@ -454,7 +460,7 @@ class Assessment(models.Model):

submission_uuid (str): The UUID of the submission being assessed.
score_type (unicode): The type of assessment (e.g. peer, self, or AI)

-Kwargs:
+Keyword Arguments:
feedback (unicode): Overall feedback on the submission.
scored_at (datetime): The time the assessment was created. Defaults to the current time.

@@ -639,7 +645,7 @@ class AssessmentPart(models.Model):

assessment (Assessment): The assessment we're adding parts to.
selected (dict): A dictionary mapping criterion names to option names.

-Kwargs:
+Keyword Arguments:
feedback (dict): A dictionary mapping criterion names to written
feedback for the criterion.

@@ -665,8 +671,8 @@

}

# Validate that we have selections for all criteria
-# This will raise an exception if we're missing any criteria
-cls._check_has_all_criteria(rubric_index, set(selected.keys() + feedback.keys()))
+# This will raise an exception if we're missing any selections/feedback required for criteria
+cls._check_all_criteria_assessed(rubric_index, selected.keys(), feedback.keys())

# Retrieve the criteria/option/feedback for criteria that have options.
# Since we're using the rubric's index, we'll get an `InvalidRubricSelection` error

@@ -713,7 +719,7 @@

assessment (Assessment): The assessment we're adding parts to.
selected (dict): A dictionary mapping criterion names to option point values.

-Kwargs:
+Keyword Arguments:
feedback (dict): A dictionary mapping criterion names to written
feedback for the criterion.

@@ -783,3 +789,35 @@

if len(missing_criteria) > 0:
msg = u"Missing selections for criteria: {missing}".format(missing=missing_criteria)
raise InvalidRubricSelection(msg)
@classmethod
def _check_all_criteria_assessed(cls, rubric_index, selected_criteria, criteria_feedback):
"""
Verify that we've selected options OR have feedback for all criteria in the rubric.
Verifies the predicate for all criteria (X) in the rubric:
has-an-option-selected(X) OR (has-zero-options(X) AND has-criterion-feedback(X))
Args:
rubric_index (RubricIndex): The index of the rubric's data.
selected_criteria (list): list of criterion names that have an option selected
criteria_feedback (list): list of criterion names that have feedback on them
Returns:
None
Raises:
InvalidRubricSelection
"""
missing_option_selections = rubric_index.find_missing_criteria(selected_criteria)
zero_option_criteria = set([c.name for c in rubric_index.find_criteria_without_options()])
zero_option_criteria_missing_feedback = zero_option_criteria - set(criteria_feedback)
optioned_criteria_missing_selection = missing_option_selections - zero_option_criteria
missing_criteria = zero_option_criteria_missing_feedback | optioned_criteria_missing_selection
if len(missing_criteria) > 0:
msg = u"Missing selections for criteria: {missing}".format(missing=', '.join(missing_criteria))
raise InvalidRubricSelection(msg)
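The predicate enforced above, has-an-option-selected(X) OR (has-zero-options(X) AND has-criterion-feedback(X)), reduces to a little set arithmetic. The standalone sketch below mirrors that arithmetic with plain sets instead of the rubric index.

.. code:: python

    # Standalone illustration of the set arithmetic in _check_all_criteria_assessed:
    # a criterion is satisfied if an option was selected, or if it has zero
    # options and received written feedback.
    def missing_criteria(all_criteria, zero_option_criteria, selected, feedback):
        missing_option_selections = set(all_criteria) - set(selected)
        zero_option_missing_feedback = set(zero_option_criteria) - set(feedback)
        optioned_missing_selection = missing_option_selections - set(zero_option_criteria)
        return zero_option_missing_feedback | optioned_missing_selection


    # 'voice' has no options but got feedback, so only 'accuracy' is reported.
    assert missing_criteria(
        all_criteria={'clarity', 'accuracy', 'voice'},
        zero_option_criteria={'voice'},
        selected={'clarity'},
        feedback={'voice'},
    ) == {'accuracy'}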
@@ -93,7 +93,7 @@ class TrainingExample(models.Model):

Create a cache key based on the content hash
for serialized versions of this model.

-Kwargs:
+Keyword Arguments:
attribute: The name of the attribute being serialized.
If not specified, assume that we are serializing the entire model.
......
{
"No Option Selected, Has Options, No Feedback": {
"has_option_selected": false,
"has_zero_options": false,
"has_feedback": false,
"expected_error": true
},
"No Option Selected, Has Options, Has Feedback": {
"has_option_selected": false,
"has_zero_options": false,
"has_feedback": true,
"expected_error": true
},
"No Option Selected, No Options, No Feedback": {
"has_option_selected": false,
"has_zero_options": true,
"has_feedback": false,
"expected_error": true
},
"No Option Selected, No Options, Has Feedback": {
"has_option_selected": false,
"has_zero_options": true,
"has_feedback": true,
"expected_error": false
},
"Has Option Selected, Has Options, No Feedback": {
"has_option_selected": true,
"has_zero_options": false,
"has_feedback": false,
"expected_error": false
},
"Has Option Selected, No Options, Has Feedback": {
"has_option_selected": true,
"has_zero_options": true,
"has_feedback": true,
"expected_error": true
},
"Has Option Selected, No Options, No Feedback": {
"has_option_selected": true,
"has_zero_options": true,
"has_feedback": false,
"expected_error": true
},
"Has Option Selected, Has Options, Has Feedback": {
"has_option_selected": true,
"has_zero_options": false,
"has_feedback": true,
"expected_error": false
}
}
\ No newline at end of file
@@ -2,13 +2,16 @@

"""
Tests for the assessment Django models.
"""
-import copy
+import copy, ddt

from openassessment.test_utils import CacheResetTest
from openassessment.assessment.serializers import rubric_from_dict
from openassessment.assessment.models import Assessment, AssessmentPart, InvalidRubricSelection
from .constants import RUBRIC
+from openassessment.assessment.api.self import create_assessment
+from submissions.api import create_submission
+from openassessment.assessment.errors import SelfAssessmentRequestError

+@ddt.ddt
class AssessmentTest(CacheResetTest):
"""
Tests for the `Assessment` and `AssessmentPart` models.

@@ -148,3 +151,65 @@ class AssessmentTest(CacheResetTest):

criterion['options'] = []
return rubric_from_dict(rubric_dict)
@ddt.file_data('data/models_check_criteria_assessed.json')
def test_check_all_criteria_assessed(self, data):
student_item = {
'student_id': u'𝖙𝖊𝖘𝖙 𝖚𝖘𝖊𝖗',
'item_id': 'test_item',
'course_id': 'test_course',
'item_type': 'test_type'
}
submission = create_submission(student_item, "Test answer")
rubric, options_selected, criterion_feedback = self._create_data_structures_with_criterion_properties(
has_option_selected=data['has_option_selected'],
has_zero_options=data['has_zero_options'],
has_feedback=data['has_feedback']
)
error = False
try:
create_assessment(
submission['uuid'], student_item['student_id'], options_selected,
criterion_feedback, "overall feedback", rubric
)
except SelfAssessmentRequestError:
error = True
self.assertTrue(data['expected_error'] == error)
def _create_data_structures_with_criterion_properties(
self,
has_option_selected=True,
has_zero_options=True,
has_feedback=True
):
"""
Generates a dummy set of criterion definition structures that will allow us to specify a specific combination
of criterion attributes for a test case.
"""
options = []
if not has_zero_options:
options = [{
"name": "Okay",
"points": 1,
"description": "It was okay I guess."
}]
rubric = {
'criteria': [
{
"name": "Quality",
"prompt": "How 'good' was it?",
"options": options
}
]
}
options_selected = {}
if has_option_selected:
options_selected['Quality'] = 'Okay'
criterion_feedback = {}
if has_feedback:
criterion_feedback['Quality'] = "This was an assignment of average quality."
return rubric, options_selected, criterion_feedback
\ No newline at end of file
...@@ -51,6 +51,16 @@ class TestSelfApi(CacheResetTest): ...@@ -51,6 +51,16 @@ class TestSelfApi(CacheResetTest):
"accuracy": "very accurate", "accuracy": "very accurate",
} }
CRITERION_FEEDBACK = {
"clarity": "Like a morning in the restful city of San Fransisco, the piece was indescribable, beautiful, and too foggy to properly comprehend.",
"accuracy": "Like my sister's cutting comments about my weight, I may not have enjoyed the piece, but I cannot fault it for its factual nature."
}
OVERALL_FEEDBACK = (
u"Unfortunately, the nature of being is too complex to comment, judge, or discern any one"
u"arbitrary set of things over another."
)
def test_create_assessment(self): def test_create_assessment(self):
# Initially, there should be no submission or self assessment # Initially, there should be no submission or self assessment
self.assertEqual(get_assessment("5"), None) self.assertEqual(get_assessment("5"), None)
...@@ -66,7 +76,7 @@ class TestSelfApi(CacheResetTest): ...@@ -66,7 +76,7 @@ class TestSelfApi(CacheResetTest):
# Create a self-assessment for the submission # Create a self-assessment for the submission
assessment = create_assessment( assessment = create_assessment(
submission['uuid'], u'𝖙𝖊𝖘𝖙 𝖚𝖘𝖊𝖗', submission['uuid'], u'𝖙𝖊𝖘𝖙 𝖚𝖘𝖊𝖗',
self.OPTIONS_SELECTED, self.RUBRIC, self.OPTIONS_SELECTED, self.CRITERION_FEEDBACK, self.OVERALL_FEEDBACK, self.RUBRIC,
scored_at=datetime.datetime(2014, 4, 1).replace(tzinfo=pytz.utc) scored_at=datetime.datetime(2014, 4, 1).replace(tzinfo=pytz.utc)
) )
...@@ -82,7 +92,7 @@ class TestSelfApi(CacheResetTest): ...@@ -82,7 +92,7 @@ class TestSelfApi(CacheResetTest):
self.assertEqual(assessment['submission_uuid'], submission['uuid']) self.assertEqual(assessment['submission_uuid'], submission['uuid'])
self.assertEqual(assessment['points_earned'], 8) self.assertEqual(assessment['points_earned'], 8)
self.assertEqual(assessment['points_possible'], 10) self.assertEqual(assessment['points_possible'], 10)
self.assertEqual(assessment['feedback'], u'') self.assertEqual(assessment['feedback'], u'' + self.OVERALL_FEEDBACK)
self.assertEqual(assessment['score_type'], u'SE') self.assertEqual(assessment['score_type'], u'SE')
def test_create_assessment_no_submission(self): def test_create_assessment_no_submission(self):
...@@ -90,7 +100,7 @@ class TestSelfApi(CacheResetTest): ...@@ -90,7 +100,7 @@ class TestSelfApi(CacheResetTest):
with self.assertRaises(SelfAssessmentRequestError): with self.assertRaises(SelfAssessmentRequestError):
create_assessment( create_assessment(
'invalid_submission_uuid', u'𝖙𝖊𝖘𝖙 𝖚𝖘𝖊𝖗', 'invalid_submission_uuid', u'𝖙𝖊𝖘𝖙 𝖚𝖘𝖊𝖗',
self.OPTIONS_SELECTED, self.RUBRIC, self.OPTIONS_SELECTED, self.CRITERION_FEEDBACK, self.OVERALL_FEEDBACK, self.RUBRIC,
scored_at=datetime.datetime(2014, 4, 1).replace(tzinfo=pytz.utc) scored_at=datetime.datetime(2014, 4, 1).replace(tzinfo=pytz.utc)
) )
...@@ -102,7 +112,22 @@ class TestSelfApi(CacheResetTest): ...@@ -102,7 +112,22 @@ class TestSelfApi(CacheResetTest):
with self.assertRaises(SelfAssessmentRequestError): with self.assertRaises(SelfAssessmentRequestError):
create_assessment( create_assessment(
'invalid_submission_uuid', u'another user', 'invalid_submission_uuid', u'another user',
self.OPTIONS_SELECTED, self.RUBRIC, self.OPTIONS_SELECTED, self.CRITERION_FEEDBACK, self.OVERALL_FEEDBACK, self.RUBRIC,
scored_at=datetime.datetime(2014, 4, 1).replace(tzinfo=pytz.utc)
)
def test_create_assessment_invalid_criterion_feedback(self):
# Create a submission
submission = create_submission(self.STUDENT_ITEM, "Test answer")
# Mutate the criterion feedback to not include all the appropriate criteria.
criterion_feedback = {"clarify": "not", "accurate": "sure"}
# Attempt to create a self-assessment with criterion_feedback that do not match the rubric
with self.assertRaises(SelfAssessmentRequestError):
create_assessment(
submission['uuid'], u'𝖙𝖊𝖘𝖙 𝖚𝖘𝖊𝖗',
self.OPTIONS_SELECTED, criterion_feedback, self.OVERALL_FEEDBACK, self.RUBRIC,
scored_at=datetime.datetime(2014, 4, 1).replace(tzinfo=pytz.utc) scored_at=datetime.datetime(2014, 4, 1).replace(tzinfo=pytz.utc)
) )
...@@ -118,7 +143,7 @@ class TestSelfApi(CacheResetTest): ...@@ -118,7 +143,7 @@ class TestSelfApi(CacheResetTest):
with self.assertRaises(SelfAssessmentRequestError): with self.assertRaises(SelfAssessmentRequestError):
create_assessment( create_assessment(
submission['uuid'], u'𝖙𝖊𝖘𝖙 𝖚𝖘𝖊𝖗', submission['uuid'], u'𝖙𝖊𝖘𝖙 𝖚𝖘𝖊𝖗',
options, self.RUBRIC, options, self.CRITERION_FEEDBACK, self.OVERALL_FEEDBACK, self.RUBRIC,
scored_at=datetime.datetime(2014, 4, 1).replace(tzinfo=pytz.utc) scored_at=datetime.datetime(2014, 4, 1).replace(tzinfo=pytz.utc)
) )
...@@ -134,7 +159,7 @@ class TestSelfApi(CacheResetTest): ...@@ -134,7 +159,7 @@ class TestSelfApi(CacheResetTest):
with self.assertRaises(SelfAssessmentRequestError): with self.assertRaises(SelfAssessmentRequestError):
create_assessment( create_assessment(
submission['uuid'], u'𝖙𝖊𝖘𝖙 𝖚𝖘𝖊𝖗', submission['uuid'], u'𝖙𝖊𝖘𝖙 𝖚𝖘𝖊𝖗',
options, self.RUBRIC, options, self.CRITERION_FEEDBACK, self.OVERALL_FEEDBACK, self.RUBRIC,
scored_at=datetime.datetime(2014, 4, 1).replace(tzinfo=pytz.utc) scored_at=datetime.datetime(2014, 4, 1).replace(tzinfo=pytz.utc)
) )
...@@ -150,7 +175,7 @@ class TestSelfApi(CacheResetTest): ...@@ -150,7 +175,7 @@ class TestSelfApi(CacheResetTest):
with self.assertRaises(SelfAssessmentRequestError): with self.assertRaises(SelfAssessmentRequestError):
create_assessment( create_assessment(
submission['uuid'], u'𝖙𝖊𝖘𝖙 𝖚𝖘𝖊𝖗', submission['uuid'], u'𝖙𝖊𝖘𝖙 𝖚𝖘𝖊𝖗',
options, self.RUBRIC, options, self.CRITERION_FEEDBACK, self.OVERALL_FEEDBACK, self.RUBRIC,
scored_at=datetime.datetime(2014, 4, 1).replace(tzinfo=pytz.utc) scored_at=datetime.datetime(2014, 4, 1).replace(tzinfo=pytz.utc)
) )
...@@ -165,7 +190,7 @@ class TestSelfApi(CacheResetTest): ...@@ -165,7 +190,7 @@ class TestSelfApi(CacheResetTest):
# Do not override the scored_at timestamp, so it should be set to the current time # Do not override the scored_at timestamp, so it should be set to the current time
assessment = create_assessment( assessment = create_assessment(
submission['uuid'], u'𝖙𝖊𝖘𝖙 𝖚𝖘𝖊𝖗', submission['uuid'], u'𝖙𝖊𝖘𝖙 𝖚𝖘𝖊𝖗',
self.OPTIONS_SELECTED, self.RUBRIC, self.OPTIONS_SELECTED, self.CRITERION_FEEDBACK, self.OVERALL_FEEDBACK, self.RUBRIC,
) )
# Retrieve the self-assessment # Retrieve the self-assessment
...@@ -183,14 +208,14 @@ class TestSelfApi(CacheResetTest): ...@@ -183,14 +208,14 @@ class TestSelfApi(CacheResetTest):
# Self assess once # Self assess once
assessment = create_assessment( assessment = create_assessment(
submission['uuid'], u'𝖙𝖊𝖘𝖙 𝖚𝖘𝖊𝖗', submission['uuid'], u'𝖙𝖊𝖘𝖙 𝖚𝖘𝖊𝖗',
self.OPTIONS_SELECTED, self.RUBRIC, self.OPTIONS_SELECTED, self.CRITERION_FEEDBACK, self.OVERALL_FEEDBACK, self.RUBRIC,
) )
# Attempt to self-assess again, which should raise an exception # Attempt to self-assess again, which should raise an exception
with self.assertRaises(SelfAssessmentRequestError): with self.assertRaises(SelfAssessmentRequestError):
create_assessment( create_assessment(
submission['uuid'], u'𝖙𝖊𝖘𝖙 𝖚𝖘𝖊𝖗', submission['uuid'], u'𝖙𝖊𝖘𝖙 𝖚𝖘𝖊𝖗',
self.OPTIONS_SELECTED, self.RUBRIC, self.OPTIONS_SELECTED, self.CRITERION_FEEDBACK, self.OVERALL_FEEDBACK, self.RUBRIC,
) )
# Expect that we still have the original assessment # Expect that we still have the original assessment
...@@ -213,17 +238,20 @@ class TestSelfApi(CacheResetTest): ...@@ -213,17 +238,20 @@ class TestSelfApi(CacheResetTest):
"options": [] "options": []
}) })
criterion_feedback = copy.deepcopy(self.CRITERION_FEEDBACK)
criterion_feedback['feedback only'] = "This is the feedback for the Zero Option Criterion."
# Create a self-assessment for the submission # Create a self-assessment for the submission
assessment = create_assessment( assessment = create_assessment(
submission['uuid'], u'𝖙𝖊𝖘𝖙 𝖚𝖘𝖊𝖗', submission['uuid'], u'𝖙𝖊𝖘𝖙 𝖚𝖘𝖊𝖗',
self.OPTIONS_SELECTED, rubric, self.OPTIONS_SELECTED, criterion_feedback, self.OVERALL_FEEDBACK, rubric,
scored_at=datetime.datetime(2014, 4, 1).replace(tzinfo=pytz.utc) scored_at=datetime.datetime(2014, 4, 1).replace(tzinfo=pytz.utc)
) )
# The self-assessment should have set the feedback for # The self-assessment should have set the feedback for
# the criterion with no options to an empty string # the criterion with no options to an empty string
self.assertEqual(assessment["parts"][2]["option"], None) self.assertEqual(assessment["parts"][2]["option"], None)
self.assertEqual(assessment["parts"][2]["feedback"], u"") self.assertEqual(assessment["parts"][2]["feedback"], u"This is the feedback for the Zero Option Criterion.")
def test_create_assessment_all_criteria_have_zero_options(self): def test_create_assessment_all_criteria_have_zero_options(self):
# Create a submission to self-assess # Create a submission to self-assess
...@@ -237,14 +265,25 @@ class TestSelfApi(CacheResetTest): ...@@ -237,14 +265,25 @@ class TestSelfApi(CacheResetTest):
# Create a self-assessment for the submission # Create a self-assessment for the submission
# We don't select any options, since none of the criteria have options # We don't select any options, since none of the criteria have options
options_selected = {} options_selected = {}
# However, because they don't have options, they need to have criterion feedback.
criterion_feedback = {
'clarity': 'I thought it was about as accurate as Scrubs is to the medical profession.',
'accuracy': 'I thought it was about as accurate as Scrubs is to the medical profession.'
}
overall_feedback = ""
assessment = create_assessment( assessment = create_assessment(
submission['uuid'], u'𝖙𝖊𝖘𝖙 𝖚𝖘𝖊𝖗', submission['uuid'], u'𝖙𝖊𝖘𝖙 𝖚𝖘𝖊𝖗',
options_selected, rubric, options_selected, criterion_feedback, overall_feedback,
scored_at=datetime.datetime(2014, 4, 1).replace(tzinfo=pytz.utc) rubric, scored_at=datetime.datetime(2014, 4, 1).replace(tzinfo=pytz.utc)
) )
# The self-assessment should have set the feedback for # The self-assessment should have set the feedback for
# all criteria to an empty string. # all criteria to an empty string.
for part in assessment["parts"]: for part in assessment["parts"]:
self.assertEqual(part["option"], None) self.assertEqual(part["option"], None)
self.assertEqual(part["feedback"], u"") self.assertEqual(
part["feedback"], u'I thought it was about as accurate as Scrubs is to the medical profession.'
)
@@ -66,7 +66,7 @@ class CsvWriter(object):

output_streams (dictionary): Provide the file handles
to write CSV data to.

-Kwargs:
+Keyword Arguments:
progress_callback (callable): Callable that accepts
no arguments. Called once per submission loaded
from the database.
......
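The `progress_callback` keyword documented above is a zero-argument callable invoked once per submission loaded from the database. A sketch of wiring it to a simple counter; the `CsvWriter` construction is shown commented out because the rest of its interface is not part of this diff.

.. code:: python

    # Sketch of the progress_callback contract described above: a callable with
    # no arguments, invoked once per submission loaded from the database.
    class ProgressCounter(object):
        def __init__(self):
            self.count = 0

        def __call__(self):
            self.count += 1


    progress = ProgressCounter()
    # The exact CsvWriter construction is not shown in this diff; illustratively:
    # writer = CsvWriter(output_streams, progress_callback=progress)
    # After the export runs, progress.count equals the number of submissions seen.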
@@ -107,7 +107,7 @@ class Command(BaseCommand):

print "-- Creating self assessment"
self_api.create_assessment(
submission_uuid, student_item['student_id'],
-options_selected, rubric
+options_selected, {}, " ".join(loremipsum.get_paragraphs(2)), rubric
)

@property
......
...@@ -146,27 +146,42 @@ ...@@ -146,27 +146,42 @@
{% endif %} {% endif %}
{% endfor %} {% endfor %}
{% if criterion.feedback %} {% if criterion.peer_feedback or criterion.self_feedback %}
<li class="answer--feedback ui-toggle-visibility {% if criterion.options %}is--collapsed{% endif %}"> <li class="answer--feedback ui-toggle-visibility {% if criterion.options %}is--collapsed{% endif %}">
{% if criterion.options %} {% if criterion.options %}
<h5 class="answer--feedback__title ui-toggle-visibility__control"> <h5 class="answer--feedback__title ui-toggle-visibility__control">
<i class="ico icon-caret-right"></i> <i class="ico icon-caret-right"></i>
<span class="answer--feedback__title__copy">{% trans "Additional Comments" %} ({{ criterion.feedback|length }})</span> {% if criterion.self_feedback %}
<span class="answer--feedback__title__copy">{% trans "Additional Comments" %} ({{ criterion.peer_feedback|length|add:'1' }})</span>
{% else %}
<span class="answer--feedback__title__copy">{% trans "Additional Comments" %} ({{ criterion.peer_feedback|length }})</span>
{% endif %}
</h5> </h5>
{% endif %} {% endif %}
<ul class="answer--feedback__content {% if criterion.options %}ui-toggle-visibility__content{% endif %}"> <ul class="answer--feedback__content {% if criterion.options %}ui-toggle-visibility__content{% endif %}">
{% for feedback in criterion.feedback %} {% for feedback in criterion.peer_feedback %}
<li class="feedback feedback--{{ forloop.counter }}"> <li class="feedback feedback--{{ forloop.counter }}">
<h6 class="feedback__source"> <h6 class="feedback__source">
{% trans "Peer" %} {{ forloop.counter }} {% trans "Peer" %} {{ forloop.counter }}
</h6> </h6>
<div class="feedback__value"> <div class="feedback__value">
{{ feedback }} {{ feedback }}
</div> </div>
</li> </li>
{% endfor %} {% endfor %}
{% if criterion.self_feedback %}
<li class="feedback feedback--{{ forloop.counter }}">
<h6 class="feedback__source">
{% trans "Your Assessment" %}
</h6>
<div class="feedback__value">
{{ criterion.self_feedback }}
</div>
</li>
{% endif %}
</ul> </ul>
</li> </li>
{% endif %} {% endif %}
...@@ -175,7 +190,7 @@ ...@@ -175,7 +190,7 @@
</li> </li>
{% endwith %} {% endwith %}
{% endfor %} {% endfor %}
{% if peer_assessments %} {% if peer_assessments or self_assessment.feedback %}
<li class="question question--feedback ui-toggle-visibility"> <li class="question question--feedback ui-toggle-visibility">
<h4 class="question__title ui-toggle-visibility__control"> <h4 class="question__title ui-toggle-visibility__control">
<i class="ico icon-caret-right"></i> <i class="ico icon-caret-right"></i>
...@@ -204,6 +219,23 @@ ...@@ -204,6 +219,23 @@
{% endif %} {% endif %}
{% endwith %} {% endwith %}
{% endfor %} {% endfor %}
{% if self_assessment.feedback %}
<li class="answer self-evaluation--0" id="question--feedback__answer-0">
<h5 class="answer__title">
<span class="answer__source">
<span class="label sr">{% trans "Self assessment" %}: </span>
<span class="value">{% trans "Self assessment" %}</span>
</span>
</h5>
<div class="answer__value">
<h6 class="label sr">{% trans "Your assessment" %}: </h6>
<div class="value">
<p>{{ self_assessment.feedback }}</p>
</div>
</div>
</li>
{% endif %}
</ul> </ul>
</li> </li>
{% endif %} {% endif %}
......
{% spaceless %}
{% load i18n %}
<fieldset class="assessment__fields">
<ol class="list list--fields assessment__rubric">
{% for criterion in rubric_criteria %}
<li
class="field field--radio is--required assessment__rubric__question ui-toggle-visibility {% if criterion.options %}has--options{% endif %}"
id="assessment__rubric__question--{{ criterion.order_num }}"
>
<h4 class="question__title ui-toggle-visibility__control">
<i class="ico icon-caret-right"></i>
<span class="ui-toggle-visibility__control__copy question__title__copy">{{ criterion.prompt }}</span>
<span class="label--required sr">* ({% trans "Required" %})</span>
</h4>
<div class="ui-toggle-visibility__content">
<ol class="question__answers">
{% for option in criterion.options %}
<li class="answer">
<div class="wrapper--input">
<input type="radio"
name="{{ criterion.name }}"
id="assessment__rubric__question--{{ criterion.order_num }}__{{ option.order_num }}"
class="answer__value"
value="{{ option.name }}" />
<label for="assessment__rubric__question--{{ criterion.order_num }}__{{ option.order_num }}"
class="answer__label"
>{{ option.label }}</label>
</div>
<div class="wrapper--metadata">
<span class="answer__tip">{{ option.explanation }}</span>
<span class="answer__points">{{ option.points }} <span class="answer__points__label">{% trans "points" %}</span></span>
</div>
</li>
{% endfor %}
{% if criterion.feedback == 'optional' or criterion.feedback == 'required' %}
<li class="answer--feedback">
<div class="wrapper--input">
<label for="assessment__rubric__question--{{ criterion.order_num }}__feedback" class="answer__label">{% trans "Comments" %}</label>
<textarea
id="assessment__rubric__question--{{ criterion.order_num }}__feedback"
class="answer__value"
value="{{ criterion.name }}"
name="{{ criterion.name }}"
maxlength="300"
{% if criterion.feedback == 'required' %}required{% endif %}
>
</textarea>
</div>
</li>
{% endif %}
</ol>
</div>
</li>
{% endfor %}
<li class="wrapper--input field field--textarea assessment__rubric__question assessment__rubric__question--feedback" id="assessment__rubric__question--feedback">
<label class="question__title" for="assessment__rubric__question--feedback__value">
<span class="question__title__copy">{{ rubric_feedback_prompt }}</span>
</label>
<div class="wrapper--input">
<textarea
id="assessment__rubric__question--feedback__value"
placeholder="{% trans "I noticed that this response..." %}"
maxlength="500"
>
</textarea>
</div>
</li>
</ol>
</fieldset>
{% endspaceless %}
\ No newline at end of file
@@ -72,77 +72,7 @@

</div>

<form id="peer-assessment--001__assessment" class="peer-assessment__assessment" method="post">
-<fieldset class="assessment__fields">
+{% include "openassessmentblock/oa_rubric.html" %}
<ol class="list list--fields assessment__rubric">
{% for criterion in rubric_criteria %}
<li
class="field field--radio is--required assessment__rubric__question ui-toggle-visibility {% if criterion.options %}has--options{% endif %}"
id="assessment__rubric__question--{{ criterion.order_num }}"
>
<h4 class="question__title ui-toggle-visibility__control">
<i class="ico icon-caret-right"></i>
<span class="ui-toggle-visibility__control__copy question__title__copy">{{ criterion.prompt }}</span>
<span class="label--required sr">* ({% trans "Required" %})</span>
</h4>
<div class="ui-toggle-visibility__content">
<ol class="question__answers">
{% for option in criterion.options %}
<li class="answer">
<div class="wrapper--input">
<input type="radio"
name="{{ criterion.name }}"
id="assessment__rubric__question--{{ criterion.order_num }}__{{ option.order_num }}"
class="answer__value"
value="{{ option.name }}" />
<label for="assessment__rubric__question--{{ criterion.order_num }}__{{ option.order_num }}"
class="answer__label"
>{{ option.label }}</label>
</div>
<div class="wrapper--metadata">
<span class="answer__tip">{{ option.explanation }}</span>
<span class="answer__points">{{ option.points }} <span class="answer__points__label">{% trans "points" %}</span></span>
</div>
</li>
{% endfor %}
{% if criterion.feedback == 'optional' or criterion.feedback == 'required' %}
<li class="answer--feedback">
<div class="wrapper--input">
<label for="assessment__rubric__question--{{ criterion.order_num }}__feedback" class="answer__label">{% trans "Comments" %}</label>
<textarea
id="assessment__rubric__question--{{ criterion.order_num }}__feedback"
class="answer__value"
value="{{ criterion.name }}"
name="{{ criterion.name }}"
maxlength="300"
{% if criterion.feedback == 'required' %}required{% endif %}
>
</textarea>
</div>
</li>
{% endif %}
</ol>
</div>
</li>
{% endfor %}
<li class="wrapper--input field field--textarea assessment__rubric__question assessment__rubric__question--feedback" id="assessment__rubric__question--feedback">
<label class="question__title" for="assessment__rubric__question--feedback__value">
<span class="question__title__copy">{{ rubric_feedback_prompt }}</span>
</label>
<div class="wrapper--input">
<textarea
id="assessment__rubric__question--feedback__value"
placeholder="{% trans "I noticed that this response..." %}"
maxlength="500"
>
</textarea>
</div>
</li>
</ol>
</fieldset>
</form>
</article>
</li>
......
@@ -72,7 +72,7 @@

<button type="submit" id="file__upload" class="action action--upload is--disabled">{% trans "Upload your image" %}</button>
</li>
<li>
-<div class="submission__answer__display__image">
+<div class="submission__answer__display__image is--hidden">
<img id="submission__answer__image"
class="submission--image"
{% if file_url %}
......
@@ -59,46 +59,7 @@

</article>

<form id="self-assessment--001__assessment" class="self-assessment__assessment" method="post">
-<fieldset class="assessment__fields">
+{% include "openassessmentblock/oa_rubric.html" %}
<ol class="list list--fields assessment__rubric">
{% for criterion in rubric_criteria %}
{% if criterion.options %}
<li
class="field field--radio is--required assessment__rubric__question ui-toggle-visibility has--options"
id="assessment__rubric__question--{{ criterion.order_num }}"
>
<h4 class="question__title ui-toggle-visibility__control">
<i class="ico icon-caret-right"></i>
<span class="question__title__copy">{{ criterion.prompt }}</span>
<span class="label--required sr">* ({% trans "Required" %})</span>
</h4>
<div class="ui-toggle-visibility__content">
<ol class="question__answers">
{% for option in criterion.options %}
<li class="answer">
<div class="wrapper--input">
<input type="radio"
name="{{ criterion.name }}"
id="assessment__rubric__question--{{ criterion.order_num }}__{{ option.order_num }}"
class="answer__value"
value="{{ option.name }}" />
<label for="assessment__rubric__question--{{ criterion.order_num }}__{{ option.order_num }}"
class="answer__label">{{ option.label }}</label>
</div>
<div class="wrapper--metadata">
<span class="answer__tip">{{ option.explanation }}</span>
<span class="answer__points">{{option.points}} <span class="answer__points__label">{% trans "points" %}</span></span>
</div>
</li>
{% endfor %}
</ol>
</div>
</li>
{% endif %}
{% endfor %}
</ol>
</fieldset>
</form>
</div>
......
@@ -34,7 +34,7 @@ def create_workflow(submission_uuid, steps, on_init_params=None):

steps (list): List of steps that are part of the workflow, in the order
that the user must complete them. Example: `["peer", "self"]`

-Kwargs:
+Keyword Arguments:
on_init_params (dict): The parameters to pass to each assessment module
on init. Keys are the assessment step names.

@@ -279,7 +279,7 @@ def get_status_counts(course_id, item_id, steps):

"""
Count how many workflows have each status, for a given item in a course.

-Kwargs:
+Keyword Arguments:
course_id (unicode): The ID of the course.
item_id (unicode): The ID of the item in the course.
steps (list): A list of assessment steps for this problem.
......
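The two docstrings above describe workflow creation and status counting. A hedged usage sketch, assuming the functions are exposed from `openassessment.workflow.api` and that a Django environment is configured (both assumptions):

.. code:: python

    # Usage sketch for the workflow API calls documented above; the import path
    # and surrounding setup are assumptions.
    from openassessment.workflow import api as workflow_api


    def start_and_count(submission_uuid, course_id, item_id):
        # Create the workflow; on_init_params is keyed by assessment step name.
        workflow_api.create_workflow(
            submission_uuid, ["peer", "self"],
            on_init_params={"peer": {}, "self": {}},
        )

        # Tally how many workflows are in each status for this item.
        return workflow_api.get_status_counts(course_id, item_id, ["peer", "self"])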
@@ -441,7 +441,7 @@ def update_workflow_async(sender, **kwargs):

Args:
sender (object): Not used

-Kwargs:
+Keyword Arguments:
submission_uuid (str): The UUID of the submission associated
with the workflow being updated.
......
@@ -364,7 +364,7 @@ class TestAssessmentWorkflowApi(CacheResetTest):

item_id (unicode): Item ID for the submission
status (unicode): One of acceptable status values (e.g. "peer", "self", "waiting", "done")

-Kwargs:
+Keyword Arguments:
answer (unicode): Submission answer.
steps (list): A list of steps to create the workflow with. If not
specified the default steps are "peer", "self".
......
@@ -75,6 +75,27 @@ def create_rubric_dict(prompt, criteria):

}
def clean_criterion_feedback(rubric_criteria, criterion_feedback):
"""
Remove per-criterion feedback for criteria with feedback disabled
in the rubric.
Args:
rubric_criteria (list): The rubric criteria from the problem definition.
criterion_feedback (dict): Mapping of criterion names to feedback text.
Returns:
dict
"""
return {
criterion['name']: criterion_feedback[criterion['name']]
for criterion in rubric_criteria
if criterion['name'] in criterion_feedback
and criterion.get('feedback', 'disabled') in ['optional', 'required']
}
def make_django_template_key(key): def make_django_template_key(key):
""" """
Django templates access dictionary items using dot notation, Django templates access dictionary items using dot notation,
......
...@@ -34,7 +34,7 @@ class GradeMixin(object): ...@@ -34,7 +34,7 @@ class GradeMixin(object):
Args: Args:
data: Not used. data: Not used.
-Kwargs:
+Keyword Arguments:
suffix: Not used. suffix: Not used.
Returns: Returns:
...@@ -135,7 +135,7 @@ class GradeMixin(object): ...@@ -135,7 +135,7 @@ class GradeMixin(object):
'peer_assessments': peer_assessments, 'peer_assessments': peer_assessments,
'self_assessment': self_assessment, 'self_assessment': self_assessment,
'example_based_assessment': example_based_assessment, 'example_based_assessment': example_based_assessment,
-'rubric_criteria': self._rubric_criteria_grade_context(peer_assessments),
+'rubric_criteria': self._rubric_criteria_grade_context(peer_assessments, self_assessment),
'has_submitted_feedback': has_submitted_feedback, 'has_submitted_feedback': has_submitted_feedback,
'allow_file_upload': self.allow_file_upload, 'allow_file_upload': self.allow_file_upload,
'file_url': self.get_download_url_from_submission(student_submission) 'file_url': self.get_download_url_from_submission(student_submission)
...@@ -196,7 +196,7 @@ class GradeMixin(object): ...@@ -196,7 +196,7 @@ class GradeMixin(object):
data (dict): Can provide keys 'feedback_text' (unicode) and data (dict): Can provide keys 'feedback_text' (unicode) and
'feedback_options' (list of unicode). 'feedback_options' (list of unicode).
-Kwargs:
+Keyword Arguments:
suffix (str): Unused suffix (str): Unused
Returns: Returns:
...@@ -226,7 +226,7 @@ class GradeMixin(object): ...@@ -226,7 +226,7 @@ class GradeMixin(object):
) )
return {'success': True, 'msg': _(u"Feedback saved.")} return {'success': True, 'msg': _(u"Feedback saved.")}
-def _rubric_criteria_grade_context(self, peer_assessments):
+def _rubric_criteria_grade_context(self, peer_assessments, self_assessment):
""" """
Sanitize the rubric criteria into a format that can be passed Sanitize the rubric criteria into a format that can be passed
into the grade complete Django template. into the grade complete Django template.
...@@ -237,6 +237,7 @@ class GradeMixin(object): ...@@ -237,6 +237,7 @@ class GradeMixin(object):
Args: Args:
peer_assessments (list of dict): Serialized assessment models from the peer API. peer_assessments (list of dict): Serialized assessment models from the peer API.
self_assessment (dict): Serialized assessment model from the self API
Returns: Returns:
list of criterion dictionaries list of criterion dictionaries
...@@ -258,17 +259,25 @@ class GradeMixin(object): ...@@ -258,17 +259,25 @@ class GradeMixin(object):
] ]
""" """
criteria = copy.deepcopy(self.rubric_criteria_with_labels) criteria = copy.deepcopy(self.rubric_criteria_with_labels)
-criteria_feedback = defaultdict(list)
+peer_criteria_feedback = defaultdict(list)
self_criteria_feedback = {}
for assessment in peer_assessments: for assessment in peer_assessments:
for part in assessment['parts']: for part in assessment['parts']:
if part['feedback']: if part['feedback']:
part_criterion_name = part['criterion']['name'] part_criterion_name = part['criterion']['name']
-criteria_feedback[part_criterion_name].append(part['feedback'])
+peer_criteria_feedback[part_criterion_name].append(part['feedback'])
if self_assessment:
for part in self_assessment['parts']:
if part['feedback']:
part_criterion_name = part['criterion']['name']
self_criteria_feedback[part_criterion_name] = part['feedback']
for criterion in criteria: for criterion in criteria:
criterion_name = criterion['name'] criterion_name = criterion['name']
-criterion['feedback'] = criteria_feedback[criterion_name]
+criterion['peer_feedback'] = peer_criteria_feedback[criterion_name]
criterion['self_feedback'] = self_criteria_feedback.get(criterion_name)
return criteria return criteria
......
...@@ -26,7 +26,7 @@ class MessageMixin(object): ...@@ -26,7 +26,7 @@ class MessageMixin(object):
Args: Args:
data: Not used. data: Not used.
-Kwargs:
+Keyword Arguments:
suffix: Not used. suffix: Not used.
Returns: Returns:
......
...@@ -273,8 +273,6 @@ class OpenAssessmentBlock( ...@@ -273,8 +273,6 @@ class OpenAssessmentBlock(
else: else:
return False return False
@property @property
def in_studio_preview(self): def in_studio_preview(self):
""" """
...@@ -477,7 +475,7 @@ class OpenAssessmentBlock( ...@@ -477,7 +475,7 @@ class OpenAssessmentBlock(
the peer grading step AFTER the submission deadline has passed. the peer grading step AFTER the submission deadline has passed.
This may not be necessary when we implement a grading interface specifically for course staff. This may not be necessary when we implement a grading interface specifically for course staff.
-Kwargs:
+Keyword Arguments:
step (str): The step in the workflow to check. Options are: step (str): The step in the workflow to check. Options are:
None: check whether the problem as a whole is open. None: check whether the problem as a whole is open.
"submission": check whether the submission section is open. "submission": check whether the submission section is open.
...@@ -587,7 +585,7 @@ class OpenAssessmentBlock( ...@@ -587,7 +585,7 @@ class OpenAssessmentBlock(
""" """
Check if a question has been released. Check if a question has been released.
-Kwargs:
+Keyword Arguments:
step (str): The step in the workflow to check. step (str): The step in the workflow to check.
None: check whether the problem as a whole is open. None: check whether the problem as a whole is open.
"submission": check whether the submission section is open. "submission": check whether the submission section is open.
...@@ -696,4 +694,3 @@ class OpenAssessmentBlock( ...@@ -696,4 +694,3 @@ class OpenAssessmentBlock(
return key.to_deprecated_string() return key.to_deprecated_string()
else: else:
return unicode(key) return unicode(key)
...@@ -11,9 +11,9 @@ from openassessment.assessment.errors import ( ...@@ -11,9 +11,9 @@ from openassessment.assessment.errors import (
from openassessment.workflow.errors import AssessmentWorkflowError from openassessment.workflow.errors import AssessmentWorkflowError
from openassessment.fileupload import api as file_upload_api from openassessment.fileupload import api as file_upload_api
from openassessment.fileupload.api import FileUploadError from openassessment.fileupload.api import FileUploadError
from .data_conversion import create_rubric_dict from .data_conversion import create_rubric_dict
from .resolve_dates import DISTANT_FUTURE from .resolve_dates import DISTANT_FUTURE
from .data_conversion import create_rubric_dict, clean_criterion_feedback
logger = logging.getLogger(__name__) logger = logging.getLogger(__name__)
...@@ -71,7 +71,7 @@ class PeerAssessmentMixin(object): ...@@ -71,7 +71,7 @@ class PeerAssessmentMixin(object):
self.submission_uuid, self.submission_uuid,
self.get_student_item_dict()["student_id"], self.get_student_item_dict()["student_id"],
data['options_selected'], data['options_selected'],
-self._clean_criterion_feedback(data['criterion_feedback']),
+clean_criterion_feedback(self.rubric_criteria_with_labels, data['criterion_feedback']),
data['overall_feedback'], data['overall_feedback'],
create_rubric_dict(self.prompt, self.rubric_criteria_with_labels), create_rubric_dict(self.prompt, self.rubric_criteria_with_labels),
assessment_ui_model['must_be_graded_by'] assessment_ui_model['must_be_graded_by']
...@@ -265,22 +265,3 @@ class PeerAssessmentMixin(object): ...@@ -265,22 +265,3 @@ class PeerAssessmentMixin(object):
logger.exception(err) logger.exception(err)
return peer_submission return peer_submission
def _clean_criterion_feedback(self, criterion_feedback):
"""
Remove per-criterion feedback for criteria with feedback disabled
in the rubric.
Args:
criterion_feedback (dict): Mapping of criterion names to feedback text.
Returns:
dict
"""
return {
criterion['name']: criterion_feedback[criterion['name']]
for criterion in self.rubric_criteria_with_labels
if criterion['name'] in criterion_feedback
and criterion.get('feedback', 'disabled') in ['optional', 'required']
}
...@@ -9,6 +9,7 @@ from openassessment.workflow import api as workflow_api ...@@ -9,6 +9,7 @@ from openassessment.workflow import api as workflow_api
from submissions import api as submission_api from submissions import api as submission_api
from .data_conversion import create_rubric_dict from .data_conversion import create_rubric_dict
from .resolve_dates import DISTANT_FUTURE from .resolve_dates import DISTANT_FUTURE
from .data_conversion import create_rubric_dict, clean_criterion_feedback
logger = logging.getLogger(__name__) logger = logging.getLogger(__name__)
...@@ -113,6 +114,12 @@ class SelfAssessmentMixin(object): ...@@ -113,6 +114,12 @@ class SelfAssessmentMixin(object):
if 'options_selected' not in data: if 'options_selected' not in data:
return {'success': False, 'msg': _(u"Missing options_selected key in request")} return {'success': False, 'msg': _(u"Missing options_selected key in request")}
if 'overall_feedback' not in data:
return {'success': False, 'msg': _('Must provide overall feedback in the assessment')}
if 'criterion_feedback' not in data:
return {'success': False, 'msg': _('Must provide feedback for criteria in the assessment')}
if self.submission_uuid is None: if self.submission_uuid is None:
return {'success': False, 'msg': _(u"You must submit a response before you can perform a self-assessment.")} return {'success': False, 'msg': _(u"You must submit a response before you can perform a self-assessment.")}
...@@ -121,6 +128,8 @@ class SelfAssessmentMixin(object): ...@@ -121,6 +128,8 @@ class SelfAssessmentMixin(object):
self.submission_uuid, self.submission_uuid,
self.get_student_item_dict()['student_id'], self.get_student_item_dict()['student_id'],
data['options_selected'], data['options_selected'],
clean_criterion_feedback(self.rubric_criteria, data['criterion_feedback']),
data['overall_feedback'],
create_rubric_dict(self.prompt, self.rubric_criteria_with_labels) create_rubric_dict(self.prompt, self.rubric_criteria_with_labels)
) )
self.publish_assessment_event("openassessmentblock.self_assess", assessment) self.publish_assessment_event("openassessmentblock.self_assess", assessment)
......
This source diff could not be displayed because it is too large.
...@@ -76,7 +76,7 @@ describe("OpenAssessment.PeerView", function() { ...@@ -76,7 +76,7 @@ describe("OpenAssessment.PeerView", function() {
// Provide overall feedback // Provide overall feedback
var overallFeedback = "Good job!"; var overallFeedback = "Good job!";
-view.overallFeedback(overallFeedback);
+view.rubric.overallFeedback(overallFeedback);
// Submit the peer assessment // Submit the peer assessment
view.peerAssess(); view.peerAssess();
......
...@@ -55,8 +55,28 @@ describe("OpenAssessment.SelfView", function() { ...@@ -55,8 +55,28 @@ describe("OpenAssessment.SelfView", function() {
it("Sends a self assessment to the server", function() { it("Sends a self assessment to the server", function() {
spyOn(server, 'selfAssess').andCallThrough(); spyOn(server, 'selfAssess').andCallThrough();
// Select options in the rubric
var optionsSelected = {};
optionsSelected['Criterion 1'] = 'Poor';
optionsSelected['Criterion 2'] = 'Fair';
optionsSelected['Criterion 3'] = 'Good';
view.rubric.optionsSelected(optionsSelected);
// Provide per-criterion feedback
var criterionFeedback = {};
criterionFeedback['Criterion 1'] = "You did a fair job";
criterionFeedback['Criterion 3'] = "You did a good job";
view.rubric.criterionFeedback(criterionFeedback);
// Provide overall feedback
var overallFeedback = "Good job!";
view.rubric.overallFeedback(overallFeedback);
view.selfAssess(); view.selfAssess();
-expect(server.selfAssess).toHaveBeenCalled();
+expect(server.selfAssess).toHaveBeenCalledWith(
optionsSelected, criterionFeedback, overallFeedback
);
}); });
it("Re-enables the self assess button on error", function() { it("Re-enables the self assess button on error", function() {
......
describe("OpenAssessment.FileUploader", function() {
var fileUploader = null;
var TEST_URL = "http://www.example.com/upload";
var TEST_IMAGE = {
data: "abcdefghijklmnopqrstuvwxyz",
name: "test.jpg",
size: 10471,
type: "image/jpeg"
};
var TEST_CONTENT_TYPE = "image/jpeg";
beforeEach(function() {
fileUploader = new OpenAssessment.FileUploader();
});
it("logs a file upload event", function() {
// Stub the AJAX call, simulating success
var successPromise = $.Deferred(
function(defer) { defer.resolve(); }
).promise();
spyOn($, 'ajax').andReturn(successPromise);
// Stub the event logger
spyOn(Logger, 'log');
// Upload a file
fileUploader.upload(TEST_URL, TEST_IMAGE, TEST_CONTENT_TYPE);
// Verify that the event was logged
expect(Logger.log).toHaveBeenCalledWith(
"openassessment.upload_file", {
contentType: TEST_CONTENT_TYPE,
imageName: TEST_IMAGE.name,
imageSize: TEST_IMAGE.size,
imageType: TEST_IMAGE.type
}
);
});
});
\ No newline at end of file
...@@ -163,6 +163,29 @@ describe("OpenAssessment.Server", function() { ...@@ -163,6 +163,29 @@ describe("OpenAssessment.Server", function() {
}); });
}); });
it("sends a self-assessment to the XBlock", function() {
stubAjax(true, {success: true, msg: ''});
var success = false;
var options = {clarity: "Very clear", precision: "Somewhat precise"};
var criterionFeedback = {clarity: "This essay was very clear."};
server.selfAssess(options, criterionFeedback, "Excellent job!").done(
function() { success = true; }
);
expect(success).toBe(true);
expect($.ajax).toHaveBeenCalledWith({
url: '/self_assess',
type: "POST",
data: JSON.stringify({
options_selected: options,
criterion_feedback: criterionFeedback,
overall_feedback: "Excellent job!"
})
});
});
it("sends a training assessment to the XBlock", function() { it("sends a training assessment to the XBlock", function() {
stubAjax(true, {success: true, msg: '', correct: true}); stubAjax(true, {success: true, msg: '', correct: true});
var success = false; var success = false;
...@@ -300,7 +323,7 @@ describe("OpenAssessment.Server", function() { ...@@ -300,7 +323,7 @@ describe("OpenAssessment.Server", function() {
it("informs the caller of an AJAX error when sending a self assessment", function() { it("informs the caller of an AJAX error when sending a self assessment", function() {
stubAjax(false, null); stubAjax(false, null);
var receivedMsg = null; var receivedMsg = null;
server.selfAssess("Test").fail(function(errorMsg) { receivedMsg = errorMsg; }); server.selfAssess("Test", {}, "Excellent job!").fail(function(errorMsg) { receivedMsg = errorMsg; });
expect(receivedMsg).toContain('This assessment could not be submitted'); expect(receivedMsg).toContain('This assessment could not be submitted');
}); });
......
...@@ -8,7 +8,7 @@ PUT requests on the server. ...@@ -8,7 +8,7 @@ PUT requests on the server.
Args: Args:
url (string): The one-time URL we're uploading to. url (string): The one-time URL we're uploading to.
-data (object): The object to upload, which should have properties:
+imageData (object): The object to upload, which should have properties:
data (string) data (string)
name (string) name (string)
size (int) size (int)
...@@ -20,18 +20,32 @@ Returns: ...@@ -20,18 +20,32 @@ Returns:
*/ */
OpenAssessment.FileUploader = function() { OpenAssessment.FileUploader = function() {
-this.upload = function(url, data, contentType) {
+this.upload = function(url, imageData, contentType) {
return $.Deferred( return $.Deferred(
function(defer) { function(defer) {
$.ajax({ $.ajax({
url: url, url: url,
type: 'PUT', type: 'PUT',
-data: data,
+data: imageData,
async: false, async: false,
processData: false, processData: false,
contentType: contentType, contentType: contentType,
}).done( }).done(
-function(data, textStatus, jqXHR) { defer.resolve(); }
+function(data, textStatus, jqXHR) {
// Log an analytics event
Logger.log(
"openassessment.upload_file",
{
contentType: contentType,
imageName: imageData.name,
imageSize: imageData.size,
imageType: imageData.type
}
);
// Return control to the caller
defer.resolve();
}
).fail( ).fail(
function(data, textStatus, jqXHR) { function(data, textStatus, jqXHR) {
defer.rejectWith(this, [textStatus]); defer.rejectWith(this, [textStatus]);
......
...@@ -197,7 +197,7 @@ OpenAssessment.PeerView.prototype = { ...@@ -197,7 +197,7 @@ OpenAssessment.PeerView.prototype = {
this.server.peerAssess( this.server.peerAssess(
this.rubric.optionsSelected(), this.rubric.optionsSelected(),
this.rubric.criterionFeedback(), this.rubric.criterionFeedback(),
-this.overallFeedback()
+this.rubric.overallFeedback()
).done( ).done(
successFunction successFunction
).fail(function(errMsg) { ).fail(function(errMsg) {
...@@ -206,28 +206,5 @@ OpenAssessment.PeerView.prototype = { ...@@ -206,28 +206,5 @@ OpenAssessment.PeerView.prototype = {
}); });
}, },
/**
Get or set overall feedback on the submission.
Args:
overallFeedback (string or undefined): The overall feedback text (optional).
Returns:
string or undefined
Example usage:
>>> view.overallFeedback('Good job!'); // Set the feedback text
>>> view.overallFeedback(); // Retrieve the feedback text
'Good job!'
**/
overallFeedback: function(overallFeedback) {
var selector = '#assessment__rubric__question--feedback__value';
if (typeof overallFeedback === 'undefined') {
return $(selector, this.element).val();
}
else {
$(selector, this.element).val(overallFeedback);
}
}
}; };
...@@ -93,6 +93,7 @@ OpenAssessment.ResponseView.prototype = { ...@@ -93,6 +93,7 @@ OpenAssessment.ResponseView.prototype = {
function(eventObject) { function(eventObject) {
// Override default form submission // Override default form submission
eventObject.preventDefault(); eventObject.preventDefault();
$('.submission__answer__display__image', view.element).removeClass('is--hidden');
view.fileUpload(); view.fileUpload();
} }
); );
......
...@@ -47,6 +47,31 @@ OpenAssessment.Rubric.prototype = { ...@@ -47,6 +47,31 @@ OpenAssessment.Rubric.prototype = {
}, },
/** /**
Get or set overall feedback on the submission.
Args:
overallFeedback (string or undefined): The overall feedback text (optional).
Returns:
string or undefined
Example usage:
>>> view.overallFeedback('Good job!'); // Set the feedback text
>>> view.overallFeedback(); // Retrieve the feedback text
'Good job!'
**/
overallFeedback: function(overallFeedback) {
var selector = '#assessment__rubric__question--feedback__value';
if (typeof overallFeedback === 'undefined') {
return $(selector, this.element).val();
}
else {
$(selector, this.element).val(overallFeedback);
}
},
/**
Get or set the options selected in the rubric. Get or set the options selected in the rubric.
Args: Args:
......
...@@ -103,8 +103,11 @@ OpenAssessment.SelfView.prototype = { ...@@ -103,8 +103,11 @@ OpenAssessment.SelfView.prototype = {
baseView.toggleActionError('self', null); baseView.toggleActionError('self', null);
view.selfSubmitEnabled(false); view.selfSubmitEnabled(false);
-var options = this.rubric.optionsSelected();
-this.server.selfAssess(options).done(
+this.server.selfAssess(
+this.rubric.optionsSelected(),
this.rubric.criterionFeedback(),
this.rubric.overallFeedback()
).done(
function() { function() {
baseView.loadAssessmentModules(); baseView.loadAssessmentModules();
baseView.scrollToTop(); baseView.scrollToTop();
......
...@@ -269,6 +269,8 @@ if (typeof OpenAssessment.Server == "undefined" || !OpenAssessment.Server) { ...@@ -269,6 +269,8 @@ if (typeof OpenAssessment.Server == "undefined" || !OpenAssessment.Server) {
Args: Args:
optionsSelected (object literal): Keys are criteria names, optionsSelected (object literal): Keys are criteria names,
values are the option text the user selected for the criterion. values are the option text the user selected for the criterion.
var criterionFeedback = { clarity: "The essay was very clear." };
var overallFeedback = "Good job!";
Returns: Returns:
A JQuery promise, which resolves with no args if successful A JQuery promise, which resolves with no args if successful
...@@ -282,10 +284,12 @@ if (typeof OpenAssessment.Server == "undefined" || !OpenAssessment.Server) { ...@@ -282,10 +284,12 @@ if (typeof OpenAssessment.Server == "undefined" || !OpenAssessment.Server) {
function(errorMsg) { console.log(errorMsg); } function(errorMsg) { console.log(errorMsg); }
); );
**/ **/
-selfAssess: function(optionsSelected) {
+selfAssess: function(optionsSelected, criterionFeedback, overallFeedback) {
var url = this.url('self_assess'); var url = this.url('self_assess');
var payload = JSON.stringify({ var payload = JSON.stringify({
-options_selected: optionsSelected
+options_selected: optionsSelected,
criterion_feedback: criterionFeedback,
overall_feedback: overallFeedback
}); });
return $.Deferred(function(defer) { return $.Deferred(function(defer) {
$.ajax({ type: "POST", url: url, data: payload }).done( $.ajax({ type: "POST", url: url, data: payload }).done(
......
...@@ -17,3 +17,10 @@ if (typeof OpenAssessment == "undefined" || !OpenAssessment) { ...@@ -17,3 +17,10 @@ if (typeof OpenAssessment == "undefined" || !OpenAssessment) {
if (typeof window.gettext === 'undefined') { if (typeof window.gettext === 'undefined') {
window.gettext = function(text) { return text; }; window.gettext = function(text) { return text; };
} }
// Stub event logging if the runtime doesn't provide it
if (typeof window.Logger === 'undefined') {
window.Logger = {
log: function(event_type, data, kwargs) {}
};
}
\ No newline at end of file
...@@ -1033,21 +1033,43 @@ ...@@ -1033,21 +1033,43 @@
.action--upload { .action--upload {
@extend %btn--secondary; @extend %btn--secondary;
@extend %action-2; @extend %action-2;
display: block;
text-align: center; text-align: center;
-margin-bottom: ($baseline-v/2);
+float: right;
display: inline-block;
margin: ($baseline-v/2) 0;
box-shadow: none;
} }
.file--upload { .file--upload {
-margin-top: $baseline-v/2;
-margin-bottom: $baseline-v/2;
+margin: $baseline-v ($baseline-v/2);
} }
} }
.self-assessment__display__header
.self-assessment__display__title,
.peer-assessment__display__header
.peer-assessment__display__title,
.submission__answer__display
.submission__answer__display__title{
margin: 10px 0;
}
-.submission--image {
-max-height: 600px;
-max-width: $max-width/2;
-margin-bottom: $baseline-v;
+.self-assessment__display__image,
+.peer-assessment__display__image,
+.submission__answer__display__image{
+@extend .submission__answer__display__content;
max-height: 400px;
text-align: left;
overflow: auto;
img{
max-height: 100%;
max-width: 100%;
}
}
.submission__answer__display__image
.submission--image{
max-height: 250px;
max-width: 100%;
} }
// Developer SASS for Continued Grading. // Developer SASS for Continued Grading.
...@@ -1055,4 +1077,4 @@ ...@@ -1055,4 +1077,4 @@
.action--continue--grading { .action--continue--grading {
@extend .action--submit; @extend .action--submit;
} }
} }
\ No newline at end of file
...@@ -49,11 +49,11 @@ ...@@ -49,11 +49,11 @@
} }
@include media($bp-dm) { @include media($bp-dm) {
-@include span-columns(9 of 12);
+@include span-columns(8 of 12);
} }
@include media($bp-dl) { @include media($bp-dl) {
-@include span-columns(9 of 12);
+@include span-columns(8 of 12);
} }
@include media($bp-dx) { @include media($bp-dx) {
......
...@@ -55,12 +55,12 @@ ...@@ -55,12 +55,12 @@
} }
@include media($bp-dm) { @include media($bp-dm) {
-@include span-columns(9 of 12);
+@include span-columns(8 of 12);
margin-bottom: 0; margin-bottom: 0;
} }
@include media($bp-dl) { @include media($bp-dl) {
-@include span-columns(9 of 12);
+@include span-columns(8 of 12);
margin-bottom: 0; margin-bottom: 0;
} }
...@@ -173,14 +173,14 @@ ...@@ -173,14 +173,14 @@
} }
@include media($bp-dm) { @include media($bp-dm) {
-@include span-columns(3 of 12);
+@include span-columns(4 of 12);
@include omega(); @include omega();
position: relative; position: relative;
top:-12px; top:-12px;
} }
@include media($bp-dl) { @include media($bp-dl) {
-@include span-columns(3 of 12);
+@include span-columns(4 of 12);
@include omega(); @include omega();
position: relative; position: relative;
top: -12px; top: -12px;
...@@ -201,7 +201,7 @@ ...@@ -201,7 +201,7 @@
// step content wrapper // step content wrapper
.wrapper--step__content { .wrapper--step__content {
margin-top: ($baseline-v/2); margin-top: ($baseline-v/2);
-padding-top: $baseline-v;
+padding-top: ($baseline-v/2);
border-top: 1px solid $color-decorative-tertiary; border-top: 1px solid $color-decorative-tertiary;
} }
......
...@@ -37,7 +37,7 @@ class StudentTrainingMixin(object): ...@@ -37,7 +37,7 @@ class StudentTrainingMixin(object):
Args: Args:
data: Not used. data: Not used.
-Kwargs:
+Keyword Arguments:
suffix: Not used. suffix: Not used.
Returns: Returns:
...@@ -169,6 +169,15 @@ class StudentTrainingMixin(object): ...@@ -169,6 +169,15 @@ class StudentTrainingMixin(object):
corrections = student_training.assess_training_example( corrections = student_training.assess_training_example(
self.submission_uuid, data['options_selected'] self.submission_uuid, data['options_selected']
) )
self.runtime.publish(
self,
"openassessment.student_training_assess_example",
{
"submission_uuid": self.submission_uuid,
"options_selected": data["options_selected"],
"corrections": corrections
}
)
except student_training.StudentTrainingRequestError: except student_training.StudentTrainingRequestError:
msg = ( msg = (
u"Could not check student training scores for " u"Could not check student training scores for "
......
...@@ -69,7 +69,14 @@ class StudioMixin(object): ...@@ -69,7 +69,14 @@ class StudioMixin(object):
def editor_context(self): def editor_context(self):
""" """
-Retrieve the XBlock's content definition.
+Update the XBlock's XML.
Args:
data (dict): Data from the request; should have a value for the key 'xml'
containing the XML for this XBlock.
Keyword Arguments:
suffix (str): Not used
Returns: Returns:
dict with keys dict with keys
...@@ -122,7 +129,7 @@ class StudioMixin(object): ...@@ -122,7 +129,7 @@ class StudioMixin(object):
data (dict): Data from the request; should have the format described data (dict): Data from the request; should have the format described
in the editor schema. in the editor schema.
-Kwargs:
+Keyword Arguments:
suffix (str): Not used suffix (str): Not used
Returns: Returns:
...@@ -207,7 +214,7 @@ class StudioMixin(object): ...@@ -207,7 +214,7 @@ class StudioMixin(object):
Args: Args:
data (dict): Not used data (dict): Not used
-Kwargs:
+Keyword Arguments:
suffix (str): Not used suffix (str): Not used
Returns: Returns:
......
...@@ -19,7 +19,7 @@ def scenario(scenario_path, user_id=None): ...@@ -19,7 +19,7 @@ def scenario(scenario_path, user_id=None):
Args: Args:
scenario_path (str): Path to the scenario XML file. scenario_path (str): Path to the scenario XML file.
-Kwargs:
+Keyword Arguments:
user_id (str or None): User ID to log in as, or None. user_id (str or None): User ID to log in as, or None.
Returns: Returns:
...@@ -109,7 +109,7 @@ class XBlockHandlerTestCase(CacheResetTest): ...@@ -109,7 +109,7 @@ class XBlockHandlerTestCase(CacheResetTest):
handler_name (str): The name of the handler. handler_name (str): The name of the handler.
content (unicode): Content of the request. content (unicode): Content of the request.
-Kwargs:
+Keyword Arguments:
response_format (None or str): Expected format of the response string. response_format (None or str): Expected format of the response string.
If `None`, return the raw response content; if 'json', parse the If `None`, return the raw response content; if 'json', parse the
response as JSON and return the result. response as JSON and return the result.
......
...@@ -115,9 +115,16 @@ class TestGrade(XBlockHandlerTestCase): ...@@ -115,9 +115,16 @@ class TestGrade(XBlockHandlerTestCase):
u'𝖋𝖊𝖊𝖉𝖇𝖆𝖈𝖐 𝖔𝖓𝖑𝖞': u"Ṫḧïṡ ïṡ ṡöṁë ḟëëḋḅäċḳ." u'𝖋𝖊𝖊𝖉𝖇𝖆𝖈𝖐 𝖔𝖓𝖑𝖞': u"Ṫḧïṡ ïṡ ṡöṁë ḟëëḋḅäċḳ."
} }
self_assessment = copy.deepcopy(self.ASSESSMENTS[0])
self_assessment['criterion_feedback'] = {
u'𝖋𝖊𝖊𝖉𝖇𝖆𝖈𝖐 𝖔𝖓𝖑𝖞': "Feedback here",
u'Form': 'lots of feedback yes"',
u'𝓒𝓸𝓷𝓬𝓲𝓼𝓮': "such feedback"
}
# Submit, assess, and render the grade view # Submit, assess, and render the grade view
self._create_submission_and_assessments( self._create_submission_and_assessments(
-xblock, self.SUBMISSION, self.PEERS, peer_assessments, self.ASSESSMENTS[0]
+xblock, self.SUBMISSION, self.PEERS, peer_assessments, self_assessment
) )
# Render the grade section # Render the grade section
...@@ -172,11 +179,13 @@ class TestGrade(XBlockHandlerTestCase): ...@@ -172,11 +179,13 @@ class TestGrade(XBlockHandlerTestCase):
# Verify that the context for the grade complete page contains the feedback # Verify that the context for the grade complete page contains the feedback
_, context = xblock.render_grade_complete(xblock.get_workflow_info()) _, context = xblock.render_grade_complete(xblock.get_workflow_info())
criteria = context['rubric_criteria'] criteria = context['rubric_criteria']
-self.assertEqual(criteria[0]['feedback'], [
+self.assertEqual(criteria[0]['peer_feedback'], [
u'Peer 2: ฝﻉɭɭ ɗѻกﻉ!', u'Peer 2: ฝﻉɭɭ ɗѻกﻉ!',
u'Peer 1: ฝﻉɭɭ ɗѻกﻉ!', u'Peer 1: ฝﻉɭɭ ɗѻกﻉ!',
]) ])
-self.assertEqual(criteria[1]['feedback'], [u'Peer 2: ƒαιя נσв'])
+self.assertEqual(criteria[0]['self_feedback'], u'Peer 1: ฝﻉɭɭ ɗѻกﻉ!')
+self.assertEqual(criteria[1]['peer_feedback'], [u'Peer 2: ƒαιя נσв'])
# The order of the peers in the per-criterion feedback needs # The order of the peers in the per-criterion feedback needs
# to match the order of the peer assessments # to match the order of the peer assessments
...@@ -346,7 +355,7 @@ class TestGrade(XBlockHandlerTestCase): ...@@ -346,7 +355,7 @@ class TestGrade(XBlockHandlerTestCase):
peer_assessments (list of dict): List of assessment dictionaries for peer assessments. peer_assessments (list of dict): List of assessment dictionaries for peer assessments.
self_assessment (dict): Dict of assessment for self-assessment. self_assessment (dict): Dict of assessment for self-assessment.
-Kwargs:
+Keyword Arguments:
waiting_for_peer (bool): If true, skip creation of peer assessments for the user's submission. waiting_for_peer (bool): If true, skip creation of peer assessments for the user's submission.
Returns: Returns:
...@@ -402,5 +411,6 @@ class TestGrade(XBlockHandlerTestCase): ...@@ -402,5 +411,6 @@ class TestGrade(XBlockHandlerTestCase):
if self_assessment is not None: if self_assessment is not None:
self_api.create_assessment( self_api.create_assessment(
submission['uuid'], student_id, self_assessment['options_selected'], submission['uuid'], student_id, self_assessment['options_selected'],
self_assessment['criterion_feedback'], self_assessment['overall_feedback'],
{'criteria': xblock.rubric_criteria} {'criteria': xblock.rubric_criteria}
) )
...@@ -548,7 +548,7 @@ class TestDates(XBlockHandlerTestCase): ...@@ -548,7 +548,7 @@ class TestDates(XBlockHandlerTestCase):
expected_start (datetime): Expected start date. expected_start (datetime): Expected start date.
expected_due (datetime): Expected due date. expected_due (datetime): Expected due date.
-Kwargs:
+Keyword Arguments:
released (bool): If set, check whether the XBlock has been released. released (bool): If set, check whether the XBlock has been released.
course_staff (bool): Whether to treat the user as course staff. course_staff (bool): Whether to treat the user as course staff.
......
...@@ -507,7 +507,7 @@ class TestPeerAssessmentRender(XBlockHandlerTestCase): ...@@ -507,7 +507,7 @@ class TestPeerAssessmentRender(XBlockHandlerTestCase):
expected_path (str): The expected template path. expected_path (str): The expected template path.
expected_context (dict): The expected template context. expected_context (dict): The expected template context.
-Kwargs:
+Keyword Arguments:
continue_grading (bool): If true, the user has chosen to continue grading. continue_grading (bool): If true, the user has chosen to continue grading.
workflow_status (str): If provided, simulate this status from the workflow API. workflow_status (str): If provided, simulate this status from the workflow API.
graded_enough (bool): Did the student meet the requirement by assessing enough peers? graded_enough (bool): Did the student meet the requirement by assessing enough peers?
...@@ -679,7 +679,7 @@ class TestPeerAssessHandler(XBlockHandlerTestCase): ...@@ -679,7 +679,7 @@ class TestPeerAssessHandler(XBlockHandlerTestCase):
scorer_id (unicode): The ID of the student creating the assessment. scorer_id (unicode): The ID of the student creating the assessment.
assessment (dict): Serialized assessment model. assessment (dict): Serialized assessment model.
-Kwargs:
+Keyword Arguments:
expect_failure (bool): If true, expect a failure response and return None expect_failure (bool): If true, expect a failure response and return None
Returns: Returns:
......
...@@ -9,6 +9,7 @@ import mock ...@@ -9,6 +9,7 @@ import mock
import pytz import pytz
from openassessment.assessment.api import self as self_api from openassessment.assessment.api import self as self_api
from openassessment.workflow import api as workflow_api from openassessment.workflow import api as workflow_api
from openassessment.xblock.data_conversion import create_rubric_dict
from .base import XBlockHandlerTestCase, scenario from .base import XBlockHandlerTestCase, scenario
...@@ -23,6 +24,8 @@ class TestSelfAssessment(XBlockHandlerTestCase): ...@@ -23,6 +24,8 @@ class TestSelfAssessment(XBlockHandlerTestCase):
ASSESSMENT = { ASSESSMENT = {
'options_selected': {u'𝓒𝓸𝓷𝓬𝓲𝓼𝓮': u'ﻉซƈﻉɭɭﻉกՇ', u'Form': u'Fair'}, 'options_selected': {u'𝓒𝓸𝓷𝓬𝓲𝓼𝓮': u'ﻉซƈﻉɭɭﻉกՇ', u'Form': u'Fair'},
'criterion_feedback': {},
'overall_feedback': ""
} }
@scenario('data/self_assessment_scenario.xml', user_id='Bob') @scenario('data/self_assessment_scenario.xml', user_id='Bob')
...@@ -87,6 +90,10 @@ class TestSelfAssessment(XBlockHandlerTestCase): ...@@ -87,6 +90,10 @@ class TestSelfAssessment(XBlockHandlerTestCase):
# Submit a self assessment for a rubric with a feedback-only criterion # Submit a self assessment for a rubric with a feedback-only criterion
assessment_dict = { assessment_dict = {
'options_selected': {u'vocabulary': u'good'}, 'options_selected': {u'vocabulary': u'good'},
'criterion_feedback': {
u'vocabulary': 'Awesome job!',
u'𝖋𝖊𝖊𝖉𝖇𝖆𝖈𝖐 𝖔𝖓𝖑𝖞': 'fairly illegible.'
},
'overall_feedback': u'' 'overall_feedback': u''
} }
resp = self.request(xblock, 'self_assess', json.dumps(assessment_dict), response_format='json') resp = self.request(xblock, 'self_assess', json.dumps(assessment_dict), response_format='json')
...@@ -99,10 +106,9 @@ class TestSelfAssessment(XBlockHandlerTestCase): ...@@ -99,10 +106,9 @@ class TestSelfAssessment(XBlockHandlerTestCase):
self.assertEqual(assessment['parts'][0]['option']['points'], 1) self.assertEqual(assessment['parts'][0]['option']['points'], 1)
# Check the feedback-only criterion score/feedback # Check the feedback-only criterion score/feedback
# The written feedback should default to an empty string
self.assertEqual(assessment['parts'][1]['criterion']['name'], u'𝖋𝖊𝖊𝖉𝖇𝖆𝖈𝖐 𝖔𝖓𝖑𝖞') self.assertEqual(assessment['parts'][1]['criterion']['name'], u'𝖋𝖊𝖊𝖉𝖇𝖆𝖈𝖐 𝖔𝖓𝖑𝖞')
self.assertIs(assessment['parts'][1]['option'], None) self.assertIs(assessment['parts'][1]['option'], None)
-self.assertEqual(assessment['parts'][1]['feedback'], u'')
+self.assertEqual(assessment['parts'][1]['feedback'], u'fairly illegible.')
@scenario('data/self_assessment_scenario.xml', user_id='Bob') @scenario('data/self_assessment_scenario.xml', user_id='Bob')
def test_self_assess_workflow_error(self, xblock): def test_self_assess_workflow_error(self, xblock):
...@@ -267,7 +273,8 @@ class TestSelfAssessmentRender(XBlockHandlerTestCase): ...@@ -267,7 +273,8 @@ class TestSelfAssessmentRender(XBlockHandlerTestCase):
submission['uuid'], submission['uuid'],
xblock.get_student_item_dict()['student_id'], xblock.get_student_item_dict()['student_id'],
{u'𝓒𝓸𝓷𝓬𝓲𝓼𝓮': u'ﻉซƈﻉɭɭﻉกՇ', u'Form': u'Fair'}, {u'𝓒𝓸𝓷𝓬𝓲𝓼𝓮': u'ﻉซƈﻉɭɭﻉกՇ', u'Form': u'Fair'},
-{'criteria': xblock.rubric_criteria}
+{}, "Good job!",
create_rubric_dict(xblock.prompt, xblock.rubric_criteria)
) )
self._assert_path_and_context( self._assert_path_and_context(
xblock, 'openassessmentblock/self/oa_self_complete.html', {}, xblock, 'openassessmentblock/self/oa_self_complete.html', {},
...@@ -302,7 +309,8 @@ class TestSelfAssessmentRender(XBlockHandlerTestCase): ...@@ -302,7 +309,8 @@ class TestSelfAssessmentRender(XBlockHandlerTestCase):
submission['uuid'], submission['uuid'],
xblock.get_student_item_dict()['student_id'], xblock.get_student_item_dict()['student_id'],
{u'𝓒𝓸𝓷𝓬𝓲𝓼𝓮': u'ﻉซƈﻉɭɭﻉกՇ', u'Form': u'Fair'}, {u'𝓒𝓸𝓷𝓬𝓲𝓼𝓮': u'ﻉซƈﻉɭɭﻉกՇ', u'Form': u'Fair'},
-{'criteria': xblock.rubric_criteria}
+{}, "Good job!",
create_rubric_dict(xblock.prompt, xblock.rubric_criteria)
) )
# This case probably isn't possible, because presumably when we create # This case probably isn't possible, because presumably when we create
...@@ -358,7 +366,7 @@ class TestSelfAssessmentRender(XBlockHandlerTestCase): ...@@ -358,7 +366,7 @@ class TestSelfAssessmentRender(XBlockHandlerTestCase):
expected_path (str): The expected template path. expected_path (str): The expected template path.
expected_context (dict): The expected template context. expected_context (dict): The expected template context.
-Kwargs:
+Keyword Arguments:
workflow_status (str): If provided, simulate this status from the workflow API. workflow_status (str): If provided, simulate this status from the workflow API.
workflow_status (str): If provided, simulate these details from the workflow API. workflow_status (str): If provided, simulate these details from the workflow API.
submission_uuid (str): If provided, simulate this submision UUI for the current workflow. submission_uuid (str): If provided, simulate this submision UUI for the current workflow.
......
...@@ -32,6 +32,12 @@ ASSESSMENT_DICT = { ...@@ -32,6 +32,12 @@ ASSESSMENT_DICT = {
"Clear-headed": "Yogi Berra", "Clear-headed": "Yogi Berra",
"Form": "Reddit", "Form": "Reddit",
}, },
'criterion_feedback': {
"Concise": "Not very.",
"Clear-headed": "Indubitably",
"Form": "s ka tter ed"
}
} }
...@@ -209,6 +215,8 @@ class TestCourseStaff(XBlockHandlerTestCase): ...@@ -209,6 +215,8 @@ class TestCourseStaff(XBlockHandlerTestCase):
submission['uuid'], submission['uuid'],
STUDENT_ITEM["student_id"], STUDENT_ITEM["student_id"],
ASSESSMENT_DICT['options_selected'], ASSESSMENT_DICT['options_selected'],
ASSESSMENT_DICT['criterion_feedback'],
ASSESSMENT_DICT['overall_feedback'],
{'criteria': xblock.rubric_criteria}, {'criteria': xblock.rubric_criteria},
) )
...@@ -235,6 +243,13 @@ class TestCourseStaff(XBlockHandlerTestCase): ...@@ -235,6 +243,13 @@ class TestCourseStaff(XBlockHandlerTestCase):
"Content": "Poor", "Content": "Poor",
} }
criterion_feedback = {
"Ideas": "Dear diary: Lots of creativity from my dream journal last night at 2 AM,",
"Content": "Not as insightful as I had thought in the wee hours of the morning!"
}
overall_feedback = "I think I should tell more people about how important worms are for the ecosystem."
bob_item = STUDENT_ITEM.copy() bob_item = STUDENT_ITEM.copy()
bob_item["item_id"] = xblock.scope_ids.usage_id bob_item["item_id"] = xblock.scope_ids.usage_id
...@@ -265,6 +280,8 @@ class TestCourseStaff(XBlockHandlerTestCase): ...@@ -265,6 +280,8 @@ class TestCourseStaff(XBlockHandlerTestCase):
submission['uuid'], submission['uuid'],
STUDENT_ITEM["student_id"], STUDENT_ITEM["student_id"],
options_selected, options_selected,
criterion_feedback,
overall_feedback,
{'criteria': xblock.rubric_criteria}, {'criteria': xblock.rubric_criteria},
) )
......
...@@ -306,7 +306,7 @@ def validator(oa_block, strict_post_release=True): ...@@ -306,7 +306,7 @@ def validator(oa_block, strict_post_release=True):
Args: Args:
oa_block (OpenAssessmentBlock): The XBlock being updated. oa_block (OpenAssessmentBlock): The XBlock being updated.
-Kwargs:
+Keyword Arguments:
strict_post_release (bool): If true, restrict what authors can update once strict_post_release (bool): If true, restrict what authors can update once
a problem has been released. a problem has been released.
......
...@@ -29,7 +29,7 @@ class WorkflowMixin(object): ...@@ -29,7 +29,7 @@ class WorkflowMixin(object):
Args: Args:
data: Unused data: Unused
-Kwargs:
+Keyword Arguments:
suffix: Unused suffix: Unused
Returns: Returns:
...@@ -92,7 +92,7 @@ class WorkflowMixin(object): ...@@ -92,7 +92,7 @@ class WorkflowMixin(object):
from peer-assessment to self-assessment. Creates a score from peer-assessment to self-assessment. Creates a score
if the student has completed all requirements. if the student has completed all requirements.
-Kwargs:
+Keyword Arguments:
submission_uuid (str): The submission associated with the workflow to update. submission_uuid (str): The submission associated with the workflow to update.
Defaults to the submission created by the current student. Defaults to the submission created by the current student.
......
...@@ -6,7 +6,7 @@ cd `dirname $BASH_SOURCE` && cd .. ...@@ -6,7 +6,7 @@ cd `dirname $BASH_SOURCE` && cd ..
# Install dependencies # Install dependencies
make install-python make install-python
make install-js make install-js
-make minimize-js
+make javascript
# Configure Django settings # Configure Django settings
export DJANGO_SETTINGS_MODULE="settings.dev" export DJANGO_SETTINGS_MODULE="settings.dev"
......