Commit 75b5d5af by gradyward

Merge branch 'authoring' of https://github.com/edx/edx-ora2 into grady/ORA-689

Conflicts:
	openassessment/xblock/static/js/openassessment-lms.min.js
	openassessment/xblock/static/js/openassessment-studio.min.js
	openassessment/xblock/static/js/src/oa_shared.js
parents 1d7424ed cb932343
......@@ -32,7 +32,7 @@ install-nltk-data:
STATIC_JS = openassessment/xblock/static/js
minimize-js:
javascript:
node_modules/.bin/uglifyjs $(STATIC_JS)/src/oa_shared.js $(STATIC_JS)/src/*.js $(STATIC_JS)/src/lms/*.js > "$(STATIC_JS)/openassessment-lms.min.js"
node_modules/.bin/uglifyjs $(STATIC_JS)/src/oa_shared.js $(STATIC_JS)/src/*.js $(STATIC_JS)/src/studio/*.js > "$(STATIC_JS)/openassessment-studio.min.js"
......@@ -41,7 +41,7 @@ install-test:
pip install -q -r requirements/test.txt
install: install-system install-node install-wheels install-python install-js install-nltk-data install-test minimize-js
install: install-system install-node install-wheels install-python install-js install-nltk-data install-test javascript
test:
./scripts/test.sh
......@@ -47,6 +47,27 @@ to start the server on port 8001:
./scripts/workbench.sh 8001
Combining and Minifying JavaScript and Sass
============================================
To reduce page size, the OpenAssessment XBlock serves combined/minified
versions of JavaScript and CSS. These combined/minified files are checked
into the git repository.
If you modify JavaScript or Sass, you MUST regenerate the combined/minified
files:
.. code:: bash
# Combine/minify JavaScript
make javascript
# Combine/minify CSS (from Sass)
./scripts/sass.sh
Make sure you commit the combined/minified files to the git repository!
Running Tests
=============
......
......@@ -74,6 +74,29 @@ Note that you can view your response at any time after you submit it. To do this
:alt: Image of the Response field collapsed and then expanded
:width: 550
Submit an Image with Your Response
***********************************
Some assignments require you to submit an image with your text response. If an image is required, you'll see buttons for choosing and uploading an image file.
.. image:: /Images/PA_Upload_ChooseFile.png
:alt: Open response assessment example with Choose File and Upload Your Image buttons circled
:width: 500
To upload your image:
#. Click **Choose File**.
#. In the dialog box that opens, select the file that you want, and then click **Open**.
#. When the dialog box closes, click **Upload Your Image**.
Your image appears below the response field, and the name of the image file appears next to the **Choose File** button. If you want to change the image, follow steps 1-3 again.
.. image:: /Images/PA_Upload_WithImage.png
:alt: Example response with an image of Paris
:width: 500
.. note:: You must submit text as well as your image in your response. You can't submit a response that doesn't contain text.
============================
Learn to Assess Responses
============================
......
############
Change Log
############
***********
July 2014
***********
.. list-table::
:widths: 10 70
:header-rows: 1
* - Date
- Change
* - 07/15/14
- Added information about uploading an image file in a response to both :ref:`Peer Assessments` and :ref:`PA for Students`.
* -
- Added information about providing a criterion that includes a comment field only to :ref:`Peer Assessments`.
......@@ -39,8 +39,14 @@ Student Training
.. automodule:: openassessment.assessment.api.student_training
:members:
Workflow Assessment
*******************
File Upload
***********
.. automodule:: openassessment.fileupload.api
:members:
Workflow
********
.. automodule:: openassessment.workflow
:members:
......
......@@ -4,8 +4,6 @@
AI Grading
##########
.. warning:: This is a DRAFT that has not yet been implemented.
Overview
--------
......@@ -234,76 +232,10 @@ Recovery from Failure
c. Horizontally scale workers to handle additional load.
Data Model
----------
1. **GradingWorkflow**
a. Submission UUID (varchar)
b. ClassifierSet (Foreign Key, Nullable)
c. Assessment (Foreign Key, Nullable)
d. Rubric (Foreign Key): Used to search for classifier sets if none are available when the workflow is started.
e. Algorithm ID (varchar): Used to search for classifier sets if none are available when the workflow is started.
f. Scheduled at (timestamp): The time the task was placed on the queue.
g. Completed at (timestamp): The time the task was completed. If set, the task is considered complete.
h. Course ID (varchar): The ID of the course associated with the submission. Useful for rescheduling failed grading tasks in a particular course.
i. Item ID (varchar): The ID of the item (problem) associated with the submission. Useful for rescheduling failed grading tasks in a particular item in a course.
2. **TrainingWorkflow**
a. Algorithm ID (varchar)
b. Many-to-many relation with **TrainingExample**. We can re-use examples for multiple workflows.
c. ClassifierSet (Foreign Key)
d. Scheduled at (timestamp): The time the task was placed on the queue.
e. Completed at (timestamp): The time the task was completed. If set, the task is considered complete.
3. **TrainingExample**
a. Response text (text)
b. Options selected (many to many relation with CriterionOption)
4. **ClassifierSet**
a. Rubric (Foreign Key)
b. Created at (timestamp)
c. Algorithm ID (varchar)
5. **Classifier**
a. ClassifierSet (Foreign Key)
b. URL for trained classifier (varchar)
c. Criterion (Foreign Key)
6. **Assessment** (same as current implementation)
a. Submission UUID (varchar)
b. Rubric (Foreign Key)
7. **AssessmentPart** (same as current implementation)
a. Assessment (Foreign Key)
b. Option (Foreign Key to a **CriterionOption**)
8. **Rubric** (same as current implementation)
9. **Criterion** (same as current implementation)
a. Rubric (Foreign Key)
b. Name (varchar)
10. **CriterionOption** (same as current implementation)
a. Criterion (Foreign Key)
b. Points (positive integer)
c. Name (varchar)
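For concreteness, the sketch below shows how the **GradingWorkflow** table described above might be declared as a Django model. This is illustrative only, not the actual edx-ora2 code; the field names, lengths, and index choices are assumptions.

.. code:: python

    from django.db import models

    class GradingWorkflow(models.Model):
        # Hypothetical sketch of the grading workflow table described above.
        submission_uuid = models.CharField(max_length=128, db_index=True)
        classifier_set = models.ForeignKey('ClassifierSet', null=True, blank=True)
        assessment = models.ForeignKey('Assessment', null=True, blank=True)
        rubric = models.ForeignKey('Rubric')
        algorithm_id = models.CharField(max_length=128)
        scheduled_at = models.DateTimeField(db_index=True)
        completed_at = models.DateTimeField(null=True, db_index=True)
        course_id = models.CharField(max_length=255, db_index=True)
        item_id = models.CharField(max_length=255, db_index=True)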
Notes:
* We use a URL to reference the trained classifier so we can avoid storing it in the database.
In practice, the URL will almost certainly point to Amazon S3, but in principle we could use
other backends.
* The storage backend is pluggable. In production, we use Amazon S3, but in principle we could use other backends (including the local filesystem in local dev).
* Unfortunately, the ML algorithm we will use for initial release (EASE) requires that we
persist the trained classifiers using Python's ``pickle`` module. This has security implications
......
......@@ -4,8 +4,6 @@
Understanding the Workflow
##########################
.. warning:: The following section refers to features that are not yet fully
implemented.
The `openassessment.workflow` application is tasked with managing the overall
life-cycle of a student's submission as it goes through various evaluation steps
......@@ -49,7 +47,9 @@ Isolation of Assessment types
a non `None` value has been returned by this function for a given
`submission_uuid`, repeated calls to this function should return the same
thing.
`on_start(submission_uuid)`
`on_init(submission_uuid)`
Notification to the API that the student has submitted a response.
`on_start(submission_uuid)`
Notification to the API that the student has started the assessment step.
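As a rough sketch (placeholder bodies only, not the real implementations), each assessment API module would expose notification hooks shaped like this:

.. code:: python

    def on_init(submission_uuid):
        """Notify this assessment API that the student submitted a response."""

    def on_start(submission_uuid):
        """Notify this assessment API that the student started this assessment step."""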
In the long run, it could be that `OpenAssessmentBlock` becomes a wrapper
......
......@@ -12,9 +12,8 @@ Setup
-----
::
pip install -r requirements/dev.txt
pip install -e .
python manage.py runserver
See the `README <https://github.com/edx/edx-ora2/blob/master/README.rst>`_
Developer Documentation
......@@ -34,4 +33,3 @@ API Documentation
:maxdepth: 2
api
......@@ -90,7 +90,7 @@ def on_init(submission_uuid, rubric=None, algorithm_id=None):
Args:
submission_uuid (str): The UUID of the submission to assess.
Kwargs:
Keyword Arguments:
rubric (dict): Serialized rubric model.
algorithm_id (unicode): Use only classifiers trained with the specified algorithm.
......@@ -104,8 +104,9 @@ def on_init(submission_uuid, rubric=None, algorithm_id=None):
AIGradingRequestError
AIGradingInternalError
Example usage:
>>> submit('74a9d63e8a5fea369fd391d07befbd86ae4dc6e2', rubric, 'ease')
Example Usage:
>>> on_init('74a9d63e8a5fea369fd391d07befbd86ae4dc6e2', rubric, 'ease')
'10df7db776686822e501b05f452dc1e4b9141fe5'
"""
......@@ -179,7 +180,8 @@ def get_latest_assessment(submission_uuid):
Raises:
AIGradingInternalError
Examle usage:
Example usage:
>>> get_latest_assessment('10df7db776686822e501b05f452dc1e4b9141fe5')
{
'points_earned': 6,
......@@ -261,6 +263,7 @@ def train_classifiers(rubric_dict, examples, course_id, item_id, algorithm_id):
AITrainingInternalError
Example usage:
>>> train_classifiers(rubric, examples, 'ease')
'10df7db776686822e501b05f452dc1e4b9141fe5'
......@@ -307,7 +310,7 @@ def reschedule_unfinished_tasks(course_id=None, item_id=None, task_type=u"grade"
only reschedule the unfinished grade tasks. The typical use case (the button in
the staff mixin) is to call this with no arguments, rescheduling grading tasks only.
Kwargs:
Keyword Arguments:
course_id (unicode): Restrict to unfinished tasks in a particular course.
item_id (unicode): Restrict to unfinished tasks for a particular item in a course.
NOTE: if you specify the item ID, you must also specify the course ID.
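For illustration, a hypothetical call that reschedules only the unfinished
grading tasks for one item (the course and item IDs below are made up):

    >>> reschedule_unfinished_tasks(
    ...     course_id=u"test/course/id",
    ...     item_id=u"test/item/id",
    ...     task_type=u"grade"
    ... )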
......
......@@ -225,7 +225,7 @@ def create_assessment(
assessments is reached, the grading_completed_at timestamp is set
for the Workflow.
Kwargs:
Keyword Args:
scored_at (datetime): Optional argument to override the time in which
the assessment took place. If not specified, scored_at is set to
now.
......@@ -358,8 +358,8 @@ def get_assessment_median_scores(submission_uuid):
appropriate median score.
Returns:
(dict): A dictionary of rubric criterion names, with a median score of
the peer assessments.
dict: A dictionary of rubric criterion names,
with a median score of the peer assessments.
Raises:
PeerAssessmentInternalError: If any error occurs while retrieving
......@@ -430,16 +430,19 @@ def get_assessments(submission_uuid, scored_only=True, limit=None):
submission_uuid (str): The submission all the requested assessments are
associated with. Required.
Kwargs:
Keyword Arguments:
scored (boolean): Only retrieve the assessments used to generate a score
for this submission.
limit (int): Limit the returned assessments. If None, returns all.
Returns:
list(dict): A list of dictionaries, where each dictionary represents a
list: A list of dictionaries, where each dictionary represents a
separate assessment. Each assessment contains points earned, points
possible, time scored, scorer id, score type, and feedback.
Raises:
PeerAssessmentRequestError: Raised when the submission_id is invalid.
PeerAssessmentInternalError: Raised when there is an internal error
......@@ -496,7 +499,7 @@ def get_submitted_assessments(submission_uuid, scored_only=True, limit=None):
submission_uuid (str): The submission of the student whose assessments
we are requesting. Required.
Kwargs:
Keyword Arguments:
scored (boolean): Only retrieve the assessments used to generate a score
for this submission.
limit (int): Limit the returned assessments. If None, returns all.
......
......@@ -89,7 +89,15 @@ def get_score(submission_uuid, requirements):
}
def create_assessment(submission_uuid, user_id, options_selected, rubric_dict, scored_at=None):
def create_assessment(
submission_uuid,
user_id,
options_selected,
criterion_feedback,
overall_feedback,
rubric_dict,
scored_at=None
):
"""
Create a self-assessment for a submission.
......@@ -97,9 +105,14 @@ def create_assessment(submission_uuid, user_id, options_selected, rubric_dict, s
submission_uuid (str): The unique identifier for the submission being assessed.
user_id (str): The ID of the user creating the assessment. This must match the ID of the user who made the submission.
options_selected (dict): Mapping of rubric criterion names to option values selected.
criterion_feedback (dict): Dictionary mapping criterion names to the
free-form text feedback the user gave for the criterion.
Since criterion feedback is optional, some criteria may not appear
in the dictionary.
overall_feedback (unicode): Free-form text feedback on the submission overall.
rubric_dict (dict): Serialized Rubric model.
Kwargs:
Keyword Arguments:
scored_at (datetime): The timestamp of the assessment; defaults to the current time.
Returns:
......@@ -143,15 +156,24 @@ def create_assessment(submission_uuid, user_id, options_selected, rubric_dict, s
rubric = rubric_from_dict(rubric_dict)
# Create the self assessment
assessment = Assessment.create(rubric, user_id, submission_uuid, SELF_TYPE, scored_at=scored_at)
AssessmentPart.create_from_option_names(assessment, options_selected)
assessment = Assessment.create(
rubric,
user_id,
submission_uuid,
SELF_TYPE,
scored_at=scored_at,
feedback=overall_feedback
)
# This will raise an `InvalidRubricSelection` if the selected options do not match the rubric.
AssessmentPart.create_from_option_names(assessment, options_selected, feedback=criterion_feedback)
_log_assessment(assessment, submission)
except InvalidRubric:
msg = "Invalid rubric definition"
except InvalidRubric as ex:
msg = "Invalid rubric definition: " + str(ex)
logger.warning(msg, exc_info=True)
raise SelfAssessmentRequestError(msg)
except InvalidRubricSelection:
msg = "Selected options do not match the rubric"
except InvalidRubricSelection as ex:
msg = "Selected options do not match the rubric: " + str(ex)
logger.warning(msg, exc_info=True)
raise SelfAssessmentRequestError(msg)
......
......@@ -234,7 +234,7 @@ def validate_training_examples(rubric, examples):
errors.append(msg)
# Check for missing criteria
# Ignore options
# Ignore options
all_example_criteria = set(options_selected.keys() + criteria_without_options)
for missing_criterion in set(criteria_options.keys()) - all_example_criteria:
msg = _(
......@@ -398,7 +398,7 @@ def assess_training_example(submission_uuid, options_selected, update_workflow=T
submission_uuid (str): The UUID of the student's submission.
options_selected (dict): The options the student selected.
Kwargs:
Keyword Arguments:
update_workflow (bool): If true, mark the current item complete
if the student has assessed the example correctly.
......
......@@ -253,10 +253,19 @@ class RubricIndex(object):
criterion.name: criterion
for criterion in criteria
}
self._option_index = {
(option.criterion.name, option.name): option
for option in options
}
# Find the set of all criteria that have options by iterating over the options
# and adding each option's associated criterion to the set.
criteria_with_options = set()
option_index = {}
for option in options:
option_index[(option.criterion.name, option.name)] = option
criteria_with_options.add(option.criterion)
# Any criterion not in that set is a zero-option criterion; save these for future reference.
self._criteria_without_options = set(self._criteria_index.values()) - criteria_with_options
self._option_index = option_index
# By convention, if multiple options in the same criterion have the
# same point value, we return the *first* option.
......@@ -389,10 +398,7 @@ class RubricIndex(object):
set of `Criterion`
"""
return set(
criterion for criterion in self._criteria_index.values()
if criterion.options.count() == 0
)
return self._criteria_without_options
class Assessment(models.Model):
......@@ -454,7 +460,7 @@ class Assessment(models.Model):
submission_uuid (str): The UUID of the submission being assessed.
score_type (unicode): The type of assessment (e.g. peer, self, or AI)
Kwargs:
Keyword Arguments:
feedback (unicode): Overall feedback on the submission.
scored_at (datetime): The time the assessment was created. Defaults to the current time.
......@@ -639,7 +645,7 @@ class AssessmentPart(models.Model):
assessment (Assessment): The assessment we're adding parts to.
selected (dict): A dictionary mapping criterion names to option names.
Kwargs:
Keyword Arguments:
feedback (dict): A dictionary mapping criterion names to written
feedback for the criterion.
......@@ -665,8 +671,8 @@ class AssessmentPart(models.Model):
}
# Validate that we have selections for all criteria
# This will raise an exception if we're missing any criteria
cls._check_has_all_criteria(rubric_index, set(selected.keys() + feedback.keys()))
# This will raise an exception if we're missing any selections/feedback required for criteria
cls._check_all_criteria_assessed(rubric_index, selected.keys(), feedback.keys())
# Retrieve the criteria/option/feedback for criteria that have options.
# Since we're using the rubric's index, we'll get an `InvalidRubricSelection` error
......@@ -713,7 +719,7 @@ class AssessmentPart(models.Model):
assessment (Assessment): The assessment we're adding parts to.
selected (dict): A dictionary mapping criterion names to option point values.
Kwargs:
Keyword Arguments:
feedback (dict): A dictionary mapping criterion names to written
feedback for the criterion.
......@@ -783,3 +789,35 @@ class AssessmentPart(models.Model):
if len(missing_criteria) > 0:
msg = u"Missing selections for criteria: {missing}".format(missing=missing_criteria)
raise InvalidRubricSelection(msg)
@classmethod
def _check_all_criteria_assessed(cls, rubric_index, selected_criteria, criteria_feedback):
"""
Verify that we've selected options OR have feedback for all criteria in the rubric.
Verifies the predicate for all criteria (X) in the rubric:
has-an-option-selected(X) OR (has-zero-options(X) AND has-criterion-feedback(X))
Args:
rubric_index (RubricIndex): The index of the rubric's data.
selected_criteria (list): list of criterion names that have an option selected
criteria_feedback (list): list of criterion names that have feedback on them
Returns:
None
Raises:
InvalidRubricSelection
"""
missing_option_selections = rubric_index.find_missing_criteria(selected_criteria)
zero_option_criteria = set([c.name for c in rubric_index.find_criteria_without_options()])
zero_option_criteria_missing_feedback = zero_option_criteria - set(criteria_feedback)
optioned_criteria_missing_selection = missing_option_selections - zero_option_criteria
missing_criteria = zero_option_criteria_missing_feedback | optioned_criteria_missing_selection
if len(missing_criteria) > 0:
msg = u"Missing selections for criteria: {missing}".format(missing=', '.join(missing_criteria))
raise InvalidRubricSelection(msg)
......@@ -93,7 +93,7 @@ class TrainingExample(models.Model):
Create a cache key based on the content hash
for serialized versions of this model.
Kwargs:
Keyword Arguments:
attribute: The name of the attribute being serialized.
If not specified, assume that we are serializing the entire model.
......
{
"No Option Selected, Has Options, No Feedback": {
"has_option_selected": false,
"has_zero_options": false,
"has_feedback": false,
"expected_error": true
},
"No Option Selected, Has Options, Has Feedback": {
"has_option_selected": false,
"has_zero_options": false,
"has_feedback": true,
"expected_error": true
},
"No Option Selected, No Options, No Feedback": {
"has_option_selected": false,
"has_zero_options": true,
"has_feedback": false,
"expected_error": true
},
"No Option Selected, No Options, Has Feedback": {
"has_option_selected": false,
"has_zero_options": true,
"has_feedback": true,
"expected_error": false
},
"Has Option Selected, Has Options, No Feedback": {
"has_option_selected": true,
"has_zero_options": false,
"has_feedback": false,
"expected_error": false
},
"Has Option Selected, No Options, Has Feedback": {
"has_option_selected": true,
"has_zero_options": true,
"has_feedback": true,
"expected_error": true
},
"Has Option Selected, No Options, No Feedback": {
"has_option_selected": true,
"has_zero_options": true,
"has_feedback": false,
"expected_error": true
},
"Has Option Selected, Has Options, Has Feedback": {
"has_option_selected": true,
"has_zero_options": false,
"has_feedback": true,
"expected_error": false
}
}
\ No newline at end of file
......@@ -2,13 +2,16 @@
"""
Tests for the assessment Django models.
"""
import copy
import copy, ddt
from openassessment.test_utils import CacheResetTest
from openassessment.assessment.serializers import rubric_from_dict
from openassessment.assessment.models import Assessment, AssessmentPart, InvalidRubricSelection
from .constants import RUBRIC
from openassessment.assessment.api.self import create_assessment
from submissions.api import create_submission
from openassessment.assessment.errors import SelfAssessmentRequestError
@ddt.ddt
class AssessmentTest(CacheResetTest):
"""
Tests for the `Assessment` and `AssessmentPart` models.
......@@ -148,3 +151,65 @@ class AssessmentTest(CacheResetTest):
criterion['options'] = []
return rubric_from_dict(rubric_dict)
@ddt.file_data('data/models_check_criteria_assessed.json')
def test_check_all_criteria_assessed(self, data):
student_item = {
'student_id': u'𝖙𝖊𝖘𝖙 𝖚𝖘𝖊𝖗',
'item_id': 'test_item',
'course_id': 'test_course',
'item_type': 'test_type'
}
submission = create_submission(student_item, "Test answer")
rubric, options_selected, criterion_feedback = self._create_data_structures_with_criterion_properties(
has_option_selected=data['has_option_selected'],
has_zero_options=data['has_zero_options'],
has_feedback=data['has_feedback']
)
error = False
try:
create_assessment(
submission['uuid'], student_item['student_id'], options_selected,
criterion_feedback, "overall feedback", rubric
)
except SelfAssessmentRequestError:
error = True
self.assertEqual(data['expected_error'], error)
def _create_data_structures_with_criterion_properties(
self,
has_option_selected=True,
has_zero_options=True,
has_feedback=True
):
"""
Generate a dummy set of criterion definition structures that lets us specify a particular combination
of criterion attributes for a test case.
"""
options = []
if not has_zero_options:
options = [{
"name": "Okay",
"points": 1,
"description": "It was okay I guess."
}]
rubric = {
'criteria': [
{
"name": "Quality",
"prompt": "How 'good' was it?",
"options": options
}
]
}
options_selected = {}
if has_option_selected:
options_selected['Quality'] = 'Okay'
criterion_feedback = {}
if has_feedback:
criterion_feedback['Quality'] = "This was an assignment of average quality."
return rubric, options_selected, criterion_feedback
\ No newline at end of file
......@@ -51,6 +51,16 @@ class TestSelfApi(CacheResetTest):
"accuracy": "very accurate",
}
CRITERION_FEEDBACK = {
"clarity": "Like a morning in the restful city of San Fransisco, the piece was indescribable, beautiful, and too foggy to properly comprehend.",
"accuracy": "Like my sister's cutting comments about my weight, I may not have enjoyed the piece, but I cannot fault it for its factual nature."
}
OVERALL_FEEDBACK = (
u"Unfortunately, the nature of being is too complex to comment, judge, or discern any one"
u"arbitrary set of things over another."
)
def test_create_assessment(self):
# Initially, there should be no submission or self assessment
self.assertEqual(get_assessment("5"), None)
......@@ -66,7 +76,7 @@ class TestSelfApi(CacheResetTest):
# Create a self-assessment for the submission
assessment = create_assessment(
submission['uuid'], u'𝖙𝖊𝖘𝖙 𝖚𝖘𝖊𝖗',
self.OPTIONS_SELECTED, self.RUBRIC,
self.OPTIONS_SELECTED, self.CRITERION_FEEDBACK, self.OVERALL_FEEDBACK, self.RUBRIC,
scored_at=datetime.datetime(2014, 4, 1).replace(tzinfo=pytz.utc)
)
......@@ -82,7 +92,7 @@ class TestSelfApi(CacheResetTest):
self.assertEqual(assessment['submission_uuid'], submission['uuid'])
self.assertEqual(assessment['points_earned'], 8)
self.assertEqual(assessment['points_possible'], 10)
self.assertEqual(assessment['feedback'], u'')
self.assertEqual(assessment['feedback'], u'' + self.OVERALL_FEEDBACK)
self.assertEqual(assessment['score_type'], u'SE')
def test_create_assessment_no_submission(self):
......@@ -90,7 +100,7 @@ class TestSelfApi(CacheResetTest):
with self.assertRaises(SelfAssessmentRequestError):
create_assessment(
'invalid_submission_uuid', u'𝖙𝖊𝖘𝖙 𝖚𝖘𝖊𝖗',
self.OPTIONS_SELECTED, self.RUBRIC,
self.OPTIONS_SELECTED, self.CRITERION_FEEDBACK, self.OVERALL_FEEDBACK, self.RUBRIC,
scored_at=datetime.datetime(2014, 4, 1).replace(tzinfo=pytz.utc)
)
......@@ -102,7 +112,22 @@ class TestSelfApi(CacheResetTest):
with self.assertRaises(SelfAssessmentRequestError):
create_assessment(
'invalid_submission_uuid', u'another user',
self.OPTIONS_SELECTED, self.RUBRIC,
self.OPTIONS_SELECTED, self.CRITERION_FEEDBACK, self.OVERALL_FEEDBACK, self.RUBRIC,
scored_at=datetime.datetime(2014, 4, 1).replace(tzinfo=pytz.utc)
)
def test_create_assessment_invalid_criterion_feedback(self):
# Create a submission
submission = create_submission(self.STUDENT_ITEM, "Test answer")
# Mutate the criterion feedback to not include all the appropriate criteria.
criterion_feedback = {"clarify": "not", "accurate": "sure"}
# Attempt to create a self-assessment with criterion feedback that does not match the rubric
with self.assertRaises(SelfAssessmentRequestError):
create_assessment(
submission['uuid'], u'𝖙𝖊𝖘𝖙 𝖚𝖘𝖊𝖗',
self.OPTIONS_SELECTED, criterion_feedback, self.OVERALL_FEEDBACK, self.RUBRIC,
scored_at=datetime.datetime(2014, 4, 1).replace(tzinfo=pytz.utc)
)
......@@ -118,7 +143,7 @@ class TestSelfApi(CacheResetTest):
with self.assertRaises(SelfAssessmentRequestError):
create_assessment(
submission['uuid'], u'𝖙𝖊𝖘𝖙 𝖚𝖘𝖊𝖗',
options, self.RUBRIC,
options, self.CRITERION_FEEDBACK, self.OVERALL_FEEDBACK, self.RUBRIC,
scored_at=datetime.datetime(2014, 4, 1).replace(tzinfo=pytz.utc)
)
......@@ -134,7 +159,7 @@ class TestSelfApi(CacheResetTest):
with self.assertRaises(SelfAssessmentRequestError):
create_assessment(
submission['uuid'], u'𝖙𝖊𝖘𝖙 𝖚𝖘𝖊𝖗',
options, self.RUBRIC,
options, self.CRITERION_FEEDBACK, self.OVERALL_FEEDBACK, self.RUBRIC,
scored_at=datetime.datetime(2014, 4, 1).replace(tzinfo=pytz.utc)
)
......@@ -150,7 +175,7 @@ class TestSelfApi(CacheResetTest):
with self.assertRaises(SelfAssessmentRequestError):
create_assessment(
submission['uuid'], u'𝖙𝖊𝖘𝖙 𝖚𝖘𝖊𝖗',
options, self.RUBRIC,
options, self.CRITERION_FEEDBACK, self.OVERALL_FEEDBACK, self.RUBRIC,
scored_at=datetime.datetime(2014, 4, 1).replace(tzinfo=pytz.utc)
)
......@@ -165,7 +190,7 @@ class TestSelfApi(CacheResetTest):
# Do not override the scored_at timestamp, so it should be set to the current time
assessment = create_assessment(
submission['uuid'], u'𝖙𝖊𝖘𝖙 𝖚𝖘𝖊𝖗',
self.OPTIONS_SELECTED, self.RUBRIC,
self.OPTIONS_SELECTED, self.CRITERION_FEEDBACK, self.OVERALL_FEEDBACK, self.RUBRIC,
)
# Retrieve the self-assessment
......@@ -183,14 +208,14 @@ class TestSelfApi(CacheResetTest):
# Self assess once
assessment = create_assessment(
submission['uuid'], u'𝖙𝖊𝖘𝖙 𝖚𝖘𝖊𝖗',
self.OPTIONS_SELECTED, self.RUBRIC,
self.OPTIONS_SELECTED, self.CRITERION_FEEDBACK, self.OVERALL_FEEDBACK, self.RUBRIC,
)
# Attempt to self-assess again, which should raise an exception
with self.assertRaises(SelfAssessmentRequestError):
create_assessment(
submission['uuid'], u'𝖙𝖊𝖘𝖙 𝖚𝖘𝖊𝖗',
self.OPTIONS_SELECTED, self.RUBRIC,
self.OPTIONS_SELECTED, self.CRITERION_FEEDBACK, self.OVERALL_FEEDBACK, self.RUBRIC,
)
# Expect that we still have the original assessment
......@@ -213,17 +238,20 @@ class TestSelfApi(CacheResetTest):
"options": []
})
criterion_feedback = copy.deepcopy(self.CRITERION_FEEDBACK)
criterion_feedback['feedback only'] = "This is the feedback for the Zero Option Criterion."
# Create a self-assessment for the submission
assessment = create_assessment(
submission['uuid'], u'𝖙𝖊𝖘𝖙 𝖚𝖘𝖊𝖗',
self.OPTIONS_SELECTED, rubric,
self.OPTIONS_SELECTED, criterion_feedback, self.OVERALL_FEEDBACK, rubric,
scored_at=datetime.datetime(2014, 4, 1).replace(tzinfo=pytz.utc)
)
# The self-assessment should have set the feedback for
# the criterion with no options to the feedback we provided
self.assertEqual(assessment["parts"][2]["option"], None)
self.assertEqual(assessment["parts"][2]["feedback"], u"")
self.assertEqual(assessment["parts"][2]["feedback"], u"This is the feedback for the Zero Option Criterion.")
def test_create_assessment_all_criteria_have_zero_options(self):
# Create a submission to self-assess
......@@ -237,14 +265,25 @@ class TestSelfApi(CacheResetTest):
# Create a self-assessment for the submission
# We don't select any options, since none of the criteria have options
options_selected = {}
# However, because they don't have options, they need to have criterion feedback.
criterion_feedback = {
'clarity': 'I thought it was about as accurate as Scrubs is to the medical profession.',
'accuracy': 'I thought it was about as accurate as Scrubs is to the medical profession.'
}
overall_feedback = ""
assessment = create_assessment(
submission['uuid'], u'𝖙𝖊𝖘𝖙 𝖚𝖘𝖊𝖗',
options_selected, rubric,
scored_at=datetime.datetime(2014, 4, 1).replace(tzinfo=pytz.utc)
options_selected, criterion_feedback, overall_feedback,
rubric, scored_at=datetime.datetime(2014, 4, 1).replace(tzinfo=pytz.utc)
)
# The self-assessment should have set the feedback for
# all criteria to the criterion feedback we provided.
for part in assessment["parts"]:
self.assertEqual(part["option"], None)
self.assertEqual(part["feedback"], u"")
self.assertEqual(
part["feedback"], u'I thought it was about as accurate as Scrubs is to the medical profession.'
)
......@@ -66,7 +66,7 @@ class CsvWriter(object):
output_streams (dictionary): Provide the file handles
to write CSV data to.
Kwargs:
Keyword Arguments:
progress_callback (callable): Callable that accepts
no arguments. Called once per submission loaded
from the database.
......
......@@ -107,7 +107,7 @@ class Command(BaseCommand):
print "-- Creating self assessment"
self_api.create_assessment(
submission_uuid, student_item['student_id'],
options_selected, rubric
options_selected, {}, " ".join(loremipsum.get_paragraphs(2)), rubric
)
@property
......
......@@ -21,6 +21,7 @@
</a>
</div>
<div id="openassessment_rubric_content_editor">
<div id="openassessment_rubric_instructions">
<p class = openassessment_description>
{% trans "For open response problems, assessment is rubric-based. Rubric criterion have point breakdowns and explanations to help students with peer and self assessment steps. For more information on how to build your rubric, see our online help documentation."%}
......@@ -56,5 +57,6 @@
</ul>
</div>
</div>
</div>
{% endspaceless %}
\ No newline at end of file
......@@ -9,7 +9,7 @@
<select class="openassessment_training_example_criterion_option setting-input" data-criterion="{{ criterion.name }}" data-option="{{ option.name }}">
<option value="">{% trans "Not Scored" %}</option>
{% for option in criterion.options %}
<option value={{ option.name }} data-points={{ option.points }} data-label={{ option.label }}
<option value="{{ option.name }}" data-points="{{ option.points }}" data-label="{{ option.label }}"
{% if criterion.option_selected == option.name %} selected {% endif %}
>
{{ option.label }} - {{ option.points }} {% trans "points" %}
......
......@@ -146,27 +146,42 @@
{% endif %}
{% endfor %}
{% if criterion.feedback %}
{% if criterion.peer_feedback or criterion.self_feedback %}
<li class="answer--feedback ui-toggle-visibility {% if criterion.options %}is--collapsed{% endif %}">
{% if criterion.options %}
<h5 class="answer--feedback__title ui-toggle-visibility__control">
<i class="ico icon-caret-right"></i>
<span class="answer--feedback__title__copy">{% trans "Additional Comments" %} ({{ criterion.feedback|length }})</span>
{% if criterion.self_feedback %}
<span class="answer--feedback__title__copy">{% trans "Additional Comments" %} ({{ criterion.peer_feedback|length|add:'1' }})</span>
{% else %}
<span class="answer--feedback__title__copy">{% trans "Additional Comments" %} ({{ criterion.peer_feedback|length }})</span>
{% endif %}
</h5>
{% endif %}
<ul class="answer--feedback__content {% if criterion.options %}ui-toggle-visibility__content{% endif %}">
{% for feedback in criterion.feedback %}
{% for feedback in criterion.peer_feedback %}
<li class="feedback feedback--{{ forloop.counter }}">
<h6 class="feedback__source">
{% trans "Peer" %} {{ forloop.counter }}
</h6>
<div class="feedback__value">
{{ feedback }}
{{ feedback }}
</div>
</li>
{% endfor %}
{% if criterion.self_feedback %}
<li class="feedback feedback--{{ forloop.counter }}">
<h6 class="feedback__source">
{% trans "Your Assessment" %}
</h6>
<div class="feedback__value">
{{ criterion.self_feedback }}
</div>
</li>
{% endif %}
</ul>
</li>
{% endif %}
......@@ -175,7 +190,7 @@
</li>
{% endwith %}
{% endfor %}
{% if peer_assessments %}
{% if peer_assessments or self_assessment.feedback %}
<li class="question question--feedback ui-toggle-visibility">
<h4 class="question__title ui-toggle-visibility__control">
<i class="ico icon-caret-right"></i>
......@@ -204,6 +219,23 @@
{% endif %}
{% endwith %}
{% endfor %}
{% if self_assessment.feedback %}
<li class="answer self-evaluation--0" id="question--feedback__answer-0">
<h5 class="answer__title">
<span class="answer__source">
<span class="label sr">{% trans "Self assessment" %}: </span>
<span class="value">{% trans "Self assessment" %}</span>
</span>
</h5>
<div class="answer__value">
<h6 class="label sr">{% trans "Your assessment" %}: </h6>
<div class="value">
<p>{{ self_assessment.feedback }}</p>
</div>
</div>
</li>
{% endif %}
</ul>
</li>
{% endif %}
......
{% spaceless %}
{% load i18n %}
<fieldset class="assessment__fields">
<ol class="list list--fields assessment__rubric">
{% for criterion in rubric_criteria %}
<li
class="field field--radio is--required assessment__rubric__question ui-toggle-visibility {% if criterion.options %}has--options{% endif %}"
id="assessment__rubric__question--{{ criterion.order_num }}"
>
<h4 class="question__title ui-toggle-visibility__control">
<i class="ico icon-caret-right"></i>
<span class="ui-toggle-visibility__control__copy question__title__copy">{{ criterion.prompt }}</span>
<span class="label--required sr">* ({% trans "Required" %})</span>
</h4>
<div class="ui-toggle-visibility__content">
<ol class="question__answers">
{% for option in criterion.options %}
<li class="answer">
<div class="wrapper--input">
<input type="radio"
name="{{ criterion.name }}"
id="assessment__rubric__question--{{ criterion.order_num }}__{{ option.order_num }}"
class="answer__value"
value="{{ option.name }}" />
<label for="assessment__rubric__question--{{ criterion.order_num }}__{{ option.order_num }}"
class="answer__label"
>{{ option.label }}</label>
</div>
<div class="wrapper--metadata">
<span class="answer__tip">{{ option.explanation }}</span>
<span class="answer__points">{{ option.points }} <span class="answer__points__label">{% trans "points" %}</span></span>
</div>
</li>
{% endfor %}
{% if criterion.feedback == 'optional' or criterion.feedback == 'required' %}
<li class="answer--feedback">
<div class="wrapper--input">
<label for="assessment__rubric__question--{{ criterion.order_num }}__feedback" class="answer__label">{% trans "Comments" %}</label>
<textarea
id="assessment__rubric__question--{{ criterion.order_num }}__feedback"
class="answer__value"
value="{{ criterion.name }}"
name="{{ criterion.name }}"
maxlength="300"
{% if criterion.feedback == 'required' %}required{% endif %}
>
</textarea>
</div>
</li>
{% endif %}
</ol>
</div>
</li>
{% endfor %}
<li class="wrapper--input field field--textarea assessment__rubric__question assessment__rubric__question--feedback" id="assessment__rubric__question--feedback">
<label class="question__title" for="assessment__rubric__question--feedback__value">
<span class="question__title__copy">{{ rubric_feedback_prompt }}</span>
</label>
<div class="wrapper--input">
<textarea
id="assessment__rubric__question--feedback__value"
placeholder="{% trans "I noticed that this response..." %}"
maxlength="500"
>
</textarea>
</div>
</li>
</ol>
</fieldset>
{% endspaceless %}
\ No newline at end of file
......@@ -72,77 +72,7 @@
</div>
<form id="peer-assessment--001__assessment" class="peer-assessment__assessment" method="post">
<fieldset class="assessment__fields">
<ol class="list list--fields assessment__rubric">
{% for criterion in rubric_criteria %}
<li
class="field field--radio is--required assessment__rubric__question ui-toggle-visibility {% if criterion.options %}has--options{% endif %}"
id="assessment__rubric__question--{{ criterion.order_num }}"
>
<h4 class="question__title ui-toggle-visibility__control">
<i class="ico icon-caret-right"></i>
<span class="ui-toggle-visibility__control__copy question__title__copy">{{ criterion.prompt }}</span>
<span class="label--required sr">* ({% trans "Required" %})</span>
</h4>
<div class="ui-toggle-visibility__content">
<ol class="question__answers">
{% for option in criterion.options %}
<li class="answer">
<div class="wrapper--input">
<input type="radio"
name="{{ criterion.name }}"
id="assessment__rubric__question--{{ criterion.order_num }}__{{ option.order_num }}"
class="answer__value"
value="{{ option.name }}" />
<label for="assessment__rubric__question--{{ criterion.order_num }}__{{ option.order_num }}"
class="answer__label"
>{{ option.label }}</label>
</div>
<div class="wrapper--metadata">
<span class="answer__tip">{{ option.explanation }}</span>
<span class="answer__points">{{ option.points }} <span class="answer__points__label">{% trans "points" %}</span></span>
</div>
</li>
{% endfor %}
{% if criterion.feedback == 'optional' or criterion.feedback == 'required' %}
<li class="answer--feedback">
<div class="wrapper--input">
<label for="assessment__rubric__question--{{ criterion.order_num }}__feedback" class="answer__label">{% trans "Comments" %}</label>
<textarea
id="assessment__rubric__question--{{ criterion.order_num }}__feedback"
class="answer__value"
value="{{ criterion.name }}"
name="{{ criterion.name }}"
maxlength="300"
{% if criterion.feedback == 'required' %}required{% endif %}
>
</textarea>
</div>
</li>
{% endif %}
</ol>
</div>
</li>
{% endfor %}
<li class="wrapper--input field field--textarea assessment__rubric__question assessment__rubric__question--feedback" id="assessment__rubric__question--feedback">
<label class="question__title" for="assessment__rubric__question--feedback__value">
<span class="question__title__copy">{{ rubric_feedback_prompt }}</span>
</label>
<div class="wrapper--input">
<textarea
id="assessment__rubric__question--feedback__value"
placeholder="{% trans "I noticed that this response..." %}"
maxlength="500"
>
</textarea>
</div>
</li>
</ol>
</fieldset>
{% include "openassessmentblock/oa_rubric.html" %}
</form>
</article>
</li>
......
......@@ -72,7 +72,7 @@
<button type="submit" id="file__upload" class="action action--upload is--disabled">{% trans "Upload your image" %}</button>
</li>
<li>
<div class="submission__answer__display__image">
<div class="submission__answer__display__image is--hidden">
<img id="submission__answer__image"
class="submission--image"
{% if file_url %}
......
......@@ -59,46 +59,7 @@
</article>
<form id="self-assessment--001__assessment" class="self-assessment__assessment" method="post">
<fieldset class="assessment__fields">
<ol class="list list--fields assessment__rubric">
{% for criterion in rubric_criteria %}
{% if criterion.options %}
<li
class="field field--radio is--required assessment__rubric__question ui-toggle-visibility has--options"
id="assessment__rubric__question--{{ criterion.order_num }}"
>
<h4 class="question__title ui-toggle-visibility__control">
<i class="ico icon-caret-right"></i>
<span class="question__title__copy">{{ criterion.prompt }}</span>
<span class="label--required sr">* ({% trans "Required" %})</span>
</h4>
<div class="ui-toggle-visibility__content">
<ol class="question__answers">
{% for option in criterion.options %}
<li class="answer">
<div class="wrapper--input">
<input type="radio"
name="{{ criterion.name }}"
id="assessment__rubric__question--{{ criterion.order_num }}__{{ option.order_num }}"
class="answer__value"
value="{{ option.name }}" />
<label for="assessment__rubric__question--{{ criterion.order_num }}__{{ option.order_num }}"
class="answer__label">{{ option.label }}</label>
</div>
<div class="wrapper--metadata">
<span class="answer__tip">{{ option.explanation }}</span>
<span class="answer__points">{{option.points}} <span class="answer__points__label">{% trans "points" %}</span></span>
</div>
</li>
{% endfor %}
</ol>
</div>
</li>
{% endif %}
{% endfor %}
</ol>
</fieldset>
{% include "openassessmentblock/oa_rubric.html" %}
</form>
</div>
......
......@@ -34,7 +34,7 @@ def create_workflow(submission_uuid, steps, on_init_params=None):
steps (list): List of steps that are part of the workflow, in the order
that the user must complete them. Example: `["peer", "self"]`
Kwargs:
Keyword Arguments:
on_init_params (dict): The parameters to pass to each assessment module
on init. Keys are the assessment step names.
......@@ -279,7 +279,7 @@ def get_status_counts(course_id, item_id, steps):
"""
Count how many workflows have each status, for a given item in a course.
Kwargs:
Keyword Arguments:
course_id (unicode): The ID of the course.
item_id (unicode): The ID of the item in the course.
steps (list): A list of assessment steps for this problem.
......
......@@ -441,7 +441,7 @@ def update_workflow_async(sender, **kwargs):
Args:
sender (object): Not used
Kwargs:
Keyword Arguments:
submission_uuid (str): The UUID of the submission associated
with the workflow being updated.
......
......@@ -364,7 +364,7 @@ class TestAssessmentWorkflowApi(CacheResetTest):
item_id (unicode): Item ID for the submission
status (unicode): One of acceptable status values (e.g. "peer", "self", "waiting", "done")
Kwargs:
Keyword Arguments:
answer (unicode): Submission answer.
steps (list): A list of steps to create the workflow with. If not
specified the default steps are "peer", "self".
......
......@@ -75,6 +75,27 @@ def create_rubric_dict(prompt, criteria):
}
def clean_criterion_feedback(rubric_criteria, criterion_feedback):
"""
Remove per-criterion feedback for criteria with feedback disabled
in the rubric.
Args:
rubric_criteria (list): The rubric criteria from the problem definition.
criterion_feedback (dict): Mapping of criterion names to feedback text.
Returns:
dict
"""
return {
criterion['name']: criterion_feedback[criterion['name']]
for criterion in rubric_criteria
if criterion['name'] in criterion_feedback
and criterion.get('feedback', 'disabled') in ['optional', 'required']
}
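# Hypothetical usage (the criterion names below are illustrative only):
# feedback for a criterion whose rubric definition disables feedback is dropped.
#
#   >>> clean_criterion_feedback(
#   ...     [{'name': 'clarity', 'feedback': 'optional'},
#   ...      {'name': 'accuracy', 'feedback': 'disabled'}],
#   ...     {'clarity': 'Very clear.', 'accuracy': 'Quite accurate.'}
#   ... )
#   {'clarity': 'Very clear.'}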
def make_django_template_key(key):
"""
Django templates access dictionary items using dot notation,
......
......@@ -34,7 +34,7 @@ class GradeMixin(object):
Args:
data: Not used.
Kwargs:
Keyword Arguments:
suffix: Not used.
Returns:
......@@ -135,7 +135,7 @@ class GradeMixin(object):
'peer_assessments': peer_assessments,
'self_assessment': self_assessment,
'example_based_assessment': example_based_assessment,
'rubric_criteria': self._rubric_criteria_grade_context(peer_assessments),
'rubric_criteria': self._rubric_criteria_grade_context(peer_assessments, self_assessment),
'has_submitted_feedback': has_submitted_feedback,
'allow_file_upload': self.allow_file_upload,
'file_url': self.get_download_url_from_submission(student_submission)
......@@ -196,7 +196,7 @@ class GradeMixin(object):
data (dict): Can provide keys 'feedback_text' (unicode) and
'feedback_options' (list of unicode).
Kwargs:
Keyword Arguments:
suffix (str): Unused
Returns:
......@@ -226,7 +226,7 @@ class GradeMixin(object):
)
return {'success': True, 'msg': _(u"Feedback saved.")}
def _rubric_criteria_grade_context(self, peer_assessments):
def _rubric_criteria_grade_context(self, peer_assessments, self_assessment):
"""
Sanitize the rubric criteria into a format that can be passed
into the grade complete Django template.
......@@ -237,6 +237,7 @@ class GradeMixin(object):
Args:
peer_assessments (list of dict): Serialized assessment models from the peer API.
self_assessment (dict): Serialized assessment model from the self API
Returns:
list of criterion dictionaries
......@@ -258,17 +259,25 @@ class GradeMixin(object):
]
"""
criteria = copy.deepcopy(self.rubric_criteria_with_labels)
criteria_feedback = defaultdict(list)
peer_criteria_feedback = defaultdict(list)
self_criteria_feedback = {}
for assessment in peer_assessments:
for part in assessment['parts']:
if part['feedback']:
part_criterion_name = part['criterion']['name']
criteria_feedback[part_criterion_name].append(part['feedback'])
peer_criteria_feedback[part_criterion_name].append(part['feedback'])
if self_assessment:
for part in self_assessment['parts']:
if part['feedback']:
part_criterion_name = part['criterion']['name']
self_criteria_feedback[part_criterion_name] = part['feedback']
for criterion in criteria:
criterion_name = criterion['name']
criterion['feedback'] = criteria_feedback[criterion_name]
criterion['peer_feedback'] = peer_criteria_feedback[criterion_name]
criterion['self_feedback'] = self_criteria_feedback.get(criterion_name)
return criteria
......
......@@ -26,7 +26,7 @@ class MessageMixin(object):
Args:
data: Not used.
Kwargs:
Keyword Arguments:
suffix: Not used.
Returns:
......
......@@ -273,8 +273,6 @@ class OpenAssessmentBlock(
else:
return False
@property
def in_studio_preview(self):
"""
......@@ -477,7 +475,7 @@ class OpenAssessmentBlock(
the peer grading step AFTER the submission deadline has passed.
This may not be necessary when we implement a grading interface specifically for course staff.
Kwargs:
Keyword Arguments:
step (str): The step in the workflow to check. Options are:
None: check whether the problem as a whole is open.
"submission": check whether the submission section is open.
......@@ -587,7 +585,7 @@ class OpenAssessmentBlock(
"""
Check if a question has been released.
Kwargs:
Keyword Arguments:
step (str): The step in the workflow to check.
None: check whether the problem as a whole is open.
"submission": check whether the submission section is open.
......@@ -696,4 +694,3 @@ class OpenAssessmentBlock(
return key.to_deprecated_string()
else:
return unicode(key)
......@@ -11,9 +11,9 @@ from openassessment.assessment.errors import (
from openassessment.workflow.errors import AssessmentWorkflowError
from openassessment.fileupload import api as file_upload_api
from openassessment.fileupload.api import FileUploadError
from .data_conversion import create_rubric_dict
from .resolve_dates import DISTANT_FUTURE
from .data_conversion import create_rubric_dict, clean_criterion_feedback
logger = logging.getLogger(__name__)
......@@ -71,7 +71,7 @@ class PeerAssessmentMixin(object):
self.submission_uuid,
self.get_student_item_dict()["student_id"],
data['options_selected'],
self._clean_criterion_feedback(data['criterion_feedback']),
clean_criterion_feedback(self.rubric_criteria_with_labels, data['criterion_feedback']),
data['overall_feedback'],
create_rubric_dict(self.prompt, self.rubric_criteria_with_labels),
assessment_ui_model['must_be_graded_by']
......@@ -265,22 +265,3 @@ class PeerAssessmentMixin(object):
logger.exception(err)
return peer_submission
def _clean_criterion_feedback(self, criterion_feedback):
"""
Remove per-criterion feedback for criteria with feedback disabled
in the rubric.
Args:
criterion_feedback (dict): Mapping of criterion names to feedback text.
Returns:
dict
"""
return {
criterion['name']: criterion_feedback[criterion['name']]
for criterion in self.rubric_criteria_with_labels
if criterion['name'] in criterion_feedback
and criterion.get('feedback', 'disabled') in ['optional', 'required']
}
......@@ -9,6 +9,7 @@ from openassessment.workflow import api as workflow_api
from submissions import api as submission_api
from .data_conversion import create_rubric_dict
from .resolve_dates import DISTANT_FUTURE
from .data_conversion import create_rubric_dict, clean_criterion_feedback
logger = logging.getLogger(__name__)
......@@ -113,6 +114,12 @@ class SelfAssessmentMixin(object):
if 'options_selected' not in data:
return {'success': False, 'msg': _(u"Missing options_selected key in request")}
if 'overall_feedback' not in data:
return {'success': False, 'msg': _('Must provide overall feedback in the assessment')}
if 'criterion_feedback' not in data:
return {'success': False, 'msg': _('Must provide feedback for criteria in the assessment')}
if self.submission_uuid is None:
return {'success': False, 'msg': _(u"You must submit a response before you can perform a self-assessment.")}
......@@ -121,6 +128,8 @@ class SelfAssessmentMixin(object):
self.submission_uuid,
self.get_student_item_dict()['student_id'],
data['options_selected'],
clean_criterion_feedback(self.rubric_criteria, data['criterion_feedback']),
data['overall_feedback'],
create_rubric_dict(self.prompt, self.rubric_criteria_with_labels)
)
self.publish_assessment_event("openassessmentblock.self_assess", assessment)
......
......@@ -76,7 +76,7 @@ describe("OpenAssessment.PeerView", function() {
// Provide overall feedback
var overallFeedback = "Good job!";
view.overallFeedback(overallFeedback);
view.rubric.overallFeedback(overallFeedback);
// Submit the peer assessment
view.peerAssess();
......
......@@ -55,8 +55,28 @@ describe("OpenAssessment.SelfView", function() {
it("Sends a self assessment to the server", function() {
spyOn(server, 'selfAssess').andCallThrough();
// Select options in the rubric
var optionsSelected = {};
optionsSelected['Criterion 1'] = 'Poor';
optionsSelected['Criterion 2'] = 'Fair';
optionsSelected['Criterion 3'] = 'Good';
view.rubric.optionsSelected(optionsSelected);
// Provide per-criterion feedback
var criterionFeedback = {};
criterionFeedback['Criterion 1'] = "You did a fair job";
criterionFeedback['Criterion 3'] = "You did a good job";
view.rubric.criterionFeedback(criterionFeedback);
// Provide overall feedback
var overallFeedback = "Good job!";
view.rubric.overallFeedback(overallFeedback);
view.selfAssess();
expect(server.selfAssess).toHaveBeenCalled();
expect(server.selfAssess).toHaveBeenCalledWith(
optionsSelected, criterionFeedback, overallFeedback
);
});
it("Re-enables the self assess button on error", function() {
......
describe("OpenAssessment.FileUploader", function() {
var fileUploader = null;
var TEST_URL = "http://www.example.com/upload";
var TEST_IMAGE = {
data: "abcdefghijklmnopqrstuvwxyz",
name: "test.jpg",
size: 10471,
type: "image/jpeg"
};
var TEST_CONTENT_TYPE = "image/jpeg";
beforeEach(function() {
fileUploader = new OpenAssessment.FileUploader();
});
it("logs a file upload event", function() {
// Stub the AJAX call, simulating success
var successPromise = $.Deferred(
function(defer) { defer.resolve(); }
).promise();
spyOn($, 'ajax').andReturn(successPromise);
// Stub the event logger
spyOn(Logger, 'log');
// Upload a file
fileUploader.upload(TEST_URL, TEST_IMAGE, TEST_CONTENT_TYPE);
// Verify that the event was logged
expect(Logger.log).toHaveBeenCalledWith(
"openassessment.upload_file", {
contentType: TEST_CONTENT_TYPE,
imageName: TEST_IMAGE.name,
imageSize: TEST_IMAGE.size,
imageType: TEST_IMAGE.type
}
);
});
});
\ No newline at end of file
......@@ -163,6 +163,29 @@ describe("OpenAssessment.Server", function() {
});
});
it("sends a self-assessment to the XBlock", function() {
stubAjax(true, {success: true, msg: ''});
var success = false;
var options = {clarity: "Very clear", precision: "Somewhat precise"};
var criterionFeedback = {clarity: "This essay was very clear."};
server.selfAssess(options, criterionFeedback, "Excellent job!").done(
function() { success = true; }
);
expect(success).toBe(true);
expect($.ajax).toHaveBeenCalledWith({
url: '/self_assess',
type: "POST",
data: JSON.stringify({
options_selected: options,
criterion_feedback: criterionFeedback,
overall_feedback: "Excellent job!"
})
});
});
it("sends a training assessment to the XBlock", function() {
stubAjax(true, {success: true, msg: '', correct: true});
var success = false;
......@@ -300,7 +323,7 @@ describe("OpenAssessment.Server", function() {
it("informs the caller of an AJAX error when sending a self assessment", function() {
stubAjax(false, null);
var receivedMsg = null;
server.selfAssess("Test").fail(function(errorMsg) { receivedMsg = errorMsg; });
server.selfAssess("Test", {}, "Excellent job!").fail(function(errorMsg) { receivedMsg = errorMsg; });
expect(receivedMsg).toContain('This assessment could not be submitted');
});
......
......@@ -8,7 +8,7 @@ PUT requests on the server.
Args:
url (string): The one-time URL we're uploading to.
data (object): The object to upload, which should have properties:
imageData (object): The object to upload, which should have properties:
data (string)
name (string)
size (int)
......@@ -20,18 +20,32 @@ Returns:
*/
OpenAssessment.FileUploader = function() {
this.upload = function(url, data, contentType) {
this.upload = function(url, imageData, contentType) {
return $.Deferred(
function(defer) {
$.ajax({
url: url,
type: 'PUT',
data: data,
data: imageData,
async: false,
processData: false,
contentType: contentType,
}).done(
function(data, textStatus, jqXHR) { defer.resolve(); }
function(data, textStatus, jqXHR) {
// Log an analytics event
Logger.log(
"openassessment.upload_file",
{
contentType: contentType,
imageName: imageData.name,
imageSize: imageData.size,
imageType: imageData.type
}
);
// Return control to the caller
defer.resolve();
}
).fail(
function(data, textStatus, jqXHR) {
defer.rejectWith(this, [textStatus]);
......
......@@ -197,7 +197,7 @@ OpenAssessment.PeerView.prototype = {
this.server.peerAssess(
this.rubric.optionsSelected(),
this.rubric.criterionFeedback(),
this.overallFeedback()
this.rubric.overallFeedback()
).done(
successFunction
).fail(function(errMsg) {
......@@ -206,28 +206,5 @@ OpenAssessment.PeerView.prototype = {
});
},
/**
Get or set overall feedback on the submission.
Args:
overallFeedback (string or undefined): The overall feedback text (optional).
Returns:
string or undefined
Example usage:
>>> view.overallFeedback('Good job!'); // Set the feedback text
>>> view.overallFeedback(); // Retrieve the feedback text
'Good job!'
**/
overallFeedback: function(overallFeedback) {
var selector = '#assessment__rubric__question--feedback__value';
if (typeof overallFeedback === 'undefined') {
return $(selector, this.element).val();
}
else {
$(selector, this.element).val(overallFeedback);
}
}
};
......@@ -93,6 +93,7 @@ OpenAssessment.ResponseView.prototype = {
function(eventObject) {
// Override default form submission
eventObject.preventDefault();
$('.submission__answer__display__image', view.element).removeClass('is--hidden');
view.fileUpload();
}
);
......
......@@ -47,6 +47,31 @@ OpenAssessment.Rubric.prototype = {
},
/**
Get or set overall feedback on the submission.
Args:
overallFeedback (string or undefined): The overall feedback text (optional).
Returns:
string or undefined
Example usage:
>>> view.overallFeedback('Good job!'); // Set the feedback text
>>> view.overallFeedback(); // Retrieve the feedback text
'Good job!'
**/
overallFeedback: function(overallFeedback) {
var selector = '#assessment__rubric__question--feedback__value';
if (typeof overallFeedback === 'undefined') {
return $(selector, this.element).val();
}
else {
$(selector, this.element).val(overallFeedback);
}
},
/**
Get or set the options selected in the rubric.
Args:
......
......@@ -103,8 +103,11 @@ OpenAssessment.SelfView.prototype = {
baseView.toggleActionError('self', null);
view.selfSubmitEnabled(false);
var options = this.rubric.optionsSelected();
this.server.selfAssess(options).done(
this.server.selfAssess(
this.rubric.optionsSelected(),
this.rubric.criterionFeedback(),
this.rubric.overallFeedback()
).done(
function() {
baseView.loadAssessmentModules();
baseView.scrollToTop();
......
......@@ -269,6 +269,8 @@ if (typeof OpenAssessment.Server == "undefined" || !OpenAssessment.Server) {
Args:
optionsSelected (object literal): Keys are criteria names,
values are the option text the user selected for the criterion.
criterionFeedback (object literal): Keys are criteria names,
values are free-form text feedback for the criterion.
overallFeedback (string): Free-form text feedback on the submission as a whole.
Returns:
A JQuery promise, which resolves with no args if successful
......@@ -282,10 +284,12 @@ if (typeof OpenAssessment.Server == "undefined" || !OpenAssessment.Server) {
function(errorMsg) { console.log(errorMsg); }
);
**/
selfAssess: function(optionsSelected) {
selfAssess: function(optionsSelected, criterionFeedback, overallFeedback) {
var url = this.url('self_assess');
var payload = JSON.stringify({
options_selected: optionsSelected
options_selected: optionsSelected,
criterion_feedback: criterionFeedback,
overall_feedback: overallFeedback
});
return $.Deferred(function(defer) {
$.ajax({ type: "POST", url: url, data: payload }).done(
......
......@@ -21,11 +21,19 @@ if (typeof window.gettext === 'undefined') {
// If ngettext isn't found (workbench, testing, etc.), return the simplistic english version
if (typeof window.ngettext === 'undefined') {
window.ngettext = function(singular_text, plural_text, n) {
if (n > 1){
window.ngettext = function (singular_text, plural_text, n) {
if (n > 1) {
return plural_text;
} else {
return singular_text;
}
}
}
// Stub event logging if the runtime doesn't provide it
if (typeof window.Logger === 'undefined') {
window.Logger = {
log: function(event_type, data, kwargs) {}
};
}
\ No newline at end of file
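A small sketch of what these fallbacks do when the LMS runtime does not provide ``ngettext`` or ``Logger`` (workbench, tests): pluralization falls back to a naive ``n > 1`` check, and event logging becomes a no-op. The snippet reproduces the pattern outside the XBlock for illustration only.

.. code:: javascript

    // Naive pluralization fallback: plural text for n > 1, otherwise singular.
    var ngettextStub = function(singularText, pluralText, n) {
        return n > 1 ? pluralText : singularText;
    };
    console.log(ngettextStub("1 point", "many points", 1)); // "1 point"
    console.log(ngettextStub("1 point", "many points", 3)); // "many points"

    // Event-logging fallback: a no-op object, so Logger.log(...) calls never fail.
    var LoggerStub = { log: function(eventType, data, kwargs) {} };
    LoggerStub.log("openassessment.example_event", { example: true });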
......@@ -33,11 +33,15 @@ OpenAssessment.ItemUtilities = {
refreshOptionString: function(element) {
var points = $(element).data('points');
var label = $(element).data('label');
var singular_string = label + " - " + points + " point";
var multiple_string = label + " - " + points + " points";
// We don't want the lack of a label to make it look like - 1 points.
if (label == ""){
label = gettext('Unnamed Option');
}
var singularString = label + " - " + points + " point";
var multipleString = label + " - " + points + " points";
// If the option doesn't have a data points value, that indicates to us that it is not a user-specified option,
// but represents the "Not Selected" option which all criterion drop-downs have.
if (typeof(points) === 'undefined'){
if (typeof points === 'undefined') {
$(element).text(
gettext('Not Selected')
);
......@@ -45,7 +49,7 @@ OpenAssessment.ItemUtilities = {
// Otherwise, set the text of the option element to be the properly conjugated, translated string.
else {
$(element).text(
ngettext(singular_string, multiple_string, points)
ngettext(singularString, multipleString, points)
);
}
}
......
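The option-label handling above boils down to three cases, shown here as a pure function that is not part of the repository: a placeholder when no points value is attached, an "Unnamed Option" fallback for empty labels, and a pluralized "point(s)" suffix otherwise. The real code reads points and label from jQuery data attributes, translates the strings with gettext/ngettext, and writes the result into the option element.

.. code:: javascript

    // Illustrative only: mirrors the branches in refreshOptionString without the DOM plumbing.
    function optionText(label, points) {
        if (typeof points === 'undefined') {
            return "Not Selected";        // the placeholder option every criterion drop-down has
        }
        if (label === "") {
            label = "Unnamed Option";     // avoid rendering " - 1 points"
        }
        var suffix = points > 1 ? "points" : "point";  // stands in for ngettext()
        return label + " - " + points + " " + suffix;
    }

    console.log(optionText("Fair", 1));          // "Fair - 1 point"
    console.log(optionText("", 3));              // "Unnamed Option - 3 points"
    console.log(optionText("Poor", undefined));  // "Not Selected"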
......@@ -71,7 +71,7 @@ OpenAssessment.StudentTrainingListener.prototype = {
.data("points", data.points)
.data("label", data.label);
// Sets the option's text description, and ads it to the criterion.
// Sets the option's text description, and adds it to the criterion.
OpenAssessment.ItemUtilities.refreshOptionString(option);
$(criterion).append(option);
examplesUpdated = true;
......
......@@ -11,6 +11,7 @@ Returns:
OpenAssessment.ValidationAlert = function (element) {
var alert = this;
this.element = element;
this.rubricContentElement = $('#openassessment_rubric_content_editor');
this.title = $(".openassessment_alert_title", this.element);
this.message = $(".openassessment_alert_message", this.element);
$(".openassessment_alert_close", element).click(function(eventObject) {
......@@ -27,6 +28,7 @@ OpenAssessment.ValidationAlert.prototype = {
*/
hide: function() {
this.element.addClass('is--hidden');
this.rubricContentElement.removeClass('openassessment_alert_shown');
},
/**
......@@ -34,6 +36,7 @@ OpenAssessment.ValidationAlert.prototype = {
*/
show : function() {
this.element.removeClass('is--hidden');
this.rubricContentElement.addClass('openassessment_alert_shown');
},
/**
......
......@@ -230,12 +230,13 @@
}
.oa_editor_content_wrapper {
height: 100%;
height: Calc(100% - 1px);
width: 100%;
border-radius: 3px;
border: 1px solid $edx-gray-d1;
background-color: white;
overflow-y: scroll;
position: absolute;
}
#openassessment_prompt_editor {
......@@ -426,20 +427,17 @@
#oa_rubric_editor_wrapper{
#openassessment_rubric_validation_alert{
-webkit-animation: notificationSlideDown 1s ease-in-out 1;
-moz-animation: notificationSlideDown 1s ease-in-out 1;
animation: notificationSlideDown 1s ease-in-out 1;
-webkit-animation-fill-mode: forwards;
-moz-animation-fill-mode: forwards;
animation-fill-mode: forwards;
background-color: #323232;
height: auto;
width: 100%;
border-top-left-radius: 2px;
border-top-right-radius: 2px;
background-color: #323232;
border-bottom: 3px solid rgb(192, 172, 0);
padding: 10px;
position: absolute;
z-index: 10;
width: 100%;
min-height: 70px;
@include transition (color 0.50s ease-in-out 0s);
.openassessment_alert_icon:before{
font-family: FontAwesome;
......@@ -448,7 +446,7 @@
color: rgb(192, 172, 0);
float: left;
font-size: 200%;
margin: 20px 0px 0px 25px;
margin: 1.5% 0px 0px 2%;
}
.openassessment_alert_header {
width: 85%;
......@@ -470,11 +468,11 @@
display: inline-block;
position: absolute;
top: 0px;
right: 10px;
right: 0px;
color: #e9e9e9;
background: #323232;
text-align: center;
padding: 2px 5px;
margin: 5px 10px;
[class^="icon"] {
width: auto;
......@@ -488,6 +486,17 @@
}
}
#openassessment_rubric_content_editor{
height: 100%;
overflow-y: scroll;
}
#openassessment_rubric_content_editor.openassessment_alert_shown{
height: Calc(100% - 70px);
position: absolute;
bottom: 0;
}
.wrapper-comp-settings{
display: initial;
}
......@@ -1024,21 +1033,43 @@
.action--upload {
@extend %btn--secondary;
@extend %action-2;
display: block;
text-align: center;
margin-bottom: ($baseline-v/2);
float: right;
display: inline-block;
margin: ($baseline-v/2) 0;
box-shadow: none;
}
.file--upload {
margin-top: $baseline-v/2;
margin-bottom: $baseline-v/2;
margin: $baseline-v ($baseline-v/2);
}
}
.self-assessment__display__header
.self-assessment__display__title,
.peer-assessment__display__header
.peer-assessment__display__title,
.submission__answer__display
.submission__answer__display__title{
margin: 10px 0;
}
.submission--image {
max-height: 600px;
max-width: $max-width/2;
margin-bottom: $baseline-v;
.self-assessment__display__image,
.peer-assessment__display__image,
.submission__answer__display__image{
@extend .submission__answer__display__content;
max-height: 400px;
text-align: left;
overflow: auto;
img{
max-height: 100%;
max-width: 100%;
}
}
.submission__answer__display__image
.submission--image{
max-height: 250px;
max-width: 100%;
}
// Developer SASS for Continued Grading.
......@@ -1046,4 +1077,4 @@
.action--continue--grading {
@extend .action--submit;
}
}
\ No newline at end of file
}
......@@ -49,11 +49,11 @@
}
@include media($bp-dm) {
@include span-columns(9 of 12);
@include span-columns(8 of 12);
}
@include media($bp-dl) {
@include span-columns(9 of 12);
@include span-columns(8 of 12);
}
@include media($bp-dx) {
......@@ -204,7 +204,3 @@
color: $copy-staff-color !important;
}
}
.modal-lg.modal-window.confirm.openassessment_modal_window{
z-index: 100000;
}
......@@ -55,12 +55,12 @@
}
@include media($bp-dm) {
@include span-columns(9 of 12);
@include span-columns(8 of 12);
margin-bottom: 0;
}
@include media($bp-dl) {
@include span-columns(9 of 12);
@include span-columns(8 of 12);
margin-bottom: 0;
}
......@@ -173,14 +173,14 @@
}
@include media($bp-dm) {
@include span-columns(3 of 12);
@include span-columns(4 of 12);
@include omega();
position: relative;
top:-12px;
}
@include media($bp-dl) {
@include span-columns(3 of 12);
@include span-columns(4 of 12);
@include omega();
position: relative;
top: -12px;
......@@ -201,7 +201,7 @@
// step content wrapper
.wrapper--step__content {
margin-top: ($baseline-v/2);
padding-top: $baseline-v;
padding-top: ($baseline-v/2);
border-top: 1px solid $color-decorative-tertiary;
}
......
......@@ -37,7 +37,7 @@ class StudentTrainingMixin(object):
Args:
data: Not used.
Kwargs:
Keyword Arguments:
suffix: Not used.
Returns:
......@@ -169,6 +169,15 @@ class StudentTrainingMixin(object):
corrections = student_training.assess_training_example(
self.submission_uuid, data['options_selected']
)
self.runtime.publish(
self,
"openassessment.student_training_assess_example",
{
"submission_uuid": self.submission_uuid,
"options_selected": data["options_selected"],
"corrections": corrections
}
)
except student_training.StudentTrainingRequestError:
msg = (
u"Could not check student training scores for "
......
......@@ -69,7 +69,14 @@ class StudioMixin(object):
def editor_context(self):
"""
Retrieve the XBlock's content definition.
Update the XBlock's XML.
Args:
data (dict): Data from the request; should have a value for the key 'xml'
containing the XML for this XBlock.
Keyword Arguments:
suffix (str): Not used
Returns:
dict with keys
......@@ -122,7 +129,7 @@ class StudioMixin(object):
data (dict): Data from the request; should have the format described
in the editor schema.
Kwargs:
Keyword Arguments:
suffix (str): Not used
Returns:
......@@ -207,7 +214,7 @@ class StudioMixin(object):
Args:
data (dict): Not used
Kwargs:
Keyword Arguments:
suffix (str): Not used
Returns:
......@@ -293,22 +300,29 @@ class StudioMixin(object):
"""
order = copy.deepcopy(self.editor_assessments_order)
used_assessments = [asmnt['name'] for asmnt in self.valid_assessments]
default_editor_order = copy.deepcopy(DEFAULT_EDITOR_ASSESSMENTS_ORDER)
# Backwards compatibility:
# If the problem already contains example-based assessment
# then allow the editor to display example-based assessments.
if 'example-based-assessment' in used_assessments:
default_editor_order.insert(0, 'example-based-assessment')
# Backwards compatibility:
# If the editor assessments order doesn't match the problem order,
# fall back to the problem order.
# This handles the migration of problems created pre-authoring,
# which will have the default editor order.
used_assessments = [asmnt['name'] for asmnt in self.valid_assessments]
problem_order_indices = [
order.index(asmnt_name) for asmnt_name in used_assessments
if asmnt_name in order
]
if problem_order_indices != sorted(problem_order_indices):
unused_assessments = list(set(DEFAULT_EDITOR_ASSESSMENTS_ORDER) - set(used_assessments))
unused_assessments = list(set(default_editor_order) - set(used_assessments))
return sorted(unused_assessments) + used_assessments
# Forwards compatibility:
# Include any additional assessments that may have been added since the problem was created.
else:
return order + list(set(DEFAULT_EDITOR_ASSESSMENTS_ORDER) - set(order))
return order + list(set(default_editor_order) - set(order))
......@@ -19,7 +19,7 @@ def scenario(scenario_path, user_id=None):
Args:
scenario_path (str): Path to the scenario XML file.
Kwargs:
Keyword Arguments:
user_id (str or None): User ID to log in as, or None.
Returns:
......@@ -109,7 +109,7 @@ class XBlockHandlerTestCase(CacheResetTest):
handler_name (str): The name of the handler.
content (unicode): Content of the request.
Kwargs:
Keyword Arguments:
response_format (None or str): Expected format of the response string.
If `None`, return the raw response content; if 'json', parse the
response as JSON and return the result.
......
......@@ -115,9 +115,16 @@ class TestGrade(XBlockHandlerTestCase):
u'𝖋𝖊𝖊𝖉𝖇𝖆𝖈𝖐 𝖔𝖓𝖑𝖞': u"Ṫḧïṡ ïṡ ṡöṁë ḟëëḋḅäċḳ."
}
self_assessment = copy.deepcopy(self.ASSESSMENTS[0])
self_assessment['criterion_feedback'] = {
u'𝖋𝖊𝖊𝖉𝖇𝖆𝖈𝖐 𝖔𝖓𝖑𝖞': "Feedback here",
u'Form': 'lots of feedback yes"',
u'𝓒𝓸𝓷𝓬𝓲𝓼𝓮': "such feedback"
}
# Submit, assess, and render the grade view
self._create_submission_and_assessments(
xblock, self.SUBMISSION, self.PEERS, peer_assessments, self.ASSESSMENTS[0]
xblock, self.SUBMISSION, self.PEERS, peer_assessments, self_assessment
)
# Render the grade section
......@@ -172,11 +179,13 @@ class TestGrade(XBlockHandlerTestCase):
# Verify that the context for the grade complete page contains the feedback
_, context = xblock.render_grade_complete(xblock.get_workflow_info())
criteria = context['rubric_criteria']
self.assertEqual(criteria[0]['feedback'], [
self.assertEqual(criteria[0]['peer_feedback'], [
u'Peer 2: ฝﻉɭɭ ɗѻกﻉ!',
u'Peer 1: ฝﻉɭɭ ɗѻกﻉ!',
])
self.assertEqual(criteria[1]['feedback'], [u'Peer 2: ƒαιя נσв'])
self.assertEqual(criteria[0]['self_feedback'], u'Peer 1: ฝﻉɭɭ ɗѻกﻉ!')
self.assertEqual(criteria[1]['peer_feedback'], [u'Peer 2: ƒαιя נσв'])
# The order of the peers in the per-criterion feedback needs
# to match the order of the peer assessments
......@@ -346,7 +355,7 @@ class TestGrade(XBlockHandlerTestCase):
peer_assessments (list of dict): List of assessment dictionaries for peer assessments.
self_assessment (dict): Dict of assessment for self-assessment.
Kwargs:
Keyword Arguments:
waiting_for_peer (bool): If true, skip creation of peer assessments for the user's submission.
Returns:
......@@ -402,5 +411,6 @@ class TestGrade(XBlockHandlerTestCase):
if self_assessment is not None:
self_api.create_assessment(
submission['uuid'], student_id, self_assessment['options_selected'],
self_assessment['criterion_feedback'], self_assessment['overall_feedback'],
{'criteria': xblock.rubric_criteria}
)
......@@ -548,7 +548,7 @@ class TestDates(XBlockHandlerTestCase):
expected_start (datetime): Expected start date.
expected_due (datetime): Expected due date.
Kwargs:
Keyword Arguments:
released (bool): If set, check whether the XBlock has been released.
course_staff (bool): Whether to treat the user as course staff.
......
......@@ -507,7 +507,7 @@ class TestPeerAssessmentRender(XBlockHandlerTestCase):
expected_path (str): The expected template path.
expected_context (dict): The expected template context.
Kwargs:
Keyword Arguments:
continue_grading (bool): If true, the user has chosen to continue grading.
workflow_status (str): If provided, simulate this status from the workflow API.
graded_enough (bool): Did the student meet the requirement by assessing enough peers?
......@@ -679,7 +679,7 @@ class TestPeerAssessHandler(XBlockHandlerTestCase):
scorer_id (unicode): The ID of the student creating the assessment.
assessment (dict): Serialized assessment model.
Kwargs:
Keyword Arguments:
expect_failure (bool): If true, expect a failure response and return None
Returns:
......
......@@ -9,6 +9,7 @@ import mock
import pytz
from openassessment.assessment.api import self as self_api
from openassessment.workflow import api as workflow_api
from openassessment.xblock.data_conversion import create_rubric_dict
from .base import XBlockHandlerTestCase, scenario
......@@ -23,6 +24,8 @@ class TestSelfAssessment(XBlockHandlerTestCase):
ASSESSMENT = {
'options_selected': {u'𝓒𝓸𝓷𝓬𝓲𝓼𝓮': u'ﻉซƈﻉɭɭﻉกՇ', u'Form': u'Fair'},
'criterion_feedback': {},
'overall_feedback': ""
}
@scenario('data/self_assessment_scenario.xml', user_id='Bob')
......@@ -87,6 +90,10 @@ class TestSelfAssessment(XBlockHandlerTestCase):
# Submit a self assessment for a rubric with a feedback-only criterion
assessment_dict = {
'options_selected': {u'vocabulary': u'good'},
'criterion_feedback': {
u'vocabulary': 'Awesome job!',
u'𝖋𝖊𝖊𝖉𝖇𝖆𝖈𝖐 𝖔𝖓𝖑𝖞': 'fairly illegible.'
},
'overall_feedback': u''
}
resp = self.request(xblock, 'self_assess', json.dumps(assessment_dict), response_format='json')
......@@ -99,10 +106,9 @@ class TestSelfAssessment(XBlockHandlerTestCase):
self.assertEqual(assessment['parts'][0]['option']['points'], 1)
# Check the feedback-only criterion score/feedback
# The written feedback should default to an empty string
self.assertEqual(assessment['parts'][1]['criterion']['name'], u'𝖋𝖊𝖊𝖉𝖇𝖆𝖈𝖐 𝖔𝖓𝖑𝖞')
self.assertIs(assessment['parts'][1]['option'], None)
self.assertEqual(assessment['parts'][1]['feedback'], u'')
self.assertEqual(assessment['parts'][1]['feedback'], u'fairly illegible.')
@scenario('data/self_assessment_scenario.xml', user_id='Bob')
def test_self_assess_workflow_error(self, xblock):
......@@ -267,7 +273,8 @@ class TestSelfAssessmentRender(XBlockHandlerTestCase):
submission['uuid'],
xblock.get_student_item_dict()['student_id'],
{u'𝓒𝓸𝓷𝓬𝓲𝓼𝓮': u'ﻉซƈﻉɭɭﻉกՇ', u'Form': u'Fair'},
{'criteria': xblock.rubric_criteria}
{}, "Good job!",
create_rubric_dict(xblock.prompt, xblock.rubric_criteria)
)
self._assert_path_and_context(
xblock, 'openassessmentblock/self/oa_self_complete.html', {},
......@@ -302,7 +309,8 @@ class TestSelfAssessmentRender(XBlockHandlerTestCase):
submission['uuid'],
xblock.get_student_item_dict()['student_id'],
{u'𝓒𝓸𝓷𝓬𝓲𝓼𝓮': u'ﻉซƈﻉɭɭﻉกՇ', u'Form': u'Fair'},
{'criteria': xblock.rubric_criteria}
{}, "Good job!",
create_rubric_dict(xblock.prompt, xblock.rubric_criteria)
)
# This case probably isn't possible, because presumably when we create
......@@ -358,7 +366,7 @@ class TestSelfAssessmentRender(XBlockHandlerTestCase):
expected_path (str): The expected template path.
expected_context (dict): The expected template context.
Kwargs:
Keyword Arguments:
workflow_status (str): If provided, simulate this status from the workflow API.
workflow_status (str): If provided, simulate these details from the workflow API.
submission_uuid (str): If provided, simulate this submission UUID for the current workflow.
......
......@@ -32,6 +32,12 @@ ASSESSMENT_DICT = {
"Clear-headed": "Yogi Berra",
"Form": "Reddit",
},
'criterion_feedback': {
"Concise": "Not very.",
"Clear-headed": "Indubitably",
"Form": "s ka tter ed"
}
}
......@@ -209,6 +215,8 @@ class TestCourseStaff(XBlockHandlerTestCase):
submission['uuid'],
STUDENT_ITEM["student_id"],
ASSESSMENT_DICT['options_selected'],
ASSESSMENT_DICT['criterion_feedback'],
ASSESSMENT_DICT['overall_feedback'],
{'criteria': xblock.rubric_criteria},
)
......@@ -235,6 +243,13 @@ class TestCourseStaff(XBlockHandlerTestCase):
"Content": "Poor",
}
criterion_feedback = {
"Ideas": "Dear diary: Lots of creativity from my dream journal last night at 2 AM,",
"Content": "Not as insightful as I had thought in the wee hours of the morning!"
}
overall_feedback = "I think I should tell more people about how important worms are for the ecosystem."
bob_item = STUDENT_ITEM.copy()
bob_item["item_id"] = xblock.scope_ids.usage_id
......@@ -265,6 +280,8 @@ class TestCourseStaff(XBlockHandlerTestCase):
submission['uuid'],
STUDENT_ITEM["student_id"],
options_selected,
criterion_feedback,
overall_feedback,
{'criteria': xblock.rubric_criteria},
)
......
......@@ -117,7 +117,7 @@ class StudioViewTest(XBlockHandlerTestCase):
@scenario('data/example_based_only.xml')
def test_render_studio_with_ai(self, xblock):
frag = self.runtime.render(xblock, 'studio_view')
self.assertTrue(frag.body_html().find('openassessment-edit'))
self.assertTrue('ai_assessment_settings_editor' in frag.body_html())
@file_data('data/update_xblock.json')
@scenario('data/basic_scenario.xml')
......
......@@ -306,7 +306,7 @@ def validator(oa_block, strict_post_release=True):
Args:
oa_block (OpenAssessmentBlock): The XBlock being updated.
Kwargs:
Keyword Arguments:
strict_post_release (bool): If true, restrict what authors can update once
a problem has been released.
......
......@@ -29,7 +29,7 @@ class WorkflowMixin(object):
Args:
data: Unused
Kwargs:
Keyword Arguments:
suffix: Unused
Returns:
......@@ -92,7 +92,7 @@ class WorkflowMixin(object):
from peer-assessment to self-assessment. Creates a score
if the student has completed all requirements.
Kwargs:
Keyword Arguments:
submission_uuid (str): The submission associated with the workflow to update.
Defaults to the submission created by the current student.
......
......@@ -6,7 +6,7 @@ cd `dirname $BASH_SOURCE` && cd ..
# Install dependencies
make install-python
make install-js
make minimize-js
make javascript
# Configure Django settings
export DJANGO_SETTINGS_MODULE="settings.dev"
......