Commit a8f25efd by Will Daly

Merge remote-tracking branch 'origin/master' into authoring

Conflicts:
	AUTHORS
	openassessment/xblock/static/css/openassessment.css
	openassessment/xblock/static/js/openassessment.min.js
	openassessment/xblock/submission_mixin.py
	openassessment/xblock/test/data/update_from_xml.json
	openassessment/xblock/test/test_submission.py
	openassessment/xblock/test/test_xml.py
	openassessment/xblock/xml.py
parents c9625e18 3cc5adbe
...@@ -8,4 +8,5 @@ Mark Hoeber <hoeber@edx.org>
 Sylvia Pearce <spearce@edx.org>
 Ned Batchelder <ned@nedbatchelder.com>
 David Baumgold <david@davidbaumgold.com>
-Grady Ward <gward@brandeis.edu>
\ No newline at end of file
+Grady Ward <gward@brandeis.edu>
+Andrew Dekker <a.dekker@uq.edu.au>
...@@ -81,6 +81,8 @@ Question
 You'll also specify the **question** that you want your students to answer. This appears near the top of the component, followed by a field where the student enters a response. You can require your students to enter text as a response, or you can require your students to both enter text and upload an image. (All student responses must include text. You cannot require students to only upload an image.)
+.. note:: Currently, course teams cannot see images that students upload. Images do not appear in information that course teams can access about individual students, and they are not included in the course data package.
 When you write your question, you can include helpful information for your students, such as what students can expect after they submit responses and the approximate number of words or sentences that a student's response should have. (A response cannot have more than 10,000 words.)
 For more information, see :ref:`PA Add Question`.
...@@ -473,6 +475,7 @@ If you want your students to upload an image as a part of their response, change
 :alt: Open response assessment example with Choose File and Upload Your Image buttons circled
 :width: 500
+.. note:: Currently, course teams cannot see images that students upload. Images do not appear in information that course teams can access about individual students, and they are not included in the course data package.
 Add Formatting or Images to the Question
 ****************************************
......
.. _fileupload:

##########
FileUpload
##########

Overview
--------
In this document, we describe the use of the File Upload API.

By design, this is a simple API for requesting an upload URL or download URL
for a piece of content. How the media is stored is determined by the
implementation of the File Upload Service. This project initially ships with
one File Upload Service implementation, which retrieves upload and download
URLs for Amazon S3.

The URLs provided by the File Upload API are intended to be used by the
client to upload and download content directly to and from the content store.
To provide a seamless interaction on the client, this may require an AJAX
request to first retrieve the URL, then upload the content. Such cross-origin
requests are restricted by the browser's same-origin policy, but the
restriction can be lifted with a CORS configuration on the content store.
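
To make the flow concrete, here is a minimal sketch of how an S3-backed
service implementation might mint a one-time upload URL with boto. This is
illustrative only, not the service's actual interface: the helper name,
bucket name, and key prefix are assumptions.

.. code-block:: python

    import boto

    # Illustrative helper only -- the real File Upload Service may expose a
    # different interface. Generate a presigned URL that lets the client
    # PUT a single object directly to S3.
    def get_upload_url(key, content_type, expires_in=3600):
        conn = boto.connect_s3()  # reads AWS credentials from the environment
        return conn.generate_url(
            expires_in=expires_in,
            method='PUT',
            bucket='my-upload-bucket',             # FILE_UPLOAD_STORAGE_BUCKET_NAME
            key='submissions_attachments/' + key,  # FILE_UPLOAD_STORAGE_PREFIX + key
            headers={'Content-Type': content_type},
        )

The client then issues a PUT request with the file body directly to the
returned URL, which is why the CORS configuration described below is needed.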
Configuration
-------------

The Amazon S3 File Upload Service requires the following settings to be
configured:

* AWS_ACCESS_KEY_ID - The AWS Access Key ID.
* AWS_SECRET_ACCESS_KEY - The associated AWS Secret Access Key.
* FILE_UPLOAD_STORAGE_BUCKET_NAME - The name of the S3 bucket configured for
  uploading and downloading content.
* FILE_UPLOAD_STORAGE_PREFIX (optional) - The file prefix within the bucket
  for storing all content. Defaults to 'submissions_attachments'.
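
For example, in a Django settings module this might look like the following
(all values are placeholders, not real credentials):

.. code-block:: python

    # settings.py -- placeholder values; supply your own credentials
    AWS_ACCESS_KEY_ID = "AKIAXXXXXXXXXXXXXXXX"
    AWS_SECRET_ACCESS_KEY = "xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx"
    FILE_UPLOAD_STORAGE_BUCKET_NAME = "my-upload-bucket"
    FILE_UPLOAD_STORAGE_PREFIX = "submissions_attachments"  # optional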
In addition, your S3 bucket must have a CORS configuration that allows PUT
and GET requests to be performed across request origins. To set this up:

1. Log in to the Amazon AWS console.
2. Select S3 from the available applications.
3. Expand the "Permissions" section.
4. Click "Edit CORS configuration".
5. Your CORS configuration must have the following values:
.. code-block:: xml

    <CORSConfiguration xmlns="http://s3.amazonaws.com/doc/2006-03-01/">
        <CORSRule>
            <AllowedOrigin>*</AllowedOrigin>
            <AllowedHeader>*</AllowedHeader>
            <AllowedMethod>PUT</AllowedMethod>
            <AllowedMethod>GET</AllowedMethod>
        </CORSRule>
    </CORSConfiguration>
...@@ -9,3 +9,4 @@ Architecture
    workflow
    ai_grading
+   fileupload
{% load i18n %}
{% spaceless %}
<li id="openassessment__leaderboard" class="openassessment__steps__step step--leaderboard is--complete">
    <header class="step__header">
        <h2 class="step__title">
            <span class="wrapper--copy">
                <span class="step__label">{% trans "Leaderboard: Complete" %}</span>
                <div class="wrapper--step__content">
                    <h3 class="leaderboard__title">{% trans "Best Responses For This Assignment" %}</h3>
                    <ol class="list leaderboard__score__list">
                        {% for topscore in topscores %}
                        <li class="leaderboard__score__item">
                            <h4 class="leaderboard__list__number">{{ forloop.counter }}</h4>
                            {% with num_points=topscore.score %}
                            <h4 class="leaderboard__score__title">
                                {% blocktrans %}{{ num_points }} points{% endblocktrans %}
                            </h4>
                            {% endwith %}
                            <div class="leaderboard__answer">{{ topscore.content|linebreaks }}</div>
                        </li>
                        {% endfor %}
                    </ol>
                </div>
            </span>
        </h2>
    </header>
</li>
{% endspaceless %}
{% load i18n %}
{% spaceless %}
<li id="openassessment__leaderboard" class="openassessment__steps__step step--leaderboard">
    <header class="step__header">
        <h2 class="step__title">
            <span class="wrapper--copy">
                <span class="step__label">{% trans "Leaderboard: Not Available" %}</span>
            </span>
        </h2>
    </header>
    <div class="wrapper--step__content">
        <div class="step__content">
            <div class="leaderboard__description">
                <p>{% trans "The leaderboard is not available until your final grade is complete." %}</p>
            </div>
        </div>
    </div>
</li>
{% endspaceless %}
...@@ -4,9 +4,13 @@
 <div class="message__content">
 <p>
 {% if approaching %}
-{% blocktrans %}Assignment submissions will close soon. To receive a grade, first provide a response to the question, then complete the steps below the <strong>Your Response</strong> field.{% endblocktrans %}
+{% blocktrans with start_tag='<strong>'|safe end_tag="</strong>"|safe %}
+Assignment submissions will close soon. To receive a grade, first provide a response to the question, then complete the steps below the {{ start_tag }}Your Response{{ end_tag }} field.
+{% endblocktrans %}
 {% else %}
-{% blocktrans %}This assignment has several steps. In the first step, you'll provide a response to the question. The other steps appear below the <strong>Your Response</strong> field.{% endblocktrans %}
+{% blocktrans with start_tag="<strong>"|safe end_tag="</strong>"|safe %}
+This assignment has several steps. In the first step, you'll provide a response to the question. The other steps appear below the {{ start_tag }}Your Response{{ end_tag }} field.
+{% endblocktrans %}
 {% endif %}
 </p>
 </div>
......
...@@ -22,9 +22,13 @@
 {% trans "All submitted peer responses have been assessed. Check back later to see if more students have submitted responses. " %}
 {% endif %}
 {% if has_self %}
-{% blocktrans %}You'll receive your grade after you complete the <a data-behavior="ui-scroll" href="#openassessment__peer-assessment">peer assessment</a> and <a data-behavior="ui-scroll" href="#openassessment__self-assessment">self assessment</a> steps, and after your peers have assessed your response.{% endblocktrans %}
+{% blocktrans with peer_start_tag='<a data-behavior="ui-scroll" href="#openassessment__peer-assessment">'|safe self_start_tag='<a data-behavior="ui-scroll" href="#openassessment__self-assessment">'|safe end_tag='</a>'|safe %}
+You'll receive your grade after you complete the {{ peer_start_tag }}peer assessment{{ end_tag }} and {{ self_start_tag }}self assessment{{ end_tag }} steps, and after your peers have assessed your response.
+{% endblocktrans %}
 {% else %}
-{% blocktrans %}You'll receive your grade after you complete the <a data-behavior="ui-scroll" href="#openassessment__peer-assessment">peer assessment</a> step.{% endblocktrans %}
+{% blocktrans with start_tag='<a data-behavior="ui-scroll" href="#openassessment__peer-assessment">'|safe end_tag='</a>'|safe %}
+You'll receive your grade after you complete the {{ start_tag }}peer assessment{{ end_tag }} step.
+{% endblocktrans %}
 {% endif %}
 {% endif %}
 </p>
......
...@@ -19,9 +19,13 @@
 <strong> {% trans "Self evaluation of this assignment will close soon. " %} </strong>
 {% endif %}
 {% if has_peer %}
-{% blocktrans %}You'll receive your grade after the required number of your peers have assessed your response and you complete the <a data-behavior="ui-scroll" href="#openassessment__self-assessment">self assessment</a> step.{% endblocktrans %}
+{% blocktrans with start_tag='<a data-behavior="ui-scroll" href="#openassessment__self-assessment">'|safe end_tag='</a>'|safe %}
+You'll receive your grade after the required number of your peers have assessed your response and you complete the {{ start_tag }}self assessment{{ end_tag }} step.
+{% endblocktrans %}
 {% else %}
-{% blocktrans %}You'll receive your grade after you complete the <a data-behavior="ui-scroll" href="#openassessment__self-assessment">self assessment</a> step.{% endblocktrans %}
+{% blocktrans with start_tag='<a data-behavior="ui-scroll" href="#openassessment__self-assessment">'|safe end_tag='</a>'|safe %}
+You'll receive your grade after you complete the {{ start_tag }}self assessment{{ end_tag }} step.
+{% endblocktrans %}
 {% endif %}
 {% endif %}
 </p>
......
...@@ -12,10 +12,12 @@
 <span class="step__label">{% trans "Assess Peers" %}</span>
 {% if peer_start %}
 <span class="step__deadline">
+{# Translators: This string displays a date to the user, then tells them the time until that date. Example: "available August 13th, 2014 (in 5 days and 45 minutes)" #}
 {% blocktrans with start_date=peer_start|utc|date:"N j, Y H:i e" time_until=peer_start|timeuntil %}available <span class="date">{{ start_date }} (in {{ time_until }})</span>{% endblocktrans %}
 </span>
 {% elif peer_due %}
 <span class="step__deadline">
+{# Translators: This string displays a date to the user, then tells them the time until that date. Example: "due August 13th, 2014 (in 5 days and 45 minutes)" #}
 {% blocktrans with due_date=peer_due|utc|date:"N j, Y H:i e" time_until=peer_due|timeuntil %}due <span class="date">{{ due_date }} (in {{ time_until }})</span>{% endblocktrans %}
 </span>
 {% endif %}
...@@ -26,9 +28,13 @@
 <span class="step__status">
 <span class="step__status__label">{% trans "This step's status" %}:</span>
 <span class="step__status__value">
+{% with graded=graded must_grade=must_grade %}
 <span class="copy">
-{% blocktrans with graded=graded must_grade=must_grade %}In Progress (<span class="step__status__value--completed">{{ graded }}</span> of <span class="step__status__value--required">{{ must_grade }}</span>){% endblocktrans %}
+{% blocktrans with num_graded="<span class=\"step__status__value--completed\">"|add:graded|add:"</span>"|safe num_must_grade="<span class=\"step__status__value--required\">"|add:must_grade|add:"</span>"|safe %}
+In Progress ({{ num_graded }} of {{ num_must_grade }})
+{% endblocktrans %}
 </span>
+{% endwith %}
 </span>
 </span>
 {% endblock %}
...@@ -47,11 +53,13 @@
 <article class="peer-assessment" id="peer-assessment--001">
 <div class="peer-assessment__display">
 <header class="peer-assessment__display__header">
-{% blocktrans with review_num=review_num must_grade=must_grade %}
+{% with review_num=review_num must_grade=must_grade %}
 <h3 class="peer-assessment__display__title">
-Assessment # <span class="peer-assessment__number--current">{{ review_num }}</span> of <span class="peer-assessment__number--required">{{ must_grade }}</span>
+{% blocktrans with review_number='<span>'|add:review_num|add:'</span>'|safe num_must_grade='<span class="peer-assessment__number--required">'|add:must_grade|add:'</span>'|safe %}
+Assessment # {{ review_number }} of {{ num_must_grade }}
+{% endblocktrans %}
 </h3>
-{% endblocktrans %}
+{% endwith %}
 </header>
 <div class="peer-assessment__display__response">
......
...@@ -11,7 +11,9 @@
 <span class="step__status__value">
 <span class="copy">
 <i class="ico icon-warning-sign"></i>
-{% blocktrans with graded=graded must_grade=must_grade %}Incomplete (<span class="step__status__value--completed">{{ graded }}</span> of <span class="step__status__value--required">{{ must_grade }}</span>){% endblocktrans %}
+{% blocktrans with num_graded="<span class=\"step__status__value--completed\">"|add:graded|add:"</span>"|safe num_must_grade="<span class=\"step__status__value--required\">"|add:must_grade|add:"</span>"|safe %}
+Incomplete ({{ num_graded }} of {{ num_must_grade }})
+{% endblocktrans %}
 </span>
 </span>
 </span>
...@@ -20,7 +22,6 @@
 {% block body %}
 <div class="ui-toggle-visibility__content">
 <div class="wrapper--step__content">
 <div class="step__message message message--incomplete">
 <h3 class="message__title">{% trans "The Due Date for This Step Has Passed" %}</h3>
 <div class="message__content">
......
...@@ -11,7 +11,9 @@
 <span class="step__status__value">
 <i class="ico icon-ok"></i>
 <span class="copy">
-{% blocktrans with graded=graded %}Complete (<span class="step__status__value--completed">{{ graded }}</span>){% endblocktrans %}
+{% blocktrans with num_graded='<span class="step__status__value--completed">'|add:graded|add:'</span>'|safe %}
+Complete ({{ num_graded }})
+{% endblocktrans %}
 </span>
 </span>
 </span>
......
...@@ -11,7 +11,9 @@
 <span class="step__status__value">
 <i class="ico icon-ok"></i>
 <span class="copy">
-{% blocktrans with graded=graded %} Complete (<span class="step__status__value--completed">{{ graded }}</span>){% endblocktrans %}
+{% blocktrans with num_graded='<span class="step__status__value--completed">'|add:graded|add:'</span>'|safe %}
+Complete ({{ num_graded }})
+{% endblocktrans %}
 </span>
 </span>
 </span>
......
...@@ -10,7 +10,9 @@
 <span class="step__status__label">{% trans "This step's status" %}:</span>
 <span class="step__status__value">
 <span class="copy">
-{% blocktrans with graded=graded must_grade=must_grade %}In Progress (<span class="step__status__value--completed">{{ graded }}</span> of <span class="step__status__value--required">{{ must_grade }}</span>){% endblocktrans %}
+{% blocktrans with num_graded="<span class=\"step__status__value--completed\">"|add:graded|add:"</span>"|safe num_must_grade="<span class=\"step__status__value--required\">"|add:must_grade|add:"</span>"|safe %}
+In Progress ({{ num_graded }} of {{ num_must_grade }})
+{% endblocktrans %}
 </span>
 </span>
 </span>
...@@ -24,7 +26,7 @@
 <h3 class="message__title">{% trans "Waiting for Peer Responses" %}</h3>
 <div class="message__content">
-<p>{% blocktrans %}All submitted peer responses have been assessed. Check back later to see if more students have submitted responses. You'll receive your grade after you've completed all the steps for this problem and your peers have assessed your response.{% endblocktrans %}</p>
+<p>{% trans "All submitted peer responses have been assessed. Check back later to see if more students have submitted responses. You'll receive your grade after you've completed all the steps for this problem and your peers have assessed your response." %}</p>
 </div>
 </div>
 </div>
......
...@@ -12,10 +12,12 @@
 <span class="step__label">{% trans "Your Response" %}</span>
 {% if submission_start %}
 <span class="step__deadline">
+{# Translators: This string displays a date to the user, then tells them the time until that date. Example: "available August 13th, 2014 (in 5 days and 45 minutes)" #}
 {% blocktrans with start_date=submission_start|utc|date:"N j, Y H:i e" time_until=submission_start|timeuntil %}available <span class="date">{{ start_date }} (in {{ time_until }})</span>{% endblocktrans %}
 </span>
 {% elif submission_due %}
 <span class="step__deadline">
+{# Translators: This string displays a date to the user, then tells them the time until that date. Example: "due August 13th, 2014 (in 5 days and 45 minutes)" #}
 {% blocktrans with due_date=submission_due|utc|date:"N j, Y H:i e" time_until=submission_due|timeuntil %}due <span class="date"> {{ due_date }} (in {{ time_until }})</span>{% endblocktrans %}
 </span>
 {% endif %}
......
...@@ -23,11 +23,17 @@
 <h3 class="message__title">{% trans "Your Response Has Been Submitted" %}</h3>
 <div class="message__content">
 {% if has_peer and has_self %}
-{% blocktrans %}You'll receive your grade after some of your peers have assessed your response and you complete the <a data-behavior="ui-scroll" href="#openassessment__peer-assessment">peer assessment</a> and <a data-behavior="ui-scroll" href="#openassessment__self-assessment">self assessment</a> steps{% endblocktrans %}.
+{% blocktrans with peer_start_tag='<a data-behavior="ui-scroll" href="#openassessment__peer-assessment">'|safe self_start_tag='<a data-behavior="ui-scroll" href="#openassessment__self-assessment">'|safe end_tag='</a>'|safe %}
+You'll receive your grade after some of your peers have assessed your response and you complete the {{ peer_start_tag }}peer assessment{{ end_tag }} and {{ self_start_tag }}self assessment{{ end_tag }} steps.
+{% endblocktrans %}
 {% elif has_peer %}
-{% blocktrans %}You'll receive your grade after some of your peers have assessed your response and you complete the <a data-behavior="ui-scroll" href="#openassessment__peer-assessment">peer assessment</a> step.{% endblocktrans %}
+{% blocktrans with start_tag='<a data-behavior="ui-scroll" href="#openassessment__peer-assessment">'|safe end_tag='</a>'|safe %}
+You'll receive your grade after some of your peers have assessed your response and you complete the {{ start_tag }}peer assessment{{ end_tag }} step.
+{% endblocktrans %}
 {% elif has_self %}
-{% blocktrans %}You'll receive your grade after you complete the <a data-behavior="ui-scroll" href="#openassessment__self-assessment">self assessment</a> step.{% endblocktrans %}
+{% blocktrans with start_tag='<a data-behavior="ui-scroll" href="#openassessment__self-assessment">'|safe end_tag='</a>'|safe %}
+You'll receive your grade after you complete the {{ start_tag }}self assessment{{ end_tag }} step.
+{% endblocktrans %}
 {% endif %}
 </div>
 </div>
......
...@@ -12,10 +12,12 @@
 <span class="step__label">{% trans "Assess Your Response" %}</span>
 {% if self_start %}
 <span class="step__deadline">
+{# Translators: This string displays a date to the user, then tells them the time until that date. Example: "available August 13th, 2014 (in 5 days and 45 minutes)" #}
 {% blocktrans with start_date=self_start|utc|date:"N j, Y H:i e" time_until=self_start|timeuntil %}available <span class="date">{{ start_date }} (in {{ time_until }})</span>{% endblocktrans %}
 </span>
 {% elif self_due %}
 <span class="step__deadline">
+{# Translators: This string displays a date to the user, then tells them the time until that date. Example: "due August 13th, 2014 (in 5 days and 45 minutes)" #}
 {% blocktrans with due_date=self_due|utc|date:"N j, Y H:i e" time_until=self_due|timeuntil %}due <span class="date">{{ due_date }}</span> (in {{ time_until }}){% endblocktrans %}
 </span>
 {% endif %}
......
...@@ -12,10 +12,12 @@
 <span class="step__label">{% trans "Learn to Assess Responses" %}</span>
 {% if training_start %}
 <span class="step__deadline">
+{# Translators: This string displays a date to the user, then tells them the time until that date. Example: "available August 13th, 2014 (in 5 days and 45 minutes)" #}
 {% blocktrans with start_date=training_start|utc|date:"N j, Y H:i e" time_until=training_start|timeuntil %}available <span class="date"> {{ start_date }} (in {{ time_until }}) </span>{% endblocktrans %}
 </span>
 {% elif training_due %}
 <span class="step__deadline">
+{# Translators: This string displays a date to the user, then tells them the time until that date. Example: "due August 13th, 2014 (in 5 days and 45 minutes)" #}
 {% blocktrans with due_date=training_due|utc|date:"N j, Y H:i e" time_until=training_due|timeuntil %}due <span class="date">{{ due_date }}</span> (in {{ time_until }}){% endblocktrans %}
 </span>
 </span>
...@@ -59,9 +61,13 @@
 <div class="step__content">
 <article class="student-training__display" id="student-training">
 <header class="student-training__display__header">
+{% with training_num_current=training_num_current training_num_available=training_num_available %}
 <h3 class="student-training__display__title">
-{% blocktrans with training_num_current=training_num_current training_num_available=training_num_available %}Training Assessment #<span class="student-training__number--current">{{ training_num_current }}</span> of <span class="student-training__number--required">{{ training_num_available }}</span>{% endblocktrans %}
+{% blocktrans with current_progress_num='<span class="student-training__number--current">'|add:training_num_current|add:'</span>'|safe num_to_complete='<span class="student-training__number--required">'|add:training_num_available|add:'</span>'|safe %}
+Training Assessment # {{ current_progress_num }} of {{ num_to_complete }}
+{% endblocktrans %}
 </h3>
+{% endwith %}
 </header>
 <div class="student-training__display__response">
......
...@@ -154,8 +154,15 @@ class GradeMixin(object):
         if median_scores is not None and max_scores is not None:
             for criterion in context["rubric_criteria"]:
-                criterion["median_score"] = median_scores[criterion["name"]]
-                criterion["total_value"] = max_scores[criterion["name"]]
+                # Although we prevent course authors from modifying criteria post-release,
+                # it's still possible for assessments created by course staff to
+                # have criteria that differ from the current problem definition.
+                # It's also possible to circumvent the post-release restriction
+                # if course authors directly import a course into Studio.
+                # If this happens, we simply leave the score blank so that the grade
+                # section can render without error.
+                criterion["median_score"] = median_scores.get(criterion["name"], '')
+                criterion["total_value"] = max_scores.get(criterion["name"], '')
         return ('openassessmentblock/grade/oa_grade_complete.html', context)
......
"""
Leaderboard step in the OpenAssessment XBlock.
"""
from django.utils.translation import ugettext as _
from xblock.core import XBlock
from openassessment.assessment.errors import SelfAssessmentError, PeerAssessmentError
from submissions import api as sub_api
class LeaderboardMixin(object):
"""Leaderboard Mixin introduces all handlers for displaying the leaderboard
Abstracts all functionality and handlers associated with the Leaderboard.
Leaderboard is a Mixin for the OpenAssessmentBlock. Functions in the
Leaderboard call into the OpenAssessmentBlock functions and will not work
outside of OpenAssessmentBlock.
"""
@XBlock.handler
def render_leaderboard(self, data, suffix=''):
"""
Render the leaderboard.
Args:
data: Not used.
Kwargs:
suffix: Not used.
Returns:
unicode: HTML content of the leaderboard.
"""
# Retrieve the status of the workflow. If no workflows have been
# started this will be an empty dict, so status will be None.
workflow = self.get_workflow_info()
status = workflow.get('status')
# Render the grading section based on the status of the workflow
try:
if status == "done":
path, context = self.render_leaderboard_complete(self.get_student_item_dict())
else: # status is 'self' or 'peer', which implies that the workflow is incomplete
path, context = self.render_leaderboard_incomplete()
except (sub_api.SubmissionError, PeerAssessmentError, SelfAssessmentError):
return self.render_error(_(u"An unexpected error occurred."))
else:
return self.render_assessment(path, context)
def render_leaderboard_complete(self, student_item_dict):
"""
Render the leaderboard complete state.
Args:
student_item_dict (dict): The student item
Returns:
template_path (string), tuple of context (dict)
"""
scores = sub_api.get_top_submissions(
student_item_dict['course_id'],
student_item_dict['item_id'],
student_item_dict['item_type'],
self.leaderboard_show,
use_cache=False
)
for score in scores:
if 'text' in score['content']:
score['content'] = score['content']['text']
elif isinstance(score['content'], basestring):
pass
# Currently, we do not handle non-text submissions.
else:
score['content'] = ""
context = { 'topscores': scores }
return ('openassessmentblock/leaderboard/oa_leaderboard_show.html', context)
def render_leaderboard_incomplete(self):
"""
Render the grade incomplete state.
Returns:
template_path (string), tuple of context (dict)
"""
return ('openassessmentblock/leaderboard/oa_leaderboard_waiting.html', {})
...@@ -13,10 +13,10 @@ from webob import Response
 from lazy import lazy
 from xblock.core import XBlock
-from xblock.fields import List, Scope, String, Boolean
+from xblock.fields import List, Scope, String, Boolean, Integer
 from xblock.fragment import Fragment
 from openassessment.xblock.grade_mixin import GradeMixin
+from openassessment.xblock.leaderboard_mixin import LeaderboardMixin
 from openassessment.xblock.defaults import *  # pylint: disable=wildcard-import, unused-wildcard-import
 from openassessment.xblock.message_mixin import MessageMixin
 from openassessment.xblock.peer_assessment_mixin import PeerAssessmentMixin
...@@ -67,6 +67,12 @@ UI_MODELS = {
         "class_id": "openassessment__grade",
         "navigation_text": "Your grade for this assignment",
         "title": "Your Grade:"
+    },
+    "leaderboard": {
+        "name": "leaderboard",
+        "class_id": "openassessment__leaderboard",
+        "navigation_text": "A leaderboard of the top submissions",
+        "title": "Leaderboard"
     }
 }
...@@ -92,6 +98,7 @@ class OpenAssessmentBlock(
     SelfAssessmentMixin,
     StudioMixin,
     GradeMixin,
+    LeaderboardMixin,
     StaffInfoMixin,
     WorkflowMixin,
     StudentTrainingMixin,
...@@ -121,6 +128,12 @@ class OpenAssessmentBlock(
         help="A title to display to a student (plain text)."
     )
+    leaderboard_show = Integer(
+        default=0,
+        scope=Scope.content,
+        help="The number of leaderboard results to display (0 if none)"
+    )
     prompt = String(
         default=DEFAULT_PROMPT,
         scope=Scope.content,
...@@ -224,6 +237,7 @@ class OpenAssessmentBlock(
         # On page load, update the workflow status.
         # We need to do this here because peers may have graded us, in which
         # case we may have a score available.
         try:
             self.update_workflow_status()
         except AssessmentWorkflowError:
...@@ -238,7 +252,6 @@ class OpenAssessmentBlock(
             "rubric_assessments": ui_models,
             "show_staff_debug_info": self.is_course_staff and not self.in_studio_preview,
         }
         template = get_template("openassessmentblock/oa_base.html")
         context = Context(context_dict)
         frag = Fragment(template.render(context))
...@@ -303,6 +316,10 @@ class OpenAssessmentBlock(
             if ui_model:
                 ui_models.append(dict(assessment, **ui_model))
         ui_models.append(UI_MODELS["grade"])
+        if self.leaderboard_show > 0:
+            ui_models.append(UI_MODELS["leaderboard"])
         return ui_models
     @staticmethod
...@@ -327,6 +344,10 @@ class OpenAssessmentBlock(
             load('static/xml/poverty_rubric_example.xml')
         ),
+        (
+            "OpenAssessmentBlock Leaderboard",
+            load('static/xml/leaderboard.xml')
+        ),
         (
             "OpenAssessmentBlock (Peer Only) Rubric",
             load('static/xml/poverty_peer_only_example.xml')
         ),
...@@ -370,6 +391,7 @@ class OpenAssessmentBlock(
         block.title = config['title']
         block.prompt = config['prompt']
         block.allow_file_upload = config['allow_file_upload']
+        block.leaderboard_show = config['leaderboard_show']
         return block
......
...@@ -30,6 +30,12 @@
         "class_id": "openassessment__grade",
         "navigation_text": "Your grade for this problem",
         "title": "Your Grade:"
+    },
+    {
+        "name": "leaderboard",
+        "class_id": "openassessment__leaderboard",
+        "navigation_text": "A leaderboard for the top submissions",
+        "title": "Leaderboard:"
     }
 ]
 },
...@@ -66,6 +72,12 @@
         "class_id": "openassessment__grade",
         "navigation_text": "Your grade for this problem",
         "title": "Your Grade:"
+    },
+    {
+        "name": "leaderboard",
+        "class_id": "openassessment__leaderboard",
+        "navigation_text": "A leaderboard for the top submissions",
+        "title": "Leaderboard:"
     }
 ]
 },
......
...@@ -50,12 +50,11 @@ describe("OpenAssessment.ResponseView", function() {
 this.uploadError = false;
 this.uploadArgs = null;
-this.upload = function(url, data, contentType) {
+this.upload = function(url, data) {
     // Store the args we were passed so we can verify them
     this.uploadArgs = {
         url: url,
         data: data,
-        contentType: contentType
     };
     // Return a promise indicating success or error
...@@ -420,7 +419,6 @@ describe("OpenAssessment.ResponseView", function() {
 view.fileUpload();
 expect(fileUploader.uploadArgs.url).toEqual(FAKE_URL);
 expect(fileUploader.uploadArgs.data).toEqual(files[0]);
-expect(fileUploader.uploadArgs.contentType).toEqual('image/jpg');
 });
 it("displays an error if a one-time file upload URL cannot be retrieved", function() {
......
...@@ -2,13 +2,12 @@ describe("OpenAssessment.FileUploader", function() {
 var fileUploader = null;
 var TEST_URL = "http://www.example.com/upload";
-var TEST_IMAGE = {
+var TEST_FILE = {
     data: "abcdefghijklmnopqrstuvwxyz",
     name: "test.jpg",
     size: 10471,
     type: "image/jpeg"
 };
-var TEST_CONTENT_TYPE = "image/jpeg";
 beforeEach(function() {
     fileUploader = new OpenAssessment.FileUploader();
...@@ -25,15 +24,24 @@ describe("OpenAssessment.FileUploader", function() {
 spyOn(Logger, 'log');
 // Upload a file
-fileUploader.upload(TEST_URL, TEST_IMAGE, TEST_CONTENT_TYPE);
+fileUploader.upload(TEST_URL, TEST_FILE);
+// Verify that a PUT request was sent with the right parameters
+expect($.ajax).toHaveBeenCalledWith({
+    url: TEST_URL,
+    type: 'PUT',
+    data: TEST_FILE,
+    async: false,
+    processData: false,
+    contentType: 'image/jpeg'
+});
 // Verify that the event was logged
 expect(Logger.log).toHaveBeenCalledWith(
     "openassessment.upload_file", {
-        contentType: TEST_CONTENT_TYPE,
-        imageName: TEST_IMAGE.name,
-        imageSize: TEST_IMAGE.size,
-        imageType: TEST_IMAGE.type
+        fileName: TEST_FILE.name,
+        fileSize: TEST_FILE.size,
+        fileType: TEST_FILE.type
     }
 );
 });
......
...@@ -20,6 +20,7 @@ OpenAssessment.BaseView = function(runtime, element, server) {
 this.selfView = new OpenAssessment.SelfView(this.element, this.server, this);
 this.peerView = new OpenAssessment.PeerView(this.element, this.server, this);
 this.gradeView = new OpenAssessment.GradeView(this.element, this.server, this);
+this.leaderboardView = new OpenAssessment.LeaderboardView(this.element, this.server, this);
 this.messageView = new OpenAssessment.MessageView(this.element, this.server, this);
 // Staff only information about student progress.
 this.staffInfoView = new OpenAssessment.StaffInfoView(this.element, this.server, this);
...@@ -74,6 +75,7 @@ OpenAssessment.BaseView.prototype = {
 this.peerView.load();
 this.selfView.load();
 this.gradeView.load();
+this.leaderboardView.load();
 /**
 this.messageView.load() is intentionally omitted.
 Because of the asynchronous loading, there is no way to tell (from the perspective of the
......
...@@ -8,38 +8,32 @@ PUT requests on the server.
 Args:
     url (string): The one-time URL we're uploading to.
-    imageData (object): The object to upload, which should have properties:
-        data (string)
-        name (string)
-        size (int)
-        type (string)
-    contentType (string): The MIME type of the data to upload.
+    file (File): The HTML5 file reference.
 Returns:
     JQuery promise
 */
 OpenAssessment.FileUploader = function() {
-    this.upload = function(url, imageData, contentType) {
+    this.upload = function(url, file) {
         return $.Deferred(
             function(defer) {
                 $.ajax({
                     url: url,
                     type: 'PUT',
-                    data: imageData,
+                    data: file,
                     async: false,
                     processData: false,
-                    contentType: contentType,
+                    contentType: file.type,
                 }).done(
                     function(data, textStatus, jqXHR) {
                         // Log an analytics event
                         Logger.log(
                             "openassessment.upload_file",
                             {
-                                contentType: contentType,
-                                imageName: imageData.name,
-                                imageSize: imageData.size,
-                                imageType: imageData.type
+                                fileName: file.name,
+                                fileSize: file.size,
+                                fileType: file.type
                             }
                         );
......
/**
Interface for leaderboard view.

Args:
    element (DOM element): The DOM element representing the XBlock.
    server (OpenAssessment.Server): The interface to the XBlock server.
    baseView (OpenAssessment.BaseView): Container view.

Returns:
    OpenAssessment.LeaderboardView
**/
OpenAssessment.LeaderboardView = function(element, server, baseView) {
    this.element = element;
    this.server = server;
    this.baseView = baseView;
};

OpenAssessment.LeaderboardView.prototype = {
    /**
    Load the leaderboard view.
    **/
    load: function() {
        var view = this;
        var baseView = this.baseView;
        this.server.render('leaderboard').done(
            function(html) {
                // Load the rendered HTML into the DOM
                $('#openassessment__leaderboard', view.element).replaceWith(html);
            }
        ).fail(function(errMsg) {
            baseView.showLoadError('leaderboard', errMsg);
        });
    }
};
...@@ -466,7 +466,7 @@ OpenAssessment.ResponseView.prototype = {
 this.server.getUploadUrl(view.imageType).done(
     function(url) {
         var image = view.files[0];
-        view.fileUploader.upload(url, image, view.imageType)
+        view.fileUploader.upload(url, image)
             .done(function() {
                 view.imageUrl();
                 view.baseView.toggleActionError('upload', null);
......
...@@ -1078,3 +1078,88 @@
         @extend .action--submit;
     }
 }
+#openassessment__leaderboard {
+    font-family: "Open Sans","Helvetica Neue",Helvetica,Arial,sans-serif;
+    .step__counter, .step__counter:before {
+        display: none;
+    }
+    .wrapper--copy {
+        margin-left: 0;
+        padding-left: 0;
+        border-left: 0;
+    }
+    @include media($bp-m) {
+        @include span-columns(4 of 4);
+    }
+    @include media($bp-ds) {
+        @include span-columns(6 of 6);
+    }
+    @include media($bp-dm) {
+        @include span-columns(12 of 12);
+    }
+    @include media($bp-dl) {
+        @include span-columns(12 of 12);
+    }
+    @include media($bp-dx) {
+        @include span-columns(12 of 12);
+    }
+    .step__label, .grade__value {
+        display: inline-block;
+        vertical-align: middle;
+    }
+    .step__label {
+        margin-right: ($baseline-h/4);
+    }
+    .leaderboard__title {
+        @extend %t-superheading;
+        color: $heading-primary-color;
+    }
+    .list.leaderboard__score__list {
+        list-style-type: none;
+        li.leaderboard__score__item {
+            margin: 15px 0;
+            .leaderboard__list__number {
+                display: inline-block;
+                background: $edx-gray-d2;
+                color: white;
+                padding: 5px 5px 3px 5px;
+                font-size: 16px;
+                min-width: 35px;
+                text-align: center;
+                border-top-right-radius: 2px;
+                border-top-left-radius: 2px;
+            }
+            .leaderboard__score__title {
+                font-size: 15px;
+                color: $edx-gray-l1;
+                text-transform: uppercase;
+                display: inline-block;
+                padding-left: 15px;
+            }
+            .leaderboard__answer {
+                border-top: 2px solid $edx-gray-d2;
+                box-shadow: inset 0 0 3px 1px rgba(10, 10, 10, 0.1);
+                padding: 5px 10px;
+                max-height: 200px;
+                overflow-y: scroll;
+                font-size: 14px;
+            }
+        }
+    }
+}
<openassessment submission_due="2030-03-11T18:20" leaderboard_show="10">
    <title>
        My favourite pet
    </title>
    <rubric>
        <prompt>
            Which animal would you like to have as a pet?
        </prompt>
        <criterion feedback='optional'>
            <name>concise</name>
            <prompt>How rare is the animal?</prompt>
            <option points="0">
                <name>Very common</name>
                <explanation>
                    You can pick it up on the street
                </explanation>
            </option>
            <option points="2">
                <name>Common</name>
                <explanation>
                    Can get it at the local pet store
                </explanation>
            </option>
            <option points="4">
                <name>Somewhat common</name>
                <explanation>
                    Easy to see but hard to purchase as a pet
                </explanation>
            </option>
            <option points="8">
                <name>Rare</name>
                <explanation>
                    Need to travel the world to find it
                </explanation>
            </option>
            <option points="10">
                <name>Extinct</name>
                <explanation>
                    Maybe in the ice-age
                </explanation>
            </option>
        </criterion>
        <criterion feedback='optional'>
            <name>form</name>
            <prompt>How hard would it be to care for the animal?</prompt>
            <option points="0">
                <name>It feeds itself</name>
                <explanation></explanation>
            </option>
            <option points="2">
                <name>Any pet food will do</name>
                <explanation></explanation>
            </option>
            <option points="4">
                <name>Some work required to care for the animal</name>
                <explanation></explanation>
            </option>
            <option points="6">
                <name>A full time job to care for the animal</name>
                <explanation></explanation>
            </option>
            <option points="8">
                <name>A team required to care for the animal</name>
                <explanation></explanation>
            </option>
            <option points="10">
                <name>The pet has special needs</name>
                <explanation></explanation>
            </option>
        </criterion>
    </rubric>
    <assessments>
        <assessment name="self-assessment" />
    </assessments>
</openassessment>
...@@ -76,10 +76,35 @@ class SubmissionMixin(object):
                 student_sub
             )
         except api.SubmissionRequestError as err:
-            status_tag = 'EBADFORM'
-            status_text = unicode(err.field_errors)
+            # Handle the case of an answer that's too long as a special case,
+            # so we can display a more specific error message.
+            # Although we limit the number of characters the user can
+            # enter on the client side, the submissions API uses the JSON-serialized
+            # submission to calculate length. If each character submitted
+            # by the user takes more than 1 byte to encode (for example, double-escaped
+            # newline characters or non-ASCII unicode), then the user might
+            # exceed the limits set by the submissions API. In that case,
+            # we display an error message indicating that the answer is too long.
+            answer_too_long = any(
+                "maximum answer size exceeded" in answer_err.lower()
+                for answer_err in err.field_errors.get('answer', [])
+            )
+            if answer_too_long:
+                status_tag = 'EANSWERLENGTH'
+            else:
+                msg = (
+                    u"The submissions API reported an invalid request error "
+                    u"when submitting a response for the user: {student_item}"
+                ).format(student_item=student_item_dict)
+                logger.exception(msg)
+                status_tag = 'EBADFORM'
         except (api.SubmissionError, AssessmentWorkflowError):
-            logger.exception("This response was not submitted.")
+            msg = (
+                u"An unknown error occurred while submitting "
+                u"a response for the user: {student_item}"
+            ).format(student_item=student_item_dict)
+            logger.exception(msg)
             status_tag = 'EUNKNOWN'
             status_text = self._(u'API returned unclassified exception.')
         else:
......
...@@ -5,7 +5,7 @@ import os.path
 import json
 from functools import wraps
-from openassessment.test_utils import CacheResetTest
+from openassessment.test_utils import CacheResetTest, TransactionCacheResetTest
 from workbench.runtime import WorkbenchRuntime
 import webob
...@@ -41,7 +41,7 @@ def scenario(scenario_path, user_id=None):
 xblock = None
 if args:
     self = args[0]
-    if isinstance(self, XBlockHandlerTestCase):
+    if isinstance(self, XBlockHandlerTestCaseMixin):
         # Print a debug message
         print "Loading scenario from {path}".format(path=scenario_path)
...@@ -61,7 +61,7 @@ def scenario(scenario_path, user_id=None):
 return _decorator
-class XBlockHandlerTestCase(CacheResetTest):
+class XBlockHandlerTestCaseMixin(object):
     """
     Load the XBlock in the workbench runtime to test its handler.
     """
...@@ -70,6 +70,7 @@ class XBlockHandlerTestCase(CacheResetTest):
         """
         Create the runtime.
         """
+        super(XBlockHandlerTestCaseMixin, self).setUp()
         self.runtime = WorkbenchRuntime()
     def set_user(self, user_id):
...@@ -149,3 +150,20 @@ class XBlockHandlerTestCase(CacheResetTest):
         base_dir = os.path.dirname(os.path.abspath(__file__))
         with open(os.path.join(base_dir, path)) as file_handle:
             return file_handle.read()
+class XBlockHandlerTestCase(XBlockHandlerTestCaseMixin, CacheResetTest):
+    """
+    Base XBlock handler test case. Use this if you do NOT need to simulate the read replica.
+    """
+    pass
+class XBlockHandlerTransactionTestCase(XBlockHandlerTestCaseMixin, TransactionCacheResetTest):
+    """
+    Variation of the XBlock handler test case that truncates the test database instead
+    of rolling back transactions. This is necessary if the software under test relies
+    on the read replica. It's also slower, so unless you're using the read replica,
+    use `XBlockHandlerTestCase` instead.
+    """
+    pass
<openassessment leaderboard_show="3">
<title>Open Assessment Test</title>
<prompt>
Given the state of the world today, what do you think should be done to
combat poverty? Please answer in a short essay of 200-300 words.
</prompt>
<rubric>
<prompt>Read for conciseness, clarity of thought, and form.</prompt>
<criterion>
<name>𝓒𝓸𝓷𝓬𝓲𝓼𝓮</name>
<prompt>How concise is it?</prompt>
<option points="3">
<name>ﻉซƈﻉɭɭﻉกՇ</name>
<explanation>Extremely concise</explanation>
</option>
<option points="2">
<name>Ġööḋ</name>
<explanation>Concise</explanation>
</option>
<option points="1">
<name>ק๏๏г</name>
<explanation>Wordy</explanation>
</option>
</criterion>
<criterion>
<name>Form</name>
<prompt>How well-formed is it?</prompt>
<option points="3">
<name>Good</name>
<explanation>Good</explanation>
</option>
<option points="2">
<name>Fair</name>
<explanation>Fair</explanation>
</option>
<option points="1">
<name>Poor</name>
<explanation>Poor</explanation>
</option>
</criterion>
</rubric>
<assessments>
<assessment name="peer-assessment" must_grade="1" must_be_graded_by="1" />
<assessment name="self-assessment" />
</assessments>
</openassessment>
<openassessment leaderboard_show="10">
<title>Open Assessment Test</title>
<prompt>
Given the state of the world today, what do you think should be done to
combat poverty? Please answer in a short essay of 200-300 words.
</prompt>
<rubric>
<prompt>Read for conciseness, clarity of thought, and form.</prompt>
<criterion>
<name>𝓒𝓸𝓷𝓬𝓲𝓼𝓮</name>
<prompt>How concise is it?</prompt>
<option points="3">
<name>ﻉซƈﻉɭɭﻉกՇ</name>
<explanation>Extremely concise</explanation>
</option>
<option points="2">
<name>Ġööḋ</name>
<explanation>Concise</explanation>
</option>
<option points="1">
<name>ק๏๏г</name>
<explanation>Wordy</explanation>
</option>
</criterion>
<criterion>
<name>Form</name>
<prompt>How well-formed is it?</prompt>
<option points="3">
<name>Good</name>
<explanation>Good</explanation>
</option>
<option points="2">
<name>Fair</name>
<explanation>Fair</explanation>
</option>
<option points="1">
<name>Poor</name>
<explanation>Poor</explanation>
</option>
</criterion>
</rubric>
<assessments>
<assessment name="peer-assessment" must_grade="1" must_be_graded_by="1" />
<assessment name="self-assessment" />
</assessments>
</openassessment>
...@@ -3,11 +3,6 @@ ...@@ -3,11 +3,6 @@
"title": "Foo", "title": "Foo",
"prompt": "Test prompt", "prompt": "Test prompt",
"rubric_feedback_prompt": "Test Feedback Prompt", "rubric_feedback_prompt": "Test Feedback Prompt",
"start": null,
"due": null,
"submission_start": null,
"submission_due": null,
"allow_file_upload": null,
"criteria": [ "criteria": [
{ {
"order_num": 0, "order_num": 0,
...@@ -70,13 +65,7 @@ ...@@ -70,13 +65,7 @@
"promptless": { "promptless": {
"title": "Foo", "title": "Foo",
"prompt": null,
"rubric_feedback_prompt": "Test Feedback Prompt", "rubric_feedback_prompt": "Test Feedback Prompt",
"start": null,
"due": null,
"submission_start": null,
"submission_due": null,
"allow_file_upload": null,
"criteria": [ "criteria": [
{ {
"order_num": 0, "order_num": 0,
...@@ -137,11 +126,6 @@ ...@@ -137,11 +126,6 @@
"title": "Foo", "title": "Foo",
"prompt": "", "prompt": "",
"rubric_feedback_prompt": "Test Feedback Prompt", "rubric_feedback_prompt": "Test Feedback Prompt",
"start": null,
"due": null,
"submission_start": null,
"submission_due": null,
"allow_file_upload": null,
"criteria": [ "criteria": [
{ {
"order_num": 0, "order_num": 0,
...@@ -203,11 +187,6 @@ ...@@ -203,11 +187,6 @@
"title": "ƒσσ", "title": "ƒσσ",
"prompt": "Ṫëṡẗ ṗṛöṁṗẗ", "prompt": "Ṫëṡẗ ṗṛöṁṗẗ",
"rubric_feedback_prompt": "†es† Feedbåck Prømp†", "rubric_feedback_prompt": "†es† Feedbåck Prømp†",
"start": null,
"due": null,
"submission_start": null,
"submission_due": null,
"allow_file_upload": null,
"criteria": [ "criteria": [
{ {
"order_num": 0, "order_num": 0,
...@@ -266,11 +245,6 @@ ...@@ -266,11 +245,6 @@
"title": "Foo", "title": "Foo",
"prompt": "Test prompt", "prompt": "Test prompt",
"rubric_feedback_prompt": "", "rubric_feedback_prompt": "",
"start": null,
"due": null,
"submission_start": null,
"submission_due": null,
"allow_file_upload": null,
"criteria": [ "criteria": [
{ {
"order_num": 0, "order_num": 0,
...@@ -331,12 +305,6 @@ ...@@ -331,12 +305,6 @@
"no_feedback_prompt": { "no_feedback_prompt": {
"title": "Foo", "title": "Foo",
"prompt": "Test prompt", "prompt": "Test prompt",
"rubric_feedback_prompt": null,
"start": null,
"due": null,
"submission_start": null,
"submission_due": null,
"allow_file_upload": null,
"criteria": [ "criteria": [
{ {
"order_num": 0, "order_num": 0,
...@@ -397,11 +365,6 @@ ...@@ -397,11 +365,6 @@
"title": "Foo", "title": "Foo",
"prompt": "Test prompt", "prompt": "Test prompt",
"rubric_feedback_prompt": "Test Feedback Prompt", "rubric_feedback_prompt": "Test Feedback Prompt",
"start": null,
"due": null,
"submission_start": null,
"submission_due": null,
"allow_file_upload": null,
"criteria": [ "criteria": [
{ {
"order_num": 0, "order_num": 0,
...@@ -457,11 +420,6 @@ ...@@ -457,11 +420,6 @@
"title": "Foo", "title": "Foo",
"prompt": "Test prompt", "prompt": "Test prompt",
"rubric_feedback_prompt": "Test Feedback Prompt", "rubric_feedback_prompt": "Test Feedback Prompt",
"start": null,
"due": null,
"submission_start": null,
"submission_due": null,
"allow_file_upload": null,
"criteria": [ "criteria": [
{ {
"order_num": 2, "order_num": 2,
...@@ -536,11 +494,6 @@ ...@@ -536,11 +494,6 @@
"title": "Foo", "title": "Foo",
"prompt": "Test prompt", "prompt": "Test prompt",
"rubric_feedback_prompt": "Test Feedback Prompt", "rubric_feedback_prompt": "Test Feedback Prompt",
"start": null,
"due": null,
"submission_start": null,
"submission_due": null,
"allow_file_upload": null,
"criteria": [ "criteria": [
{ {
"order_num": 0, "order_num": 0,
...@@ -606,9 +559,7 @@ ...@@ -606,9 +559,7 @@
"rubric_feedback_prompt": "Test Feedback Prompt", "rubric_feedback_prompt": "Test Feedback Prompt",
"start": "2010-04-01T00:00:00", "start": "2010-04-01T00:00:00",
"due": "2030-05-01T00:00:00", "due": "2030-05-01T00:00:00",
"submission_start": null,
"submission_due": "2020-04-15T00:00:00", "submission_due": "2020-04-15T00:00:00",
"allow_file_upload": null,
"criteria": [ "criteria": [
{ {
"order_num": 0, "order_num": 0,
...@@ -672,11 +623,6 @@ ...@@ -672,11 +623,6 @@
"title": "Foo", "title": "Foo",
"prompt": "Test prompt", "prompt": "Test prompt",
"rubric_feedback_prompt": "Test Feedback Prompt", "rubric_feedback_prompt": "Test Feedback Prompt",
"start": null,
"due": null,
"submission_start": null,
"submission_due": null,
"allow_file_upload": null,
"criteria": [ "criteria": [
{ {
"order_num": 0, "order_num": 0,
...@@ -739,11 +685,6 @@ ...@@ -739,11 +685,6 @@
"title": "Foo", "title": "Foo",
"prompt": "Test prompt", "prompt": "Test prompt",
"rubric_feedback_prompt": "Test Feedback Prompt", "rubric_feedback_prompt": "Test Feedback Prompt",
"start": null,
"due": null,
"submission_start": null,
"submission_due": null,
"allow_file_upload": null,
"criteria": [ "criteria": [
{ {
"order_num": 0, "order_num": 0,
...@@ -806,11 +747,6 @@ ...@@ -806,11 +747,6 @@
"title": "Foo", "title": "Foo",
"prompt": "Test prompt", "prompt": "Test prompt",
"rubric_feedback_prompt": "Test Feedback Prompt", "rubric_feedback_prompt": "Test Feedback Prompt",
"start": null,
"due": null,
"submission_start": null,
"submission_due": null,
"allow_file_upload": null,
"criteria": [ "criteria": [
{ {
"order_num": 0, "order_num": 0,
...@@ -865,11 +801,6 @@ ...@@ -865,11 +801,6 @@
"title": "Foo", "title": "Foo",
"prompt": "Test prompt", "prompt": "Test prompt",
"rubric_feedback_prompt": "Test Feedback Prompt", "rubric_feedback_prompt": "Test Feedback Prompt",
"start": null,
"due": null,
"submission_start": null,
"submission_due": null,
"allow_file_upload": null,
"criteria": [ "criteria": [
{ {
"order_num": 0, "order_num": 0,
...@@ -970,11 +901,6 @@ ...@@ -970,11 +901,6 @@
"title": "Foo", "title": "Foo",
"prompt": "Test prompt", "prompt": "Test prompt",
"rubric_feedback_prompt": "Test Feedback Prompt", "rubric_feedback_prompt": "Test Feedback Prompt",
"start": null,
"due": null,
"submission_start": null,
"submission_due": null,
"allow_file_upload": null,
"criteria": [ "criteria": [
{ {
"order_num": 0, "order_num": 0,
...@@ -1093,11 +1019,6 @@ ...@@ -1093,11 +1019,6 @@
"title": "Foo", "title": "Foo",
"prompt": "Test prompt", "prompt": "Test prompt",
"rubric_feedback_prompt": "Test Feedback Prompt", "rubric_feedback_prompt": "Test Feedback Prompt",
"start": null,
"due": null,
"submission_start": null,
"submission_due": null,
"allow_file_upload": null,
"criteria": [ "criteria": [
{ {
"order_num": 0, "order_num": 0,
...@@ -1194,10 +1115,6 @@ ...@@ -1194,10 +1115,6 @@
"prompt": "Test prompt", "prompt": "Test prompt",
"rubric_feedback_prompt": "Test Feedback Prompt", "rubric_feedback_prompt": "Test Feedback Prompt",
"allow_file_upload": true, "allow_file_upload": true,
"start": null,
"due": null,
"submission_start": null,
"submission_due": null,
"criteria": [ "criteria": [
{ {
"order_num": 0, "order_num": 0,
......
...@@ -450,5 +450,89 @@
"</rubric>",
"</openassessment>"
]
},
"leaderboard_num_zero": {
"xml": [
"<openassessment leaderboard_show=\"0\">",
"<title>Foo</title>",
"<assessments>",
"<assessment name=\"peer-assessment\" start=\"2014-02-27T09:46:28\" due=\"2014-03-01T00:00:00\" must_grade=\"5\" must_be_graded_by=\"3\" />",
"<assessment name=\"self-assessment\" start=\"2014-04-01T00:00:00\" due=\"2014-06-01T00:00:00\" />",
"</assessments>",
"<rubric>",
"<prompt>Test prompt</prompt>",
"<criterion>",
"<name>Test criterion</name>",
"<prompt>Test criterion prompt</prompt>",
"<option points=\"0\"><name>No</name><explanation>No explanation</explanation></option>",
"<option points=\"2\"><name>Yes</name><explanation>Yes explanation</explanation></option>",
"</criterion>",
"</rubric>",
"</openassessment>"
]
},
"leaderboard_num_negative": {
"xml": [
"<openassessment leaderboard_show=\"-1\">",
"<title>Foo</title>",
"<assessments>",
"<assessment name=\"peer-assessment\" start=\"2014-02-27T09:46:28\" due=\"2014-03-01T00:00:00\" must_grade=\"5\" must_be_graded_by=\"3\" />",
"<assessment name=\"self-assessment\" start=\"2014-04-01T00:00:00\" due=\"2014-06-01T00:00:00\" />",
"</assessments>",
"<rubric>",
"<prompt>Test prompt</prompt>",
"<criterion>",
"<name>Test criterion</name>",
"<prompt>Test criterion prompt</prompt>",
"<option points=\"0\"><name>No</name><explanation>No explanation</explanation></option>",
"<option points=\"2\"><name>Yes</name><explanation>Yes explanation</explanation></option>",
"</criterion>",
"</rubric>",
"</openassessment>"
]
},
"leaderboard_num_too_high": {
"xml": [
"<openassessment leaderboard_show=\"101\">",
"<title>Foo</title>",
"<assessments>",
"<assessment name=\"peer-assessment\" start=\"2014-02-27T09:46:28\" due=\"2014-03-01T00:00:00\" must_grade=\"5\" must_be_graded_by=\"3\" />",
"<assessment name=\"self-assessment\" start=\"2014-04-01T00:00:00\" due=\"2014-06-01T00:00:00\" />",
"</assessments>",
"<rubric>",
"<prompt>Test prompt</prompt>",
"<criterion>",
"<name>Test criterion</name>",
"<prompt>Test criterion prompt</prompt>",
"<option points=\"0\"><name>No</name><explanation>No explanation</explanation></option>",
"<option points=\"2\"><name>Yes</name><explanation>Yes explanation</explanation></option>",
"</criterion>",
"</rubric>",
"</openassessment>"
]
},
"leaderboard_num_not_integer": {
"xml": [
"<openassessment leaderboard_show=\"not_an_int\">",
"<title>Foo</title>",
"<assessments>",
"<assessment name=\"peer-assessment\" start=\"2014-02-27T09:46:28\" due=\"2014-03-01T00:00:00\" must_grade=\"5\" must_be_graded_by=\"3\" />",
"<assessment name=\"self-assessment\" start=\"2014-04-01T00:00:00\" due=\"2014-06-01T00:00:00\" />",
"</assessments>",
"<rubric>",
"<prompt>Test prompt</prompt>",
"<criterion>",
"<name>Test criterion</name>",
"<prompt>Test criterion prompt</prompt>",
"<option points=\"0\"><name>No</name><explanation>No explanation</explanation></option>",
"<option points=\"2\"><name>Yes</name><explanation>Yes explanation</explanation></option>",
"</criterion>",
"</rubric>",
"</openassessment>"
]
}
}
...@@ -202,6 +202,28 @@ class TestGrade(XBlockHandlerTestCase):
self.assertIn(u'Peer 2: ฝﻉɭɭ ɗѻกﻉ!', resp.decode('utf-8'))
self.assertIn(u'Peer 2: ƒαιя נσв', resp.decode('utf-8'))
@scenario('data/grade_scenario.xml', user_id='Bob')
def test_assessment_does_not_match_rubric(self, xblock):
# Get to the grade complete section
self._create_submission_and_assessments(
xblock, self.SUBMISSION, self.PEERS, self.ASSESSMENTS, self.ASSESSMENTS[0]
)
# Change the problem definition so it no longer
# matches the assessments. This should never happen
# for a student (since we prevent authors from doing this post-release),
# but it may happen if a course author has submitted
# an assessment for a problem before it was published,
# or if course authors mess around with course import.
xblock.rubric_criteria[0]["name"] = "CHANGED NAME!"
# Expect that the page renders without an error
# It won't show the assessment criterion that changed
# (since it's not part of the original assessment),
# but at least it won't display an error.
resp = self.request(xblock, 'render_grade', json.dumps({}))
self.assertGreater(len(resp), 0)
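The graceful rendering this test asserts comes down to ignoring assessment parts whose criterion no longer exists in the rubric. A minimal sketch of that filtering idea follows; the helper name and dict shapes are assumptions for illustration, not the actual grade mixin code.
def parts_matching_rubric(assessment_parts, rubric_criteria):
    # Keep only parts whose criterion still appears in the (possibly
    # edited) rubric; the key names here are assumed for illustration.
    valid_names = set(criterion['name'] for criterion in rubric_criteria)
    return [
        part for part in assessment_parts
        if part['option']['criterion']['name'] in valid_names
    ]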
@ddt.file_data('data/waiting_scenarios.json')
@scenario('data/grade_waiting_scenario.xml', user_id='Omar')
def test_grade_waiting(self, xblock, data):
...
# -*- coding: utf-8 -*-
"""
Tests for leaderboard handlers in Open Assessment XBlock.
"""
import json
import mock
from django.core.cache import cache
from submissions import api as sub_api
from .base import XBlockHandlerTransactionTestCase, scenario
class TestLeaderboardRender(XBlockHandlerTransactionTestCase):
@scenario('data/basic_scenario.xml')
def test_no_leaderboard(self, xblock):
# Since there's no leaderboard set in the problem XML,
# it should not be visible
self._assert_leaderboard_visible(xblock, False)
@scenario('data/leaderboard_unavailable.xml')
def test_unavailable(self, xblock):
# Start date is in the future for this scenario
self._assert_path_and_context(
xblock,
'openassessmentblock/leaderboard/oa_leaderboard_waiting.html',
{}
)
self._assert_leaderboard_visible(xblock, True)
@scenario('data/leaderboard_show.xml')
def test_show_no_submissions(self, xblock):
# No submissions created yet, so the leaderboard shouldn't display any scores
self._assert_scores(xblock, [])
self._assert_leaderboard_visible(xblock, True)
@scenario('data/leaderboard_show.xml')
def test_show_submissions(self, xblock):
# Create some submissions (but fewer than the max that can be shown)
self._create_submissions_and_scores(xblock, [
("test answer 1", 1),
("test answer 2", 2)
])
self._assert_scores(xblock, [
{"content": "test answer 2", "score": 2},
{"content": "test answer 1", "score": 1}
])
self._assert_leaderboard_visible(xblock, True)
# Since leaderboard results are cached, we need to clear
# the cache in order to see the new scores.
cache.clear()
# Create more submissions than the max
self._create_submissions_and_scores(xblock, [
("test answer 3", 0),
("test answer 4", 10),
("test answer 5", 3)
])
self._assert_scores(xblock, [
{"content": "test answer 4", "score": 10},
{"content": "test answer 5", "score": 3},
{"content": "test answer 2", "score": 2}
])
self._assert_leaderboard_visible(xblock, True)
@scenario('data/leaderboard_show.xml')
def test_no_text_key_submission(self, xblock):
# Instead of using the default submission as a dict with "text",
# make the submission a string.
self._create_submissions_and_scores(xblock, [("test answer", 1)], submission_key=None)
# It should still work
self._assert_scores(xblock, [
{"content": "test answer", "score": 1}
])
@scenario('data/leaderboard_show.xml')
def test_non_text_submission(self, xblock):
# Create a non-text submission (the submission dict doesn't contain "text")
self._create_submissions_and_scores(xblock, [("s3key", 1)], submission_key="file_key")
# Expect that we default to an empty string for content
self._assert_scores(xblock, [
{"content": "", "score": 1}
])
def _create_submissions_and_scores(
self, xblock, submissions_and_scores,
submission_key="text", points_possible=10
):
"""
Create submissions and scores that should be displayed by the leaderboard.
Args:
xblock (OpenAssessmentBlock)
submissions_and_scores (list): List of `(submission, score)` tuples, where
`submission` is the essay text (string) and `score` is the integer
number of points earned.
Keyword Args:
points_possible (int): The total number of points possible for this problem
submission_key (string): The key to use in the submission dict. If None, use
the submission value itself instead of embedding it in a dictionary.
"""
for num, (submission, points_earned) in enumerate(submissions_and_scores):
# Assign a unique student ID
# These aren't displayed by the leaderboard, so we can set them
# to anything without affecting the test.
student_item = xblock.get_student_item_dict()
student_item['student_id'] = "student {num}".format(num=num)
if submission_key is not None:
answer = { submission_key: submission }
else:
answer = submission
# Create a submission
sub = sub_api.create_submission(student_item, answer)
# Create a score for the submission
sub_api.set_score(sub['uuid'], points_earned, points_possible)
def _assert_scores(self, xblock, scores):
"""
Check that the leaderboard displays the expected scores.
Args:
xblock (OpenAssessmentBlock)
scores (list): The scores displayed by the leaderboard, each of which
is a dictionary with keys 'content' (the submission text)
and 'score' (the integer number of points earned)
"""
self._assert_path_and_context(
xblock,
'openassessmentblock/leaderboard/oa_leaderboard_show.html',
{
'topscores': scores
},
workflow_status='done'
)
def _assert_path_and_context(self, xblock, expected_path, expected_context, workflow_status=None):
"""
Render the leaderboard and verify:
1) that the correct template and context were used
2) that the rendering occurred without an error
Args:
xblock (OpenAssessmentBlock): The XBlock under test.
expected_path (str): The expected template path.
expected_context (dict): The expected template context.
Kwargs:
workflow_status (str): If provided, simulate this status from the workflow API.
Raises:
AssertionError
"""
if workflow_status is not None:
xblock.get_workflow_info = mock.Mock(return_value={ 'status': workflow_status })
if workflow_status == 'done':
path, context = xblock.render_leaderboard_complete(xblock.get_student_item_dict())
else:
path, context = xblock.render_leaderboard_incomplete()
self.assertEqual(path, expected_path)
self.assertEqual(context, expected_context)
# Verify that we render without error
resp = self.request(xblock, 'render_leaderboard', json.dumps({}))
self.assertGreater(len(resp), 0)
def _assert_leaderboard_visible(self, xblock, is_visible):
"""
Check that the leaderboard is displayed in the student view.
"""
fragment = self.runtime.render(xblock, "student_view")
has_leaderboard = 'openassessment__leaderboard' in fragment.body_html()
self.assertEqual(has_leaderboard, is_visible)
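For reference, the _create_submissions_and_scores helper above boils down to two calls into the submissions API; a standalone sketch with placeholder student-item values:
from submissions import api as sub_api

# Placeholder student item; the test derives this from the XBlock instead.
student_item = {
    'student_id': 'student 0',
    'course_id': 'test/1/1',
    'item_id': 'item',
    'item_type': 'openassessment',
}
sub = sub_api.create_submission(student_item, {'text': 'test answer'})
sub_api.set_score(sub['uuid'], 8, 10)  # 8 points earned out of 10 possible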
...@@ -22,6 +22,17 @@ class SubmissionTest(XBlockHandlerTestCase):
self.assertTrue(resp[0])
@scenario('data/basic_scenario.xml', user_id='Bob')
def test_submit_answer_too_long(self, xblock):
# The maximum answer length is 100K once the answer has been JSON-encoded
long_submission = json.dumps({
'submission': 'longcat is long ' * 100000
})
resp = self.request(xblock, 'submit', long_submission, response_format='json')
self.assertFalse(resp[0])
self.assertEqual(resp[1], "EANSWERLENGTH")
self.assertIsNot(resp[2], None)
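The check this test exercises applies to the size of the JSON-encoded answer. A plausible sketch of that validation follows; the constant name is an assumption, since the test only establishes the 100K figure.
import json

MAX_ANSWER_LENGTH = 100000  # assumed constant name; the 100K limit comes from the comment above

def answer_too_long(answer):
    # Measure the answer the same way the test does: after JSON encoding.
    return len(json.dumps(answer)) > MAX_ANSWER_LENGTH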
@scenario('data/basic_scenario.xml', user_id='Bob')
def test_submission_multisubmit_failure(self, xblock):
# We don't care about return value of first one
self.request(xblock, 'submit', self.SUBMISSION, response_format='json')
...@@ -44,7 +55,7 @@ class SubmissionTest(XBlockHandlerTestCase):
@scenario('data/basic_scenario.xml', user_id='Bob')
@patch.object(sub_api, 'create_submission')
def test_submission_API_failure(self, xblock, mock_submit):
mock_submit.side_effect = SubmissionRequestError("Cat on fire.")
mock_submit.side_effect = SubmissionRequestError(msg="Cat on fire.")
resp = self.request(xblock, 'submit', self.SUBMISSION, response_format='json')
self.assertFalse(resp[0])
self.assertEqual(resp[1], "EBADFORM")
...@@ -65,7 +76,7 @@ class SubmissionTest(XBlockHandlerTestCase):
resp = self.request(xblock, 'submit', self.SUBMISSION, response_format='json')
self.assertFalse(resp[0])
self.assertEqual(resp[1], "ENOPREVIEW")
self.assertEqual(resp[2], "To submit a response, view this component in Preview or Live mode.")
self.assertIsNot(resp[2], None)
@scenario('data/over_grade_scenario.xml', user_id='Alice')
def test_closed_submissions(self, xblock):
...
...@@ -96,18 +96,18 @@ class TestSerializeContent(TestCase):
"""
self.oa_block = mock.MagicMock(OpenAssessmentBlock)
def _configure_xblock(self, data):
self.oa_block.title = data['title']
self.oa_block.prompt = data['prompt']
self.oa_block.rubric_feedback_prompt = data['rubric_feedback_prompt']
self.oa_block.start = _parse_date(data['start'])
self.oa_block.due = _parse_date(data['due'])
self.oa_block.submission_start = data['submission_start']
self.oa_block.submission_due = data['submission_due']
self.oa_block.rubric_criteria = data['criteria']
self.oa_block.rubric_assessments = data['assessments']
self.oa_block.allow_file_upload = data['allow_file_upload']
self.oa_block.title = data.get('title', '')
self.oa_block.prompt = data.get('prompt')
self.oa_block.rubric_feedback_prompt = data.get('rubric_feedback_prompt')
self.oa_block.start = _parse_date(data.get('start'))
self.oa_block.due = _parse_date(data.get('due'))
self.oa_block.submission_start = data.get('submission_start')
self.oa_block.submission_due = data.get('submission_due')
self.oa_block.rubric_criteria = data.get('criteria', copy.deepcopy(self.BASIC_CRITERIA))
self.oa_block.rubric_assessments = data.get('assessments', copy.deepcopy(self.BASIC_ASSESSMENTS))
self.oa_block.allow_file_upload = data.get('allow_file_upload')
self.oa_block.leaderboard_show = data.get('leaderboard_show', 0)
@ddt.file_data('data/serialize.json')
def test_serialize(self, data):
...@@ -158,7 +158,7 @@ class TestSerializeContent(TestCase):
self._configure_xblock(data)
xml_str = serialize_rubric_to_xml_str(self.oa_block)
self.assertIn("<rubric>", xml_str)
if data['prompt']:
if data.get('prompt'):
self.assertNotIn(data['prompt'], xml_str)
@ddt.file_data('data/serialize.json')
...@@ -176,12 +176,7 @@ class TestSerializeContent(TestCase):
self.assertIn(data['assessments'][0]['name'], xml_str)
def test_mutated_criteria_dict(self):
self.oa_block.title = "Test title"
self._configure_xblock({})
self.oa_block.rubric_assessments = self.BASIC_ASSESSMENTS
self.oa_block.start = None
self.oa_block.due = None
self.oa_block.submission_start = None
self.oa_block.submission_due = None
# We have to be really permissive with the data we'll accept.
# If the data we're retrieving is somehow corrupted,
...@@ -201,12 +196,7 @@
self.fail(msg)
def test_mutated_assessments_dict(self):
self.oa_block.title = "Test title"
self._configure_xblock({})
self.oa_block.rubric_criteria = self.BASIC_CRITERIA
self.oa_block.start = None
self.oa_block.due = None
self.oa_block.submission_start = None
self.oa_block.submission_due = None
for assessment_dict in self.BASIC_ASSESSMENTS:
for mutated_dict in self._dict_mutations(assessment_dict):
...@@ -219,15 +209,9 @@
msg = "Could not parse mutated assessment dict {assessment}\n{ex}".format(assessment=mutated_dict, ex=ex)
self.fail(msg)
@ddt.data("title", "prompt", "start", "due", "submission_due", "submission_start") @ddt.data("title", "prompt", "start", "due", "submission_due", "submission_start", "leaderboard_show")
def test_mutated_field(self, field): def test_mutated_field(self, field):
self.oa_block.rubric_criteria = self.BASIC_CRITERIA self._configure_xblock({})
self.oa_block.rubric_assessments = self.BASIC_ASSESSMENTS
self.oa_block.start = None
self.oa_block.due = None
self.oa_block.submission_start = None
self.oa_block.submission_due = None
self.oa_block.allow_file_upload = None
for mutated_value in [0, u"\u9282", None]:
setattr(self.oa_block, field, mutated_value)
...@@ -245,13 +229,7 @@
# Configure rubric criteria and options with no names or labels
# This *should* never happen, but if it does, recover gracefully
# by assigning unique names and empty labels
self.oa_block.rubric_criteria = copy.deepcopy(self.BASIC_CRITERIA)
self._configure_xblock({})
self.oa_block.rubric_assessments = self.BASIC_ASSESSMENTS
self.oa_block.start = None
self.oa_block.due = None
self.oa_block.submission_start = None
self.oa_block.submission_due = None
self.oa_block.allow_file_upload = None
for criterion in self.oa_block.rubric_criteria:
del criterion['name']
...@@ -406,27 +384,12 @@ class TestParseAssessmentsFromXml(TestCase):
@ddt.ddt
class TestUpdateFromXml(TestCase):
class TestParseFromXml(TestCase):
"""
Test deserialization of OpenAssessment XBlock content from XML.
"""
maxDiff = None
def setUp(self):
"""
Mock the OA XBlock.
"""
self.oa_block = mock.MagicMock(OpenAssessmentBlock)
self.oa_block.title = ""
self.oa_block.prompt = ""
self.oa_block.rubric_criteria = dict()
self.oa_block.rubric_assessments = list()
self.oa_block.start = dt.datetime(2000, 1, 1).replace(tzinfo=pytz.utc)
self.oa_block.due = dt.datetime(3000, 1, 1).replace(tzinfo=pytz.utc)
self.oa_block.submission_start = "2000-01-01T00:00:00"
self.oa_block.submission_due = "2000-01-01T00:00:00"
@ddt.file_data('data/update_from_xml.json')
def test_parse_from_xml(self, data):
...@@ -434,12 +397,34 @@ class TestUpdateFromXml(TestCase):
config = parse_from_xml_str("".join(data['xml']))
# Check that the contents of the modified XBlock are correct
self.assertEqual(config['title'], data['title'])
self.assertEqual(config['prompt'], data['prompt'])
self.assertEqual(config['submission_start'], data['submission_start'])
self.assertEqual(config['submission_due'], data['submission_due'])
self.assertEqual(config['rubric_criteria'], data['criteria'])
self.assertEqual(config['rubric_assessments'], data['assessments'])
expected_fields = [
'title',
'prompt',
'start',
'due',
'submission_start',
'submission_due',
'criteria',
'assessments',
'allow_file_upload',
'leaderboard_show'
]
for field_name in expected_fields:
if field_name in data:
actual = config[field_name]
expected = data[field_name]
if field_name in ['start', 'due']:
expected = _parse_date(expected)
self.assertEqual(
actual, expected,
msg=u"Wrong value for '{key}': was {actual} but expected {expected}".format(
key=field_name,
actual=repr(actual),
expected=repr(expected)
)
)
@ddt.file_data('data/update_from_xml_error.json')
def test_parse_from_xml_error(self, data):
...
...@@ -6,6 +6,7 @@ import lxml.etree as etree
import pytz
import dateutil.parser
import defusedxml.ElementTree as safe_etree
from submissions.api import MAX_TOP_SUBMISSIONS
class UpdateFromXmlError(Exception):
...@@ -605,6 +606,10 @@ def serialize_content_to_xml(oa_block, root):
if oa_block.submission_due is not None:
root.set('submission_due', unicode(oa_block.submission_due))
# Set leaderboard show
if oa_block.leaderboard_show:
root.set('leaderboard_show', unicode(oa_block.leaderboard_show))
# Allow file upload
if oa_block.allow_file_upload is not None:
root.set('allow_file_upload', unicode(oa_block.allow_file_upload))
...@@ -745,6 +750,21 @@ def parse_from_xml(root):
else:
rubric = parse_rubric_xml(rubric_el)
# Retrieve the leaderboard if it exists, otherwise set it to 0
leaderboard_show = 0
if 'leaderboard_show' in root.attrib:
try:
leaderboard_show = int(root.attrib['leaderboard_show'])
if leaderboard_show < 1:
raise UpdateFromXmlError('The leaderboard must have a positive integer value.')
if leaderboard_show > MAX_TOP_SUBMISSIONS:
msg = 'The number of leaderboard scores must be less than or equal to {max_num}'.format(
max_num=MAX_TOP_SUBMISSIONS
)
raise UpdateFromXmlError(msg)
except (TypeError, ValueError):
raise UpdateFromXmlError('The leaderboard must have an integer value.')
# Retrieve the assessments
assessments_el = root.find('assessments')
if assessments_el is None:
...@@ -760,10 +780,10 @@ def parse_from_xml(root):
'rubric_feedback_prompt': rubric['feedbackprompt'],
'submission_start': submission_start,
'submission_due': submission_due,
'allow_file_upload': allow_file_upload
'allow_file_upload': allow_file_upload,
'leaderboard_show': leaderboard_show
}
def parse_from_xml_str(xml):
"""
Create a dictionary for the OpenAssessment XBlock's content from an XML
...
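A usage sketch for the new leaderboard_show handling above, reusing the fixture shape from the test data earlier in this commit: values from 1 through MAX_TOP_SUBMISSIONS parse cleanly, while zero, negative, oversized, and non-integer values raise UpdateFromXmlError.
config = parse_from_xml_str(
    '<openassessment leaderboard_show="10">'
    '<title>Foo</title>'
    '<assessments><assessment name="self-assessment" /></assessments>'
    '<rubric><prompt>Test prompt</prompt>'
    '<criterion><name>Test criterion</name><prompt>Test criterion prompt</prompt>'
    '<option points="0"><name>No</name><explanation>No explanation</explanation></option>'
    '<option points="2"><name>Yes</name><explanation>Yes explanation</explanation></option>'
    '</criterion></rubric></openassessment>'
)
assert config['leaderboard_show'] == 10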
...@@ -6,10 +6,10 @@
git+https://github.com/edx/XBlock.git@fc5fea25c973ec66d8db63cf69a817ce624f5ef5#egg=XBlock
git+https://github.com/edx/xblock-sdk.git@643900aadcb18aaeb7fe67271ca9dbf36e463ee6#egg=xblock-sdk
edx-submissions==0.0.3
edx-submissions==0.0.6
# Third Party Requirements
boto==2.13.3
boto>=2.30.0,<3.0.0
celery==3.0.19
defusedxml==0.4.1
dogapi==1.2.1
...@@ -27,4 +27,4 @@ South==0.7.6
voluptuous==0.8.5
# AI grading
git+https://github.com/edx/ease.git@f9f47fb6b5c7c8b6c3360efa72eb56561e1a03b0#egg=ease
git+https://github.com/edx/ease.git@bcb36e84b5ffa4ac00813577079dd6eef4fff566#egg=ease
-r base.txt
locustio==0.7.0
loremipsum==1.0.2
pyzmq==14.0.1
bok_choy==0.3.1
nose==1.3.0
...@@ -5,7 +5,6 @@ ddt==0.8.0
django-nose==1.2
mock==1.0.1
moto==0.2.22
nose==1.3.0
coverage==3.7.1
pep8==1.4.6
pylint<1.0
...
...@@ -6,4 +6,4 @@ lxml==3.0.1
nltk==2.0.4
numpy==1.6.2
scikit-learn==0.12.1
scipy==0.11.0
scipy==0.14.0
##############################################################################
#
# Run the acceptance tests in Jenkins
#
# This assumes that:
# * Jenkins has Python and virtualenv installed
# * Jenkins has the SauceConnect plugin installed.
# * The Jenkins job provides the environment variables
# - BASIC_AUTH_USER: The basic auth username for the sandbox.
# - BASIC_AUTH_PASSWORD: The basic auth password for the sandbox.
# - TEST_HOST: The hostname of the sandbox (e.g. test.example.com)
#
##############################################################################
set -x
if [ -z "$BASIC_AUTH_USER" ]; then
echo "Need to set BASIC_AUTH_USER env variable"
exit 1;
fi
if [ -z "$BASIC_AUTH_PASSWORD" ]; then
echo "Need to set BASIC_AUTH_PASSWORD env variable"
exit 1;
fi
if [ -z "$TEST_HOST" ]; then
echo "Need to set TEST_HOST env variable"
exit 1;
fi
export BASE_URL="https://${BASIC_AUTH_USER}:${BASIC_AUTH_PASSWORD}@${TEST_HOST}"
virtualenv venv
source venv/bin/activate
pip install -r requirements/test-acceptance.txt
cd test/acceptance
python tests.py
...@@ -22,6 +22,7 @@ class OpenAssessmentPage(object):
'course_id', 'base_url', 'base_handler_url',
'rubric_options', 'render_step_handlers'
])
PROBLEMS = {
'peer_then_self': ProblemFixture(
course_id="ora2/1/1",
...@@ -49,15 +50,17 @@ class OpenAssessmentPage(object):
)
}
def __init__(self, client, problem_name):
def __init__(self, hostname, client, problem_name):
"""
Initialize the page to use specified HTTP client.
Args:
hostname (unicode): The hostname (used for the referer HTTP header)
client (HttpSession): The HTTP client to use.
problem_name (unicode): Name of the problem (one of the keys in `OpenAssessmentPage.PROBLEMS`)
"""
self.hostname = hostname
self.client = client
self.problem_fixture = self.PROBLEMS[problem_name]
self.logged_in = False
...@@ -66,12 +69,16 @@ class OpenAssessmentPage(object):
if 'BASIC_AUTH_USER' in os.environ and 'BASIC_AUTH_PASSWORD' in os.environ:
self.client.auth = (os.environ['BASIC_AUTH_USER'], os.environ['BASIC_AUTH_PASSWORD'])
def log_in(self):
"""
Log in as a unique user with access to the XBlock(s) under test.
"""
resp = self.client.get("auto_auth", params={'course_id': self.problem_fixture.course_id}, verify=False)
resp = self.client.get(
"auto_auth",
params={'course_id': self.problem_fixture.course_id},
verify=False,
timeout=120
)
self.logged_in = (resp.status_code == 200)
return self
...@@ -162,10 +169,10 @@ class OpenAssessmentPage(object):
'Content-type': 'application/json',
'Accept': 'application/json',
'X-CSRFToken': self.client.cookies.get('csrftoken', ''),
'Referer': self.hostname
}
class OpenAssessmentTasks(TaskSet):
"""
Virtual user interactions with the OpenAssessment XBlock.
...@@ -176,6 +183,7 @@ class OpenAssessmentTasks(TaskSet):
Initialize the task set.
"""
super(OpenAssessmentTasks, self).__init__(*args, **kwargs)
self.hostname = self.locust.host
self.page = None
@task
...@@ -184,7 +192,7 @@ class OpenAssessmentTasks(TaskSet):
Test the peer-->self workflow.
"""
if self.page is None:
self.page = OpenAssessmentPage(self.client, 'peer_then_self') # pylint: disable=E1101
self.page = OpenAssessmentPage(self.hostname, self.client, 'peer_then_self') # pylint: disable=E1101
self.page.log_in()
if not self.page.logged_in:
...@@ -209,7 +217,7 @@ class OpenAssessmentTasks(TaskSet):
Test example-based assessment only.
"""
if self.page is None:
self.page = OpenAssessmentPage(self.client, 'example_based') # pylint: disable=E1101
self.page = OpenAssessmentPage(self.hostname, self.client, 'example_based') # pylint: disable=E1101
self.page.log_in()
if not self.page.logged_in:
...
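The hostname threaded through to the Referer header comes from Locust's host setting. A sketch of the locustfile wiring under locustio 0.7 conventions follows; the user class name, wait times, and file name are illustrative.
from locust import HttpLocust

class OpenAssessmentUser(HttpLocust):
    task_set = OpenAssessmentTasks  # the TaskSet defined above
    min_wait = 1000  # illustrative wait times, in milliseconds
    max_wait = 5000

# Run with, e.g.: locust -f openassessment_tasks.py --host=https://test.example.com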