Commit 0ff06107 by Braden MacDonald

Dashboard XBlock

parent 700f1834
@@ -23,6 +23,8 @@ It supports:
next step. Provides a link to the next step to the student.
* **Tables**, which present the student's answers to free-form
questions in a concise way. Supports custom headers.
* **Dashboards**, for displaying a summary of the student's answers
to multiple choice questions. [Details](doc/Dashboard.md)
The screenshot shows an example of a problem builder block containing a
free-form question, two MCQs and one MRQ.
@@ -55,183 +57,7 @@ settings.
Usage
-----
See [Usage Instructions](doc/Usage.md)

When you add the `Problem Builder` component to a course in Studio, the
built-in editing tools guide you through the process of configuring the
block and adding individual questions.
### Problem Builder modes
There are two mentoring modes available:
* *standard*: Traditional mentoring. All questions are displayed on the
page and submitted at the same time. The students get some tips and
feedback about their answers. This is the default mode.
* *assessment*: Questions are displayed and submitted one by one. The
students don't get tips or feedback, but only know if their answer was
correct. Assessment mode comes with a default `max_attempts` of `2`.
Below are some LMS screenshots of a problem builder block in assessment mode.
Question before submitting an answer:
![Assessment Step 1](doc/img/assessment-1.png)
Question after submitting the correct answer:
![Assessment Step 2](doc/img/assessment-2.png)
Question after submitting a wrong answer:
![Assessment Step 3](doc/img/assessment-3.png)
Score review and the "Try Again" button:
![Assessment Step 4](doc/img/assessment-4.png)
### Free-form Question
Free-form questions are represented by a "Long Answer" component.
Example screenshot before answering the question:
![Answer Initial](doc/img/answer-1.png)
Screenshot after answering the question:
![Answer Complete](doc/img/answer-2.png)
You can add "Long Answer Recap" components to problem builder blocks later on
in the course to provide a read-only view of any answer that the student
entered earlier.
The read-only answer is rendered as a quote in the LMS:
![Answer Read-Only](doc/img/answer-3.png)
### Multiple Choice Questions (MCQ)
Multiple Choice Questions can be added to a problem builder component and
have the following configurable options:
* Question - The question to ask the student
* Message - A feedback message to display to the student after they
have made their choice.
* Weight - The weight is used when computing the total grade/score of
the problem builder block. The larger the weight, the more influence this
question will have on the grade. A value of zero means this question
has no influence on the grade (float, defaults to `1`).
* Correct Choice - Specify which choice(s) are considered correct. If
a student selects a choice that is not indicated as correct here,
the student will get the question wrong.
Using the Studio editor, you can add "Custom Choice" blocks to the MCQ.
Each Custom Choice represents one of the options from which students
will choose their answer.
You can also add "Tip" entries. Each "Tip" must be configured to link
it to one or more of the choices. If the student chooses a choice linked
to a tip, that tip's feedback will be shown.
Screenshot: Before attempting to answer the questions:
![MCQ Initial](doc/img/mcq-1.png)
While attempting to complete the questions:
![MCQ Attempting](doc/img/mcq-2.png)
After successfully completing the questions:
![MCQ Success](doc/img/mcq-3.png)
#### Rating MCQ
When constructing questions where the student rates some topic on a
scale from `1` to `5` (e.g. a Likert scale), you can use the Rating
question type, which includes built-in numbered choices from 1 to 5.
The `Low` and `High` settings specify the text shown next to the
lowest and highest valued choice.
Rating questions are a specialized type of MCQ, and the same
instructions apply. You can also still add "Custom Choice" components
if you want additional choices to be available such as "I don't know".
### Self-assessment Multiple Response Questions (MRQ)
Multiple Response Questions are set up similarly to MCQs. The answers
are rendered as checkboxes. Unlike MCQs where only a single answer can
be selected, MRQs allow multiple answers to be selected at the same
time.
MRQ questions have these configurable settings:
* Question - The question to ask the student
* Required Choices - For any choices selected here, if the student
does *not* select that choice, they will lose marks.
* Ignored Choices - For any choices selected here, the student will
always be considered correct whether they choose this choice or not.
* Message - A feedback message to display to the student after they
have made their choice.
* Weight - The weight is used when computing the total grade/score of
the problem builder block. The larger the weight, the more influence this
question will have on the grade. A value of zero means this question
has no influence on the grade (float, defaults to `1`).
* Hide Result - If set to True, the feedback icons next to each
choice will not be displayed (false by default).
The "Custom Choice" and "Tip" components work the same way as they
do when used with MCQs (see above).
Screenshot - Before attempting to answer the questions:
![MRQ Initial](doc/img/mrq-1.png)
While attempting to answer the questions:
![MRQ Attempt](doc/img/mrq-2.png)
After clicking on the feedback icon next to the "Its bugs" answer:
![MRQ Attempt](doc/img/mrq-3.png)
After successfully completing the questions:
![MRQ Success](doc/img/mrq-4.png)
### Tables
The problem builder table allows you to present answers to multiple
free-form questions in a concise way. Once you create an "Answer
Recap Table" inside a Mentoring component in Studio, you will be
able to add columns to the table. Each column has an optional
"Header" setting that you can use to add a header to that column.
Each column can contain one or more "Answer Recap" elements, as
well as HTML components.
Screenshot:
![Table Screenshot](doc/img/mentoring-table.png)
### Maximum Attempts
You can set the maximum number of attempts allowed for completing the
unit by setting the Max. Attempts option of the Mentoring component.
Before submitting an answer for the first time:
![Max Attempts Before](doc/img/max-attempts-before.png)
After submitting a wrong answer two times:
![Max Attempts Reached](doc/img/max-attempts-reached.png)
### Custom tip popup window size
You can specify the Width and Height attributes of any Tip component to
customize the popup window size. The values of those attributes should
be valid CSS sizes (e.g. `50px`).
Workbench installation and settings
-----------------------------------
......
"Dashboard" Self-Assessment Summary Block
=========================================
A "Dashboard" XBlock provides a concise way to summarize a student's answers to
groups of multiple choice questions.
You configure it like this, by pasting in the `url_name`s of some Problem
Builder ("Mentoring") blocks:
![Screen shot of Dashboard XBlock configuration](img/dashboard-configuration.png)
And it will then look like this (after a student has submitted their answers):
![Screen shot of a Dashboard XBlock](img/dashboard-example.png)
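If you author in OLX rather than Studio, the same configuration can be expressed
as a `pb-dashboard` element whose `mentoring_ids` field holds that list of
`url_name`s. A hypothetical sketch (the IDs are placeholders; the JSON-list
attribute format follows the test scenarios in this commit):

```xml
<!-- Hypothetical url_name values; replace with those of your Problem Builder blocks -->
<pb-dashboard mentoring_ids='["step1_block", "step2_block", "step3_block"]'/>
```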
Color Coding Rules:
-------------------
Authors can add a list of rules (one per line) that apply colors to the various
possible student answer values.
Rules are entered into the dashboard configuration "Color Coding Rules", one
rule per line. The first rule to match a given value will be used to color
that value.
The simplest rule looks like "3: red". With this line, if the student's answer
is 3, it will be shown in red in the report. Colors can be specified using any
valid CSS color value, e.g. "red", "#f00", "#ff0000", "rgb(255,0,0)", etc.
For more advanced rules, you can use an expression in terms of x such as
"x > 3: blue" or "0 <= x < 5: green" (green if x is greater than or equal to
zero but less than five).
You can also just specify a color on a line by itself, which will always match
any value (usually this would be the last line, as a "default" color).
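For example, a complete rule set using the syntax described above might look
like this (the first matching line wins; the bare color on the last line acts
as the default):

```
0 <= x < 2: red
2 <= x < 4: #fc0
x >= 4: rgb(0,128,0)
gray
```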
Visual Representation
---------------------
The Dashboard also supports an optional visual representation. This is a
powerful feature, but setting it up is a bit involved.
The end result is shown below. You can see a diagram, in which a colored arrow
appears as the student works through the various "Steps". Each "Step" is one
mentoring block, which contains several multiple choice questions. Based on the
average value of the student's choices, the step is given a color.
In this example, steps that have not been attempted are grey-green, and steps
that have been attempted are colored in according to the color coding rules.
![Screen shot of visual representation](img/dashboard-visual.png)
Achieving the result shown above requires a set of "stacked" image files (one
for each step), as well as an overlay image (in this case, the overlay image
contains all the text). For coloring to work, the images must be white where
color is desired.
To build this example, the images used look like this:
![Images Used](img/dashboard-visual-instructions.png)
The block was configured to use these images as follows:
![Screen shot of visual representation rule configuration](img/dashboard-visual-config.png)
The **Visual Representation Settings** used to define the visual representation
must be in JSON format. The supported entries are:
* **`"images"`**: A list of image URLs, one per PB block, in the same order as
the 'blocks' list (the list of `url_name`s described above). If the images you
wish to use are on your computer, first upload them to the course's "Files and
Uploads" page. You can then find the URL for each image listed on that page
(use the "Studio" URL, not the "Web" URL).
* All images listed here will be layered on top of each other, and can be
colorized, faded, etc. based on the average value of the student's choices
for the corresponding group of MCQs.
* **`"overlay"`**: (Optional) The URL of an image to be drawn on top of the
layered images, with no effects applied.
* **`"background"`**: (Optional) The URL of an image to be drawn behind the
layered images, with no effects applied.
* **`"width"`**: (Important) The width of the images, in pixels (all images
should be the same size).
* **`"height"`**: (Important) The height of the images, in pixels.
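Putting the entries together, a plausible settings value (mirroring the example
format in this commit's `dashboard_visual.py` docstring; the file names are
placeholders) looks like:

```json
{
    "images": [
        "/static/step1.png",
        "/static/step2.png",
        "/static/step3.png"
    ],
    "background": "/static/background.png",
    "overlay": "/static/overlay.png",
    "width": "500",
    "height": "500"
}
```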
Mentoring Block Usage
=====================
When you add the `Problem Builder` component to a course in Studio, the
built-in editing tools guide you through the process of configuring the
block and adding individual questions.
### Problem Builder modes
There are two mentoring modes available:
* *standard*: Traditional mentoring. All questions are displayed on the
page and submitted at the same time. The students get some tips and
feedback about their answers. This is the default mode.
* *assessment*: Questions are displayed and submitted one by one. The
students don't get tips or feedback, but only know if their answer was
correct. Assessment mode comes with a default `max_attempts` of `2`.
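For reference, a minimal OLX sketch of a block in assessment mode
(`max_attempts` as an attribute is an assumption based on the "Maximum
Attempts" setting described below; the child questions are elided):

```xml
<problem-builder mode="assessment" max_attempts="2">
    <!-- question components (pb-answer, pb-mcq, pb-mrq, ...) go here -->
</problem-builder>
```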
Below are some LMS screenshots of a problem builder block in assessment mode.
Question before submitting an answer:
![Assessment Step 1](img/assessment-1.png)
Question after submitting the correct answer:
![Assessment Step 2](img/assessment-2.png)
Question after submitting a wrong answer:
![Assessment Step 3](img/assessment-3.png)
Score review and the "Try Again" button:
![Assessment Step 4](img/assessment-4.png)
### Free-form Question
Free-form questions are represented by a "Long Answer" component.
Example screenshot before answering the question:
![Answer Initial](img/answer-1.png)
Screenshot after answering the question:
![Answer Complete](img/answer-2.png)
You can add "Long Answer Recap" components to problem builder blocks later on
in the course to provide a read-only view of any answer that the student
entered earlier.
The read-only answer is rendered as a quote in the LMS:
![Answer Read-Only](img/answer-3.png)
### Multiple Choice Questions (MCQ)
Multiple Choice Questions can be added to a problem builder component and
have the following configurable options:
* Question - The question to ask the student
* Message - A feedback message to display to the student after they
have made their choice.
* Weight - The weight is used when computing the total grade/score of
the problem builder block. The larger the weight, the more influence this
question will have on the grade. A value of zero means this question
has no influence on the grade (float, defaults to `1`).
* Correct Choice - Specify which choice(s) are considered correct. If
a student selects a choice that is not indicated as correct here,
the student will get the question wrong.
Using the Studio editor, you can add "Custom Choice" blocks to the MCQ.
Each Custom Choice represents one of the options from which students
will choose their answer.
You can also add "Tip" entries. Each "Tip" must be configured to link
it to one or more of the choices. If the student chooses a choice linked
to a tip, that tip's feedback will be shown, as in the sketch below.
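A hypothetical OLX sketch of an MCQ with custom choices and a tip (element
names follow the test scenarios in this commit; the `values` attribute linking
the tip to a choice is an assumption):

```xml
<pb-mcq name="mcq_example" question="Which option?" correct_choices="2">
    <pb-choice value="1">Option 1</pb-choice>
    <pb-choice value="2">Option 2</pb-choice>
    <!-- Assumed syntax: this tip is shown when the student picks choice 1 -->
    <pb-tip values='["1"]'>Option 1 is not correct; reread the question.</pb-tip>
</pb-mcq>
```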
Screenshot: Before attempting to answer the questions:
![MCQ Initial](img/mcq-1.png)
While attempting to complete the questions:
![MCQ Attempting](img/mcq-2.png)
After successfully completing the questions:
![MCQ Success](img/mcq-3.png)
#### Rating MCQ
When constructing questions where the student rates some topic on a
scale from `1` to `5` (e.g. a Likert scale), you can use the Rating
question type, which includes built-in numbered choices from 1 to 5.
The `Low` and `High` settings specify the text shown next to the
lowest and highest valued choice.
Rating questions are a specialized type of MCQ, and the same
instructions apply. You can also still add "Custom Choice" components
if you want additional choices to be available such as "I don't know".
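A short sketch, following the `pb-rating` scenario used in this commit's tests
(the `low`/`high` attribute names are an assumption based on the `Low` and
`High` settings):

```xml
<!-- Assumed attribute names "low" and "high" for the scale labels -->
<pb-rating name="rating_example" question="How do you rate the course?"
        correct_choices="4,5" low="Not good" high="Excellent">
    <pb-choice value="6">I don't know</pb-choice>
</pb-rating>
```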
### Self-assessment Multiple Response Questions (MRQ)
Multiple Response Questions are set up similarly to MCQs. The answers
are rendered as checkboxes. Unlike MCQs where only a single answer can
be selected, MRQs allow multiple answers to be selected at the same
time.
MRQ questions have these configurable settings:
* Question - The question to ask the student
* Required Choices - For any choices selected here, if the student
does *not* select that choice, they will lose marks.
* Ignored Choices - For any choices selected here, the student will
always be considered correct whether they choose this choice or not.
* Message - A feedback message to display to the student after they
have made their choice.
* Weight - The weight is used when computing the total grade/score of
the problem builder block. The larger the weight, the more influence this
question will have on the grade. A value of zero means this question
has no influence on the grade (float, defaults to `1`).
* Hide Result - If set to True, the feedback icons next to each
choice will not be displayed (false by default).
The "Custom Choice" and "Tip" components work the same way as they
do when used with MCQs (see above).
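For example (mirroring the `pb-mrq` test scenario in this commit;
`required_choices` and `ignored_choices` take comma-separated choice values):

```xml
<pb-mrq name="mrq_example" question="What makes a great MRQ?"
        required_choices="1" ignored_choices="3">
    <pb-choice value="1">Lots of choices</pb-choice>
    <pb-choice value="2">Funny choices</pb-choice>
    <pb-choice value="3">Not sure</pb-choice>
</pb-mrq>
```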
Screenshot - Before attempting to answer the questions:
![MRQ Initial](img/mrq-1.png)
While attempting to answer the questions:
![MRQ Attempt](img/mrq-2.png)
After clicking on the feedback icon next to the "Its bugs" answer:
![MRQ Attempt](img/mrq-3.png)
After successfully completing the questions:
![MRQ Success](img/mrq-4.png)
### Tables
The problem builder table allows you to present answers to multiple
free-form questions in a concise way. Once you create an "Answer
Recap Table" inside a Mentoring component in Studio, you will be
able to add columns to the table. Each column has an optional
"Header" setting that you can use to add a header to that column.
Each column can contain one or more "Answer Recap" elements, as
well as HTML components.
Screenshot:
![Table Screenshot](img/mentoring-table.png)
### Maximum Attempts
You can set the maximum number of attempts allowed for completing the
unit by setting the Max. Attempts option of the Mentoring component.
Before submitting an answer for the first time:
![Max Attempts Before](img/max-attempts-before.png)
After submitting a wrong answer two times:
![Max Attempts Reached](img/max-attempts-reached.png)
### Custom tip popup window size
You can specify the Width and Height attributes of any Tip component to
customize the popup window size. The values of those attributes should
be valid CSS sizes (e.g. `50px`), as in the sketch below.
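A hypothetical sketch (lowercase `width`/`height` attribute names are an
assumption based on the setting names above):

```xml
<!-- Assumed attribute names; the tip would open in a 400x200 pixel popup -->
<pb-tip values='["1"]' width="400px" height="200px">Think about the question again.</pb-tip>
```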
### "Dashboard" Self-Assessment Summary Block
[Instructions for using the "Dashboard" Self-Assessment Summary Block](Dashboard.md)
from .mentoring import MentoringBlock
from .answer import AnswerBlock, AnswerRecapBlock
from .choice import ChoiceBlock
from .dashboard import DashboardBlock
from .mcq import MCQBlock, RatingBlock
from .mrq import MRQBlock
from .message import MentoringMessageBlock
......
@@ -141,13 +141,7 @@ class AnswerBlock(AnswerMixin, StepMixin, StudioEditableXBlockMixin, XBlock):
        enforce_type=True
    )

    editable_fields = ('question', 'name', 'min_characters', 'weight', 'default_from', 'display_name', 'show_title')

    @property
    def display_name_with_default(self):
        if not self.lonely_step:
            return self._(u"Question {number}").format(number=self.step_number)
        return self._(u"Question")

    @lazy
    def student_input(self):
@@ -172,6 +166,7 @@ class AnswerBlock(AnswerMixin, StepMixin, StudioEditableXBlockMixin, XBlock):
        """ Render this XBlock within a mentoring block. """
        context = context or {}
        context['self'] = self
        context['hide_header'] = context.get('hide_header', False) or not self.show_title

        html = loader.render_template('templates/html/answer_editable.html', context)
        fragment = Fragment(html)
......
@@ -44,7 +44,7 @@ class ChoiceBlock(StudioEditableXBlockMixin, XBlock):
    """
    value = String(
        display_name=_("Value"),
        help=_("Value of the choice when selected. Should be unique. Generally you do not need to edit this."),
        scope=Scope.content,
        default="",
    )
@@ -54,7 +54,7 @@ class ChoiceBlock(StudioEditableXBlockMixin, XBlock):
        scope=Scope.content,
        default="",
    )
    editable_fields = ('content', 'value')

    def _(self, text):
        """ translate text """
......
# -*- coding: utf-8 -*-
#
# Copyright (c) 2014-2015 Harvard, edX & OpenCraft
#
# This software's license gives you freedom; you can copy, convey,
# propagate, redistribute and/or modify this program under the terms of
# the GNU Affero General Public License (AGPL) as published by the Free
# Software Foundation (FSF), either version 3 of the License, or (at your
# option) any later version of the AGPL published by the FSF.
#
# This program is distributed in the hope that it will be useful, but
# WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU Affero
# General Public License for more details.
#
# You should have received a copy of the GNU Affero General Public License
# along with this program in a file in the toplevel directory called
# "AGPLv3". If not, see <http://www.gnu.org/licenses/>.
#
"""
Visual Representation of Dashboard State.
Consists of a series of images, layered on top of each other, where the appearance of each layer
can be tied to the average value of the student's response to a particular Problem Builder
block.
For example, each layer can have its color turn green if the student's average value on MCQs in
a specific Problem Builder block was at least 3.
"""
class DashboardVisualData(object):
    """
    Data about the visual representation of a dashboard.
    """
    def __init__(self, blocks, rules, color_for_value, title, desc):
        """
        Construct the data required for the optional visual representation of the dashboard.

        Data format accepted for rules is like:
        {
            "images": [
                "/static/step1.png",
                "/static/step2.png",
                "/static/step3.png",
                "/static/step4.png",
                "/static/step5.png",
                "/static/step6.png",
                "/static/step7.png"
            ],
            "background": "/static/background.png",
            "overlay": "/static/overlay.png",
            "width": "500",
            "height": "500"
        }

        color_for_value is a method that, given a value, returns a color string or None
        """
        # Images is a list of images, one per PB block, in the same order as 'blocks'.
        # All images are rendered layered on top of each other, and can be hidden,
        # shown, colorized, faded, etc. based on the average answer value for that PB block.
        images = rules.get("images", [])
        # Overlay is an optional image drawn on top, with no effects applied
        overlay = rules.get("overlay")
        # Background is an optional image drawn on the bottom, with no effects applied
        background = rules.get("background")
        # Width and height of the image:
        self.width = int(rules.get("width", 400))
        self.height = int(rules.get("height", 400))
        # A unique ID used by the HTML template:
        self.unique_id = id(self)
        # Alternate description for screen reader users:
        self.title = title
        self.desc = desc

        self.layers = []
        if background:
            self.layers.append({"url": background})
        for idx, block in enumerate(blocks):
            if not block.get("has_average"):
                continue  # We only use blocks with numeric averages for the visual representation
            # Now we build the 'layer_data' information to pass on to the template:
            try:
                layer_data = {"url": images[idx], "id": "layer{}".format(idx)}
            except IndexError:
                break
            # Check if a color rule applies:
            layer_data["color"] = color_for_value(block["average"])
            self.layers.append(layer_data)
        if overlay:
            self.layers.append({"url": overlay})
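For illustration, a minimal sketch of driving this class directly, mirroring the unit test later in this commit (the block dicts and rules below are made-up sample data):

```python
# Sample data only: two steps with numeric averages, no background image.
blocks = [
    {"display_name": "Step 1", "mcqs": [], "has_average": True, "average": 1.0},
    {"display_name": "Step 2", "mcqs": [], "has_average": True, "average": 4.0},
]
rules = {"images": ["step1.png", "step2.png"], "overlay": "overlay.png", "width": "500", "height": "500"}

def color_for_value(value):
    # Hypothetical coloring rule: green for averages of 3 or more, no color otherwise.
    return "green" if value >= 3 else None

data = DashboardVisualData(blocks, rules, color_for_value, "Visual Repr", "Description here")
# data.layers is now [step1.png (uncolored), step2.png (green), overlay.png].
```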
@@ -28,6 +28,7 @@ from xblock.validation import ValidationMessage
from xblockutils.resources import ResourceLoader

from .questionnaire import QuestionnaireAbstractBlock
from .sub_api import sub_api, SubmittingXBlockMixin

# Globals ###########################################################
@@ -43,7 +44,7 @@ def _(text):

# Classes ###########################################################

class MCQBlock(SubmittingXBlockMixin, QuestionnaireAbstractBlock):
    """
    An XBlock used to ask multiple-choice questions
    """
@@ -88,6 +89,11 @@ class MCQBlock(QuestionnaireAbstractBlock):
        })

        self.student_choice = submission
        if sub_api:
            # Also send to the submissions API:
            sub_api.create_submission(self.student_item_key, submission)

        result = {
            'submission': submission,
            'status': 'correct' if correct else 'incorrect',
......
@@ -69,7 +69,10 @@ class MRQBlock(QuestionnaireAbstractBlock):
        default=[],
    )
    hide_results = Boolean(display_name="Hide results", scope=Scope.content, default=False)

    editable_fields = (
        'question', 'required_choices', 'ignored_choices', 'message', 'display_name',
        'show_title', 'weight', 'hide_results',
    )

    def describe_choice_correctness(self, choice_value):
        if choice_value in self.required_choices:
......
.pb-dashboard table {
    max-width: 800px;
    border-collapse: collapse;
    margin-bottom: 15px;
}

.pb-dashboard table thead th {
    padding-top: 0.6em;
    font-weight: bold;
}

.pb-dashboard table td, .pb-dashboard table tbody th {
    border-top: 1px solid #ddd;
    border-bottom: 1px solid #ddd;
    padding: 0.2em;
    font-weight: normal;
}

.pb-dashboard table .desc {
    min-width: 350px;
}

.pb-dashboard table td.value {
    min-width: 4em;
    text-align: right;
    padding-right: 5px;
    border-right: 0.6em solid transparent;
}

.pb-dashboard table .avg-row td.desc {
    font-style: italic;
}
// Client side code for the Problem Builder Dashboard XBlock
// So far, this code is only used to generate a downloadable report.
function PBDashboardBlock(runtime, element, initData) {
    "use strict";
    var reportTemplate = initData.reportTemplate;

    var generateDataUriFromImageURL = function(imgURL) {
        // Given the URL to an image, IF the image has already been cached by the browser,
        // returns a data: URI with the contents of the image (image will be converted to PNG)
        var img = new Image();
        img.src = imgURL;
        if (!img.complete)
            return imgURL;

        // Create an in-memory canvas from which we can extract a data URL:
        var canvas = document.createElement("canvas");
        canvas.width = img.naturalWidth;
        canvas.height = img.naturalHeight;
        // Draw the image onto our temporary canvas:
        canvas.getContext('2d').drawImage(img, 0, 0);
        return canvas.toDataURL("image/png");
    };

    var unicodeStringToBase64 = function(str) {
        // Convert string to base64. A bit weird in order to support unicode, per
        // https://developer.mozilla.org/en-US/docs/Web/API/WindowBase64/btoa
        return window.btoa(unescape(encodeURIComponent(str)));
    };

    var downloadReport = function(ev) {
        // Download Report:
        // Change the URL to a data: URI before continuing with the click event.
        if ($(this).attr('href').charAt(0) == '#') {
            var $report = $('.dashboard-report', element).clone();
            // Convert all images in $report to data URIs:
            $report.find('image').each(function() {
                var origURL = $(this).attr('xlink:href');
                $(this).attr('xlink:href', generateDataUriFromImageURL(origURL));
            });
            // Take the resulting HTML and put it into the template we have:
            var wrapperHTML = reportTemplate.replace('REPORT_GOES_HERE', $report.html());
            //console.log(wrapperHTML);
            var dataURI = "data:text/html;base64," + unicodeStringToBase64(wrapperHTML);
            $(this).attr('href', dataURI);
        }
    };

    var $downloadLink = $('.report-download-link', element);
    $downloadLink.on('click', downloadReport);
}
@@ -81,7 +81,7 @@ class QuestionnaireAbstractBlock(StudioEditableXBlockMixin, StudioContainerXBloc
        scope=Scope.content,
        enforce_type=True
    )
    editable_fields = ('question', 'message', 'weight', 'display_name', 'show_title')
    has_children = True

    def _(self, text):
@@ -113,12 +113,6 @@ class QuestionnaireAbstractBlock(StudioEditableXBlockMixin, StudioContainerXBloc
        return block

    @property
    def display_name_with_default(self):
        if not self.lonely_step:
            return self._(u"Question {number}").format(number=self.step_number)
        return self._(u"Question")

    def student_view(self, context=None):
        name = getattr(self, "unmixed_class", self.__class__).__name__
@@ -127,6 +121,7 @@ class QuestionnaireAbstractBlock(StudioEditableXBlockMixin, StudioContainerXBloc
        context = context or {}
        context['self'] = self
        context['custom_choices'] = self.custom_choices
        context['hide_header'] = context.get('hide_header', False) or not self.show_title
        fragment = Fragment(loader.render_template(template_path, context))

        # If we use local_resource_url(self, ...) the runtime may insert many identical copies
......
@@ -19,9 +19,15 @@
#
from lazy import lazy

from xblock.fields import String, Boolean, Scope
from xblockutils.helpers import child_isinstance


# Make '_' a no-op so we can scrape strings
def _(text):
    return text


def _normalize_id(key):
    """
    Helper method to normalize a key to avoid issues where some keys have version/branch and others don't.
@@ -49,10 +55,26 @@ class StepParentMixin(object):

class StepMixin(object):
    """
    An XBlock mixin for a child block that is a "Step".

    A step is a question that the user can answer (as opposed to a read-only child).
    """
    has_author_view = True

    # Fields:
    display_name = String(
        display_name=_("Question title"),
        help=_('Leave blank to use the default ("Question 1", "Question 2", etc.)'),
        default="",  # Blank will use 'Question x' - see display_name_with_default
        scope=Scope.content
    )
    show_title = Boolean(
        display_name=_("Show title"),
        help=_("Display the title?"),
        default=True,
        scope=Scope.content
    )

    @lazy
    def step_number(self):
        return list(self.get_parent().steps).index(_normalize_id(self.scope_ids.usage_id)) + 1
@@ -63,6 +85,15 @@ class StepMixin(object):
            raise ValueError("Step's parent should contain Step", self, self.get_parent().steps)
        return len(self.get_parent().steps) == 1

    @property
    def display_name_with_default(self):
        """ Get the title/display_name of this question. """
        if self.display_name:
            return self.display_name
        if not self.lonely_step:
            return self._(u"Question {number}").format(number=self.step_number)
        return self._(u"Question")

    def author_view(self, context):
        context = context or {}
        context['hide_header'] = True
......
# -*- coding: utf-8 -*-
#
# Copyright (c) 2014-2015 Harvard, edX & OpenCraft
#
# This software's license gives you freedom; you can copy, convey,
# propagate, redistribute and/or modify this program under the terms of
# the GNU Affero General Public License (AGPL) as published by the Free
# Software Foundation (FSF), either version 3 of the License, or (at your
# option) any later version of the AGPL published by the FSF.
#
# This program is distributed in the hope that it will be useful, but
# WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU Affero
# General Public License for more details.
#
# You should have received a copy of the GNU Affero General Public License
# along with this program in a file in the toplevel directory called
# "AGPLv3". If not, see <http://www.gnu.org/licenses/>.
#
"""
Integrations between these XBlocks and the edX Submissions API
"""
try:
    from submissions import api as sub_api
except ImportError:
    sub_api = None  # We are probably in the workbench. Don't use the submissions API
class SubmittingXBlockMixin(object):
    """ Simplifies use of the submissions API by an XBlock """

    @property
    def student_item_key(self):
        """ Get the student_item_dict required for the submissions API """
        assert sub_api is not None
        location = self.location.replace(branch=None, version=None)  # Standardize the key in case it isn't already
        return dict(
            student_id=self.runtime.anonymous_student_id,
            course_id=unicode(location.course_key),
            item_id=unicode(location),
            item_type=self.scope_ids.block_type,
        )
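As a usage illustration, here is the pattern the MCQ block above follows when recording an answer (a sketch; `record_answer` is a hypothetical method standing in for the block's real submit handler):

```python
class ExampleQuestionBlock(SubmittingXBlockMixin, object):
    """ Hypothetical block sketch; see MCQBlock's submit handler for the real usage. """
    def record_answer(self, submission):
        if sub_api:
            # Also send to the submissions API, keyed per student/course/item:
            sub_api.create_submission(self.student_item_key, submission)
```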
{% load i18n %}
<div class="pb-dashboard">
    <div class="dashboard-report">
        <h2>{{display_name}}</h2>
        {% if visual_repr %}
            <div class="pb-dashboard-visual">
                <svg width="{{visual_repr.width}}" height="{{visual_repr.height}}" role="img" aria-labelledby="pb-dashboard-vr-title-{{visual_repr.unique_id}} pb-dashboard-vr-desc-{{visual_repr.unique_id}}">
                    <title id="pb-dashboard-vr-title-{{visual_repr.unique_id}}">{{visual_repr.title}}</title>
                    <desc id="pb-dashboard-vr-desc-{{visual_repr.unique_id}}">{{visual_repr.desc}}</desc>
                    <!-- Filter definitions -->
                    {% for layer in visual_repr.layers %}
                        {% if layer.color %}
                            <filter id="{{layer.id}}">
                                <feFlood flood-color="{{layer.color}}" result="flood" />
                                <feBlend in="flood" in2="SourceGraphic" mode="multiply" />
                            </filter>
                            <mask id="{{layer.id}}-mask" maskUnits="userSpaceOnUse" x="0" y="0" width="100%" height="100%">
                                <image xlink:href="{{layer.url}}" x="0" y="0" height="100%" width="100%" />
                            </mask>
                        {% endif %}
                    {% endfor %}
                    <!-- Layer images -->
                    {% for layer in visual_repr.layers %}
                        {% if layer.color %}
                            <rect x="0" y="0" height="100%" width="100%" fill="{{layer.color}}" mask="url(#{{layer.id}}-mask)" />
                        {% else %}
                            <image xlink:href="{{layer.url}}" x="0" y="0" height="100%" width="100%" />
                        {% endif %}
                    {% endfor %}
                </svg>
            </div>
        {% endif %}
        {% for block in blocks %}
            <table>
                <thead>
                    <th colspan="2">{{ block.display_name }}</th>
                </thead>
                <tbody>
                    {% for mcq in block.mcqs %}
                        <tr>
                            <th class="desc">{{ mcq.display_name }}</th>
                            <td class="value" {% if mcq.color %}style="border-right-color: {{mcq.color}};"{% endif %}>
                                {% if mcq.value %}{{ mcq.value }}{% endif %}
                            </td>
                        </tr>
                    {% endfor %}
                    {% if block.has_average %}
                        <tr class="avg-row">
                            <th class="desc">{% trans "Average" %}</th>
                            <td class="value" {% if block.average_color %}style="border-right-color: {{block.average_color}};"{% endif %}>
                                {{ block.average|floatformat }}
                            </td>
                        </tr>
                    {% endif %}
                </tbody>
            </table>
        {% endfor %}
    </div>
    {% if blocks %}
        <br>
        <p><a class="report-download-link" href="#report_download" download="report.html">{% trans "Download report" %}</a></p>
    {% endif %}
</div>
{% load i18n %}
<!DOCTYPE html>
<html lang="en">
<head>
    <meta charset="utf-8">
    <meta name="viewport" content="width=device-width, initial-scale=1">
    <title>{{ title }}</title>
    <style>
        body {
            font-family: 'Open Sans', 'Helvetica Neue', Helvetica, Arial, sans-serif;
        }
        {{css}}
    </style>
</head>
<body>
    <div class="pb-dashboard">
        <div class="identification">
            {% if student_name %}{% trans "Student" %}: {{student_name}}<br>{% endif %}
            {% if course_name %}{% trans "Course" %}: {{course_name}}<br>{% endif %}
            {% trans "Date" %}: {% now "DATE_FORMAT" %}<br>
        </div>
        REPORT_GOES_HERE
    </div>
</body>
</html>
# -*- coding: utf-8 -*-
#
# Copyright (c) 2014-2015 Harvard, edX & OpenCraft
#
# This software's license gives you freedom; you can copy, convey,
# propagate, redistribute and/or modify this program under the terms of
# the GNU Affero General Public License (AGPL) as published by the Free
# Software Foundation (FSF), either version 3 of the License, or (at your
# option) any later version of the AGPL published by the FSF.
#
# This program is distributed in the hope that it will be useful, but
# WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU Affero
# General Public License for more details.
#
# You should have received a copy of the GNU Affero General Public License
# along with this program in a file in the toplevel directory called
# "AGPLv3". If not, see <http://www.gnu.org/licenses/>.
#
from mock import Mock, patch
from xblockutils.base_test import SeleniumXBlockTest
class MockSubmissionsAPI(object):
    """
    Mock the submissions API, since it's not available in the test environment.
    """
    def __init__(self):
        self.submissions = {}

    def dict_to_key(self, dict_key):
        return (dict_key['student_id'], dict_key['course_id'], dict_key['item_id'], dict_key['item_type'])

    def create_submission(self, dict_key, submission):
        record = dict(
            student_item=dict_key,
            attempt_number=Mock(),
            submitted_at=Mock(),
            created_at=Mock(),
            answer=submission,
        )
        self.submissions[self.dict_to_key(dict_key)] = record
        return record

    def get_submissions(self, key, limit=1):
        assert limit == 1
        key = self.dict_to_key(key)
        if key in self.submissions:
            return [self.submissions[key]]
        return []
class TestDashboardBlock(SeleniumXBlockTest):
    """
    Test the Student View of a dashboard XBlock linked to some problem builder blocks
    """
    def setUp(self):
        super(TestDashboardBlock, self).setUp()
        # Set up our scenario:
        self.set_scenario_xml("""
            <vertical_demo>
                <problem-builder display_name="Step 1">
                    <pb-mcq display_name="1.1 First question" question="Which option?" correct_choices='1,2,3,4'>
                        <pb-choice value="1">Option 1</pb-choice>
                        <pb-choice value="2">Option 2</pb-choice>
                        <pb-choice value="3">Option 3</pb-choice>
                        <pb-choice value="4">Option 4</pb-choice>
                    </pb-mcq>
                    <pb-mcq display_name="1.2 Second question" question="Which option?" correct_choices='1,2,3,4'>
                        <pb-choice value="1">Option 1</pb-choice>
                        <pb-choice value="2">Option 2</pb-choice>
                        <pb-choice value="3">Option 3</pb-choice>
                        <pb-choice value="4">Option 4</pb-choice>
                    </pb-mcq>
                    <pb-mcq display_name="1.3 Third question" question="Which option?" correct_choices='1,2,3,4'>
                        <pb-choice value="1">Option 1</pb-choice>
                        <pb-choice value="2">Option 2</pb-choice>
                        <pb-choice value="3">Option 3</pb-choice>
                        <pb-choice value="4">Option 4</pb-choice>
                    </pb-mcq>
                    <html_demo> This message here should be ignored. </html_demo>
                </problem-builder>
                <problem-builder display_name="Step 2">
                    <pb-mcq display_name="2.1 First question" question="Which option?" correct_choices='1,2,3,4'>
                        <pb-choice value="4">Option 4</pb-choice>
                        <pb-choice value="5">Option 5</pb-choice>
                        <pb-choice value="6">Option 6</pb-choice>
                    </pb-mcq>
                    <pb-mcq display_name="2.2 Second question" question="Which option?" correct_choices='1,2,3,4'>
                        <pb-choice value="1">Option 1</pb-choice>
                        <pb-choice value="2">Option 2</pb-choice>
                        <pb-choice value="3">Option 3</pb-choice>
                        <pb-choice value="4">Option 4</pb-choice>
                    </pb-mcq>
                    <pb-mcq display_name="2.3 Third question" question="Which option?" correct_choices='1,2,3,4'>
                        <pb-choice value="1">Option 1</pb-choice>
                        <pb-choice value="2">Option 2</pb-choice>
                        <pb-choice value="3">Option 3</pb-choice>
                        <pb-choice value="4">Option 4</pb-choice>
                    </pb-mcq>
                </problem-builder>
                <problem-builder display_name="Step 3">
                    <pb-mcq display_name="3.1 First question" question="Which option?" correct_choices='1,2,3,4'>
                        <pb-choice value="1">Option 1</pb-choice>
                        <pb-choice value="2">Option 2</pb-choice>
                        <pb-choice value="3">Option 3</pb-choice>
                        <pb-choice value="4">Option 4</pb-choice>
                    </pb-mcq>
                    <pb-mcq display_name="3.2 Question with non-numeric values"
                            question="Which option?" correct_choices='1,2,3,4'>
                        <pb-choice value="A">Option A</pb-choice>
                        <pb-choice value="B">Option B</pb-choice>
                        <pb-choice value="C">Option C</pb-choice>
                    </pb-mcq>
                </problem-builder>
                <pb-dashboard mentoring_ids='["dummy-value"]'>
                </pb-dashboard>
            </vertical_demo>
        """)

        # Apply a whole bunch of patches that are needed in lieu of the LMS/CMS runtime and edx-submissions:
        def get_mentoring_blocks(dashboard_block, mentoring_ids, ignore_errors=True):
            return [dashboard_block.runtime.get_block(key) for key in dashboard_block.get_parent().children[:-1]]

        mock_submissions_api = MockSubmissionsAPI()
        patches = (
            (
                "problem_builder.dashboard.DashboardBlock._get_submission_key",
                lambda _, child_id: dict(student_id="student", course_id="course", item_id=child_id, item_type="pb-mcq")
            ),
            (
                "problem_builder.sub_api.SubmittingXBlockMixin.student_item_key",
                property(lambda block: dict(
                    student_id="student", course_id="course", item_id=block.scope_ids.usage_id, item_type="pb-mcq"
                ))
            ),
            ("problem_builder.dashboard.DashboardBlock.get_mentoring_blocks", get_mentoring_blocks),
            ("problem_builder.dashboard.sub_api", mock_submissions_api),
            ("problem_builder.mcq.sub_api", mock_submissions_api)
        )
        for p in patches:
            patcher = patch(*p)
            patcher.start()
            self.addCleanup(patcher.stop)

        # All the patches are installed; now we can proceed with using the XBlocks for tests:
        self.go_to_view("student_view")
        self.vertical = self.load_root_xblock()

    def test_empty_dashboard(self):
        """
        Test that when the student has not submitted any question answers, we still see
        the dashboard, and it lists all the MCQ questions in the way we expect.
        """
        dashboard = self.browser.find_element_by_css_selector('.pb-dashboard')
        step_headers = dashboard.find_elements_by_css_selector('thead')
        self.assertEqual(len(step_headers), 3)
        self.assertEqual([hdr.text for hdr in step_headers], ["Step 1", "Step 2", "Step 3"])
        steps = dashboard.find_elements_by_css_selector('tbody')
        self.assertEqual(len(steps), 3)
        for step in steps:
            mcq_rows = step.find_elements_by_css_selector('tr')
            self.assertTrue(2 <= len(mcq_rows) <= 3)
            for mcq in mcq_rows:
                value = mcq.find_element_by_css_selector('td:last-child')
                self.assertEqual(value.text, '')

    def test_dashboard(self):
        """
        Submit an answer to each MCQ, then check that the dashboard reflects those answers.
        """
        pbs = self.browser.find_elements_by_css_selector('.mentoring')
        for pb in pbs:
            mcqs = pb.find_elements_by_css_selector('fieldset.choices')
            for idx, mcq in enumerate(mcqs):
                choices = mcq.find_elements_by_css_selector('.choices .choice label')
                choices[idx].click()
            submit = pb.find_element_by_css_selector('.submit input.input-main')
            submit.click()
            self.wait_until_disabled(submit)

        # Reload the page:
        self.go_to_view("student_view")
        dashboard = self.browser.find_element_by_css_selector('.pb-dashboard')
        steps = dashboard.find_elements_by_css_selector('tbody')
        self.assertEqual(len(steps), 3)
        for step_num, step in enumerate(steps):
            mcq_rows = step.find_elements_by_css_selector('tr:not(.avg-row)')
            self.assertTrue(2 <= len(mcq_rows) <= 3)
            for mcq in mcq_rows:
                value = mcq.find_element_by_css_selector('td.value')
                self.assertIn(value.text, ('1', '2', '3', '4', 'B'))
            # Check the average:
            avg_row = step.find_element_by_css_selector('tr.avg-row')
            left_col = avg_row.find_element_by_css_selector('.desc')
            self.assertEqual(left_col.text, "Average")
            right_col = avg_row.find_element_by_css_selector('.value')
            expected_average = {0: "2", 1: "3", 2: "1"}[step_num]
            self.assertEqual(right_col.text, expected_average)
# -*- coding: utf-8 -*-
#
# Copyright (c) 2014-2015 Harvard, edX & OpenCraft
#
# This software's license gives you freedom; you can copy, convey,
# propagate, redistribute and/or modify this program under the terms of
# the GNU Affero General Public License (AGPL) as published by the Free
# Software Foundation (FSF), either version 3 of the License, or (at your
# option) any later version of the AGPL published by the FSF.
#
# This program is distributed in the hope that it will be useful, but
# WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU Affero
# General Public License for more details.
#
# You should have received a copy of the GNU Affero General Public License
# along with this program in a file in the toplevel directory called
# "AGPLv3". If not, see <http://www.gnu.org/licenses/>.
#
"""
Test that the various title/display_name options for Answer and MCQ/MRQ/Ratings work.
"""
# Imports ###########################################################
from mock import patch
from xblockutils.base_test import SeleniumXBlockTest
# Classes ###########################################################
class StepTitlesTest(SeleniumXBlockTest):
    """
    Test that the various title/display_name options for Answer and MCQ/MRQ/Ratings work.
    """

    test_parameters = (
        # display_name, show_title?, expected_title: (None means default value)
        ("Custom Title", None, "Custom Title",),
        ("Custom Title", True, "Custom Title",),
        ("Custom Title", False, None),
        ("", None, "Question"),
        ("", True, "Question"),
        ("", False, None),
    )

    mcq_template = """
        <problem-builder mode="{mode}">
            <pb-mcq name="mcq_1_1" question="Who was your favorite character?"
                    correct_choices="gaius,adama,starbuck,roslin,six,lee"
                    {display_name_attr} {show_title_attr}
            >
                <pb-choice value="gaius">Gaius Baltar</pb-choice>
                <pb-choice value="adama">Admiral William Adama</pb-choice>
                <pb-choice value="starbuck">Starbuck</pb-choice>
                <pb-choice value="roslin">Laura Roslin</pb-choice>
                <pb-choice value="six">Number Six</pb-choice>
                <pb-choice value="lee">Lee Adama</pb-choice>
            </pb-mcq>
        </problem-builder>
    """

    mrq_template = """
        <problem-builder mode="{mode}">
            <pb-mrq name="mrq_1_1" question="What makes a great MRQ?"
                    ignored_choices="1,2,3"
                    {display_name_attr} {show_title_attr}
            >
                <pb-choice value="1">Lots of choices</pb-choice>
                <pb-choice value="2">Funny choices</pb-choice>
                <pb-choice value="3">Not sure</pb-choice>
            </pb-mrq>
        </problem-builder>
    """

    rating_template = """
        <problem-builder mode="{mode}">
            <pb-rating name="rating_1_1" question="How do you rate Battlestar Galactica?"
                    correct_choices="5,6"
                    {display_name_attr} {show_title_attr}
            >
                <pb-choice value="6">More than 5 stars</pb-choice>
            </pb-rating>
        </problem-builder>
    """

    long_answer_template = """
        <problem-builder mode="{mode}">
            <pb-answer name="answer_1_1" question="What did you think of the ending?"
                    {display_name_attr} {show_title_attr} />
        </problem-builder>
    """

    def setUp(self):
        super(StepTitlesTest, self).setUp()
        # Disable asides for this test, since the acid aside seems to cause database errors
        # when we test multiple scenarios in one test method.
        patcher = patch(
            'workbench.runtime.WorkbenchRuntime.applicable_aside_types',
            lambda self, block: [], create=True
        )
        patcher.start()
        self.addCleanup(patcher.stop)

    def test_all_the_things(self):
        """ Test various permutations of our problem-builder components and title options. """
        # We use a loop within the test rather than DDT, because this is WAY faster
        # since we can bypass the Selenium set-up and teardown
        for display_name, show_title, expected_title in self.test_parameters:
            for mode in ("standard", "assessment"):
                for qtype in ("mcq", "mrq", "rating", "long_answer"):
                    template = getattr(self, qtype + "_template")
                    xml = template.format(
                        mode=mode,
                        display_name_attr='display_name="{}"'.format(display_name) if display_name is not None else "",
                        show_title_attr='show_title="{}"'.format(show_title) if show_title is not None else "",
                    )
                    self.set_scenario_xml(xml)
                    pb_element = self.go_to_view()
                    if expected_title:
                        h3 = pb_element.find_element_by_css_selector('h3')
                        self.assertEqual(h3.text, expected_title)
                    else:
                        # No <h3> element should be present:
                        all_h3s = pb_element.find_elements_by_css_selector('h3')
                        self.assertEqual(len(all_h3s), 0)
"""
Unit tests for DashboardVisualData
"""
from problem_builder.dashboard_visual import DashboardVisualData
from mock import MagicMock, Mock
import unittest
from xblock.field_data import DictFieldData
class TestDashboardVisualData(unittest.TestCase):
    """
    Test DashboardVisualData with some mocked data
    """
    def test_construct_data(self):
        """
        Test parsing of data and creation of SVG filter data.
        """
        blocks = [
            {
                'display_name': 'Block 1',
                'mcqs': [],
                'has_average': True,
                'average': 0,
            },
            {
                'display_name': 'Block 2',
                'mcqs': [],
                'has_average': True,
                'average': 1.3,
            },
            {
                'display_name': 'Block 3',
                'mcqs': [],
                'has_average': True,
                'average': 30.8,
            },
        ]
        rules = {
            "images": [
                "step1.png",
                "step2.png",
                "step3.png",
            ],
            "background": "background.png",
            "overlay": "overlay.png",
            "width": "500",
            "height": "500"
        }

        def color_for_value(value):
            """ Mock color_for_value """
            return "red" if value > 1 else None

        data = DashboardVisualData(blocks, rules, color_for_value, "Visual Repr", "Description here")
        self.assertEqual(len(data.layers), 5)
        self.assertEqual(data.layers[0]["url"], "background.png")
        self.assertEqual(data.layers[4]["url"], "overlay.png")
        self.assertEqual(data.width, 500)
        self.assertEqual(data.height, 500)
        # Check the three middle layers built from the average values:
        self.assertEqual(data.layers[1]["url"], "step1.png")
        self.assertEqual(data.layers[1].get("color"), None)
        self.assertEqual(data.layers[2]["url"], "step2.png")
        self.assertEqual(data.layers[2]["color"], "red")
        self.assertEqual(data.layers[3]["url"], "step3.png")
        self.assertEqual(data.layers[3]["color"], "red")
@@ -52,6 +52,8 @@ BLOCKS = [
    'pb-message = problem_builder:MentoringMessageBlock',
    'pb-tip = problem_builder:TipBlock',
    'pb-choice = problem_builder:ChoiceBlock',
    'pb-dashboard = problem_builder:DashboardBlock',

    # Deprecated. You can temporarily uncomment and run 'python setup.py develop' if you have these blocks
    # installed from testing mentoring v2 and need to get past an error message.
    #'mentoring = problem_builder:MentoringBlock',  # Deprecated alias for problem-builder
......