Commit dde3c4ce by Tim Krones

Merge pull request #65 from open-craft/review-step

Step Builder: Review step and assessment functionality
parents be0c46aa 1ff860a1
Problem Builder and Step Builder
--------------------------------
[![Build Status](https://travis-ci.org/open-craft/problem-builder.svg?branch=master)](https://travis-ci.org/open-craft/problem-builder)
This repository provides two XBlocks: Problem Builder and Step Builder.

Both blocks allow you to create questions of various types. They can be
used to simulate the workflow of real-life mentoring within an edX
course.

Supported features include:
* **Free-form answers** (textarea) which can be shared across
  different XBlock instances (for example, to allow a student to
  review and edit an answer they gave before).
* **Self-assessment MCQs** (multiple choice questions), to display
  predetermined feedback to a student based on their choices in the
  self-assessment. Supports rating scales and arbitrary answers.
* **MRQs (Multiple Response Questions)**, a type of multiple choice
  question that allows the student to select more than one choice.
* **Answer recaps** that display a read-only summary of a user's
answer to a free-form question asked earlier in the course.
* **Progression tracking**, to require that the student has completed
  previous steps before moving on.
* **Dashboards**, for displaying a summary of the student's answers
to multiple choice questions. [Details](doc/Dashboard.md)
The following screenshot shows an example of a Problem Builder block
containing a free-form question, two MCQs and one MRQ:
![Problem Builder Example](doc/img/mentoring-example.png)
Installation
------------
Install the requirements into the Python virtual environment of your
`edx-platform` installation by running the following command from the
root folder:

    $ pip install -r requirements.txt
Enabling in Studio
------------------
You can enable the Problem Builder and Step Builder XBlocks in Studio
by modifying the advanced settings for your course:
1. From the main page of a specific course, navigate to **Settings** ->
**Advanced Settings** from the top menu.
2. Find the **Advanced Module List** setting.
3. To enable Problem Builder for your course, add `"problem-builder"`
to the modules listed there.
4. To enable Step Builder for your course, add `"step-builder"` to the
modules listed there.
5. Click the **Save changes** button.
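For example, with both blocks enabled (and no other advanced modules
in use), the **Advanced Module List** value would be the JSON list
`["problem-builder", "step-builder"]`.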
Note that it is perfectly fine to enable both Problem Builder and Step
Builder for your course -- the blocks do not interfere with each other.
Usage
-----
Problem Builder Usage
=====================
When you add the **Problem Builder** component to a course in Studio,
the built-in editing tools guide you through the process of
configuring the block and adding individual questions.
### Problem Builder modes
There are two modes available:

* **standard**: Traditional mentoring. All questions are displayed on the
page and submitted at the same time. The students get some tips and
feedback about their answers. This is the default mode.
* **assessment**: Questions are displayed and submitted one by one. The
students don't get tips or feedback, but only know if their answer was
correct. Assessment mode comes with a default `max_attempts` of `2`.
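The mode is chosen via the block's settings in Studio. For reference, a
minimal OLX sketch of a Problem Builder block in assessment mode might
look like the following; the `mode` and `max_attempts` attribute names
match fields defined in this repository, but the exact OLX shape shown
here is an assumption:

```xml
<problem-builder display_name="Quiz" mode="assessment" max_attempts="2">
  <!-- question blocks go here -->
</problem-builder>
```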
**Note that assessment mode is deprecated**: In the future, Problem
Builder will only provide functionality that is currently part of
standard mode. Assessment mode will remain functional for a while to
ensure backward compatibility with courses that are currently using
it. If you want to use assessment functionality for a new course,
please use the Step Builder XBlock (described below).
Below are some LMS screenshots of a Problem Builder block in assessment mode.
Question before submitting an answer:
Score review and the "Try Again" button:
![Assessment Step 4](img/assessment-4.png)
Step Builder Usage
==================
The Step Builder XBlock replaces the assessment mode functionality of
the Problem Builder XBlock while allowing you to group questions into
explicit steps.
Instead of adding questions to Step Builder itself, you'll need to add
one or more **Mentoring Step** blocks to Step Builder. You can then
add one or more questions to each step. This allows you to group
questions into logical units (without being limited to showing only a
single question per step). As students progress through the block,
Step Builder will display one step at a time. All questions belonging
to a step need to be completed before the step can be submitted.
In addition to regular steps, Step Builder also provides a **Review
Step** block which allows students to review their performance and
jump back to individual steps to review their answers (if the
**Extended feedback** setting is on and the maximum number of attempts
has been reached). Note that only one such block is allowed per
instance.
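As a rough OLX sketch of this structure (the `step-builder`, `sb-step`,
and `sb-review-step` category names appear elsewhere in this commit;
the overall shape is an assumption, since Step Builder is normally
assembled through the Studio editor):

```xml
<step-builder display_name="Step Builder" max_attempts="3" extended_feedback="true">
  <sb-step display_name="First step">
    <!-- one or more question blocks -->
  </sb-step>
  <sb-step display_name="Second step">
    <!-- more questions -->
  </sb-step>
  <sb-review-step/>
</step-builder>
```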
**Screenshots: Step**
Step with multiple questions (before submitting it):
![Step with multiple questions, before submit](img/step-with-multiple-questions-before-submit.png)
Step with multiple questions (after submitting it):
![Step with multiple questions, after submit](img/step-with-multiple-questions-after-submit.png)
As indicated by the orange check mark, this step is *partially*
correct (i.e., some answers are correct and some are incorrect or
partially correct).
**Screenshots: Review Step**
Unlimited attempts available:
![Unlimited attempts available](img/review-step-unlimited-attempts-available.png)
Limited attempts, some attempts remaining:
![Some attempts remaining](img/review-step-some-attempts-remaining.png)
Limited attempts, no attempts remaining, extended feedback off:
![No attempts remaining, extended feedback off](img/review-step-no-attempts-remaining-extended-feedback-off.png)
Limited attempts, no attempts remaining, extended feedback on:
![No attempts remaining, extended feedback on](img/review-step-no-attempts-remaining-extended-feedback-on.png)
**Screenshots: Step-level feedback**
Reviewing performance for a single step:
![Reviewing performance for single step](img/reviewing-performance-for-single-step.png)
Question Types
==============
### Free-form Questions
Free-form questions are represented by a **Long Answer** component.
Example screenshot before answering the question:
Screenshot after answering the question:
![Answer Complete](img/answer-2.png)
You can add "Long Answer Recap" components to problem builder blocks later on
in the course to provide a read-only view of any answer that the student
entered earlier.
You can add **Long Answer Recap** components to problem builder blocks
later on in the course to provide a read-only view of any answer that
the student entered earlier.
The read-only answer is rendered as a quote in the LMS:
![Answer Read-Only](img/answer-3.png)
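As a hedged OLX sketch of this pairing (the `pb-answer` and
`pb-answer-recap` category names correspond to the `AnswerBlock` and
`AnswerRecapBlock` classes imported later in this commit; the
`name`-based linkage shown here is an assumption):

```xml
<problem-builder url_name="unit1">
  <pb-answer name="goals" question="What are your goals for this course?"/>
</problem-builder>

<!-- Later on in the course: -->
<problem-builder url_name="unit9">
  <pb-answer-recap name="goals"/>
</problem-builder>
```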
### Multiple Choice Questions (MCQs)
Multiple Choice Questions can be added to a Problem Builder component
and have the following configurable options:
* **Question** - The question to ask the student.
* **Message** - A feedback message to display to the student after they
  have made their choice.
* **Weight** - The weight is used when computing the total grade/score of
  the Problem Builder block. The larger the weight, the more influence this
  question will have on the grade. A value of zero means this question
  has no influence on the grade (float, defaults to `1`).
* **Correct Choice[s]** - Specify which choice[s] are considered correct. If
  a student selects a choice that is not indicated as correct here,
  the student will get the question wrong.
Using the Studio editor, you can add **Custom Choice** blocks to an
MCQ. Each Custom Choice represents one of the options from which
students will choose their answer.
You can also add "Tip" entries. Each "Tip" must be configured to link
it to one or more of the choices. If the student chooses a choice, the
You can also add **Tip** entries. Each Tip must be configured to link
it to one or more of the choices. If the student selects a choice, the
tip will be displayed.
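Putting these pieces together, a hypothetical OLX sketch of an MCQ with
two choices and one tip might look like this (category and attribute
names are assumptions based on this repository's block classes):

```xml
<pb-mcq name="mcq_1" question="Do you like this MCQ?" correct_choices='["yes"]'>
  <pb-choice value="yes">Yes</pb-choice>
  <pb-choice value="no">No</pb-choice>
  <pb-tip values='["no"]'>You should probably give it another try.</pb-tip>
</pb-mcq>
```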
**Screenshots**
Before attempting to answer the questions:
![MCQ Initial](img/mcq-1.png)
After successfully completing the questions:
![MCQ Success](img/mcq-3.png)
#### Rating Questions
When constructing questions where the student rates some topic on a
scale from `1` to `5` (e.g. a Likert scale), you can use the **Rating**
question component. The `Low` and `High` settings specify the text
shown next to the lowest and highest valued choices.
Rating questions are a specialized type of MCQ, and the same
instructions apply. You can also still add **Custom Choice** components
if you want additional choices to be available, such as "I don't know".
### Self-assessment Multiple Response Questions (MRQs)
Multiple Response Questions are set up similarly to MCQs. The answers
are rendered as checkboxes. Unlike MCQs, where only a single answer can
be selected, MRQs allow the student to select multiple answers at the
same time.

MRQs have these configurable settings:
* **Question** - The question to ask the student.
* **Required Choices** - For any choices selected here, if the student
  does *not* select that choice, they will lose marks.
* **Ignored Choices** - For any choices selected here, the student will
  always be considered correct whether they select this choice or not.
* **Message** - A feedback message to display to the student after they
  have made their choice.
* **Weight** - The weight is used when computing the total grade/score of
  the Problem Builder block. The larger the weight, the more influence this
  question will have on the grade. A value of zero means this question
  has no influence on the grade (float, defaults to `1`).
* **Hide Result** - If set to `True`, the feedback icons next to each
  choice will not be displayed (this is `False` by default).
The "Custom Choice" and "Tip" components work the same way as they
The **Custom Choice** and **Tip** components work the same way as they
do when used with MCQs (see above).
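A hypothetical OLX sketch tying these settings together (the choice
texts mirror the ones used in this commit's integration tests; category
and attribute names are assumptions):

```xml
<pb-mrq name="mrq_1" question="What do you like in this MRQ?"
        required_choices='["elegance", "beauty", "gracefulness"]'
        ignored_choices='["bugs"]'>
  <pb-choice value="elegance">Its elegance</pb-choice>
  <pb-choice value="beauty">Its beauty</pb-choice>
  <pb-choice value="gracefulness">Its gracefulness</pb-choice>
  <pb-choice value="bugs">Its bugs</pb-choice>
</pb-mrq>
```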
**Screenshots**
Before attempting to answer the questions:
![MRQ Initial](img/mrq-1.png)
After successfully completing the questions:
![MRQ Success](img/mrq-4.png)
Other Components
================
### Tables
Tables allow you to present answers to multiple free-form questions in
a concise way. Once you create an **Answer Recap Table** inside a
Mentoring component in Studio, you will be able to add columns to the
table. Each column has an optional **Header** setting that you can use
to add a header to that column. Each column can contain one or more
**Answer Recap** elements, as well as HTML components.
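A hypothetical OLX sketch of a two-column table recapping two earlier
answers (category names are assumptions patterned on the other `pb-*`
blocks in this repository):

```xml
<pb-table type="goals">
  <pb-column header="Your goals">
    <pb-answer-recap name="goals"/>
  </pb-column>
  <pb-column header="Obstacles">
    <pb-answer-recap name="obstacles"/>
  </pb-column>
</pb-table>
```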
Screenshot:
![Table Screenshot](img/mentoring-table.png)
### "Dashboard" Self-Assessment Summary Block
[Instructions for using the "Dashboard" Self-Assessment Summary Block](Dashboard.md)
Configuration Options
=====================
### Maximum Attempts
You can limit the number of times students are allowed to complete a
Mentoring component by setting the **Max. attempts allowed** option.
Before submitting an answer for the first time:
After submitting a wrong answer two times:
![Max Attempts Reached](img/max-attempts-reached.png)
### Custom Window Size for Tip Popups
You can specify **Width** and **Height** attributes of any Tip
component to customize the popup window size. The value of those
attributes should be valid CSS (e.g. `50px`).
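For example, a Tip that opens in a popup half the page width and 200px
tall might be configured like this (an OLX sketch; attribute names are
assumed from the Tip component's **Width** and **Height** settings):

```xml
<pb-tip values='["no"]' width="50%" height="200px">
  Remember to review the relevant unit before answering again.
</pb-tip>
```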
from .mentoring import MentoringBlock, MentoringWithExplicitStepsBlock
from .step import MentoringStepBlock, ReviewStepBlock
from .answer import AnswerBlock, AnswerRecapBlock
from .choice import ChoiceBlock
from .dashboard import DashboardBlock
# In class BaseMentoringBlock:
max_attempts = Integer(
display_name=_("Max. attempts allowed"),
help=_("Maximum number of times students are allowed to attempt the questions belonging to this block"),
default=0,
scope=Scope.content,
enforce_type=True
)
# User state
num_attempts = Integer(
        # Number of attempts the user has made on this block's questions
default=0,
scope=Scope.user_state,
enforce_type=True
)
has_children = True
@property
def review_tips_json(self):
return json.dumps(self.review_tips)
@property
def max_attempts_reached(self):
return self.max_attempts > 0 and self.num_attempts >= self.max_attempts
def get_message_content(self, message_type, or_default=False):
for child_id in self.children:
if child_isinstance(self, child_id, MentoringMessageBlock):
child = self.runtime.get_block(child_id)
if child.type == message_type:
content = child.content
if hasattr(self.runtime, 'replace_jump_to_id_urls'):
content = self.runtime.replace_jump_to_id_urls(content)
return content
if or_default:
# Return the default value since no custom message is set.
# Note the WYSIWYG editor usually wraps the .content HTML in a <p> tag so we do the same here.
return '<p>{}</p>'.format(MentoringMessageBlock.MESSAGE_TYPES[message_type]['default'])
def get_theme(self):
"""
Gets theme settings from settings service. Falls back to default (LMS) theme
        """
def feedback_dispatch(self, target_data, stringify):
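        # Return target_data (JSON-encoded if stringify is set) only when extended
        # feedback should be shown; otherwise fall through and return None implicitly.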
if self.show_extended_feedback():
if stringify:
return json.dumps(target_data)
else:
return target_data
def correct_json(self, stringify=True):
return self.feedback_dispatch(self.score.correct, stringify)
def incorrect_json(self, stringify=True):
return self.feedback_dispatch(self.score.incorrect, stringify)
def partial_json(self, stringify=True):
return self.feedback_dispatch(self.score.partially_correct, stringify)
@XBlock.json_handler
def view(self, data, suffix=''):
"""
class MentoringBlock(BaseMentoringBlock, StudioContainerXBlockMixin, StepParentMixin):
enforce_dependency = Boolean(
display_name=_("Enforce Dependency"),
help=_("Should the next step be the current block to complete?"),
display_name = String(
display_name=_("Title (Display name)"),
help=_("Title to display"),
default=_("Mentoring Questions"),
default=_("Problem Builder"),
scope=Scope.settings
)
feedback_label = String(
default=False,
scope=Scope.user_state
)
step = Integer(
# Keep track of the student assessment progress.
default=0,
@property
def score(self):
"""Compute the student score taking into account the weight of each step."""
        steps = self.steps
steps_map = {q.name: q for q in steps}
total_child_weight = sum(float(step.weight) for step in steps)
if total_child_weight == 0:
......@@ -341,7 +381,7 @@ class MentoringBlock(BaseMentoringBlock, StudioContainerXBlockMixin, StepParentM
self.migrate_fields()
# Validate self.step:
        num_steps = len(self.steps)
if self.step > num_steps:
self.step = num_steps
@property
def review_tips(self):
""" Get review tips, shown for wrong answers in assessment mode. """
        if not self.is_assessment or self.step != len(self.step_ids):
return [] # Review tips are only used in assessment mode, and only on the last step.
review_tips = []
status_cache = dict(self.student_results)
        for child in self.steps:
result = status_cache.get(child.name)
if result and result.get('status') != 'correct':
# The student got this wrong. Check if there is a review tip to show.
review_tips.append(tip_html)
return review_tips
def show_extended_feedback(self):
return self.extended_feedback and self.max_attempts_reached
@XBlock.json_handler
def get_results(self, queries, suffix=''):
"""
show_message = bool(self.student_results)
        # In standard mode, all children are visible simultaneously, so we need to collect responses from all of them
        for child in self.steps:
child_result = child.get_last_result()
results.append([child.name, child_result])
completed = completed and (child_result.get('status', None) == 'correct')
completed = True
choices = dict(self.student_results)
# Only one child should ever be of concern with this method.
        for child in self.steps:
if child.name and child.name in queries:
results = [child.name, child.get_results(choices[child.name])]
# Children may have their own definition of 'completed' which can vary from the general case
submit_results = []
previously_completed = self.completed
completed = True
        for child in self.steps:
if child.name and child.name in submissions:
submission = submissions[child.name]
child_result = child.submit(submission)
current_child = None
children = [self.runtime.get_block(child_id) for child_id in self.children]
children = [child for child in children if not isinstance(child, MentoringMessageBlock)]
# The following is faster than the self.step_ids property
steps = [child for child in children if isinstance(child, QuestionMixin)]
assessment_message = None
review_tips = []
'result': 'success'
}
def validate(self):
"""
Validates the state of this XBlock except for individual field values.
class MentoringWithExplicitStepsBlock(BaseMentoringBlock, StudioContainerWithNestedXBlocksMixin):
"""
An XBlock providing mentoring capabilities with explicit steps
"""
# Content
extended_feedback = Boolean(
display_name=_("Extended feedback"),
help=_("Show extended feedback when all attempts are used up?"),
default=False,
        scope=Scope.content
)
# Settings
display_name = String(
display_name=_("Title (Display name)"),
display_name=_("Title (display name)"),
help=_("Title to display"),
default=_("Mentoring Questions (with explicit steps)"),
default=_("Step Builder"),
scope=Scope.settings
)
enforce_type=True
)
    editable_fields = ('display_name', 'max_attempts', 'extended_feedback')
@lazy
def question_ids(self):
"""
Get the usage_ids of all of this XBlock's children that are "Questions".
"""
return list(chain.from_iterable(self.runtime.get_block(step_id).step_ids for step_id in self.step_ids))
@lazy
def questions(self):
""" Get the usage_ids of all of this XBlock's children that are "Questions" """
return list(chain.from_iterable(self.runtime.get_block(step_id).steps for step_id in self.steps))
"""
Get all questions associated with this block.
"""
return [self.runtime.get_block(question_id) for question_id in self.question_ids]
@lazy
def step_ids(self):
"""
Get the usage_ids of all of this XBlock's children that are "Steps"
Get the usage_ids of all of this XBlock's children that are steps.
"""
from .step import MentoringStepBlock # Import here to avoid circular dependency
return [
child_isinstance(self, child_id, MentoringStepBlock)
]
@lazy
def steps(self):
"""
Get the step children of this block.
"""
return [self.runtime.get_block(step_id) for step_id in self.step_ids]
def get_question_number(self, question_name):
question_names = [q.name for q in self.questions]
return question_names.index(question_name) + 1
def answer_mapper(self, answer_status):
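        # step.student_results holds (question_name, details) pairs, where details is a dict with a 'status' key.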
steps = self.steps
answer_map = []
for step in steps:
for answer in step.student_results:
if answer[1]['status'] == answer_status:
answer_map.append({
'id': answer[0],
'details': answer[1],
'step': step.step_number,
'number': self.get_question_number(answer[0]),
})
return answer_map
@property
def has_review_step(self):
from .step import ReviewStepBlock
return any(child_isinstance(self, child_id, ReviewStepBlock) for child_id in self.children)
@property
def assessment_message(self):
"""
Get the message to display to a student following a submission in assessment mode.
"""
if not self.max_attempts_reached:
return self.get_message_content('on-assessment-review', or_default=True)
else:
assessment_message = _("Note: you have used all attempts. Continue to the next unit.")
return '<p>{}</p>'.format(assessment_message)
@property
def score(self):
questions = self.questions
total_child_weight = sum(float(question.weight) for question in questions)
if total_child_weight == 0:
return Score(0, 0, [], [], [])
steps = self.steps
questions_map = {question.name: question for question in questions}
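        # Each question contributes its score times its weight; dividing by the total weight yields a weighted average in [0, 1].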
points_earned = 0
for step in steps:
for question_name, question_results in step.student_results:
question = questions_map.get(question_name)
if question: # Under what conditions would this evaluate to False?
points_earned += question_results['score'] * question.weight
score = points_earned / total_child_weight
correct = self.answer_mapper(CORRECT)
incorrect = self.answer_mapper(INCORRECT)
partially_correct = self.answer_mapper(PARTIAL)
return Score(score, int(round(score * 100)), correct, incorrect, partially_correct)
@property
def review_tips(self):
""" Get review tips, shown for wrong answers. """
review_tips = []
status_cache = dict()
steps = self.steps
for step in steps:
status_cache.update(dict(step.student_results))
for question in self.questions:
result = status_cache.get(question.name)
if result and result.get('status') != 'correct':
# The student got this wrong. Check if there is a review tip to show.
tip_html = question.get_review_tip()
if tip_html:
if hasattr(self.runtime, 'replace_jump_to_id_urls'):
tip_html = self.runtime.replace_jump_to_id_urls(tip_html)
review_tips.append(tip_html)
return review_tips
def show_extended_feedback(self):
return self.extended_feedback
def student_view(self, context):
fragment = Fragment()
children_contents = []
'children_contents': children_contents,
}))
fragment.add_css_url(self.runtime.local_resource_url(self, 'public/css/problem-builder.css'))
fragment.add_javascript_url(self.runtime.local_resource_url(self, 'public/js/vendor/underscore-min.js'))
fragment.add_javascript_url(self.runtime.local_resource_url(self, 'public/js/mentoring_with_steps.js'))
fragment.add_resource(loader.load_unicode('templates/html/mentoring_attempts.html'), "text/html")
fragment.add_resource(loader.load_unicode('templates/html/mentoring_review_templates.html'), "text/html")
self.include_theme_files(fragment)
fragment.initialize_js('MentoringWithStepsBlock')
NestedXBlockSpec allows explicitly setting disabled/enabled state, disabled reason (if any) and single/multiple
instances
"""
        from .step import MentoringStepBlock, ReviewStepBlock  # Import here to avoid circular dependency
return [
MentoringStepBlock,
ReviewStepBlock,
NestedXBlockSpec(OnAssessmentReviewMentoringMessageShim, boilerplate='on-assessment-review'),
]
@XBlock.json_handler
def update_active_step(self, new_value, suffix=''):
        if new_value < len(self.step_ids):
self.active_step = new_value
elif new_value == len(self.step_ids):
if self.has_review_step:
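                # -1 is used as a sentinel value meaning "the review step is active".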
self.active_step = -1
return {
'active_step': self.active_step
}
@XBlock.json_handler
def update_num_attempts(self, data, suffix=''):
if self.num_attempts < self.max_attempts:
self.num_attempts += 1
return {
'num_attempts': self.num_attempts
}
@XBlock.json_handler
def get_grade(self, data, suffix):
score = self.score
return {
'score': score.percentage,
'correct_answers': len(score.correct),
'incorrect_answers': len(score.incorrect),
'partially_correct_answers': len(score.partially_correct),
'correct': self.correct_json(stringify=False),
'incorrect': self.incorrect_json(stringify=False),
'partial': self.partial_json(stringify=False),
'assessment_message': self.assessment_message,
'assessment_review_tips': self.review_tips,
}
@XBlock.json_handler
def get_num_attempts(self, data, suffix):
return {
'num_attempts': self.num_attempts
}
@XBlock.json_handler
def try_again(self, data, suffix=''):
self.active_step = 0
        step_blocks = [self.runtime.get_block(child_id) for child_id in self.step_ids]
for step in step_blocks:
step.reset()
class StepParentMixin(object):
@lazy
    def step_ids(self):
"""
Get the usage_ids of all of this XBlock's children that are "Steps"
"""
        return [
_normalize_id(child_id) for child_id in self.children if child_isinstance(self, child_id, QuestionMixin)
]
@lazy
def steps(self):
""" Get the step children of this block, cached if possible. """
if getattr(self, "_steps_cache", None) is None:
self._steps_cache = [self.runtime.get_block(child_id) for child_id in self.steps]
return self._steps_cache
return [self.runtime.get_block(child_id) for child_id in self.step_ids]
class QuestionMixin(EnumerableChildMixin):
@lazy
def siblings(self):
        return self.get_parent().step_ids
def author_view(self, context):
context = context.copy() if context else {}
/* Display of url_name below content */
.xblock[data-block-type=sb-step] .url-name-footer,
.xblock[data-block-type=step-builder] .url-name-footer,
.xblock[data-block-type=problem-builder] .url-name-footer,
.xblock[data-block-type=mentoring] .url-name-footer {
font-style: italic;
}
.xblock[data-block-type=sb-step] .url-name-footer .url-name,
.xblock[data-block-type=step-builder] .url-name-footer .url-name,
.xblock[data-block-type=problem-builder] .url-name-footer .url-name,
.xblock[data-block-type=mentoring] .url-name-footer .url-name {
margin: 0 10px;
}
/* Custom appearance for our "Add" buttons */
.xblock[data-block-type=sb-step] .add-xblock-component .new-component .new-component-type .add-xblock-component-button,
.xblock[data-block-type=step-builder] .add-xblock-component .new-component .new-component-type .add-xblock-component-button,
.xblock[data-block-type=problem-builder] .add-xblock-component .new-component .new-component-type .add-xblock-component-button,
.xblock[data-block-type=mentoring] .add-xblock-component .new-component .new-component-type .add-xblock-component-button {
width: 200px;
line-height: 30px;
}
.xblock[data-block-type=sb-step] .add-xblock-component .new-component .new-component-type .add-xblock-component-button.disabled,
.xblock[data-block-type=sb-step] .add-xblock-component .new-component .new-component-type .add-xblock-component-button.disabled:hover,
.xblock[data-block-type=step-builder] .add-xblock-component .new-component .new-component-type .add-xblock-component-button.disabled,
.xblock[data-block-type=step-builder] .add-xblock-component .new-component .new-component-type .add-xblock-component-button.disabled:hover,
.xblock[data-block-type=problem-builder] .add-xblock-component .new-component .new-component-type .add-xblock-component-button.disabled,
.xblock[data-block-type=problem-builder] .add-xblock-component .new-component .new-component-type .add-xblock-component-button.disabled:hover,
.xblock[data-block-type=mentoring] .add-xblock-component .new-component .new-component-type .add-xblock-component-button.disabled,
cursor: default;
}
.xblock[data-block-type=step-builder] .submission-message-help p,
.xblock[data-block-type=problem-builder] .submission-message-help p {
border-top: 1px solid #ddd;
font-size: 0.85em;
function MentoringWithStepsBlock(runtime, element) {
// Set up gettext in case it isn't available in the client runtime:
if (typeof gettext == "undefined") {
window.gettext = function gettext_stub(string) { return string; };
window.ngettext = function ngettext_stub(strA, strB, n) { return n == 1 ? strA : strB; };
}
var children = runtime.children(element);
var steps = [];
var reviewStep;
for (var i = 0; i < children.length; i++) {
var child = children[i];
var blockType = $(child.element).data('block-type');
if (blockType === 'sb-step') {
steps.push(child);
} else if (blockType === 'sb-review-step') {
reviewStep = child;
}
}
var activeStep = $('.mentoring', element).data('active-step');
var reviewTipsTemplate = _.template($('#xblock-review-tips-template').html()); // Tips about specific questions the user got wrong
var attemptsTemplate = _.template($('#xblock-attempts-template').html());
var checkmark, submitDOM, nextDOM, reviewDOM, tryAgainDOM,
assessmentMessageDOM, gradeDOM, attemptsDOM, reviewTipsDOM, reviewLinkDOM, submitXHR;
function isLastStep() {
return (activeStep === steps.length-1);
}
function atReviewStep() {
return (activeStep === -1);
}
function someAttemptsLeft() {
var data = attemptsDOM.data();
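        // max_attempts and num_attempts come from data attributes on the .attempts div, rendered by the server-side template.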
if (data.max_attempts === 0) { // Unlimited number of attempts available
return true;
}
return (data.num_attempts < data.max_attempts);
}
function extendedFeedbackEnabled() {
var data = gradeDOM.data();
return data.extended_feedback === "True";
}
function showFeedback(response) {
if (response.step_status === 'correct') {
checkmark.addClass('checkmark-correct icon-ok fa-check');
} else if (response.step_status === 'partial') {
checkmark.addClass('checkmark-partially-correct icon-ok fa-check');
} else {
checkmark.addClass('checkmark-incorrect icon-exclamation fa-exclamation');
}
}
function handleResults(response) {
showFeedback(response);
// Update active step:
// If we end up at the review step, proceed with updating the number of attempts used.
// Otherwise, get UI ready for showing next step.
var handlerUrl = runtime.handlerUrl(element, 'update_active_step');
$.post(handlerUrl, JSON.stringify(activeStep+1))
.success(function(response) {
activeStep = response.active_step;
if (activeStep === -1) {
updateNumAttempts();
} else {
updateControls();
}
});
}
function updateNumAttempts() {
var handlerUrl = runtime.handlerUrl(element, 'update_num_attempts');
$.post(handlerUrl, JSON.stringify({}))
.success(function(response) {
attemptsDOM.data('num_attempts', response.num_attempts);
// Now that relevant info is up-to-date, get the latest grade
updateGrade();
});
}
function updateGrade() {
var handlerUrl = runtime.handlerUrl(element, 'get_grade');
$.post(handlerUrl, JSON.stringify({}))
.success(function(response) {
gradeDOM.data('score', response.score);
gradeDOM.data('correct_answer', response.correct_answers);
gradeDOM.data('incorrect_answer', response.incorrect_answers);
gradeDOM.data('partially_correct_answer', response.partially_correct_answers);
gradeDOM.data('correct', response.correct);
gradeDOM.data('incorrect', response.incorrect);
gradeDOM.data('partial', response.partial);
gradeDOM.data('assessment_message', response.assessment_message);
gradeDOM.data('assessment_review_tips', response.assessment_review_tips);
updateControls();
});
}
function updateControls() {
submitDOM.attr('disabled', 'disabled');
nextDOM.removeAttr("disabled");
if (nextDOM.is(':visible')) { nextDOM.focus(); }
if (atReviewStep()) {
if (reviewStep) {
reviewDOM.removeAttr('disabled');
} else {
if (someAttemptsLeft()) {
tryAgainDOM.removeAttr('disabled');
tryAgainDOM.show();
} else {
showAttempts();
}
}
}
}
step.submit(handleResults);
}
function getResults() {
var step = steps[activeStep];
step.getResults(handleReviewResults);
}
function handleReviewResults(response) {
// Show step-level feedback
showFeedback(response);
// Forward to active step to show answer level feedback
var step = steps[activeStep];
var results = response.results;
var options = {
checkmark: checkmark
};
step.handleReview(results, options);
}
function hideAllSteps() {
for (var i=0; i < steps.length; i++) {
$(steps[i].element).hide();
}
}
function clearSelections() {
$('input[type=radio], input[type=checkbox]', element).prop('checked', false);
}
function cleanAll() {
checkmark.removeClass('checkmark-correct icon-ok fa-check');
checkmark.removeClass('checkmark-partially-correct icon-ok fa-check');
checkmark.removeClass('checkmark-incorrect icon-exclamation fa-exclamation');
hideAllSteps();
assessmentMessageDOM.html('');
gradeDOM.html('');
attemptsDOM.html('');
reviewTipsDOM.empty().hide();
}
function updateDisplay() {
cleanAll();
if (atReviewStep()) {
showAssessmentMessage();
showReviewStep();
showAttempts();
} else {
showActiveStep();
validateXBlock();
nextDOM.attr('disabled', 'disabled');
if (isLastStep() && reviewStep) {
reviewDOM.attr('disabled', 'disabled');
reviewDOM.show();
}
}
}
function showAssessmentMessage() {
var data = gradeDOM.data();
assessmentMessageDOM.html(data.assessment_message);
}
function showReviewStep() {
var data = gradeDOM.data();
// Forward to review step to render grade data
var showExtendedFeedback = (!someAttemptsLeft() && extendedFeedbackEnabled());
reviewStep.renderGrade(gradeDOM, showExtendedFeedback);
        // Add a click handler to the step links that shows the associated step
$('a.step-link', element).on('click', getStepToReview);
if (someAttemptsLeft()) {
tryAgainDOM.removeAttr('disabled');
// Review tips
if (data.assessment_review_tips.length > 0) {
// on-assessment-review-question messages specific to questions the student got wrong:
reviewTipsDOM.html(reviewTipsTemplate({
tips: data.assessment_review_tips
}));
reviewTipsDOM.show();
}
}
submitDOM.hide();
nextDOM.hide();
reviewDOM.hide();
tryAgainDOM.show();
}
function getStepToReview(event) {
event.preventDefault();
var stepIndex = parseInt($(event.target).data('step')) - 1;
jumpToReview(stepIndex);
}
function jumpToReview(stepIndex) {
activeStep = stepIndex;
cleanAll();
showActiveStep();
nextDOM.attr('disabled', 'disabled');
validateXBlock();
if (isLastStep()) {
reviewDOM.show();
reviewDOM.removeAttr('disabled');
nextDOM.hide();
nextDOM.attr('disabled', 'disabled');
} else {
nextDOM.show();
nextDOM.removeAttr('disabled');
}
tryAgainDOM.hide();
submitDOM.show();
submitDOM.attr('disabled', 'disabled');
reviewLinkDOM.show();
getResults();
}
function showAttempts() {
var data = attemptsDOM.data();
if (data.max_attempts > 0) {
attemptsDOM.html(attemptsTemplate(data));
} // Don't show attempts if unlimited attempts available (max_attempts === 0)
}
function showActiveStep() {
function initSteps(options) {
for (var i=0; i < steps.length; i++) {
var step = steps[i];
var mentoring = {
setContent: setContent,
publish_event: publishEvent
};
options.mentoring = mentoring;
step.initChildren(options);
}
}
function setContent(dom, content) {
dom.html('');
dom.append(content);
var template = $('#light-child-template', dom).html();
if (template) {
dom.append(template);
}
}
function publishEvent(data) {
$.ajax({
type: "POST",
url: runtime.handlerUrl(element, 'publish_event'),
data: JSON.stringify(data)
});
}
function showGrade() {
cleanAll();
showAssessmentMessage();
showReviewStep();
showAttempts();
// Disable "Try again" button if no attempts left
if (!someAttemptsLeft()) {
tryAgainDOM.attr("disabled", "disabled");
}
nextDOM.off();
nextDOM.on('click', reviewNextStep);
reviewLinkDOM.hide();
}
function reviewNextStep() {
jumpToReview(activeStep+1);
}
function handleTryAgain(result) {
activeStep = result.active_step;
clearSelections();
updateDisplay();
tryAgainDOM.hide();
submitDOM.show();
if (! isLastStep()) {
nextDOM.off();
nextDOM.on('click', updateDisplay);
nextDOM.show();
reviewDOM.hide();
}
}
submitXHR = $.post(handlerUrl, JSON.stringify({})).success(handleTryAgain);
}
function initClickHandlers() {
$(document).on("click", function(event, ui) {
var target = $(event.target);
var itemFeedbackParentSelector = '.choice';
var itemFeedbackSelector = ".choice .choice-tips";
function clickedInside(selector, parent_selector){
return target.is(selector) || target.parents(parent_selector).length>0;
}
if (!clickedInside(itemFeedbackSelector, itemFeedbackParentSelector)) {
$(itemFeedbackSelector).not(':hidden').hide();
$('.choice-tips-container').removeClass('with-tips');
}
});
}
function initXBlockView() {
// Hide steps until we're ready
hideAllSteps();
// Initialize references to relevant DOM elements and set up event handlers
checkmark = $('.assessment-checkmark', element);
submitDOM = $(element).find('.submit .input-main');
submitDOM.on('click', submit);
nextDOM = $(element).find('.submit .input-next');
if (atReviewStep()) {
nextDOM.on('click', reviewNextStep);
} else {
nextDOM.on('click', updateDisplay);
}
reviewDOM = $(element).find('.submit .input-review');
reviewDOM.on('click', showGrade);
tryAgainDOM = $(element).find('.submit .input-try-again');
tryAgainDOM.on('click', tryAgain);
assessmentMessageDOM = $('.assessment-message', element);
gradeDOM = $('.grade', element);
attemptsDOM = $('.attempts', element);
reviewTipsDOM = $('.assessment-review-tips', element);
reviewLinkDOM = $(element).find('.review-link');
reviewLinkDOM.on('click', showGrade);
// Initialize individual steps
// (sets up click handlers for questions and makes sure answer data is up-to-date)
var options = {
onChange: onChange
};
initSteps(options);
// Refresh info about number of attempts used:
// In the LMS, the HTML of multiple units can be loaded at once,
// and the user can flip among them. If that happens, information about
// number of attempts student has used up may be out of date.
var handlerUrl = runtime.handlerUrl(element, 'get_num_attempts');
$.post(handlerUrl, JSON.stringify({}))
.success(function(response) {
attemptsDOM.data('num_attempts', response.num_attempts);
// Finally, show controls and content
submitDOM.show();
nextDOM.show();
updateDisplay();
});
}
initClickHandlers();
initXBlockView();
}
function MentoringWithStepsEdit(runtime, element) {
"use strict";
// Disable "add" buttons when a message of that type already exists:
var blockIsPresent = function(klass) {
return $('.xblock ' + klass).length > 0;
};
var updateButton = function(button, condition) {
button.toggleClass('disabled', condition);
};
var disableButton = function(ev) {
if ($(this).is('.disabled')) {
ev.preventDefault();
ev.stopPropagation();
} else {
$(this).addClass('disabled');
}
};
var initButtons = function(dataCategory) {
var $buttons = $('.add-xblock-component-button[data-category='+dataCategory+']', element);
$buttons.each(function() {
if (dataCategory === 'pb-message') {
var msg_type = $(this).data('boilerplate');
updateButton($(this), blockIsPresent('.submission-message.'+msg_type));
} else {
updateButton($(this), blockIsPresent('.xblock-header-sb-review-step'));
}
});
$buttons.on('click', disableButton);
};
initButtons('pb-message');
initButtons('sb-review-step');
ProblemBuilderUtil.transformClarifications(element);
StudioEditableXBlockMixin(runtime, element);
function ReviewStepBlock(runtime, element) {
var gradeTemplate = _.template($('#xblock-feedback-template').html());
var reviewStepsTemplate = _.template($('#xblock-step-links-template').html());
return {
'renderGrade': function(gradeDOM, showExtendedFeedback) {
var data = gradeDOM.data();
_.extend(data, {
'runDetails': function(correctness) {
if (!showExtendedFeedback) {
return '';
}
var self = this;
return reviewStepsTemplate({'questions': self[correctness], 'correctness': correctness});
}
});
gradeDOM.html(gradeTemplate(data));
}
};
}
function MentoringStepBlock(runtime, element) {
var children = runtime.children(element);
var submitXHR, resultsXHR;
function callIfExists(obj, fn) {
if (typeof obj !== 'undefined' && typeof obj[fn] == 'function') {
return is_valid;
},
        submit: function(resultHandler) {
var handler_name = 'submit';
var data = {};
for (var i = 0; i < children.length; i++) {
var child = children[i];
if (child && child.name !== undefined && typeof(child[handler_name]) !== "undefined") {
data[child.name.toString()] = child[handler_name]();
if (child && child.name !== undefined) {
data[child.name.toString()] = callIfExists(child, handler_name);
}
}
var handlerUrl = runtime.handlerUrl(element, handler_name);
}
submitXHR = $.post(handlerUrl, JSON.stringify(data))
.success(function(response) {
resultHandler(response);
});
},
getResults: function(resultHandler) {
var handler_name = 'get_results';
var data = [];
for (var i = 0; i < children.length; i++) {
var child = children[i];
if (child && child.name !== undefined) { // Check if we are dealing with a question
data[i] = child.name;
}
}
var handlerUrl = runtime.handlerUrl(element, handler_name);
if (resultsXHR) {
resultsXHR.abort();
}
resultsXHR = $.post(handlerUrl, JSON.stringify(data))
.success(function(response) {
resultHandler(response);
});
},
handleReview: function(results, options) {
for (var i = 0; i < children.length; i++) {
var child = children[i];
if (child && child.name !== undefined) { // Check if we are dealing with a question
var result = results[child.name];
callIfExists(child, 'handleSubmit', result, options);
callIfExists(child, 'handleReview', result);
}
}
}
};
# In class MentoringStepBlock:
CAPTION = _(u"Step")
STUDIO_LABEL = _(u"Mentoring Step")
    CATEGORY = 'sb-step'
# Settings
display_name = String(
@lazy
def siblings(self):
        return self.get_parent().step_ids
@property
def is_last_step(self):
parent = self.get_parent()
return self.step_number == len(parent.step_ids)
@property
def allowed_nested_blocks(self):
# Submit child blocks (questions) and gather results
submit_results = []
        for child in self.steps:
if child.name and child.name in submissions:
submission = submissions[child.name]
child_result = child.submit(submission)
for result in submit_results:
self.student_results.append(result)
# Compute "answer status" for this step
if all(result[1]['status'] == 'correct' for result in submit_results):
completed = Correctness.CORRECT
elif all(result[1]['status'] == 'incorrect' for result in submit_results):
completed = Correctness.INCORRECT
else:
completed = Correctness.PARTIAL
return {
'message': 'Success!',
'completed': completed,
'step_status': self.answer_status,
'results': submit_results,
}
@XBlock.json_handler
def get_results(self, queries, suffix=''):
results = {}
answers = dict(self.student_results)
for question in self.steps:
previous_results = answers[question.name]
result = question.get_results(previous_results)
results[question.name] = result
# Add 'message' to results? Looks like it's not used on the client ...
return {
'results': results,
'step_status': self.answer_status,
}
def reset(self):
while self.student_results:
self.student_results.pop()
@property
def answer_status(self):
if all(result[1]['status'] == 'correct' for result in self.student_results):
answer_status = Correctness.CORRECT
elif all(result[1]['status'] == 'incorrect' for result in self.student_results):
answer_status = Correctness.INCORRECT
else:
answer_status = Correctness.PARTIAL
return answer_status
def author_edit_view(self, context):
"""
Add some HTML to the author view that allows authors to add child blocks.
fragment.initialize_js('MentoringStepBlock')
return fragment
class ReviewStepBlock(XBlockWithPreviewMixin, XBlock):
""" A dedicated step for reviewing results for a mentoring block """
CATEGORY = 'sb-review-step'
STUDIO_LABEL = _("Review Step")
display_name = String(
default="Review Step"
)
def mentoring_view(self, context=None):
""" Mentoring View """
return self._render_view(context)
def student_view(self, context=None):
""" Student View """
return self._render_view(context)
def studio_view(self, context=None):
""" Studio View """
return Fragment(u'<p>This is a preconfigured block. It is not editable.</p>')
def _render_view(self, context):
fragment = Fragment()
fragment.add_content(loader.render_template('templates/html/review_step.html', {
'self': self,
}))
fragment.add_javascript_url(self.runtime.local_resource_url(self, 'public/js/review_step.js'))
fragment.initialize_js('ReviewStepBlock')
return fragment
<!-- Tips about specific questions the student got wrong. From pb-message[type=on-assessment-review-question] blocks -->
<script type="text/template" id="xblock-review-tips-template">
<p class="review-tips-intro"><%= gettext("You might consider reviewing the following items before your next assessment attempt:") %></p>
<ul class="review-tips-list">
<% for (var tip_idx in tips) {{ %>
<li><%= tips[tip_idx] %></li>
<% }} %>
</ul>
</script>
<div class="assessment-question-block">
<div class="assessment-message"></div>
{% for child_content in children_contents %}
{{ child_content|safe }}
{% endfor %}
<div class="grade"
data-assessment_message="{{ self.assessment_message }}"
data-score="{{ self.score.percentage }}"
data-correct_answer="{{ self.score.correct|length }}"
data-incorrect_answer="{{ self.score.incorrect|length }}"
data-partially_correct_answer="{{ self.score.partially_correct|length }}"
data-assessment_review_tips="{{ self.review_tips_json }}"
data-extended_feedback="{{ self.extended_feedback }}"
data-correct="{{ self.correct_json }}"
data-incorrect="{{ self.incorrect_json }}"
data-partial="{{ self.partial_json }}">
</div>
<div class="submit">
<span class="assessment-checkmark fa icon-2x"></span>
<input type="button" class="input-main" value="Submit" disabled="disabled" />
<input type="button" class="input-next" value="Next Step" disabled="disabled" />
<input type="button" class="input-review" value="Review grade" disabled="disabled" />
<input type="button" class="input-try-again" value="Try again" disabled="disabled" />
<div class="attempts"
data-max_attempts="{{ self.max_attempts }}" data-num_attempts="{{ self.num_attempts }}">
</div>
</div>
<div class="assessment-review-tips"></div>
</div>
<div class="review-link"><a href="#">Review final grade</a></div>
</div>
<div class="sb-review-step">
<script type="text/template" id="xblock-feedback-template">
<div class="grade-result">
<h2>
<%= _.template(gettext("You scored {percent}% on this assessment."), {percent: score}, {interpolate: /\{(.+?)\}/g}) %>
</h2>
<hr/>
<span class="assessment-checkmark icon-2x checkmark-correct icon-ok fa fa-check"></span>
<div class="results-section">
<p>
<%= _.template(
ngettext(
"You answered 1 question correctly.",
"You answered {number_correct} questions correctly.",
correct_answer
), {number_correct: correct_answer}, {interpolate: /\{(.+?)\}/g})
%>
</p>
<%= runDetails('correct') %>
</div>
<div class="clear"></div>
<span class="assessment-checkmark icon-2x checkmark-partially-correct icon-ok fa fa-check"></span>
<div class="results-section">
<p>
<%= _.template(
ngettext(
"You answered 1 question partially correctly.",
"You answered {number_partially_correct} questions partially correctly.",
partially_correct_answer
), {number_partially_correct: partially_correct_answer}, {interpolate: /\{(.+?)\}/g})
%>
</p>
<%= runDetails('partial') %>
</div>
<div class="clear"></div>
<span class="assessment-checkmark icon-2x checkmark-incorrect icon-exclamation fa fa-exclamation"></span>
<div class="results-section">
<p>
<%= _.template(
ngettext(
"You answered 1 question incorrectly.",
"You answered {number_incorrect} questions incorrectly.",
incorrect_answer
), {number_incorrect: incorrect_answer}, {interpolate: /\{(.+?)\}/g})
%>
</p>
<%= runDetails('incorrect') %>
</div>
<div class="clear"></div>
<hr/>
</div>
</script>
<!-- Template for extended feedback: Show extended feedback details when all attempts are used up. -->
<script type="text/template" id="xblock-step-links-template">
<% var q, last_question; %>
<ul class="review-list <%= correctness %>-list">
<% for (var question in questions) {{ q = questions[question]; last_question = question == questions.length - 1; %>
<li><a href="#" class="step-link" data-step="<%= q.step %>"><%= _.template(gettext("Question {number}"), {number: q.number}, {interpolate: /\{(.+?)\}/g}) %></a></li>
<% }} %>
</ul>
</script>
</div>
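The step-links template above expects each `questions` entry to carry a `number` (for the link text) and a `step` (for navigation). A hedged sketch of the kind of serialization that could feed `data-correct`, `data-incorrect`, and `data-partial`: the output shape is dictated by the template, but the input structure here is purely an illustrative assumption, not the block's actual bookkeeping:

# Hedged sketch; only the output shape ({"number": ..., "step": ...})
# is taken from the template above. The input structure is assumed.
import json

def questions_json(results, status):
    """Serialize (number, step) pairs for questions matching one correctness status."""
    return json.dumps([
        {'number': r['number'], 'step': r['step']}
        for r in results
        if r['status'] == status
    ])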
<div class="pb-step">
<div class="sb-step">
{% if show_title %}
<div class="title">
<h3>
......
......@@ -30,6 +30,8 @@ MentoringBlock.url_name = String()
loader = ResourceLoader(__name__)
CORRECT, INCORRECT, PARTIAL = "correct", "incorrect", "partially-correct"
class PopupCheckMixin(object):
"""
......@@ -133,6 +135,88 @@ class MentoringAssessmentBaseTest(ProblemBuilderBaseTest):
return mentoring, controls
def assert_hidden(self, elem):
self.assertFalse(elem.is_displayed())
def assert_disabled(self, elem):
self.assertTrue(elem.is_displayed())
self.assertFalse(elem.is_enabled())
def assert_clickable(self, elem):
self.assertTrue(elem.is_displayed())
self.assertTrue(elem.is_enabled())
def ending_controls(self, controls, last):
if last:
self.assert_hidden(controls.next_question)
self.assert_disabled(controls.review)
else:
self.assert_disabled(controls.next_question)
self.assert_hidden(controls.review)
def selected_controls(self, controls, last):
self.assert_clickable(controls.submit)
self.ending_controls(controls, last)
def assert_message_text(self, mentoring, text):
message_wrapper = mentoring.find_element_by_css_selector('.assessment-message')
self.assertEqual(message_wrapper.text, text)
self.assertTrue(message_wrapper.is_displayed())
def assert_no_message_text(self, mentoring):
message_wrapper = mentoring.find_element_by_css_selector('.assessment-message')
self.assertEqual(message_wrapper.text, '')
def check_question_feedback(self, step_builder, question):
question_checkmark = step_builder.find_element_by_css_selector('.assessment-checkmark')
question_feedback = question.find_element_by_css_selector(".feedback")
self.assertTrue(question_feedback.is_displayed())
self.assertEqual(question_feedback.text, "Question Feedback Message")
question.click()
self.assertFalse(question_feedback.is_displayed())
question_checkmark.click()
self.assertTrue(question_feedback.is_displayed())
def do_submit_wait(self, controls, last):
if last:
self.wait_until_clickable(controls.review)
else:
self.wait_until_clickable(controls.next_question)
def do_post(self, controls, last):
if last:
controls.review.click()
else:
controls.next_question.click()
def multiple_response_question(self, number, mentoring, controls, choice_names, result, last=False):
question = self.peek_at_multiple_response_question(number, mentoring, controls, last=last)
choices = GetChoices(question)
expected_choices = {
"Its elegance": False,
"Its beauty": False,
"Its gracefulness": False,
"Its bugs": False,
}
self.assertEquals(choices.state, expected_choices)
for name in choice_names:
choices.select(name)
expected_choices[name] = True
self.assertEquals(choices.state, expected_choices)
self.selected_controls(controls, last)
controls.submit.click()
self.do_submit_wait(controls, last)
self._assert_checkmark(mentoring, result)
controls.review.click()
def expect_question_visible(self, number, mentoring, question_text=None):
if not question_text:
question_text = self.question_text(number)
......@@ -163,6 +247,14 @@ class MentoringAssessmentBaseTest(ProblemBuilderBaseTest):
self.wait_until_clickable(controls.next_question)
controls.next_question.click()
def _assert_checkmark(self, mentoring, result):
"""Assert that only the desired checkmark is present."""
states = {CORRECT: 0, INCORRECT: 0, PARTIAL: 0}
states[result] += 1
for name, count in states.items():
self.assertEqual(len(mentoring.find_elements_by_css_selector(".checkmark-{}".format(name))), count)
class GetChoices(object):
""" Helper class for interacting with MCQ options """
......
......@@ -18,9 +18,7 @@
# "AGPLv3". If not, see <http://www.gnu.org/licenses/>.
#
from ddt import ddt, unpack, data
from .base_test import MentoringAssessmentBaseTest, GetChoices
CORRECT, INCORRECT, PARTIAL = "correct", "incorrect", "partially-correct"
from .base_test import CORRECT, INCORRECT, PARTIAL, MentoringAssessmentBaseTest, GetChoices
@ddt
......@@ -47,29 +45,10 @@ class MentoringAssessmentTest(MentoringAssessmentBaseTest):
controls.click()
title.click()
def assert_hidden(self, elem):
self.assertFalse(elem.is_displayed())
def assert_disabled(self, elem):
self.assertTrue(elem.is_displayed())
self.assertFalse(elem.is_enabled())
def assert_clickable(self, elem):
self.assertTrue(elem.is_displayed())
self.assertTrue(elem.is_enabled())
def assert_persistent_elements_present(self, mentoring):
self.assertIn("A Simple Assessment", mentoring.text)
self.assertIn("This paragraph is shared between all questions.", mentoring.text)
def _assert_checkmark(self, mentoring, result):
"""Assert that only the desired checkmark is present."""
states = {CORRECT: 0, INCORRECT: 0, PARTIAL: 0}
states[result] += 1
for name, count in states.items():
self.assertEqual(len(mentoring.find_elements_by_css_selector(".checkmark-{}".format(name))), count)
def go_to_workbench_main_page(self):
self.browser.get(self.live_server_url)
......@@ -104,35 +83,6 @@ class MentoringAssessmentTest(MentoringAssessmentBaseTest):
self._assert_checkmark(mentoring, result)
self.do_post(controls, last)
def ending_controls(self, controls, last):
if last:
self.assert_hidden(controls.next_question)
self.assert_disabled(controls.review)
else:
self.assert_disabled(controls.next_question)
self.assert_hidden(controls.review)
def selected_controls(self, controls, last):
self.assert_clickable(controls.submit)
if last:
self.assert_hidden(controls.next_question)
self.assert_disabled(controls.review)
else:
self.assert_disabled(controls.next_question)
self.assert_hidden(controls.review)
def do_submit_wait(self, controls, last):
if last:
self.wait_until_clickable(controls.review)
else:
self.wait_until_clickable(controls.next_question)
def do_post(self, controls, last):
if last:
controls.review.click()
else:
controls.next_question.click()
def single_choice_question(self, number, mentoring, controls, choice_name, result, last=False):
question = self.expect_question_visible(number, mentoring)
......@@ -213,44 +163,6 @@ class MentoringAssessmentTest(MentoringAssessmentBaseTest):
return question
def check_question_feedback(self, mentoring, question):
question_checkmark = mentoring.find_element_by_css_selector('.assessment-checkmark')
question_feedback = question.find_element_by_css_selector(".feedback")
self.assertTrue(question_feedback.is_displayed())
self.assertEqual(question_feedback.text, "Question Feedback Message")
question.click()
self.assertFalse(question_feedback.is_displayed())
question_checkmark.click()
self.assertTrue(question_feedback.is_displayed())
def multiple_response_question(self, number, mentoring, controls, choice_names, result, last=False):
question = self.peek_at_multiple_response_question(number, mentoring, controls, last=last)
choices = GetChoices(question)
expected_choices = {
"Its elegance": False,
"Its beauty": False,
"Its gracefulness": False,
"Its bugs": False,
}
self.assertEquals(choices.state, expected_choices)
for name in choice_names:
choices.select(name)
expected_choices[name] = True
self.assertEquals(choices.state, expected_choices)
self.selected_controls(controls, last)
controls.submit.click()
self.do_submit_wait(controls, last)
self._assert_checkmark(mentoring, result)
controls.review.click()
def peek_at_review(self, mentoring, controls, expected, extended_feedback=False):
self.wait_until_text_in("You scored {percentage}% on this assessment.".format(**expected), mentoring)
self.assert_persistent_elements_present(mentoring)
......@@ -288,15 +200,6 @@ class MentoringAssessmentTest(MentoringAssessmentBaseTest):
self.assert_hidden(controls.review)
self.assert_hidden(controls.review_link)
def assert_message_text(self, mentoring, text):
message_wrapper = mentoring.find_element_by_css_selector('.assessment-message')
self.assertEqual(message_wrapper.text, text)
self.assertTrue(message_wrapper.is_displayed())
def assert_no_message_text(self, mentoring):
message_wrapper = mentoring.find_element_by_css_selector('.assessment-message')
self.assertEqual(message_wrapper.text, '')
def extended_feedback_checks(self, mentoring, controls, expected_results):
# The multiple choice question is the third correctly answered question
self.assert_hidden(controls.review_link)
......
from .base_test import CORRECT, INCORRECT, PARTIAL, MentoringAssessmentBaseTest, GetChoices
from ddt import ddt, data
@ddt
class StepBuilderTest(MentoringAssessmentBaseTest):
def freeform_answer(self, number, step_builder, controls, text_input, result, saved_value="", last=False):
self.expect_question_visible(number, step_builder)
answer = step_builder.find_element_by_css_selector("textarea.answer.editable")
self.assertIn(self.question_text(number), step_builder.text)
self.assertIn("What is your goal?", step_builder.text)
self.assertEquals(saved_value, answer.get_attribute("value"))
if not saved_value:
self.assert_disabled(controls.submit)
self.assert_disabled(controls.next_question)
answer.clear()
answer.send_keys(text_input)
self.assertEquals(text_input, answer.get_attribute("value"))
self.assert_clickable(controls.submit)
self.ending_controls(controls, last)
self.assert_hidden(controls.review)
self.assert_hidden(controls.try_again)
controls.submit.click()
self.do_submit_wait(controls, last)
self._assert_checkmark(step_builder, result)
self.do_post(controls, last)
def single_choice_question(self, number, step_builder, controls, choice_name, result, last=False):
question = self.expect_question_visible(number, step_builder)
self.assertIn("Do you like this MCQ?", question.text)
self.assert_disabled(controls.submit)
self.ending_controls(controls, last)
self.assert_hidden(controls.try_again)
choices = GetChoices(question)
expected_state = {"Yes": False, "Maybe not": False, "I don't understand": False}
self.assertEquals(choices.state, expected_state)
choices.select(choice_name)
expected_state[choice_name] = True
self.assertEquals(choices.state, expected_state)
self.selected_controls(controls, last)
controls.submit.click()
self.do_submit_wait(controls, last)
self._assert_checkmark(step_builder, result)
self.do_post(controls, last)
def rating_question(self, number, step_builder, controls, choice_name, result, last=False):
self.expect_question_visible(number, step_builder)
self.assertIn("How much do you rate this MCQ?", step_builder.text)
self.assert_disabled(controls.submit)
self.ending_controls(controls, last)
self.assert_hidden(controls.try_again)
choices = GetChoices(step_builder, ".rating")
expected_choices = {
"1 - Not good at all": False,
"2": False, "3": False, "4": False,
"5 - Extremely good": False,
"I don't want to rate it": False,
}
self.assertEquals(choices.state, expected_choices)
choices.select(choice_name)
expected_choices[choice_name] = True
self.assertEquals(choices.state, expected_choices)
self.ending_controls(controls, last)
controls.submit.click()
self.do_submit_wait(controls, last)
self._assert_checkmark(step_builder, result)
self.do_post(controls, last)
def peek_at_multiple_response_question(
self, number, step_builder, controls, last=False, extended_feedback=False, alternative_review=False
):
question = self.expect_question_visible(number, step_builder)
self.assertIn("What do you like in this MRQ?", step_builder.text)
        if extended_feedback:
            self.assert_disabled(controls.submit)
            self.check_question_feedback(step_builder, question)
            if alternative_review:
                self.assert_clickable(controls.review_link)
                self.assert_hidden(controls.try_again)
        return question
def peek_at_review(self, step_builder, controls, expected, extended_feedback=False):
self.wait_until_text_in("You scored {percentage}% on this assessment.".format(**expected), step_builder)
# Check grade breakdown
if expected["correct"] == 1:
self.assertIn("You answered 1 questions correctly.".format(**expected), step_builder.text)
else:
self.assertIn("You answered {correct} questions correctly.".format(**expected), step_builder.text)
if expected["partial"] == 1:
self.assertIn("You answered 1 question partially correctly.", step_builder.text)
else:
self.assertIn("You answered {partial} questions partially correctly.".format(**expected), step_builder.text)
if expected["incorrect"] == 1:
self.assertIn("You answered 1 question incorrectly.", step_builder.text)
else:
self.assertIn("You answered {incorrect} questions incorrectly.".format(**expected), step_builder.text)
# Check presence of review links
# - If unlimited attempts: no review links
# - If limited attempts:
# - If not max attempts reached: no review links
# - If max attempts reached:
# - If extended feedback: review links available
# - If not extended feedback: no review links
review_list = step_builder.find_elements_by_css_selector('.review-list')
if expected["max_attempts"] == 0:
self.assertFalse(review_list)
else:
if expected["num_attempts"] < expected["max_attempts"]:
self.assertFalse(review_list)
elif expected["num_attempts"] == expected["max_attempts"]:
if extended_feedback:
for correctness in ['correct', 'incorrect', 'partial']:
review_items = step_builder.find_elements_by_css_selector('.%s-list li' % correctness)
self.assertEqual(len(review_items), expected[correctness])
else:
self.assertFalse(review_list)
# Check that the displayed info about the number of attempts used is correct
if expected["max_attempts"] == 1:
self.assertIn("You have used {num_attempts} of 1 submission.".format(**expected), step_builder.text)
elif expected["max_attempts"] == 0:
self.assertNotIn("You have used", step_builder.text)
else:
self.assertIn(
"You have used {num_attempts} of {max_attempts} submissions.".format(**expected),
step_builder.text
)
# Check controls
self.assert_hidden(controls.submit)
self.assert_hidden(controls.next_question)
self.assert_hidden(controls.review)
self.assert_hidden(controls.review_link)
def popup_check(self, step_builder, item_feedbacks, prefix='', do_submit=True):
for index, expected_feedback in enumerate(item_feedbacks):
choice_wrapper = step_builder.find_elements_by_css_selector(prefix + " .choice")[index]
choice_wrapper.click()
item_feedback_icon = choice_wrapper.find_element_by_css_selector(".choice-result")
item_feedback_icon.click()
item_feedback_popup = choice_wrapper.find_element_by_css_selector(".choice-tips")
self.assertTrue(item_feedback_popup.is_displayed())
self.assertEqual(item_feedback_popup.text, expected_feedback)
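# Clicking inside the popup should not dismiss it; clicking elsewhere in the step should.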
item_feedback_popup.click()
self.assertTrue(item_feedback_popup.is_displayed())
step_builder.click()
self.assertFalse(item_feedback_popup.is_displayed())
def extended_feedback_checks(self, step_builder, controls, expected_results):
# The MRQ is the third correctly answered question
self.assert_hidden(controls.review_link)
step_builder.find_elements_by_css_selector('.correct-list li a')[2].click()
self.peek_at_multiple_response_question(
None, step_builder, controls, extended_feedback=True, alternative_review=True
)
# The step should display 5 checkmarks (4 for the correct MRQ items, plus the step-level correctness indicator)
correct_marks = step_builder.find_elements_by_css_selector('.checkmark-correct')
incorrect_marks = step_builder.find_elements_by_css_selector('.checkmark-incorrect')
self.assertEqual(len(correct_marks), 5)
self.assertEqual(len(incorrect_marks), 0)
item_feedbacks = [
"This is something everyone has to like about this MRQ",
"This is something everyone has to like about this MRQ",
"This MRQ is indeed very graceful",
"Nah, there aren't any!"
]
self.popup_check(step_builder, item_feedbacks, prefix='div[data-name="mrq_1_1"]', do_submit=False)
controls.review_link.click()
self.peek_at_review(step_builder, controls, expected_results, extended_feedback=True)
# Review rating question (directly precedes MRQ)
step_builder.find_elements_by_css_selector('.incorrect-list li a')[0].click()
# It should be possible to visit the MRQ from here
self.wait_until_clickable(controls.next_question)
controls.next_question.click()
self.peek_at_multiple_response_question(
None, step_builder, controls, extended_feedback=True, alternative_review=True
)
@data(
{"max_attempts": 0, "extended_feedback": False}, # Unlimited attempts, no extended feedback
{"max_attempts": 1, "extended_feedback": True}, # Limited attempts, extended feedback
{"max_attempts": 1, "extended_feedback": False}, # Limited attempts, no extended feedback
{"max_attempts": 2, "extended_feedback": True}, # Limited attempts, extended feedback
)
def test_step_builder(self, params):
max_attempts = params['max_attempts']
extended_feedback = params['extended_feedback']
step_builder, controls = self.load_assessment_scenario("step_builder.xml", params)
# Step 1
# Submit free-form answer, go to next step
self.freeform_answer(None, step_builder, controls, 'This is the answer', CORRECT)
# Step 2
# Submit MCQ, go to next step
self.single_choice_question(None, step_builder, controls, 'Maybe not', INCORRECT)
# Step 3
# Submit rating, go to next step
self.rating_question(None, step_builder, controls, "5 - Extremely good", CORRECT)
# Last step
# Submit MRQ, go to review
self.multiple_response_question(None, step_builder, controls, ("Its beauty",), PARTIAL, last=True)
# Review step
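# 2 correct + 0.5 * 1 partially correct out of 4 questions = 62.5%; the UI shows 63.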
expected_results = {
"correct": 2, "partial": 1, "incorrect": 1, "percentage": 63,
"num_attempts": 1, "max_attempts": max_attempts
}
self.peek_at_review(step_builder, controls, expected_results, extended_feedback=extended_feedback)
if max_attempts == 1:
self.assert_message_text(step_builder, "Note: you have used all attempts. Continue to the next unit.")
self.assert_disabled(controls.try_again)
return
self.assert_message_text(step_builder, "Assessment additional feedback message text")
self.assert_clickable(controls.try_again)
# Try again
controls.try_again.click()
self.wait_until_hidden(controls.try_again)
self.assert_no_message_text(step_builder)
self.freeform_answer(
None, step_builder, controls, 'This is a different answer', CORRECT, saved_value='This is the answer'
)
self.single_choice_question(None, step_builder, controls, 'Yes', CORRECT)
self.rating_question(None, step_builder, controls, "1 - Not good at all", INCORRECT)
user_selection = ("Its elegance", "Its beauty", "Its gracefulness")
self.multiple_response_question(None, step_builder, controls, user_selection, CORRECT, last=True)
expected_results = {
"correct": 3, "partial": 0, "incorrect": 1, "percentage": 75,
"num_attempts": 2, "max_attempts": max_attempts
}
self.peek_at_review(step_builder, controls, expected_results, extended_feedback=extended_feedback)
if max_attempts == 2:
self.assert_disabled(controls.try_again)
else:
self.assert_clickable(controls.try_again)
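# The max_attempts == 1 case returned earlier, so this branch effectively covers
# max_attempts == 2: all attempts are used up after the second run.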
if 1 <= max_attempts <= 2:
self.assert_message_text(step_builder, "Note: you have used all attempts. Continue to the next unit.")
else:
self.assert_message_text(step_builder, "Assessment additional feedback message text")
if extended_feedback:
self.extended_feedback_checks(step_builder, controls, expected_results)
def test_review_tips(self):
params = {
"max_attempts": 3,
"extended_feedback": False,
"include_review_tips": True
}
step_builder, controls = self.load_assessment_scenario("step_builder.xml", params)
# Get one question wrong and one partially wrong on attempt 1 of 3: ####################
self.freeform_answer(None, step_builder, controls, 'This is the answer', CORRECT)
self.single_choice_question(None, step_builder, controls, 'Maybe not', INCORRECT)
self.rating_question(None, step_builder, controls, "5 - Extremely good", CORRECT)
self.multiple_response_question(None, step_builder, controls, ("Its beauty",), PARTIAL, last=True)
# The review tips for the wrongly-answered MCQ and the partially-correct MRQ should be shown:
review_tips = step_builder.find_element_by_css_selector('.assessment-review-tips')
self.assertTrue(review_tips.is_displayed())
self.assertIn('You might consider reviewing the following items', review_tips.text)
self.assertIn('Take another look at', review_tips.text)
self.assertIn('Lesson 1', review_tips.text)
self.assertNotIn('Lesson 2', review_tips.text) # This MCQ was correct
self.assertIn('Lesson 3', review_tips.text)
# The on-assessment-review message is also shown if attempts remain:
self.assert_message_text(step_builder, "Assessment additional feedback message text")
# Try again
self.assert_clickable(controls.try_again)
controls.try_again.click()
# Get no questions wrong on attempt 2 of 3: ############################################
self.freeform_answer(
None, step_builder, controls, 'This is the answer', CORRECT, saved_value='This is the answer'
)
self.single_choice_question(None, step_builder, controls, 'Yes', CORRECT)
self.rating_question(None, step_builder, controls, "5 - Extremely good", CORRECT)
user_selection = ("Its elegance", "Its beauty", "Its gracefulness")
self.multiple_response_question(None, step_builder, controls, user_selection, CORRECT, last=True)
self.assert_message_text(step_builder, "Assessment additional feedback message text")
self.assertFalse(review_tips.is_displayed())
# Try again
self.assert_clickable(controls.try_again)
controls.try_again.click()
# Get some questions wrong again on attempt 3 of 3:
self.freeform_answer(
None, step_builder, controls, 'This is the answer', CORRECT, saved_value='This is the answer'
)
self.single_choice_question(None, step_builder, controls, 'Maybe not', INCORRECT)
self.rating_question(None, step_builder, controls, "1 - Not good at all", INCORRECT)
self.multiple_response_question(None, step_builder, controls, ("Its beauty",), PARTIAL, last=True)
# The review tips will not be shown because no attempts remain:
self.assertFalse(review_tips.is_displayed())
......@@ -38,8 +38,8 @@ class TitleTest(SeleniumXBlockTest):
@ddt.data(
('<problem-builder show_title="false"><pb-answer name="a"/></problem-builder>', None),
('<problem-builder><pb-answer name="a"/></problem-builder>', "Mentoring Questions"),
('<problem-builder mode="assessment"><pb-answer name="a"/></problem-builder>', "Mentoring Questions"),
('<problem-builder><pb-answer name="a"/></problem-builder>', "Problem Builder"),
('<problem-builder mode="assessment"><pb-answer name="a"/></problem-builder>', "Problem Builder"),
('<problem-builder display_name="A Question"><pb-answer name="a"/></problem-builder>', "A Question"),
('<problem-builder display_name="A Question" show_title="false"><pb-answer name="a"/></problem-builder>', None),
)
......
<step-builder url_name="step-builder" display_name="Step Builder"
max_attempts="{{max_attempts}}" extended_feedback="{{extended_feedback}}">
<sb-step display_name="First step">
<pb-answer name="goal" question="What is your goal?" />
</sb-step>
<sb-step display_name="Second step">
<pb-mcq name="mcq_1_1" question="Do you like this MCQ?" correct_choices='["yes"]'>
<pb-choice value="yes">Yes</pb-choice>
<pb-choice value="maybenot">Maybe not</pb-choice>
<pb-choice value="understand">I don't understand</pb-choice>
<pb-tip values='["yes"]'>Great!</pb-tip>
<pb-tip values='["maybenot"]'>Ah, damn.</pb-tip>
<pb-tip values='["understand"]'><div id="test-custom-html">Really?</div></pb-tip>
{% if include_review_tips %}
<pb-message type="on-assessment-review-question">
<html>Take another look at <a href="#">Lesson 1</a></html>
</pb-message>
{% endif %}
</pb-mcq>
</sb-step>
<sb-step display_name="Third step">
<pb-rating name="mcq_1_2" low="Not good at all" high="Extremely good" question="How much do you rate this MCQ?" correct_choices='["4","5"]'>
<pb-choice value="notwant">I don't want to rate it</pb-choice>
<pb-tip values='["4","5"]'>I love good grades.</pb-tip>
<pb-tip values='["1","2", "3"]'>Will do better next time...</pb-tip>
<pb-tip values='["notwant"]'>Your loss!</pb-tip>
{% if include_review_tips %}
<pb-message type="on-assessment-review-question">
<html>Take another look at <a href="#">Lesson 2</a></html>
</pb-message>
{% endif %}
</pb-rating>
</sb-step>
<sb-step display_name="Last step">
<pb-mrq name="mrq_1_1" question="What do you like in this MRQ?" required_choices='["gracefulness","elegance","beauty"]' message="Question Feedback Message">
<pb-choice value="elegance">Its elegance</pb-choice>
<pb-choice value="beauty">Its beauty</pb-choice>
<pb-choice value="gracefulness">Its gracefulness</pb-choice>
<pb-choice value="bugs">Its bugs</pb-choice>
<pb-tip values='["gracefulness"]'>This MRQ is indeed very graceful</pb-tip>
<pb-tip values='["elegance","beauty"]'>This is something everyone has to like about this MRQ</pb-tip>
<pb-tip values='["bugs"]'>Nah, there aren't any!</pb-tip>
{% if include_review_tips %}
<pb-message type="on-assessment-review-question">
<html>Take another look at <a href="#">Lesson 3</a></html>
</pb-message>
{% endif %}
</pb-mrq>
</sb-step>
<sb-review-step></sb-review-step>
<pb-message type="on-assessment-review">
<html>Assessment additional feedback message text</html>
</pb-message>
</step-builder>
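The scenario above is a Django template: `{{max_attempts}}` and `{{extended_feedback}}` are filled from the params dicts in the tests, and `{% if include_review_tips %}` toggles the per-question tips. A hedged sketch of the rendering step, assuming xblockutils' `ResourceLoader.render_template`; the template path is an assumption:

# Hedged sketch; the path and surrounding plumbing are assumptions.
from xblockutils.resources import ResourceLoader

loader = ResourceLoader(__name__)

def render_scenario(params):
    # e.g. params = {"max_attempts": 2, "extended_feedback": True}
    return loader.render_template('xml_templates/step_builder.xml', params)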
......@@ -164,8 +164,7 @@ class TestMentoringBlockJumpToIds(unittest.TestCase):
self.mcq_block = MCQBlock(self.runtime_mock, DictFieldData({'name': 'test_mcq'}), Mock())
self.mcq_block.get_review_tip = Mock()
self.mcq_block.get_review_tip.return_value = self.message_block.content
self.block.steps = []
self.block.get_steps = Mock()
self.block.get_steps.return_value = [self.mcq_block]
self.block.step_ids = []
self.block.steps = [self.mcq_block]
self.block.student_results = {'test_mcq': {'status': 'incorrect'}}
self.assertEqual(self.block.review_tips, ['replaced-url'])
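The mocks above pin down the contract: `review_tips` collects `get_review_tip()` output from questions whose recorded status is not correct. A hedged sketch consistent with that test; the jump_to_id URL rewriting implied by the test class name is omitted, and attribute names follow the mocks rather than the real block:

# Hedged sketch consistent with the mocks in the test above.
@property
def review_tips(self):
    tips = []
    for question in self.steps:
        result = self.student_results.get(question.name, {})
        if result.get('status') != 'correct':
            tip = question.get_review_tip()
            if tip:
                tips.append(tip)
    return tips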
......@@ -47,7 +47,7 @@ class TestQuestionMixin(unittest.TestCase):
step = Step()
block._children = [step]
steps = [block.runtime.get_block(cid) for cid in block.steps]
steps = [block.runtime.get_block(cid) for cid in block.step_ids]
self.assertSequenceEqual(steps, [step])
def test_only_steps_are_returned(self):
......@@ -56,7 +56,7 @@ class TestQuestionMixin(unittest.TestCase):
step2 = Step()
block._set_children_for_test(step1, 1, "2", "Step", NotAStep(), False, step2, NotAStep())
steps = [block.runtime.get_block(cid) for cid in block.steps]
steps = [block.runtime.get_block(cid) for cid in block.step_ids]
self.assertSequenceEqual(steps, [step1, step2])
def test_proper_number_is_returned_for_step(self):
......
......@@ -41,8 +41,9 @@ def package_data(pkg, root_list):
BLOCKS = [
'problem-builder = problem_builder:MentoringBlock',
'pb-mentoring = problem_builder:MentoringWithExplicitStepsBlock',
'pb-mentoring-step = problem_builder:MentoringStepBlock',
'step-builder = problem_builder:MentoringWithExplicitStepsBlock',
'sb-step = problem_builder:MentoringStepBlock',
'sb-review-step = problem_builder:ReviewStepBlock',
'pb-table = problem_builder:MentoringTableBlock',
'pb-column = problem_builder:MentoringTableColumn',
......
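Entry points like the ones above are what let the LMS and Studio discover the blocks. A hedged sketch of how a `BLOCKS` list is conventionally passed to `setup()`; the `xblock.v1` group name is the standard XBlock entry-point group, while the package metadata values below are illustrative placeholders, not taken from this commit:

# Hedged sketch; metadata values are illustrative placeholders.
from setuptools import setup

setup(
    name='xblock-problem-builder',
    version='0.0',  # placeholder
    packages=['problem_builder'],
    entry_points={
        'xblock.v1': BLOCKS,  # the list shown above
    },
)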