Commit 1cfff02f by J. Cliff Dyer

Add tests for scoring of diverse problem types.

Includes:
* CAPA
* ORA
* LTI
* LTI-Consumer
* SGA
* Randomized Content Block

TNL-5692
TNL-5464
parent f5647878
<problem url_name="capa-optionresponse">
<optionresponse>
<optioninput options="('Correct', 'Incorrect')" correct="Correct"></optioninput>
<optioninput options="('Correct', 'Incorrect')" correct="Correct"></optioninput>
</optionresponse>
</problem>
<problem display_name="Exercise: apply to each 3" markdown="null" weight="5.0">
<text>
<p>
<b>ESTIMATED TIME TO COMPLETE: 4 minutes</b>
</p>
<pre>
&gt;&gt;&gt; print testList
[1, 16, 64, 81]
</pre>
</text>
<coderesponse queuename="Watcher-MITx-6.00x">
<textbox rows="10" cols="80" mode="python" tabsize="4"/>
<codeparam>
<initial_display>
# Your Code Here
</initial_display>
<answer_display>
def square(a):
    return a * a

applyToEach(testList, square)
</answer_display>
<grader_payload>{"grader": "finger_exercises/L6/applyToEach3/grade_ate3.py"}</grader_payload>
</codeparam>
</coderesponse>
</problem>
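The displayed answer calls an `applyToEach` helper defined elsewhere in the 6.00x course materials. As a minimal sketch of what the fixture assumes: `applyToEach` mutates the list in place, and the `testList` value below is inferred from the `[1, 16, 64, 81]` output shown in the problem text, not stated in this fixture.

```python
def applyToEach(L, f):
    """Mutate L in place, replacing each element with f(element)."""
    for i in range(len(L)):
        L[i] = f(L[i])

def square(a):
    return a * a

# Inferred input: squaring [1, -4, 8, -9] yields the printed [1, 16, 64, 81].
testList = [1, -4, 8, -9]
applyToEach(testList, square)
print(testList)  # -> [1, 16, 64, 81]
```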
<library_content display_name="Final Exam" has_score="true" max_count="25" source_library_id="library-v1:MSX+msx_cld213xfinalexam" source_library_version="577b5aca45064f068278faa0">
<problem/>
<problem/>
</library_content>
<lti launch_url="http://www.imsglobal.org/developers/LTI/test/v1p1/tool.php" lti_id="ims"/>
<openassessment url_name="0e2bbf6cc89e45d98b028fa4e2d46314" allow_file_upload="False">
<title></title>
<assessments>
<assessment name="peer-assessment" must_grade="1" must_be_graded_by="1"/>
<assessment name="self-assessment"/>
</assessments>
<rubric>
<prompt>
Censorship in the Libraries
'All of us can think of a book that we hope none of our children or any
other children have taken off the shelf. But if I have the right to remove
that book from the shelf -- that work I abhor -- then you also have exactly
the same right and so does everyone else. And then we have no books left on
the shelf for any of us.' --Katherine Paterson, Author
Write a persuasive essay to a newspaper reflecting your views on censorship
in libraries. Do you believe that certain materials, such as books, music,
movies, magazines, etc., should be removed from the shelves if they are
found offensive? Support your position with convincing arguments from your
own experience, observations, and/or reading.
Read for conciseness, clarity of thought, and form.
</prompt>
<criterion>
<name>Ideas</name>
<prompt>Determine if there is a unifying theme or main idea.</prompt>
<option points="0">
<name>Poor</name>
<explanation>
Difficult for the reader to discern the main idea.
Too brief or too repetitive to establish or maintain a focus.
</explanation>
</option>
<option points="3">
<name>Fair</name>
<explanation>
Presents a unifying theme or main idea, but may
include minor tangents. Stays somewhat focused on topic and
task.
</explanation>
</option>
<option points="5">
<name>Good</name>
<explanation>
Presents a unifying theme or main idea without going
off on tangents. Stays completely focused on topic and task.
</explanation>
</option>
</criterion>
<criterion>
<name>Content</name>
<prompt>Assess the content of the submission</prompt>
<option points="0">
<name>Poor</name>
<explanation>
Includes little information with few or no details or
unrelated details. Unsuccessful in attempts to explore any
facets of the topic.
</explanation>
</option>
<option points="1">
<name>Fair</name>
<explanation>
Includes little information and few or no details.
Explores only one or two facets of the topic.
</explanation>
</option>
<option points="3">
<name>Good</name>
<explanation>
Includes sufficient information and supporting
details. (Details may not be fully developed; ideas may be
listed.) Explores some facets of the topic.
</explanation>
</option>
<option points="3">
<name>Excellent</name>
<explanation>
Includes in-depth information and exceptional
supporting details that are fully developed. Explores all
facets of the topic.
</explanation>
</option>
</criterion>
</rubric>
</openassessment>
This course fixture provides a representative sample of scoreable block types.
## Contents of the Course
It contains several scoreable blocks in one sequence:
- CAPA
- ORA
- SGA (Staff Graded Assignment)
- LTI
- LTI Consumer
- Randomized Content Block (containing Library Content with CAPA problems)
- Drag and Drop (v2)
## Adding Blocks
To expand coverage to other block types, you can either edit the course XML
directly, or do the following:

1. Zip up the scoreable directory into a tarball:

   $ tar cvzf course.tar.gz common/test/data/scoreable

2. Import the tarball into Studio.
3. Add the new blocks.
4. Export the modified course.
5. Unzip the exported tarball to a temporary directory:

   $ cd /tmp
   $ tar xvzf ~/Downloads/course.*.tar.gz

6. Copy the data back into the test directory:

   $ rsync -avz --delete-after /tmp/course/ /path/to/edx-platform/common/test/data/scoreable
## Use in Tests
As of this writing, this course is used in
`lms/djangoapps/grades/tests/test_new.py`. If you modify the course, you may
need to adjust the values for `SCORED_BLOCK_COUNT` and `ACTUAL_TOTAL_POSSIBLE`
in `TestMultipleProblemBlockTypes` to reflect the real number of scoreable
blocks.
<section class="about">
<h2>About This Course</h2>
<p>Include your long course description here. The long course description should contain 150-400 words.</p>
<p>This is paragraph 2 of the long course description. Add more paragraphs as needed. Make sure to enclose them in paragraph tags.</p>
</section>
<section class="prerequisites">
<h2>Requirements</h2>
<p>Add information about the skills and knowledge students need to take this course.</p>
</section>
<section class="course-staff">
<h2>Course Staff</h2>
<article class="teacher">
<div class="teacher-image">
<img src="/static/images/placeholder-faculty.png" align="left" style="margin:0 20px 0" alt="Course Staff Image #1">
</div>
<h3>Staff Member #1</h3>
<p>Biography of instructor/staff member #1</p>
</article>
<article class="teacher">
<div class="teacher-image">
<img src="/static/images/placeholder-faculty.png" align="left" style="margin:0 20px 0" alt="Course Staff Image #2">
</div>
<h3>Staff Member #2</h3>
<p>Biography of instructor/staff member #2</p>
</article>
</section>
<section class="faq">
<section class="responses">
<h2>Frequently Asked Questions</h2>
<article class="response">
<h3>What web browser should I use?</h3>
<p>The Open edX platform works best with current versions of Chrome, Firefox or Safari, or with Internet Explorer version 9 and above.</p>
<p>See our <a href="http://edx.readthedocs.org/projects/open-edx-learner-guide/en/latest/front_matter/browsers.html">list of supported browsers</a> for the most up-to-date information.</p>
</article>
<article class="response">
<h3>Question #2</h3>
<p>Your answer would be displayed here.</p>
</article>
</section>
</section>
<chapter display_name="Simple Problems">
<sequential url_name="sequential1"/>
</chapter>
<course url_name="course" org="edx" course="jcd101"/>
<course advanced_modules="[&quot;lti&quot;, &quot;lti_consumer&quot;, &quot;library_content&quot;, &quot;split_test&quot;, &quot;conditional&quot;, &quot;randomize&quot;, &quot;drag-and-drop-v2&quot;, &quot;library&quot;, &quot;videosequence&quot;, &quot;problemset&quot;, &quot;acid_parent&quot;, &quot;done&quot;, &quot;wrapper&quot;, &quot;edx_sga&quot;, &quot;bookmarks&quot;]" cert_html_view_enabled="true" display_name="Introduction to problems." graceperiod="" language="en" minimum_grade_credit="0.8" start="&quot;2030-01-01T00:00:00+00:00&quot;">
<chapter url_name="chapter1"/>
<wiki slug="edX.jcd101.2016-10-11"/>
</course>
<library_content source_library_id="library-v1:edX+problib" source_library_version="5800fcff56c02c667238ed09">
<problem url_name="library_text"/>
<problem url_name="library_mc"/>
<lti url_name="library_lti"/>
</library_content>
<lti button_text="Library content Launch third-party stuff" has_score="true" weight="1.0"/>
<lti button_text="Launch third-party stuff" has_score="true" weight="2.0"/>
{
"GRADER": [
{
"drop_count": 0,
"min_count": 1,
"short_label": "Prob",
"type": "Problems",
"weight": 1.0
}
],
"GRADE_CUTOFFS": {
"Pass": 0.5
}
}
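For context, a grading policy like the one above combines each grader bucket's average score, scaled by its weight, and then maps the total against `GRADE_CUTOFFS`. This is a simplified sketch of that logic, not the actual edx-platform grader; the `course_grade` and `letter` helpers are illustrative names.

```python
# Hypothetical sketch (not the edx-platform implementation) of how a
# grading policy of this shape is applied.
GRADING_POLICY = {
    "GRADER": [
        {"drop_count": 0, "min_count": 1, "short_label": "Prob",
         "type": "Problems", "weight": 1.0},
    ],
    "GRADE_CUTOFFS": {"Pass": 0.5},
}

def course_grade(scores_by_type):
    """scores_by_type maps assignment type -> list of fractional scores."""
    total = 0.0
    for bucket in GRADING_POLICY["GRADER"]:
        scores = sorted(scores_by_type.get(bucket["type"], []))
        scores = scores[bucket["drop_count"]:]          # drop the lowest N
        # Pad with zeros up to min_count before averaging.
        scores += [0.0] * max(0, bucket["min_count"] - len(scores))
        total += bucket["weight"] * sum(scores) / len(scores)
    return total

def letter(total):
    """Return the highest cutoff name the total meets, or None."""
    for name, cutoff in sorted(GRADING_POLICY["GRADE_CUTOFFS"].items(),
                               key=lambda kv: -kv[1]):
        if total >= cutoff:
            return name
    return None
```

For example, two problems scored at 60% and 40% average to 0.5, which meets the `Pass` cutoff.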
{
"course/course": {
"advanced_modules": [
"lti",
"lti_consumer",
"library_content",
"split_test",
"conditional",
"randomize",
"drag-and-drop-v2",
"library",
"videosequence",
"problemset",
"acid_parent",
"done",
"wrapper",
"edx_sga",
"bookmarks"
],
"cert_html_view_enabled": true,
"discussion_topics": {
"General": {
"id": "course"
}
},
"display_name": "Introduction to problems.",
"graceperiod": "",
"language": "en",
"minimum_grade_credit": 0.8,
"start": "2030-01-01T00:00:00Z",
"tabs": [
{
"name": "Home",
"type": "course_info"
},
{
"name": "Course",
"type": "courseware"
},
{
"name": "Textbooks",
"type": "textbooks"
},
{
"name": "Discussion",
"type": "discussion"
},
{
"name": "Wiki",
"type": "wiki"
},
{
"name": "Progress",
"type": "progress"
}
],
"xml_attributes": {
"filename": [
"course/course.xml",
"course/course.xml"
]
}
}
}
<problem display_name="Multiple Choice" markdown="This is the first problem.&#10;&#10;&gt;&gt; What kind of swallow?&#10;&#10;(x) African&#10;( ) European&#10;( ) I don't know">
<multiplechoiceresponse>
<p>This is the first problem.</p>
<p>&gt;&gt; What kind of swallow?</p>
<choicegroup type="MultipleChoice">
<choice correct="true">African</choice>
<choice correct="false">European</choice>
<choice correct="false">I don't know</choice>
</choicegroup>
</multiplechoiceresponse>
</problem>
<problem>
<multiplechoiceresponse>
<p>This is a multiple choice version of the question.</p>
<label>What company do you work for?</label>
<description>Hint: It's not Microsoft. </description>
<choicegroup type="MultipleChoice">
<choice correct="false">Enron</choice>
<choice correct="false">Microsoft</choice>
<choice correct="true">edX</choice>
<choice correct="false">J. C. Penney</choice>
</choicegroup>
</multiplechoiceresponse>
</problem>
<problem>
<stringresponse answer="edX" type="ci">
<p>This is the first problem</p>
<label>What company do you work for?</label>
<description>Hint: It's not Microsoft. </description>
<additional_answer answer="edX.org"/>
<textline size="20"/>
</stringresponse>
</problem>
<sequential display_name="All types">
<vertical url_name="vertical1_capa"/>
<vertical url_name="vertical2_ora"/>
<vertical url_name="vertical3_sga"/>
<vertical url_name="vertical4_lti"/>
<vertical url_name="vertical5_lti_consumer"/>
<vertical url_name="vertical6_library_content"/>
<vertical url_name="vertical7_dndv2"/>
</sequential>
<vertical display_name="CAPA">
<problem url_name="capa"/>
</vertical>
<vertical display_name="ORA">
<openassessment url_name="4e34baa40ab944f2912e3c4037964fcd" submission_start="2001-01-01T00:00:00" submission_due="2029-01-01T00:00:00" allow_latex="False">
<title>Open Response Assessment</title>
<assessments>
<assessment name="student-training">
<example>
<answer>
<part>Replace this text with your own sample response for this assignment. Then, under Response Score to the right, select an option for each criterion. Learners practice performing peer assessments by assessing this response and comparing the options that they select in the rubric with the options that you specified.</part>
</answer>
<select criterion="Ideas" option="Fair"/>
<select criterion="Content" option="Good"/>
</example>
<example>
<answer>
<part>Replace this text with another sample response, and then specify the options that you would select for this response.</part>
</answer>
<select criterion="Ideas" option="Poor"/>
<select criterion="Content" option="Good"/>
</example>
</assessment>
<assessment name="peer-assessment" must_grade="5" must_be_graded_by="3" start="2001-01-01T00:00:00" due="2029-01-01T00:00:00"/>
<assessment name="self-assessment" start="2001-01-01T00:00:00" due="2029-01-01T00:00:00"/>
<assessment name="staff-assessment" start="2001-01-01T00:00:00" due="2029-01-01T00:00:00" required="False"/>
</assessments>
<prompts>
<prompt>
<description>
Censorship in the Libraries
'All of us can think of a book that we hope none of our children or any other children have taken off the shelf. But if I have the right to remove that book from the shelf -- that work I abhor -- then you also have exactly the same right and so does everyone else. And then we have no books left on the shelf for any of us.' --Katherine Paterson, Author
Write a persuasive essay to a newspaper reflecting your views on censorship in libraries. Do you believe that certain materials, such as books, music, movies, magazines, etc., should be removed from the shelves if they are found offensive? Support your position with convincing arguments from your own experience, observations, and/or reading.
Read for conciseness, clarity of thought, and form.
</description>
</prompt>
</prompts>
<rubric>
<criterion feedback="optional">
<name>Ideas</name>
<label>Ideas</label>
<prompt>Determine if there is a unifying theme or main idea.</prompt>
<option points="0">
<name>Poor</name>
<label>Poor</label>
<explanation>Difficult for the reader to discern the main idea. Too brief or too repetitive to establish or maintain a focus.</explanation>
</option>
<option points="3">
<name>Fair</name>
<label>Fair</label>
<explanation>Presents a unifying theme or main idea, but may include minor tangents. Stays somewhat focused on topic and task.</explanation>
</option>
<option points="5">
<name>Good</name>
<label>Good</label>
<explanation>Presents a unifying theme or main idea without going off on tangents. Stays completely focused on topic and task.</explanation>
</option>
</criterion>
<criterion>
<name>Content</name>
<label>Content</label>
<prompt>Assess the content of the submission</prompt>
<option points="0">
<name>Poor</name>
<label>Poor</label>
<explanation>Includes little information with few or no details or unrelated details. Unsuccessful in attempts to explore any facets of the topic.</explanation>
</option>
<option points="1">
<name>Fair</name>
<label>Fair</label>
<explanation>Includes little information and few or no details. Explores only one or two facets of the topic.</explanation>
</option>
<option points="3">
<name>Good</name>
<label>Good</label>
<explanation>Includes sufficient information and supporting details. (Details may not be fully developed; ideas may be listed.) Explores some facets of the topic.</explanation>
</option>
<option points="3">
<name>Excellent</name>
<label>Excellent</label>
<explanation>Includes in-depth information and exceptional supporting details that are fully developed. Explores all facets of the topic.</explanation>
</option>
</criterion>
<feedbackprompt>
(Optional) What aspects of this response stood out to you? What did it do well? How could it be improved?
</feedbackprompt>
<feedback_default_text>
I think that this response...
</feedback_default_text>
</rubric>
</openassessment>
</vertical>
<vertical display_name="Staff Graded (edx-sga)">
<edx_sga url_name="69615790ad9946bf885fe6be70beb076" xblock-family="xblock.v1" weight="2.0"/>
</vertical>
<vertical display_name="LTI">
<lti url_name="lti"/>
</vertical>
<vertical display_name="LTI Consumer">
<lti_consumer url_name="04fe04404d284ffd8dc754a6b70dc8f1" xblock-family="xblock.v1" weight="2.0" has_score="true" button_text="Launch" lti_id="lti" launch_url="http://example.com"/>
</vertical>
<vertical display_name="Library Content">
<library_content url_name="library_content"/>
</vertical>
<vertical display_name="Drag and Drop">
<drag-and-drop-v2 url_name="0db060e869744e92a06df0ae196884f9" xblock-family="xblock.v1" weight="1.0" show_title="true" item_text_color="" show_question_header="true" display_name="Dragon Drop" max_items_per_zone="null" max_attempts="3" question_text="" item_background_color="" data="{&#10; &quot;feedback&quot;: {&#10; &quot;finish&quot;: &quot;Good work! You have completed this drag and drop problem.&quot;,&#10; &quot;start&quot;: &quot;Drag the items onto the image above.&quot;&#10; },&#10; &quot;items&quot;: [&#10; {&#10; &quot;displayName&quot;: &quot;Goes to the top&quot;,&#10; &quot;feedback&quot;: {&#10; &quot;correct&quot;: &quot;Correct! This one belongs to The Top Zone.&quot;,&#10; &quot;incorrect&quot;: &quot;No, this item does not belong here. Try again.&quot;&#10; },&#10; &quot;id&quot;: 0,&#10; &quot;imageDescription&quot;: &quot;&quot;,&#10; &quot;imageURL&quot;: &quot;&quot;,&#10; &quot;zones&quot;: [&#10; &quot;top&quot;&#10; ]&#10; },&#10; {&#10; &quot;displayName&quot;: &quot;Goes to the middle&quot;,&#10; &quot;feedback&quot;: {&#10; &quot;correct&quot;: &quot;Correct! This one belongs to The Middle Zone.&quot;,&#10; &quot;incorrect&quot;: &quot;No, this item does not belong here. Try again.&quot;&#10; },&#10; &quot;id&quot;: 1,&#10; &quot;imageDescription&quot;: &quot;&quot;,&#10; &quot;imageURL&quot;: &quot;&quot;,&#10; &quot;zones&quot;: [&#10; &quot;middle&quot;&#10; ]&#10; },&#10; {&#10; &quot;displayName&quot;: &quot;Goes to the bottom&quot;,&#10; &quot;feedback&quot;: {&#10; &quot;correct&quot;: &quot;Correct! This one belongs to The Bottom Zone.&quot;,&#10; &quot;incorrect&quot;: &quot;No, this item does not belong here. 
Try again.&quot;&#10; },&#10; &quot;id&quot;: 2,&#10; &quot;imageDescription&quot;: &quot;&quot;,&#10; &quot;imageURL&quot;: &quot;&quot;,&#10; &quot;zones&quot;: [&#10; &quot;bottom&quot;&#10; ]&#10; },&#10; {&#10; &quot;displayName&quot;: &quot;Goes anywhere&quot;,&#10; &quot;feedback&quot;: {&#10; &quot;correct&quot;: &quot;Of course it goes here! It goes anywhere!&quot;,&#10; &quot;incorrect&quot;: &quot;&quot;&#10; },&#10; &quot;id&quot;: 3,&#10; &quot;imageDescription&quot;: &quot;&quot;,&#10; &quot;imageURL&quot;: &quot;&quot;,&#10; &quot;zones&quot;: [&#10; &quot;top&quot;,&#10; &quot;middle&quot;,&#10; &quot;bottom&quot;&#10; ]&#10; },&#10; {&#10; &quot;displayName&quot;: &quot;I don't belong anywhere&quot;,&#10; &quot;feedback&quot;: {&#10; &quot;correct&quot;: &quot;&quot;,&#10; &quot;incorrect&quot;: &quot;You silly, there are no zones for this one.&quot;&#10; },&#10; &quot;id&quot;: 4,&#10; &quot;imageDescription&quot;: &quot;&quot;,&#10; &quot;imageURL&quot;: &quot;&quot;,&#10; &quot;zones&quot;: []&#10; }&#10; ],&#10; &quot;targetImgDescription&quot;: &quot;An isosceles triangle with three layers of similar height. 
It is shown upright, so the widest layer is located at the bottom, and the narrowest layer is located at the top.&quot;,&#10; &quot;zones&quot;: [&#10; {&#10; &quot;align&quot;: &quot;center&quot;,&#10; &quot;description&quot;: &quot;Use this zone to associate an item with the top layer of the triangle.&quot;,&#10; &quot;height&quot;: 178,&#10; &quot;title&quot;: &quot;The Top Zone&quot;,&#10; &quot;uid&quot;: &quot;top&quot;,&#10; &quot;width&quot;: 196,&#10; &quot;x&quot;: 160,&#10; &quot;y&quot;: 30&#10; },&#10; {&#10; &quot;align&quot;: &quot;center&quot;,&#10; &quot;description&quot;: &quot;Use this zone to associate an item with the middle layer of the triangle.&quot;,&#10; &quot;height&quot;: 138,&#10; &quot;title&quot;: &quot;The Middle Zone&quot;,&#10; &quot;uid&quot;: &quot;middle&quot;,&#10; &quot;width&quot;: 340,&#10; &quot;x&quot;: 86,&#10; &quot;y&quot;: 210&#10; },&#10; {&#10; &quot;align&quot;: &quot;center&quot;,&#10; &quot;description&quot;: &quot;Use this zone to associate an item with the bottom layer of the triangle.&quot;,&#10; &quot;height&quot;: 135,&#10; &quot;title&quot;: &quot;The Bottom Zone&quot;,&#10; &quot;uid&quot;: &quot;bottom&quot;,&#10; &quot;width&quot;: 485,&#10; &quot;x&quot;: 15,&#10; &quot;y&quot;: 350&#10; }&#10; ]&#10;}" mode="assessment"/>
</vertical>
......@@ -8,6 +8,8 @@ import datetime
import ddt
from django.conf import settings
from django.db.utils import DatabaseError
import itertools
from mock import patch
import pytz
......@@ -16,16 +18,17 @@ from courseware.tests.helpers import get_request_for_user
from courseware.tests.test_submitting_problems import ProblemSubmissionTestMixin
from lms.djangoapps.course_blocks.api import get_course_blocks
from lms.djangoapps.grades.config.tests.utils import persistent_grades_feature_flags
from openedx.core.lib.xblock_utils.test_utils import add_xml_block_from_file
from student.models import CourseEnrollment
from student.tests.factories import UserFactory
from xmodule.modulestore.tests.django_utils import ModuleStoreTestCase, SharedModuleStoreTestCase
from xmodule.modulestore.tests.factories import CourseFactory, ItemFactory
from xmodule.modulestore.tests.utils import TEST_DATA_DIR
from xmodule.modulestore.xml_importer import import_course_from_xml
from ..models import PersistentSubsectionGrade
from ..new.course_grade import CourseGradeFactory
from ..new.subsection_grade import SubsectionGrade, SubsectionGradeFactory
from .utils import mock_get_score
from .utils import mock_get_score, mock_get_submissions_score
class GradeTestBase(SharedModuleStoreTestCase):
......@@ -239,20 +242,20 @@ class SubsectionGradeTest(GradeTestBase):
@ddt.ddt
class TestMultipleProblemTypesSubsectionScores(ModuleStoreTestCase, ProblemSubmissionTestMixin):
class TestMultipleProblemTypesSubsectionScores(SharedModuleStoreTestCase):
"""
Test grading of different problem types.
"""
default_problem_metadata = {
u'graded': True,
u'weight': 2.5,
u'max_score': 7.0,
u'due': datetime.datetime(2099, 3, 15, 12, 30, 0, tzinfo=pytz.utc),
}
SCORED_BLOCK_COUNT = 7
ACTUAL_TOTAL_POSSIBLE = 16.0
COURSE_NAME = u'Problem Type Test Course'
COURSE_NUM = u'probtype'
@classmethod
def setUpClass(cls):
super(TestMultipleProblemTypesSubsectionScores, cls).setUpClass()
cls.load_scoreable_course()
chapter1 = cls.course.get_children()[0]
cls.seq1 = chapter1.get_children()[0]
def setUp(self):
super(TestMultipleProblemTypesSubsectionScores, self).setUp()
......@@ -260,39 +263,104 @@ class TestMultipleProblemTypesSubsectionScores(ModuleStoreTestCase, ProblemSubmi
self.student = UserFactory.create(is_staff=False, username=u'test_student', password=password)
self.client.login(username=self.student.username, password=password)
self.request = get_request_for_user(self.student)
self.course = CourseFactory.create(
display_name=self.COURSE_NAME,
number=self.COURSE_NUM
self.course_structure = get_course_blocks(self.student, self.course.location)
@classmethod
def load_scoreable_course(cls):
"""
This test course lives at `common/test/data/scoreable`.
For details on the contents and structure of the course, see
`common/test/data/scoreable/README`.
"""
course_items = import_course_from_xml(
cls.store,
'test_user',
TEST_DATA_DIR,
source_dirs=['scoreable'],
static_content_store=None,
target_id=cls.store.make_course_key('edX', 'scoreable', '3000'),
raise_on_failure=True,
create_if_not_present=True,
)
cls.course = course_items[0]
def test_score_submission_for_all_problems(self):
subsection_factory = SubsectionGradeFactory(
self.student,
course_structure=self.course_structure,
course=self.course,
)
score = subsection_factory.create(self.seq1)
self.assertEqual(score.all_total.earned, 0.0)
self.assertEqual(score.all_total.possible, self.ACTUAL_TOTAL_POSSIBLE)
# Choose arbitrary, non-default values for earned and possible.
earned_per_block = 3.0
possible_per_block = 7.0
with mock_get_submissions_score(earned_per_block, possible_per_block) as mock_score:
# Configure one block to return no possible score, the rest to return 3.0 earned / 7.0 possible
block_count = self.SCORED_BLOCK_COUNT - 1
mock_score.side_effect = itertools.chain(
[(earned_per_block, None, earned_per_block, None)],
itertools.repeat(mock_score.return_value)
)
score = subsection_factory.update(self.seq1)
self.assertEqual(score.all_total.earned, earned_per_block * block_count)
self.assertEqual(score.all_total.possible, possible_per_block * block_count)
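The `side_effect` chaining in that test — one special value for the first call, then the default forever — can be seen in isolation with a plain mock. The four-tuple values here are stand-ins mirroring the test's shape, not real scores-module return values.

```python
import itertools
from unittest.mock import MagicMock

mock_score = MagicMock(return_value=(3.0, 7.0, 3.0, 7.0))
mock_score.side_effect = itertools.chain(
    [(3.0, None, 3.0, None)],                  # first block: no possible score
    itertools.repeat(mock_score.return_value)  # every later block: the default
)

print(mock_score())  # -> (3.0, None, 3.0, None)
print(mock_score())  # -> (3.0, 7.0, 3.0, 7.0)
```

When `side_effect` is an iterable, each call to the mock returns the next item, which is what lets the test give exactly one block a `None` possible score.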
@ddt.ddt
class TestVariedMetadata(ProblemSubmissionTestMixin, ModuleStoreTestCase):
"""
Test that changing the metadata on a block has the desired effect on the
persisted score.
"""
default_problem_metadata = {
u'graded': True,
u'weight': 2.5,
u'due': datetime.datetime(2099, 3, 15, 12, 30, 0, tzinfo=pytz.utc),
}
def setUp(self):
super(TestVariedMetadata, self).setUp()
self.course = CourseFactory.create()
self.chapter = ItemFactory.create(
parent=self.course,
category=u'chapter',
display_name=u'Test Chapter'
category="chapter",
display_name="Test Chapter"
)
self.seq1 = ItemFactory.create(
self.sequence = ItemFactory.create(
parent=self.chapter,
category=u'sequential',
display_name=u'Test Sequential 1',
category='sequential',
display_name="Test Sequential 1",
graded=True
)
self.vert1 = ItemFactory.create(
parent=self.seq1,
category=u'vertical',
display_name=u'Test Vertical 1'
self.vertical = ItemFactory.create(
parent=self.sequence,
category='vertical',
display_name='Test Vertical 1'
)
def _get_fresh_subsection_score(self, course_structure, subsection):
"""
Return a Score object for the specified subsection.
Ensures that a stale cached value is not returned.
"""
subsection_factory = SubsectionGradeFactory(
self.student,
self.problem_xml = u'''
<problem url_name="capa-optionresponse">
<optionresponse>
<optioninput options="('Correct', 'Incorrect')" correct="Correct"></optioninput>
<optioninput options="('Correct', 'Incorrect')" correct="Correct"></optioninput>
</optionresponse>
</problem>
'''
self.request = get_request_for_user(UserFactory())
self.client.login(username=self.request.user.username, password="test")
CourseEnrollment.enroll(self.request.user, self.course.id)
course_structure = get_course_blocks(self.request.user, self.course.location)
self.subsection_factory = SubsectionGradeFactory(
self.request.user,
course_structure=course_structure,
course=self.course,
)
return subsection_factory.update(subsection)
def _get_altered_metadata(self, alterations):
"""
......@@ -303,48 +371,28 @@ class TestMultipleProblemTypesSubsectionScores(ModuleStoreTestCase, ProblemSubmi
metadata.update(alterations)
return metadata
def _get_score_with_alterations(self, alterations):
def _add_problem_with_alterations(self, alterations):
"""
Given a dict of alterations to the default_problem_metadata, return
the score when one correct problem (out of two) is submitted.
Add a problem to the course with the specified metadata alterations.
"""
metadata = self._get_altered_metadata(alterations)
add_xml_block_from_file(u'problem', u'capa.xml', parent=self.vert1, metadata=metadata)
course_structure = get_course_blocks(self.student, self.course.location)
self.submit_question_answer(u'problem', {u'2_1': u'Correct'})
return self._get_fresh_subsection_score(course_structure, self.seq1)
def test_score_submission_for_capa_problems(self):
add_xml_block_from_file(u'problem', u'capa.xml', parent=self.vert1, metadata=self.default_problem_metadata)
course_structure = get_course_blocks(self.student, self.course.location)
score = self._get_fresh_subsection_score(course_structure, self.seq1)
self.assertEqual(score.all_total.earned, 0.0)
self.assertEqual(score.all_total.possible, 2.5)
self.submit_question_answer(u'problem', {u'2_1': u'Correct'})
score = self._get_fresh_subsection_score(course_structure, self.seq1)
self.assertEqual(score.all_total.earned, 1.25)
self.assertEqual(score.all_total.possible, 2.5)
@ddt.data(
(u'openassessment', u'openassessment.xml'),
(u'coderesponse', u'coderesponse.xml'),
(u'lti', u'lti.xml'),
(u'library_content', u'library_content.xml'),
metadata = self._get_altered_metadata(alterations)
ItemFactory.create(
parent=self.vertical,
category="problem",
display_name="problem",
data=self.problem_xml,
metadata=metadata,
)
@ddt.unpack
def test_loading_different_problem_types(self, block_type, filename):
def _get_score(self):
"""
Test that transformation works for various block types
Return the score of the test problem when one correct problem (out of
two) is submitted.
"""
metadata = self.default_problem_metadata.copy()
if block_type == u'library_content':
# Library content does not have a weight
del metadata[u'weight']
add_xml_block_from_file(block_type, filename, parent=self.vert1, metadata=metadata)
self.submit_question_answer(u'problem', {u'2_1': u'Correct'})
return self.subsection_factory.create(self.sequence)
@ddt.data(
({}, 1.25, 2.5),
......@@ -355,7 +403,8 @@ class TestMultipleProblemTypesSubsectionScores(ModuleStoreTestCase, ProblemSubmi
)
@ddt.unpack
def test_weight_metadata_alterations(self, alterations, expected_earned, expected_possible):
score = self._get_score_with_alterations(alterations)
self._add_problem_with_alterations(alterations)
score = self._get_score()
self.assertEqual(score.all_total.earned, expected_earned)
self.assertEqual(score.all_total.possible, expected_possible)
......@@ -365,23 +414,11 @@ class TestMultipleProblemTypesSubsectionScores(ModuleStoreTestCase, ProblemSubmi
)
@ddt.unpack
def test_graded_metadata_alterations(self, alterations, expected_earned, expected_possible):
score = self._get_score_with_alterations(alterations)
self._add_problem_with_alterations(alterations)
score = self._get_score()
self.assertEqual(score.graded_total.earned, expected_earned)
self.assertEqual(score.graded_total.possible, expected_possible)
@ddt.data(
{u'max_score': 99.3},
{u'max_score': 1.0},
{u'max_score': 0.0},
{u'max_score': None},
)
def test_max_score_does_not_change_results(self, alterations):
expected_earned = 1.25
expected_possible = 2.5
score = self._get_score_with_alterations(alterations)
self.assertEqual(score.all_total.earned, expected_earned)
self.assertEqual(score.all_total.possible, expected_possible)
class TestCourseGradeLogging(SharedModuleStoreTestCase):
"""
......
......@@ -28,6 +28,16 @@ def mock_get_score(earned=0, possible=1):
yield mock_score
@contextmanager
def mock_get_submissions_score(earned=0, possible=1):
"""
Mocks the _get_submissions_score function to return the specified values
"""
with patch('lms.djangoapps.grades.scores._get_score_from_submissions') as mock_score:
mock_score.return_value = (earned, possible, earned, possible)
yield mock_score
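The contextmanager-over-`patch` shape shared by both helpers can be tried against any importable target. In this sketch a stdlib function stands in for the grades internals, and `mock_getcwd` is a made-up name; the source file imports `patch` from the `mock` backport rather than `unittest.mock`.

```python
import os
from contextlib import contextmanager
from unittest.mock import patch  # the source uses the `mock` backport

@contextmanager
def mock_getcwd(value='/fake/dir'):
    """Same shape as mock_get_score: patch a target, set its return
    value, and yield the mock so callers can tweak it further."""
    with patch('os.getcwd') as mock_cwd:
        mock_cwd.return_value = value
        yield mock_cwd

with mock_getcwd() as m:
    assert os.getcwd() == '/fake/dir'
    assert m.call_count == 1
# Outside the block, the patch is undone and os.getcwd() is real again.
```

Yielding the mock is what allows the caller to attach a `side_effect` on top of the default `return_value`, as the subsection-score test does.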
def answer_problem(course, request, problem, score=1, max_value=1):
"""
Records a correct answer for the given problem.
......
"""
Utilities for testing xblocks
"""
from django.conf import settings
from xmodule.modulestore.tests.factories import ItemFactory
TEST_DATA_DIR = settings.COMMON_ROOT / u'test/data'
def add_xml_block_from_file(block_type, filename, parent, metadata):
"""
Create a block of the specified type with content included from the
specified XML file.
XML filenames are relative to common/test/data/blocks.
"""
with open(TEST_DATA_DIR / u'blocks' / filename) as datafile:
return ItemFactory.create(
parent=parent,
category=block_type,
data=datafile.read().decode('utf-8'),
metadata=metadata,
display_name=u'problem'
)