Commit 62b74008 authored May 30, 2013 by JonahStanley

Added tests for Limited Attempt Problems and Showing the Answer

parent c3c4a0e9

Showing 5 changed files with 118 additions and 330 deletions (+118 / -330)
AUTHORS: +1 / -0
cms/djangoapps/contentstore/features/courses.py: +0 / -6
cms/djangoapps/contentstore/features/studio-overview-togglesection.py: +1 / -1
lms/djangoapps/courseware/features/problems.feature: +55 / -0
lms/djangoapps/courseware/features/problems.py: +61 / -323
AUTHORS
@@ -72,3 +72,4 @@ Giulio Gratta <giulio@giuliogratta.com>
 David Baumgold <david@davidbaumgold.com>
 Jason Bau <jbau@stanford.edu>
 Frances Botsford <frances@edx.org>
+Jonah Stanley <Jonah_Stanley@brown.edu>
cms/djangoapps/contentstore/features/courses.py
@@ -47,12 +47,6 @@ def i_see_the_course_in_my_courses(step):
     assert world.css_has_text(course_css, 'Robot Super Course')
 
-
-@step('the course is loaded$')
-def course_is_loaded(step):
-    class_css = 'a.class-name'
-    assert world.css_has_text(course_css, 'Robot Super Cousre')
-
 @step('I am on the "([^"]*)" tab$')
 def i_am_on_tab(step, tab_name):
     header_css = 'div.inner-wrapper h1'
cms/djangoapps/contentstore/features/studio-overview-togglesection.py
@@ -112,7 +112,7 @@ def all_sections_are_expanded(step):
 @step(u'all sections are collapsed$')
-def all_sections_are_expanded(step):
+def all_sections_are_collapsed(step):
     subsection_locator = 'div.subsection-list'
     subsections = world.css_find(subsection_locator)
     for s in subsections:
lms/djangoapps/courseware/features/problems.feature
@@ -84,3 +84,58 @@ Feature: Answer problems
         | formula          | incorrect   |
         | script           | correct     |
         | script           | incorrect   |
+
+    Scenario: I can answer a problem with one attempt correctly
+        Given I am viewing a "multiple choice" problem with "1" attempt
+        Then I should see "You have used 0 of 1 submissions" somewhere in the page
+        And The "Final Check" button does appear
+        When I answer a "multiple choice" problem "correctly"
+        Then My "multiple choice" answer is marked "correct"
+        And The "multiple choice" problem displays a "correct" answer
+        And The "Reset" button does not appear
+
+    Scenario: I can answer a problem with one attempt incorrectly
+        Given I am viewing a "multiple choice" problem with "1" attempt
+        When I answer a "multiple choice" problem "incorrectly"
+        Then My "multiple choice" answer is marked "incorrect"
+        And The "multiple choice" problem displays a "incorrect" answer
+        And The "Reset" button does not appear
+
+    Scenario: I can answer a problem with multiple attempts correctly
+        Given I am viewing a "multiple choice" problem with "3" attempts
+        Then I should see "You have used 0 of 3 submissions" somewhere in the page
+        When I answer a "multiple choice" problem "correctly"
+        Then My "multiple choice" answer is marked "correct"
+        And The "multiple choice" problem displays a "correct" answer
+        And The "Reset" button does appear
+
+    Scenario: I can answer a problem with multiple attempts correctly on final guess
+        Given I am viewing a "multiple choice" problem with "3" attempts
+        Then I should see "You have used 0 of 3 submissions" somewhere in the page
+        When I answer a "multiple choice" problem "incorrectly"
+        Then My "multiple choice" answer is marked "incorrect"
+        And The "multiple choice" problem displays a "incorrect" answer
+        When I reset the problem
+        Then I should see "You have used 1 of 3 submissions" somewhere in the page
+        When I answer a "multiple choice" problem "incorrectly"
+        Then My "multiple choice" answer is marked "incorrect"
+        And The "multiple choice" problem displays a "incorrect" answer
+        When I reset the problem
+        Then I should see "You have used 2 of 3 submissions" somewhere in the page
+        And The "Final Check" button does appear
+        When I answer a "multiple choice" problem "correctly"
+        Then My "multiple choice" answer is marked "correct"
+        And The "multiple choice" problem displays a "correct" answer
+        And The "Reset" button does not appear
+
+    Scenario: I can view and hide the answer if the problem has it
+        Given I am viewing a "numerical" that shows the answer "always"
+        Then The "Show Answer" button does appear
+        When I press the "Show Answer" button
+        Then The "Hide Answer" button does appear
+        And The "Show Answer" button does not appear
+        And I should see "4.14159" somewhere in the page
+        When I press the "Hide Answer" button
+        Then The "Show Answer" button does appear
+        And I do not see "4.14159" anywhere on the page
lms/djangoapps/courseware/features/problems.py
@@ -7,119 +7,42 @@ Steps for problem.feature lettuce tests
 from lettuce import world, step
 from lettuce.django import django_url
-import random
-import textwrap
-from common import i_am_registered_for_the_course, \
-    TEST_SECTION_NAME, section_location
-from capa.tests.response_xml_factory import OptionResponseXMLFactory, \
-    ChoiceResponseXMLFactory, MultipleChoiceResponseXMLFactory, \
-    StringResponseXMLFactory, NumericalResponseXMLFactory, \
-    FormulaResponseXMLFactory, CustomResponseXMLFactory, \
-    CodeResponseXMLFactory
+from common import i_am_registered_for_the_course, TEST_SECTION_NAME
+from problems_setup import *
 
-# Factories from capa.tests.response_xml_factory that we will use
-# to generate the problem XML, with the keyword args used to configure
-# the output.
-PROBLEM_FACTORY_DICT = {
-    'drop down': {
-        'factory': OptionResponseXMLFactory(),
-        'kwargs': {
-            'question_text': 'The correct answer is Option 2',
-            'options': ['Option 1', 'Option 2', 'Option 3', 'Option 4'],
-            'correct_option': 'Option 2'}},
-    'multiple choice': {
-        'factory': MultipleChoiceResponseXMLFactory(),
-        'kwargs': {
-            'question_text': 'The correct answer is Choice 3',
-            'choices': [False, False, True, False],
-            'choice_names': ['choice_0', 'choice_1', 'choice_2', 'choice_3']}},
-    'checkbox': {
-        'factory': ChoiceResponseXMLFactory(),
-        'kwargs': {
-            'question_text': 'The correct answer is Choices 1 and 3',
-            'choice_type': 'checkbox',
-            'choices': [True, False, True, False, False],
-            'choice_names': ['Choice 1', 'Choice 2', 'Choice 3', 'Choice 4']}},
-    'radio': {
-        'factory': ChoiceResponseXMLFactory(),
-        'kwargs': {
-            'question_text': 'The correct answer is Choice 3',
-            'choice_type': 'radio',
-            'choices': [False, False, True, False],
-            'choice_names': ['Choice 1', 'Choice 2', 'Choice 3', 'Choice 4']}},
-    'string': {
-        'factory': StringResponseXMLFactory(),
-        'kwargs': {
-            'question_text': 'The answer is "correct string"',
-            'case_sensitive': False,
-            'answer': 'correct string'}},
-    'numerical': {
-        'factory': NumericalResponseXMLFactory(),
-        'kwargs': {
-            'question_text': 'The answer is pi + 1',
-            'answer': '4.14159',
-            'tolerance': '0.00001',
-            'math_display': True}},
-    'formula': {
-        'factory': FormulaResponseXMLFactory(),
-        'kwargs': {
-            'question_text': 'The solution is [mathjax]x^2+2x+y[/mathjax]',
-            'sample_dict': {'x': (-100, 100), 'y': (-100, 100)},
-            'num_samples': 10,
-            'tolerance': 0.00001,
-            'math_display': True,
-            'answer': 'x^2+2*x+y'}},
-    'script': {
-        'factory': CustomResponseXMLFactory(),
-        'kwargs': {
-            'question_text': 'Enter two integers that sum to 10.',
-            'cfn': 'test_add_to_ten',
-            'expect': '10',
-            'num_inputs': 2,
-            'script': textwrap.dedent("""
-                def test_add_to_ten(expect,ans):
-                    try:
-                        a1=int(ans[0])
-                        a2=int(ans[1])
-                    except ValueError:
-                        a1=0
-                        a2=0
-                    return (a1+a2)==int(expect)
-            """)}},
-    'code': {
-        'factory': CodeResponseXMLFactory(),
-        'kwargs': {
-            'question_text': 'Submit code to an external grader',
-            'initial_display': 'print "Hello world!"',
-            'grader_payload': '{"grader": "ps1/Spring2013/test_grader.py"}',
-        }},
-}
-
-def add_problem_to_course(course, problem_type):
-    '''
-    Add a problem to the course we have created using factories.
-    '''
-    assert(problem_type in PROBLEM_FACTORY_DICT)
-
-    # Generate the problem XML using capa.tests.response_xml_factory
-    factory_dict = PROBLEM_FACTORY_DICT[problem_type]
-    problem_xml = factory_dict['factory'].build_xml(**factory_dict['kwargs'])
-
-    # Create a problem item using our generated XML
-    # We set rerandomize=always in the metadata so that the "Reset" button
-    # will appear.
-    template_name = "i4x://edx/templates/problem/Blank_Common_Problem"
-    world.ItemFactory.create(parent_location=section_location(course),
-                             template=template_name,
-                             display_name=str(problem_type),
-                             data=problem_xml,
-                             metadata={'rerandomize': 'always'})
+@step(u'I am viewing a "([^"]*)" problem with "([^"]*)" attempt')
+def view_problem_with_attempts(step, problem_type, attempts):
+    i_am_registered_for_the_course(step, 'model_course')
+
+    # Ensure that the course has this problem type
+    add_problem_to_course('model_course', problem_type, {'attempts': attempts})
+
+    # Go to the one section in the factory-created course
+    # which should be loaded with the correct problem
+    chapter_name = TEST_SECTION_NAME.replace(" ", "_")
+    section_name = chapter_name
+    url = django_url('/courses/edx/model_course/Test_Course/courseware/%s/%s' %
+                    (chapter_name, section_name))
+
+    world.browser.visit(url)
+
+
+@step(u'I am viewing a "([^"]*)" that shows the answer "([^"]*)"')
+def view_problem_with_show_answer(step, problem_type, answer):
+    i_am_registered_for_the_course(step, 'model_course')
+
+    # Ensure that the course has this problem type
+    add_problem_to_course('model_course', problem_type, {'show_answer': answer})
+
+    # Go to the one section in the factory-created course
+    # which should be loaded with the correct problem
+    chapter_name = TEST_SECTION_NAME.replace(" ", "_")
+    section_name = chapter_name
+    url = django_url('/courses/edx/model_course/Test_Course/courseware/%s/%s' %
+                    (chapter_name, section_name))
+
+    world.browser.visit(url)
 
 @step(u'I am viewing a "([^"]*)" problem')
@@ -153,7 +76,7 @@ def set_external_grader_response(step, correctness):
 @step(u'I answer a "([^"]*)" problem "([^"]*)ly"')
-def answer_problem(step, problem_type, correctness):
+def answer_problem_step(step, problem_type, correctness):
     """ Mark a given problem type correct or incorrect, then submit it.
 
     *problem_type* is a string representing the type of problem (e.g. 'drop down')
@@ -161,73 +84,18 @@ def answer_problem(step, problem_type, correctness):
     """
     assert(correctness in ['correct', 'incorrect'])
-    assert(problem_type in PROBLEM_DICT)
-
-    if problem_type == "drop down":
-        select_name = "input_i4x-edx-model_course-problem-drop_down_2_1"
-        option_text = 'Option 2' if correctness == 'correct' else 'Option 3'
-        world.browser.select(select_name, option_text)
-
-    elif problem_type == "multiple choice":
-        if correctness == 'correct':
-            inputfield('multiple choice', choice='choice_2').check()
-        else:
-            inputfield('multiple choice', choice='choice_1').check()
-
-    elif problem_type == "checkbox":
-        if correctness == 'correct':
-            inputfield('checkbox', choice='choice_0').check()
-            inputfield('checkbox', choice='choice_2').check()
-        else:
-            inputfield('checkbox', choice='choice_3').check()
-
-    elif problem_type == 'radio':
-        if correctness == 'correct':
-            inputfield('radio', choice='choice_2').check()
-        else:
-            inputfield('radio', choice='choice_1').check()
-
-    elif problem_type == 'string':
-        textvalue = 'correct string' if correctness == 'correct' \
-            else 'incorrect'
-        inputfield('string').fill(textvalue)
-
-    elif problem_type == 'numerical':
-        textvalue = "pi + 1" if correctness == 'correct' \
-            else str(random.randint(-2, 2))
-        inputfield('numerical').fill(textvalue)
-
-    elif problem_type == 'formula':
-        textvalue = "x^2+2*x+y" if correctness == 'correct' else 'x^2'
-        inputfield('formula').fill(textvalue)
-
-    elif problem_type == 'script':
-        # Correct answer is any two integers that sum to 10
-        first_addend = random.randint(-100, 100)
-        second_addend = 10 - first_addend
-
-        # If we want an incorrect answer, then change
-        # the second addend so they no longer sum to 10
-        if correctness == 'incorrect':
-            second_addend += random.randint(1, 10)
-
-        inputfield('script', input_num=1).fill(str(first_addend))
-        inputfield('script', input_num=2).fill(str(second_addend))
-
-    elif problem_type == 'code':
-        # The fake xqueue server is configured to respond
-        # correct / incorrect no matter what we submit.
-        # Furthermore, since the inline code response uses
-        # JavaScript to make the code display nicely, it's difficult
-        # to programatically input text
-        # (there's not <textarea> we can just fill text into)
-        # For this reason, we submit the initial code in the response
-        # (configured in the problem XML above)
-        pass
+    answer_problem(problem_type, correctness)
 
     # Submit the problem
     check_problem(step)
 
+@step(u'I check a problem')
+def check_problem(step):
+    world.css_click("input.check")
+
 @step(u'The "([^"]*)" problem displays a "([^"]*)" answer')
 def assert_problem_has_answer(step, problem_type, answer_class):
     '''
@@ -239,67 +107,8 @@ def assert_problem_has_answer(step, problem_type, answer_class):
     by setting answer_class='blank'
     '''
     assert answer_class in ['correct', 'incorrect', 'blank']
-    assert problem_type in PROBLEM_DICT
-
-    if problem_type == "drop down":
-        if answer_class == 'blank':
-            assert world.browser.is_element_not_present_by_css('option[selected="true"]')
-        else:
-            actual = world.browser.find_by_css('option[selected="true"]').value
-            expected = 'Option 2' if answer_class == 'correct' else 'Option 3'
-            assert actual == expected
-
-    elif problem_type == "multiple choice":
-        if answer_class == 'correct':
-            assert_checked('multiple choice', ['choice_2'])
-        elif answer_class == 'incorrect':
-            assert_checked('multiple choice', ['choice_1'])
-        else:
-            assert_checked('multiple choice', [])
-
-    elif problem_type == "checkbox":
-        if answer_class == 'correct':
-            assert_checked('checkbox', ['choice_0', 'choice_2'])
-        elif answer_class == 'incorrect':
-            assert_checked('checkbox', ['choice_3'])
-        else:
-            assert_checked('checkbox', [])
-
-    elif problem_type == "radio":
-        if answer_class == 'correct':
-            assert_checked('radio', ['choice_2'])
-        elif answer_class == 'incorrect':
-            assert_checked('radio', ['choice_1'])
-        else:
-            assert_checked('radio', [])
-
-    elif problem_type == 'string':
-        if answer_class == 'blank':
-            expected = ''
-        else:
-            expected = 'correct string' if answer_class == 'correct' \
-                else 'incorrect'
-        assert_textfield('string', expected)
-
-    elif problem_type == 'formula':
-        if answer_class == 'blank':
-            expected = ''
-        else:
-            expected = "x^2+2*x+y" if answer_class == 'correct' else 'x^2'
-        assert_textfield('formula', expected)
-
-    else:
-        # The other response types use random data,
-        # which would be difficult to check
-        # We trade input value coverage in the other tests for
-        # input type coverage in this test.
-        pass
-
-@step(u'I check a problem')
-def check_problem(step):
-    world.css_click("input.check")
+    problem_has_answer(problem_type, answer_class)
 
 @step(u'I reset the problem')
@@ -307,45 +116,21 @@ def reset_problem(step):
     world.css_click('input.reset')
 
-# Dictionaries that map problem types to the css selectors
-# for correct/incorrect/unanswered marks.
-# The elements are lists of selectors because a particular problem type
-# might be marked in multiple ways.
-# For example, multiple choice is marked incorrect differently
-# depending on whether the user selects an incorrect
-# item or submits without selecting any item)
-CORRECTNESS_SELECTORS = {
-    'correct': {'drop down': ['span.correct'],
-                'multiple choice': ['label.choicegroup_correct'],
-                'checkbox': ['span.correct'],
-                'radio': ['label.choicegroup_correct'],
-                'string': ['div.correct'],
-                'numerical': ['div.correct'],
-                'formula': ['div.correct'],
-                'script': ['div.correct'],
-                'code': ['span.correct']},
-
-    'incorrect': {'drop down': ['span.incorrect'],
-                  'multiple choice': ['label.choicegroup_incorrect',
-                                      'span.incorrect'],
-                  'checkbox': ['span.incorrect'],
-                  'radio': ['label.choicegroup_incorrect',
-                            'span.incorrect'],
-                  'string': ['div.incorrect'],
-                  'numerical': ['div.incorrect'],
-                  'formula': ['div.incorrect'],
-                  'script': ['div.incorrect'],
-                  'code': ['span.incorrect']},
-
-    'unanswered': {'drop down': ['span.unanswered'],
-                   'multiple choice': ['span.unanswered'],
-                   'checkbox': ['span.unanswered'],
-                   'radio': ['span.unanswered'],
-                   'string': ['div.unanswered'],
-                   'numerical': ['div.unanswered'],
-                   'formula': ['div.unanswered'],
-                   'script': ['div.unanswered'],
-                   'code': ['span.unanswered']}}
+@step(u'The "([^"]*)" button does not appear')
+def action_button_not_present(step, buttonname):
+    button_css = 'section.action input[value*="%s"]' % buttonname
+    assert not world.is_css_present(button_css)
+
+@step(u'The "([^"]*)" button does appear')
+def action_button_present(step, buttonname):
+    button_css = 'section.action input[value*="%s"]' % buttonname
+    assert world.is_css_present(button_css)
+
+@step(u'I do not see "([^"]*)" anywhere on the page')
+def i_do_not_see_text_anywhere_on_the_page(step, text):
+    assert world.browser.is_text_not_present(text)
 
 @step(u'My "([^"]*)" answer is marked "([^"]*)"')
@@ -359,12 +144,11 @@ def assert_answer_mark(step, problem_type, correctness):
     """
     # Determine which selector(s) to look for based on correctness
-    assert(correctness in CORRECTNESS_SELECTORS)
-    selector_dict = CORRECTNESS_SELECTORS[correctness]
-    assert(problem_type in selector_dict)
+    assert(correctness in ['correct', 'incorrect', 'unanswered'])
+    assert(problem_type in PROBLEM_DICT)
 
     # At least one of the correct selectors should be present
-    for sel in selector_dict[problem_type]:
+    for sel in PROBLEM_DICT[problem_type][correctness]:
         has_expected = world.is_css_present(sel)
 
     # As soon as we find the selector, break out of the loop
@@ -373,49 +157,3 @@ def assert_answer_mark(step, problem_type, correctness):
     # Expect that we found the expected selector
     assert(has_expected)
-
-def inputfield(problem_type, choice=None, input_num=1):
-    """ Return the <input> element for *problem_type*.
-    For example, if problem_type is 'string', return
-    the text field for the string problem in the test course.
-
-    *choice* is the name of the checkbox input in a group
-    of checkboxes. """
-    sel = ("input#input_i4x-edx-model_course-problem-%s_2_%s" %
-           (problem_type.replace(" ", "_"), str(input_num)))
-
-    if choice is not None:
-        base = "_choice_" if problem_type == "multiple choice" else "_"
-        sel = sel + base + str(choice)
-
-    # If the input element doesn't exist, fail immediately
-    assert world.is_css_present(sel)
-
-    # Retrieve the input element
-    return world.browser.find_by_css(sel)
-
-def assert_checked(problem_type, choices):
-    '''
-    Assert that choice names given in *choices* are the only
-    ones checked.
-
-    Works for both radio and checkbox problems
-    '''
-    all_choices = ['choice_0', 'choice_1', 'choice_2', 'choice_3']
-    for this_choice in all_choices:
-        element = inputfield(problem_type, choice=this_choice)
-
-        if this_choice in choices:
-            assert element.checked
-        else:
-            assert not element.checked
-
-def assert_textfield(problem_type, expected_text, input_num=1):
-    element = inputfield(problem_type, input_num=input_num)
-    assert element.value == expected_text