Commit c80d9141 by Will Daly

Merge pull request #541 from edx/will/update-dev-docs

Update developer docs
parents 45c53fba 34bd68c3
......@@ -32,7 +32,7 @@ install-nltk-data:
STATIC_JS = openassessment/xblock/static/js
minimize-js:
minimize-js: install-js
node_modules/.bin/uglifyjs $(STATIC_JS)/src/oa_shared.js $(STATIC_JS)/src/*.js > "$(STATIC_JS)/openassessment.min.js"
......
......@@ -47,6 +47,27 @@ to start the server on port 8001:
./scripts/workbench.sh 8001
Combining and Minimizing JavaScript and Sass
============================================
To reduce page size, the OpenAssessment XBlock serves combined/minified
versions of JavaScript and CSS. These combined/minified files are checked
into the git repository.
If you modify JavaScript or Sass, you MUST regenerate the combined/minified
files:
.. code:: bash
# Combine/minify JavaScript
make minimize-js
# Combine/minify CSS (from Sass)
./scripts/sass.sh
Make sure you commit the combined/minified files to the git repository!
Running Tests
=============
......
......@@ -39,8 +39,14 @@ Student Training
.. automodule:: openassessment.assessment.api.student_training
:members:
Workflow Assessment
*******************
File Upload
***********
.. automodule:: openassessment.fileupload.api
:members:
Workflow
********
.. automodule:: openassessment.workflow
:members:
......
......@@ -4,8 +4,6 @@
AI Grading
##########
.. warning:: This is a DRAFT that has not yet been implemented.
Overview
--------
......@@ -234,76 +232,10 @@ Recovery from Failure
c. Horizontally scale workers to handle additional load.
Data Model
----------
1. **GradingWorkflow**
a. Submission UUID (varchar)
b. ClassifierSet (Foreign Key, Nullable)
c. Assessment (Foreign Key, Nullable)
d. Rubric (Foreign Key): Used to search for classifier sets if none are available when the workflow is started.
e. Algorithm ID (varchar): Used to search for classifier sets if none are available when the workflow is started.
f. Scheduled at (timestamp): The time the task was placed on the queue.
g. Completed at (timestamp): The time the task was completed. If set, the task is considered complete.
h. Course ID (varchar): The ID of the course associated with the submission. Useful for rescheduling failed grading tasks in a particular course.
i. Item ID (varchar): The ID of the item (problem) associated with the submission. Useful for rescheduling failed grading tasks in a particular item in a course.
2. **TrainingWorkflow**
a. Algorithm ID (varchar)
b. Many-to-many relation with **TrainingExample**. We can re-use examples for multiple workflows.
c. ClassifierSet (Foreign Key)
d. Scheduled at (timestamp): The time the task was placed on the queue.
e. Completed at (timestamp): The time the task was completed. If set, the task is considered complete.
3. **TrainingExample**
a. Response text (text)
b. Options selected (many to many relation with CriterionOption)
4. **ClassifierSet**
a. Rubric (Foreign Key)
b. Created at (timestamp)
c. Algorithm ID (varchar)
5. **Classifier**
a. ClassifierSet (Foreign Key)
b. URL for trained classifier (varchar)
c. Criterion (Foreign Key)
6. **Assessment** (same as current implementation)
a. Submission UUID (varchar)
b. Rubric (Foreign Key)
7. **AssessmentPart** (same as current implementation)
a. Assessment (Foreign Key)
b. Option (Foreign Key to a **CriterionOption**)
8. **Rubric** (same as current implementation)
9. **Criterion** (same as current implementation)
a. Rubric (Foreign Key)
b. Name (varchar)
10. **CriterionOption** (same as current implementation)
a. Criterion (Foreign Key)
b. Points (positive integer)
c. Name (varchar)
Notes:
* We use a URL to reference the trained classifier so we can avoid storing it in the database.
In practice, the URL will almost certainly point to Amazon S3, but in principle we could use
other backends.
* The storage backend is pluggable. In production, we use Amazon S3, but in principle we could use other backends (including the local filesystem in local dev).
* Unfortunately, the ML algorithm we will use for initial release (EASE) requires that we
persist the trained classifiers using Python's ``pickle`` module. This has security implications
......
......@@ -4,8 +4,6 @@
Understanding the Workflow
##########################
.. warning:: The following section refers to features that are not yet fully
implemented.
The `openassessment.workflow` application is tasked with managing the overall
life-cycle of a student's submission as it goes through various evaluation steps
......@@ -49,6 +47,8 @@ Isolation of Assessment types
a non `None` value has been returned by this function for a given
`submission_uuid`, repeated calls to this function should return the same
thing.
`on_init(submission_uuid)`
Notification to the API that the student has submitted a response.
`on_start(submission_uuid)`
Notification to the API that the student has started the assessment step.
......
......@@ -12,9 +12,8 @@ Setup
-----
::
pip install -r requirements/dev.txt
pip install -e .
python manage.py runserver
See the `README <https://github.com/edx/edx-ora2/blob/master/README.rst>`_
Developer Documentation
......@@ -34,4 +33,3 @@ API Documentation
:maxdepth: 2
api
......@@ -90,7 +90,7 @@ def on_init(submission_uuid, rubric=None, algorithm_id=None):
Args:
submission_uuid (str): The UUID of the submission to assess.
Kwargs:
Keyword Arguments:
rubric (dict): Serialized rubric model.
algorithm_id (unicode): Use only classifiers trained with the specified algorithm.
......@@ -104,8 +104,9 @@ def on_init(submission_uuid, rubric=None, algorithm_id=None):
AIGradingRequestError
AIGradingInternalError
Example usage:
>>> submit('74a9d63e8a5fea369fd391d07befbd86ae4dc6e2', rubric, 'ease')
Example Usage:
>>> on_init('74a9d63e8a5fea369fd391d07befbd86ae4dc6e2', rubric, 'ease')
'10df7db776686822e501b05f452dc1e4b9141fe5'
"""
......@@ -179,7 +180,8 @@ def get_latest_assessment(submission_uuid):
Raises:
AIGradingInternalError
Examle usage:
Example usage:
>>> get_latest_assessment('10df7db776686822e501b05f452dc1e4b9141fe5')
{
'points_earned': 6,
......@@ -261,6 +263,7 @@ def train_classifiers(rubric_dict, examples, course_id, item_id, algorithm_id):
AITrainingInternalError
Example usage:
>>> train_classifiers(rubric, examples, 'ease')
'10df7db776686822e501b05f452dc1e4b9141fe5'
......@@ -307,7 +310,7 @@ def reschedule_unfinished_tasks(course_id=None, item_id=None, task_type=u"grade"
only reschedule the unfinished grade tasks. The typical use case (the button in the
staff mixin) is to call this without arguments, rescheduling grades only.
Kwargs:
Keyword Arguments:
course_id (unicode): Restrict to unfinished tasks in a particular course.
item_id (unicode): Restrict to unfinished tasks for a particular item in a course.
NOTE: if you specify the item ID, you must also specify the course ID.
......
......@@ -225,7 +225,7 @@ def create_assessment(
assessments is reached, the grading_completed_at timestamp is set
for the Workflow.
Kwargs:
Keyword Args:
scored_at (datetime): Optional argument to override the time in which
the assessment took place. If not specified, scored_at is set to
now.
......@@ -358,8 +358,8 @@ def get_assessment_median_scores(submission_uuid):
appropriate median score.
Returns:
(dict): A dictionary of rubric criterion names, with a median score of
the peer assessments.
dict: A dictionary of rubric criterion names,
with a median score of the peer assessments.
Raises:
PeerAssessmentInternalError: If any error occurs while retrieving
......@@ -430,16 +430,19 @@ def get_assessments(submission_uuid, scored_only=True, limit=None):
submission_uuid (str): The submission all the requested assessments are
associated with. Required.
Kwargs:
Keyword Arguments:
scored (boolean): Only retrieve the assessments used to generate a score
for this submission.
limit (int): Limit the returned assessments. If None, returns all.
Returns:
list(dict): A list of dictionaries, where each dictionary represents a
list: A list of dictionaries, where each dictionary represents a
separate assessment. Each assessment contains points earned, points
possible, time scored, scorer id, score type, and feedback.
Raises:
PeerAssessmentRequestError: Raised when the submission_id is invalid.
PeerAssessmentInternalError: Raised when there is an internal error
......@@ -496,7 +499,7 @@ def get_submitted_assessments(submission_uuid, scored_only=True, limit=None):
submission_uuid (str): The submission of the student whose assessments
we are requesting. Required.
Kwargs:
Keyword Arguments:
scored (boolean): Only retrieve the assessments used to generate a score
for this submission.
limit (int): Limit the returned assessments. If None, returns all.
......
......@@ -112,7 +112,7 @@ def create_assessment(
overall_feedback (unicode): Free-form text feedback on the submission overall.
rubric_dict (dict): Serialized Rubric model.
Kwargs:
Keyword Arguments:
scored_at (datetime): The timestamp of the assessment; defaults to the current time.
Returns:
......
......@@ -398,7 +398,7 @@ def assess_training_example(submission_uuid, options_selected, update_workflow=T
submission_uuid (str): The UUID of the student's submission.
options_selected (dict): The options the student selected.
Kwargs:
Keyword Arguments:
update_workflow (bool): If true, mark the current item complete
if the student has assessed the example correctly.
......
......@@ -450,7 +450,7 @@ class Assessment(models.Model):
submission_uuid (str): The UUID of the submission being assessed.
score_type (unicode): The type of assessment (e.g. peer, self, or AI)
Kwargs:
Keyword Arguments:
feedback (unicode): Overall feedback on the submission.
scored_at (datetime): The time the assessment was created. Defaults to the current time.
......@@ -635,7 +635,7 @@ class AssessmentPart(models.Model):
assessment (Assessment): The assessment we're adding parts to.
selected (dict): A dictionary mapping criterion names to option names.
Kwargs:
Keyword Arguments:
feedback (dict): A dictionary mapping criterion names to written
feedback for the criterion.
......@@ -709,7 +709,7 @@ class AssessmentPart(models.Model):
assessment (Assessment): The assessment we're adding parts to.
selected (dict): A dictionary mapping criterion names to option point values.
Kwargs:
Keyword Arguments:
feedback (dict): A dictionary mapping criterion names to written
feedback for the criterion.
......
......@@ -93,7 +93,7 @@ class TrainingExample(models.Model):
Create a cache key based on the content hash
for serialized versions of this model.
Kwargs:
Keyword Arguments:
attribute: The name of the attribute being serialized.
If not specified, assume that we are serializing the entire model.
......
......@@ -65,7 +65,7 @@ class CsvWriter(object):
output_streams (dictionary): Provide the file handles
to write CSV data to.
Kwargs:
Keyword Arguments:
progress_callback (callable): Callable that accepts
no arguments. Called once per submission loaded
from the database.
......
......@@ -34,7 +34,7 @@ def create_workflow(submission_uuid, steps, on_init_params=None):
steps (list): List of steps that are part of the workflow, in the order
that the user must complete them. Example: `["peer", "self"]`
Kwargs:
Keyword Arguments:
on_init_params (dict): The parameters to pass to each assessment module
on init. Keys are the assessment step names.
......@@ -279,7 +279,7 @@ def get_status_counts(course_id, item_id, steps):
"""
Count how many workflows have each status, for a given item in a course.
Kwargs:
Keyword Arguments:
course_id (unicode): The ID of the course.
item_id (unicode): The ID of the item in the course.
steps (list): A list of assessment steps for this problem.
......
......@@ -441,7 +441,7 @@ def update_workflow_async(sender, **kwargs):
Args:
sender (object): Not used
Kwargs:
Keyword Arguments:
submission_uuid (str): The UUID of the submission associated
with the workflow being updated.
......
......@@ -364,7 +364,7 @@ class TestAssessmentWorkflowApi(CacheResetTest):
item_id (unicode): Item ID for the submission
status (unicode): One of acceptable status values (e.g. "peer", "self", "waiting", "done")
Kwargs:
Keyword Arguments:
answer (unicode): Submission answer.
steps (list): A list of steps to create the workflow with. If not
specified the default steps are "peer", "self".
......
......@@ -33,7 +33,7 @@ class GradeMixin(object):
Args:
data: Not used.
Kwargs:
Keyword Arguments:
suffix: Not used.
Returns:
......@@ -188,7 +188,7 @@ class GradeMixin(object):
data (dict): Can provide keys 'feedback_text' (unicode) and
'feedback_options' (list of unicode).
Kwargs:
Keyword Arguments:
suffix (str): Unused
Returns:
......
......@@ -26,7 +26,7 @@ class MessageMixin(object):
Args:
data: Not used.
Kwargs:
Keyword Arguments:
suffix: Not used.
Returns:
......
......@@ -430,7 +430,7 @@ class OpenAssessmentBlock(
the peer grading step AFTER the submission deadline has passed.
This may not be necessary when we implement a grading interface specifically for course staff.
Kwargs:
Keyword Arguments:
step (str): The step in the workflow to check. Options are:
None: check whether the problem as a whole is open.
"submission": check whether the submission section is open.
......@@ -540,7 +540,7 @@ class OpenAssessmentBlock(
"""
Check if a question has been released.
Kwargs:
Keyword Arguments:
step (str): The step in the workflow to check.
None: check whether the problem as a whole is open.
"submission": check whether the submission section is open.
......
......@@ -37,7 +37,7 @@ class StudentTrainingMixin(object):
Args:
data: Not used.
Kwargs:
Keyword Arguments:
suffix: Not used.
Returns:
......
......@@ -45,7 +45,7 @@ class StudioMixin(object):
data (dict): Data from the request; should have a value for the key 'xml'
containing the XML for this XBlock.
Kwargs:
Keyword Arguments:
suffix (str): Not used
Returns:
......@@ -75,7 +75,7 @@ class StudioMixin(object):
Args:
data (dict): Not used
Kwargs:
Keyword Arguments:
suffix (str): Not used
Returns:
......@@ -101,7 +101,7 @@ class StudioMixin(object):
Args:
data (dict): Not used
Kwargs:
Keyword Arguments:
suffix (str): Not used
Returns:
......
......@@ -19,7 +19,7 @@ def scenario(scenario_path, user_id=None):
Args:
scenario_path (str): Path to the scenario XML file.
Kwargs:
Keyword Arguments:
user_id (str or None): User ID to log in as, or None.
Returns:
......@@ -109,7 +109,7 @@ class XBlockHandlerTestCase(CacheResetTest):
handler_name (str): The name of the handler.
content (unicode): Content of the request.
Kwargs:
Keyword Arguments:
response_format (None or str): Expected format of the response string.
If `None`, return the raw response content; if 'json', parse the
response as JSON and return the result.
......
......@@ -318,7 +318,7 @@ class TestGrade(XBlockHandlerTestCase):
peer_assessments (list of dict): List of assessment dictionaries for peer assessments.
self_assessment (dict): Dict of assessment for self-assessment.
Kwargs:
Keyword Arguments:
waiting_for_peer (bool): If true, skip creation of peer assessments for the user's submission.
Returns:
......
......@@ -548,7 +548,7 @@ class TestDates(XBlockHandlerTestCase):
expected_start (datetime): Expected start date.
expected_due (datetime): Expected due date.
Kwargs:
Keyword Arguments:
released (bool): If set, check whether the XBlock has been released.
course_staff (bool): Whether to treat the user as course staff.
......
......@@ -507,7 +507,7 @@ class TestPeerAssessmentRender(XBlockHandlerTestCase):
expected_path (str): The expected template path.
expected_context (dict): The expected template context.
Kwargs:
Keyword Arguments:
continue_grading (bool): If true, the user has chosen to continue grading.
workflow_status (str): If provided, simulate this status from the workflow API.
graded_enough (bool): Did the student meet the requirement by assessing enough peers?
......@@ -679,7 +679,7 @@ class TestPeerAssessHandler(XBlockHandlerTestCase):
scorer_id (unicode): The ID of the student creating the assessment.
assessment (dict): Serialized assessment model.
Kwargs:
Keyword Arguments:
expect_failure (bool): If true, expect a failure response and return None
Returns:
......
......@@ -366,7 +366,7 @@ class TestSelfAssessmentRender(XBlockHandlerTestCase):
expected_path (str): The expected template path.
expected_context (dict): The expected template context.
Kwargs:
Keyword Arguments:
workflow_status (str): If provided, simulate this status from the workflow API.
workflow_status (str): If provided, simulate these details from the workflow API.
submission_uuid (str): If provided, simulate this submission UUID for the current workflow.
......
......@@ -309,7 +309,7 @@ def validator(oa_block, strict_post_release=True):
Args:
oa_block (OpenAssessmentBlock): The XBlock being updated.
Kwargs:
Keyword Arguments:
strict_post_release (bool): If true, restrict what authors can update once
a problem has been released.
......
......@@ -29,7 +29,7 @@ class WorkflowMixin(object):
Args:
data: Unused
Kwargs:
Keyword Arguments:
suffix: Unused
Returns:
......@@ -92,7 +92,7 @@ class WorkflowMixin(object):
from peer-assessment to self-assessment. Creates a score
if the student has completed all requirements.
Kwargs:
Keyword Arguments:
submission_uuid (str): The submission associated with the workflow to update.
Defaults to the submission created by the current student.
......
......@@ -589,7 +589,7 @@ def update_from_xml(oa_block, root, validator=DEFAULT_VALIDATOR):
oa_block (OpenAssessmentBlock): The open assessment block to update.
root (lxml.etree.Element): The XML definition of the XBlock's content.
Kwargs:
Keyword Arguments:
validator(callable): Function of the form:
(rubric_dict, submission_dict, assessments) -> (bool, unicode)
where the returned bool indicates whether the XML is semantically valid,
......@@ -679,7 +679,7 @@ def update_from_xml_str(oa_block, xml, **kwargs):
oa_block (OpenAssessmentBlock): The open assessment block to update.
xml (unicode): The XML definition of the XBlock's content.
Kwargs:
Keyword Arguments:
same as `update_from_xml`
Returns:
......