Commit 2549551e by David Baumgold

Merge pull request #3752 from edx/db/doc-cognitive-load

Added rst version of cognitive load page
parents 6396373b 8afe315c
*******************
Code Considerations
*******************
This is a checklist of all of the things that we expect a developer to consider
as they build new functionality or modify existing functionality.
Operational Impact
==================
* Are there new points in the system that require operational monitoring?
* An external system that you now depend on (Mathworks, SoftwareSecure,
CyberSource, etc...)
* New reliance on disk space?
* New standalone processes (workers? Elasticsearch?) that need to always be available?
* A new queue that needs to be monitored for dequeueing
* Bulk Email --> Amazon SES, Inbound queues, etc...
* Are important feature metrics sent to Datadog, and is there a
dashboard to monitor them? (See the metrics sketch at the end of this section.)
* Am I building a feature that will impact the performance of the system?
Keep in mind that Open edX needs to support hundreds of thousands if not
millions of students, so be careful that your code will work well when the
numbers get large.
* Deep Search
* Grade Downloads
* Are reasonable log messages being written out for debugging purposes?
* Will this new feature easily start up in the Vagrant image?
* Do we have documentation for how to start up this feature if it has any
new startup requirements?
* Are there any special directories/file system permissions that need to be set?
* Will this have any impact on CDN-related technologies?
* Are we placing any extra manual burden on the Operations team to provision
anything new when new courses launch, when new schools start, etc.?
* Has the feature been tested using a production configuration with Vagrant?
See also: :doc:`deploy-new-service`
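
For example, here is a minimal sketch of emitting a feature metric and a
debugging log line from Python. The metric names, the statsd endpoint, and the
``send_course_email`` helper are illustrative assumptions, not existing
edx-platform code.

.. code-block:: python

    import logging

    import statsd  # third-party statsd client library, assumed available

    log = logging.getLogger(__name__)
    stats = statsd.StatsClient('localhost', 8125, prefix='lms')


    def send_course_email(course_id, recipient_count):
        """Record a counter and a log line so Ops can build a dashboard or alert."""
        stats.incr('bulk_email.emails_queued', count=recipient_count)
        log.info('Queued bulk email for course %s to %d recipients',
                 course_id, recipient_count)
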
Documentation/Training/Support
==============================
* Is there appropriate documentation in the context of the product for
this feature? If not, how can we get it to folks?
* For Studio much of the documentation is in the product.
* Is this feature big enough that we need to have a session with stakeholders
(PMs, Support, etc...) to introduce it BEFORE we release it?
* Paid Certificates
* Do I have to give some more information to the Escalation Team
so that this can be supported?
* Did you add an entry to CHANGELOG?
* Did you write/edit docstrings for all of your modules, classes, and functions?
Development
===========
* Did you consider a reasonable upgrade path?
* Is this a feature that we need to slowly roll out to different audiences?
* Bulk Email
* Have you considered exposing an appropriate set of configuration options
in case something goes wrong?
* Have you considered a simple way to "disable" this feature if something is broken? (See the feature-flag sketch at the end of this section.)
* Centralized Logging
* Will this feature require any security provisioning?
* Which roles use this feature? Does it make sense to ensure that only those
roles can see this feature?
* Assets in the Studio Library
* Did you ensure that any new libraries are added to appropriate provisioning
scripts and have been checked by OSCM for license appropriateness?
* Is there an open source alternative?
* Are we locked down to any proprietary technologies? (AWS, ...)
* Did you consider making APIs so that others can change the implementation if applicable?
* Did you consider Internationalization (I18N) and Localization (L10N)?
* Did you consider Accessibility (A11y)?
* Will your code work properly in workers?
* Have you considered the large-scale modularity of the code? For example,
xmodule and xblock should not use Django features directly.
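
As an illustration of the "disable" switch and I18N points above, here is a
minimal sketch of a Django view gated behind a flag in the ``FEATURES``
settings dict (edx-platform keeps simple feature toggles in that dict), with
its user-facing text marked for translation. The flag and view names are
hypothetical.

.. code-block:: python

    from django.conf import settings
    from django.http import HttpResponse, HttpResponseNotFound
    from django.utils.translation import ugettext as _


    def my_new_feature_view(request):
        # Feature toggles live in settings so Ops can flip them per environment
        # without a code change.
        if not settings.FEATURES.get('ENABLE_MY_NEW_FEATURE', False):
            return HttpResponseNotFound()
        return HttpResponse(_("My new feature is enabled."))
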
Testing
=======
* Did you make sure that you tried boundary conditions?
* Did you try unicode input/data? (See the test sketch at the end of this section.)
* The name of the person in paid certificates
* The name of the person in bulk email
* The body of the text in bulk email
* etc
* Did you try funny characters in the input/data? (~!@#$%^&*()';/.,<>, etc...)
* Have you done performance testing on this feature? Do you know what level of
performance is good enough?
* Did you ensure that your functionality works across all supported browsers?
* Do you have the right hooks in your HTML to ensure that the views are automatable?
* Are you ready if this feature has 10x the expected usage?
* What happens if an external service does not respond or responds with
a significant delay?
* What are possible failure modes? Do your unit tests exercise these code paths?
* Does this change affect templates and/or JavaScript? If so, are there
Selenium tests for the affected page(s)? Have you tested the affected
page(s) in a sandbox?
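
Here is a minimal sketch of the kind of boundary-condition tests meant above,
covering unicode and "funny" characters. ``format_certificate_name`` is a
hypothetical helper used only for illustration, not real edx-platform code.

.. code-block:: python

    # -*- coding: utf-8 -*-
    import unittest


    def format_certificate_name(name):
        """Hypothetical helper: return the name as it would appear on a certificate."""
        return name.strip()


    class CertificateNameTest(unittest.TestCase):
        def test_unicode_name(self):
            self.assertEqual(format_certificate_name(u"  José Ångström 李明  "),
                             u"José Ångström 李明")

        def test_funny_characters(self):
            self.assertEqual(format_certificate_name(u"~!@#$%^&*()';/.,<>"),
                             u"~!@#$%^&*()';/.,<>")

        def test_empty_name(self):
            self.assertEqual(format_certificate_name(u""), u"")
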
Analytics
=========
* Are learning analytics events being recorded in an appropriate way?
* Do your events use a sufficiently descriptive and unique event type and
namespace? (See the event sketch at the end of this section.)
* Did you ensure that you capture enough information for the researchers
to benefit from this event information?
* Is it possible to reconstruct the state of your module from the history
of its events?
* Has this new event been documented so that folks downstream know how
to interpret it?
* Are you increasing the amount of logging in any major way?
* Are you sending appropriate/enough information to MixPanel,
Google Analytics, Segment IO?
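
Here is a minimal sketch of emitting a namespaced, descriptive analytics
event. It assumes the ``event-tracking`` library is configured for your code;
the event type and payload fields are hypothetical.

.. code-block:: python

    from eventtracking import tracker


    def record_problem_reset(user_id, course_id, problem_id):
        """Emit enough context for researchers to reconstruct what happened."""
        tracker.emit('edx.example.problem.reset', {
            'user_id': user_id,
            'course_id': course_id,
            'problem_id': problem_id,
        })
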
Collaboration
=============
* Are there other teams that would benefit from knowing about this feature?
* Forums/LMS - email
* Does this feature require a special broadcast to external teams as well?
Open Source
===========
* Can we get help from the community on this feature?
* Does the community know enough about this?
UX/Design/Front End Development
===============================
* Did you make sure that the feature is going to pass
Accessibility requirements (still TBD)?
* Did you make sure any system/instructional text is I18N ready?
* Did you ensure that basic functionality works across all supported browsers?
* Did you plan for the feature's UI to degrade gracefully (or be
progressively enhanced) based on browser capability?
* Did you review the page/view under all browser/agent conditions -
viewport sizes, images off, css off?
* Did you write any HTML with ideal page/view semantics in mind?
* When writing HTML, did you adhere to standards/conventions around class/id names?
* When writing Sass, did you follow OOCSS/SMACSS philosophy ([1]_, [2]_, [3]_),
variable/extend organization and naming conventions, and UI abstraction conventions?
* When writing Sass, did you document any new variables,
extend-based classes, or mixins?
* When writing/adding JavaScript, did you consider the asset pipeline
and page load timeline?
* When writing JavaScript, did you note what code is for prototyping vs. production?
* When adding new templates, views, assets (Sass, images, plugins/libraries),
did you follow existing naming and file architecture conventions?
* When adding new templates, views, assets (Sass, images, plugins/libraries),
did you add any needed documentation?
* Did you use templates and good Sass architecture to keep DRY?
* Did we document any aspects about the feature (flow, purpose, intent)
that we or other teams will need to know going forward?
.. [1] http://smacss.com/
.. [2] http://thesassway.com/intermediate/avoid-nested-selectors-for-more-modular-css
.. [3] http://ianstormtaylor.com/oocss-plus-sass-is-the-best-way-to-css/
edX.org Specific
================
* Ensure that you have not broken import/export.
* Ensure that you have not broken the video player. (Lyla video)
***********************************
So You Want to Deploy a New Service
***********************************
Intro
=====
This page is a work-in-progress aimed at capturing all the details needed to
deploy a new service in the edX environment.
Considerations
==============
What Does Your Service Do
-------------------------
Understanding how your service works and what it does helps Ops support
the service in production.
Sizing and Resource Profile
---------------------------
What class of machine does your service require? What resources are most
likely to be bottlenecks for your service: CPU, memory, bandwidth, or something else?
Customers
---------
Who will be consuming your service? What is the anticipated initial usage?
What factors will cause usage to grow? How many users can your service support?
Code
----
What repository or repositories does your service require?
Will your service be deployed from a non-public repo?
Ideally your service should follow the same release management process as the LMS.
This is documented in the wiki, so please ensure you understand that process in depth.
Was the service code reviewed?
Settings
--------
How does your service read in environment-specific settings? Were all
hard-coded references to values that should be settings, e.g., database URLs
and credentials, message queue endpoints, etc., found and resolved during
code review?
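
For example, here is a minimal sketch of reading environment-specific settings
from a JSON file at startup instead of hard-coding them. The file path and key
names are illustrative assumptions, not an edX standard.

.. code-block:: python

    import json
    import os

    # Ops can point this at a different file per environment.
    CONFIG_FILE = os.environ.get('MY_SERVICE_CFG', '/edx/etc/my_service.env.json')

    with open(CONFIG_FILE) as config_file:
        ENV_TOKENS = json.load(config_file)

    DATABASE_URL = ENV_TOKENS['DATABASE_URL']
    MESSAGE_QUEUE_URL = ENV_TOKENS.get('MESSAGE_QUEUE_URL', '')
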
License
-------
Is the license included in the repo?
How Does Your Service Run
-------------------------
Is it HTTP based? Does it run periodically? Both?
Persistence
-----------
Ops will need to know the following things:
* What persistence needs does your service have?
* Will it connect to an existing database?
* Will it connect to Mongo?
* What are the least permissive permissions your service needs to do its job?
Logging
-------
It's important that your application logging is built out to provide sufficient
feedback for problem determination, as well as for ensuring that the service is
operating as desired. It's also important that your service logs using our
deployment standards, i.e., it logs to syslog in deployment environments and
uses the standard log format for syslog. Can the logs be consumed by Splunk? They
should not be if they contain data discussed in the Data Security section below.
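
Here is a minimal sketch of routing application logs to syslog with a
consistent format. The facility, socket path, and format string are
illustrative assumptions rather than the exact edX standard.

.. code-block:: python

    import logging
    from logging.handlers import SysLogHandler

    handler = SysLogHandler(address='/dev/log', facility=SysLogHandler.LOG_LOCAL0)
    handler.setFormatter(logging.Formatter(
        'my_service %(name)s[%(process)d]: %(levelname)s %(message)s'))

    log = logging.getLogger('my_service')
    log.addHandler(handler)
    log.setLevel(logging.INFO)

    log.info('service started')
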
Metrics
-------
What are the key metrics for your application? Concurrent users?
Transactions per second? Ideally you should create a Datadog view that
captures the key metrics for your service and provides an instant gauge of
overall service health.
Messaging
---------
Does your service need to access a message queue?
Email
-----
Does your service need to send email?
Access to Other Services
------------------------
Does your service need access to other services, either within or
outside of the edX environment? Some examples might be the comment service,
the LMS, YouTube, S3 buckets, etc.
Service Monitoring
------------------
Your service should have a facility for remote monitoring that has the
following characteristics:
* It should exercise all the components that your service requires to run successfully.
* It should be necessary and sufficient for ensuring your service is healthy.
* It should be secure.
* It should not open your service to DDOS attacks.
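
Here is a minimal sketch of such a monitoring endpoint for a Django-based
service. The checks and response format are illustrative assumptions; exercise
whatever components your service actually depends on.

.. code-block:: python

    import json

    from django.db import connection
    from django.http import HttpResponse


    def heartbeat(request):
        """Return 200 if the service and its critical dependencies are healthy."""
        status = {'overall': 'OK', 'database': 'OK'}
        code = 200
        try:
            cursor = connection.cursor()
            cursor.execute('SELECT 1')
        except Exception:  # pylint: disable=broad-except
            status['overall'] = 'DOWN'
            status['database'] = 'DOWN'
            code = 503
        return HttpResponse(json.dumps(status),
                            content_type='application/json', status=code)
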
Fault Tolerance and Scalability
-------------------------------
How can your application be deployed to ensure that it is fault tolerant
and scalable?
Network Access
--------------
From where should your service be accessible?
Data Security
-------------
Will your application be storing or handling data in any of the
following categories:
* Personally Identifiable Information in general, e.g., users' email addresses.
* Tracking log data
* edX confidential data
Testing
-------
Has your service been load tested? What were the details of the test?
What determinations can we make regarding when we will need to scale if usage
trends upward? How can Ops exercise your service in order to test end-to-end
integration? We love no-op-able tasks.
Additional Requirements
-----------------------
Anything else we should know about.
@@ -23,6 +23,8 @@ Contents:
analytics.rst
process/index
testing/index
code-considerations
deploy-new-service
APIs
-----
@@ -100,6 +100,7 @@ Further Information
For further information on the pull request requirements, please see the following
links:
* :doc:`../code-considerations`
* :doc:`../testing`
* :doc:`../testing/jenkins`
* :doc:`../testing/code-coverage`