Commit 19f2f2ec by Vik Paruchuri

Merge remote-tracking branch 'origin/master' into vik/ml-api

Conflicts:
	playbooks/roles/common/tasks/main.yml
parents 5aaa59e3 7fb17df0
# Configuration Management
## Introduction
**This project is currently in alpha**
The goal of the edx/configuration project is to provide a simple, but
flexible, way for anyone to stand up an instance of the edX platform
that is fully configured and ready to go.
Building the platform takes place in two phases:
* Infrastructure provisioning
* Service configuration
As much as possible, we have tried to keep a clean distinction between
provisioning and configuration. You are not obliged to use our tools
and are free to use one without the other. The provisioning phase
stands up the required resources and tags them with role identifiers
so that the configuration tool can come in and complete the job.
The reference platform is provisioned using an Amazon
[CloudFormation](http://aws.amazon.com/cloudformation/) template.
When the stack has been fully created you will have a new AWS Virtual
Private Cloud with hosts for the core edX services. This template
will build quite a number of AWS resources that cost money, so please
consider this before you start.
The configuration phase is managed by [Ansible](http://ansible.cc/).
We have provided a number of playbooks that will configure each of
the edX services.
This project is a re-write of the current edX provisioning and
configuration tools; we will be migrating features to this project
over time, so expect frequent changes.
## AWS
### Building the stack
The first step is to provision the CloudFormation stack. There are
several options for doing this.
* The [AWS console](https://console.aws.amazon.com/cloudformation/home)
* The AWS [CloudFormation CLI](http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/cfn-installing-cli.html)
* Via Ansible
If you don't have experience with CloudFormation, the web console is a
good place to start: it uses a form wizard to gather configuration
parameters, gives you continuous feedback while the stack is building,
and reports useful error messages when problems occur.
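If you prefer the command line over the console, creating the stack with the
unified AWS CLI looks roughly like the following; this is only a sketch, and
the template path and parameter names are placeholders rather than the
repository's actual ones.
```
# Illustrative only -- substitute your region, stack name, template path,
# and the parameters your chosen template actually expects.
aws cloudformation create-stack \
  --region <aws_region> \
  --stack-name <stack_name> \
  --template-body file://<path/to/template.json> \
  --parameters ParameterKey=<param_name>,ParameterValue=<param_value>
```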
Details on how to build the stack using Ansible are available below.
### Connecting to Hosts in the Stack
Because the reference architecture makes use of an Amazon VPC, you will not be able
to address the hosts in the private subnets directly. However, you can easily set
up a transparent "jumpbox" so that connections to all hosts in your VPC are
tunneled.
Add something like the following to your `~/.ssh/config` file.
```
Host *.us-west-1.compute.internal
ProxyCommand ssh -W %h:%p vpc-00000000-jumpbox
IdentityFile /path/to/aws/key.pem
ForwardAgent yes
User ubuntu
Host vpc-00000000-jumpbox
HostName 54.236.224.226
IdentityFile /path/to/aws/key.pem
ForwardAgent yes
User ubuntu
```
This assumes that you only have one VPC in the `us-west-1` region
that you're trying to ssh into. Internal DNS names aren't qualified
any further than that, so to support multiple VPCs you'd have to get
creative with subnet-based host patterns, for example ip-10-1 and
ip-10-2, as sketched below.
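A minimal sketch of what that could look like, assuming two VPCs whose private
subnets live under 10.1.x.x and 10.2.x.x; the jumpbox aliases and key path are
placeholders:
```
Host ip-10-1-*.us-west-1.compute.internal
ProxyCommand ssh -W %h:%p vpc-11111111-jumpbox
IdentityFile /path/to/aws/key.pem
ForwardAgent yes
User ubuntu
Host ip-10-2-*.us-west-1.compute.internal
ProxyCommand ssh -W %h:%p vpc-22222222-jumpbox
IdentityFile /path/to/aws/key.pem
ForwardAgent yes
User ubuntu
```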
Test this by typing `ssh ip-10-0-10-1.us-west-1.compute.internal`
(using a hostname that exists in your environment, of course). If things
are configured correctly you will ssh to 10.0.10.1, jumping
transparently via your bastion host.
Getting this working is important because we'll be using Ansible
with the SSH transport, and it will rely on this configuration
being in place in order to configure your servers.
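A quick way to confirm that Ansible can reach your hosts through the jumpbox is
an ad-hoc ping; this assumes you are using the `ec2.py` dynamic inventory
script described below:
```
cd playbooks
# Pings every host ec2.py can see; narrow the 'all' pattern to a specific
# group once your inventory groups are in place.
ansible all -u ubuntu -i ./ec2.py -m ping
```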
### Tagging
Tagging is the bridge between the provisioning and configuration
phases. The servers provisioned in your VPC will be stock Ubuntu
12.04 LTS servers. The only difference between them will be the tags
that CloudFormation has applied to them. These tags will be used by Ansible
to map playbooks to the correct servers. Applying the
appropriate playbook will turn each stock host into an appropriately
configured service.
The *Group* tag is where the magic happens. Every AWS EC2 instance
will have a *Group* tag that corresponds to a group of machines that
need to be deployed and targeted together as a group of servers.
**Example:**
* `Group`: `edxapp_stage`
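To see how the tag drives configuration, the group that `ec2.py` derives from
it can then be targeted by the matching playbook; a sketch, assuming the
`edxapp_stage.yml` playbook referenced later in this document:
```
cd playbooks
# ec2.py turns the Group tag into inventory groups; the exact group name it
# generates (and the playbook's host pattern) may differ in your setup.
ansible-playbook edxapp_stage.yml -u ubuntu -i ./ec2.py
```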
......@@ -31,9 +129,9 @@ version instead of the official v1.1 release._
specific variables.
* __Groups__ - A Group name is an identifier that corresponds to a group of
roles plus an identifier for the environment. Example: *edxapp_stage*,
*edxapp_prod*, *xserver_stage*, etc. For the purpose of targetting servers
*edxapp_prod*, *xserver_stage*, etc. For the purpose of targeting servers
for deployment groups are created automatically by the `ec2.py` inventory
sript since these group names will map to the _Group_ AWS tag.
script since these group names will map to the _Group_ AWS tag.
* __Roles__ - A role will map to a single function/service that runs on
server.
......@@ -44,9 +142,9 @@ version instead of the official v1.1 release._
As a general policy we want to protect the following data:
* Usernames
* Public keys (keys are ok to be public, but can be used to figure out usernames)
* Public keys (keys are OK to be public, but can be used to figure out usernames)
* Hostnames
* Passwords, api keys
* Passwords, API keys
The following yml files and examples serve as templates that should be overridden with your own
environment specific configuration:
......@@ -54,7 +152,7 @@ environment specific configuration:
* vars in `secure_example/vars`
* files in `secure_example/files`
Directory structure for the secure repo:
Directory structure for the secure repository:
```
......@@ -166,6 +264,8 @@ playbooks
#### Provision the stack
**This assumes that you have working ssh as described above**
```
cd playbooks
ansible-playbook -vvv cloudformation.yml -i inventory.ini -e 'region=<aws_region> key=<key_name> name=<stack_name> group=<group_name>'
......@@ -211,36 +311,6 @@ If that works fine, then you can add an export of PYTHONPATH to
* Creates base directories
* Creates the lms json configuration files
Because the reference architecture makes use of an Amazon VPC, you will not be able
to address the hosts in the private subnets directly. However, you can easily set
up a transparent "jumpbox" so that for all hosts in your vpc, connections are
tunneled.
Add something like the following to your `~/.ssh/config` file.
```
Host *.us-west-1.compute-internal
ProxyCommand ssh -W %h:%p vpc-00000000-jumpbox
IdentityFile /path/to/aws/key.pem
ForwardAgent yes
User ubuntu
Host vpc-00000000-jumpbox
HostName 54.236.224.226
IdentityFile /path/to/aws/key.pem
ForwardAgent yes
User ubuntu
```
This assumes that you only have one VPC in the ```us-west-1``` region
that you're trying to ssh into. Internal DNS names aren't qualified
any further than that, so to support multiple VPC's you'd have to get
creative with subnets, for example ip-10-1 and ip-10-2...
Test this by typing `ssh ip-10-0-10-1.us-west-1.compute.internal`,
(of course using a hostname exists in your environment.) If things
are configured correctly you will ssh to 10.0.10.1, jumping
transparently via your basion host.
Assuming that the edxapp_stage.yml playbook targets hosts in your vpc
for which there are entiries in your `.ssh/config`, do the
......
......@@ -9,6 +9,7 @@
- nginx
- gunicorn
- lms
- cms
- ruby
- npm
# run this role last
......
---
app_base_dir: /opt/wwc
log_base_dir: /mnt/logs
venv_dir: /opt/edx
# these pathes are relative to the playbook dir
......@@ -7,4 +8,4 @@ venv_dir: /opt/edx
secure_dir: '../../edx-secret'
# this indicates the path to site-specific (with precedence)
# things like nginx template files
local_dir: '../../ansible_local'
\ No newline at end of file
local_dir: '../../ansible_local'
......@@ -4,22 +4,30 @@
# - nginx/tasks/main.yml
---
- name: create cms application config
template: src=env.json.j2 dest=$app_base_dir/cms.env.json
template: src=env.json.j2 dest=$app_base_dir/cms.env.json mode=644
tags:
- cms-env
- cms
- name: create cms auth file
template: src=auth.json.j2 dest=$app_base_dir/cms.auth.json
template: src=auth.json.j2 dest=$app_base_dir/cms.auth.json mode=644
tags:
- cms-env
- cms
- include: ../../nginx/tasks/nginx_site.yml state=link site_name=cms
tags:
- cms
- cms-env
- include: ../../nginx/tasks/nginx_site.yml state=link site_name=cms-backend
- name: Create CMS log target directory
file: path={{log_base_dir}}/cms state=directory owner=syslog group=adm mode=2770
tags:
- cms
- cms
- cms-env
- logging
# If we set up CMS, we have to set up edx logging
- include: ../../common/tasks/edx_logging_base.yml
# Creates LMS upstart file
- include: ../../gunicorn/tasks/upstart.yml service_variant=cms
......@@ -26,7 +26,7 @@ cms_env_config:
'KEY_FUNCTION': 'util.memcache.safe_key'
'KEY_PREFIX': 'cms.edx.org'
'LOCATION': [ "deploycache-large.foo-bar.amazonaws.com:11211" ]
'LOG_DIR': '/mnt/logs/edx'
'LOG_DIR': '{{log_base_dir}}/edx'
'LOGGING_ENV': 'cms-dev'
'SITE_NAME': 'studio.cms-dev.m.edx.org'
'SYSLOG_SERVER': 'syslog.a.m.i4x.org'
......
#!/bin/bash
function usage() {
echo "update.sh [cms|lms|all|none]"
echo " option is what services to collectstatic and restart (default=all)"
}
if [ $# -gt 1 ]; then
usage
exit 1
fi
if [ $# == 0 ]; then
do_CMS=1
do_LMS=1
else
case $1 in
cms)
do_CMS=1
;;
lms)
do_LMS=1
;;
both|all)
do_CMS=1
do_LMS=1
;;
none)
;;
*)
usage
exit 1
;;
esac
fi
function run() {
echo
echo "======== $@"
$@
}
source /etc/profile
source /opt/edx/bin/activate
export PATH=$PATH:/opt/www/.gem/bin
cd /opt/wwc/mitx
BRANCH="origin/feature/edx-west/stanford-theme"
export GIT_SSH="/tmp/git_ssh.sh"
run git fetch origin -p
run git checkout $BRANCH
if [[ $do_CMS ]]; then
export SERVICE_VARIANT=cms
run rake cms:gather_assets:aws
run sudo restart cms
fi
if [[ $do_LMS ]]; then
export SERVICE_VARIANT=lms
run rake lms:gather_assets:aws
run sudo restart lms
fi
---
- name: restart rsyslogd
service: name=rsyslog state=restarted
sudo: True
......@@ -5,6 +5,7 @@
tags:
- users
- admin_users
- name: Add user 'ubuntu' to 'edx' group
# This is a temporary measure for initial configuration; after the last
# play is run and we've got a good set of users, ubuntu should no longer be used
......@@ -13,6 +14,7 @@
tags:
- users
- admin_users
- name: Creating admin users
# Admin users, by definition, should be able to sudo w/ password, and read adm-only files
user: name={{ item.user }} append=yes groups={{ "adm,edx,"+",".join(item.groups) }} shell=/bin/bash
......@@ -22,6 +24,7 @@
tags:
- users
- admin_users
- name: Copying ssh keys for admin users
authorized_key: user={{ item.user }} key="{{lookup('file', item.path)}}"
with_items: admin_keys
......@@ -29,15 +32,24 @@
tags:
- users
- admin_users
- name: Creating env users
user: name={{ item.user }} groups={{ ",".join(item.groups) }} shell=/bin/bash
with_items: env_users
when: env_users is defined
tags:
- users
- name: Copying ssh keys for env users
authorized_key: user={{ item.user }} key="{{lookup('file', item.path)}}"
with_items: env_keys
when: env_keys is defined
tags:
- users
- name: Group adm passwordless sudo
copy: content="%adm ALL=(ALL) NOPASSWD:ALL" dest=/etc/sudoers.d/adm-group owner=root group=root mode=0440
tags:
- users
- admin_users
---
- name: Install rsyslog configuration for edX
template: dest=/etc/rsyslog.d/99-edx.conf src=edx_rsyslog.j2 owner=root group=root mode=644
notify: restart rsyslogd
tags:
- logging
- name: Install logrotate configuration for edX
template: dest=/etc/logrotate.d/edx-services src=edx_logrotate.j2 owner=root group=root mode=644
tags:
- logging
- name: Touch tracking file into existence
command: touch -a {{log_base_dir}}/tracking.log creates={{log_base_dir}}/tracking.log
tags:
- logging
- name: Set permissions on tracking file
file: path={{log_base_dir}}/tracking.log owner=syslog group=adm mode=750
tags:
- logging
- name: Install logrotate configuration for tracking file
template: dest=/etc/logrotate.d/tracking.log src=edx_logrotate_tracking_log.j2 owner=root group=root mode=644
tags:
- logging
......@@ -2,40 +2,74 @@
- include: create_users.yml
- name: Create application root
# In the future consider making group edx r/t adm
file: path=$app_base_dir state=directory owner=root group=adm mode=2775
file: path=$app_base_dir state=directory owner=root group=adm mode=2775
sudo: True
tags:
- pre_install
- name: Create log directory
file: path=/mnt/logs state=directory mode=2770 group=adm owner=root
- name: Create upload directory
file: path=$app_base_dir/uploads mode=2775 state=directory owner=root group=adm
sudo: True
tags:
- pre_install
- name: Create aliases to the log directory
file: state=link src=/mnt/logs path=$app_base_dir/log
tags:
- pre_install
- name: Touch the edx log file into place
command: touch -a /mnt/logs/edx.log
tags:
- pre_install
- name: Update apt cache
apt: update_cache=yes
- name: Create data dir
file: path={{ app_base_dir }}/data state=directory owner=root group=root
sudo: True
tags:
- pre_install
- name: Install role-independent useful system packages
apt: pkg={{item}} install_recommends=yes state=present
# do this before log dir setup; rsyslog package guarantees syslog user present
apt: pkg={{item}} install_recommends=yes state=present update_cache=yes
sudo: True
with_items:
- ack-grep
- lynx-cur
- logrotate
- mosh
- most
- rsyslog
- screen
- python-pip
- tree
tags:
- pre_install
- include: create_venv.yml
\ No newline at end of file
- name: Create log directory
file: path=$log_base_dir state=directory mode=2770 group=adm owner=syslog
sudo: True
tags:
- pre_install
- name: Create alias from app_base_dir to the log_base_dir
file: state=link src=$log_base_dir path=$app_base_dir/log
sudo: True
tags:
- pre_install
- logging
- name: Create convenience link from log_base_dir to system logs
file: state=link src=/var/log path=$log_base_dir/system
sudo: True
tags:
- pre_install
- logging
- name: Touch edx log file into place
# This is done for the benefit of the rake commands, which expect it
command: touch -a {{log_base_dir}}/edx.log creates={{log_base_dir}}/edx.log
tags:
- pre_install
- logging
- name: Set permissions on edx log file
# This is done for the benefit of the rake commands, which expect it
file: path={{log_base_dir}}/edx.log owner=syslog group=adm mode=770
sudo: True
tags:
- pre_install
- logging
- include: create_venv.yml
- include: edx_logging_base.yml
- include: software_update.yml
\ No newline at end of file
---
- name: edx-update.sh, manual lms/cms update script
copy: src=roles/common/files/edx-update.sh dest=/usr/local/bin/edx-update.sh owner=ubuntu group=adm mode=0775
tags:
- update
{{log_base_dir}}/*/edx.log {
create
compress
copytruncate
delaycompress
dateext
missingok
notifempty
daily
rotate 90
size 1M
}
{{log_base_dir}}/tracking.log {
create
compress
delaycompress
dateext
missingok
notifempty
daily
rotate 365000
size 1M
}
# custom edx syslog configuration
# Put in place and templatized by ansible 
#
# Cliffs notes version: ansible uses local0 and local1, so they have to be
# plumbed through appropriately.
 
#############
# Change some global configuration
#############
# don't escape newlines
$EscapeControlCharactersOnReceive off
$SystemLogRateLimitInterval 0
$RepeatedMsgReduction off
$MaxMessageSize 32768
 
#############
# Override default auth config so we can ignore local0 and local1 also
#############
auth,authpriv.* /var/log/auth.log
*.*;auth,authpriv.none,local0.none,local1.none -/var/log/syslog
 
# According to the docs for rsyslog, "syslogtag" is the "TAG" from
# the message which in the case of tracking logs is interpreted to
# be everything before the first whitespace character.
# This is why we include "syslogtag."
# Maybe one day this will be answered:
# - http://stackoverflow.com/questions/10449447/how-to-avoid-syslogtag-from-rsyslog-template
$template tracking,"%syslogtag%%msg%\n"
 
# looks for [service_variant=<name>] at the beginning of the log message;
# if it exists the log will go into {{log_base_dir}}/<name>/edx.log, otherwise
# it will go into {{log_base_dir}}/edx.log
$template DynaFile,"{{log_base_dir}}/%syslogtag:R,ERE,1,BLANK:\[service_variant=([a-zA-Z_-]*)\].*--end%/edx.log"
 
local0.* -?DynaFile
local1.* {{log_base_dir}}/tracking.log;tracking
# gunicorn
description "gunicorn server"
author "Calen Pennington <cpennington@mitx.mit.edu>"
start on runlevel [2345]
stop on runlevel [!2345]
respawn
respawn limit 3 30
env PID=/var/tmp/cms.pid
#env NEW_RELIC_CONFIG_FILE=${app_base_dir}/newrelic.ini
#env NEWRELIC=${venv_dir}/bin/newrelic-admin
env WORKERS={{ ansible_processor_cores * 2 }}
env PORT=8010
env LANG=en_US.UTF-8
env DJANGO_SETTINGS_MODULE=cms.envs.aws
env SERVICE_VARIANT="cms"
chdir ${app_base_dir}/mitx
setuid www-data
exec ${venv_dir}/bin/gunicorn_django -b 127.0.0.1:$PORT -w $WORKERS --timeout=300 --pythonpath=${app_base_dir}/mitx --settings=cms.envs.aws
......@@ -10,9 +10,9 @@ respawn
respawn limit 3 30
env PID=/var/tmp/lms.pid
#env NEW_RELIC_CONFIG_FILE=${app_base_dir}/opt/wwc/newrelic.ini
#env NEWRELIC=${app_base_dir}/bin/newrelic-admin
env WORKERS=${lms_num_workers}
#env NEW_RELIC_CONFIG_FILE=${app_base_dir}/newrelic.ini
#env NEWRELIC=${venv_dir}/bin/newrelic-admin
env WORKERS={{ ansible_processor_cores * 2 }}
env PORT=8000
env LANG=en_US.UTF-8
env DJANGO_SETTINGS_MODULE=lms.envs.aws
......
---
lms_num_workers: 1
\ No newline at end of file
......@@ -18,6 +18,22 @@
- include: ../../nginx/tasks/nginx_site.yml state=link site_name=lms
- include: ../../nginx/tasks/nginx_site.yml state=link site_name=lms-backend
- name: Change permissions on datadir
file: path={{ app_base_dir }}/data state=directory owner=www-data group=www-data
tags:
- cms
- lms
- lms-env
- name: Create lms log target directory
file: path={{log_base_dir}}/lms state=directory owner=syslog group=adm mode=2770
tags:
- lms
- lms-env
- logging
# If we set up LMS, we have to set up edx logging
- include: ../../common/tasks/edx_logging_base.yml
# Install ssh keys for ubuntu account to be able to check out from mitx
# Temprory behavior, not needed after June 1. Perhaps still useful as a recipe.
......
......@@ -24,24 +24,31 @@ lms_env_config:
'BACKEND': 'django.core.cache.backends.memcached.MemcachedCache'
'KEY_FUNCTION': 'util.memcache.safe_key'
'CERT_QUEUE': 'certificates'
'COURSE_LISTINGS':
'default': ['MITx/6.002x/2012_Fall']
'stage-berkeley': [ 'BerkeleyX/CS169/fa12']
'stage-harvard': [ 'HarvardX/CS50/2012H']
'stage-mit': [ 'MITx/3.091/MIT_2012_Fall']
'stage-num': [ 'MITx/6.002x-NUM/2012_Fall_NUM']
'stage-sjsu': [ 'MITx/6.002x-EE98/2012_Fall_SJSU']
# 'COURSE_LISTINGS':
# 'default': ['MITx/6.002x/2012_Fall']
# 'stage-berkeley': [ 'BerkeleyX/CS169/fa12']
# 'stage-harvard': [ 'HarvardX/CS50/2012H']
# 'stage-mit': [ 'MITx/3.091/MIT_2012_Fall']
# 'stage-num': [ 'MITx/6.002x-NUM/2012_Fall_NUM']
# 'stage-sjsu': [ 'MITx/6.002x-EE98/2012_Fall_SJSU']
'LOCAL_LOGLEVEL': 'INFO'
'META_UNIVERSITIES':
'UTx': [ 'UTAustinX']
'MITX_FEATURES': { 'AUTH_USE_OPENID_PROVIDER': true,
'CERTIFICATES_ENABLED': true, 'ENABLE_DISCUSSION_SERVICE': true,
'ENABLE_INSTRUCTOR_ANALYTICS': true, 'ENABLE_PEARSON_HACK_TEST': true,
'SUBDOMAIN_BRANDING': true, 'SUBDOMAIN_COURSE_LISTINGS': true}
'SUBDOMAIN_BRANDING': { 'stage-berkeley': 'BerkeleyX',
'stage-harvard': 'HarvardX', 'stage-mit': 'MITx',
'stage-num': 'MITx', 'stage-sjsu': 'MITx'}
'VIRTUAL_UNIVERSITIES': []
# 'META_UNIVERSITIES':
# 'UTx': [ 'UTAustinX']
'MITX_FEATURES':
'AUTH_USE_OPENID_PROVIDER': true
'CERTIFICATES_ENABLED': true
'ENABLE_DISCUSSION_SERVICE': true
'ENABLE_INSTRUCTOR_ANALYTICS': true
'ENABLE_PEARSON_HACK_TEST': false
'SUBDOMAIN_BRANDING': false
'SUBDOMAIN_COURSE_LISTINGS': false
# 'SUBDOMAIN_BRANDING':
# 'stage-berkeley': 'BerkeleyX'
# 'stage-harvard': 'HarvardX'
# 'stage-mit': 'MITx'
# 'stage-num': 'MITx'
# 'stage-sjsu': 'MITx'
# 'VIRTUAL_UNIVERSITIES': []
'WIKI_ENABLED': true
lms_source_repo: git@github.com:edx/mitx.git
......
......@@ -29,3 +29,22 @@
service: name=nginx state=started
tags:
- nginx
- name: Create nginx log file location (just in case)
file: path={{log_base_dir}}/nginx state=directory owner=syslog group=adm mode=2770
tags:
- nginx
- logging
# Commented out until default config has nginx log to {{log_base_dir}}/nginx
# and also until default logrotate task 'nginx' gets removed
###
#- name: Set up nginx access log rotation
# template: dest=/etc/logrotate.d/nginx-access src=edx_logrotate_nginx_access.j2 owner=root group=root mode=644
# tags:
# - logging
#
#- name: Set up nginx access log rotation
# template: dest=/etc/logrotate.d/nginx-error src=edx_logrotate_nginx_error.j2 owner=root group=root mode=644
# tags:
# - logging
......@@ -10,6 +10,7 @@
tags:
- nginx
- lms
- cms
- nginx-env
- name: Creating nginx config link {{ site_name }}
......@@ -18,4 +19,5 @@
tags:
- nginx
- lms
- cms
- nginx-env
{{log_base_dir}}/nginx/access.log {
create
compress
delaycompress
dateext
missingok
notifempty
daily
rotate 90
size 1M
}
{{log_base_dir}}/nginx/error.log {
create
compress
delaycompress
dateext
missingok
notifempty
daily
rotate 90
size 1M
}
......@@ -72,3 +72,76 @@ lms_env_config:
'LOGGING_ENV': 'hidden-prod'
'SESSION_COOKIE_DOMAIN': 'hidden-prod'
'COMMENTS_SERVICE_KEY': 'hidden-prod'
cms_auth_config:
'AWS_ACCESS_KEY_ID': 'hidden-prod'
'AWS_SECRET_ACCESS_KEY': 'hidden-prod'
'CONTENTSTORE':
'OPTIONS':
'db': 'hidden-prod'
'host': [ 'hidden-prod', 'hidden-prod']
'password': 'hidden-prod'
'port': 0000
'user': 'hidden-prod'
'DATABASES':
'default': { 'ENGINE': 'hidden-prod',
'HOST': 'hidden-prod', 'NAME': 'hidden-prod',
'PASSWORD': 'hidden-prod', 'PORT': 0000,
'USER': 'hidden-prod'}
'MODULESTORE':
'default':
'ENGINE': 'xmodule.modulestore.mongo.DraftMongoModuleStore'
'OPTIONS':
'collection': 'hidden-prod'
'db': 'hidden-prod'
'default_class': 'hidden-prod'
'fs_root': 'hidden-prod'
'host': [ 'hidden-prod', 'hidden-prod']
'password': 'hidden-prod'
'port': 0000
'render_template': 'hidden-prod'
'user': 'hidden-prod'
'direct':
'ENGINE': 'xmodule.modulestore.mongo.MongoModuleStore'
'OPTIONS':
'collection': 'hidden-prod'
'db': 'hidden-prod'
'default_class': 'hidden-prod'
'fs_root': 'hidden-prod'
'host': [ 'hidden-prod', 'hidden-prod']
'password': 'hidden-prod'
'port': 0000
'render_template': 'hidden-prod'
'user': 'hidden-prod'
'SECRET_KEY': 'hidden-prod'
cms_env_config:
'CACHES':
'default':
'KEY_PREFIX': 'hidden-prod'
'LOCATION': [ 'hidden-prod',
'hidden-prod']
'general':
'KEY_PREFIX': 'hidden-prod'
'LOCATION': [ 'hidden-prod',
'hidden-prod']
'mongo_metadata_inheritance':
'KEY_PREFIX': 'hidden-prod'
'LOCATION': [ 'hidden-prod',
'hidden-prod']
'staticfiles':
'KEY_PREFIX': 'hidden-prod'
'LOCATION': [ 'hidden-prod',
'hidden-prod']
'LOG_DIR': 'hidden-prod'
'LOGGING_ENV': 'hidden-prod'
'SITE_NAME': 'hidden-prod'
'SYSLOG_SERVER': 'hidden-prod'
'LMS_BASE': 'hidden-prod'
'SESSION_COOKIE_DOMAIN': 'hidden-prod'
'SEGMENT_IO_KEY': 'hidden-prod'
'MITX_FEATURES':
'DISABLE_COURSE_CREATION': false
'SEGMENT_IO': false