Commit a65d9f0a by Ned Batchelder

Convert .md to .rst

parent 03527b9e
# Configuration Management
## Introduction
The goal of the edx/configuration project is to provide a simple, but
flexible, way for anyone to stand up an instance of Open edX that is
fully configured and ready-to-go.
Before getting started, please look at the [Open edX Installation options](https://open.edx.org/installation-options) to see which method for deploying Open edX is right for you.
Building the platform takes place in two phases:
* Infrastructure provisioning
* Service configuration
As much as possible, we have tried to keep a clean distinction between
provisioning and configuration. You are not obliged to use our tools
and are free to use one, but not the other. The provisioning phase
stands up the required resources and tags them with role identifiers
so that the configuration tool can come in and complete the job.
__Note__: The CloudFormation templates used for infrastructure provisioning
are no longer maintained. We are working to move to a more modern and flexible tool.
The reference platform is provisioned using an Amazon
[CloudFormation](http://aws.amazon.com/cloudformation/) template.
When the stack has been fully created you will have a new AWS Virtual
Private Cloud with hosts for the core Open edX services. This template
will build quite a number of AWS resources that cost money, so please
consider this before you start.
The configuration phase is managed by [Ansible](http://ansible.com/).
We have provided a number of playbooks that will configure each of
the Open edX services.
__Important__:
The Open edX configuration scripts need to be run as root on your servers and will make changes to service configurations including, but not limited to, sshd, dhclient, sudo, apparmor and syslogd. Our scripts are made available as we use them and they implement our best practices. We strongly recommend that you review everything that these scripts will do before running them against your servers. We also recommend against running them against servers that are hosting other applications. No warranty is expressed or implied.
For more information, including installation instructions, please see the [OpenEdX Wiki](https://openedx.atlassian.net/wiki/display/OpenOPS/Open+edX+Operations+Home).
For info on any large recent changes, please see the [change log](https://github.com/edx/configuration/blob/master/CHANGELOG.md).
Configuration Management
########################
Introduction
************
The goal of the edx/configuration project is to provide a simple, but flexible,
way for anyone to stand up an instance of Open edX that is fully configured and
ready-to-go.
Before getting started, please look at the `Open EdX Installation options`_ to
see which method for deploying Open edX is right for you.
Building the platform takes place in two phases:
- Infrastructure provisioning
- Service configuration
As much as possible, we have tried to keep a clean distinction between
provisioning and configuration. You are not obliged to use our tools and are
free to use one, but not the other. The provisioning phase stands up the
required resources and tags them with role identifiers so that the
configuration tool can come in and complete the job.
**Note**: The CloudFormation templates used for infrastructure provisioning are
no longer maintained. We are working to move to a more modern and flexible
tool.
The reference platform is provisioned using an Amazon `CloudFormation`_
template. When the stack has been fully created you will have a new AWS Virtual
Private Cloud with hosts for the core Open edX services. This template will
build quite a number of AWS resources that cost money, so please consider this
before you start.
The configuration phase is managed by `Ansible`_. We have provided a number of
playbooks that will configure each of the Open edX services.
**Important**: The Open edX configuration scripts need to be run as root on
your servers and will make changes to service configurations including, but not
limited to, sshd, dhclient, sudo, apparmor and syslogd. Our scripts are made
available as we use them and they implement our best practices. We strongly
recommend that you review everything that these scripts will do before running
them against your servers. We also recommend against running them against
servers that are hosting other applications. No warranty is expressed or
implied.
For more information, including installation instructions, please see the
`OpenEdX Wiki`_.
For info on any large recent changes, please see the `change log`_.
.. _Open EdX Installation options: https://open.edx.org/installation-options
.. _CloudFormation: http://aws.amazon.com/cloudformation/
.. _Ansible: http://ansible.com/
.. _OpenEdX Wiki: https://openedx.atlassian.net/wiki/display/OpenOPS/Open+edX+Operations+Home
.. _change log: https://github.com/edx/configuration/blob/master/CHANGELOG.md
# Docker Support
## Introduction
Docker support for edX services is volatile and experimental.
We welcome interested testers and contributors. If you are
interested in participating, please join us on Slack at
https://openedx.slack.com/messages/docker.
We do not run these images in production, and may never do so.
They are not currently suitable for production use.
## Tooling
`Dockerfile`s for individual services should be placed in
`docker/build/<service>`. There should be an accompanying `ansible_overrides.yml`
which specifies any docker-specific configuration values.
Once the `Dockerfile` has been created, it can be built and published
using a set of make commands.
```shell
make docker.build.<service> # Build the service container (but don't tag it)
# By convention, this will build the container using
# the currently checked-out configuration repository,
# and will build on top of the most-recently available
# base container image from dockerhub.
make docker.test.<service> # Test that the Dockerfile for <service> will build.
# This will rebuild any edx-specific containers that
# the Dockerfile depends on as well, in case there
# are failures as a result of changes to the base image.
make docker.pkg.<service> # Package <service> for publishing to Dockerhub. This
# will also package and tag pre-requisite service containers.
make docker.push.<service> # Push <service> to Dockerhub as latest.
```
## Image naming
Images built from master branches are named `edxops/<service>`, for example,
`edxops/edxapp`. Images built from Open edX release branches include the
short release name: `edxops/ficus/edxapp`. Both images will have a `:latest`
version.
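As a quick sketch of how those names compose (the service and release names here are only examples, not from this document):

```shell
# Illustrative only: "edxapp" and "ficus" stand in for any service/release.
SERVICE=edxapp
RELEASE=ficus

# master builds: edxops/<service>
MASTER_IMAGE="edxops/${SERVICE}:latest"

# release builds: the short release name becomes a path component
RELEASE_IMAGE="edxops/${RELEASE}/${SERVICE}:latest"

echo "$MASTER_IMAGE"
echo "$RELEASE_IMAGE"
```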
## Build arguments
Dockerfiles make use of these build arguments:
* `OPENEDX_RELEASE` is the release branch to use. It defaults to "master".
To use an Open edX release, provide the full branch name:
```
--build-arg OPENEDX_RELEASE=open-release/ficus.master
```
* `IMAGE_PREFIX` is the release branch component to add to images. It defaults
to an empty string for master builds. For an Open edX release, use the short
name of the release, with a trailing slash:
```
--build-arg IMAGE_PREFIX=ficus/
```
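Putting the two arguments together, a release build might look like the following sketch ("edxapp" is an illustrative service name; the command is composed into a variable here rather than executed):

```shell
# Illustrative sketch: compose a release build command using both arguments.
OPENEDX_RELEASE=open-release/ficus.master
IMAGE_PREFIX=ficus/
BUILD_CMD="docker build --build-arg OPENEDX_RELEASE=${OPENEDX_RELEASE} --build-arg IMAGE_PREFIX=${IMAGE_PREFIX} -f docker/build/edxapp/Dockerfile ."
echo "$BUILD_CMD"
```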
## Conventions
In order to facilitate development, Dockerfiles should be based on
one of the `edxops/<ubuntu version>-common` base images, and should
`COPY . /edx/app/edx_ansible/edx_ansible` in order to load your local
ansible plays into the image. The actual work of configuring the image
should be done by executing ansible (rather than explicit steps in the
Dockerfile), unless those steps are docker-specific. Devstack-specific
steps can be tagged with the `devstack:install` tag so that they
only run when building a devstack image.
The user used in the `Dockerfile` should be `root`.
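A minimal Dockerfile sketch following these conventions (the base image tag, play name, and play directory are illustrative, not taken from this document):

```dockerfile
# Sketch only: "xenial-common" and "myservice.yml" are illustrative names.
FROM edxops/xenial-common:latest

# Load the local ansible plays into the image, per the convention above.
COPY . /edx/app/edx_ansible/edx_ansible

# Do the actual configuration work by running ansible, rather than with
# explicit Dockerfile steps (unless a step is docker-specific).
WORKDIR /edx/app/edx_ansible/edx_ansible/docker/plays
RUN ansible-playbook myservice.yml -i '127.0.0.1,' -c local

# Build and run as root, per the convention above.
USER root
```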
Docker Support
##############
Introduction
************
Docker support for edX services is volatile and experimental. We welcome
interested testers and contributors. If you are interested in participating,
please join us on Slack at https://openedx.slack.com/messages/docker.
We do not run these images in production, and may never do so. They are not
currently suitable for production use.
Tooling
*******
``Dockerfile``\ s for individual services should be placed in
``docker/build/<service>``. There should be an accompanying
``ansible_overrides.yml`` which specifies any docker-specific configuration
values.
Once the ``Dockerfile`` has been created, it can be built and published using a
set of make commands.
.. code:: shell

   make docker.build.<service>  # Build the service container (but don't tag it)
                                # By convention, this will build the container using
                                # the currently checked-out configuration repository,
                                # and will build on top of the most-recently available
                                # base container image from dockerhub.
   make docker.test.<service>   # Test that the Dockerfile for <service> will build.
                                # This will rebuild any edx-specific containers that
                                # the Dockerfile depends on as well, in case there
                                # are failures as a result of changes to the base image.
   make docker.pkg.<service>    # Package <service> for publishing to Dockerhub. This
                                # will also package and tag pre-requisite service containers.
   make docker.push.<service>   # Push <service> to Dockerhub as latest.
Image naming
************
Images built from master branches are named ``edxops/<service>``, for example,
``edxops/edxapp``. Images built from Open edX release branches include the
short release name: ``edxops/ficus/edxapp``. Both images will have a
``:latest`` version.
Build arguments
***************
Dockerfiles make use of these build arguments:
- ``OPENEDX_RELEASE`` is the release branch to use. It defaults to "master".
To use an Open edX release, provide the full branch name:
``--build-arg OPENEDX_RELEASE=open-release/ficus.master``
- ``IMAGE_PREFIX`` is the release branch component to add to images. It
defaults to an empty string for master builds. For an Open edX release, use the
short name of the release, with a trailing slash:
``--build-arg IMAGE_PREFIX=ficus/``
Conventions
***********
In order to facilitate development, Dockerfiles should be based on one of the
``edxops/<ubuntu version>-common`` base images, and should
``COPY . /edx/app/edx_ansible/edx_ansible`` in order to load your local ansible
plays into the image. The actual work of configuring the image should be done
by executing ansible (rather than explicit steps in the Dockerfile), unless
those steps are docker-specific. Devstack-specific steps can be tagged with the
``devstack:install`` tag so that they only run when building a devstack
image.
The user used in the ``Dockerfile`` should be ``root``.
## Usage
Start the container with this:
```docker run -ti -e GO_SERVER=your.go.server.ip_or_host gocd/gocd-agent```
If you need to start a few GoCD agents together, you can of course use the shell to do that. Start a few agents in the background, like this:
```for each in 1 2 3; do docker run -d --link angry_feynman:go-server gocd/gocd-agent; done```
## Getting into the container
Sometimes, you need a shell inside the container (to create test repositories, etc). docker provides an easy way to do that:
```docker exec -i -t CONTAINER-ID /bin/bash```
To check the agent logs, you can do this:
```docker exec -i -t CONTAINER-ID tail -f /var/log/go-agent/go-agent.log```
## Agent Configuration
The go-agent expects its configuration to be found at ```/var/lib/go-agent/config/```. Sharing the
configuration between containers is done by mounting a volume at this location that contains any configuration files
necessary.
**Example docker run command:**
```docker run -ti -v /tmp/go-agent/conf:/var/lib/go-agent/config -e GO_SERVER=gocd.sandbox.edx.org 718d75c467c0 bash```
[How to setup auto registration for remote agents](https://docs.go.cd/current/advanced_usage/agent_auto_register.html)
## Building and Uploading the container to ECS
* Copy the go-agent GitHub private key to this path:
- ```docker/build/go-agent/files/go_github_key.pem```
- A dummy key is in the repo file.
- The actual private key is kept in LastPass - see DevOps for access.
- WARNING: Do *NOT* commit/push the real private key to the public configuration repo!
* Create image
- This must be run from the root of the configuration repository
- ```docker build -f docker/build/go-agent/Dockerfile .```
- or
- ```make docker.test.go-agent```
* Log docker in to AWS
- Assume the role of the account you wish to log in to
- ```source assume_role.sh <account name>```
- ```sh -c `aws ecr get-login --region us-east-1` ```
- You might need to remove the `-e` option returned by that command in order to log in successfully.
* Tag image
- ```docker tag <image_id> ############.dkr.ecr.us-east-1.amazonaws.com/prod-tools-goagent:latest```
- ```docker tag <image_id> ############.dkr.ecr.us-east-1.amazonaws.com/prod-tools-goagent:<version_number>```
* Upload:
- ```docker push ############.dkr.ecr.us-east-1.amazonaws.com/edx/release-pipeline/prod-tools-goagent:latest```
- ```docker push ############.dkr.ecr.us-east-1.amazonaws.com/edx/release-pipeline/prod-tools-goagent:<version_number>```
Usage
#####
Start the container with this:
``docker run -ti -e GO_SERVER=your.go.server.ip_or_host gocd/gocd-agent``
If you need to start a few GoCD agents together, you can of course use the
shell to do that. Start a few agents in the background, like this:
``for each in 1 2 3; do docker run -d --link angry_feynman:go-server gocd/gocd-agent; done``
Getting into the container
##########################
Sometimes, you need a shell inside the container (to create test repositories,
etc). docker provides an easy way to do that:
``docker exec -i -t CONTAINER-ID /bin/bash``
To check the agent logs, you can do this:
``docker exec -i -t CONTAINER-ID tail -f /var/log/go-agent/go-agent.log``
Agent Configuration
###################
The go-agent expects its configuration to be found at
``/var/lib/go-agent/config/``. Sharing the configuration between containers is
done by mounting a volume at this location that contains any configuration
files necessary.
**Example docker run command:**
``docker run -ti -v /tmp/go-agent/conf:/var/lib/go-agent/config -e GO_SERVER=gocd.sandbox.edx.org 718d75c467c0 bash``
`How to setup auto registration for remote agents`_
Building and Uploading the container to ECS
###########################################
- Copy the go-agent GitHub private key to this path:
- ``docker/build/go-agent/files/go_github_key.pem``
- A dummy key is in the repo file.
- The actual private key is kept in LastPass - see DevOps for access.
- WARNING: Do *NOT* commit/push the real private key to the public
configuration repo!
- Create image
- This must be run from the root of the configuration repository
- ``docker build -f docker/build/go-agent/Dockerfile .``
- or
- ``make docker.test.go-agent``
- Log docker in to AWS
- Assume the role of the account you wish to log in to
- ``source assume_role.sh <account name>``
- ``sh -c "$(aws ecr get-login --region us-east-1)"``
- You might need to remove the ``-e`` option returned by that command in
  order to log in successfully.
- Tag image
- ``docker tag <image_id> ############.dkr.ecr.us-east-1.amazonaws.com/prod-tools-goagent:latest``
- ``docker tag <image_id> ############.dkr.ecr.us-east-1.amazonaws.com/prod-tools-goagent:<version_number>``
- Upload:
- ``docker push ############.dkr.ecr.us-east-1.amazonaws.com/edx/release-pipeline/prod-tools-goagent:latest``
- ``docker push ############.dkr.ecr.us-east-1.amazonaws.com/edx/release-pipeline/prod-tools-goagent:<version_number>``
.. _How to setup auto registration for remote agents: https://docs.go.cd/current/advanced_usage/agent_auto_register.html
This directory contains playbooks used by edx-east
for provisioning
```
ansible-playbook -c ssh -vvv --user=ubuntu <playbook> -i ./ec2.py -e 'secure_dir=path/to/configuration-secure/ansible'
```
Historical note: "edx-east" represents the edX organization in Cambridge, MA. At one point, an "edx-west" notion existed - a name which represented Stanford edX developers.
This directory contains playbooks used by edx-east for provisioning
::

   ansible-playbook -c ssh -vvv --user=ubuntu <playbook> -i ./ec2.py -e 'secure_dir=path/to/configuration-secure/ansible'
Historical note: "edx-east" represents the edX organization in Cambridge, MA.
At one point, an "edx-west" notion existed - a name which represented Stanford
edX developers.
After EC2 discovery, variables in the files that match any
of the discovered groups will be set.
For convenience, a single variable is set
for every Group tag, for use in conditional task execution.
After EC2 discovery, variables in the files that match any of the discovered
groups will be set.
For convenience, a single variable is set for every Group tag, for use in
conditional task execution.
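For instance, a play could gate a task on one of these group variables (a sketch only; the variable name ``ec2_tag_group_edxapp`` is illustrative, since the exact name depends on your tag values):

```yaml
# Sketch only: "ec2_tag_group_edxapp" is an illustrative variable name.
- name: run a task only on hosts whose Group tag matched
  debug:
    msg: "This host was discovered in the edxapp group."
  when: ec2_tag_group_edxapp is defined
```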
## In order to use this role you must use a specific set of AMIs
[This role is for use with the AWS ECS AMIs listed here](https://docs.aws.amazon.com/AmazonECS/latest/developerguide/launch_container_instance.html)
In order to use this role you must use a specific set of AMIs
#############################################################
`This role is for use with the AWS ECS AMIs listed here`_
.. _This role is for use with the AWS ECS AMIs listed here: https://docs.aws.amazon.com/AmazonECS/latest/developerguide/launch_container_instance.html
---
# See README.md for variable descriptions
# See README.rst for variable descriptions
# Packages required to build edx-analytics-pipeline
JENKINS_ANALYTICS_EXTRA_PKGS:
- libpq-dev
- libffi-dev
# Change this default password: (see README.md to see how you can do it)
# Change this default password: (see README.rst to see how you can do it)
JENKINS_ANALYTICS_USER_PASSWORD_PLAIN: jenkins
JENKINS_ANALYTICS_AUTH_REALM: github_oauth
JENKINS_ANALYTICS_AUTH_ADMINISTRATORS: []
@@ -170,7 +170,7 @@ jenkins_auth_realms_available:
jenkins_auth_realm: "{{ jenkins_auth_realms_available[JENKINS_ANALYTICS_AUTH_REALM] }}"
jenkins_auth_users:
anonymous:
anonymous:
- anonymous
administrators: "{{ jenkins_admin_users + JENKINS_ANALYTICS_AUTH_ADMINISTRATORS }}"
job_builders: "{{ JENKINS_ANALYTICS_AUTH_JOB_BUILDERS | default([]) }}"
* main.yml: installs nginx and will enable the basic nginx configuration for version introspection
- main.yml: installs nginx and will enable the basic nginx configuration for
version introspection
swapfile
========
########
Creates and enables a swap file.
Slightly modified from https://github.com/kamaln7/ansible-swapfile
## License
License
*******
The MIT License (MIT)
Copyright (c) 2014 Kamal Nasser <hello@kamal.io>
Copyright (c) 2014 Kamal Nasser hello@kamal.io
Permission is hereby granted, free of charge, to any person obtaining a copy
of this software and associated documentation files (the "Software"), to deal
in the Software without restriction, including without limitation the rights
to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
copies of the Software, and to permit persons to whom the Software is
furnished to do so, subject to the following conditions:
Permission is hereby granted, free of charge, to any person obtaining a copy of
this software and associated documentation files (the "Software"), to deal in
the Software without restriction, including without limitation the rights to
use, copy, modify, merge, publish, distribute, sublicense, and/or sell copies
of the Software, and to permit persons to whom the Software is furnished to do
so, subject to the following conditions:
The above copyright notice and this permission notice shall be included in all
copies or substantial portions of the Software.
# How to add Dockerfiles to the configuration file
The script that handles distributing build jobs across Travis CI shards relies on the parsefiles_config YAML file. This file contains a mapping from each application that has a Dockerfile to its corresponding weight/rank. The rank refers to the approximate running time of a Travis Docker build for that application's Dockerfile. When adding a new Dockerfile to the configuration repository, this configuration file needs to be manually updated in order to ensure that the Dockerfile is also built.
To modify the configuration file:
1. Edit the docker.mk file:

   1. Modify docker_test to include date commands.

      Replace

      ```
      $(docker_test)%: .build/%/Dockerfile.test
      	docker build -t $*:test -f $< .
      ```

      with

      ```
      $(docker_test)%: .build/%/Dockerfile.test
      	date
      	docker build -t $*:test -f $< .
      	date
      ```

   2. Replace the command that runs the dependency analyzer with a line to build your Dockerfiles.

      For example, if adding Dockerfiles for ecommerce and rabbitmq, replace

      `images:=$(shell git diff --name-only $(TRAVIS_COMMIT_RANGE) | python util/parsefiles.py)`

      with

      `images:= ecommerce rabbitmq`

   3. Replace the command that runs the balancing script with a line to build all images.

      Replace

      `docker.test.shard: $(foreach image,$(shell echo $(images) | python util/balancecontainers.py $(SHARDS) | awk 'NR%$(SHARDS)==$(SHARD)'),$(docker_test)$(image))`

      with

      `docker.test.shard: $(foreach image,$(shell echo $(images) | tr ' ' '\n' | awk 'NR%$(SHARDS)==$(SHARD)'),$(docker_test)$(image))`

2. Commit and push to your branch.
3. Wait for Travis CI to run the builds.
4. Upon completion, examine the Travis CI logs to find where your Dockerfile was built (search for "docker build -t"). Find the amount of time the build took by comparing the output of the date command before the build command starts and the date command after the build command completes.
5. Round build time to a whole number, and add it to the configuration/util/parsefiles_config.yml file.
6. Undo steps 1.1, 1.2, and 1.3 to revert the docker.mk file to its original state.
7. Commit and push to your branch. Your Dockerfile should now be built as a part of the Travis CI tests.
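The timing idea above (wrapping the build between two date commands) can be sketched as follows; "myservice" is an illustrative name and the build itself is commented out:

```shell
# Sketch of timing a docker build with date; the actual build command is
# commented out so the snippet runs anywhere.
start=$(date +%s)
# docker build -t myservice:test -f .build/myservice/Dockerfile.test .
end=$(date +%s)
elapsed=$((end - start))
echo "build took ${elapsed} seconds"
```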
How to add Dockerfiles to the configuration file
################################################
The script that handles distributing build jobs across Travis CI shards relies
on the parsefiles\_config YAML file. This file contains a mapping from each
application that has a Dockerfile to its corresponding weight/rank. The rank
refers to the approximate running time of a Travis Docker build for that
application's Dockerfile. When adding a new Dockerfile to the configuration
repository, this configuration file needs to be manually updated in order to
ensure that the Dockerfile is also built.
To modify the configuration file:
1. Edit the docker.mk file:

   a. Modify docker\_test to include date commands.

      Replace

      ::

         $(docker_test)%: .build/%/Dockerfile.test
         	docker build -t $*:test -f $< .

      with

      ::

         $(docker_test)%: .build/%/Dockerfile.test
         	date
         	docker build -t $*:test -f $< .
         	date

   b. Replace the command that runs the dependency analyzer with a line to
      build your Dockerfiles.

      For example, if adding Dockerfiles for ecommerce and rabbitmq, replace

      ``images:=$(shell git diff --name-only $(TRAVIS_COMMIT_RANGE) | python util/parsefiles.py)``

      with

      ``images:= ecommerce rabbitmq``

   c. Replace the command that runs the balancing script with a line to build
      all images.

      Replace

      ``docker.test.shard: $(foreach image,$(shell echo $(images) | python util/balancecontainers.py $(SHARDS) | awk 'NR%$(SHARDS)==$(SHARD)'),$(docker_test)$(image))``

      with

      ``docker.test.shard: $(foreach image,$(shell echo $(images) | tr ' ' '\n' | awk 'NR%$(SHARDS)==$(SHARD)'),$(docker_test)$(image))``

2. Commit and push to your branch.

3. Wait for Travis CI to run the builds.

4. Upon completion, examine the Travis CI logs to find where your Dockerfile
   was built (search for "docker build -t"). Find the amount of time the build
   took by comparing the output of the date command before the build command
   starts and the date command after the build command completes.

5. Round build time to a whole number, and add it to the
   configuration/util/parsefiles\_config.yml file.

6. Undo steps 1a, 1b, and 1c to revert the docker.mk file to its original
   state.

7. Commit and push to your branch. Your Dockerfile should now be built as a
   part of the Travis CI tests.
@@ -26,7 +26,7 @@ def check_coverage(images, used_images):
# exit with error code if uncovered Dockerfiles exist
if uncovered:
LOGGER.error("The following Dockerfiles are not described in the parsefiles_config.yml file: {}. Please see the following documentation on how to add Dockerfile ranks to the configuration file: {}".format(uncovered, "https://github.com/edx/configuration/blob/master/util/README.md"))
LOGGER.error("The following Dockerfiles are not described in the parsefiles_config.yml file: {}. Please see the following documentation on how to add Dockerfile ranks to the configuration file: {}".format(uncovered, "https://github.com/edx/configuration/blob/master/util/README.rst"))
sys.exit(1)
def arg_parse():
@@ -22,7 +22,7 @@ Vagrant.configure(VAGRANTFILE_API_VERSION) do |config|
raise 'Please set VAGRANT_JENKINS_LOCAL_VARS_FILE environment variable. '\
'That variable should point to a file containing variable '\
'overrides for analytics_jenkins role. For required overrides '\
'see README.md in the analytics_jenkins role folder.'
'see README.rst in the analytics_jenkins role folder.'
end
config.vm.provision :ansible do |ansible|