Commit 6c7c152a by Edward Zarecor

Merge branch 'master' into e0d/make-cache-size-configurable

parents e97dab46 000af8ca
...@@ -24,3 +24,6 @@ vagrant_ansible_inventory_default
## Make artifacts
.build
playbooks/edx-east/travis-test.yml
## Local virtualenv
/venv
- Role: rabbitmq
- Removed the RABBITMQ_CLUSTERED var and related tooling. The goal of the var was to be able to set up a cluster in the AWS environment without having to know all the IPs of the cluster beforehand. It relied on the `hostvars` ansible variable to work correctly, which it no longer does in 1.9. This may get fixed in the future but, for now, the "magic" setup doesn't work.
- Changed `rabbitmq_clustered_hosts` to RABBITMQ_CLUSTERED_HOSTS.
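For illustration, the renamed variable might be set like this in your overrides file (hostnames here are hypothetical, not defaults):

```yaml
# Illustrative only: list each cluster member explicitly.
RABBITMQ_CLUSTERED_HOSTS:
  - "rabbit@ip-10-0-0-10"
  - "rabbit@ip-10-0-0-11"
```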
- Role: edxapp
- Removed SUBDOMAIN_BRANDING and SUBDOMAIN_COURSE_LISTINGS variables
......
...@@ -6,6 +6,8 @@ The goal of the edx/configuration project is to provide a simple, but
flexible, way for anyone to stand up an instance of Open edX that is
fully configured and ready-to-go.
Before getting started, please look at the [Open edX Deployment options](https://open.edx.org/deployment-options) to see which method of deploying Open edX is right for you.
Building the platform takes place in two phases:
* Infrastructure provisioning
...@@ -17,6 +19,9 @@ and are free to use one, but not the other. The provisioning phase
stands up the required resources and tags them with role identifiers
so that the configuration tool can come in and complete the job.
__Note__: The CloudFormation templates used for infrastructure provisioning
are no longer maintained. We are working to move to a more modern and flexible tool.
The reference platform is provisioned using an Amazon
[CloudFormation](http://aws.amazon.com/cloudformation/) template.
When the stack has been fully created, you will have a new AWS Virtual
...@@ -28,11 +33,9 @@ The configuration phase is managed by [Ansible](http://ansible.com/).
We have provided a number of playbooks that will configure each of
the edX services.
__Important__:
The edX configuration scripts need to be run as root on your servers and will make changes to service configurations including, but not limited to, sshd, dhclient, sudo, apparmor and syslogd. Our scripts are made available as we use them and they implement our best practices. We strongly recommend that you review everything that these scripts will do before running them against your servers. We also recommend against running them against servers that are hosting other applications. No warranty is expressed or implied.
For more information, including installation instructions, please see the [OpenEdX Wiki](https://openedx.atlassian.net/wiki/display/OpenOPS/Open+edX+Operations+Home).
For info on any large recent changes, please see the [change log](https://github.com/edx/configuration/blob/master/CHANGELOG.md).
...@@ -26,7 +26,7 @@ pkg: docker.pkg
clean:
	rm -rf .build
docker.test.shard: $(foreach image,$(shell echo $(images) | tr ' ' '\n' | awk 'NR%$(SHARDS)==$(SHARD)'),$(docker_test)$(image))
docker.build: $(foreach image,$(images),$(docker_build)$(image))
docker.test: $(foreach image,$(images),$(docker_test)$(image))
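The new `docker.test.shard` target splits the image list across workers with `awk 'NR%$(SHARDS)==$(SHARD)'`. A runnable sketch of that sharding rule (my reading of the Makefile, not documented edX behaviour): item N (1-based) belongs to the shard numbered N modulo SHARDS.

```python
# Round-robin sharding, mirroring the awk expression NR % SHARDS == SHARD:
# the N-th item (1-based) is assigned to shard N % num_shards.
def shard(items, shard_id, num_shards):
    """Return the subset of items assigned to shard_id."""
    return [item for n, item in enumerate(items, start=1)
            if n % num_shards == shard_id]

images = ["edxapp", "rabbitmq", "mongo", "mysql", "nginx"]
odd_positions = shard(images, 1, 2)   # items at positions 1, 3, 5
even_positions = shard(images, 0, 2)  # items at positions 2, 4
```

Run with `SHARD` values 0..SHARDS-1 and every image is tested exactly once across the workers.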
...@@ -52,8 +52,8 @@ $(docker_push)%: $(docker_pkg)%
.build/%/Dockerfile.d: docker/build/%/Dockerfile Makefile
	@mkdir -p .build/$*
	$(eval FROM=$(shell grep "^\s*FROM" $< | sed -E "s/FROM //" | sed -E "s/:/@/g"))
	$(eval EDXOPS_FROM=$(shell echo "$(FROM)" | sed -E "s#edxops/([^@]+)(@.*)?#\1#"))
	@echo "$(docker_build)$*: $(docker_pull)$(FROM)" > $@
	@if [ "$(EDXOPS_FROM)" != "$(FROM)" ]; then \
		echo "$(docker_test)$*: $(docker_test)$(EDXOPS_FROM:@%=)" >> $@; \
...@@ -65,10 +65,10 @@ $(docker_push)%: $(docker_pkg)%
.build/%/Dockerfile.test: docker/build/%/Dockerfile Makefile
	@mkdir -p .build/$*
	@sed -E "s#FROM edxops/([^:]+)(:\S*)?#FROM \1:test#" $< > $@
.build/%/Dockerfile.pkg: docker/build/%/Dockerfile Makefile
	@mkdir -p .build/$*
	@sed -E "s#FROM edxops/([^:]+)(:\S*)?#FROM \1:test#" $< > $@
-include $(foreach image,$(images),.build/$(image)/Dockerfile.d)
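The `Dockerfile.d` rule derives inter-image build dependencies from each Dockerfile's `FROM` line. A sketch of what the `sed -E` pipeline computes (my reading of the rule, not authoritative): swap `:` for `@`, then strip an `edxops/` prefix to detect parent images built inside this repo.

```python
import re

# Mirror the Makefile's FROM-line parsing: "edxops/foo:tag" becomes
# FROM="edxops/foo@tag" and EDXOPS_FROM="foo"; non-edxops images pass
# through unchanged, so no in-repo dependency edge is emitted for them.
def parse_from(dockerfile_text):
    match = re.search(r"^\s*FROM\s+(\S+)", dockerfile_text, re.MULTILINE)
    image = match.group(1).replace(":", "@")
    edxops_from = re.sub(r"edxops/([^@]+)(@.*)?", r"\1", image)
    return image, edxops_from

image, parent = parse_from("FROM edxops/trusty-common:latest\nRUN true\n")
# Here image != parent, so the generated fragment adds a dependency on
# the in-repo trusty-common image.
```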
...@@ -4,7 +4,7 @@
Docker support for edX services is volatile and experimental.
We welcome interested testers and contributors. If you are
interested in participating, please join us on Slack at
https://openedx.slack.com/messages/docker. https://openedx.slack.com/messages/docker.
We do not, and may never, run these images in production.
......
# Build using: docker build -f Dockerfile.gocd-agent -t gocd-agent .
# FROM edxops/precise-common:latest
FROM gocd/gocd-agent:16.2.1
LABEL version="0.01" \
description="This custom go-agent Dockerfile installs additional requirements for the edX pipeline"
RUN apt-get update && apt-get install -y -q \
python \
python-dev \
python-distribute \
python-pip
# TODO: replace this with a pip install command so we can version this properly
RUN git clone https://github.com/edx/tubular.git /opt/tubular
RUN pip install -r /opt/tubular/requirements.txt
RUN cd /opt/tubular && python setup.py install
\ No newline at end of file
## Usage
Start the container with this:
```docker run -ti -e GO_SERVER=your.go.server.ip_or_host gocd/gocd-agent```
If you need to start a few GoCD agents together, you can of course use the shell to do that. Start a few agents in the background, like this:
```for each in 1 2 3; do docker run -d --link angry_feynman:go-server gocd/gocd-agent; done```
## Getting into the container
Sometimes, you need a shell inside the container (to create test repositories, etc.). Docker provides an easy way to do that:
```docker exec -i -t CONTAINER-ID /bin/bash```
To check the agent logs, you can do this:
```docker exec -i -t CONTAINER-ID tail -f /var/log/go-agent/go-agent.log```
## Agent Configuration
The go-agent expects its configuration to be found at ```/var/lib/go-agent/config/```. Sharing the
configuration between containers is done by mounting a volume at this location that contains any configuration files
necessary.
**Example docker run command:**
```docker run -ti -v /tmp/go-agent/conf:/var/lib/go-agent/config -e GO_SERVER=gocd.sandbox.edx.org 718d75c467c0 bash```
[How to setup auto registration for remote agents](https://docs.go.cd/current/advanced_usage/agent_auto_register.html)
- name: Configure instance(s)
hosts: all
sudo: True
roles:
- jenkins_analytics
#
# Requires that MySQL-python be installed for the system python
# This play will create databases and users for an application.
# It can be run like so:
#
# ansible-playbook -c local -i 'localhost,' create_dbs_and_users.yml -e@./db.yml
#
# where the content of db.yml contains the following dictionaries
#
...@@ -50,7 +49,6 @@
# to system python.
- name: install python mysqldb module
  pip: name={{item}} state=present
sudo: yes
  with_items:
  - MySQL-python
......
# Usage: ansible-playbook -i localhost, edx_service.yml -e@<PATH TO>/edx-secure/cloud_migrations/edx_service.yml -e@<PATH TO>/<DEPLOYMENT>-secure/cloud_migrations/vpcs/<ENVIRONMENT>-<DEPLOYMENT>.yml -e@<PATH TO>/edx-secure/cloud_migrations/idas/<CLUSTER>.yml
---
- name: Build application artifacts
...@@ -175,6 +175,7 @@
- name: Setup ELB DNS
  route53:
profile: "{{ profile }}"
    command: "create"
    zone: "{{ dns_zone_name }}"
    record: "{{ item.elb.name }}.{{ dns_zone_name }}"
......
...@@ -14,7 +14,7 @@
- name: stop certs service
  service: name="certificates" state="stopped"
- name: checkout code
  git_2_0_1: >
    repo="{{ repo_url }}"
    dest="{{ repo_path }}"
    version="{{ certificates_version }}"
......
...@@ -34,6 +34,7 @@
- edxlocal - edxlocal
- role: mongo - role: mongo
when: "'localhost' in EDXAPP_MONGO_HOSTS"
- { role: 'rabbitmq', rabbitmq_ip: '127.0.0.1' }
- { role: 'edxapp', celery_worker: True }
- edxapp - edxapp
- notifier - notifier
...@@ -42,7 +43,6 @@
- edx_notes_api - edx_notes_api
- demo - demo
- oauth_client_setup - oauth_client_setup
- { role: 'rabbitmq', rabbitmq_ip: '127.0.0.1' }
- oraclejdk - oraclejdk
- role: elasticsearch - role: elasticsearch
when: "'localhost' in EDXAPP_ELASTIC_SEARCH_CONFIG|map(attribute='host')"
......
# ansible-playbook -i 'admin.edx.org,' ./hotg.yml -e@/path/to/ansible/vars/edx.yml -e@/path/to/secure/ansible/vars/edx_admin.yml
- name: Install go-agent-docker-server
hosts: all
sudo: True
gather_facts: True
roles:
- aws
- go-agent-docker-server
...@@ -6,5 +6,4 @@
gather_facts: True
roles:
- aws
- supervisor
- go-server
#!/usr/bin/python
# -*- coding: utf-8 -*-
# (c) 2012, Michael DeHaan <michael.dehaan@gmail.com>
#
# This file is part of Ansible
#
# Ansible is free software: you can redistribute it and/or modify
# it under the terms of the GNU General Public License as published by
# the Free Software Foundation, either version 3 of the License, or
# (at your option) any later version.
#
# Ansible is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License
# along with Ansible. If not, see <http://www.gnu.org/licenses/>.
DOCUMENTATION = '''
---
module: git
author:
- "Ansible Core Team"
- "Michael DeHaan"
version_added: "0.0.1"
short_description: Deploy software (or files) from git checkouts
description:
- Manage I(git) checkouts of repositories to deploy files or software.
options:
repo:
required: true
aliases: [ name ]
description:
- git, SSH, or HTTP protocol address of the git repository.
dest:
required: true
description:
- Absolute path of where the repository should be checked out to.
This parameter is required, unless C(clone) is set to C(no)
This change was made in version 1.8.3. Prior to this version,
the C(dest) parameter was always required.
version:
required: false
default: "HEAD"
description:
- What version of the repository to check out. This can be the
full 40-character I(SHA-1) hash, the literal string C(HEAD), a
branch name, or a tag name.
accept_hostkey:
required: false
default: "no"
choices: [ "yes", "no" ]
version_added: "1.5"
description:
- if C(yes), adds the hostkey for the repo url if not already
added. If ssh_opts contains "-o StrictHostKeyChecking=no",
this parameter is ignored.
ssh_opts:
required: false
default: None
version_added: "1.5"
description:
- Creates a wrapper script and exports the path as GIT_SSH
which git then automatically uses to override ssh arguments.
An example value could be "-o StrictHostKeyChecking=no"
key_file:
required: false
default: None
version_added: "1.5"
description:
- Specify an optional private key file to use for the checkout.
reference:
required: false
default: null
version_added: "1.4"
description:
- Reference repository (see "git clone --reference ...")
remote:
required: false
default: "origin"
description:
- Name of the remote.
refspec:
required: false
default: null
version_added: "1.9"
description:
- Add an additional refspec to be fetched.
If version is set to a I(SHA-1) not reachable from any branch
or tag, this option may be necessary to specify the ref containing
the I(SHA-1).
Uses the same syntax as the 'git fetch' command.
An example value could be "refs/meta/config".
force:
required: false
default: "no"
choices: [ "yes", "no" ]
version_added: "0.7"
description:
- If C(yes), any modified files in the working
repository will be discarded. Prior to 0.7, this was always
'yes' and could not be disabled. Prior to 1.9, the default was
`yes`
depth:
required: false
default: null
version_added: "1.2"
description:
- Create a shallow clone with a history truncated to the specified
number or revisions. The minimum possible value is C(1), otherwise
ignored.
clone:
required: false
default: "yes"
choices: [ "yes", "no" ]
version_added: "1.9"
description:
- If C(no), do not clone the repository if it does not exist locally
update:
required: false
default: "yes"
choices: [ "yes", "no" ]
version_added: "1.2"
description:
- If C(no), do not retrieve new revisions from the origin repository
executable:
required: false
default: null
version_added: "1.4"
description:
- Path to git executable to use. If not supplied,
the normal mechanism for resolving binary paths will be used.
bare:
required: false
default: "no"
choices: [ "yes", "no" ]
version_added: "1.4"
description:
- if C(yes), repository will be created as a bare repo, otherwise
it will be a standard repo with a workspace.
recursive:
required: false
default: "yes"
choices: [ "yes", "no" ]
version_added: "1.6"
description:
- if C(no), repository will be cloned without the --recursive
option, skipping sub-modules.
track_submodules:
required: false
default: "no"
choices: ["yes", "no"]
version_added: "1.8"
description:
- if C(yes), submodules will track the latest commit on their
master branch (or other branch specified in .gitmodules). If
C(no), submodules will be kept at the revision specified by the
main project. This is equivalent to specifying the --remote flag
to git submodule update.
verify_commit:
required: false
default: "no"
choices: ["yes", "no"]
version_added: "2.0"
description:
- if C(yes), when cloning or checking out a C(version) verify the
signature of a GPG signed commit. This requires C(git) version>=2.1.0
to be installed. The commit MUST be signed and the public key MUST
be trusted in the GPG trustdb.
requirements:
- git (the command line tool)
notes:
- "If the task seems to be hanging, first verify remote host is in C(known_hosts).
SSH will prompt user to authorize the first contact with a remote host. To avoid this prompt,
one solution is to add the remote host public key in C(/etc/ssh/ssh_known_hosts) before calling
the git module, with the following command: ssh-keyscan -H remote_host.com >> /etc/ssh/ssh_known_hosts."
'''
EXAMPLES = '''
# Example git checkout from Ansible Playbooks
- git: repo=git://foosball.example.org/path/to/repo.git
dest=/srv/checkout
version=release-0.22
# Example read-write git checkout from github
- git: repo=ssh://git@github.com/mylogin/hello.git dest=/home/mylogin/hello
# Example just ensuring the repo checkout exists
- git: repo=git://foosball.example.org/path/to/repo.git dest=/srv/checkout update=no
# Example just get information about the repository whether or not it has
# already been cloned locally.
- git: repo=git://foosball.example.org/path/to/repo.git dest=/srv/checkout clone=no update=no
# Example checkout a github repo and use refspec to fetch all pull requests
- git: repo=https://github.com/ansible/ansible-examples.git dest=/src/ansible-examples refspec=+refs/pull/*:refs/heads/*
'''
import os
import re
import shlex
import stat
import tempfile

import yaml
def get_submodule_update_params(module, git_path, cwd):
#or: git submodule [--quiet] update [--init] [-N|--no-fetch]
#[-f|--force] [--rebase] [--reference <repository>] [--merge]
#[--recursive] [--] [<path>...]
params = []
# run a bad submodule command to get valid params
cmd = "%s submodule update --help" % (git_path)
rc, stdout, stderr = module.run_command(cmd, cwd=cwd)
lines = stderr.split('\n')
update_line = None
for line in lines:
if 'git submodule [--quiet] update ' in line:
update_line = line
if update_line:
update_line = update_line.replace('[','')
update_line = update_line.replace(']','')
update_line = update_line.replace('|',' ')
parts = shlex.split(update_line)
for part in parts:
if part.startswith('--'):
part = part.replace('--', '')
params.append(part)
return params
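The function above discovers which flags this git version supports by scraping the usage line that `git submodule update --help` prints. A stand-alone sketch of that parsing step, runnable on a canned usage line instead of real git output:

```python
import shlex

# Strip the usage line's brackets, split alternatives ("|") apart, then
# keep only the long options, without their leading "--".
def parse_update_params(update_line):
    for char in ('[', ']'):
        update_line = update_line.replace(char, '')
    update_line = update_line.replace('|', ' ')
    return [part[2:] for part in shlex.split(update_line)
            if part.startswith('--')]

usage = ("   or: git submodule [--quiet] update [--init] [-N|--no-fetch] "
         "[-f|--force] [--remote]")
params = parse_update_params(usage)
# params == ['quiet', 'init', 'no-fetch', 'force', 'remote']
```

Probing `--help` output this way is brittle but avoids hard-coding which options each git release understands.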
def write_ssh_wrapper():
module_dir = get_module_path()
try:
# make sure we have full permission to the module_dir, which
# may not be the case if we're sudo'ing to a non-root user
if os.access(module_dir, os.W_OK|os.R_OK|os.X_OK):
fd, wrapper_path = tempfile.mkstemp(prefix=module_dir + '/')
else:
raise OSError
except (IOError, OSError):
fd, wrapper_path = tempfile.mkstemp()
fh = os.fdopen(fd, 'w+b')
template = """#!/bin/sh
if [ -z "$GIT_SSH_OPTS" ]; then
BASEOPTS=""
else
BASEOPTS=$GIT_SSH_OPTS
fi
if [ -z "$GIT_KEY" ]; then
ssh $BASEOPTS "$@"
else
ssh -i "$GIT_KEY" $BASEOPTS "$@"
fi
"""
fh.write(template)
fh.close()
st = os.stat(wrapper_path)
os.chmod(wrapper_path, st.st_mode | stat.S_IEXEC)
return wrapper_path
def set_git_ssh(ssh_wrapper, key_file, ssh_opts):
if os.environ.get("GIT_SSH"):
del os.environ["GIT_SSH"]
os.environ["GIT_SSH"] = ssh_wrapper
if os.environ.get("GIT_KEY"):
del os.environ["GIT_KEY"]
if key_file:
os.environ["GIT_KEY"] = key_file
if os.environ.get("GIT_SSH_OPTS"):
del os.environ["GIT_SSH_OPTS"]
if ssh_opts:
os.environ["GIT_SSH_OPTS"] = ssh_opts
def get_version(module, git_path, dest, ref="HEAD"):
''' samples the version of the git repo '''
cmd = "%s rev-parse %s" % (git_path, ref)
rc, stdout, stderr = module.run_command(cmd, cwd=dest)
sha = stdout.rstrip('\n')
return sha
def get_submodule_versions(git_path, module, dest, version='HEAD'):
cmd = [git_path, 'submodule', 'foreach', git_path, 'rev-parse', version]
(rc, out, err) = module.run_command(cmd, cwd=dest)
if rc != 0:
module.fail_json(msg='Unable to determine hashes of submodules')
submodules = {}
subm_name = None
for line in out.splitlines():
if line.startswith("Entering '"):
subm_name = line[10:-1]
elif len(line.strip()) == 40:
if subm_name is None:
module.fail_json()
submodules[subm_name] = line.strip()
subm_name = None
else:
module.fail_json(msg='Unable to parse submodule hash line: %s' % line.strip())
if subm_name is not None:
module.fail_json(msg='Unable to find hash for submodule: %s' % subm_name)
return submodules
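The loop above pairs each `Entering '<path>'` banner from `git submodule foreach` with the 40-character hash that `rev-parse` prints on the next line. A sketch of just that parsing, fed canned text instead of running git:

```python
# Parse `git submodule foreach git rev-parse` output into {path: sha}.
def parse_submodule_versions(out):
    submodules, name = {}, None
    for line in out.splitlines():
        if line.startswith("Entering '"):
            name = line[len("Entering '"):-1]   # strip banner and quote
        elif len(line.strip()) == 40 and name is not None:
            submodules[name] = line.strip()
            name = None
    return submodules

sample = "Entering 'vendor/libfoo'\n" + "a" * 40 + "\n"
versions = parse_submodule_versions(sample)
# versions maps 'vendor/libfoo' to its 40-character hash
```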
def clone(git_path, module, repo, dest, remote, depth, version, bare,
reference, refspec, verify_commit):
''' makes a new git repo if it does not already exist '''
dest_dirname = os.path.dirname(dest)
try:
os.makedirs(dest_dirname)
except:
pass
cmd = [ git_path, 'clone' ]
if bare:
cmd.append('--bare')
else:
cmd.extend([ '--origin', remote ])
if is_remote_branch(git_path, module, dest, repo, version) \
or is_remote_tag(git_path, module, dest, repo, version):
cmd.extend([ '--branch', version ])
if depth:
cmd.extend([ '--depth', str(depth) ])
if reference:
cmd.extend([ '--reference', str(reference) ])
cmd.extend([ repo, dest ])
module.run_command(cmd, check_rc=True, cwd=dest_dirname)
if bare:
if remote != 'origin':
module.run_command([git_path, 'remote', 'add', remote, repo], check_rc=True, cwd=dest)
if refspec:
module.run_command([git_path, 'fetch', remote, refspec], check_rc=True, cwd=dest)
if verify_commit:
verify_commit_sign(git_path, module, dest, version)
def has_local_mods(module, git_path, dest, bare):
if bare:
return False
cmd = "%s status -s" % (git_path)
rc, stdout, stderr = module.run_command(cmd, cwd=dest)
lines = stdout.splitlines()
lines = filter(lambda c: not re.search('^\\?\\?.*$', c), lines)
return len(lines) > 0
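The filter in `has_local_mods` deserves a note: in `git status -s` output, lines beginning with `??` are untracked files, so they are excluded before deciding whether the working tree is dirty. A runnable sketch on canned status output:

```python
import re

# Only tracked-file changes (e.g. " M file") count as local modifications;
# untracked entries ("?? file") are filtered out, matching the module above.
def has_local_mods_from_status(status_output):
    lines = status_output.splitlines()
    tracked = [line for line in lines if not re.search(r'^\?\?', line)]
    return len(tracked) > 0

untracked_only = has_local_mods_from_status("?? new_file.txt\n")           # False
modified = has_local_mods_from_status(" M playbooks/edx_provision.yml\n")  # True
```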
def reset(git_path, module, dest):
'''
Resets the index and working tree to HEAD.
Discards any changes to tracked files in working
tree since that commit.
'''
cmd = "%s reset --hard HEAD" % (git_path,)
return module.run_command(cmd, check_rc=True, cwd=dest)
def get_remote_head(git_path, module, dest, version, remote, bare):
cloning = False
cwd = None
tag = False
if remote == module.params['repo']:
cloning = True
else:
cwd = dest
if version == 'HEAD':
if cloning:
# cloning the repo, just get the remote's HEAD version
cmd = '%s ls-remote %s -h HEAD' % (git_path, remote)
else:
head_branch = get_head_branch(git_path, module, dest, remote, bare)
cmd = '%s ls-remote %s -h refs/heads/%s' % (git_path, remote, head_branch)
elif is_remote_branch(git_path, module, dest, remote, version):
cmd = '%s ls-remote %s -h refs/heads/%s' % (git_path, remote, version)
elif is_remote_tag(git_path, module, dest, remote, version):
tag = True
cmd = '%s ls-remote %s -t refs/tags/%s*' % (git_path, remote, version)
else:
# appears to be a sha1. return as-is since it appears
# cannot check for a specific sha1 on remote
return version
(rc, out, err) = module.run_command(cmd, check_rc=True, cwd=cwd)
if len(out) < 1:
module.fail_json(msg="Could not determine remote revision for %s" % version)
if tag:
# Find the dereferenced tag if this is an annotated tag.
for tag in out.split('\n'):
if tag.endswith(version + '^{}'):
out = tag
break
elif tag.endswith(version):
out = tag
rev = out.split()[0]
return rev
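The tag branch above handles annotated tags: `git ls-remote` lists such a tag twice, once as the tag object and once with a `^{}` suffix pointing at the dereferenced commit, and the dereferenced entry is preferred. A sketch of that selection on canned `ls-remote` output:

```python
# Prefer the "version^{}" line (dereferenced commit of an annotated tag);
# fall back to the plain tag line for lightweight tags.
def pick_tag_rev(ls_remote_out, version):
    out = ls_remote_out
    for line in ls_remote_out.split('\n'):
        if line.endswith(version + '^{}'):
            out = line
            break
        elif line.endswith(version):
            out = line
    return out.split()[0]

sample = ("1111111111111111111111111111111111111111\trefs/tags/v1.0\n"
          "2222222222222222222222222222222222222222\trefs/tags/v1.0^{}")
rev = pick_tag_rev(sample, "v1.0")  # the dereferenced commit (2222...)
```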
def is_remote_tag(git_path, module, dest, remote, version):
cmd = '%s ls-remote %s -t refs/tags/%s' % (git_path, remote, version)
(rc, out, err) = module.run_command(cmd, check_rc=True, cwd=dest)
if version in out:
return True
else:
return False
def get_branches(git_path, module, dest):
branches = []
cmd = '%s branch -a' % (git_path,)
(rc, out, err) = module.run_command(cmd, cwd=dest)
if rc != 0:
module.fail_json(msg="Could not determine branch data - received %s" % out)
for line in out.split('\n'):
branches.append(line.strip())
return branches
def get_tags(git_path, module, dest):
tags = []
cmd = '%s tag' % (git_path,)
(rc, out, err) = module.run_command(cmd, cwd=dest)
if rc != 0:
module.fail_json(msg="Could not determine tag data - received %s" % out)
for line in out.split('\n'):
tags.append(line.strip())
return tags
def is_remote_branch(git_path, module, dest, remote, version):
cmd = '%s ls-remote %s -h refs/heads/%s' % (git_path, remote, version)
(rc, out, err) = module.run_command(cmd, check_rc=True, cwd=dest)
if version in out:
return True
else:
return False
def is_local_branch(git_path, module, dest, branch):
branches = get_branches(git_path, module, dest)
lbranch = '%s' % branch
if lbranch in branches:
return True
elif '* %s' % branch in branches:
return True
else:
return False
def is_not_a_branch(git_path, module, dest):
branches = get_branches(git_path, module, dest)
for b in branches:
if b.startswith('* ') and ('no branch' in b or 'detached from' in b):
return True
return False
def get_head_branch(git_path, module, dest, remote, bare=False):
'''
Determine what branch HEAD is associated with. This is partly
taken from lib/ansible/utils/__init__.py. It finds the correct
path to .git/HEAD and reads from that file the branch that HEAD is
associated with. In the case of a detached HEAD, this will look
up the branch in .git/refs/remotes/<remote>/HEAD.
'''
if bare:
repo_path = dest
else:
repo_path = os.path.join(dest, '.git')
# Check if the .git is a file. If it is a file, it means that we are in a submodule structure.
if os.path.isfile(repo_path):
try:
gitdir = yaml.safe_load(open(repo_path)).get('gitdir')
            # There is a possibility that the .git file has an absolute path.
if os.path.isabs(gitdir):
repo_path = gitdir
else:
repo_path = os.path.join(repo_path.split('.git')[0], gitdir)
except (IOError, AttributeError):
return ''
# Read .git/HEAD for the name of the branch.
# If we're in a detached HEAD state, look up the branch associated with
# the remote HEAD in .git/refs/remotes/<remote>/HEAD
f = open(os.path.join(repo_path, "HEAD"))
if is_not_a_branch(git_path, module, dest):
f.close()
f = open(os.path.join(repo_path, 'refs', 'remotes', remote, 'HEAD'))
branch = f.readline().split('/')[-1].rstrip("\n")
f.close()
return branch
def set_remote_url(git_path, module, repo, dest, remote):
''' updates repo from remote sources '''
commands = [("set a new url %s for %s" % (repo, remote), [git_path, 'remote', 'set-url', remote, repo])]
for (label,command) in commands:
(rc,out,err) = module.run_command(command, cwd=dest)
if rc != 0:
module.fail_json(msg="Failed to %s: %s %s" % (label, out, err))
def fetch(git_path, module, repo, dest, version, remote, bare, refspec):
''' updates repo from remote sources '''
set_remote_url(git_path, module, repo, dest, remote)
commands = []
fetch_str = 'download remote objects and refs'
if bare:
refspecs = ['+refs/heads/*:refs/heads/*', '+refs/tags/*:refs/tags/*']
if refspec:
refspecs.append(refspec)
commands.append((fetch_str, [git_path, 'fetch', remote] + refspecs))
else:
# unlike in bare mode, there's no way to combine the
# additional refspec with the default git fetch behavior,
# so use two commands
commands.append((fetch_str, [git_path, 'fetch', remote]))
refspecs = ['+refs/tags/*:refs/tags/*']
if refspec:
refspecs.append(refspec)
commands.append((fetch_str, [git_path, 'fetch', remote] + refspecs))
for (label,command) in commands:
(rc,out,err) = module.run_command(command, cwd=dest)
if rc != 0:
module.fail_json(msg="Failed to %s: %s %s" % (label, out, err))
def submodules_fetch(git_path, module, remote, track_submodules, dest):
changed = False
if not os.path.exists(os.path.join(dest, '.gitmodules')):
# no submodules
return changed
gitmodules_file = open(os.path.join(dest, '.gitmodules'), 'r')
for line in gitmodules_file:
# Check for new submodules
if not changed and line.strip().startswith('path'):
path = line.split('=', 1)[1].strip()
# Check that dest/path/.git exists
if not os.path.exists(os.path.join(dest, path, '.git')):
changed = True
# add the submodule repo's hostkey
if line.strip().startswith('url'):
repo = line.split('=', 1)[1].strip()
if module.params['ssh_opts'] is not None:
if not "-o StrictHostKeyChecking=no" in module.params['ssh_opts']:
add_git_host_key(module, repo, accept_hostkey=module.params['accept_hostkey'])
else:
add_git_host_key(module, repo, accept_hostkey=module.params['accept_hostkey'])
# Check for updates to existing modules
if not changed:
# Fetch updates
begin = get_submodule_versions(git_path, module, dest)
cmd = [git_path, 'submodule', 'foreach', git_path, 'fetch']
(rc, out, err) = module.run_command(cmd, check_rc=True, cwd=dest)
if rc != 0:
module.fail_json(msg="Failed to fetch submodules: %s" % out + err)
if track_submodules:
# Compare against submodule HEAD
### FIXME: determine this from .gitmodules
version = 'master'
after = get_submodule_versions(git_path, module, dest, '%s/%s'
% (remote, version))
if begin != after:
changed = True
else:
# Compare against the superproject's expectation
cmd = [git_path, 'submodule', 'status']
(rc, out, err) = module.run_command(cmd, check_rc=True, cwd=dest)
if rc != 0:
module.fail_json(msg='Failed to retrieve submodule status: %s' % out + err)
for line in out.splitlines():
if line[0] != ' ':
changed = True
break
return changed
def submodule_update(git_path, module, dest, track_submodules):
''' init and update any submodules '''
# get the valid submodule params
params = get_submodule_update_params(module, git_path, dest)
# skip submodule commands if .gitmodules is not present
if not os.path.exists(os.path.join(dest, '.gitmodules')):
return (0, '', '')
cmd = [ git_path, 'submodule', 'sync' ]
(rc, out, err) = module.run_command(cmd, check_rc=True, cwd=dest)
if 'remote' in params and track_submodules:
cmd = [ git_path, 'submodule', 'update', '--init', '--recursive' ,'--remote' ]
else:
cmd = [ git_path, 'submodule', 'update', '--init', '--recursive' ]
(rc, out, err) = module.run_command(cmd, cwd=dest)
if rc != 0:
module.fail_json(msg="Failed to init/update submodules: %s" % out + err)
return (rc, out, err)
def switch_version(git_path, module, dest, remote, version, verify_commit):
cmd = ''
if version != 'HEAD':
if is_remote_branch(git_path, module, dest, remote, version):
if not is_local_branch(git_path, module, dest, version):
cmd = "%s checkout --track -b %s %s/%s" % (git_path, version, remote, version)
else:
(rc, out, err) = module.run_command("%s checkout --force %s" % (git_path, version), cwd=dest)
if rc != 0:
module.fail_json(msg="Failed to checkout branch %s" % version)
cmd = "%s reset --hard %s/%s" % (git_path, remote, version)
else:
cmd = "%s checkout --force %s" % (git_path, version)
else:
branch = get_head_branch(git_path, module, dest, remote)
(rc, out, err) = module.run_command("%s checkout --force %s" % (git_path, branch), cwd=dest)
if rc != 0:
module.fail_json(msg="Failed to checkout branch %s" % branch)
cmd = "%s reset --hard %s" % (git_path, remote)
(rc, out1, err1) = module.run_command(cmd, cwd=dest)
if rc != 0:
if version != 'HEAD':
module.fail_json(msg="Failed to checkout %s" % (version))
else:
module.fail_json(msg="Failed to checkout branch %s" % (branch))
if verify_commit:
verify_commit_sign(git_path, module, dest, version)
return (rc, out1, err1)
def verify_commit_sign(git_path, module, dest, version):
cmd = "%s verify-commit %s" % (git_path, version)
(rc, out, err) = module.run_command(cmd, cwd=dest)
if rc != 0:
module.fail_json(msg='Failed to verify GPG signature of commit/tag "%s"' % version)
return (rc, out, err)
# ===========================================
def main():
module = AnsibleModule(
argument_spec = dict(
dest=dict(),
repo=dict(required=True, aliases=['name']),
version=dict(default='HEAD'),
remote=dict(default='origin'),
refspec=dict(default=None),
reference=dict(default=None),
force=dict(default='no', type='bool'),
depth=dict(default=None, type='int'),
clone=dict(default='yes', type='bool'),
update=dict(default='yes', type='bool'),
verify_commit=dict(default='no', type='bool'),
accept_hostkey=dict(default='no', type='bool'),
key_file=dict(default=None, required=False),
ssh_opts=dict(default=None, required=False),
executable=dict(default=None),
bare=dict(default='no', type='bool'),
recursive=dict(default='yes', type='bool'),
track_submodules=dict(default='no', type='bool'),
),
supports_check_mode=True
)
dest = module.params['dest']
repo = module.params['repo']
version = module.params['version']
remote = module.params['remote']
refspec = module.params['refspec']
force = module.params['force']
depth = module.params['depth']
update = module.params['update']
allow_clone = module.params['clone']
bare = module.params['bare']
verify_commit = module.params['verify_commit']
reference = module.params['reference']
git_path = module.params['executable'] or module.get_bin_path('git', True)
key_file = module.params['key_file']
ssh_opts = module.params['ssh_opts']
# We screenscrape a huge amount of git commands so use C locale anytime we
# call run_command()
module.run_command_environ_update = dict(LANG='C', LC_ALL='C', LC_MESSAGES='C', LC_CTYPE='C')
gitconfig = None
if not dest and allow_clone:
module.fail_json(msg="the destination directory must be specified unless clone=no")
elif dest:
dest = os.path.abspath(os.path.expanduser(dest))
if bare:
gitconfig = os.path.join(dest, 'config')
else:
gitconfig = os.path.join(dest, '.git', 'config')
# make sure the key_file path is expanded for ~ and $HOME
if key_file is not None:
key_file = os.path.abspath(os.path.expanduser(key_file))
# create a wrapper script and export
# GIT_SSH=<path> as an environment variable
# for git to use the wrapper script
ssh_wrapper = None
if key_file or ssh_opts:
ssh_wrapper = write_ssh_wrapper()
set_git_ssh(ssh_wrapper, key_file, ssh_opts)
module.add_cleanup_file(path=ssh_wrapper)
# add the git repo's hostkey
if module.params['ssh_opts'] is not None:
        if "-o StrictHostKeyChecking=no" not in module.params['ssh_opts']:
add_git_host_key(module, repo, accept_hostkey=module.params['accept_hostkey'])
else:
add_git_host_key(module, repo, accept_hostkey=module.params['accept_hostkey'])
recursive = module.params['recursive']
track_submodules = module.params['track_submodules']
rc, out, err, status = (0, None, None, None)
before = None
local_mods = False
repo_updated = None
if (dest and not os.path.exists(gitconfig)) or (not dest and not allow_clone):
# if there is no git configuration, do a clone operation unless:
# * the user requested no clone (they just want info)
# * we're doing a check mode test
# In those cases we do an ls-remote
if module.check_mode or not allow_clone:
remote_head = get_remote_head(git_path, module, dest, version, repo, bare)
module.exit_json(changed=True, before=before, after=remote_head)
# there's no git config, so clone
clone(git_path, module, repo, dest, remote, depth, version, bare, reference, refspec, verify_commit)
repo_updated = True
elif not update:
# Just return having found a repo already in the dest path
# this does no checking that the repo is the actual repo
# requested.
before = get_version(module, git_path, dest)
module.exit_json(changed=False, before=before, after=before)
else:
# else do a pull
local_mods = has_local_mods(module, git_path, dest, bare)
before = get_version(module, git_path, dest)
if local_mods:
# failure should happen regardless of check mode
if not force:
module.fail_json(msg="Local modifications exist in repository (force=no).")
# if force and in non-check mode, do a reset
if not module.check_mode:
reset(git_path, module, dest)
# exit if already at desired sha version
set_remote_url(git_path, module, repo, dest, remote)
remote_head = get_remote_head(git_path, module, dest, version, remote, bare)
if before == remote_head:
if local_mods:
module.exit_json(changed=True, before=before, after=remote_head,
msg="Local modifications exist")
elif is_remote_tag(git_path, module, dest, repo, version):
# if the remote is a tag and we have the tag locally, exit early
if version in get_tags(git_path, module, dest):
repo_updated = False
else:
# if the remote is a branch and we have the branch locally, exit early
if version in get_branches(git_path, module, dest):
repo_updated = False
if repo_updated is None:
if module.check_mode:
module.exit_json(changed=True, before=before, after=remote_head)
fetch(git_path, module, repo, dest, version, remote, bare, refspec)
repo_updated = True
# switch to version specified regardless of whether
# we got new revisions from the repository
if not bare:
switch_version(git_path, module, dest, remote, version, verify_commit)
# Deal with submodules
submodules_updated = False
if recursive and not bare:
submodules_updated = submodules_fetch(git_path, module, remote, track_submodules, dest)
if module.check_mode:
if submodules_updated:
module.exit_json(changed=True, before=before, after=remote_head, submodules_changed=True)
else:
module.exit_json(changed=False, before=before, after=remote_head)
if submodules_updated:
# Switch to version specified
submodule_update(git_path, module, dest, track_submodules)
# determine if we changed anything
after = get_version(module, git_path, dest)
changed = False
if before != after or local_mods or submodules_updated:
changed = True
# cleanup the wrapper script
if ssh_wrapper:
try:
os.remove(ssh_wrapper)
except OSError:
# No need to fail if the file already doesn't exist
pass
module.exit_json(changed=changed, before=before, after=after)
# import module snippets
from ansible.module_utils.basic import *
from ansible.module_utils.known_hosts import *
if __name__ == '__main__':
main()
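A standalone sketch (not part of the module) of two decisions the module makes above: `git submodule status` prefixes each line with `' '` (in sync), `'+'` (checked out at a different commit), `'-'` (not initialized), or `'U'` (merge conflicts), so any non-space prefix means a change; and the final `changed` flag reduces to a SHA comparison plus two booleans.

```python
def submodules_changed(status_output):
    """True if any `git submodule status` line reports an out-of-sync state."""
    return any(line and line[0] != ' ' for line in status_output.splitlines())

def repo_changed(before, after, local_mods=False, submodules_updated=False):
    """Mirror of the module's closing `changed` computation."""
    return before != after or local_mods or submodules_updated

# Hypothetical status lines for illustration:
print(submodules_changed(" 1a2b3c4 vendor/lib (v1.0)"))         # in sync
print(submodules_changed("+9f8e7d6 vendor/lib (heads/master)"))  # differs
print(repo_changed("abc", "abc"))
print(repo_changed("abc", "abc", local_mods=True))
```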
@@ -99,6 +99,11 @@
     #depends on no other vars
     depends_on: True
+  - db_host: "{{ EDXAPP_MYSQL_CSMH_REPLICA_HOST }}"
+    db_name: "{{ EDXAPP_MYSQL_CSMH_DB_NAME }}"
+    script_name: csmh-mysql.sh
+    depends_on: True
   - db_host: "{{ AD_HOC_REPORTING_XQUEUE_MYSQL_REPLICA_HOST }}"
     db_name: "{{ XQUEUE_MYSQL_DB_NAME }}"
     script_name: xqueue-mysql.sh
...
@@ -15,7 +15,7 @@
   notify: restart alton
 - name: checkout the code
-  git: >
+  git_2_0_1: >
     dest="{{ alton_code_dir }}" repo="{{ alton_source_repo }}"
     version="{{ alton_version }}" accept_hostkey=yes
   sudo_user: "{{ alton_user }}"
...
@@ -66,8 +66,8 @@
 - name: migrate
   shell: >
     chdir={{ analytics_api_code_dir }}
-    DB_MIGRATION_USER={{ COMMON_MYSQL_MIGRATE_USER }}
-    DB_MIGRATION_PASS={{ COMMON_MYSQL_MIGRATE_PASS }}
+    DB_MIGRATION_USER='{{ COMMON_MYSQL_MIGRATE_USER }}'
+    DB_MIGRATION_PASS='{{ COMMON_MYSQL_MIGRATE_PASS }}'
     {{ analytics_api_home }}/venvs/{{ analytics_api_service_name }}/bin/python ./manage.py migrate --noinput
   sudo_user: "{{ analytics_api_user }}"
   environment: "{{ analytics_api_environment }}"
...
{
"connection_user": "hadoop",
"credentials_file_url": "/edx/etc/edx-analytics-pipeline/output.json",
"exporter_output_bucket": "",
"geolocation_data": "/var/tmp/geolocation-data.mmdb",
"hive_user": "hadoop",
"host": "localhost",
"identifier": "local-devstack",
"manifest_input_format": "org.edx.hadoop.input.ManifestTextInputFormat",
"oddjob_jar": "hdfs://localhost:9000/edx-analytics-pipeline/packages/edx-analytics-hadoop-util.jar",
"tasks_branch": "origin/HEAD",
"tasks_log_path": "/tmp/acceptance/",
"tasks_output_url": "hdfs://localhost:9000/acceptance-test-output/",
"tasks_repo": "/edx/app/analytics_pipeline/analytics_pipeline",
"vertica_creds_url": "",
"wheel_url": "https://edx-wheelhouse.s3-website-us-east-1.amazonaws.com/Ubuntu/precise"
}
@@ -10,9 +10,9 @@
 #
 #
 # Tasks for role analytics_pipeline
 #
 # Overview:
 #
 # Prepare the machine to run the edX Analytics Data Pipeline. The pipeline currently "installs itself"
 # via an ansible playbook that is not included in the edx/configuration repo. However, in order to
 # run the pipeline in a devstack environment, some configuration needs to be performed. In a production
@@ -24,7 +24,7 @@
 # hadoop_master: ensures hadoop services are installed
 # hive: the pipeline makes extensive usage of hive, so that needs to be installed as well
 # sqoop: similarly to hive, the pipeline uses this tool extensively
 #
 # Example play:
 #
 # - name: Deploy all dependencies of edx-analytics-pipeline to the node
@@ -83,7 +83,7 @@
     - install:configuration
 - name: util library source checked out
-  git: >
+  git_2_0_1: >
     dest={{ analytics_pipeline_util_library.path }} repo={{ analytics_pipeline_util_library.repo }}
     version={{ analytics_pipeline_util_library.version }}
   tags:
@@ -174,3 +174,22 @@
   tags:
     - install
     - install:configuration
+- name: store configuration for acceptance tests
+  copy: >
+    src=acceptance.json
+    dest=/var/tmp/acceptance.json
+    mode=644
+  tags:
+    - install
+    - install:configuration
+- name: grant access to table storing test data in output database
+  mysql_user: >
+    user={{ ANALYTICS_PIPELINE_OUTPUT_DATABASE.username }}
+    password={{ ANALYTICS_PIPELINE_OUTPUT_DATABASE.password }}
+    priv=acceptance%.*:ALL
+    append_privs=yes
+  tags:
+    - install
+    - install:configuration
@@ -42,7 +42,7 @@
 {{ role_name|upper }}_VERSION: "master"
 {{ role_name|upper }}_DJANGO_SETTINGS_MODULE: "{{ role_name }}.settings.production"
 {{ role_name|upper }}_URL_ROOT: 'http://{{ role_name }}:18{{ port_suffix }}'
-{{ role_name|upper }}_OAUTH_URL_ROOT: 'http://127.0.0.1:8000'
+{{ role_name|upper }}_OAUTH_URL_ROOT: '{{ EDXAPP_LMS_ISSUER | default("http://127.0.0.1:8000/oauth2") }}'
 {{ role_name|upper }}_SECRET_KEY: 'Your secret key here'
 {{ role_name|upper }}_TIME_ZONE: 'UTC'
@@ -63,7 +63,7 @@
 SOCIAL_AUTH_EDX_OIDC_KEY: '{{ '{{' }} {{ role_name|upper }}_SOCIAL_AUTH_EDX_OIDC_KEY }}'
 SOCIAL_AUTH_EDX_OIDC_SECRET: '{{ '{{' }} {{ role_name|upper }}_SOCIAL_AUTH_EDX_OIDC_SECRET }}'
 SOCIAL_AUTH_EDX_OIDC_ID_TOKEN_DECRYPTION_KEY: '{{ '{{' }} {{ role_name|upper }}_SOCIAL_AUTH_EDX_OIDC_SECRET }}'
-SOCIAL_AUTH_EDX_OIDC_URL_ROOT: '{{ '{{' }} {{ role_name|upper }}_OAUTH_URL_ROOT }}/oauth2'
+SOCIAL_AUTH_EDX_OIDC_URL_ROOT: '{{ '{{' }} {{ role_name|upper }}_OAUTH_URL_ROOT }}'
 SOCIAL_AUTH_REDIRECT_IS_HTTPS: '{{ '{{' }} {{ role_name|upper }}_SOCIAL_AUTH_REDIRECT_IS_HTTPS }}'
 STATIC_ROOT: "{{ '{{' }} COMMON_DATA_DIR }}/{{ '{{' }} {{ role_name }}_service_name }}/staticfiles"
...
@@ -16,13 +16,17 @@
 # logs by security group.
 # !! The buckets defined below MUST exist prior to enabling !!
 # this feature and the instance IAM role must have write permissions
-# to the buckets
+# to the buckets, or you must specify the access and secret keys below.
 AWS_S3_LOGS: false
 # If there are any issues with the s3 sync an error
 # log will be sent to the following address.
 # This relies on your server being able to send mail
 AWS_S3_LOGS_NOTIFY_EMAIL: dummy@example.com
 AWS_S3_LOGS_FROM_EMAIL: dummy@example.com
+# Credentials for S3 access in case the instance role doesn't have write
+# permissions to S3
+AWS_S3_LOGS_ACCESS_KEY_ID: ""
+AWS_S3_LOGS_SECRET_KEY: ""
 #
 # vars are namespace with the module name.
@@ -50,7 +54,7 @@ aws_s3_sync_script: "{{ aws_dirs.home.path }}/send-logs-to-s3"
 aws_s3_logfile: "{{ aws_dirs.logs.path }}/s3-log-sync.log"
 aws_region: "us-east-1"
 # default path to the aws binary
-aws_s3cmd: "{{ COMMON_BIN_DIR }}/s3cmd"
+aws_s3cmd: "/usr/local/bin/s3cmd"
 aws_cmd: "/usr/local/bin/aws"
 #
 # OS packages
@@ -63,7 +67,6 @@ aws_pip_pkgs:
   - https://s3.amazonaws.com/cloudformation-examples/aws-cfn-bootstrap-latest.tar.gz
   - awscli==1.4.2
   - boto=="{{ common_boto_version }}"
+  - s3cmd==1.6.1
 aws_redhat_pkgs: []
-aws_s3cmd_version: s3cmd-1.5.0-beta1
-aws_s3cmd_url: "http://files.edx.org/s3cmd/{{ aws_s3cmd_version }}.tar.gz"
@@ -70,23 +70,6 @@
     extra_args="-i {{ COMMON_PYPI_MIRROR_URL }}"
   with_items: aws_pip_pkgs
-- name: get s3cmd
-  get_url: >
-    url={{ aws_s3cmd_url }}
-    dest={{ aws_dirs.data.path }}/
-- name: untar s3cmd
-  shell: >
-    tar xf {{ aws_dirs.data.path }}/{{ aws_s3cmd_version }}.tar.gz
-    creates={{ aws_dirs.data.path }}/{{ aws_s3cmd_version }}/s3cmd
-    chdir={{ aws_dirs.home.path }}
-- name: create symlink for s3cmd
-  file: >
-    src={{ aws_dirs.home.path }}/{{ aws_s3cmd_version }}/s3cmd
-    dest={{ aws_s3cmd }}
-    state=link
 - name: create s3 log sync script
   template: >
     dest={{ aws_s3_sync_script }}
...
@@ -116,5 +116,11 @@ availability_zone=$(ec2metadata --availability-zone)
 # region isn't available via the metadata service
 region=${availability_zone:0:${{ lb }}#availability_zone{{ rb }} - 1}
+{% if AWS_S3_LOGS_ACCESS_KEY_ID %}
+auth_opts="--access_key {{ AWS_S3_LOGS_ACCESS_KEY_ID }} --secret_key {{ AWS_S3_LOGS_SECRET_KEY }}"
+{% else %}
+auth_opts=""
+{% endif %}
 s3_path="${2}/$sec_grp/"
-$noop {{ aws_s3cmd }} --multipart-chunk-size-mb 5120 --disable-multipart sync $directory "s3://${bucket_path}/${sec_grp}/${instance_id}-${ip}/"
+$noop {{ aws_s3cmd }} $auth_opts --multipart-chunk-size-mb 5120 --disable-multipart sync $directory "s3://${bucket_path}/${sec_grp}/${instance_id}-${ip}/"
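The template change above boils down to one rule: pass explicit s3cmd auth flags only when `AWS_S3_LOGS_ACCESS_KEY_ID` is non-empty, and otherwise let s3cmd fall back to the instance's IAM role. A minimal Python sketch of that decision (helper name is hypothetical, not part of the repo):

```python
def s3cmd_auth_opts(access_key_id, secret_key):
    """Mirror of the Jinja {% if %} block: emit auth flags only when a key is set."""
    if access_key_id:
        return "--access_key %s --secret_key %s" % (access_key_id, secret_key)
    return ""  # empty string -> s3cmd uses the instance IAM role

print(s3cmd_auth_opts("", ""))                  # no flags: rely on IAM role
print(s3cmd_auth_opts("AKIDEXAMPLE", "sekrit"))
```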
@@ -15,6 +15,13 @@
   when: download_deb.changed
   with_items: browser_s3_deb_pkgs
+# Because the source location has been deprecated, we need to
+# ensure it does not interfere with subsequent apt commands
+- name: remove google chrome debian source list
+  file:
+    path: /etc/apt/sources.list.d/google-chrome.list
+    state: absent
 - name: download ChromeDriver
   get_url:
     url={{ chromedriver_url }}
...
@@ -41,7 +41,7 @@
   when: CERTS_GIT_IDENTITY != "none"
 - name: checkout certificates repo into {{ certs_code_dir }}
-  git: >
+  git_2_0_1: >
     dest={{ certs_code_dir }} repo={{ CERTS_REPO }} version={{ certs_version }}
     accept_hostkey=yes
   sudo_user: "{{ certs_user }}"
@@ -51,7 +51,7 @@
   when: CERTS_GIT_IDENTITY != "none"
 - name: checkout certificates repo into {{ certs_code_dir }}
-  git: >
+  git_2_0_1: >
     dest={{ certs_code_dir }} repo={{ CERTS_REPO }} version={{ certs_version }}
     accept_hostkey=yes
   sudo_user: "{{ certs_user }}"
...
@@ -46,8 +46,9 @@ CREDENTIALS_CACHES:
     LOCATION: '{{ CREDENTIALS_MEMCACHE }}'
 CREDENTIALS_DJANGO_SETTINGS_MODULE: "credentials.settings.production"
-CREDENTIALS_URL_ROOT: 'http://credentials:18150'
-CREDENTIALS_OAUTH_URL_ROOT: 'http://127.0.0.1:8000'
+CREDENTIALS_DOMAIN: 'credentials'
+CREDENTIALS_URL_ROOT: 'http://{{ CREDENTIALS_DOMAIN }}:18150'
+CREDENTIALS_OAUTH_URL_ROOT: '{{ EDXAPP_LMS_ISSUER | default("http://127.0.0.1:8000/oauth2") }}'
 CREDENTIALS_SECRET_KEY: 'SET-ME-TO-A-UNIQUE-LONG-RANDOM-STRING'
 CREDENTIALS_TIME_ZONE: 'UTC'
@@ -87,6 +88,9 @@ CREDENTIALS_STATIC_URL: '/static/'
 # Example settings to use Amazon S3 as a storage backend with django storages:
 # https://django-storages.readthedocs.org/en/latest/backends/amazon-S3.html#amazon-s3
 #
+# Note, AWS_S3_CUSTOM_DOMAIN is required, otherwise boto will generate non-working
+# querystring URLs for assets (see https://github.com/boto/boto/issues/1477)
+#
 # CREDENTIALS_BUCKET: mybucket
 # credentials_s3_domain: s3.amazonaws.com
 # CREDENTIALS_MEDIA_ROOT: 'media'
@@ -94,7 +98,7 @@ CREDENTIALS_STATIC_URL: '/static/'
 #
 # CREDENTIALS_FILE_STORAGE_BACKEND:
 #   AWS_STORAGE_BUCKET_NAME: '{{ CREDENTIALS_BUCKET }}'
-#   AWS_CUSTOM_DOMAIN: '{{ CREDENTIALS_BUCKET }}.{{ credentials_s3_domain }}'
+#   AWS_S3_CUSTOM_DOMAIN: '{{ CREDENTIALS_BUCKET }}.{{ credentials_s3_domain }}'
 #   AWS_ACCESS_KEY_ID: 'XXXAWS_ACCESS_KEYXXX'
 #   AWS_SECRET_ACCESS_KEY: 'XXXAWS_SECRET_KEYXXX'
 #   AWS_QUERYSTRING_AUTH: False
@@ -117,9 +121,14 @@ CREDENTIALS_FILE_STORAGE_BACKEND:
   STATIC_ROOT: '{{ CREDENTIALS_STATIC_ROOT }}'
   MEDIA_URL: '{{ CREDENTIALS_MEDIA_URL }}'
   STATIC_URL: '{{ CREDENTIALS_STATIC_URL }}'
-  STATICFILES_STORAGE: 'django.contrib.staticfiles.storage.StaticFilesStorage'
+  STATICFILES_STORAGE: 'django.contrib.staticfiles.storage.ManifestStaticFilesStorage'
   DEFAULT_FILE_STORAGE: 'django.core.files.storage.FileSystemStorage'
+# Note: the protocol for CORS whitelist values is necessary for matching the correct origin by nginx
+CREDENTIALS_CORS_WHITELIST:
+  - "http://{{ CREDENTIALS_DOMAIN }}"
+  - "https://{{ CREDENTIALS_DOMAIN }}"
 CREDENTIALS_VERSION: "master"
 CREDENTIALS_REPOS:
   - PROTOCOL: "{{ COMMON_GIT_PROTOCOL }}"
@@ -146,11 +155,11 @@ CREDENTIALS_SERVICE_CONFIG:
   TIME_ZONE: '{{ CREDENTIALS_TIME_ZONE }}'
   LANGUAGE_CODE: '{{ CREDENTIALS_LANGUAGE_CODE }}'
-  OAUTH2_PROVIDER_URL: '{{ CREDENTIALS_OAUTH_URL_ROOT }}/oauth2'
+  OAUTH2_PROVIDER_URL: '{{ CREDENTIALS_OAUTH_URL_ROOT }}'
   SOCIAL_AUTH_EDX_OIDC_KEY: '{{ CREDENTIALS_SOCIAL_AUTH_EDX_OIDC_KEY }}'
   SOCIAL_AUTH_EDX_OIDC_SECRET: '{{ CREDENTIALS_SOCIAL_AUTH_EDX_OIDC_SECRET }}'
   SOCIAL_AUTH_EDX_OIDC_ID_TOKEN_DECRYPTION_KEY: '{{ CREDENTIALS_SOCIAL_AUTH_EDX_OIDC_SECRET }}'
-  SOCIAL_AUTH_EDX_OIDC_URL_ROOT: '{{ CREDENTIALS_OAUTH_URL_ROOT }}/oauth2'
+  SOCIAL_AUTH_EDX_OIDC_URL_ROOT: '{{ CREDENTIALS_OAUTH_URL_ROOT }}'
   SOCIAL_AUTH_REDIRECT_IS_HTTPS: '{{ CREDENTIALS_SOCIAL_AUTH_REDIRECT_IS_HTTPS }}'
 # db config
...
@@ -38,6 +38,9 @@
     state: present
   sudo_user: "{{ credentials_user }}"
   with_items: "{{ credentials_requirements }}"
+  tags:
+    - install
+    - install:app-requirements
 - name: create nodeenv
   shell: >
...
@@ -15,6 +15,11 @@ upstream credentials_app_server {
 {% endfor %}
 }
+map $http_origin $cors_header {
+  default "";
+  '~*^({{ CREDENTIALS_CORS_WHITELIST|join('|')|replace('.', '\.') }})$' "$http_origin";
+}
 server {
   server_name {{ CREDENTIALS_HOSTNAME }};
@@ -39,6 +44,8 @@ server {
   location ~ ^{{ CREDENTIALS_STATIC_URL }}(?P<file>.*) {
     root {{ CREDENTIALS_STATIC_ROOT }};
+    add_header Access-Control-Allow-Origin $cors_header always;
+    add_header Cache-Control "max-age=31536000";
     try_files /$file =404;
   }
...
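The Jinja expression in the nginx `map` block above joins the CORS whitelist with `|` and escapes dots, so the resulting regex matches whitelisted origins literally (nginx's `~*` makes it case-insensitive). A sketch of the regex it produces, using a hypothetical two-origin whitelist:

```python
import re

# Hypothetical whitelist values; the real ones come from CREDENTIALS_CORS_WHITELIST.
whitelist = ["http://credentials.example.com", "https://credentials.example.com"]

# Same transformation as join('|') followed by replace('.', '\.') in the template.
pattern = "^(%s)$" % "|".join(o.replace(".", r"\.") for o in whitelist)

print(pattern)
# Escaped dots mean '.' only matches a literal dot, not any character:
print(bool(re.match(pattern, "https://credentials.example.com", re.IGNORECASE)))
print(bool(re.match(pattern, "https://credentialsXexample.com", re.IGNORECASE)))
```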
 ---
 - name: check out the demo course
-  git: >
+  git_2_0_1: >
     dest={{ demo_code_dir }} repo={{ demo_repo }} version={{ demo_version }}
     accept_hostkey=yes
   sudo_user: "{{ demo_edxapp_user }}"
...
@@ -55,7 +55,10 @@ DISCOVERY_CACHES:
 DISCOVERY_VERSION: "master"
 DISCOVERY_DJANGO_SETTINGS_MODULE: "course_discovery.settings.production"
 DISCOVERY_URL_ROOT: 'http://discovery:18381'
-DISCOVERY_OAUTH_URL_ROOT: 'http://127.0.0.1:8000'
+DISCOVERY_OAUTH_URL_ROOT: '{{ EDXAPP_LMS_ISSUER | default("http://127.0.0.1:8000/oauth2") }}'
+DISCOVERY_EDX_DRF_EXTENSIONS:
+  OAUTH2_USER_INFO_URL: '{{ DISCOVERY_OAUTH_URL_ROOT }}/user_info'
 DISCOVERY_SECRET_KEY: 'Your secret key here'
 DISCOVERY_TIME_ZONE: 'UTC'
@@ -79,7 +82,7 @@ DISCOVERY_SERVICE_CONFIG:
   SOCIAL_AUTH_EDX_OIDC_KEY: '{{ DISCOVERY_SOCIAL_AUTH_EDX_OIDC_KEY }}'
   SOCIAL_AUTH_EDX_OIDC_SECRET: '{{ DISCOVERY_SOCIAL_AUTH_EDX_OIDC_SECRET }}'
   SOCIAL_AUTH_EDX_OIDC_ID_TOKEN_DECRYPTION_KEY: '{{ DISCOVERY_SOCIAL_AUTH_EDX_OIDC_SECRET }}'
-  SOCIAL_AUTH_EDX_OIDC_URL_ROOT: '{{ DISCOVERY_OAUTH_URL_ROOT }}/oauth2'
+  SOCIAL_AUTH_EDX_OIDC_URL_ROOT: '{{ DISCOVERY_OAUTH_URL_ROOT }}'
   SOCIAL_AUTH_REDIRECT_IS_HTTPS: '{{ DISCOVERY_SOCIAL_AUTH_REDIRECT_IS_HTTPS }}'
   STATIC_ROOT: "{{ COMMON_DATA_DIR }}/{{ discovery_service_name }}/staticfiles"
@@ -95,6 +98,8 @@ DISCOVERY_SERVICE_CONFIG:
   ECOMMERCE_API_URL: '{{ DISCOVERY_ECOMMERCE_API_URL }}'
   COURSES_API_URL: '{{ DISCOVERY_COURSES_API_URL }}'
+  EDX_DRF_EXTENSIONS: '{{ DISCOVERY_EDX_DRF_EXTENSIONS }}'
 DISCOVERY_REPOS:
   - PROTOCOL: "{{ COMMON_GIT_PROTOCOL }}"
...
@@ -54,8 +54,8 @@
 - name: migrate
   shell: >
     chdir={{ ecommerce_code_dir }}
-    DB_MIGRATION_USER={{ COMMON_MYSQL_MIGRATE_USER }}
-    DB_MIGRATION_PASS={{ COMMON_MYSQL_MIGRATE_PASS }}
+    DB_MIGRATION_USER='{{ COMMON_MYSQL_MIGRATE_USER }}'
+    DB_MIGRATION_PASS='{{ COMMON_MYSQL_MIGRATE_PASS }}'
     {{ ecommerce_venv_dir }}/bin/python ./manage.py migrate --noinput
   sudo_user: "{{ ecommerce_user }}"
   environment: "{{ ecommerce_environment }}"
...
 ---
 - name: git checkout edx_ansible repo into edx_ansible_code_dir
-  git: >
+  git_2_0_1: >
     dest={{ edx_ansible_code_dir }} repo={{ edx_ansible_source_repo }} version={{ configuration_version }}
     accept_hostkey=yes
   sudo_user: "{{ edx_ansible_user }}"
...
@@ -12,7 +12,7 @@ IFS=","
     -v        add verbosity to edx_ansible run
     -h        this
-    <repo> - must be one of edx-platform, edx-workers, xqueue, cs_comments_service, xserver, configuration, read-only-certificate-code, edx-analytics-data-api, edx-ora2, insights, ecommerce, programs, course_discovery
+    <repo> - must be one of edx-platform, edx-workers, xqueue, cs_comments_service, credentials, xserver, configuration, read-only-certificate-code, edx-analytics-data-api, edx-ora2, insights, ecommerce, programs, course_discovery
     <version> - can be a commit or tag
 EO
@@ -48,6 +48,7 @@
 repos_to_cmd["edx-platform"]="$edx_ansible_cmd edxapp.yml -e 'edx_platform_version=$2'"
 repos_to_cmd["edx-workers"]="$edx_ansible_cmd edxapp.yml -e 'edx_platform_version=$2' -e 'celery_worker=true'"
 repos_to_cmd["xqueue"]="$edx_ansible_cmd xqueue.yml -e 'xqueue_version=$2' -e 'elb_pre_post=false'"
+repos_to_cmd["credentials"]="$edx_ansible_cmd credentials.yml -e 'credentials_version=$2'"
 repos_to_cmd["cs_comments_service"]="$edx_ansible_cmd forum.yml -e 'forum_version=$2'"
 repos_to_cmd["xserver"]="$edx_ansible_cmd xserver.yml -e 'xserver_version=$2'"
 repos_to_cmd["configuration"]="$edx_ansible_cmd edx_ansible.yml -e 'configuration_version=$2'"
...
@@ -108,7 +108,6 @@ edx_notes_api_requirements_base: "{{ edx_notes_api_code_dir }}/requirements"
 # Application python requirements
 edx_notes_api_requirements:
   - base.txt
-  - optional.txt
 #
 # OS packages
...
@@ -55,8 +55,8 @@
 - name: migrate
   shell: >
     chdir={{ edx_notes_api_code_dir }}
-    DB_MIGRATION_USER={{ COMMON_MYSQL_MIGRATE_USER }}
-    DB_MIGRATION_PASS={{ COMMON_MYSQL_MIGRATE_PASS }}
+    DB_MIGRATION_USER='{{ COMMON_MYSQL_MIGRATE_USER }}'
+    DB_MIGRATION_PASS='{{ COMMON_MYSQL_MIGRATE_PASS }}'
     {{ edx_notes_api_home }}/venvs/{{ edx_notes_api_service_name }}/bin/python {{ edx_notes_api_manage }} migrate --noinput --settings="notesserver.settings.yaml_config"
   sudo_user: "{{ edx_notes_api_user }}"
   environment:
...
@@ -163,7 +163,7 @@
     - install:code
 - name: checkout code over ssh
-  git: >
+  git_2_0_1: >
     repo=git@{{ item.DOMAIN }}:{{ item.PATH }}/{{ item.REPO }}
     dest={{ item.DESTINATION }} version={{ item.VERSION }}
     accept_hostkey=yes key_file={{ edx_service_home }}/.ssh/{{ item.REPO }}
@@ -176,7 +176,7 @@
     - install:code
 - name: checkout code over https
-  git: >
+  git_2_0_1: >
     repo=https://{{ item.DOMAIN }}/{{ item.PATH }}/{{ item.REPO }}
     dest={{ item.DESTINATION }} version={{ item.VERSION }}
   sudo_user: "{{ edx_service_user }}"
...
...@@ -35,7 +35,7 @@ ...@@ -35,7 +35,7 @@
# Example play: # Example play:
# #
# export AWS_PROFILE=sandbox # export AWS_PROFILE=sandbox
# ansible-playbook -c local -i 'localhost,' edx_service_rds.yml -e@~/vpc-test.yml -e@~/e0dTest-edx.yml -e 'cluster=test' # ansible-playbook -i 'localhost,' edx_service_rds.yml -e@/path/to/secure-repo/cloud_migrations/vpcs/vpc-file.yml -e@/path/to/secure-repo/cloud_migrations/dbs/e-d-c-rds.yml
# #
# TODO: # TODO:
# - handle db deletes and updates # - handle db deletes and updates
......
...@@ -133,6 +133,7 @@ EDXAPP_CAS_ATTRIBUTE_PACKAGE: "" ...@@ -133,6 +133,7 @@ EDXAPP_CAS_ATTRIBUTE_PACKAGE: ""
EDXAPP_ENABLE_AUTO_AUTH: false EDXAPP_ENABLE_AUTO_AUTH: false
# Settings for enabling and configuring third party authorization # Settings for enabling and configuring third party authorization
EDXAPP_ENABLE_THIRD_PARTY_AUTH: false EDXAPP_ENABLE_THIRD_PARTY_AUTH: false
EDXAPP_ENABLE_OAUTH2_PROVIDER: false
EDXAPP_ENABLE_EDXNOTES: false EDXAPP_ENABLE_EDXNOTES: false
...@@ -142,9 +143,6 @@ EDXAPP_ENABLE_CREDIT_API: false ...@@ -142,9 +143,6 @@ EDXAPP_ENABLE_CREDIT_API: false
# Settings for enabling and JWT auth for DRF API's # Settings for enabling and JWT auth for DRF API's
EDXAPP_ENABLE_JWT_AUTH: false EDXAPP_ENABLE_JWT_AUTH: false
EDXAPP_MODULESTORE_MAPPINGS:
'preview\.': 'draft-preferred'
EDXAPP_GIT_REPO_DIR: '/edx/var/edxapp/course_repos' EDXAPP_GIT_REPO_DIR: '/edx/var/edxapp/course_repos'
EDXAPP_GIT_REPO_EXPORT_DIR: '/edx/var/edxapp/export_course_repos' EDXAPP_GIT_REPO_EXPORT_DIR: '/edx/var/edxapp/export_course_repos'
...@@ -198,6 +196,7 @@ EDXAPP_FEATURES: ...@@ -198,6 +196,7 @@ EDXAPP_FEATURES:
ENABLE_CREDIT_ELIGIBILITY: "{{ EDXAPP_ENABLE_CREDIT_ELIGIBILITY }}" ENABLE_CREDIT_ELIGIBILITY: "{{ EDXAPP_ENABLE_CREDIT_ELIGIBILITY }}"
ENABLE_SPECIAL_EXAMS: false ENABLE_SPECIAL_EXAMS: false
ENABLE_JWT_AUTH: "{{ EDXAPP_ENABLE_JWT_AUTH }}" ENABLE_JWT_AUTH: "{{ EDXAPP_ENABLE_JWT_AUTH }}"
ENABLE_OAUTH2_PROVIDER: "{{ EDXAPP_ENABLE_OAUTH2_PROVIDER }}"
EDXAPP_BOOK_URL: "" EDXAPP_BOOK_URL: ""
# This needs to be set to localhost # This needs to be set to localhost
...@@ -634,6 +633,26 @@ EDXAPP_LMS_SPLIT_DOC_STORE_CONFIG: ...@@ -634,6 +633,26 @@ EDXAPP_LMS_SPLIT_DOC_STORE_CONFIG:
EDXAPP_CMS_DOC_STORE_CONFIG: EDXAPP_CMS_DOC_STORE_CONFIG:
<<: *edxapp_generic_default_docstore <<: *edxapp_generic_default_docstore
edxapp_databases:
# edxapp's edxapp-migrate scripts and the edxapp_migrate play
# will ensure that any DB not named read_replica will be migrated
# for both the lms and cms.
read_replica:
ENGINE: 'django.db.backends.mysql'
NAME: "{{ EDXAPP_MYSQL_REPLICA_DB_NAME }}"
USER: "{{ EDXAPP_MYSQL_REPLICA_USER }}"
PASSWORD: "{{ EDXAPP_MYSQL_REPLICA_PASSWORD }}"
HOST: "{{ EDXAPP_MYSQL_REPLICA_HOST }}"
PORT: "{{ EDXAPP_MYSQL_REPLICA_PORT }}"
default:
ENGINE: 'django.db.backends.mysql'
NAME: "{{ EDXAPP_MYSQL_DB_NAME }}"
USER: "{{ EDXAPP_MYSQL_USER }}"
PASSWORD: "{{ EDXAPP_MYSQL_PASSWORD }}"
HOST: "{{ EDXAPP_MYSQL_HOST }}"
PORT: "{{ EDXAPP_MYSQL_PORT }}"
ATOMIC_REQUESTS: True
edxapp_generic_auth_config: &edxapp_generic_auth edxapp_generic_auth_config: &edxapp_generic_auth
EVENT_TRACKING_SEGMENTIO_EMIT_WHITELIST: "{{ EDXAPP_EVENT_TRACKING_SEGMENTIO_EMIT_WHITELIST }}" EVENT_TRACKING_SEGMENTIO_EMIT_WHITELIST: "{{ EDXAPP_EVENT_TRACKING_SEGMENTIO_EMIT_WHITELIST }}"
ECOMMERCE_API_SIGNING_KEY: "{{ EDXAPP_ECOMMERCE_API_SIGNING_KEY }}" ECOMMERCE_API_SIGNING_KEY: "{{ EDXAPP_ECOMMERCE_API_SIGNING_KEY }}"
...@@ -662,24 +681,7 @@ edxapp_generic_auth_config: &edxapp_generic_auth ...@@ -662,24 +681,7 @@ edxapp_generic_auth_config: &edxapp_generic_auth
ssl: "{{ EDXAPP_MONGO_USE_SSL }}" ssl: "{{ EDXAPP_MONGO_USE_SSL }}"
ADDITIONAL_OPTIONS: "{{ EDXAPP_CONTENTSTORE_ADDITIONAL_OPTS }}" ADDITIONAL_OPTIONS: "{{ EDXAPP_CONTENTSTORE_ADDITIONAL_OPTS }}"
DOC_STORE_CONFIG: *edxapp_generic_default_docstore DOC_STORE_CONFIG: *edxapp_generic_default_docstore
DATABASES: DATABASES: "{{ edxapp_databases }}"
# edxapp's edxapp-migrate scripts and the edxapp_migrate play
# will ensure that any DB not named read_replica will be migrated
# for both the lms and cms.
read_replica:
ENGINE: 'django.db.backends.mysql'
NAME: "{{ EDXAPP_MYSQL_REPLICA_DB_NAME }}"
USER: "{{ EDXAPP_MYSQL_REPLICA_USER }}"
PASSWORD: "{{ EDXAPP_MYSQL_REPLICA_PASSWORD }}"
HOST: "{{ EDXAPP_MYSQL_REPLICA_HOST }}"
PORT: "{{ EDXAPP_MYSQL_REPLICA_PORT }}"
default:
ENGINE: 'django.db.backends.mysql'
NAME: "{{ EDXAPP_MYSQL_DB_NAME }}"
USER: "{{ EDXAPP_MYSQL_USER }}"
PASSWORD: "{{ EDXAPP_MYSQL_PASSWORD }}"
HOST: "{{ EDXAPP_MYSQL_HOST }}"
PORT: "{{ EDXAPP_MYSQL_PORT }}"
ANALYTICS_API_KEY: "{{ EDXAPP_ANALYTICS_API_KEY }}" ANALYTICS_API_KEY: "{{ EDXAPP_ANALYTICS_API_KEY }}"
EMAIL_HOST_USER: "{{ EDXAPP_EMAIL_HOST_USER }}" EMAIL_HOST_USER: "{{ EDXAPP_EMAIL_HOST_USER }}"
EMAIL_HOST_PASSWORD: "{{ EDXAPP_EMAIL_HOST_PASSWORD }}" EMAIL_HOST_PASSWORD: "{{ EDXAPP_EMAIL_HOST_PASSWORD }}"
...@@ -822,7 +824,6 @@ generic_env_config: &edxapp_generic_env ...@@ -822,7 +824,6 @@ generic_env_config: &edxapp_generic_env
CAS_SERVER_URL: "{{ EDXAPP_CAS_SERVER_URL }}" CAS_SERVER_URL: "{{ EDXAPP_CAS_SERVER_URL }}"
CAS_EXTRA_LOGIN_PARAMS: "{{ EDXAPP_CAS_EXTRA_LOGIN_PARAMS }}" CAS_EXTRA_LOGIN_PARAMS: "{{ EDXAPP_CAS_EXTRA_LOGIN_PARAMS }}"
CAS_ATTRIBUTE_CALLBACK: "{{ EDXAPP_CAS_ATTRIBUTE_CALLBACK }}" CAS_ATTRIBUTE_CALLBACK: "{{ EDXAPP_CAS_ATTRIBUTE_CALLBACK }}"
HOSTNAME_MODULESTORE_DEFAULT_MAPPINGS: "{{ EDXAPP_MODULESTORE_MAPPINGS }}"
UNIVERSITY_EMAIL: "{{ EDXAPP_UNIVERSITY_EMAIL }}" UNIVERSITY_EMAIL: "{{ EDXAPP_UNIVERSITY_EMAIL }}"
PRESS_EMAIL: "{{ EDXAPP_PRESS_EMAIL }}" PRESS_EMAIL: "{{ EDXAPP_PRESS_EMAIL }}"
SOCIAL_MEDIA_FOOTER_URLS: "{{ EDXAPP_SOCIAL_MEDIA_FOOTER_URLS }}" SOCIAL_MEDIA_FOOTER_URLS: "{{ EDXAPP_SOCIAL_MEDIA_FOOTER_URLS }}"
......
...@@ -63,7 +63,7 @@ ...@@ -63,7 +63,7 @@
# Do A Checkout # Do A Checkout
- name: checkout edx-platform repo into {{ edxapp_code_dir }} - name: checkout edx-platform repo into {{ edxapp_code_dir }}
git: > git_2_0_1: >
dest={{ edxapp_code_dir }} dest={{ edxapp_code_dir }}
repo={{ edx_platform_repo }} repo={{ edx_platform_repo }}
version={{ edx_platform_version }} version={{ edx_platform_version }}
...@@ -90,7 +90,7 @@ ...@@ -90,7 +90,7 @@
# (yes, lowercase) to a Stanford-style theme and set # (yes, lowercase) to a Stanford-style theme and set
# edxapp_theme_name (again, lowercase) to its name. # edxapp_theme_name (again, lowercase) to its name.
- name: checkout Stanford-style theme - name: checkout Stanford-style theme
git: > git_2_0_1: >
dest={{ edxapp_app_dir }}/themes/{{ edxapp_theme_name }} dest={{ edxapp_app_dir }}/themes/{{ edxapp_theme_name }}
repo={{ edxapp_theme_source_repo }} repo={{ edxapp_theme_source_repo }}
version={{ edxapp_theme_version }} version={{ edxapp_theme_version }}
...@@ -109,7 +109,7 @@ ...@@ -109,7 +109,7 @@
# EDXAPP_COMPREHENSIVE_THEME_DIR to the directory you want to check # EDXAPP_COMPREHENSIVE_THEME_DIR to the directory you want to check
# out to. # out to.
- name: checkout comprehensive theme - name: checkout comprehensive theme
git: > git_2_0_1: >
dest={{ EDXAPP_COMPREHENSIVE_THEME_DIR }} dest={{ EDXAPP_COMPREHENSIVE_THEME_DIR }}
repo={{ EDXAPP_COMPREHENSIVE_THEME_SOURCE_REPO }} repo={{ EDXAPP_COMPREHENSIVE_THEME_SOURCE_REPO }}
version={{ EDXAPP_COMPREHENSIVE_THEME_VERSION }} version={{ EDXAPP_COMPREHENSIVE_THEME_VERSION }}
...@@ -118,7 +118,7 @@ ...@@ -118,7 +118,7 @@
sudo_user: "{{ edxapp_user }}" sudo_user: "{{ edxapp_user }}"
environment: environment:
GIT_SSH: "{{ edxapp_git_ssh }}" GIT_SSH: "{{ edxapp_git_ssh }}"
register: edxapp_theme_checkout register: edxapp_comprehensive_theme_checkout
tags: tags:
- install - install
- install:code - install:code
......
...@@ -9,6 +9,7 @@ edxlocal_databases: ...@@ -9,6 +9,7 @@ edxlocal_databases:
- "{{ ORA_MYSQL_DB_NAME | default(None) }}" - "{{ ORA_MYSQL_DB_NAME | default(None) }}"
- "{{ XQUEUE_MYSQL_DB_NAME | default(None) }}" - "{{ XQUEUE_MYSQL_DB_NAME | default(None) }}"
- "{{ EDXAPP_MYSQL_DB_NAME | default(None) }}" - "{{ EDXAPP_MYSQL_DB_NAME | default(None) }}"
- "{{ EDXAPP_MYSQL_CSMH_DB_NAME | default(None) }}"
- "{{ EDX_NOTES_API_MYSQL_DB_NAME | default(None) }}" - "{{ EDX_NOTES_API_MYSQL_DB_NAME | default(None) }}"
- "{{ PROGRAMS_DEFAULT_DB_NAME | default(None) }}" - "{{ PROGRAMS_DEFAULT_DB_NAME | default(None) }}"
- "{{ ANALYTICS_API_DEFAULT_DB_NAME | default(None) }}" - "{{ ANALYTICS_API_DEFAULT_DB_NAME | default(None) }}"
...@@ -43,6 +44,11 @@ edxlocal_database_users: ...@@ -43,6 +44,11 @@ edxlocal_database_users:
pass: "{{ EDXAPP_MYSQL_PASSWORD | default(None) }}" pass: "{{ EDXAPP_MYSQL_PASSWORD | default(None) }}"
} }
- { - {
db: "{{ EDXAPP_MYSQL_CSMH_DB_NAME | default(None) }}",
user: "{{ EDXAPP_MYSQL_CSMH_USER | default(None) }}",
pass: "{{ EDXAPP_MYSQL_CSMH_PASSWORD | default(None) }}"
}
- {
db: "{{ PROGRAMS_DEFAULT_DB_NAME | default(None) }}", db: "{{ PROGRAMS_DEFAULT_DB_NAME | default(None) }}",
user: "{{ PROGRAMS_DATABASES.default.USER | default(None) }}", user: "{{ PROGRAMS_DATABASES.default.USER | default(None) }}",
pass: "{{ PROGRAMS_DATABASES.default.PASSWORD | default(None) }}" pass: "{{ PROGRAMS_DATABASES.default.PASSWORD | default(None) }}"
......
...@@ -21,6 +21,7 @@ ...@@ -21,6 +21,7 @@
name: "{{ item.user }}" name: "{{ item.user }}"
password: "{{ item.pass }}" password: "{{ item.pass }}"
priv: "{{ item.db }}.*:ALL" priv: "{{ item.db }}.*:ALL"
append_privs: yes
when: item.db != None and item.db != '' when: item.db != None and item.db != ''
with_items: "{{ edxlocal_database_users }}" with_items: "{{ edxlocal_database_users }}"
......
...@@ -33,7 +33,7 @@ script.disable_dynamic: true ...@@ -33,7 +33,7 @@ script.disable_dynamic: true
# to perform discovery when new nodes (master or data) are started: # to perform discovery when new nodes (master or data) are started:
# #
# discovery.zen.ping.unicast.hosts: ["host1", "host2:port", "host3[portX-portY]"] # discovery.zen.ping.unicast.hosts: ["host1", "host2:port", "host3[portX-portY]"]
{%- if ELASTICSEARCH_CLUSTER_MEMBERS|length > 1 -%} {% if ELASTICSEARCH_CLUSTER_MEMBERS|length > 1 -%}
discovery.zen.ping.unicast.hosts: ['{{ELASTICSEARCH_CLUSTER_MEMBERS|join("\',\'") }}'] discovery.zen.ping.unicast.hosts: ['{{ELASTICSEARCH_CLUSTER_MEMBERS|join("\',\'") }}']
......
...@@ -39,7 +39,7 @@ ...@@ -39,7 +39,7 @@
- install:configuration - install:configuration
- name: git checkout forum repo into {{ forum_code_dir }} - name: git checkout forum repo into {{ forum_code_dir }}
git: > git_2_0_1: >
dest={{ forum_code_dir }} repo={{ forum_source_repo }} version={{ forum_version }} dest={{ forum_code_dir }} repo={{ forum_source_repo }} version={{ forum_version }}
accept_hostkey=yes accept_hostkey=yes
sudo_user: "{{ forum_user }}" sudo_user: "{{ forum_user }}"
......
# Tasks to run if cloning repos to edx-platform. # Tasks to run if cloning repos to edx-platform.
- name: clone all course repos - name: clone all course repos
git: dest={{ GITRELOAD_REPODIR }}/{{ item.name }} repo={{ item.url }} version={{ item.commit }} git_2_0_1: dest={{ GITRELOAD_REPODIR }}/{{ item.name }} repo={{ item.url }} version={{ item.commit }}
sudo_user: "{{ common_web_user }}" sudo_user: "{{ common_web_user }}"
with_items: GITRELOAD_REPOS with_items: GITRELOAD_REPOS
......
## In order to use this role, you must use a specific set of AMIs
[This role is for use with the AWS ECS AMIs listed here](https://docs.aws.amazon.com/AmazonECS/latest/developerguide/launch_container_instance.html)
---
#
# edX Configuration
#
# github: https://github.com/edx/configuration
# wiki: https://github.com/edx/configuration/wiki
# code style: https://github.com/edx/configuration/wiki/Ansible-Coding-Conventions
# license: https://github.com/edx/configuration/blob/master/LICENSE.TXT
#
#
# Defaults for role go-agent-docker-server
#
# key for go-agents to autoregister with the go-server
GO_SERVER_AUTO_REGISTER_KEY: "dev-only-override-this-key"
GO_AGENT_DOCKER_RESOURCES: "tubular,python"
GO_AGENT_DOCKER_ENVIRONMENT: "sandbox"
GO_AGENT_DOCKER_CONF_HOME: "/tmp/go-agent/conf"
\ No newline at end of file
---
#
# edX Configuration
#
# github: https://github.com/edx/configuration
# wiki: https://github.com/edx/configuration/wiki
# code style: https://github.com/edx/configuration/wiki/Ansible-Coding-Conventions
# license: https://github.com/edx/configuration/blob/master/LICENSE.TXT
#
#
#
# Tasks for role go-agent-docker-server
#
# Overview:
#
# Deploys the autoregister configuration for dockerized go-agents
#
# Dependencies:
# - openjdk7
#
# Example play:
#
# - name: Configure instance(s)
# hosts: go-server
# sudo: True
# vars_files:
# - "{{ secure_dir }}/admin/sandbox.yml"
# gather_facts: True
# roles:
# - common
#
- name: install go-server configuration
template:
src: edx/app/go-agent-docker-server/autoregister.properties.j2
dest: "{{ GO_AGENT_DOCKER_CONF_HOME }}/autoregister.properties"
mode: 0600
owner: root
group: root
agent.auto.register.key={{ GO_SERVER_AUTO_REGISTER_KEY }}
agent.auto.register.resources={{ GO_AGENT_DOCKER_RESOURCES }}
agent.auto.register.environments={{ GO_AGENT_DOCKER_ENVIRONMENT }}
\ No newline at end of file
...@@ -18,9 +18,9 @@ GO_AGENT_HOME: "/var/lib/go-agent/" ...@@ -18,9 +18,9 @@ GO_AGENT_HOME: "/var/lib/go-agent/"
GO_AGENT_CONF_HOME: "/etc/default/" GO_AGENT_CONF_HOME: "/etc/default/"
# Java version settings # Java version settings
GO_AGENT_ORACLEJDK_VERSION: "7u51" GO_AGENT_ORACLEJDK_VERSION: "7u80"
GO_AGENT_ORACLEJDK_BASE: "jdk1.7.0_51" GO_AGENT_ORACLEJDK_BASE: "jdk1.7.0_80"
GO_AGENT_ORACLEJDK_BUILD: "b13" GO_AGENT_ORACLEJDK_BUILD: "b15"
GO_AGENT_ORACLEJDK_LINK: "/usr/lib/jvm/java-7-oracle" GO_AGENT_ORACLEJDK_LINK: "/usr/lib/jvm/java-7-oracle"
# java tuning # java tuning
...@@ -34,4 +34,4 @@ GO_AGENT_APT_NAME: "go-agent" ...@@ -34,4 +34,4 @@ GO_AGENT_APT_NAME: "go-agent"
# go-agent configuration settings # go-agent configuration settings
# override the server ip and port to connect an agent to it's go-server master. # override the server ip and port to connect an agent to it's go-server master.
GO_AGENT_SERVER_IP: 127.0.0.1 GO_AGENT_SERVER_IP: 127.0.0.1
GO_AGENT_SERVER_PORT: 8153 GO_AGENT_SERVER_PORT: 8153
\ No newline at end of file
...@@ -14,14 +14,13 @@ GO_SERVER_SERVICE_NAME: "go-server" ...@@ -14,14 +14,13 @@ GO_SERVER_SERVICE_NAME: "go-server"
GO_SERVER_USER: "go" GO_SERVER_USER: "go"
GO_SERVER_GROUP: "{{ GO_SERVER_USER }}" GO_SERVER_GROUP: "{{ GO_SERVER_USER }}"
GO_SERVER_VERSION: "16.1.0-2855" GO_SERVER_VERSION: "16.1.0-2855"
GO_SERVER_HOME: "/var/lib/go-server/" GO_SERVER_HOME: "/var/lib/go-server"
GO_SERVER_CONF_HOME: "/etc/go/" GO_SERVER_CONF_HOME: "/etc/go/"
# Java version settings # Java version settings
GO_SERVER_ORACLEJDK_VERSION: "7u51" GO_SERVER_ORACLEJDK_VERSION: "7u80"
GO_SERVER_ORACLEJDK_BASE: "jdk1.7.0_51" GO_SERVER_ORACLEJDK_BASE: "jdk1.7.0_80"
GO_SERVER_ORACLEJDK_BUILD: "b13" GO_SERVER_ORACLEJDK_BUILD: "b15"
GO_SERVER_ORACLEJDK_LINK: "/usr/lib/jvm/java-7-oracle" GO_SERVER_ORACLEJDK_LINK: "/usr/lib/jvm/java-7-oracle"
# java tuning # java tuning
...@@ -42,3 +41,6 @@ GO_SERVER_OAUTH_LOGIN_JAR_DESTINATION: "{{ GO_SERVER_HOME }}/plugins/external/" ...@@ -42,3 +41,6 @@ GO_SERVER_OAUTH_LOGIN_JAR_DESTINATION: "{{ GO_SERVER_HOME }}/plugins/external/"
GO_SERVER_PASSWORD_FILE_NAME: "password.txt" GO_SERVER_PASSWORD_FILE_NAME: "password.txt"
GO_SERVER_ADMIN_USERS: ["admin"] GO_SERVER_ADMIN_USERS: ["admin"]
GO_SERVER_CRUISE_CONTROL_DB_DESTIONATION: "/var/lib/go-server/db/h2db/cruise.h2.db" GO_SERVER_CRUISE_CONTROL_DB_DESTIONATION: "/var/lib/go-server/db/h2db/cruise.h2.db"
# key for go-agents to autoregister with the go-server
GO_SERVER_AUTO_REGISTER_KEY: "dev-only-override-this-key"
...@@ -43,7 +43,15 @@ ...@@ -43,7 +43,15 @@
name: "{{ GO_SERVER_APT_NAME }}={{ GO_SERVER_VERSION }}" name: "{{ GO_SERVER_APT_NAME }}={{ GO_SERVER_VERSION }}"
update_cache: yes update_cache: yes
- name: install go-server-oauth-login - name: create go-server plugin directory
file:
path: "{{ GO_SERVER_OAUTH_LOGIN_JAR_DESTINATION }}"
state: directory
mode: 0776
owner: "{{ GO_SERVER_USER }}"
group: "{{ GO_SERVER_GROUP }}"
- name: install go-server oauth plugin
get_url: get_url:
url: "{{ GO_SERVER_OAUTH_LOGIN_JAR_URL }}" url: "{{ GO_SERVER_OAUTH_LOGIN_JAR_URL }}"
dest: "{{ GO_SERVER_OAUTH_LOGIN_JAR_DESTINATION }}" dest: "{{ GO_SERVER_OAUTH_LOGIN_JAR_DESTINATION }}"
...@@ -68,14 +76,6 @@ ...@@ -68,14 +76,6 @@
owner: "{{ GO_SERVER_USER }}" owner: "{{ GO_SERVER_USER }}"
group: "{{ GO_SERVER_GROUP }}" group: "{{ GO_SERVER_GROUP }}"
- name: copy go-server cruise database
copy:
src: cruise.h2.db
dest: "{{ GO_SERVER_CRUISE_CONTROL_DB_DESTIONATION }}"
mode: 0660
owner: "{{ GO_SERVER_USER }}"
group: "{{ GO_SERVER_GROUP }}"
- name: restart go-server - name: restart go-server
service: service:
name: "{{ GO_SERVER_SERVICE_NAME }}" name: "{{ GO_SERVER_SERVICE_NAME }}"
......
<cruise xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xsi:noNamespaceSchemaLocation="cruise-config.xsd" schemaVersion="77"> <cruise xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xsi:noNamespaceSchemaLocation="cruise-config.xsd" schemaVersion="77">
<server artifactsdir="artifacts" siteUrl="http://{{ ansible_fqdn }}:8153" secureSiteUrl="https://{{ ansible_fqdn }}:8154" commandRepositoryLocation="default" serverId="d3a0287d-7698-4afe-a687-c165e8295918"> <server artifactsdir="artifacts" siteUrl="http://{{ ansible_fqdn }}:8153" secureSiteUrl="https://{{ ansible_fqdn }}:8154" commandRepositoryLocation="default" serverId="d3a0287d-7698-4afe-a687-c165e8295918" agentAutoRegisterKey="{{ GO_SERVER_AUTO_REGISTER_KEY }}">
<security> <security>
<passwordFile path="{{ GO_SERVER_CONF_HOME }}/{{ GO_SERVER_PASSWORD_FILE_NAME }}" /> <passwordFile path="{{ GO_SERVER_CONF_HOME }}/{{ GO_SERVER_PASSWORD_FILE_NAME }}" />
<admins> <admins>
......
...@@ -9,7 +9,7 @@ ...@@ -9,7 +9,7 @@
# #
## ##
# Defaults for role hadoop_common # Defaults for role hadoop_common
# #
HADOOP_COMMON_VERSION: 2.3.0 HADOOP_COMMON_VERSION: 2.3.0
HADOOP_COMMON_USER_HOME: "{{ COMMON_APP_DIR }}/hadoop" HADOOP_COMMON_USER_HOME: "{{ COMMON_APP_DIR }}/hadoop"
...@@ -60,3 +60,23 @@ hadoop_common_debian_pkgs: ...@@ -60,3 +60,23 @@ hadoop_common_debian_pkgs:
- maven - maven
hadoop_common_redhat_pkgs: [] hadoop_common_redhat_pkgs: []
#
# MapReduce/Yarn memory config (defaults for m1.medium)
# http://docs.aws.amazon.com/ElasticMapReduce/latest/DeveloperGuide/TaskConfiguration_H2.html
#
# mapred_site_config:
# mapreduce.map.memory.mb: 768
# mapreduce.map.java.opts: '-Xmx512M'
# mapreduce.reduce.memory.mb: 1024
# mapreduce.reduce.java.opts: '-Xmx768M'
# yarn_site_config:
# yarn.app.mapreduce.am.resource.mb: 1024
# yarn.scheduler.minimum-allocation-mb: 32
# yarn.scheduler.maximum-allocation-mb: 2048
# yarn.nodemanager.resource.memory-mb: 2048
# yarn.nodemanager.vmem-pmem-ratio: 2.1
mapred_site_config: {}
yarn_site_config: {}
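The `mapred_site_config` and `yarn_site_config` dicts above are rendered into Hadoop `<property>` elements by the site templates. A minimal Python sketch of that rendering (the function name is hypothetical, not part of the role):

```python
def render_properties(cfg):
    """Render a flat config dict as Hadoop <property> elements,
    mirroring the Jinja loop added to mapred-site.xml.j2 / yarn-site.xml.j2."""
    blocks = []
    for key in sorted(cfg):
        blocks.append(
            "  <property>\n"
            "    <name>%s</name>\n"
            "    <value>%s</value>\n"
            "  </property>" % (key, cfg[key])
        )
    return "\n".join(blocks)

print(render_properties({"mapreduce.map.memory.mb": 768,
                         "mapreduce.map.java.opts": "-Xmx512M"}))
```

Any key/value pair placed in either dict ends up as one such block inside the `<configuration>` element.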
...@@ -6,4 +6,14 @@ ...@@ -6,4 +6,14 @@
<name>mapreduce.framework.name</name> <name>mapreduce.framework.name</name>
<value>yarn</value> <value>yarn</value>
</property> </property>
{% if mapred_site_config is defined %}
{% for key,value in mapred_site_config.iteritems() %}
<property>
<name>{{ key }}</name>
<value>{{ value }}</value>
</property>
{% endfor %}
{% endif %}
</configuration> </configuration>
\ No newline at end of file
...@@ -5,9 +5,19 @@ ...@@ -5,9 +5,19 @@
<name>yarn.nodemanager.aux-services</name> <name>yarn.nodemanager.aux-services</name>
<value>mapreduce_shuffle</value> <value>mapreduce_shuffle</value>
</property> </property>
<property> <property>
<name>yarn.nodemanager.aux-services.mapreduce.shuffle.class</name> <name>yarn.nodemanager.aux-services.mapreduce.shuffle.class</name>
<value>org.apache.hadoop.mapred.ShuffleHandler</value> <value>org.apache.hadoop.mapred.ShuffleHandler</value>
</property> </property>
{% if yarn_site_config is defined %}
{% for key,value in yarn_site_config.iteritems() %}
<property>
<name>{{ key }}</name>
<value>{{ value }}</value>
</property>
{% endfor %}
{% endif %}
</configuration> </configuration>
\ No newline at end of file
...@@ -14,7 +14,7 @@ ...@@ -14,7 +14,7 @@
mode=0755 mode=0755
- name: check out the harprofiler - name: check out the harprofiler
git: > git_2_0_1: >
dest={{ harprofiler_dir }} dest={{ harprofiler_dir }}
repo={{ harprofiler_github_url }} version={{ harprofiler_version }} repo={{ harprofiler_github_url }} version={{ harprofiler_version }}
accept_hostkey=yes accept_hostkey=yes
......
...@@ -71,8 +71,8 @@ ...@@ -71,8 +71,8 @@
- name: migrate - name: migrate
shell: > shell: >
chdir={{ insights_code_dir }} chdir={{ insights_code_dir }}
DB_MIGRATION_USER={{ COMMON_MYSQL_MIGRATE_USER }} DB_MIGRATION_USER='{{ COMMON_MYSQL_MIGRATE_USER }}'
DB_MIGRATION_PASS={{ COMMON_MYSQL_MIGRATE_PASS }} DB_MIGRATION_PASS='{{ COMMON_MYSQL_MIGRATE_PASS }}'
{{ insights_home }}/venvs/{{ insights_service_name }}/bin/python {{ insights_manage }} migrate --noinput {{ insights_home }}/venvs/{{ insights_service_name }}/bin/python {{ insights_manage }} migrate --noinput
sudo_user: "{{ insights_user }}" sudo_user: "{{ insights_user }}"
environment: "{{ insights_environment }}" environment: "{{ insights_environment }}"
......
# Jenkins Analytics
A role that sets up Jenkins for scheduling analytics tasks.
This role performs the following steps:
* Installs Jenkins using `jenkins_master`.
* Configures `config.xml` to enable security and use
Linux Auth Domain.
* Creates Jenkins credentials.
* Enables the use of Jenkins CLI.
* Installs a seed job from the configured repository, launches it, and waits
for it to finish.
## Configuration
When you are using Vagrant you **need** to set the `VAGRANT_JENKINS_LOCAL_VARS_FILE`
environment variable. It must point to a file containing
all required variables from this section.
This file must contain, at least, the following variables
(see the next few sections for more information about them):
* `JENKINS_ANALYTICS_USER_PASSWORD_HASHED`
* `JENKINS_ANALYTICS_USER_PASSWORD_PLAIN`
* `JENKINS_ANALYTICS_GITHUB_KEY` or `JENKINS_ANALYTICS_CREDENTIALS`
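A minimal sketch of such a local vars file (the filename and all values are placeholders, not real secrets):

```yaml
# local-vars.yml (hypothetical name), pointed to by VAGRANT_JENKINS_LOCAL_VARS_FILE
JENKINS_ANALYTICS_USER_PASSWORD_PLAIN: "change-me"
# e.g. the output of: mkpasswd --method=sha-512
JENKINS_ANALYTICS_USER_PASSWORD_HASHED: "$6$examplesalt$exampledigest"
JENKINS_ANALYTICS_GITHUB_KEY: "{{ lookup('file', '/path/to/deploy-key') }}"
```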
### End-user editable configuration
#### Jenkins user password
You'll need to override the default `jenkins` user password; please do,
as this sets the **shell** password for this user.
You'll need to set both a plain password and a hashed one.
To obtain a hashed password, use the `mkpasswd` command, for example:
`mkpasswd --method=sha-512`. (Note: a hashed password is required
to get clean "changed"/"unchanged" reporting for this step
in Ansible.)
* `JENKINS_ANALYTICS_USER_PASSWORD_HASHED`: hashed password
* `JENKINS_ANALYTICS_USER_PASSWORD_PLAIN`: plain password
#### Jenkins seed job configuration
This will be filled in as part of PR [#2830](https://github.com/edx/configuration/pull/2830).
For now, go with the defaults.
#### Jenkins credentials
Jenkins contains its own credential store. To fill it with credentials,
use the `JENKINS_ANALYTICS_CREDENTIALS` variable. This variable
is a list of objects, each object representing a single credential.
For now, passwords and ssh keys are supported.
If you only need credentials to access GitHub repositories,
you can override `JENKINS_ANALYTICS_GITHUB_KEY`,
which should contain the contents of the private key used
to check out GitHub repositories.
Each credential has a unique ID, which is used to match
the credential to the task(s) for which it is needed.
Examples of credential variables:
JENKINS_ANALYTICS_GITHUB_KEY: "{{ lookup('file', 'path to keyfile') }}"
JENKINS_ANALYTICS_CREDENTIALS:
# id is a scope-unique credential identifier
- id: test-password
# Scope must be global. To have other scopes you'll need to modify addCredentials.groovy
scope: GLOBAL
# Username associated with this password
username: jenkins
type: username-password
description: Autogenerated by ansible
password: 'password'
# id is a scope-unique credential identifier
- id: github-deploy-key
scope: GLOBAL
# Username this ssh-key is attached to
username: git
# Type of credential, see other entries for example
type: ssh-private-key
passphrase: 'foobar'
description: Generated by ansible
privatekey: |
-----BEGIN RSA PRIVATE KEY-----
Proc-Type: 4,ENCRYPTED
DEK-Info: AES-128-CBC,....
Key contents
-----END RSA PRIVATE KEY-----
#### Other useful variables
* `JENKINS_ANALYTICS_CONCURRENT_JOBS_COUNT`: Configures the number of
executors (i.e. concurrent jobs this Jenkins instance can
execute). Defaults to `2`.
### General configuration
The following variables are used by this role:
Variables used by the command that waits for Jenkins to start up after running
the `jenkins_master` role:
jenkins_connection_retries: 60
jenkins_connection_delay: 0.5
#### Auth realm
The Jenkins auth realm encapsulates user management in Jenkins, that is:
* What users can log in
* What credentials they use to log in
The realm type is stored in the `jenkins_auth_realm.name` variable.
In the future we will try to enable other auth realms, while
preserving the ability to run the CLI.
##### Unix Realm
For now only the `unix` realm is supported, which requires every Jenkins
user to have a shell account on the server.
Unix realm requires the following settings:
* `service`: Jenkins uses the PAM configuration for this service. `su` is
a safe choice as it doesn't require a user to have the ability to log in
remotely.
* `plain_password`: plaintext password; **you should change** the default value.
* `hashed_password`: hashed password.
Example realm configuration:
jenkins_auth_realm:
name: unix
service: su
plain_password: jenkins
hashed_password: $6$rAVyI.p2wXVDKk5w$y0G1MQehmHtvaPgdtbrnvAsBqYQ99g939vxrdLXtPQCh/e7GJVwbnqIKZpve8EcMLTtq.7sZwTBYV9Tdjgf1k.
#### Seed job configuration
Seed job is configured in `jenkins_seed_job` variable, which has the following
attributes:
* `name`: Name of the job in Jenkins.
* `time_trigger`: A Jenkins cron entry defining how often this job should run.
* `removed_job_action`: what to do when a job created by a previous run of the seed job
is missing from the current run. This can be either `DELETE` or `IGNORE`.
* `removed_view_action`: what to do when a view created by a previous run of the seed job
is missing from the current run. This can be either `DELETE` or `IGNORE`.
* `scm`: The scm object defines the seed job repository and related settings.
It has the following properties:
* `scm.type`: It must have the value `git`.
* `scm.url`: URL for the repository.
* `scm.credential_id`: ID of the credential to use when authenticating to the
repository.
This setting is optional. If it is missing or falsy, credentials will be omitted.
Please note that when you use an ssh repository URL, you'll need to set up a key regardless
of whether the repository is public or private (to establish an ssh connection
you need a valid key).
* `scm.target_jobs`: A shell glob expression, relative to the repo root, selecting
jobs to import.
* `scm.additional_classpath`: A path relative to the repo root, pointing to a
directory that contains additional Groovy scripts used by the seed jobs.
Example scm configuration:
jenkins_seed_job:
name: seed
time_trigger: "H * * * *"
removed_job_action: "DELETE"
removed_view_action: "IGNORE"
scm:
type: git
url: "git@github.com:edx-ops/edx-jenkins-job-dsl.git"
credential_id: "github-deploy-key"
target_jobs: "jobs/analytics-edx-jenkins.edx.org/*Jobs.groovy"
additional_classpath: "src/main/groovy"
Known issues
------------
1. The playbook `execute_jenkins_cli.yaml` should be converted to an
Ansible module (it is already used in a module-ish way).
2. The anonymous user has discover and get-job permissions, as without them
the `get-job` and `build <<job>>` commands wouldn't work.
Giving anonymous these permissions is a workaround for
a transient Jenkins issue (reported a [couple][1] [of][2] [times][3]).
3. We force the unix authentication method -- that is, every user that can log in
to Jenkins also needs to have a shell account on the master.
Dependencies
------------
- `jenkins_master`
[1]: https://issues.jenkins-ci.org/browse/JENKINS-12543
[2]: https://issues.jenkins-ci.org/browse/JENKINS-11024
[3]: https://issues.jenkins-ci.org/browse/JENKINS-22143
---
# See README.md for variable descriptions
JENKINS_ANALYTICS_USER_PASSWORD_HASHED: $6$rAVyI.p2wXVDKk5w$y0G1MQehmHtvaPgdtbrnvAsBqYQ99g939vxrdLXtPQCh/e7GJVwbnqIKZpve8EcMLTtq.7sZwTBYV9Tdjgf1k.
JENKINS_ANALYTICS_USER_PASSWORD_PLAIN: jenkins
JENKINS_ANALYTICS_CREDENTIALS:
- id: github-deploy-key
scope: GLOBAL
username: git
type: ssh-private-key
passphrase: null
description: Autogenerated by ansible
privatekey: "{{ JENKINS_ANALYTICS_GITHUB_KEY }}"
JENKINS_ANALYTICS_CONCURRENT_JOBS_COUNT: 2
jenkins_credentials_root: '/tmp/credentials'
jenkins_credentials_file_dest: "{{ jenkins_credentials_root }}/credentials.json"
jenkins_credentials_script: "{{ jenkins_credentials_root }}/addCredentials.groovy"
jenkins_connection_retries: 240
jenkins_connection_delay: 1
jenkins_auth_realm:
name: unix
service: su
# Change this default password: (see README.md to see how you can do it)
plain_password: "{{ JENKINS_ANALYTICS_USER_PASSWORD_PLAIN }}"
hashed_password: "{{ JENKINS_ANALYTICS_USER_PASSWORD_HASHED }}"
jenkins_seed_job:
name: analytics-seed-job
time_trigger: "H * * * *"
removed_job_action: "DELETE"
removed_view_action: "IGNORE"
scm:
type: git
url: "git@github.com:edx-ops/edx-jenkins-job-dsl.git"
credential_id: "github-deploy-key"
target_jobs: "jobs/analytics-edx-jenkins.edx.org/*Jobs.groovy"
additional_classpath: "src/main/groovy"
---
- fail: msg="for now we can execute commands only when the Jenkins auth realm is unix"
when: jenkins_auth_realm.name != "unix"
- set_fact:
jenkins_cli_root: "/tmp/jenkins-cli/{{ ansible_ssh_user }}"
- set_fact:
jenkins_cli_jar: "{{ jenkins_cli_root }}/jenkins_cli.jar"
jenkins_cli_pass: "{{ jenkins_cli_root }}/jenkins_cli_pass"
- name: create cli dir
file: name={{ jenkins_cli_root }} state=directory mode="700"
- name: create pass file
template: src=jenkins-pass-file.j2 dest={{ jenkins_cli_pass }} mode="600"
- name: Wait for Jenkins CLI
uri:
url: "http://localhost:{{ jenkins_port }}/cli/"
method: GET
return_content: yes
status_code: 200,403
register: result
until: (result.status is defined) and ((result.status == 403) or (result.status == 200))
retries: "{{ jenkins_connection_retries }}"
delay: "{{ jenkins_connection_delay }}"
changed_when: false
- name: get cli
get_url:
url: "http://localhost:{{ jenkins_port }}/jnlpJars/jenkins-cli.jar"
dest: "{{ jenkins_cli_jar }}"
- name: login
command: java -jar {{ jenkins_cli_jar }} -s http://localhost:{{ jenkins_port }}
login --username={{ jenkins_user }}
--password-file={{ jenkins_cli_pass }}
- name: execute command
shell: >
{{ jenkins_command_prefix|default('') }} java -jar {{ jenkins_cli_jar }} -s http://localhost:{{ jenkins_port }}
{{ jenkins_command_string }}
register: jenkins_command_output
ignore_errors: "{{ jenkins_ignore_cli_errors|default (False) }}"
- name: "clean up --- remove the credentials dir"
file: name={{ jenkins_cli_root }} state=absent
- name: "clean up --- remove cached Jenkins credentials"
command: rm -rf $HOME/.jenkins
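The "Wait for Jenkins CLI" task above polls the `/cli/` endpoint until it answers with 200 or 403 before downloading the CLI jar. A minimal Python sketch of the same retry loop (function and variable names are hypothetical):

```python
import time

def wait_for_jenkins(probe, retries=240, delay=1.0):
    """Poll `probe` (a callable returning an HTTP status code, or raising
    OSError while Jenkins is down) until it reports 200 or 403,
    mirroring the Ansible until/retries/delay loop above."""
    for _ in range(retries):
        try:
            status = probe()
        except OSError:
            status = None
        if status in (200, 403):
            return status
        time.sleep(delay)
    raise TimeoutError("Jenkins CLI endpoint never became available")

# Fake probe that fails twice, then answers 403 (anonymous, but up):
attempts = iter([None, 500, 403])
print(wait_for_jenkins(lambda: next(attempts), retries=5, delay=0))  # prints 403
```

A 403 counts as success here because an up-but-secured Jenkins rejects anonymous access to `/cli/` while still being ready to serve the CLI jar.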
---
- fail: msg="included unix realm by accident"
when: jenkins_auth_realm.name != "unix"
- fail: msg="Please change the default password for the jenkins user"
when: jenkins_auth_realm.plain_password == 'jenkins'
- user:
name: "{{ jenkins_user }}"
groups: shadow
append: yes
password: "{{ jenkins_auth_realm.hashed_password }}"
update_password: always
- name: template config.xml
template:
src: jenkins.config.main.xml
dest: "{{ jenkins_home }}/config.xml"
owner: "{{ jenkins_user }}"
group: "{{ jenkins_group }}"
# Unconditionally restart Jenkins; this has two side-effects:
# 1. Jenkins uses the new auth realm
# 2. We guarantee that Jenkins is started (this is not otherwise certain,
# as Jenkins is started by handlers from jenkins_master,
# and those handlers run after this role).
- name: restart Jenkins
service: name=jenkins state=restarted
# Upload Jenkins credentials
- name: create credentials dir
file: name={{ jenkins_credentials_root }} state=directory
- name: upload groovy script
template:
src: addCredentials.groovy
dest: "{{ jenkins_credentials_script }}"
mode: "600"
- name: upload credentials file
template:
src: credentials_file.json.j2
dest: "{{ jenkins_credentials_file_dest }}"
mode: "600"
owner: "{{ jenkins_user }}"
- name: add credentials
include: execute_jenkins_cli.yaml
vars:
jenkins_command_string: "groovy {{ jenkins_credentials_script }}"
- name: clean up
file: name={{ jenkins_credentials_root }} state=absent
# Upload seed job
- name: upload job file
template: src=seed_job_template.xml dest=/tmp/{{ jenkins_seed_job.name }} mode="600"
- name: check if job is present
include: execute_jenkins_cli.yaml
vars:
jenkins_command_string: "get-job {{ jenkins_seed_job.name }}"
jenkins_ignore_cli_errors: yes
- set_fact:
get_job_output: "{{ jenkins_command_output }}"
# Upload seed job to Jenkins
- name: Create seed job if absent
include: execute_jenkins_cli.yaml
vars:
jenkins_command_string: "create-job {{ jenkins_seed_job.name }}"
jenkins_command_prefix: "cat /tmp/{{ jenkins_seed_job.name }} | "
when: get_job_output.rc != 0
- name: update seed job
include: execute_jenkins_cli.yaml
vars:
jenkins_command_string: "update-job {{ jenkins_seed_job.name }}"
jenkins_command_prefix: "cat /tmp/{{ jenkins_seed_job.name }} | "
when: get_job_output.rc == 0
# Build the seed job
- name: Build the seed job
include: execute_jenkins_cli.yaml
vars:
jenkins_command_string: "build {{ jenkins_seed_job.name }} -s"
/**
* This script can be run via the Jenkins CLI as follows:
*
* java -jar /var/jenkins/war/WEB-INF/jenkins-cli.jar -s http://localhost:8080 groovy addCredentials.groovy
*
* For a given json file, this script will create a set of credentials.
* The script can be run safely multiple times and it will update each changed credential
* (deleting credentials is not currently supported).
*
 * This is useful in conjunction with the job-dsl to bootstrap a barebones Jenkins instance.
*
* This script will currently fail if the plugins it requires have not been installed:
*
* credentials-plugin
* credentials-ssh-plugin
*/
import com.cloudbees.plugins.credentials.Credentials
import com.cloudbees.plugins.credentials.CredentialsScope
import com.cloudbees.plugins.credentials.common.IdCredentials
import com.cloudbees.plugins.credentials.domains.Domain
import hudson.model.*
import com.cloudbees.plugins.credentials.SystemCredentialsProvider
import com.cloudbees.plugins.credentials.impl.UsernamePasswordCredentialsImpl
import com.cloudbees.jenkins.plugins.sshcredentials.impl.BasicSSHUserPrivateKey
import groovy.json.JsonSlurper;
boolean addUsernamePassword(scope, id, username, password, description) {
provider = SystemCredentialsProvider.getInstance()
provider.getCredentials().add(new UsernamePasswordCredentialsImpl(scope, id, description, username, password))
provider.save()
return true
}
boolean addSSHUserPrivateKey(scope, id, username, privateKey, passphrase, description) {
provider = SystemCredentialsProvider.getInstance()
source = new BasicSSHUserPrivateKey.DirectEntryPrivateKeySource(privateKey)
provider.getCredentials().add(new BasicSSHUserPrivateKey(scope, id, username, source, passphrase, description))
provider.save()
return true
}
def jsonFile = new File("{{ jenkins_credentials_file_dest }}");
if (!jsonFile.exists()){
throw new RuntimeException("Credentials file does not exist on remote host");
}
def jsonSlurper = new JsonSlurper()
def credentialList = jsonSlurper.parse(new FileReader(jsonFile))
credentialList.each { credential ->
if (credential.scope != "GLOBAL"){
throw new RuntimeException("Only GLOBAL credential scope is currently supported");
}
scope = CredentialsScope.valueOf(credential.scope)
def provider = SystemCredentialsProvider.getInstance();
def toRemove = [];
for (Credentials current_credentials: provider.getCredentials()){
if (current_credentials instanceof IdCredentials){
if (current_credentials.getId() == credential.id){
toRemove.add(current_credentials);
}
}
}
toRemove.each { curr -> provider.getCredentials().remove(curr) };
if (credential.type == "username-password") {
addUsernamePassword(scope, credential.id, credential.username, credential.password, credential.description)
}
if (credential.type == "ssh-private-key") {
if (credential.passphrase != null && credential.passphrase.trim().length() == 0){
credential.passphrase = null;
}
addSSHUserPrivateKey(scope, credential.id, credential.username, credential.privatekey, credential.passphrase, credential.description)
}
}
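The script above expects a JSON list of credential objects at `{{ jenkins_credentials_file_dest }}` (rendered from `credentials_file.json.j2`). A minimal hand-written sketch of the shape it parses — the ids, usernames, and secrets here are hypothetical placeholders:

```json
[
  {
    "scope": "GLOBAL",
    "id": "github-deploy",
    "type": "username-password",
    "username": "deploy-bot",
    "password": "hunter2",
    "description": "hypothetical GitHub deploy credentials"
  },
  {
    "scope": "GLOBAL",
    "id": "build-ssh-key",
    "type": "ssh-private-key",
    "username": "jenkins",
    "privatekey": "-----BEGIN RSA PRIVATE KEY-----\n...\n-----END RSA PRIVATE KEY-----",
    "passphrase": "",
    "description": "hypothetical build agent SSH key"
  }
]
```

Per the script's logic, only `GLOBAL` scope is accepted, an existing credential with the same `id` is replaced, and an empty `passphrase` is normalized to null.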
{{ JENKINS_ANALYTICS_CREDENTIALS|to_json }}
{{ jenkins_auth_realm.plain_password }}
<?xml version='1.0' encoding='UTF-8'?>
<hudson>
<disabledAdministrativeMonitors/>
<version>1.638</version>
<numExecutors>{{ JENKINS_ANALYTICS_CONCURRENT_JOBS_COUNT }}</numExecutors>
<mode>NORMAL</mode>
<useSecurity>true</useSecurity>
{% if jenkins_auth_realm.name == "unix" %}
<authorizationStrategy class="hudson.security.GlobalMatrixAuthorizationStrategy">
<permission>com.cloudbees.plugins.credentials.CredentialsProvider.Create:jenkins</permission>
<permission>com.cloudbees.plugins.credentials.CredentialsProvider.Delete:jenkins</permission>
<permission>com.cloudbees.plugins.credentials.CredentialsProvider.ManageDomains:jenkins</permission>
<permission>com.cloudbees.plugins.credentials.CredentialsProvider.Update:jenkins</permission>
<permission>com.cloudbees.plugins.credentials.CredentialsProvider.View:jenkins</permission>
<permission>hudson.model.Computer.Build:jenkins</permission>
<permission>hudson.model.Computer.Configure:jenkins</permission>
<permission>hudson.model.Computer.Connect:jenkins</permission>
<permission>hudson.model.Computer.Create:jenkins</permission>
<permission>hudson.model.Computer.Delete:jenkins</permission>
<permission>hudson.model.Computer.Disconnect:jenkins</permission>
<permission>hudson.model.Hudson.Administer:jenkins</permission>
<permission>hudson.model.Hudson.ConfigureUpdateCenter:jenkins</permission>
<permission>hudson.model.Hudson.Read:jenkins</permission>
<permission>hudson.model.Hudson.RunScripts:jenkins</permission>
<permission>hudson.model.Hudson.UploadPlugins:jenkins</permission>
<permission>hudson.model.Item.Build:jenkins</permission>
<permission>hudson.model.Item.Cancel:jenkins</permission>
<permission>hudson.model.Item.Configure:jenkins</permission>
<permission>hudson.model.Item.Create:jenkins</permission>
<permission>hudson.model.Item.Delete:jenkins</permission>
<permission>hudson.model.Item.Discover:anonymous</permission>
<permission>hudson.model.Item.Discover:jenkins</permission>
<permission>hudson.model.Item.Move:jenkins</permission>
<permission>hudson.model.Item.Read:anonymous</permission>
<permission>hudson.model.Item.Read:jenkins</permission>
<permission>hudson.model.Item.Workspace:jenkins</permission>
<permission>hudson.model.Run.Delete:jenkins</permission>
<permission>hudson.model.Run.Update:jenkins</permission>
<permission>hudson.model.View.Configure:jenkins</permission>
<permission>hudson.model.View.Create:jenkins</permission>
<permission>hudson.model.View.Delete:jenkins</permission>
<permission>hudson.model.View.Read:jenkins</permission>
<permission>hudson.scm.SCM.Tag:jenkins</permission>
</authorizationStrategy>
<securityRealm class="hudson.security.PAMSecurityRealm" plugin="pam-auth@1.2">
<serviceName>{{ jenkins_auth_realm.service }}</serviceName>
</securityRealm>
{% endif %}
<disableRememberMe>false</disableRememberMe>
<projectNamingStrategy class="jenkins.model.ProjectNamingStrategy$DefaultProjectNamingStrategy"/>
<workspaceDir>${JENKINS_HOME}/workspace/${ITEM_FULLNAME}</workspaceDir>
<buildsDir>${ITEM_ROOTDIR}/builds</buildsDir>
<markupFormatter class="hudson.markup.EscapedMarkupFormatter"/>
<jdks/>
<viewsTabBar class="hudson.views.DefaultViewsTabBar"/>
<myViewsTabBar class="hudson.views.DefaultMyViewsTabBar"/>
<clouds/>
<quietPeriod>5</quietPeriod>
<scmCheckoutRetryCount>0</scmCheckoutRetryCount>
<views>
<hudson.model.AllView>
<owner class="hudson" reference="../../.."/>
<name>All</name>
<filterExecutors>false</filterExecutors>
<filterQueue>false</filterQueue>
<properties class="hudson.model.View$PropertyList"/>
</hudson.model.AllView>
</views>
<primaryView>All</primaryView>
<slaveAgentPort>0</slaveAgentPort>
<label>312312321</label>
<nodeProperties/>
<globalNodeProperties/>
</hudson>
<?xml version='1.0' encoding='UTF-8'?>
<project>
<actions/>
<description>
Seed job autogenerated by Ansible; it will be overridden.
</description>
<keepDependencies>false</keepDependencies>
<properties>
<jenkins.advancedqueue.AdvancedQueueSorterJobProperty plugin="PrioritySorter@2.9">
<useJobPriority>false</useJobPriority>
<priority>-1</priority>
</jenkins.advancedqueue.AdvancedQueueSorterJobProperty>
</properties>
<scm class="hudson.plugins.git.GitSCM" plugin="git@2.4.0">
<configVersion>2</configVersion>
<userRemoteConfigs>
<hudson.plugins.git.UserRemoteConfig>
<url>{{ jenkins_seed_job.scm.url }}</url>
{% if jenkins_seed_job.scm.credential_id is defined and jenkins_seed_job.scm.credential_id %}
<credentialsId>{{ jenkins_seed_job.scm.credential_id }}</credentialsId>
{% endif %}
</hudson.plugins.git.UserRemoteConfig>
</userRemoteConfigs>
<branches>
<hudson.plugins.git.BranchSpec>
<name>master</name>
</hudson.plugins.git.BranchSpec>
</branches>
<doGenerateSubmoduleConfigurations>false</doGenerateSubmoduleConfigurations>
<browser class="hudson.plugins.git.browser.AssemblaWeb">
<url></url>
</browser>
<submoduleCfg class="list"/>
<extensions/>
</scm>
<canRoam>true</canRoam>
<disabled>false</disabled>
<blockBuildWhenDownstreamBuilding>false</blockBuildWhenDownstreamBuilding>
<blockBuildWhenUpstreamBuilding>false</blockBuildWhenUpstreamBuilding>
<triggers>
<hudson.triggers.TimerTrigger>
<spec>{{ jenkins_seed_job.time_trigger }}</spec>
</hudson.triggers.TimerTrigger>
</triggers>
<concurrentBuild>false</concurrentBuild>
<builders>
<hudson.plugins.gradle.Gradle plugin="gradle@1.24">
<description></description>
<switches></switches>
<tasks>clean test</tasks>
<rootBuildScriptDir></rootBuildScriptDir>
<buildFile></buildFile>
<gradleName>(x)</gradleName>
<useWrapper>true</useWrapper>
<makeExecutable>false</makeExecutable>
<fromRootBuildScriptDir>true</fromRootBuildScriptDir>
<useWorkspaceAsHome>false</useWorkspaceAsHome>
</hudson.plugins.gradle.Gradle>
<javaposse.jobdsl.plugin.ExecuteDslScripts plugin="job-dsl@1.43">
<targets>{{ jenkins_seed_job.scm.target_jobs }}</targets>
<usingScriptText>false</usingScriptText>
<ignoreExisting>false</ignoreExisting>
<removedJobAction>{{ jenkins_seed_job.removed_job_action }}</removedJobAction>
<removedViewAction>{{ jenkins_seed_job.removed_view_action }}</removedViewAction>
<lookupStrategy>JENKINS_ROOT</lookupStrategy>
<additionalClasspath>{{ jenkins_seed_job.scm.additional_classpath }}</additionalClasspath>
</javaposse.jobdsl.plugin.ExecuteDslScripts>
</builders>
<publishers/>
<buildWrappers/>
</project>
...@@ -19,7 +19,9 @@ jenkins_plugins:
- { name: "build-name-setter", version: "1.3" }
- { name: "build-pipeline-plugin", version: "1.4" }
- { name: "build-timeout", version: "1.14.1" }
- { name: "build-user-vars-plugin", version: "1.5" }
- { name: "buildgraph-view", version: "1.1.1" }
- { name: "cloudbees-folder", version: "5.2.1" }
- { name: "cobertura", version: "1.9.6" }
- { name: "copyartifact", version: "1.32.1" }
- { name: "copy-to-slave", version: "1.4.3" }
...@@ -34,15 +36,19 @@ jenkins_plugins:
- { name: "github", version: "1.14.0" }
- { name: "github-api", version: "1.69" }
- { name: "github-oauth", version: "0.20" }
- { name: "github-sqs-plugin", version: "1.5" }
- { name: "gradle", version: "1.24" }
- { name: "grails", version: "1.7" }
- { name: "groovy-postbuild", version: "2.2" }
- { name: "htmlpublisher", version: "1.3" }
- { name: "javadoc", version: "1.3" }
- { name: "jobConfigHistory", version: "2.10" }
- { name: "job-dsl", version: "1.43" }
- { name: "junit", version: "1.3" }
- { name: "ldap", version: "1.11" }
- { name: "mailer", version: "1.16" }
- { name: "mapdb-api", version: "1.0.6.0" }
- { name: "mask-passwords", version: "2.8" }
- { name: "matrix-auth", version: "1.2" }
- { name: "matrix-project", version: "1.4" }
- { name: "monitoring", version: "1.56.0" }
...
...@@ -99,7 +99,7 @@
path: "{{ jenkins_home }}/plugins/{{ item.item.name }}.hpi"
owner: "{{ jenkins_user }}"
group: "{{ jenkins_group }}"
mode: "644"
with_items: jenkins_plugin_downloads.results
when: item.changed
notify:
...@@ -110,7 +110,7 @@
# upstream, we may be able to use the regular plugin install process.
# Until then, we compile and install the forks ourselves.
- name: checkout custom plugin repo
git_2_0_1: >
repo={{ item.repo_url }} dest=/tmp/{{ item.repo_name }} version={{ item.version }}
accept_hostkey=yes
with_items: jenkins_custom_plugins
...@@ -131,7 +131,7 @@
- name: set custom plugin permissions
file: path={{ jenkins_home }}/plugins/{{ item.item.package }}
owner={{ jenkins_user }} group={{ jenkins_group }} mode="700"
with_items: jenkins_custom_plugins_checkout.results
when: item.changed
...
...@@ -16,6 +16,10 @@ jenkins_debian_pkgs:
# packer direct download URL
packer_url: "https://releases.hashicorp.com/packer/0.8.6/packer_0.8.6_linux_amd64.zip"
# custom firefox
custom_firefox_version: 42.0
custom_firefox_url: "https://ftp.mozilla.org/pub/firefox/releases/{{ custom_firefox_version }}/linux-x86_64/en-US/firefox-{{ custom_firefox_version }}.tar.bz2"
# Pip-accel itself and other workarounds that need to be installed with pip
pip_accel_reqs:
# Install Shapely with pip as it does not install cleanly
...
...@@ -6,7 +6,7 @@
# refers to the --depth-setting of git clone. A value of 1
# will truncate all history prior to the last revision.
- name: Create shallow clone of edx-platform
git_2_0_1: >
repo=https://github.com/edx/edx-platform.git
dest={{ jenkins_home }}/shallow-clone
version={{ jenkins_edx_platform_version }}
...@@ -74,7 +74,23 @@
chdir={{ jenkins_home }}
sudo_user: "{{ jenkins_user }}"
# Remove the shallow-clone directory now that we are
# done with it
- name: Remove shallow-clone
file: path={{ jenkins_home }}/shallow-clone state=absent
# Although firefox is installed through the browsers role, install
# a newer copy under the jenkins home directory. This will allow
# platform pull requests to use a custom firefox path to a different
# version
- name: Install custom firefox to jenkins home
get_url:
url: "{{ custom_firefox_url }}"
dest: "{{ jenkins_home }}/firefox-{{ custom_firefox_version }}.tar.bz2"
- name: unpack custom firefox version
unarchive:
src: "{{ jenkins_home }}/firefox-{{ custom_firefox_version }}.tar.bz2"
dest: "{{ jenkins_home }}"
creates: "{{ jenkins_home }}/firefox"
copy: no
# Courtesy of Gregory Nicholas
_subcommand_opts()
{
local awkfile command cur usage
command=$1
cur=${COMP_WORDS[COMP_CWORD]}
awkfile=/tmp/paver-option-awkscript-$$.awk
echo '
BEGIN {
opts = "";
}
{
for (i = 1; i <= NF; i = i + 1) {
# Match short options (-a, -S, -3)
# or long options (--long-option, --another_option)
# in output from paver help [subcommand]
if ($i ~ /^(-[A-Za-z0-9]|--[A-Za-z][A-Za-z0-9_-]*)/) {
opt = $i;
# remove trailing , and = characters.
match(opt, "[,=]");
if (RSTART > 0) {
opt = substr(opt, 0, RSTART);
}
opts = opts " " opt;
}
}
}
END {
print opts
}' > $awkfile
usage=`paver help $command`
options=`echo "$usage"|awk -f $awkfile`
COMPREPLY=( $(compgen -W "$options" -- "$cur") )
}
_paver()
{
local cur prev
COMPREPLY=()
# Variable to hold the current word
cur="${COMP_WORDS[COMP_CWORD]}"
prev="${COMP_WORDS[COMP_CWORD - 1]}"
# Build a list of the available tasks from: `paver --help --quiet`
local cmds=$(paver -hq | awk '/^ ([a-zA-Z][a-zA-Z0-9_]+)/ {print $1}')
subcmd="${COMP_WORDS[1]}"
# Generate possible matches and store them in the
# array variable COMPREPLY
if [[ -n $subcmd ]]
then
case $subcmd in
test_system)
_test_system_args
if [[ -n $COMPREPLY ]]
then
return 0
fi
;;
test_bokchoy)
_test_bokchoy_args
if [[ -n $COMPREPLY ]]
then
return 0
fi
;;
*)
;;
esac
if [[ ${#COMP_WORDS[*]} == 3 ]]
then
_subcommand_opts $subcmd
return 0
else
if [[ "$cur" == -* ]]
then
_subcommand_opts $subcmd
return 0
else
COMPREPLY=( $(compgen -o nospace -- "$cur") )
fi
fi
fi
if [[ ${#COMP_WORDS[*]} == 2 ]]
then
COMPREPLY=( $(compgen -W "${cmds}" -- "$cur") )
fi
}
_test_system_args()
{
local cur prev
cur="${COMP_WORDS[COMP_CWORD]}"
prev="${COMP_WORDS[COMP_CWORD - 1]}"
case "$prev" in
-s|--system)
COMPREPLY=( $(compgen -W "lms cms" -- "$cur") )
return 0
;;
*)
;;
esac
}
_test_bokchoy_args()
{
local bokchoy_tests cur prev
cur="${COMP_WORDS[COMP_CWORD]}"
prev="${COMP_WORDS[COMP_CWORD - 1]}"
case "$prev" in
-d|--test_dir)
bokchoy_tests=`find common/test/acceptance -name \*.py| sed 's:common/test/acceptance/::'`
COMPREPLY=( $(compgen -o filenames -W "$bokchoy_tests" -- $cur) )
return 0
;;
-t|--test_spec)
bokchoy_tests=`find common/test/acceptance/tests -name \*.py| sed 's:common/test/acceptance/::'`
COMPREPLY=( $(compgen -o filenames -W "$bokchoy_tests" -- $cur) )
return 0
;;
*)
;;
esac
}
# Assign the auto-completion function for our command.
complete -F _paver -o default paver
...@@ -60,9 +60,11 @@
# Create scripts to add paver autocomplete
- name: add paver autocomplete
copy:
src: paver_autocomplete
dest: "{{ item.home }}/.paver_autocomplete"
owner: "{{ item.user }}"
mode: 0755
with_items: localdev_accounts
when: item.user != 'None'
ignore_errors: yes
...
# Courtesy of Gregory Nicholas
_paver()
{
local cur
COMPREPLY=()
# Variable to hold the current word
cur="${COMP_WORDS[COMP_CWORD]}"
# Build a list of the available tasks from: `paver --help --quiet`
local cmds=$(paver -hq | awk '/^ ([a-zA-Z][a-zA-Z0-9_]+)/ {print $1}')
# Generate possible matches and store them in the
# array variable COMPREPLY
COMPREPLY=($(compgen -W "${cmds}" $cur))
}
# Assign the auto-completion function for our command.
complete -F _paver paver
...@@ -56,3 +56,16 @@ locust_debian_pkgs:
- gfortran
locust_redhat_pkgs: []
# ulimit variables
ulimit_config:
- domain: '*'
type: soft
item: nofile
value: 4096
- domain: '*'
type: hard
item: nofile
value: 4096
ulimit_conf_file: "/etc/security/limits.conf"
...@@ -72,3 +72,9 @@
name={{ locust_service_name }}
when: not disable_edx_services
sudo_user: "{{ supervisor_service_user }}"
- name: increase system file descriptor limit (logout and login required to take effect)
lineinfile:
dest: "{{ ulimit_conf_file }}"
line: "{{ item.domain }} {{ item.type }} {{ item.item }} {{ item.value }}"
with_items: "{{ ulimit_config }}"
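Rendered against the `ulimit_config` defaults above, the `lineinfile` task appends entries of this form to `/etc/security/limits.conf`:

```
* soft nofile 4096
* hard nofile 4096
```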
...@@ -58,7 +58,7 @@ server {
{% endif %}
location ~ ^/static/(?P<file>.*) {
root {{ PROGRAMS_DATA_DIR }};
try_files /staticfiles/$file =404;
# Request that the browser use SSL for these connections. Repeated here
...@@ -71,6 +71,13 @@ server {
add_header Cache-Control "public; max-age=3600";
}
location ~ ^/media/(?P<file>.*) {
root {{ PROGRAMS_DATA_DIR }};
try_files /media/$file =404;
# django / app always assigns new filenames so these can be cached forever.
add_header Cache-Control "public; max-age=31536000";
}
location / {
try_files $uri @proxy_to_app;
}
...
---
- name: checkout code
git_2_0_1:
dest={{ NOTIFIER_CODE_DIR }} repo={{ NOTIFIER_SOURCE_REPO }}
version={{ NOTIFIER_VERSION }}
accept_hostkey=yes
...@@ -31,7 +31,7 @@
when: NOTIFIER_GIT_IDENTITY != ""
- name: checkout theme
git_2_0_1: >
dest={{ NOTIFIER_CODE_DIR }}/{{ NOTIFIER_THEME_NAME }}
repo={{ NOTIFIER_THEME_REPO }}
version={{ NOTIFIER_THEME_VERSION }}
...
...@@ -9,3 +9,6 @@ oraclejdk_arch: "x64"
oraclejdk_file: "jdk-{{ oraclejdk_version }}-{{ oraclejdk_platform }}-{{ oraclejdk_arch }}.tar.gz"
oraclejdk_url: "http://download.oracle.com/otn-pub/java/jdk/{{ oraclejdk_version }}-{{ oraclejdk_build }}/{{ oraclejdk_file }}"
oraclejdk_link: "/usr/lib/jvm/java-8-oracle"
oraclejdk_debian_pkgs:
- curl
...@@ -12,6 +12,10 @@
# - common
# - oraclejdk
- name: install debian needed pkgs
apt: pkg={{ item }}
with_items: oraclejdk_debian_pkgs
- name: download Oracle Java
shell: >
curl -b gpw_e24=http%3A%2F%2Fwww.oracle.com -b oraclelicense=accept-securebackup-cookie -O -L {{ oraclejdk_url }}
...
...@@ -55,6 +55,43 @@ PROGRAMS_PLATFORM_NAME: 'Your Platform Name Here'
# See: https://github.com/ottoyiu/django-cors-headers/.
PROGRAMS_CORS_ORIGIN_WHITELIST: []
PROGRAMS_DATA_DIR: '{{ COMMON_DATA_DIR }}/{{ programs_service_name }}'
PROGRAMS_MEDIA_ROOT: '{{ PROGRAMS_DATA_DIR }}/media'
PROGRAMS_MEDIA_URL: '/media/'
# Example settings to use Amazon S3 as a storage backend for user-uploaded files
# https://django-storages.readthedocs.org/en/latest/backends/amazon-S3.html#amazon-s3
#
# This is only for user-uploaded files and does not cover static assets that ship
# with the code.
#
# Note, AWS_S3_CUSTOM_DOMAIN is required, otherwise boto will generate non-working
# querystring URLs for assets (see https://github.com/boto/boto/issues/1477)
#
# Note, set AWS_S3_CUSTOM_DOMAIN to the cloudfront domain instead, when that is in use.
#
# PROGRAMS_BUCKET: mybucket
# programs_s3_domain: s3.amazonaws.com
# PROGRAMS_MEDIA_ROOT: 'media' # NOTE use '$source_ip/media' for an edx sandbox
#
# PROGRAMS_MEDIA_STORAGE_BACKEND:
# DEFAULT_FILE_STORAGE: 'programs.apps.core.s3utils.MediaS3BotoStorage'
# MEDIA_ROOT: '{{ PROGRAMS_MEDIA_ROOT }}'
# MEDIA_URL: 'https://{{ PROGRAMS_BUCKET }}.{{ programs_s3_domain }}/{{ PROGRAMS_MEDIA_ROOT }}/'
# AWS_STORAGE_BUCKET_NAME: '{{ PROGRAMS_BUCKET }}'
# AWS_S3_CUSTOM_DOMAIN: '{{ PROGRAMS_BUCKET }}.{{ programs_s3_domain }}'
# AWS_QUERYSTRING_AUTH: false
# AWS_QUERYSTRING_EXPIRE: false
# AWS_DEFAULT_ACL: ''
# AWS_HEADERS:
# Cache-Control: max-age=31536000
#
#
PROGRAMS_MEDIA_STORAGE_BACKEND:
DEFAULT_FILE_STORAGE: 'django.core.files.storage.FileSystemStorage'
MEDIA_ROOT: '{{ PROGRAMS_MEDIA_ROOT }}'
MEDIA_URL: '{{ PROGRAMS_MEDIA_URL }}'
PROGRAMS_SERVICE_CONFIG:
SECRET_KEY: '{{ PROGRAMS_SECRET_KEY }}'
TIME_ZONE: '{{ PROGRAMS_TIME_ZONE }}'
...@@ -66,7 +103,7 @@ PROGRAMS_SERVICE_CONFIG:
SOCIAL_AUTH_EDX_OIDC_URL_ROOT: '{{ PROGRAMS_SOCIAL_AUTH_EDX_OIDC_URL_ROOT }}'
SOCIAL_AUTH_REDIRECT_IS_HTTPS: '{{ PROGRAMS_SOCIAL_AUTH_REDIRECT_IS_HTTPS }}'
STATIC_ROOT: '{{ PROGRAMS_DATA_DIR }}/staticfiles'
# db config
DATABASE_OPTIONS:
connect_timeout: 10
...@@ -76,7 +113,10 @@ PROGRAMS_SERVICE_CONFIG:
CORS_ORIGIN_WHITELIST: '{{ PROGRAMS_CORS_ORIGIN_WHITELIST }}'
PUBLIC_URL_ROOT: '{{ PROGRAMS_URL_ROOT }}'
ORGANIZATIONS_API_URL_ROOT: '{{ PROGRAMS_ORGANIZATIONS_API_URL_ROOT }}'
MEDIA_STORAGE_BACKEND: '{{ PROGRAMS_MEDIA_STORAGE_BACKEND }}'
PROGRAMS_REPOS:
- PROTOCOL: "{{ COMMON_GIT_PROTOCOL }}"
...@@ -130,6 +170,7 @@ programs_requirements:
#
programs_debian_pkgs:
- libjpeg-dev
- libmysqlclient-dev
- libssl-dev
...
...@@ -88,6 +88,14 @@
- "compress"
when: not devstack
# NOTE: this isn't needed when S3 is used for PROGRAMS_MEDIA_STORAGE_BACKEND
- name: create programs media dir
file: >
path="{{ item }}" state=directory mode=0775
owner="{{ programs_user }}" group="{{ common_web_group }}"
with_items:
- "{{ PROGRAMS_MEDIA_ROOT }}"
- name: write out the supervisor wrapper
template:
src: "edx/app/programs/programs.sh.j2"
...
...@@ -19,11 +19,11 @@ RABBIT_USERS:
- name: 'celery'
password: 'celery'
RABBITMQ_CLUSTERED: !!null
RABBITMQ_VHOSTS:
- '/'
RABBITMQ_CLUSTERED_HOSTS: []
# Internal role variables below this line
# option to force deletion of the mnesia dir
...@@ -56,7 +56,5 @@ rabbitmq_auth_config:
erlang_cookie: "{{ RABBIT_ERLANG_COOKIE }}"
admins: "{{ RABBIT_USERS }}"
rabbitmq_clustered_hosts: []
rabbitmq_plugins:
- rabbitmq_management
...@@ -134,7 +134,7 @@
- name: make queues mirrored
shell: >
/usr/sbin/rabbitmqctl -p {{ item }} set_policy HA "" '{"ha-mode":"all","ha-sync-mode":"automatic"}'
when: RABBITMQ_CLUSTERED_HOSTS|length > 1
with_items: RABBITMQ_VHOSTS
tags:
- ha
...
...@@ -2,19 +2,8 @@
[{rabbit, [
{log_levels, [{connection, info}]},
{#
Note: That these names should include the node name prefix. eg. 'rabbit@hostname'
#}
{cluster_nodes, {['{{ RABBITMQ_CLUSTERED_HOSTS|join("\',\'") }}'], disc}}
]}].
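With, say, `RABBITMQ_CLUSTERED_HOSTS: ['rabbit@ip-10-0-0-1', 'rabbit@ip-10-0-0-2']` (hypothetical node names, including the required `rabbit@` prefix), the `join` in the template above renders the Erlang config to:

```erlang
[{rabbit, [
  {log_levels, [{connection, info}]},
  {cluster_nodes, {['rabbit@ip-10-0-0-1','rabbit@ip-10-0-0-2'], disc}}
]}].
```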
...@@ -59,7 +59,7 @@
- install:base
- name: update rbenv repo
git_2_0_1: >
repo=https://github.com/sstephenson/rbenv.git
dest={{ rbenv_dir }}/.rbenv version={{ rbenv_version }}
accept_hostkey=yes
...
...@@ -19,6 +19,7 @@ MIGRATION_COMMANDS = {
'insights': ". {env_file}; {python} {code_dir}/manage.py migrate --noinput --list",
'analytics_api': ". {env_file}; {python} {code_dir}/manage.py migrate --noinput --list",
'credentials': ". {env_file}; {python} {code_dir}/manage.py migrate --noinput --list",
'discovery': ". {env_file}; {python} {code_dir}/manage.py migrate --noinput --list",
}
HIPCHAT_USER = "PreSupervisor"
...@@ -91,7 +92,7 @@ if __name__ == '__main__':
ecom_migration_args.add_argument("--ecommerce-env",
help="Location of the ecommerce environment file.")
ecom_migration_args.add_argument("--ecommerce-code-dir",
help="Location of the ecommerce code.")
programs_migration_args = parser.add_argument_group("programs_migrations",
"Args for running programs migration checks.")
...@@ -100,7 +101,7 @@ if __name__ == '__main__':
programs_migration_args.add_argument("--programs-env",
help="Location of the programs environment file.")
programs_migration_args.add_argument("--programs-code-dir",
help="Location of the programs code.")
credentials_migration_args = parser.add_argument_group("credentials_migrations",
"Args for running credentials migration checks.")
...@@ -109,7 +110,16 @@
credentials_migration_args.add_argument("--credentials-env",
help="Location of the credentials environment file.")
credentials_migration_args.add_argument("--credentials-code-dir",
help="Location of the credentials code.")
discovery_migration_args = parser.add_argument_group("discovery_migrations",
"Args for running discovery migration checks.")
discovery_migration_args.add_argument("--discovery-python",
help="Path to python to use for executing migration check.")
discovery_migration_args.add_argument("--discovery-env",
help="Location of the discovery environment file.")
discovery_migration_args.add_argument("--discovery-code-dir",
help="Location of the discovery code.")
insights_migration_args = parser.add_argument_group("insights_migrations", insights_migration_args = parser.add_argument_group("insights_migrations",
"Args for running insights migration checks.") "Args for running insights migration checks.")
...@@ -118,7 +128,7 @@ if __name__ == '__main__': ...@@ -118,7 +128,7 @@ if __name__ == '__main__':
insights_migration_args.add_argument("--insights-env", insights_migration_args.add_argument("--insights-env",
help="Location of the insights environment file.") help="Location of the insights environment file.")
insights_migration_args.add_argument("--insights-code-dir", insights_migration_args.add_argument("--insights-code-dir",
help="Location to of the insights code.") help="Location of the insights code.")
analyticsapi_migration_args = parser.add_argument_group("analytics_api_migrations", analyticsapi_migration_args = parser.add_argument_group("analytics_api_migrations",
"Args for running analytics_api migration checks.") "Args for running analytics_api migration checks.")
...@@ -127,7 +137,7 @@ if __name__ == '__main__': ...@@ -127,7 +137,7 @@ if __name__ == '__main__':
analyticsapi_migration_args.add_argument("--analytics-api-env", analyticsapi_migration_args.add_argument("--analytics-api-env",
help="Location of the analytics_api environment file.") help="Location of the analytics_api environment file.")
analyticsapi_migration_args.add_argument("--analytics-api-code-dir", analyticsapi_migration_args.add_argument("--analytics-api-code-dir",
help="Location to of the analytics_api code.") help="Location of the analytics_api code.")
hipchat_args = parser.add_argument_group("hipchat", hipchat_args = parser.add_argument_group("hipchat",
"Args for hipchat notification.") "Args for hipchat notification.")
...@@ -233,6 +243,7 @@ if __name__ == '__main__': ...@@ -233,6 +243,7 @@ if __name__ == '__main__':
"ecommerce": {'python': args.ecommerce_python, 'env_file': args.ecommerce_env, 'code_dir': args.ecommerce_code_dir}, "ecommerce": {'python': args.ecommerce_python, 'env_file': args.ecommerce_env, 'code_dir': args.ecommerce_code_dir},
"programs": {'python': args.programs_python, 'env_file': args.programs_env, 'code_dir': args.programs_code_dir}, "programs": {'python': args.programs_python, 'env_file': args.programs_env, 'code_dir': args.programs_code_dir},
"credentials": {'python': args.credentials_python, 'env_file': args.credentials_env, 'code_dir': args.credentials_code_dir}, "credentials": {'python': args.credentials_python, 'env_file': args.credentials_env, 'code_dir': args.credentials_code_dir},
"discovery": {'python': args.discovery_python, 'env_file': args.discovery_env, 'code_dir': args.discovery_code_dir},
"insights": {'python': args.insights_python, 'env_file': args.insights_env, 'code_dir': args.insights_code_dir}, "insights": {'python': args.insights_python, 'env_file': args.insights_env, 'code_dir': args.insights_code_dir},
"analytics_api": {'python': args.analytics_api_python, 'env_file': args.analytics_api_env, 'code_dir': args.analytics_api_code_dir} "analytics_api": {'python': args.analytics_api_python, 'env_file': args.analytics_api_env, 'code_dir': args.analytics_api_code_dir}
} }
......
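The discovery arguments added above follow the same argparse argument-group pattern as the other services in `pre_supervisor_checks.py`. A self-contained sketch of just that group:

```python
import argparse

# Sketch of the per-service argument-group pattern, showing only the
# new discovery group (not the full script).
parser = argparse.ArgumentParser()
discovery_args = parser.add_argument_group(
    "discovery_migrations", "Args for running discovery migration checks.")
discovery_args.add_argument("--discovery-python",
    help="Path to python to use for executing migration check.")
discovery_args.add_argument("--discovery-env",
    help="Location of the discovery environment file.")
discovery_args.add_argument("--discovery-code-dir",
    help="Location of the discovery code.")

args = parser.parse_args(["--discovery-python", "/edx/bin/python.discovery"])
print(args.discovery_python)  # /edx/bin/python.discovery
print(args.discovery_env)     # None (not supplied)
```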
...@@ -17,4 +17,11 @@ setuid {{ supervisor_user }} ...@@ -17,4 +17,11 @@ setuid {{ supervisor_user }}
{% set credentials_command = "" %} {% set credentials_command = "" %}
{% endif %} {% endif %}
exec {{ supervisor_venv_dir }}/bin/python {{ supervisor_app_dir }}/pre_supervisor_checks.py --available={{ supervisor_available_dir }} --enabled={{ supervisor_cfg_dir }} {% if SUPERVISOR_HIPCHAT_API_KEY is defined %}--hipchat-api-key {{ SUPERVISOR_HIPCHAT_API_KEY }} --hipchat-room {{ SUPERVISOR_HIPCHAT_ROOM }} {% endif %} {% if edxapp_code_dir is defined %}--edxapp-python {{ COMMON_BIN_DIR }}/python.edxapp --edxapp-code-dir {{ edxapp_code_dir }} --edxapp-env {{ edxapp_app_dir }}/edxapp_env{% endif %} {% if xqueue_code_dir is defined %}--xqueue-code-dir {{ xqueue_code_dir }} --xqueue-python {{ COMMON_BIN_DIR }}/python.xqueue {% endif %} {% if ecommerce_code_dir is defined %}--ecommerce-env {{ ecommerce_home }}/ecommerce_env --ecommerce-code-dir {{ ecommerce_code_dir }} --ecommerce-python {{ COMMON_BIN_DIR }}/python.ecommerce {% endif %} {% if insights_code_dir is defined %}--insights-env {{ insights_home }}/insights_env --insights-code-dir {{ insights_code_dir }} --insights-python {{ COMMON_BIN_DIR }}/python.insights {% endif %} {% if analytics_api_code_dir is defined %}--analytics-api-env {{ analytics_api_home }}/analytics_api_env --analytics-api-code-dir {{ analytics_api_code_dir }} --analytics-api-python {{ COMMON_BIN_DIR }}/python.analytics_api {% endif %} {{ programs_command }} {{ credentials_command }} {% if discovery_code_dir is defined %}
{% set discovery_command = "--discovery-env " + discovery_home + "/discovery_env --discovery-code-dir " + discovery_code_dir + " --discovery-python " + COMMON_BIN_DIR + "/python.discovery" %}
{% else %}
{% set discovery_command = "" %}
{% endif %}
exec {{ supervisor_venv_dir }}/bin/python {{ supervisor_app_dir }}/pre_supervisor_checks.py --available={{ supervisor_available_dir }} --enabled={{ supervisor_cfg_dir }} {% if SUPERVISOR_HIPCHAT_API_KEY is defined %}--hipchat-api-key {{ SUPERVISOR_HIPCHAT_API_KEY }} --hipchat-room {{ SUPERVISOR_HIPCHAT_ROOM }} {% endif %} {% if edxapp_code_dir is defined %}--edxapp-python {{ COMMON_BIN_DIR }}/python.edxapp --edxapp-code-dir {{ edxapp_code_dir }} --edxapp-env {{ edxapp_app_dir }}/edxapp_env{% endif %} {% if xqueue_code_dir is defined %}--xqueue-code-dir {{ xqueue_code_dir }} --xqueue-python {{ COMMON_BIN_DIR }}/python.xqueue {% endif %} {% if ecommerce_code_dir is defined %}--ecommerce-env {{ ecommerce_home }}/ecommerce_env --ecommerce-code-dir {{ ecommerce_code_dir }} --ecommerce-python {{ COMMON_BIN_DIR }}/python.ecommerce {% endif %} {% if insights_code_dir is defined %}--insights-env {{ insights_home }}/insights_env --insights-code-dir {{ insights_code_dir }} --insights-python {{ COMMON_BIN_DIR }}/python.insights {% endif %} {% if analytics_api_code_dir is defined %}--analytics-api-env {{ analytics_api_home }}/analytics_api_env --analytics-api-code-dir {{ analytics_api_code_dir }} --analytics-api-python {{ COMMON_BIN_DIR }}/python.analytics_api {% endif %} {{ programs_command }} {{ discovery_command }} {{ credentials_command }}
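The `{% set discovery_command %}` concatenation in the template above can be sketched as a plain function (argument names and example paths are illustrative):

```python
def build_discovery_command(discovery_home, discovery_code_dir, common_bin_dir):
    # Mirrors the Jinja set: env file, code dir, then the service's python.
    return ("--discovery-env " + discovery_home + "/discovery_env"
            " --discovery-code-dir " + discovery_code_dir +
            " --discovery-python " + common_bin_dir + "/python.discovery")

print(build_discovery_command("/edx/app/discovery",
                              "/edx/app/discovery/discovery",
                              "/edx/bin"))
# --discovery-env /edx/app/discovery/discovery_env --discovery-code-dir /edx/app/discovery/discovery --discovery-python /edx/bin/python.discovery
```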
...@@ -21,7 +21,7 @@ ...@@ -21,7 +21,7 @@
# #
- name: Create clone of edx-platform - name: Create clone of edx-platform
git: > git_2_0_1: >
repo=https://github.com/edx/edx-platform.git repo=https://github.com/edx/edx-platform.git
dest={{ test_build_server_repo_path }}/edx-platform-clone dest={{ test_build_server_repo_path }}/edx-platform-clone
version={{ test_edx_platform_version }} version={{ test_edx_platform_version }}
......
...@@ -43,7 +43,7 @@ ...@@ -43,7 +43,7 @@
# Do A Checkout # Do A Checkout
- name: git checkout xqueue repo into xqueue_code_dir - name: git checkout xqueue repo into xqueue_code_dir
git: > git_2_0_1: >
dest={{ xqueue_code_dir }} repo={{ xqueue_source_repo }} version={{ xqueue_version }} dest={{ xqueue_code_dir }} repo={{ xqueue_source_repo }} version={{ xqueue_version }}
accept_hostkey=yes accept_hostkey=yes
sudo_user: "{{ xqueue_user }}" sudo_user: "{{ xqueue_user }}"
......
...@@ -3,7 +3,7 @@ ...@@ -3,7 +3,7 @@
# a per queue basis. # a per queue basis.
- name: checkout grader code - name: checkout grader code
git: > git_2_0_1: >
dest={{ xqwatcher_app_dir }}/data/{{ item.COURSE }} repo={{ item.GIT_REPO }} dest={{ xqwatcher_app_dir }}/data/{{ item.COURSE }} repo={{ item.GIT_REPO }}
version={{ item.GIT_REF }} version={{ item.GIT_REF }}
ssh_opts="{{ xqwatcher_course_git_ssh_opts }}" ssh_opts="{{ xqwatcher_course_git_ssh_opts }}"
......
...@@ -19,7 +19,7 @@ ...@@ -19,7 +19,7 @@
- restart xserver - restart xserver
- name: checkout code - name: checkout code
git: > git_2_0_1: >
dest={{ xserver_code_dir }} repo={{ xserver_source_repo }} version={{xserver_version}} dest={{ xserver_code_dir }} repo={{ xserver_source_repo }} version={{xserver_version}}
accept_hostkey=yes accept_hostkey=yes
sudo_user: "{{ xserver_user }}" sudo_user: "{{ xserver_user }}"
...@@ -58,7 +58,7 @@ ...@@ -58,7 +58,7 @@
notify: restart xserver notify: restart xserver
- name: checkout grader code - name: checkout grader code
git: > git_2_0_1: >
dest={{ XSERVER_GRADER_DIR }} repo={{ XSERVER_GRADER_SOURCE }} version={{ xserver_grader_version }} dest={{ XSERVER_GRADER_DIR }} repo={{ XSERVER_GRADER_SOURCE }} version={{ xserver_grader_version }}
accept_hostkey=yes accept_hostkey=yes
environment: environment:
......
...@@ -42,7 +42,7 @@ ...@@ -42,7 +42,7 @@
notify: restart xsy notify: restart xsy
- name: checkout the code - name: checkout the code
git: > git_2_0_1: >
dest="{{ xsy_code_dir }}" repo="{{ xsy_source_repo }}" dest="{{ xsy_code_dir }}" repo="{{ xsy_source_repo }}"
version="{{ xsy_version }}" accept_hostkey=yes version="{{ xsy_version }}" accept_hostkey=yes
sudo_user: "{{ xsy_user }}" sudo_user: "{{ xsy_user }}"
......
---
#EDXAPP_PREVIEW_LMS_BASE: preview-${deploy_host}
#EDXAPP_LMS_BASE: ${deploy_host}
#EDXAPP_CMS_BASE: studio-${deploy_host}
#EDXAPP_SITE_NAME: ${deploy_host}
#CERTS_DOWNLOAD_URL: "http://${deploy_host}:18090"
#CERTS_VERIFY_URL: "http://${deploy_host}:18090"
#edx_internal: True
#COMMON_USER_INFO:
# - name: ${github_username}
# github: true
# type: admin
#USER_CMD_PROMPT: '[$name_tag] '
#COMMON_ENABLE_NEWRELIC_APP: $enable_newrelic
#COMMON_ENABLE_DATADOG: $enable_datadog
#FORUM_NEW_RELIC_ENABLE: $enable_newrelic
#ENABLE_PERFORMANCE_COURSE: $performance_course
#ENABLE_DEMO_TEST_COURSE: $demo_test_course
#ENABLE_EDX_DEMO_COURSE: $edx_demo_course
#EDXAPP_NEWRELIC_LMS_APPNAME: sandbox-${dns_name}-edxapp-lms
#EDXAPP_NEWRELIC_CMS_APPNAME: sandbox-${dns_name}-edxapp-cms
#EDXAPP_NEWRELIC_WORKERS_APPNAME: sandbox-${dns_name}-edxapp-workers
#XQUEUE_NEWRELIC_APPNAME: sandbox-${dns_name}-xqueue
#FORUM_NEW_RELIC_APP_NAME: sandbox-${dns_name}-forums
#SANDBOX_USERNAME: $github_username
#EDXAPP_ECOMMERCE_PUBLIC_URL_ROOT: "https://ecommerce-${deploy_host}"
#EDXAPP_ECOMMERCE_API_URL: "https://ecommerce-${deploy_host}/api/v2"
#
#ECOMMERCE_ECOMMERCE_URL_ROOT: "https://ecommerce-${deploy_host}"
#ECOMMERCE_LMS_URL_ROOT: "https://${deploy_host}"
#ECOMMERCE_SOCIAL_AUTH_REDIRECT_IS_HTTPS: true
#
#PROGRAMS_LMS_URL_ROOT: "https://${deploy_host}"
#PROGRAMS_URL_ROOT: "https://programs-${deploy_host}"
#PROGRAMS_SOCIAL_AUTH_REDIRECT_IS_HTTPS: true
#
#CREDENTIALS_LMS_URL_ROOT: "https://${deploy_host}"
#CREDENTIALS_URL_ROOT: "https://credentials-${deploy_host}"
#CREDENTIALS_SOCIAL_AUTH_REDIRECT_IS_HTTPS: true
#COURSE_DISCOVERY_ECOMMERCE_API_URL: "https://ecommerce-${deploy_host}/api/v2"
#
#DISCOVERY_OAUTH_URL_ROOT: "https://${deploy_host}"
#DISCOVERY_URL_ROOT: "https://discovery-${deploy_host}"
#DISCOVERY_SOCIAL_AUTH_REDIRECT_IS_HTTPS: true
## These flags are used to toggle role installation
## in the plays that install each server cluster
#COMMON_NEWRELIC_LICENSE: ''
#COMMON_AWS_SYNC: True
#NEWRELIC_LICENSE_KEY: ''
#NEWRELIC_LOGWATCH: []
# - logwatch-cms-errors.j2
# - logwatch-lms-errors.j2
#COMMON_ENABLE_NEWRELIC: True
## Datadog Settings
#datadog_api_key: ""
#COMMON_DATADOG_API_KEY: ""
#DATADOG_API_KEY: ""
## NGINX settings:
#NGINX_ENABLE_SSL: True
#NGINX_SSL_CERTIFICATE: '/path/to/ssl.crt'
#NGINX_SSL_KEY: '/path/to/ssl.key'
#NGINX_SERVER_ERROR_IMG: https://files.edx.org/images-public/edx-sad-small.png
#EDXAPP_XBLOCK_FS_STORAGE_BUCKET: 'your-xblock-storage-bucket'
#EDXAPP_XBLOCK_FS_STORAGE_PREFIX: 'sandbox-edx/'
#EDXAPP_LMS_SSL_NGINX_PORT: 443
#EDXAPP_CMS_SSL_NGINX_PORT: 443
#EDXAPP_LMS_NGINX_PORT: 80
#EDXAPP_LMS_PREVIEW_NGINX_PORT: 80
#EDXAPP_CMS_NGINX_PORT: 80
#EDXAPP_WORKERS:
# lms: 2
# cms: 2
#XSERVER_GRADER_DIR: "/edx/var/xserver/data/content-mit-600x~2012_Fall"
#XSERVER_GRADER_SOURCE: "git@github.com:/MITx/6.00x.git"
#CERTS_BUCKET: "verify-test.example.org"
#migrate_db: "yes"
#openid_workaround: True
#rabbitmq_ip: "127.0.0.1"
#rabbitmq_refresh: True
#COMMON_HOSTNAME: edx-server
#COMMON_DEPLOYMENT: edx
#COMMON_ENVIRONMENT: sandbox
#ora_gunicorn_workers: 1
#AS_WORKERS: 1
#ANALYTICS_WORKERS: 1
#ANALYTICS_API_GUNICORN_WORKERS: 1
#XQUEUE_WORKERS_PER_QUEUE: 2
## Settings for Grade downloads
#EDXAPP_GRADE_STORAGE_TYPE: 's3'
#EDXAPP_GRADE_BUCKET: 'your-grade-bucket'
#EDXAPP_GRADE_ROOT_PATH: 'sandbox'
#EDXAPP_SEGMENT_IO: 'true'
#EDXAPP_SEGMENT_IO_LMS: 'true'
#EDXAPP_SEGMENT_IO_KEY: 'your segment.io key'
#EDXAPP_SEGMENT_IO_LMS_KEY: 'your segment.io key'
#EDXAPP_YOUTUBE_API_KEY: "Your Youtube API Key"
#
#EDXAPP_FEATURES:
# AUTH_USE_OPENID_PROVIDER: true
# CERTIFICATES_ENABLED: true
# ENABLE_DISCUSSION_SERVICE: true
# ENABLE_DISCUSSION_HOME_PANEL: true
# ENABLE_INSTRUCTOR_ANALYTICS: false
# SUBDOMAIN_BRANDING: false
# SUBDOMAIN_COURSE_LISTINGS: false
# PREVIEW_LMS_BASE: "{{ EDXAPP_PREVIEW_LMS_BASE }}"
# ENABLE_S3_GRADE_DOWNLOADS: true
# USE_CUSTOM_THEME: "{{ edxapp_use_custom_theme }}"
# ENABLE_MKTG_SITE: "{{ EDXAPP_ENABLE_MKTG_SITE }}"
# AUTOMATIC_AUTH_FOR_TESTING: "{{ EDXAPP_ENABLE_AUTO_AUTH }}"
# ENABLE_THIRD_PARTY_AUTH: "{{ EDXAPP_ENABLE_THIRD_PARTY_AUTH }}"
# AUTOMATIC_VERIFY_STUDENT_IDENTITY_FOR_TESTING: true
# ENABLE_PAYMENT_FAKE: true
# ENABLE_VIDEO_UPLOAD_PIPELINE: true
# SEPARATE_VERIFICATION_FROM_PAYMENT: true
# ENABLE_COMBINED_LOGIN_REGISTRATION: true
# ENABLE_CORS_HEADERS: true
# ENABLE_MOBILE_REST_API: true
# ENABLE_OAUTH2_PROVIDER: true
# LICENSING: true
# CERTIFICATES_HTML_VIEW: true
#
#EDXAPP_CORS_ORIGIN_WHITELIST:
# - "example.org"
# - "www.example.org"
# - "{{ ECOMMERCE_ECOMMERCE_URL_ROOT }}"
#
#EDXAPP_VIDEO_UPLOAD_PIPELINE:
# BUCKET: "your-video-bucket"
# ROOT_PATH: "edx-video-upload-pipeline/unprocessed"
#
#EDXAPP_CC_PROCESSOR_NAME: "CyberSource2"
#EDXAPP_CC_PROCESSOR:
# CyberSource2:
# PURCHASE_ENDPOINT: "/shoppingcart/payment_fake/"
# SECRET_KEY: ""
# ACCESS_KEY: ""
# PROFILE_ID: ""
#
#EDXAPP_PROFILE_IMAGE_BACKEND:
# class: storages.backends.s3boto.S3BotoStorage
# options:
# location: /{{ ansible_ec2_public_ipv4 }}
# bucket: your-profile-image-bucket
# custom_domain: yourcloudfrontdomain.cloudfront.net
# headers:
# Cache-Control: max-age-{{ EDXAPP_PROFILE_IMAGE_MAX_AGE }}
#EDXAPP_PROFILE_IMAGE_SECRET_KEY: "SECRET KEY HERE"
#
##TODO: remove once ansible_provision.sh stops sucking or is burned to the ground
#EDXAPP_PROFILE_IMAGE_MAX_AGE: 31536000
#
## send logs to s3
#AWS_S3_LOGS: true
#AWS_S3_LOGS_NOTIFY_EMAIL: devops+logs@example.com
#AWS_S3_LOGS_FROM_EMAIL: devops@example.com
#EDX_ANSIBLE_DUMP_VARS: true
#configuration_version: release
#CERTS_AWS_KEY: 'AWS SECRET KEY HERE'
#CERTS_AWS_ID: 'AWS KEY ID HERE'
#CERTS_REPO: "git@github.com:/edx/certificates"
#XSERVER_GIT_IDENTITY: |
# -----BEGIN RSA PRIVATE KEY-----
# ssh private key here
# -----END RSA PRIVATE KEY-----
#CERTS_GIT_IDENTITY: "{{ XSERVER_GIT_IDENTITY }}"
#EDXAPP_INSTALL_PRIVATE_REQUIREMENTS: true
#EDXAPP_USE_GIT_IDENTITY: true
#_local_git_identity: |
# -----BEGIN RSA PRIVATE KEY-----
# ssh private key here
# -----END RSA PRIVATE KEY-----
#
#EDXAPP_GIT_IDENTITY: "{{ _local_git_identity }}"
#
################################################################
##
## Analytics API Settings
##
#ANALYTICS_API_PIP_EXTRA_ARGS: "--use-wheel --no-index --find-links=http://edx-wheelhouse.s3-website-us-east-1.amazonaws.com/Ubuntu/precise/Python-2.7"
#ANALYTICS_API_GIT_IDENTITY: "{{ _local_git_identity }}"
#
#TESTCOURSES_EXPORTS:
# - github_url: "https://github.com/edx/demo-performance-course.git"
# install: "{{ ENABLE_PERFORMANCE_COURSE }}"
# course_id: "course-v1:DemoX+PERF101+course"
# - github_url: "https://github.com/edx/demo-test-course.git"
# install: "{{ ENABLE_DEMO_TEST_COURSE }}"
# course_id: "course-v1:edX+Test101+course"
# - github_url: "https://github.com/edx/edx-demo-course.git"
# install: "{{ ENABLE_EDX_DEMO_COURSE }}"
# course_id: "course-v1:edX+DemoX+Demo_Course"
#
#EDXAPP_FILE_UPLOAD_STORAGE_BUCKET_NAME: edxuploads-sandbox
#EDXAPP_AWS_STORAGE_BUCKET_NAME: edxuploads-sandbox
#
#EDXAPP_SESSION_COOKIE_SECURE: true
#
## Celery Flower configuration
## By default, we now turn on Google OAuth2 configuration
## This disables that on sandboxes so you can use flower to manage your
## local celery processes.
#FLOWER_AUTH_REGEX: ""
#
################################################################
##
## LOCUST Settings
##
#LOCUST_GIT_IDENTITY: "{{ _local_git_identity }}"
...@@ -14,6 +14,7 @@ ...@@ -14,6 +14,7 @@
EDXAPP_LMS_BASE: 127.0.0.1:8000 EDXAPP_LMS_BASE: 127.0.0.1:8000
EDXAPP_OAUTH_ENFORCE_SECURE: false EDXAPP_OAUTH_ENFORCE_SECURE: false
EDXAPP_LMS_BASE_SCHEME: http EDXAPP_LMS_BASE_SCHEME: http
ECOMMERCE_DJANGO_SETTINGS_MODULE: "ecommerce.settings.devstack"
roles: roles:
- common - common
- vhost - vhost
...@@ -25,10 +26,12 @@ ...@@ -25,10 +26,12 @@
- oraclejdk - oraclejdk
- elasticsearch - elasticsearch
- forum - forum
- ecommerce
- ecomworker
- { role: 'rabbitmq', rabbitmq_ip: '127.0.0.1' } - { role: 'rabbitmq', rabbitmq_ip: '127.0.0.1' }
- analytics_api
- insights
- local_dev - local_dev
- demo - demo
- analytics_api
- analytics_pipeline - analytics_pipeline
- insights
- oauth_client_setup - oauth_client_setup
...@@ -28,7 +28,7 @@ ...@@ -28,7 +28,7 @@
serial: 1 serial: 1
gather_facts: True gather_facts: True
vars: vars:
rabbitmq_clustered_hosts: RABBITMQ_CLUSTERED_HOSTS:
- "rabbit@cluster1" - "rabbit@cluster1"
- "rabbit@cluster2" - "rabbit@cluster2"
- "rabbit@cluster3" - "rabbit@cluster3"
......
...@@ -22,13 +22,13 @@ ...@@ -22,13 +22,13 @@
- mysql - mysql
- edxlocal - edxlocal
- mongo - mongo
- { role: 'rabbitmq', rabbitmq_ip: '127.0.0.1' }
- edxapp - edxapp
- oraclejdk - oraclejdk
- elasticsearch - elasticsearch
- forum - forum
- ecommerce - ecommerce
- ecomworker - ecomworker
- { role: 'rabbitmq', rabbitmq_ip: '127.0.0.1' }
- programs - programs
- role: notifier - role: notifier
NOTIFIER_DIGEST_TASK_INTERVAL: "5" NOTIFIER_DIGEST_TASK_INTERVAL: "5"
......
...@@ -32,10 +32,10 @@ ...@@ -32,10 +32,10 @@
- mysql - mysql
- edxlocal - edxlocal
- mongo - mongo
- { role: 'rabbitmq', rabbitmq_ip: '127.0.0.1' }
- edxapp - edxapp
- { role: 'edxapp', celery_worker: True } - { role: 'edxapp', celery_worker: True }
- demo - demo
- { role: 'rabbitmq', rabbitmq_ip: '127.0.0.1' }
- oraclejdk - oraclejdk
- elasticsearch - elasticsearch
- forum - forum
......
...@@ -49,7 +49,7 @@ if [ -n "$OPENEDX_RELEASE" ]; then ...@@ -49,7 +49,7 @@ if [ -n "$OPENEDX_RELEASE" ]; then
-e forum_version=$OPENEDX_RELEASE \ -e forum_version=$OPENEDX_RELEASE \
-e xqueue_version=$OPENEDX_RELEASE \ -e xqueue_version=$OPENEDX_RELEASE \
-e configuration_version=$OPENEDX_RELEASE \ -e configuration_version=$OPENEDX_RELEASE \
" $EXTRA_VARS"
CONFIG_VER=$OPENEDX_RELEASE CONFIG_VER=$OPENEDX_RELEASE
else else
CONFIG_VER="master" CONFIG_VER="master"
......
...@@ -115,7 +115,7 @@ fi ...@@ -115,7 +115,7 @@ fi
if [[ -z $ami ]]; then if [[ -z $ami ]]; then
if [[ $server_type == "full_edx_installation" ]]; then if [[ $server_type == "full_edx_installation" ]]; then
ami="ami-c8093ea2" ami="ami-e686bc8c"
elif [[ $server_type == "ubuntu_12.04" || $server_type == "full_edx_installation_from_scratch" ]]; then elif [[ $server_type == "ubuntu_12.04" || $server_type == "full_edx_installation_from_scratch" ]]; then
ami="ami-94be91fe" ami="ami-94be91fe"
elif [[ $server_type == "ubuntu_14.04(experimental)" ]]; then elif [[ $server_type == "ubuntu_14.04(experimental)" ]]; then
...@@ -277,11 +277,11 @@ PROGRAMS_URL_ROOT: "https://programs-${deploy_host}" ...@@ -277,11 +277,11 @@ PROGRAMS_URL_ROOT: "https://programs-${deploy_host}"
PROGRAMS_SOCIAL_AUTH_REDIRECT_IS_HTTPS: true PROGRAMS_SOCIAL_AUTH_REDIRECT_IS_HTTPS: true
CREDENTIALS_LMS_URL_ROOT: "https://${deploy_host}" CREDENTIALS_LMS_URL_ROOT: "https://${deploy_host}"
CREDENTIALS_URL_ROOT: "https://credentials-${deploy_host}" CREDENTIALS_DOMAIN: "credentials-${deploy_host}"
CREDENTIALS_URL_ROOT: "http://{{ CREDENTIALS_DOMAIN }}"
CREDENTIALS_SOCIAL_AUTH_REDIRECT_IS_HTTPS: true CREDENTIALS_SOCIAL_AUTH_REDIRECT_IS_HTTPS: true
COURSE_DISCOVERY_ECOMMERCE_API_URL: "https://ecommerce-${deploy_host}/api/v2" COURSE_DISCOVERY_ECOMMERCE_API_URL: "https://ecommerce-${deploy_host}/api/v2"
DISCOVERY_OAUTH_URL_ROOT: "https://${deploy_host}"
DISCOVERY_URL_ROOT: "https://discovery-${deploy_host}" DISCOVERY_URL_ROOT: "https://discovery-${deploy_host}"
DISCOVERY_SOCIAL_AUTH_REDIRECT_IS_HTTPS: true DISCOVERY_SOCIAL_AUTH_REDIRECT_IS_HTTPS: true
......
Vagrant
=======
Vagrant instances for local development and testing of edX instances and Ansible playbooks/roles.
- Vagrant stacks in ``base`` create new base boxes from scratch.
- Vagrant stacks in ``release`` download a base box with most requirements already installed. The instances then update themselves with the latest versions of the application code.
If you are a developer or designer, you should use the ``release`` stacks.
For creating test edX instances, there are two versions of the stack:
- ``fullstack`` is a production-like configuration running all the services on a single server. https://github.com/edx/configuration/wiki/edX-Production-Stack
- ``devstack`` is designed for local development. It uses the same system requirements as in production, but simplifies certain settings to make development more convenient. https://github.com/edx/configuration/wiki/edX-Developer-Stack
For testing Ansible playbooks and roles, there are two directories under the ``base`` directory:
- ``test_playbook`` is used for testing the playbooks in the Ansible configuration scripts.
- ``test_role`` is used for testing the roles in the Ansible configuration scripts.
To test an Ansible playbook using Vagrant:
- Create/modify a playbook under ``/playbooks`` (e.g. "foo.yml")
- Export its name as the value of the environment variable ``VAGRANT_ANSIBLE_PLAYBOOK``, like this:
- ``export VAGRANT_ANSIBLE_PLAYBOOK=foo``
- Execute ``vagrant up`` from within the ``test_playbook`` directory.
To test an Ansible role using Vagrant:
- Create/modify a role under ``/playbooks/roles`` (e.g. "bar-role")
- Export its name as the value of the environment variable ``VAGRANT_ANSIBLE_ROLE``, like this:
- ``export VAGRANT_ANSIBLE_ROLE=bar-role``
- Execute ``vagrant up`` from within the ``test_role`` directory.
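The steps above amount to setting one environment variable and running Vagrant. A sketch in scripted form (Vagrant is assumed installed, so the command is only assembled here, not executed; "foo" is the example playbook name from above):

```python
import os

def vagrant_test_env(playbook_name):
    # Export VAGRANT_ANSIBLE_PLAYBOOK, then run `vagrant up` in test_playbook.
    env = dict(os.environ)
    env["VAGRANT_ANSIBLE_PLAYBOOK"] = playbook_name
    return env, ["vagrant", "up"]

env, cmd = vagrant_test_env("foo")
print(env["VAGRANT_ANSIBLE_PLAYBOOK"], " ".join(cmd))  # foo vagrant up
```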
# -*- mode: ruby -*-
VAGRANTFILE_API_VERSION = '2'
MEMORY = 2048
PRIVATE_IP = ENV['VAGRANT_PRIVATE_IP'] || '192.168.33.15'
Vagrant.configure(VAGRANTFILE_API_VERSION) do |config|
config.vm.box = 'ubuntu/trusty64'
config.vm.network 'private_network', ip: PRIVATE_IP
config.vm.synced_folder '.', '/vagrant', disabled: true
config.vm.provider 'virtualbox' do |vb|
vb.memory = MEMORY
end
unless ENV['VAGRANT_NO_PORTS']
config.vm.network :forwarded_port, guest: 8080, host: 8080 # Jenkins
end
unless ENV['VAGRANT_JENKINS_LOCAL_VARS_FILE']
raise 'Please set VAGRANT_JENKINS_LOCAL_VARS_FILE environment variable. '\
'That variable should point to a file containing variable '\
'overrides for analytics_jenkins role. For required overrides '\
'see README.md in the analytics_jenkins role folder.'
end
config.vm.provision :ansible do |ansible|
ansible.playbook = '../../../playbooks/analytics-jenkins.yml'
ansible.verbose = 'vvvv'
ansible.extra_vars = ENV['VAGRANT_JENKINS_LOCAL_VARS_FILE']
end
end
...@@ -26,6 +26,7 @@ Vagrant.configure(VAGRANTFILE_API_VERSION) do |config| ...@@ -26,6 +26,7 @@ Vagrant.configure(VAGRANTFILE_API_VERSION) do |config|
if not ENV['VAGRANT_NO_PORTS'] if not ENV['VAGRANT_NO_PORTS']
config.vm.network :forwarded_port, guest: 8000, host: 8000 # LMS config.vm.network :forwarded_port, guest: 8000, host: 8000 # LMS
config.vm.network :forwarded_port, guest: 8001, host: 8001 # Studio config.vm.network :forwarded_port, guest: 8001, host: 8001 # Studio
config.vm.network :forwarded_port, guest: 8002, host: 8002 # Ecommerce
config.vm.network :forwarded_port, guest: 8003, host: 8003 # LMS for Bok Choy config.vm.network :forwarded_port, guest: 8003, host: 8003 # LMS for Bok Choy
config.vm.network :forwarded_port, guest: 8031, host: 8031 # Studio for Bok Choy config.vm.network :forwarded_port, guest: 8031, host: 8031 # Studio for Bok Choy
config.vm.network :forwarded_port, guest: 8120, host: 8120 # edX Notes Service config.vm.network :forwarded_port, guest: 8120, host: 8120 # edX Notes Service
...@@ -75,6 +76,8 @@ Vagrant.configure(VAGRANTFILE_API_VERSION) do |config| ...@@ -75,6 +76,8 @@ Vagrant.configure(VAGRANTFILE_API_VERSION) do |config|
certs_version: ENV['OPENEDX_RELEASE'], certs_version: ENV['OPENEDX_RELEASE'],
forum_version: ENV['OPENEDX_RELEASE'], forum_version: ENV['OPENEDX_RELEASE'],
xqueue_version: ENV['OPENEDX_RELEASE'], xqueue_version: ENV['OPENEDX_RELEASE'],
ANALYTICS_API_VERSION: ENV['OPENEDX_RELEASE'],
INSIGHTS_VERSION: ENV['OPENEDX_RELEASE'],
} }
end end
if ENV['CONFIGURATION_VERSION'] if ENV['CONFIGURATION_VERSION']
...@@ -83,5 +86,8 @@ Vagrant.configure(VAGRANTFILE_API_VERSION) do |config| ...@@ -83,5 +86,8 @@ Vagrant.configure(VAGRANTFILE_API_VERSION) do |config|
if ENV['EDX_PLATFORM_VERSION'] if ENV['EDX_PLATFORM_VERSION']
ansible.extra_vars['edx_platform_version'] = ENV['EDX_PLATFORM_VERSION'] ansible.extra_vars['edx_platform_version'] = ENV['EDX_PLATFORM_VERSION']
end end
if ENV['ECOMMERCE_VERSION']
ansible.extra_vars['ECOMMERCE_VERSION'] = ENV['ECOMMERCE_VERSION']
end
end end
end end
...@@ -24,6 +24,8 @@ if [ -n "$OPENEDX_RELEASE" ]; then ...@@ -24,6 +24,8 @@ if [ -n "$OPENEDX_RELEASE" ]; then
-e certs_version=$OPENEDX_RELEASE \ -e certs_version=$OPENEDX_RELEASE \
-e forum_version=$OPENEDX_RELEASE \ -e forum_version=$OPENEDX_RELEASE \
-e xqueue_version=$OPENEDX_RELEASE \ -e xqueue_version=$OPENEDX_RELEASE \
-e ANALYTICS_API_VERSION=$OPENEDX_RELEASE \
-e INSIGHTS_VERSION=$OPENEDX_RELEASE \
" "
CONFIG_VER=$OPENEDX_RELEASE CONFIG_VER=$OPENEDX_RELEASE
# Need to ensure that the configuration repo is updated # Need to ensure that the configuration repo is updated
...@@ -44,8 +46,11 @@ MOUNT_DIRS = { ...@@ -44,8 +46,11 @@ MOUNT_DIRS = {
:edx_platform => {:repo => "edx-platform", :local => "/edx/app/edxapp/edx-platform", :owner => "edxapp"}, :edx_platform => {:repo => "edx-platform", :local => "/edx/app/edxapp/edx-platform", :owner => "edxapp"},
:themes => {:repo => "themes", :local => "/edx/app/edxapp/themes", :owner => "edxapp"}, :themes => {:repo => "themes", :local => "/edx/app/edxapp/themes", :owner => "edxapp"},
:forum => {:repo => "cs_comments_service", :local => "/edx/app/forum/cs_comments_service", :owner => "forum"}, :forum => {:repo => "cs_comments_service", :local => "/edx/app/forum/cs_comments_service", :owner => "forum"},
:ecommerce => {:repo => "ecommerce", :local => "/edx/app/ecommerce/ecommerce", :owner => "ecommerce"},
:ecommerce_worker => {:repo => "ecommerce-worker", :local => "/edx/app/ecommerce_worker/ecommerce_worker", :owner => "ecommerce_worker"},
:insights => {:repo => "insights", :local => "/edx/app/insights/edx_analytics_dashboard", :owner => "insights"}, :insights => {:repo => "insights", :local => "/edx/app/insights/edx_analytics_dashboard", :owner => "insights"},
:analytics_api => {:repo => "analytics_api", :local => "/edx/app/analytics_api/analytics_api", :owner => "analytics_api"}, :analytics_api => {:repo => "analytics_api", :local => "/edx/app/analytics_api/analytics_api", :owner => "analytics_api"},
:analytics_pipeline => {:repo => "edx-analytics-pipeline", :local => "/edx/app/analytics_pipeline/analytics_pipeline", :owner => "hadoop"},
# This src directory won't have useful permissions. You can set them from the # This src directory won't have useful permissions. You can set them from the
# vagrant user in the guest OS. "sudo chmod 0777 /edx/src" is useful. # vagrant user in the guest OS. "sudo chmod 0777 /edx/src" is useful.
:src => {:repo => "src", :local => "/edx/src", :owner => "root"}, :src => {:repo => "src", :local => "/edx/src", :owner => "root"},
...@@ -60,14 +65,17 @@ end ...@@ -60,14 +65,17 @@ end
# a Vagrant box from the internet. # a Vagrant box from the internet.
openedx_releases = { openedx_releases = {
"named-release/dogwood.rc" => { "named-release/dogwood.rc" => {
:name => "analyticstack", :file => "analyticstack.box", :name => "analyticstack", :file => "dogwood-analyticstack-2016-03-15.box",
},
"named-release/dogwood.1" => {
:name => "analyticstack", :file => "dogwood-analyticstack-2016-03-15.box",
},
"named-release/dogwood" => {
:name => "analyticstack", :file => "dogwood-analyticstack-2016-03-15.box",
},
}
openedx_releases.default = {
:name => "analyticstack", :file => "analyticstack-latest.box",
}
rel = ENV['OPENEDX_RELEASE']
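The release-to-box lookup above leans on Ruby's `Hash#default`: any `OPENEDX_RELEASE` value without an explicit entry (including an unset variable, which yields `nil`) falls back to the default box. A minimal sketch of that behavior, outside the Vagrant context:

```ruby
# Sketch of the release-to-box lookup used in the Vagrantfile above.
# The hash entries mirror the analyticstack mapping; the surrounding
# Vagrant configuration is omitted.
openedx_releases = {
  "named-release/dogwood.1" => {
    :name => "analyticstack", :file => "dogwood-analyticstack-2016-03-15.box",
  },
}
# Hash#default is returned for ANY missing key, including nil.
openedx_releases.default = {
  :name => "analyticstack", :file => "analyticstack-latest.box",
}

rel = ENV['OPENEDX_RELEASE']          # nil when the variable is unset
openedx_release = openedx_releases[rel]
puts openedx_release[:file]
```

With `OPENEDX_RELEASE=named-release/dogwood.1` this selects the pinned 2016-03-15 box; any other (or missing) value falls through to the latest box.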
@@ -84,6 +92,7 @@ Vagrant.configure(VAGRANTFILE_API_VERSION) do |config|
if not ENV['VAGRANT_NO_PORTS']
config.vm.network :forwarded_port, guest: 8000, host: 8000 # LMS
config.vm.network :forwarded_port, guest: 8001, host: 8001 # Studio
config.vm.network :forwarded_port, guest: 8002, host: 8002 # Ecommerce
config.vm.network :forwarded_port, guest: 8003, host: 8003 # LMS for Bok Choy
config.vm.network :forwarded_port, guest: 8031, host: 8031 # Studio for Bok Choy
config.vm.network :forwarded_port, guest: 8120, host: 8120 # edX Notes Service
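Each forwarded port above maps the same number on guest and host, so every service is reachable on localhost at its in-VM port. A hypothetical data-driven restatement of that list (the real Vagrantfile declares each port on its own line):

```ruby
# Hypothetical restructuring of the forwarded-port list as data; the
# service names and numbers come from the Vagrantfile hunk above.
ports = {
  8000 => "LMS",
  8001 => "Studio",
  8002 => "Ecommerce",
  8003 => "LMS for Bok Choy",
  8031 => "Studio for Bok Choy",
  8120 => "edX Notes Service",
}
ports.each do |port, service|
  # Inside a Vagrantfile this would be:
  # config.vm.network :forwarded_port, guest: port, host: port
  puts "#{port} -> #{service}"
end
```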
...
@@ -60,8 +60,13 @@ end
# to a name and a file path, which are used for retrieving
# a Vagrant box from the internet.
openedx_releases = {
# Note: the devstack and fullstack boxes differ, because devstack had an issue
# that needed fixing, but it didn't affect fullstack.
"named-release/dogwood.rc" => {
:name => "dogwood-devstack-2016-03-09", :file => "dogwood-devstack-2016-03-09.box",
},
"named-release/dogwood.1" => {
:name => "dogwood-devstack-2016-03-09", :file => "dogwood-devstack-2016-03-09.box",
},
"named-release/dogwood" => {
:name => "dogwood-devstack-rc2", :file => "20151221-dogwood-devstack-rc2.box",
@@ -76,7 +81,7 @@ openedx_releases = {
# },
}
openedx_releases.default = {
:name => "devstack-periodic-2016-03-08", :file => "devstack-periodic-2016-03-08.box",
}
rel = ENV['OPENEDX_RELEASE']
...
@@ -9,9 +9,14 @@ CPU_COUNT = 2
# to a name and a file path, which are used for retrieving
# a Vagrant box from the internet.
openedx_releases = {
# Note: the devstack and fullstack boxes differ, because devstack had an issue
# that needed fixing, but it didn't affect fullstack.
"named-release/dogwood" => {
:name => "dogwood-fullstack-rc2", :file => "20151221-dogwood-fullstack-rc2.box",
},
"named-release/dogwood.1" => {
:name => "dogwood-fullstack-rc2", :file => "20151221-dogwood-fullstack-rc2.box",
},
"named-release/dogwood.rc" => {
:name => "dogwood-fullstack-rc2", :file => "20151221-dogwood-fullstack-rc2.box",
},
...