Commit 4d31b18c by Nico van Niekerk

Merge remote-tracking branch 'upstream/master' into proversity/NVN-update-script-extra-vars

parents fbf86e63 60801cd7
......@@ -8,4 +8,4 @@ Make sure that the following steps are done before merging:
- [ ] Update the appropriate internal repo (be sure to update for all our environments)
- [ ] If you are updating a secure value rather than an internal one, file a DEVOPS ticket with details.
- [ ] Add an entry to the CHANGELOG.
- [ ] Have you performed the proper testing specified on the [Ops Ansible Testing Checklist](https://openedx.atlassian.net/wiki/display/EdxOps/Ops+Ansible+Testing+Checklist)?
- [ ] If you are making a complicated change, have you performed the proper testing specified on the [Ops Ansible Testing Checklist](https://openedx.atlassian.net/wiki/display/EdxOps/Ops+Ansible+Testing+Checklist)? Adding a new variable does not require the full list (although testing on a sandbox is a great idea to ensure it links with your downstream code changes).
......@@ -18,7 +18,7 @@ addons:
before_install:
- sudo apt-get -y update
- sudo apt-get -y install -o Dpkg::Options::="--force-confold" docker-engine
- sudo apt-get -y install -o Dpkg::Options::="--force-confold" docker-ce
install:
- "pip install --allow-all-external -r requirements.txt"
......
......@@ -57,3 +57,4 @@ Bill DeRusha <bill@edx.org>
Jillian Vogel <jill@opencraft.com>
Zubair Afzal <zubair.afzal@arbisoft.com>
Kyle McCormick <kylemccor@gmail.com>
Muzaffar Yousaf <muzaffar@edx.org>
- Role: discovery
- Added `DISCOVERY_REPOS` to allow configuring discovery repository details.
- Role: edx_django_service
- Made the keys `edx_django_service_git_protocol`, `edx_django_service_git_domain`, and `edx_django_service_git_path` of `edx_django_service_repos` all individually configurable.
- Role: discovery
- Updated LANGUAGE_CODE to generic English. Added configuration for the multilingual language package django-parler.
- Role: edxapp
- Added `EDXAPP_EXTRA_MIDDLEWARE_CLASSES` for configuring additional middleware logic.
- Role: discovery
- Added `OPENEXCHANGERATES_API_KEY` for retrieving currency exchange rates.
- Role: edxapp
- Added `EDXAPP_SCORM_PKG_STORAGE_DIR`, with default value as it was in the server template.
- Added `EDXAPP_SCORM_PLAYER_LOCAL_STORAGE_ROOT`, with default value as it was in the server template.
- Role: edxapp
- Added `EDXAPP_ENTERPRISE_TAGLINE` for customized header taglines for different enterprises.
- Added `EDXAPP_PLATFORM_DESCRIPTION` used to describe the specific Open edX platform.
- Role: edxapp
- Added `ENTERPRISE_SUPPORT_URL` variable used by the LMS.
- Role: edxapp
- Added OAUTH_DELETE_EXPIRED to enable automatic deletion of edx-django-oauth2-provider grants, access tokens, and refresh tokens as they are consumed. This will not do a bulk delete of existing rows.
- Role: mongo_3_2
- Added role for mongo 3.2, not yet in use.
- Removed MONGO_CLUSTERED variable. In this role mongo replication is always configured, even if there is only one node.
- Role: edxapp
- Added creation of enterprise_worker user to provisioning. This user is used by the edx-enterprise package when making API requests to Open edX IDAs.
- Role: neo4j
- Increased heap and page cache sizes for neo4j.
- Role: neo4j
- Updated neo4j to 3.2.2
- Removed authentication requirement for neo4j
- Role: forum
- Added `FORUM_REBUILD_INDEX` to rebuild the ElasticSearch index from the database, when enabled. Default: `False`.
- Role: nginx
- Added `NGINX_EDXAPP_CMS_APP_EXTRA`, which makes it possible to add custom settings to the site configuration for Studio.
- Added `NGINX_EDXAPP_LMS_APP_EXTRA`, which makes it possible to add custom settings to the site configuration for the LMS.
- Role: edxapp
- Let `confirm_email` in `EDXAPP_REGISTRATION_EXTRA_FIELDS` default to `"hidden"`.
- Let `terms_of_service` in `EDXAPP_REGISTRATION_EXTRA_FIELDS` default to `"hidden"`.
- Role: ecommerce
- Added ECOMMERCE_LANGUAGE_COOKIE_NAME, the name of the cookie the ecommerce Django app checks to determine the language preference.
- Role: neo4j
- Enabled splunk forwarding for neo4j logs.
- Increased the maximum number of open files to 40000, as suggested by neo4j.
- Updated the java build that neo4j uses to run.
- Role: edxapp
- Set the default value for EDXAPP_POLICY_CHANGE_GRADES_ROUTING_KEY to
'edx.lms.core.default'.
- Role: edxapp
- Set the default value for EDXAPP_BULK_EMAIL_ROUTING_KEY_SMALL_JOBS to
'edx.lms.core.low'.
- Role: jenkins_master
- Updated pinned use of JDK7 in Jenkins installs to the default JDK version from role `oraclejdk`.
- Role: notifier
- Added `NOTIFIER_DATABASE_ENGINE`, `NOTIFIER_DATABASE_NAME`, `NOTIFIER_DATABASE_USER`, `NOTIFIER_DATABASE_PASSWORD`, `NOTIFIER_DATABASE_HOST`, and `NOTIFIER_DATABASE_PORT` to be able to configure the `notifier` service to use a database engine other than sqlite. Defaults to local sqlite; see the sketch after this list.
- Deprecated: `NOTIFIER_DB_DIR`: Please use `NOTIFIER_DATABASE_NAME` instead.
- Role: elasticsearch
- Replaced `elasticsearch_apt_key` and `elastic_search_apt_keyserver` with `elasticsearch_apt_key_url`
- Updated elasticsearch version to 1.5.0
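
As a sketch of the notifier settings introduced above (the engine string is standard Django; host and credentials are placeholders, not values from this change):

```yaml
# Point the notifier at MySQL instead of the local sqlite default.
NOTIFIER_DATABASE_ENGINE: 'django.db.backends.mysql'
NOTIFIER_DATABASE_NAME: 'notifier'
NOTIFIER_DATABASE_USER: 'notifier001'
NOTIFIER_DATABASE_PASSWORD: 'not-a-real-password'
NOTIFIER_DATABASE_HOST: 'db.example.com'
NOTIFIER_DATABASE_PORT: '3306'
```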
......@@ -20,6 +97,19 @@
- Added the EDXAPP_ACTIVATION_EMAIL_SUPPORT_LINK URL with default value `''`.
- Added the EDXAPP_PASSWORD_RESET_SUPPORT_LINK URL with default value `''`.
- Role: nginx
- Modified `server-template.j2` to be more accessible and configurable (an example override sketch follows this list).
- The template should contain the `lang` attribute in the HTML tag.
- If the loaded image has some meaning, such as a logo, it should have the `alt` attribute.
- Since there is no relevant text content after the header 1 (h1), the next element cannot be another header (h2); it was changed to a paragraph styled with the header 2 CSS.
- Added `NGINX_SERVER_ERROR_IMG_ALT` with default value as it was in the server template
- Added `NGINX_SERVER_ERROR_LANG` with default value `en`
- Added `NGINX_SERVER_ERROR_STYLE_H1` with default value as it was in the server template
- Added `NGINX_SERVER_ERROR_STYLE_P_H2` with default value as it was in the server template
- Added `NGINX_SERVER_ERROR_STYLE_P` with default value as it was in the server template
- Added `NGINX_SERVER_ERROR_STYLE_DIV` with default value as it was in the server template
- Role: edxapp
- Added the EDXAPP_SHOW_HEADER_LANGUAGE_SELECTOR feature flag with default value `false`
- Added the EDXAPP_SHOW_FOOTER_LANGUAGE_SELECTOR feature flag with default value `false`
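
A minimal override sketch for the nginx error-page variables above (the alt text and style values are hypothetical):

```yaml
# Localize and re-brand the static nginx error page.
NGINX_SERVER_ERROR_LANG: 'en'
NGINX_SERVER_ERROR_IMG_ALT: 'Open edX logo'
NGINX_SERVER_ERROR_STYLE_H1: 'font-size: 2em; color: #333;'
```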
......@@ -284,3 +374,52 @@
- Role: insights
- Removed `SUPPORT_EMAIL` setting from `INSIGHTS_CONFIG`, as it was replaced by `SUPPORT_URL`.
- Role: insights
- Added `INSIGHTS_DOMAIN` to configure the domain Insights is deployed on
- Added `INSIGHTS_CLOUDFRONT_DOMAIN` to configure the domain static files can be served from
- Added `INSIGHTS_CORS_ORIGIN_WHITELIST_EXTRA` to configure allowing CORS on domains other than the `INSIGHTS_DOMAIN`
- Role: edxapp
- Added `EDXAPP_VIDEO_IMAGE_SETTINGS` to configure S3-backed video images.
- Role: edxapp
- Added `EDXAPP_BASE_COOKIE_DOMAIN` for sharing cookies across edx domains.
- Role: insights
- Removed `bower install` task
- Replaced r.js build task with webpack build task
- Removed `./manage.py compress` task
- Role: insights
- Moved `THEME_SCSS` from `INSIGHTS_CONFIG` to `insights_environment`
- Role: analytics_api
- Added a number of `ANALYTICS_API_DEFAULT_*` and `ANALYTICS_API_REPORTS_*` variables to allow more selective specification of database parameters (rather than overriding the whole structure); see the sketch after this list.
- Role: edxapp
- Removed EDXAPP_ANALYTICS_API_KEY, EDXAPP_ANALYTICS_SERVER_URL, EDXAPP_ANALYTICS_DATA_TOKEN, and EDXAPP_ANALYTICS_DATA_URL, since they are old and no longer consumed.
- Role: edxapp
- Added `PASSWORD_MIN_LENGTH` for password minimum length validation on reset page.
- Added `PASSWORD_MAX_LENGTH` for password maximum length validation on reset page.
- Role: credentials
- Replaced `CREDENTIALS_OAUTH_URL_ROOT` with `COMMON_OAUTH_URL_ROOT` from `common_vars`
- Replaced `CREDENTIALS_OIDC_LOGOUT_URL` with `COMMON_OAUTH_LOGOUT_URL` from `common_vars`
- Replaced `CREDENTIALS_JWT_AUDIENCE` with `COMMON_JWT_AUDIENCE` from `common_vars`
- Replaced `CREDENTIALS_JWT_ISSUER` with `COMMON_JWT_ISSUER` from `common_vars`
- Replaced `CREDENTIALS_JWT_SECRET_KEY` with `COMMON_JWT_SECRET_KEY` from `common_vars`
- Replaced `CREDENTIALS_SOCIAL_AUTH_EDX_OIDC_ISSUER` with `COMMON_JWT_ISSUER` from `common_vars`
- Role: ecommerce
- Replaced `ECOMMERCE_OAUTH_URL_ROOT` with `COMMON_OAUTH_URL_ROOT` from `common_vars`
- Replaced `ECOMMERCE_OIDC_LOGOUT_URL` with `COMMON_OAUTH_LOGOUT_URL` from `common_vars`
- Replaced `ECOMMERCE_JWT_SECRET_KEY` with `COMMON_JWT_SECRET_KEY` from `common_vars`
- Replaced `ECOMMERCE_SOCIAL_AUTH_EDX_OIDC_ISSUER` with `COMMON_JWT_ISSUER` from `common_vars`
- Role: edxapp
- Added `EDXAPP_VIDEO_TRANSCRIPTS_SETTINGS` to configure S3-backed video transcripts.
- Removed unused `EDXAPP_BOOK_URL` setting
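
For the `ANALYTICS_API_DEFAULT_*`/`ANALYTICS_API_REPORTS_*` entry above, a selective override might look like the following sketch (hostname and password are placeholders); parameters that are not overridden keep their role defaults:

```yaml
# Move only the reports database to a dedicated host.
ANALYTICS_API_REPORTS_HOST: 'reports-db.example.com'
ANALYTICS_API_REPORTS_PASSWORD: 'not-a-real-password'
```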
Do not use GitHub issues for Open edX support. The mailing list and Slack channels are explained here: http://open.edx.org/getting-help. If it turns out there's a bug in the configuration scripts, we can open an issue or PR here.
FROM edxops/precise-common:latest
FROM edxops/xenial-common:latest
MAINTAINER edxops
ADD . /edx/app/edx_ansible/edx_ansible
......
FROM selenium/standalone-chrome-debug:3.4.0-einsteinium
MAINTAINER edxops
USER root
# Install a password generator
RUN apt-get update -qqy \
&& apt-get -qqy install \
pwgen \
&& rm -rf /var/lib/apt/lists/* /var/cache/apt/*
USER seluser
CMD export VNC_PASSWORD=$(pwgen -s -1 $(shuf -i 10-20 -n 1)) \
&& x11vnc -storepasswd $VNC_PASSWORD /home/seluser/.vnc/passwd \
&& echo "Chrome VNC password: $VNC_PASSWORD" \
&& /opt/bin/entry_point.sh
EXPOSE 4444 5900
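# The generated password only appears in the container log; to recover it
# later (container name is hypothetical):
#   docker logs chrome-debug | grep "Chrome VNC password"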
......@@ -2,8 +2,7 @@ EDXAPP_LMS_BASE: 'edx.devstack.lms:18000'
EDXAPP_LMS_ROOT_URL: 'http://{{ EDXAPP_LMS_BASE }}'
EDXAPP_LMS_PUBLIC_ROOT_URL: 'http://localhost:18000'
COMMON_OAUTH_LOGOUT_URL: '{{ EDXAPP_LMS_PUBLIC_ROOT_URL }}/logout'
COMMON_OAUTH_PUBLIC_URL_ROOT: '{{ EDXAPP_LMS_PUBLIC_ROOT_URL }}/oauth2'
COMMON_OAUTH_BASE_URL: '{{ EDXAPP_LMS_PUBLIC_ROOT_URL }}'
COMMON_OAUTH_URL_ROOT: '{{ EDXAPP_LMS_ROOT_URL }}/oauth2'
COMMON_JWT_AUDIENCE: 'lms-key'
COMMON_JWT_SECRET_KEY: 'lms-secret'
......@@ -12,4 +12,6 @@ ECOMMERCE_DATABASES:
HOST: 'db.{{ DOCKER_TLD }}'
PORT: '3306'
ATOMIC_REQUESTS: true
CONN_MAX_AGE: 60
\ No newline at end of file
CONN_MAX_AGE: 60
ECOMMERCE_MEMCACHE: ['edx.devstack.memcached:11211']
......@@ -29,6 +29,7 @@ RUN sudo /edx/app/edx_ansible/venvs/edx_ansible/bin/ansible-playbook edxapp.yml
--extra-vars=edx_platform_version=${OPENEDX_RELEASE} \
--extra-vars="@/ansible_overrides.yml" \
--extra-vars="@/devstack.yml" \
--extra-vars="@/devstack/ansible_overrides.yml"
--extra-vars="@/devstack/ansible_overrides.yml" \
&& rm -rf /edx/app/edxapp/edx-platform
EXPOSE 18000 18010
FROM selenium/standalone-firefox-debug:3.4.0-einsteinium
MAINTAINER edxops
USER root
# Install a password generator and the codecs needed to support mp4 video in Firefox
RUN apt-get update -qqy \
&& apt-get -qqy install \
gstreamer1.0-libav \
pwgen \
&& rm -rf /var/lib/apt/lists/* /var/cache/apt/*
USER seluser
CMD export VNC_PASSWORD=$(pwgen -s -1 $(shuf -i 10-20 -n 1)) \
&& x11vnc -storepasswd $VNC_PASSWORD /home/seluser/.vnc/passwd \
&& echo "Firefox VNC password: $VNC_PASSWORD" \
&& /opt/bin/entry_point.sh
EXPOSE 4444 5900
......@@ -9,3 +9,5 @@ FORUM_ELASTICSEARCH_HOST: "es.{{ FLOCK_TLD }}"
FORUM_USE_TCP: "true"
FORUM_RACK_ENV: "staging"
FORUM_SINATRA_ENV: "staging"
devstack: "true"
# Build using: docker build -f Dockerfile.gocd-agent -t gocd-agent .
# https://hub.docker.com/r/gocd/gocd-agent-deprecated/
FROM gocd/gocd-agent-deprecated:17.1.0
FROM gocd/gocd-agent-deprecated:17.7.0
LABEL version="0.02" \
description="This custom go-agent docker file installs additional requirements for the edx pipeline"
......
FROM edxops/precise-common
FROM edxops/xenial-common
MAINTAINER edxops
USER root
# Fix selinux issue with useradd on 12.04
RUN curl http://salilab.org/~ben/libselinux1_2.1.0-5.1ubuntu1_amd64.deb -o /tmp/libselinux1_2.1.0-5.1ubuntu1_amd64.deb
RUN dpkg -i /tmp/libselinux1_2.1.0-5.1ubuntu1_amd64.deb
RUN apt-get update
ADD . /edx/app/edx_ansible/edx_ansible
COPY docker/build/xqwatcher/ansible_overrides.yml /
......
FROM edxops/precise-common:latest
FROM edxops/xenial-common:latest
MAINTAINER edxops
USER root
......
FROM edxops/xenial-common:latest
MAINTAINER edxops
ADD . /edx/app/edx_ansible/edx_ansible
COPY docker/build/mongo/ansible_overrides.yml /
WORKDIR /edx/app/edx_ansible/edx_ansible/docker/plays
RUN /edx/app/edx_ansible/venvs/edx_ansible/bin/ansible-playbook mongo.yml \
-i '127.0.0.1,' -c local \
-t 'install' \
-e@/ansible_overrides.yml
WORKDIR /edx/app
EXPOSE 27017
FROM edxops/precise-common:latest
FROM edxops/xenial-common:latest
MAINTAINER edxops
USER root
......
FROM ubuntu:precise
MAINTAINER edxops
# Set locale to UTF-8 which is not the default for docker.
# See the links for details:
# http://jaredmarkell.com/docker-and-locales/
# https://github.com/docker-library/python/issues/13
# https://github.com/docker-library/python/pull/14/files
ENV LANG C.UTF-8
ENV ANSIBLE_REPO="https://github.com/edx/ansible"
ENV CONFIGURATION_REPO="https://github.com/edx/configuration.git"
ENV CONFIGURATION_VERSION="master"
ADD util/install/ansible-bootstrap.sh /tmp/ansible-bootstrap.sh
RUN chmod +x /tmp/ansible-bootstrap.sh
RUN /tmp/ansible-bootstrap.sh
FROM edxops/trusty-common:latest
FROM edxops/xenial-common:latest
MAINTAINER edxops
USER root
......
......@@ -6,7 +6,12 @@ MAINTAINER edxops
# http://jaredmarkell.com/docker-and-locales/
# https://github.com/docker-library/python/issues/13
# https://github.com/docker-library/python/pull/14/files
ENV LANG C.UTF-8
RUN apt-get update &&\
apt-get install -y locales &&\
locale-gen en_US.UTF-8
ENV LANG en_US.UTF-8
ENV LANGUAGE en_US:en
ENV LC_ALL en_US.UTF-8
ENV ANSIBLE_REPO="https://github.com/edx/ansible"
ENV CONFIGURATION_REPO="https://github.com/edx/configuration.git"
......
FROM edxops/precise-common:latest
FROM edxops/xenial-common:latest
MAINTAINER edxops
USER root
......
#
# Single Docker Compose cluster that will eventually start
# all edX services in a single flock of coordinated containers
#
# This work is currently experimental and a number of services
# are missing entirely. Containers that are present will not
# currently work without manual steps. We are working on
# addressing that.
#
# When running compose you must pass in two environment variables
#
# DOCKER_EDX_ROOT which points to the directory into which you check out
# your edX source code. For example, assuming the following directory
# structure under /home/me
#
# |-- edx-src
# | |-- discovery
# | |-- cs_comments_service
# | |-- edx_discovery
# | |-- edx-platform
# | |-- xqueue
# you would define DOCKER_EDX_ROOT="/home/me/edx-src"
#
# DOCKER_DATA_ROOT is the location on your host machine where Docker
# guests can access your local filesystem for storing persistent data
# files, say MongoDB or MySQL data files.
#
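# Example invocation (paths are hypothetical):
#   DOCKER_EDX_ROOT=/home/me/edx-src DOCKER_DATA_ROOT=/home/me/edx-data docker-compose up
#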
db:
container_name: db
image: mysql:5.6
environment:
- MYSQL_ROOT_PASSWORD='password'
#- MYSQL_DATABASE=''
- MYSQL_USER='migrate'
- MYSQL_PASSWORD='password'
volumes:
- ${DOCKER_DATA_ROOT}/mysql/data:/data
ports:
- 3306:3306
mongo:
container_name: mongo
image: mongo:3.0
volumes:
- ${DOCKER_DATA_ROOT}/mongo/data:/data
ports:
- 27017:27017
# Need to build our own for ES 0.9
es:
container_name: es
image: edxops/elasticsearch:v1
volumes:
- ${DOCKER_DATA_ROOT}/elasticsearch/data:/data
ports:
- 9100:9100
- 9200:9200
- 9300:9300
memcache:
container_name: memcache
image: memcached:1.4.24
volumes:
- ${DOCKER_DATA_ROOT}/memcache/data:/data
ports:
- 11211:11211
nginx:
container_name: nginx
image: edxops/nginx:v1
ports:
- 80:80
- 443:443
rabbitmq:
container_name: rabbitmq
image: rabbitmq:3.5.3
volumes:
- ${DOCKER_DATA_ROOT}/rabbitmq/data:/data
ports:
- 5672:5672
forum:
container_name: forum
# Image built from the opencraft fork as it fixes
# an auth bug. Update when the change merges
# upstream
image: edxops/forums:opencraft-v2
volumes:
- ${DOCKER_EDX_ROOT}/cs_comments_service:/edx/app/forum/cs_comments_service
ports:
- 4567:4567
xqueue:
container_name: xqueue
image: edxops/xqueue:v1
ports:
- 8040:8040
- 18040:18040
volumes:
- ${DOCKER_EDX_ROOT}/xqueue:/edx/app/edxapp/xqueue
lms:
container_name: lms
image: edxops/edxapp:v2
ports:
- 8000:8000
- 18000:18000
volumes:
- ${DOCKER_EDX_ROOT}/edx-platform:/edx/app/edxapp/edx-platform
cms:
container_name: cms
image: edxops/edxapp:v2
ports:
- 8010:8010
- 18010:18010
volumes:
- ${DOCKER_EDX_ROOT}/edx-platform:/edx/app/edxapp/edx-platform
- name: Deploy MongoDB 3.2
hosts: all
become: True
gather_facts: True
roles:
- common_vars
- docker
- mongo_3_2
import os
import time
from ansible import utils
try:
import prettytable
except ImportError:
prettytable = None
try:
import hipchat
except ImportError:
hipchat = None
from ansible.plugins.callback import CallbackBase
class CallbackModule(CallbackBase):
"""Send status updates to a HipChat channel during playbook execution.
This plugin makes use of the following environment variables:
HIPCHAT_TOKEN (required): HipChat API token
HIPCHAT_ROOM (optional): HipChat room to post in. Default: ansible
HIPCHAT_FROM (optional): Name to post as. Default: ansible
HIPCHAT_NOTIFY (optional): Add notify flag to important messages ("true" or "false"). Default: true
HIPCHAT_MSG_PREFIX (optional): Prefix to add to all hipchat messages
HIPCHAT_MSG_COLOR (optional): Color for hipchat messages
HIPCHAT_CONDENSED (optional): Condense the task summary output
Requires:
prettytable
"""
def __init__(self):
self.enabled = "HIPCHAT_TOKEN" in os.environ
if not self.enabled:
return
# make sure we got our imports
if not hipchat:
raise ImportError(
"The hipchat plugin requires the hipchat Python module, "
"which is not installed or was not found."
)
if not prettytable:
raise ImportError(
"The hipchat plugin requires the prettytable Python module, "
"which is not installed or was not found."
)
self.start_time = time.time()
self.task_report = []
self.last_task = None
self.last_task_changed = False
self.last_task_count = 0
self.last_task_delta = 0
self.last_task_start = time.time()
# Environment variables are strings, so compare against 'true' rather than a boolean.
self.condensed_task_report = (os.getenv('HIPCHAT_CONDENSED', 'true').lower() == 'true')
self.room = os.getenv('HIPCHAT_ROOM', 'ansible')
self.from_name = os.getenv('HIPCHAT_FROM', 'ansible')
self.allow_notify = (os.getenv('HIPCHAT_NOTIFY') != 'false')
try:
self.hipchat_conn = hipchat.HipChat(token=os.getenv('HIPCHAT_TOKEN'))
except Exception as e:
utils.warning("Unable to connect to hipchat: {}".format(e))
self.hipchat_msg_prefix = os.getenv('HIPCHAT_MSG_PREFIX', '')
self.hipchat_msg_color = os.getenv('HIPCHAT_MSG_COLOR', '')
self.printed_playbook = False
self.playbook_name = None
def _send_hipchat(self, message, room=None, from_name=None, color=None, message_format='text'):
if not room:
room = self.room
if not from_name:
from_name = self.from_name
if not color:
color = self.hipchat_msg_color
try:
self.hipchat_conn.message_room(room, from_name, message, color=color, message_format=message_format)
except Exception as e:
utils.warning("Could not submit message to hipchat: {}".format(e))
def _flush_last_task(self):
if self.last_task:
delta = time.time() - self.last_task_start
self.task_report.append(dict(
changed=self.last_task_changed,
count=self.last_task_count,
delta="{:0>.1f}".format(self.last_task_delta),
task=self.last_task))
self.last_task_count = 0
self.last_task_changed = False
self.last_task = None
self.last_task_delta = 0
def _process_message(self, msg, msg_type='STATUS'):
if msg_type == 'OK' and self.last_task:
if msg.get('changed', True):
self.last_task_changed = True
if msg.get('delta', False):
(hour, minute, sec) = msg['delta'].split(':')
total = float(hour) * 3600 + float(minute) * 60 + float(sec)  # 3600 seconds per hour
self.last_task_delta += total
self.last_task_count += 1
else:
self._flush_last_task()
if msg_type == 'TASK_START':
self.last_task = msg
self.last_task_start = time.time()
elif msg_type == 'FAILED':
self.last_task_start = time.time()
if 'msg' in msg:
self._send_hipchat('/code {}: The ansible run returned the following error:\n\n {}'.format(
self.hipchat_msg_prefix, msg['msg']), color='red', message_format='text')
else:
# move forward the last task start time
self.last_task_start = time.time()
def on_any(self, *args, **kwargs):
pass
def runner_on_failed(self, host, res, ignore_errors=False):
if self.enabled:
self._process_message(res, 'FAILED')
def runner_on_ok(self, host, res):
if self.enabled:
# don't send the setup results
if 'invocation' in res and 'module_name' in res['invocation'] and res['invocation']['module_name'] != "setup":
self._process_message(res, 'OK')
def runner_on_error(self, host, msg):
if self.enabled:
self._process_message(msg, 'ERROR')
def runner_on_skipped(self, host, item=None):
if self.enabled:
self._process_message(item, 'SKIPPED')
def runner_on_unreachable(self, host, res):
pass
def runner_on_no_hosts(self):
pass
def runner_on_async_poll(self, host, res, jid, clock):
if self.enabled:
self._process_message(res, 'ASYNC_POLL')
def runner_on_async_ok(self, host, res, jid):
if self.enabled:
self._process_message(res, 'ASYNC_OK')
def runner_on_async_failed(self, host, res, jid):
if self.enabled:
self._process_message(res, 'ASYNC_FAILED')
def playbook_on_start(self):
pass
def playbook_on_notify(self, host, handler):
pass
def playbook_on_no_hosts_matched(self):
pass
def playbook_on_no_hosts_remaining(self):
pass
def playbook_on_task_start(self, name, is_conditional):
if self.enabled:
self._process_message(name, 'TASK_START')
def playbook_on_vars_prompt(self, varname, private=True, prompt=None,
encrypt=None, confirm=False, salt_size=None,
salt=None, default=None):
pass
def playbook_on_setup(self):
pass
def playbook_on_import_for_host(self, host, imported_file):
pass
def playbook_on_not_import_for_host(self, host, missing_file):
pass
def playbook_on_play_start(self, pattern):
if self.enabled:
"""Display Playbook and play start messages"""
self.start_time = time.time()
self.playbook_name, _ = os.path.splitext(os.path.basename(self.play.playbook.filename))
host_list = self.play.playbook.inventory.host_list
inventory = os.path.basename(os.path.realpath(host_list))
subset = self.play.playbook.inventory._subset
msg = "<b>{description}</b>: Starting ansible run for play <b><i>{play}</i></b>".format(description=self.hipchat_msg_prefix, play=self.playbook_name)
if self.play.playbook.only_tags and 'all' not in self.play.playbook.only_tags:
msg = msg + " with tags <b><i>{}</i></b>".format(','.join(self.play.playbook.only_tags))
if subset:
msg = msg + " on hosts <b><i>{}</i></b>".format(','.join(subset))
self._send_hipchat(msg, message_format='html')
def playbook_on_stats(self, stats):
if self.enabled:
self._flush_last_task()
delta = time.time() - self.start_time
self.start_time = time.time()
"""Display info about playbook statistics"""
hosts = sorted(stats.processed.keys())
task_column = '{} - Task'.format(self.hipchat_msg_prefix)
task_summary = prettytable.PrettyTable([task_column, 'Time', 'Count', 'Changed'])
task_summary.align[task_column] = "l"
task_summary.align['Time'] = "r"
task_summary.align['Count'] = "r"
task_summary.align['Changed'] = "r"
for task in self.task_report:
if self.condensed_task_report:
# for the condensed task report skip all tasks
# that are not marked as changed and that have
# a time delta less than 1
if not task['changed'] and float(task['delta']) < 1:
continue
task_summary.add_row([task['task'], task['delta'], str(task['count']), str(task['changed'])])
summary_table = prettytable.PrettyTable(['Ok', 'Changed', 'Unreachable', 'Failures'])
self._send_hipchat("/code " + str(task_summary) )
summary_all_host_output = []
for host in hosts:
host_stats = stats.summarize(host)  # don't clobber the aggregate stats object inside the loop
summary_output = "<b>{}</b>: <i>{}</i> - ".format(self.hipchat_msg_prefix, host)
for summary_item in ['ok', 'changed', 'unreachable', 'failures']:
if host_stats[summary_item] != 0:
summary_output += "<b>{}</b> - {} ".format(summary_item, host_stats[summary_item])
summary_all_host_output.append(summary_output)
self._send_hipchat("<br />".join(summary_all_host_output), message_format='html')
msg = "<b>{description}</b>: Finished Ansible run for <b><i>{play}</i> in {min:02} minutes, {sec:02} seconds</b><br /><br />".format(
description=self.hipchat_msg_prefix,
play=self.playbook_name,
min=int(delta / 60),
sec=int(delta % 60))
self._send_hipchat(msg, message_format='html')
......@@ -6,10 +6,10 @@
jinja2_extensions=jinja2.ext.do
host_key_checking=False
roles_path=../../../ansible-roles/roles:../../../ansible-private/roles:../../../ansible-roles/
roles_path=../../../ansible-roles/roles:../../../ansible-private/roles:../../../ansible-roles/:../../playbooks/roles
library=../library/
ansible_managed=This file is created and updated by ansible, edit at your peril
[ssh_connection]
ssh_args=-o ControlMaster=auto -o ControlPersist=60s -o ControlPath="~/.ansible/tmp/ansible-ssh-%h-%p-%r" -o ServerAliveInterval=30
retries=5
\ No newline at end of file
retries=5
......@@ -47,13 +47,3 @@
file:
path: "{{ artifact_path }}"
state: absent
- name: Send Hipchat notification cleanup has finished
hipchat:
api: "{{ hipchat_url }}"
token: "{{ hipchat_token }}"
room: "{{ hipchat_room }}"
msg: "Cleanup for run id: {{ keypair_id }} complete."
ignore_errors: yes
when: hipchat_token is defined
......@@ -57,7 +57,7 @@
register: instance_tags
- name: Create AMI
ec2_ami_2_0_0_1:
ec2_ami:
instance_id: "{{ instance_id }}"
name: "{{ edx_environment }} -- {{ deployment }} -- {{ play }} -- {{ extra_name_identifier }} -- {{ app_version[:7] }}"
region: "{{ ec2_region }}"
......@@ -116,7 +116,7 @@
api: "{{ hipchat_url }}"
token: "{{ hipchat_token }}"
room: "{{ hipchat_room }}"
msg: "Finished baking AMI for: {{ play }} \n
msg: "Finished baking AMI for: {{ edx_environment }}-{{ deployment }}-{{ play }} \n
AMI-ID: {{ ami_register.image_id }} \n
"
ignore_errors: yes
......
......@@ -70,13 +70,12 @@
key_name: "{{ automation_prefix }} {{ unique_key_name.stdout }}"
instance_type: "{{ ec2_instance_type }}"
image: "{{ launch_ami_id }}"
wait: yes
group_id: "{{ ec2_security_group_id }}"
count: 1
vpc_subnet_id: "{{ ec2_vpc_subnet_id }}"
assign_public_ip: "{{ ec2_assign_public_ip }}"
volumes:
- device_name: /dev/sdf
- device_name: /dev/sda1
volume_type: 'gp2'
volume_size: "{{ ebs_volume_size }}"
wait: yes
......
......@@ -118,6 +118,7 @@ from boto import ec2
from boto import rds
from boto import route53
import ConfigParser
import traceback
try:
import json
......@@ -612,5 +613,11 @@ class Ec2Inventory(object):
# Run the script
Ec2Inventory()
RETRIES = 3
for _ in xrange(RETRIES):
try:
Ec2Inventory()
break
except Exception:
traceback.print_exc()
---
- name: Bootstrap instance(s)
hosts: all
gather_facts: no
become: True
roles:
- role: python
tags:
- install
- install:system-requirements
- name: Configure instance(s)
hosts: all
become: True
gather_facts: True
roles:
- oauth2_proxy
......@@ -6,7 +6,6 @@
migrate_db: "yes"
disable_edx_services: false
ENABLE_DATADOG: False
ENABLE_SPLUNKFORWARDER: False
ENABLE_NEWRELIC: False
roles:
- aws
......
......@@ -4,7 +4,6 @@
gather_facts: True
vars:
ENABLE_DATADOG: False
ENABLE_SPLUNKFORWARDER: False
ENABLE_NEWRELIC: False
CLUSTER_NAME: 'analytics-api'
roles:
......@@ -19,3 +18,5 @@
when: COMMON_ENABLE_SPLUNKFORWARDER
- role: newrelic
when: COMMON_ENABLE_NEWRELIC
- role: newrelic_infrastructure
when: COMMON_ENABLE_NEWRELIC_INFRASTRUCTURE
- name: Deploy common
hosts: all
become: True
gather_facts: True
vars:
SECURITY_UNATTENDED_UPGRADES: true
COMMON_SECURITY_UPDATES: true
roles:
- common
......@@ -4,6 +4,10 @@
#
# ansible-playbook -c local -i 'localhost,' create_dbs_and_users.yml -e@./db.yml
#
# If running ansible from a python virtualenv you will need a command like the following
#
# ansible-playbook -c local -i 'localhost,' create_dbs_and_users.yml -e@./db.yml -e "ansible_python_interpreter=$(which python)"
#
# where the content of db.yml contains the following dictionaries
#
# database_connection: &default_connection
......@@ -67,6 +71,7 @@
- name: create mysql users and assign privileges
mysql_user:
name: "{{ item.name }}"
state: "{{ item.state | default('present') }}"
priv: "{{ '/'.join(item.privileges) }}"
password: "{{ item.password }}"
host: "{{ item.host }}"
......
......@@ -16,7 +16,7 @@
- name: Validate arguments
fail:
msg: "One or more arguments were not set correctly: {{ item }}"
when: not {{ item }}
when: not item
with_items:
- from_db
- rds_name
......
......@@ -4,7 +4,6 @@
gather_facts: True
vars:
ENABLE_DATADOG: False
ENABLE_SPLUNKFORWARDER: False
ENABLE_NEWRELIC: False
CLUSTER_NAME: 'credentials'
roles:
......@@ -21,3 +20,5 @@
when: COMMON_ENABLE_SPLUNKFORWARDER
- role: newrelic
when: COMMON_ENABLE_NEWRELIC
- role: newrelic_infrastructure
when: COMMON_ENABLE_NEWRELIC_INFRASTRUCTURE
......@@ -4,7 +4,6 @@
gather_facts: True
vars:
ENABLE_DATADOG: False
ENABLE_SPLUNKFORWARDER: False
ENABLE_NEWRELIC: False
CLUSTER_NAME: 'discovery'
roles:
......@@ -19,3 +18,5 @@
when: COMMON_ENABLE_SPLUNKFORWARDER
- role: newrelic
when: COMMON_ENABLE_NEWRELIC
- role: newrelic_infrastructure
when: COMMON_ENABLE_NEWRELIC_INFRASTRUCTURE
......@@ -4,7 +4,6 @@
gather_facts: True
vars:
ENABLE_DATADOG: False
ENABLE_SPLUNKFORWARDER: False
ENABLE_NEWRELIC: False
CLUSTER_NAME: 'ecommerce'
roles:
......@@ -21,3 +20,5 @@
when: COMMON_ENABLE_SPLUNKFORWARDER
- role: newrelic
when: COMMON_ENABLE_NEWRELIC
- role: newrelic_infrastructure
when: COMMON_ENABLE_NEWRELIC_INFRASTRUCTURE
......@@ -4,7 +4,6 @@
gather_facts: True
vars:
ENABLE_DATADOG: False
ENABLE_SPLUNKFORWARDER: False
ENABLE_NEWRELIC: False
roles:
- aws
......@@ -15,3 +14,5 @@
when: COMMON_ENABLE_SPLUNKFORWARDER
- role: newrelic
when: COMMON_ENABLE_NEWRELIC
- role: newrelic_infrastructure
when: COMMON_ENABLE_NEWRELIC_INFRASTRUCTURE
......@@ -22,7 +22,7 @@
- role: edxlocal
tags: edxlocal
- memcache
- mongo
- mongo_3_2
- { role: 'edxapp', celery_worker: True }
- edxapp
- testcourses
......
......@@ -6,8 +6,8 @@
keypair: continuous-integration
instance_type: t2.medium
security_group: sandbox-vpc
# ubuntu 12.04
ami: ami-f478849c
# ubuntu 16.04 - 20170721
ami: ami-cd0f5cb6
region: us-east-1
zone: us-east-1c
instance_tags:
......@@ -18,6 +18,7 @@
owner: temp
root_ebs_size: 50
dns_name: temp
instance_initiated_shutdown_behavior: stop
dns_zone: sandbox.edx.org
name_tag: sandbox-temp
elb: false
......@@ -33,6 +34,7 @@
- role: launch_ec2
keypair: "{{ keypair }}"
instance_type: "{{ instance_type }}"
instance_initiated_shutdown_behavior: "{{ instance_initiated_shutdown_behavior }}"
security_group: "{{ security_group }}"
ami: "{{ ami }}"
region: "{{ region }}"
......@@ -58,7 +60,7 @@
- name: Wait for cloud-init to finish
wait_for:
path: /var/log/cloud-init.log
timeout: 15
timeout: 15
search_regex: "final-message"
- name: gather_facts
setup: ""
......
......@@ -7,7 +7,8 @@
CLUSTER_NAME: 'edxapp'
serial: "{{ serial_count }}"
roles:
- aws
- role: aws
when: COMMON_ENABLE_AWS_ROLE
- role: automated
AUTOMATED_USERS: "{{ EDXAPP_AUTOMATED_USERS | default({}) }}"
- role: nginx
......@@ -20,11 +21,15 @@
nginx_extra_configs: "{{ NGINX_EDXAPP_EXTRA_CONFIGS }}"
nginx_redirects: "{{ NGINX_EDXAPP_CUSTOM_REDIRECTS }}"
- edxapp
- role: devstack_sqlite_fix
when: devstack is defined and devstack
- role: datadog
when: COMMON_ENABLE_DATADOG
- role: splunkforwarder
when: COMMON_ENABLE_SPLUNKFORWARDER
- role: newrelic
when: COMMON_ENABLE_NEWRELIC
- role: newrelic_infrastructure
when: COMMON_ENABLE_NEWRELIC_INFRASTRUCTURE
- role: minos
when: COMMON_ENABLE_MINOS
......@@ -18,3 +18,5 @@
when: COMMON_ENABLE_SPLUNKFORWARDER
- role: newrelic
when: COMMON_ENABLE_NEWRELIC
- role: newrelic_infrastructure
when: COMMON_ENABLE_NEWRELIC_INFRASTRUCTURE
......@@ -4,7 +4,6 @@
gather_facts: True
vars:
ENABLE_DATADOG: False
ENABLE_SPLUNKFORWARDER: False
ENABLE_NEWRELIC: True
CLUSTER_NAME: 'insights'
roles:
......@@ -19,3 +18,5 @@
when: COMMON_ENABLE_SPLUNKFORWARDER
- role: newrelic
when: COMMON_ENABLE_NEWRELIC
- role: newrelic_infrastructure
when: COMMON_ENABLE_NEWRELIC_INFRASTRUCTURE
# Configure an instance with the admin jenkins.
- name: install python2
hosts: all
become: True
gather_facts: False
roles:
- python
- name: Configure instance(s)
hosts: all
become: True
gather_facts: True
vars:
serial_count: 1
COMMON_SECURITY_UPDATES: yes
SECURITY_UPGRADE_ON_ANSIBLE: true
serial: "{{ serial_count }}"
roles:
- aws
- jenkins_admin
......@@ -20,3 +24,5 @@
# crcSalt: <SOURCE>
- role: splunkforwarder
when: COMMON_ENABLE_SPLUNKFORWARDER
- role: newrelic
when: COMMON_ENABLE_NEWRELIC
---
- name: Bootstrap instance(s)
hosts: all
gather_facts: no
become: True
roles:
- python
- name: Configure instance(s)
hosts: all
become: True
gather_facts: True
vars:
COMMON_ENABLE_DATADOG: True
COMMON_ENABLE_SPLUNKFORWARDER: True
COMMON_SECURITY_UPDATES: yes
SECURITY_UPGRADE_ON_ANSIBLE: true
SPLUNKFORWARDER_LOG_ITEMS:
- source: '/var/lib/jenkins/jobs/*/builds/*/junitResult.xml'
recursive: true
index: 'testeng'
sourcetype: junit
followSymlink: false
blacklist: '\.gz$'
crcSalt: '<SOURCE>'
- source: '/var/lib/jenkins/jobs/*/builds/*/build.xml'
index: 'testeng'
recursive: true
sourcetype: build_result
followSymlink: false
crcSalt: '<SOURCE>'
blacklist: '\.gz$'
- source: '/var/lib/jenkins/jobs/edx-platform-*/builds/*/archive/test_root/log/timing.*.log'
index: 'testeng'
recursive: true
sourcetype: 'json_timing_log'
followSymlink: false
crcSalt: '<SOURCE>'
blacklist: coverage|private|subset|specific|custom|special|\.gz$
- source: '/var/log/jenkins/jenkins.log'
index: 'testeng'
recursive: false
followSymlink: false
blacklist: '\.gz$'
roles:
- aws
- role: datadog
when: COMMON_ENABLE_DATADOG
- jenkins_build
# run just the splunkforwarder role by using '--tags "splunkonly"'
# e.g. ansible-playbook jenkins_testeng_master.yml -i inventory.ini --tags "splunkonly" -vvvv
- role: splunkforwarder
when: COMMON_ENABLE_SPLUNKFORWARDER
tags:
- splunkonly
- jenkins:promote-to-production
become: True
......@@ -28,6 +28,7 @@
sourcetype: build_result
followSymlink: false
crcSalt: '<SOURCE>'
blacklist: '(((\.(gz))|\d)$)|(.*seed.*)'
- source: '/var/lib/jenkins/jobs/*/builds/*/log'
index: 'testeng'
......@@ -35,6 +36,7 @@
sourcetype: build_log
followSymlink: false
crcSalt: '<SOURCE>'
blacklist: '(((\.(gz))|\d)$)|(.*seed.*)'
- source: '/var/lib/jenkins/jobs/*/builds/*/archive/test_root/log/timing.*.log'
index: 'testeng'
......
......@@ -29,71 +29,68 @@ group and state.
}
"""
import argparse
import boto
import boto.ec2.autoscale
import boto3
import json
from collections import defaultdict
from os import environ
class LifecycleInventory():
profile = None
def __init__(self, profile):
def __init__(self, region):
parser = argparse.ArgumentParser()
self.profile = profile
self.region = region
def get_e_d_from_tags(self, group):
environment = "default_environment"
deployment = "default_deployment"
for r in group.tags:
if r.key == "environment":
environment = r.value
elif r.key == "deployment":
deployment = r.value
for r in group['Tags']:
if r['Key'] == "environment":
environment = r['Value']
elif r['Key'] == "deployment":
deployment = r['Value']
return environment,deployment
def get_instance_dict(self):
ec2 = boto.ec2.connect_to_region(region,profile_name=self.profile)
reservations = ec2.get_all_instances()
ec2 = boto3.client('ec2', region_name=self.region)
reservations = ec2.describe_instances()['Reservations']
dict = {}
for instance in [i for r in reservations for i in r.instances]:
dict[instance.id] = instance
for instance in [i for r in reservations for i in r['Instances']]:
dict[instance['InstanceId']] = instance
return dict
def run(self):
asg = boto.ec2.autoscale.connect_to_region(region,profile_name=self.profile)
groups = asg.get_all_groups()
asg = boto3.client('autoscaling', region_name=self.region)
groups = asg.describe_auto_scaling_groups()['AutoScalingGroups']
instances = self.get_instance_dict()
inventory = defaultdict(list)
for group in groups:
for instance in group.instances:
for instance in group['Instances']:
private_ip_address = instances[instance.instance_id].private_ip_address
private_ip_address = instances[instance['InstanceId']]['PrivateIpAddress']
if private_ip_address:
environment,deployment = self.get_e_d_from_tags(group)
inventory[environment + "_" + deployment + "_" + instance.lifecycle_state.replace(":","_")].append(private_ip_address)
inventory[group.name].append(private_ip_address)
inventory[group.name + "_" + instance.lifecycle_state.replace(":","_")].append(private_ip_address)
inventory[instance.lifecycle_state.replace(":","_")].append(private_ip_address)
inventory[environment + "_" + deployment + "_" + instance['LifecycleState'].replace(":","_")].append(private_ip_address)
inventory[group['AutoScalingGroupName']].append(private_ip_address)
inventory[group['AutoScalingGroupName'] + "_" + instance['LifecycleState'].replace(":","_")].append(private_ip_address)
inventory[instance['LifecycleState'].replace(":","_")].append(private_ip_address)
print json.dumps(inventory, sort_keys=True, indent=2)
if __name__=="__main__":
parser = argparse.ArgumentParser()
parser.add_argument('-p', '--profile', help='The aws profile to use when connecting.')
parser.add_argument('-r', '--region', help='The aws region to use when connecting.', default='us-east-1')
parser.add_argument('-l', '--list', help='Ansible passes this, we ignore it.', action='store_true', default=True)
args = parser.parse_args()
region = environ.get('AWS_REGION','us-east-1')
LifecycleInventory(args.profile).run()
LifecycleInventory(args.region).run()
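# Example (hypothetical file name): Ansible invokes this script with --list;
# run it by hand to inspect the generated groups, e.g.
#   ./lifecycle_inventory.py -r us-east-1 --list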
......@@ -10,7 +10,7 @@
#
# Overview:
# This playbook ensures that the specified users and groups exist in the targeted
# edxapp cluster.
# edxapp cluster.
#
# Users have the following properties:
# - username (required, str)
......@@ -72,7 +72,6 @@
# for perm in Permission.objects.all():
# print '{}:{}:{}'.format(perm.content_type.app_label, perm.content_type.model, perm.codename)
#
- hosts: all
vars:
python_path: /edx/bin/python.edxapp
......
......@@ -25,3 +25,5 @@
when: COMMON_ENABLE_SPLUNKFORWARDER
- role: newrelic
when: COMMON_ENABLE_NEWRELIC
- role: newrelic_infrastructure
when: COMMON_ENABLE_NEWRELIC_INFRASTRUCTURE
# Manages a mongo cluster.
# To set up a new mongo cluster, make sure you've configured MONGO_RS_CONFIG
# as used by mongo_replica_set in the mongo_3_2 role.
#
# If you are initializing a cluster, your command might look like:
# ansible-playbook mongo_3_2.yml -i 203.0.113.11,203.0.113.12,203.0.113.13 -e@/path/to/edx.yml -e@/path/to/ed.yml
# If you just want to deploy an updated replica set config, you can run
# ansible-playbook mongo_3_2.yml -i any-cluster-ip -e@/path/to/edx.yml -e@/path/to/ed.yml --tags configure_replica_set
#
# ADDING A NEW CLUSTER MEMBER
# If you are adding a member to a cluster, you must be sure that the new machine is not first in your inventory
# ansible-playbook mongo_3_2.yml -i 203.0.113.11,203.0.113.12,new-machine-ip -e@/path/to/edx.yml -e@/path/to/ed.yml
- name: Bootstrap instance(s)
hosts: all
gather_facts: no
become: True
roles:
- python
- name: Deploy MongoDB
hosts: all
become: True
gather_facts: True
roles:
- aws
- mongo_3_2
- munin_node
- role: datadog
when: COMMON_ENABLE_DATADOG
- role: splunkforwarder
when: COMMON_ENABLE_SPLUNKFORWARDER
- role: newrelic
when: COMMON_ENABLE_NEWRELIC
- role: newrelic_infrastructure
when: COMMON_ENABLE_NEWRELIC_INFRASTRUCTURE
- name: Deploy mongo_mms instance
hosts: all
become: True
gather_facts: True
vars:
serial_count: 1
serial: "{{ serial_count }}"
roles:
- aws
- mongo_mms
- role: newrelic
when: COMMON_ENABLE_NEWRELIC
- role: datadog
when: COMMON_ENABLE_DATADOG
......@@ -13,3 +13,5 @@
- coursegraph
# - aws
- neo4j
- role: splunkforwarder
when: COMMON_ENABLE_SPLUNKFORWARDER
......@@ -4,7 +4,6 @@
gather_facts: True
vars:
ENABLE_DATADOG: False
ENABLE_SPLUNKFORWARDER: False
ENABLE_NEWRELIC: True
roles:
- aws
......
......@@ -21,7 +21,7 @@
- name: Validate arguments
fail:
msg: "One or more arguments were not set correctly: {{ item }}"
when: not {{ item }}
when: not item
with_items:
- rds_name
- admin_password
......@@ -52,7 +52,7 @@
- name: Modify edxapp history RDS
shell: >
aws rds modify-db-instance
--db-instance-identifier {{ rds_name }}
--db-instance-identifier {{ rds_name }}
--apply-immediately
--multi-az
--master-user-password {{ admin_password }}
......
......@@ -40,6 +40,8 @@
when: COMMON_ENABLE_SPLUNKFORWARDER
- role: newrelic
when: COMMON_ENABLE_NEWRELIC
- role: newrelic_infrastructure
when: COMMON_ENABLE_NEWRELIC_INFRASTRUCTURE
post_tasks:
- debug:
var: ansible_ec2_instance_id
......
......@@ -10,3 +10,6 @@
service:
name: "{{ supervisor_service }}"
state: restarted
register: rc
until: rc|success
retries: 5
......@@ -12,10 +12,6 @@
hosts: "{{TARGET}}"
become: True
gather_facts: True
pre_tasks:
- set_fact:
STOP_ALL_EDX_SERVICES_EXTRA_ARGS: "--no-wait"
when: ansible_distribution_release == 'precise' or ansible_distribution_release == 'trusty'
roles:
- stop_all_edx_services
......
# Documentation on updating tools-edx-jenkins: https://openedx.atlassian.net/wiki/display/EdxOps/Updating+tools-edx-jenkins
# Updating or creating a new install of tools_jenkins (will restart Jenkins)
# ansible-playbook -i tools-edx-jenkins.m.edx.org, tools_jenkins.yml -e@/path/to/secure-config/tools-edx.yml
# Update tools_jenkins with new plugins (will not restart Jenkins):
# ansible-playbook -i tools-edx-jenkins.m.edx.org, tools_jenkins.yml -e@/path/to/secure-config/tools-edx.yml --tags install:plugins
# Configure an instance with the tool jenkins.
- name: Configure Jenkins instance(s)
hosts: all
......@@ -26,3 +34,5 @@
when: COMMON_ENABLE_SPLUNKFORWARDER
- role: newrelic
when: COMMON_ENABLE_NEWRELIC
- role: newrelic_infrastructure
when: COMMON_ENABLE_NEWRELIC_INFRASTRUCTURE
......@@ -6,7 +6,6 @@
COMMON_APP_DIR: "/edx/app"
common_web_group: "www-data"
ENABLE_DATADOG: False
ENABLE_SPLUNKFORWARDER: False
ENABLE_NEWRELIC: False
serial_count: 1
serial: "{{ serial_count }}"
......
......@@ -21,10 +21,7 @@
edx_platform_version: 'master'
# Set to false if deployed behind another proxy/load balancer.
NGINX_SET_X_FORWARDED_HEADERS: True
# These should stay false for the public AMI
COMMON_ENABLE_DATADOG: False
SANDBOX_ENABLE_ECOMMERCE: False
COMMON_ENABLE_SPLUNKFORWARDER: False
DISCOVERY_URL_ROOT: 'http://localhost:{{ DISCOVERY_NGINX_PORT }}'
roles:
- role: swapfile
SWAPFILE_SIZE: 4GB
......@@ -35,28 +32,23 @@
- lms
- forum
- xqueue
- ecommerce
nginx_default_sites:
- lms
- role: nginx
nginx_sites:
- ecommerce
when: SANDBOX_ENABLE_ECOMMERCE
- role: edxlocal
when: EDXAPP_MYSQL_HOST == 'localhost'
- role: memcache
when: "'localhost' in ' '.join(EDXAPP_MEMCACHE)"
- role: mongo
- role: mongo_3_2
when: "'localhost' in EDXAPP_MONGO_HOSTS"
- role: rabbitmq
rabbitmq_ip: 127.0.0.1
- role: edxapp
celery_worker: True
- edxapp
- role: ecommerce
when: SANDBOX_ENABLE_ECOMMERCE
- ecommerce
- role: ecomworker
ECOMMERCE_WORKER_BROKER_HOST: 127.0.0.1
when: SANDBOX_ENABLE_ECOMMERCE
- analytics_api
- insights
# not ready yet: - edx_notes_api
......@@ -66,6 +58,7 @@
- role: elasticsearch
when: "'localhost' in EDXAPP_ELASTIC_SEARCH_CONFIG|map(attribute='host')"
- forum
- discovery
- role: notifier
NOTIFIER_DIGEST_TASK_INTERVAL: 5
- role: xqueue
......
#!/usr/bin/env python
# This file is part of Ansible
#
# Ansible is free software: you can redistribute it and/or modify
# it under the terms of the GNU General Public License as published by
# the Free Software Foundation, either version 3 of the License, or
# (at your option) any later version.
#
# Ansible is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License
# along with Ansible. If not, see <http://www.gnu.org/licenses/>.
DOCUMENTATION = '''
---
module: ec2_ami
version_added: "1.3"
short_description: create or destroy an image in ec2
description:
- Creates or deletes ec2 images.
options:
instance_id:
description:
- instance id of the image to create
required: false
default: null
name:
description:
- The name of the new image to create
required: false
default: null
wait:
description:
- wait for the AMI to be in state 'available' before returning.
required: false
default: "no"
choices: [ "yes", "no" ]
wait_timeout:
description:
- how long before wait gives up, in seconds
default: 300
state:
description:
- create or deregister/delete image
required: false
default: 'present'
description:
description:
- An optional human-readable string describing the contents and purpose of the AMI.
required: false
default: null
no_reboot:
description:
- An optional flag indicating that the bundling process should not attempt to shut down the instance before bundling. If this flag is True, the responsibility of maintaining file system integrity is left to the owner of the instance. The default choice is "no".
required: false
default: no
choices: [ "yes", "no" ]
image_id:
description:
- Image ID to be deregistered.
required: false
default: null
device_mapping:
version_added: "2.0"
description:
- An optional list of device hashes/dictionaries with custom configurations (same block-device-mapping parameters)
- "Valid properties include: device_name, volume_type, size (in GB), delete_on_termination (boolean), no_device (boolean), snapshot_id, iops (for io1 volume_type)"
required: false
default: null
delete_snapshot:
description:
- Whether or not to delete an AMI while deregistering it.
required: false
default: null
tags:
description:
- a hash/dictionary of tags to add to the new image; '{"key":"value"}' and '{"key":"value","key":"value"}'
required: false
default: null
version_added: "2.0"
launch_permissions:
description:
- Users and groups that should be able to launch the ami. Expects dictionary with a key of user_ids and/or group_names. user_ids should be a list of account ids. group_names should be a list of groups, "all" is the only acceptable value currently.
required: false
default: null
version_added: "2.0"
author: "Evan Duffield (@scicoin-project) <eduffield@iacquire.com>"
extends_documentation_fragment:
- aws
- ec2
'''
# Thank you to iAcquire for sponsoring development of this module.
EXAMPLES = '''
# Basic AMI Creation
- ec2_ami:
aws_access_key: xxxxxxxxxxxxxxxxxxxxxxx
aws_secret_key: xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx
instance_id: i-xxxxxx
wait: yes
name: newtest
tags:
Name: newtest
Service: TestService
register: instance
# Basic AMI Creation, without waiting
- ec2_ami:
aws_access_key: xxxxxxxxxxxxxxxxxxxxxxx
aws_secret_key: xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx
region: xxxxxx
instance_id: i-xxxxxx
wait: no
name: newtest
register: instance
# AMI Creation, with a custom root-device size and another EBS attached
- ec2_ami:
aws_access_key: xxxxxxxxxxxxxxxxxxxxxxx
aws_secret_key: xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx
instance_id: i-xxxxxx
name: newtest
device_mapping:
- device_name: /dev/sda1
size: XXX
delete_on_termination: true
volume_type: gp2
- device_name: /dev/sdb
size: YYY
delete_on_termination: false
volume_type: gp2
register: instance
# AMI Creation, excluding a volume attached at /dev/sdb
- ec2_ami:
aws_access_key: xxxxxxxxxxxxxxxxxxxxxxx
aws_secret_key: xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx
instance_id: i-xxxxxx
name: newtest
device_mapping:
- device_name: /dev/sda1
size: XXX
delete_on_termination: true
volume_type: gp2
- device_name: /dev/sdb
no_device: yes
register: instance
# Deregister/Delete AMI
- ec2_ami:
aws_access_key: xxxxxxxxxxxxxxxxxxxxxxx
aws_secret_key: xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx
region: xxxxxx
image_id: "{{ instance.image_id }}"
delete_snapshot: True
state: absent
# Deregister AMI
- ec2_ami:
aws_access_key: xxxxxxxxxxxxxxxxxxxxxxx
aws_secret_key: xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx
region: xxxxxx
image_id: "{{ instance.image_id }}"
delete_snapshot: False
state: absent
# Update AMI Launch Permissions, making it public
- ec2_ami:
aws_access_key: xxxxxxxxxxxxxxxxxxxxxxx
aws_secret_key: xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx
region: xxxxxx
image_id: "{{ instance.image_id }}"
state: present
launch_permissions:
group_names: ['all']
# Allow AMI to be launched by another account
- ec2_ami:
aws_access_key: xxxxxxxxxxxxxxxxxxxxxxx
aws_secret_key: xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx
region: xxxxxx
image_id: "{{ instance.image_id }}"
state: present
launch_permissions:
user_ids: ['123456789012']
'''
import sys
import time
try:
import boto
import boto.ec2
from boto.ec2.blockdevicemapping import BlockDeviceType, BlockDeviceMapping
HAS_BOTO = True
except ImportError:
HAS_BOTO = False
def create_image(module, ec2):
"""
Creates new AMI
module : AnsibleModule object
ec2: authenticated ec2 connection object
"""
instance_id = module.params.get('instance_id')
name = module.params.get('name')
wait = module.params.get('wait')
wait_timeout = int(module.params.get('wait_timeout'))
description = module.params.get('description')
no_reboot = module.params.get('no_reboot')
device_mapping = module.params.get('device_mapping')
tags = module.params.get('tags')
launch_permissions = module.params.get('launch_permissions')
try:
params = {'instance_id': instance_id,
'name': name,
'description': description,
'no_reboot': no_reboot}
if device_mapping:
bdm = BlockDeviceMapping()
for device in device_mapping:
if 'device_name' not in device:
module.fail_json(msg = 'Device name must be set for volume')
device_name = device['device_name']
del device['device_name']
bd = BlockDeviceType(**device)
bdm[device_name] = bd
params['block_device_mapping'] = bdm
image_id = ec2.create_image(**params)
except boto.exception.BotoServerError, e:
if e.error_code == 'InvalidAMIName.Duplicate':
images = ec2.get_all_images()
for img in images:
if img.name == name:
module.exit_json(msg="AMI name already present", image_id=img.id, state=img.state, changed=False)
else:
module.fail_json(msg="Error in retrieving duplicate AMI details")
else:
module.fail_json(msg="%s: %s" % (e.error_code, e.error_message))
# Wait until the image is recognized. EC2 API has eventual consistency,
# such that a successful CreateImage API call doesn't guarantee the success
# of subsequent DescribeImages API call using the new image id returned.
for i in range(wait_timeout):
try:
img = ec2.get_image(image_id)
break
except boto.exception.EC2ResponseError, e:
if 'InvalidAMIID.NotFound' in e.error_code and wait:
time.sleep(1)
else:
module.fail_json(msg="Error while trying to find the new image. Using wait=yes and/or a longer wait_timeout may help.")
else:
module.fail_json(msg="timed out waiting for image to be recognized")
# wait here until the image is created
wait_timeout = time.time() + wait_timeout
while wait and wait_timeout > time.time() and (img is None or img.state != 'available'):
img = ec2.get_image(image_id)
time.sleep(3)
if wait and wait_timeout <= time.time():
# waiting took too long
module.fail_json(msg = "timed out waiting for image to be created")
if tags:
try:
ec2.create_tags(image_id, tags)
except boto.exception.EC2ResponseError, e:
module.fail_json(msg = "Image tagging failed => %s: %s" % (e.error_code, e.error_message))
if launch_permissions:
try:
img = ec2.get_image(image_id)
img.set_launch_permissions(**launch_permissions)
except boto.exception.BotoServerError, e:
module.fail_json(msg="%s: %s" % (e.error_code, e.error_message), image_id=image_id)
module.exit_json(msg="AMI creation operation complete", image_id=image_id, state=img.state, changed=True)
def deregister_image(module, ec2):
"""
Deregisters AMI
"""
image_id = module.params.get('image_id')
delete_snapshot = module.params.get('delete_snapshot')
wait = module.params.get('wait')
wait_timeout = int(module.params.get('wait_timeout'))
img = ec2.get_image(image_id)
if img == None:
module.fail_json(msg = "Image %s does not exist" % image_id, changed=False)
try:
params = {'image_id': image_id,
'delete_snapshot': delete_snapshot}
res = ec2.deregister_image(**params)
except boto.exception.BotoServerError, e:
module.fail_json(msg = "%s: %s" % (e.error_code, e.error_message))
# wait here until the image is gone
img = ec2.get_image(image_id)
wait_timeout = time.time() + wait_timeout
while wait and wait_timeout > time.time() and img is not None:
img = ec2.get_image(image_id)
time.sleep(3)
if wait and wait_timeout <= time.time():
# waiting took too long
module.fail_json(msg = "timed out waiting for image to be reregistered/deleted")
module.exit_json(msg="AMI deregister/delete operation complete", changed=True)
def update_image(module, ec2):
"""
Updates AMI
"""
image_id = module.params.get('image_id')
launch_permissions = module.params.get('launch_permissions')
if 'user_ids' in launch_permissions:
launch_permissions['user_ids'] = [str(user_id) for user_id in launch_permissions['user_ids']]
img = ec2.get_image(image_id)
if img == None:
module.fail_json(msg = "Image %s does not exist" % image_id, changed=False)
try:
set_permissions = img.get_launch_permissions()
if set_permissions != launch_permissions:
if ('user_ids' in launch_permissions and launch_permissions['user_ids']) or ('group_names' in launch_permissions and launch_permissions['group_names']):
res = img.set_launch_permissions(**launch_permissions)
elif ('user_ids' in set_permissions and set_permissions['user_ids']) or ('group_names' in set_permissions and set_permissions['group_names']):
res = img.remove_launch_permissions(**set_permissions)
else:
module.exit_json(msg="AMI not updated", launch_permissions=set_permissions, changed=False)
module.exit_json(msg="AMI launch permissions updated", launch_permissions=launch_permissions, set_perms=set_permissions, changed=True)
else:
module.exit_json(msg="AMI not updated", launch_permissions=set_permissions, changed=False)
except boto.exception.BotoServerError, e:
module.fail_json(msg = "%s: %s" % (e.error_code, e.error_message))
def main():
argument_spec = ec2_argument_spec()
argument_spec.update(dict(
instance_id = dict(),
image_id = dict(),
delete_snapshot = dict(),
name = dict(),
wait = dict(type="bool", default=False),
wait_timeout = dict(default=900),
description = dict(default=""),
no_reboot = dict(default=False, type="bool"),
state = dict(default='present'),
device_mapping = dict(type='list'),
tags = dict(type='dict'),
launch_permissions = dict(type='dict')
)
)
module = AnsibleModule(argument_spec=argument_spec)
if not HAS_BOTO:
module.fail_json(msg='boto required for this module')
try:
ec2 = ec2_connect(module)
except Exception, e:
module.fail_json(msg="Error while connecting to aws: %s" % str(e))
if module.params.get('state') == 'absent':
if not module.params.get('image_id'):
module.fail_json(msg='image_id is required to deregister/delete an image')
deregister_image(module, ec2)
elif module.params.get('state') == 'present':
if module.params.get('image_id') and module.params.get('launch_permissions'):
# Update image's launch permissions
update_image(module, ec2)
# Changed is always set to true when provisioning new AMI
if not module.params.get('instance_id'):
module.fail_json(msg='instance_id parameter is required for new image')
if not module.params.get('name'):
module.fail_json(msg='name parameter is required for new image')
create_image(module, ec2)
# import module snippets
from ansible.module_utils.basic import *
from ansible.module_utils.ec2 import *
main()
......@@ -351,6 +351,12 @@ def validate_args():
if (username and not password) or (password and not username):
module.fail_json(msg="Must provide both username and password or neither.")
# Check that if votes is 0 priority is also 0
for member in module.params.get('rs_config').get('members'):
if member.get('votes') == 0 and member.get('priority') != 0:
module.fail_json(msg="Non-voting member {} must have priority 0".
format(member['host']))
return module
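# A sketch of an rs_config that passes the votes/priority check above
# (hostnames are hypothetical):
#
# rs_config:
#   members:
#     - host: "203.0.113.11:27017"
#       priority: 1
#       votes: 1
#     - host: "203.0.113.12:27017"
#       priority: 0   # non-voting members must have priority 0
#       votes: 0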
......
#!/usr/bin/python
# This file is part of Ansible
#
# Ansible is free software: you can redistribute it and/or modify
# it under the terms of the GNU General Public License as published by
# the Free Software Foundation, either version 3 of the License, or
# (at your option) any later version.
#
# Ansible is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License
# along with Ansible. If not, see <http://www.gnu.org/licenses/>.
DOCUMENTATION = '''
---
module: vpc_lookup
short_description: returns a list of subnet Ids using tags as criteria
description:
- Returns a list of subnet Ids for a given set of tags that identify one or more VPCs
version_added: "1.5"
options:
region:
description:
- The AWS region to use. Must be specified if ec2_url
is not used. If not specified then the value of the
EC2_REGION environment variable, if any, is used.
required: false
default: null
aliases: [ 'aws_region', 'ec2_region' ]
aws_secret_key:
description:
- AWS secret key. If not set then the value of
the AWS_SECRET_KEY environment variable is used.
required: false
default: null
aliases: [ 'ec2_secret_key', 'secret_key' ]
aws_access_key:
description:
- AWS access key. If not set then the value of the
AWS_ACCESS_KEY environment variable is used.
required: false
default: null
aliases: [ 'ec2_access_key', 'access_key' ]
tags:
description:
- tags to lookup
required: false
default: null
type: dict
aliases: []
requirements: [ "boto" ]
author: John Jarvis
'''
EXAMPLES = '''
# Note: None of these examples set aws_access_key, aws_secret_key, or region.
# It is assumed that their matching environment variables are set.
# Return the ids of all subnets and VPCs that match the tag "Name: foo"
- local_action:
module: vpc_lookup
tags:
Name: foo
'''
import sys
AWS_REGIONS = ['ap-northeast-1',
'ap-southeast-1',
'ap-southeast-2',
'eu-west-1',
'sa-east-1',
'us-east-1',
'us-west-1',
'us-west-2']
try:
    import boto.exception
    from boto.vpc import VPCConnection
    from boto.vpc import connect_to_region
except ImportError:
print "failed=True msg='boto required for this module'"
sys.exit(1)
def main():
module=AnsibleModule(
argument_spec=dict(
region=dict(choices=AWS_REGIONS),
aws_secret_key=dict(aliases=['ec2_secret_key', 'secret_key'],
no_log=True),
aws_access_key=dict(aliases=['ec2_access_key', 'access_key']),
tags=dict(default=None, type='dict'),
)
)
tags = module.params.get('tags')
aws_secret_key = module.params.get('aws_secret_key')
aws_access_key = module.params.get('aws_access_key')
region = module.params.get('region')
# If we have a region specified, connect to its endpoint.
if region:
try:
vpc = connect_to_region(region, aws_access_key_id=aws_access_key,
aws_secret_access_key=aws_secret_key)
except boto.exception.NoAuthHandlerFound, e:
module.fail_json(msg=str(e))
else:
module.fail_json(msg="region must be specified")
    # Reuse the regional connection for both lookups; a bare VPCConnection()
    # would ignore the region and credentials validated above.
    tag_filters = {'tag:' + tag: value
                   for tag, value in (tags or {}).iteritems()}
    subnet_ids = [subnet.id for subnet in vpc.get_all_subnets(filters=tag_filters)]
    vpc_ids = [found_vpc.id for found_vpc in vpc.get_all_vpcs(filters=tag_filters)]
module.exit_json(changed=False, subnet_ids=subnet_ids, vpc_ids=vpc_ids)
# this is magic, see lib/ansible/module_common.py
#<<INCLUDE_ANSIBLE_MODULE_COMMON>>
main()
......@@ -20,9 +20,18 @@ ANALYTICS_API_PIP_EXTRA_ARGS: "-i {{ COMMON_PYPI_MIRROR_URL }}"
ANALYTICS_API_NGINX_PORT: "18100"
ANALYTICS_API_DEFAULT_DB_NAME: 'analytics-api'
ANALYTICS_API_DEFAULT_USER: 'api001'
ANALYTICS_API_DEFAULT_PASSWORD: 'password'
ANALYTICS_API_DEFAULT_HOST: 'localhost'
ANALYTICS_API_DEFAULT_PORT: '3306'
ANALYTICS_API_DEFAULT_MYSQL_OPTIONS:
connect_timeout: 10
ANALYTICS_API_REPORTS_DB_NAME: 'reports'
ANALYTICS_API_REPORTS_USER: 'reports001'
ANALYTICS_API_REPORTS_PASSWORD: 'password'
ANALYTICS_API_REPORTS_HOST: 'localhost'
ANALYTICS_API_REPORTS_PORT: '3306'
ANALYTICS_API_REPORTS_MYSQL_OPTIONS:
connect_timeout: 10
......@@ -31,19 +40,19 @@ ANALYTICS_API_DATABASES:
default:
ENGINE: 'django.db.backends.mysql'
NAME: '{{ ANALYTICS_API_DEFAULT_DB_NAME }}'
USER: 'api001'
PASSWORD: 'password'
HOST: 'localhost'
PORT: '3306'
USER: '{{ ANALYTICS_API_DEFAULT_USER }}'
PASSWORD: '{{ ANALYTICS_API_DEFAULT_PASSWORD }}'
HOST: '{{ ANALYTICS_API_DEFAULT_HOST }}'
PORT: '{{ ANALYTICS_API_DEFAULT_PORT }}'
OPTIONS: "{{ ANALYTICS_API_DEFAULT_MYSQL_OPTIONS }}"
# read-only user
reports:
ENGINE: 'django.db.backends.mysql'
NAME: '{{ ANALYTICS_API_REPORTS_DB_NAME }}'
USER: 'reports001'
PASSWORD: 'password'
HOST: 'localhost'
PORT: '3306'
USER: '{{ ANALYTICS_API_REPORTS_USER }}'
PASSWORD: '{{ ANALYTICS_API_REPORTS_PASSWORD }}'
HOST: '{{ ANALYTICS_API_REPORTS_HOST }}'
PORT: '{{ ANALYTICS_API_REPORTS_PORT }}'
OPTIONS: "{{ ANALYTICS_API_REPORTS_MYSQL_OPTIONS }}"
ANALYTICS_API_VERSION: "master"
......@@ -54,10 +63,6 @@ ANALYTICS_API_USERS:
ANALYTICS_API_SECRET_KEY: 'Your secret key here'
ANALYTICS_API_TIME_ZONE: 'UTC'
ANALYTICS_API_LANGUAGE_CODE: 'en-us'
ANALYTICS_API_EMAIL_HOST: 'localhost'
ANALYTICS_API_EMAIL_HOST_USER: 'mail_user'
ANALYTICS_API_EMAIL_HOST_PASSWORD: 'mail_password'
ANALYTICS_API_EMAIL_PORT: 587
ANALYTICS_API_AUTH_TOKEN: 'put-your-api-token-here'
......@@ -107,11 +112,6 @@ ANALYTICS_API_SERVICE_CONFIG:
SECRET_KEY: '{{ ANALYTICS_API_SECRET_KEY }}'
TIME_ZONE: '{{ ANALYTICS_API_TIME_ZONE }}'
LANGUAGE_CODE: '{{ ANALYTICS_API_LANGUAGE_CODE }}'
# email config
EMAIL_HOST: '{{ ANALYTICS_API_EMAIL_HOST }}'
EMAIL_HOST_PASSWORD: '{{ ANALYTICS_API_EMAIL_HOST_PASSWORD }}'
EMAIL_HOST_USER: '{{ ANALYTICS_API_EMAIL_HOST_USER }}'
EMAIL_PORT: '{{ ANALYTICS_API_EMAIL_PORT }}'
API_AUTH_TOKEN: '{{ ANALYTICS_API_AUTH_TOKEN }}'
STATICFILES_DIRS: ['static']
STATIC_ROOT: "{{ COMMON_DATA_DIR }}/{{ analytics_api_service_name }}/staticfiles"
......
......@@ -98,6 +98,8 @@
{{ role_name|upper }}_HOSTNAME: '~^((stage|prod)-)?{{ role_name|replace('_', '-') }}.*'
{{ role_name|upper }}_DEBIAN_EXTRA_PKGS: []
nginx_{{ role_name }}_gunicorn_hosts:
- 127.0.0.1
......
......@@ -7,7 +7,7 @@
# This allows the dockerfile to update /edx/app/edx_ansible/edx_ansible
# with the currently checked-out configuration repo.
FROM edxops/trusty-common:latest
FROM edxops/xenial-common:latest
MAINTAINER edxops
ARG {{ role_name|upper }}_VERSION=master
......
......@@ -21,5 +21,5 @@ dependencies:
edx_service_user: "{{ '{{' }} {{ role_name }}_user }}"
edx_service_home: "{{ '{{' }} {{ role_name }}_home }}"
edx_service_packages:
debian: "{{ '{{' }} {{ role_name }}_debian_pkgs }}"
debian: "{{ '{{' }} {{ role_name }}_debian_pkgs + {{ role_name|upper }}_DEBIAN_EXTRA_PKGS }}"
redhat: "{{ '{{' }} {{ role_name }}_redhat_pkgs }}"
......@@ -61,7 +61,6 @@
group: "root"
mode: "0440"
validate: 'visudo -cf %s'
when: automated_sudoers_template
with_dict: "{{ AUTOMATED_USERS }}"
- name: Create .ssh directory
......
......@@ -32,10 +32,10 @@ browser_s3_deb_pkgs:
url: https://s3.amazonaws.com/vagrant.testeng.edx.org/google-chrome-stable_55.0.2883.87-1_amd64.deb
trusty_browser_s3_deb_pkgs:
- name: google-chrome-stable_30.0.1599.114-1_amd64.deb
url: https://s3.amazonaws.com/vagrant.testeng.edx.org/google-chrome-stable_30.0.1599.114-1_amd64.deb
- name: firefox-mozilla-build_42.0-0ubuntu1_amd64.deb
url: https://s3.amazonaws.com/vagrant.testeng.edx.org/firefox-mozilla-build_42.0-0ubuntu1_amd64.deb
- name: google-chrome-stable_59.0.3071.115-1_amd64.deb
url: https://s3.amazonaws.com/vagrant.testeng.edx.org/google-chrome-stable_59.0.3071.115-1_amd64.deb
# ChromeDriver
chromedriver_version: 2.27
......
......@@ -44,7 +44,7 @@
get_url:
dest: /tmp/{{ item.name }}
url: "{{ item.url }}"
register: download_deb
register: download_trusty_deb
with_items: "{{ trusty_browser_s3_deb_pkgs }}"
when: ansible_distribution_release == 'trusty'
tags:
......@@ -55,7 +55,7 @@
get_url:
dest: /tmp/{{ item.name }}
url: "{{ item.url }}"
register: download_deb
register: download_xenial_deb
with_items: "{{ browser_s3_deb_pkgs }}"
when: ansible_distribution_release == 'xenial'
tags:
......@@ -65,7 +65,7 @@
- name: install trusty browser packages
shell: gdebi -nq /tmp/{{ item.name }}
with_items: "{{ trusty_browser_s3_deb_pkgs }}"
when: download_deb.changed and
when: download_trusty_deb.changed and
ansible_distribution_release == 'trusty'
tags:
- install
......@@ -74,7 +74,7 @@
- name: install xenial browser packages
shell: gdebi -nq /tmp/{{ item.name }}
with_items: "{{ browser_s3_deb_pkgs }}"
when: download_deb.changed and
when: download_xenial_deb.changed and
ansible_distribution_release == 'xenial'
tags:
- install
......
......@@ -92,13 +92,14 @@ COMMON_ENABLE_DATADOG: False
COMMON_ENABLE_NGINXTRA: False
COMMON_ENABLE_SPLUNKFORWARDER: False
COMMON_ENABLE_NEWRELIC: False
COMMON_ENABLE_NEWRELIC_INFRASTRUCTURE: False
# Enables app-level reporting; COMMON_ENABLE_NEWRELIC
# must also be enabled for this to take effect
COMMON_ENABLE_NEWRELIC_APP: False
COMMON_ENABLE_MINOS: False
COMMON_TAG_EC2_INSTANCE: False
common_boto_version: '2.34.0'
common_node_version: '6.9.4'
common_node_version: '6.11.1'
common_redhat_pkgs:
- ntp
- lynx
......@@ -156,7 +157,6 @@ common_debian_variants:
# We only have to install old Python for these releases:
old_python_ppa_releases:
- precise
- trusty
common_redhat_variants:
......@@ -209,12 +209,18 @@ COMMON_TRACKING_LOG_ROTATION:
COMMON_EXTRA_CONFIGURATION_SOURCES_CHECKING: false
COMMON_EXTRA_CONFIGURATION_SOURCES: []
COMMON_OAUTH_PUBLIC_URL_ROOT: 'http://127.0.0.1:8000/oauth2'
COMMON_OAUTH_BASE_URL: 'http://127.0.0.1:8000'
COMMON_OAUTH_PUBLIC_URL_ROOT: '{{ COMMON_OAUTH_BASE_URL }}/oauth2'
COMMON_OAUTH_URL_ROOT: '{{ COMMON_OAUTH_PUBLIC_URL_ROOT }}'
COMMON_OAUTH_LOGOUT_URL: '{{ COMMON_OAUTH_PUBLIC_URL_ROOT }}/logout'
COMMON_OAUTH_LOGOUT_URL: '{{ COMMON_OAUTH_BASE_URL }}/logout'
COMMON_OIDC_ISSUER: '{{ COMMON_OAUTH_URL_ROOT }}'
COMMON_JWT_AUDIENCE: 'SET-ME-PLEASE'
COMMON_JWT_ISSUER: '{{ COMMON_OIDC_ISSUER }}'
COMMON_JWT_SECRET_KEY: 'SET-ME-PLEASE'
# Set worker user default
CREATE_SERVICE_WORKER_USERS: True
COMMON_ENABLE_AWS_ROLE: true
......@@ -53,8 +53,6 @@ CREDENTIALS_DJANGO_SETTINGS_MODULE: "credentials.settings.production"
CREDENTIALS_DOMAIN: 'credentials'
CREDENTIALS_URL_ROOT: 'http://{{ CREDENTIALS_DOMAIN }}:18150'
CREDENTIALS_LOGOUT_URL: '{{ CREDENTIALS_URL_ROOT }}/logout/'
CREDENTIALS_OAUTH_URL_ROOT: '{{ EDXAPP_LMS_ROOT_URL | default("http://127.0.0.1:8000") }}/oauth2'
CREDENTIALS_OIDC_LOGOUT_URL: '{{ EDXAPP_LMS_ROOT_URL | default("http://127.0.0.1:8000") }}/logout'
CREDENTIALS_SESSION_EXPIRE_AT_BROWSER_CLOSE: false
......@@ -66,7 +64,6 @@ CREDENTIALS_LANGUAGE_CODE: 'en_US.UTF-8'
CREDENTIALS_SOCIAL_AUTH_EDX_OIDC_KEY: 'SET-ME-TO-A-UNIQUE-LONG-RANDOM-STRING'
CREDENTIALS_SOCIAL_AUTH_EDX_OIDC_SECRET: 'SET-ME-TO-A-UNIQUE-LONG-RANDOM-STRING'
CREDENTIALS_SOCIAL_AUTH_REDIRECT_IS_HTTPS: false
CREDENTIALS_SOCIAL_AUTH_EDX_OIDC_ISSUER: '{{ CREDENTIALS_OAUTH_URL_ROOT }}'
CREDENTIALS_SERVICE_USER: 'credentials_service_user'
......@@ -146,17 +143,13 @@ NGINX_CREDENTIALS_GUNICORN_HOSTS:
CREDENTIALS_EXTRA_APPS: []
CREDENTIALS_JWT_AUDIENCE: '{{ EDXAPP_JWT_AUDIENCE | default("SET-ME-PLEASE") }}'
CREDENTIALS_JWT_ISSUER: '{{ CREDENTIALS_OAUTH_URL_ROOT }}'
CREDENTIALS_JWT_SECRET_KEY: '{{ EDXAPP_JWT_SECRET_KEY | default("lms-secret") }}'
CREDENTIALS_JWT_AUTH:
JWT_ISSUERS:
- AUDIENCE: '{{ CREDENTIALS_JWT_AUDIENCE }}'
ISSUER: '{{ CREDENTIALS_JWT_ISSUER }}'
SECRET_KEY: '{{ CREDENTIALS_JWT_SECRET_KEY }}'
- AUDIENCE: '{{ COMMON_JWT_AUDIENCE }}'
ISSUER: '{{ COMMON_JWT_ISSUER }}'
SECRET_KEY: '{{ COMMON_JWT_SECRET_KEY }}'
- AUDIENCE: '{{ CREDENTIALS_SOCIAL_AUTH_EDX_OIDC_KEY }}'
ISSUER: '{{ CREDENTIALS_JWT_ISSUER }}'
ISSUER: '{{ COMMON_JWT_ISSUER }}'
SECRET_KEY: '{{ CREDENTIALS_SOCIAL_AUTH_EDX_OIDC_SECRET }}'
CREDENTIALS_SERVICE_CONFIG:
......@@ -166,14 +159,14 @@ CREDENTIALS_SERVICE_CONFIG:
TIME_ZONE: '{{ CREDENTIALS_TIME_ZONE }}'
LANGUAGE_CODE: '{{ CREDENTIALS_LANGUAGE_CODE }}'
OAUTH2_PROVIDER_URL: '{{ CREDENTIALS_OAUTH_URL_ROOT }}'
OAUTH2_PROVIDER_URL: '{{ COMMON_OAUTH_URL_ROOT }}'
SOCIAL_AUTH_EDX_OIDC_KEY: '{{ CREDENTIALS_SOCIAL_AUTH_EDX_OIDC_KEY }}'
SOCIAL_AUTH_EDX_OIDC_SECRET: '{{ CREDENTIALS_SOCIAL_AUTH_EDX_OIDC_SECRET }}'
SOCIAL_AUTH_EDX_OIDC_ID_TOKEN_DECRYPTION_KEY: '{{ CREDENTIALS_SOCIAL_AUTH_EDX_OIDC_SECRET }}'
SOCIAL_AUTH_EDX_OIDC_URL_ROOT: '{{ CREDENTIALS_OAUTH_URL_ROOT }}'
SOCIAL_AUTH_EDX_OIDC_URL_ROOT: '{{ COMMON_OAUTH_URL_ROOT }}'
SOCIAL_AUTH_REDIRECT_IS_HTTPS: '{{ CREDENTIALS_SOCIAL_AUTH_REDIRECT_IS_HTTPS }}'
SOCIAL_AUTH_EDX_OIDC_LOGOUT_URL: '{{ CREDENTIALS_OIDC_LOGOUT_URL }}'
SOCIAL_AUTH_EDX_OIDC_ISSUER: '{{ CREDENTIALS_SOCIAL_AUTH_EDX_OIDC_ISSUER }}'
SOCIAL_AUTH_EDX_OIDC_LOGOUT_URL: '{{ COMMON_OAUTH_LOGOUT_URL }}'
SOCIAL_AUTH_EDX_OIDC_ISSUER: '{{ COMMON_JWT_ISSUER }}'
EXTRA_APPS: '{{ CREDENTIALS_EXTRA_APPS }}'
......
......@@ -24,19 +24,24 @@ demo_test_users:
username: honor
hashed_password: "{{ demo_hashed_password }}"
is_staff: false
is_superuser: false
- email: 'audit@example.com'
username: audit
hashed_password: "{{ demo_hashed_password }}"
is_staff: false
is_superuser: false
- email: 'verified@example.com'
username: verified
hashed_password: "{{ demo_hashed_password }}"
is_staff: false
is_superuser: false
demo_staff_user:
email: 'staff@example.com'
username: staff
hashed_password: "{{ demo_hashed_password }}"
is_staff: true
is_superuser: false
SANDBOX_EDXAPP_USERS: []
demo_edxapp_user: 'edxapp'
demo_edxapp_settings: '{{ COMMON_EDXAPP_SETTINGS }}'
demo_edxapp_venv_bin: '{{ COMMON_APP_DIR }}/{{ demo_edxapp_user }}/venvs/{{demo_edxapp_user}}/bin'
......
......@@ -26,12 +26,16 @@
demo_test_and_staff_users: "{{ demo_test_users }}"
when: not DEMO_CREATE_STAFF_USER
- name: build staff, admin, and test user list
set_fact:
demo_test_admin_and_staff_users: "{{ demo_test_and_staff_users + SANDBOX_EDXAPP_USERS }}"
- name: create some test users
shell: "{{ demo_edxapp_venv_bin }}/python ./manage.py lms --settings={{ demo_edxapp_settings }} --service-variant lms manage_user {{ item.username}} {{ item.email }} --initial-password-hash {{ item.hashed_password | quote }}{% if item.is_staff %} --staff{% endif %}"
shell: "{{ demo_edxapp_venv_bin }}/python ./manage.py lms --settings={{ demo_edxapp_settings }} --service-variant lms manage_user {{ item.username}} {{ item.email }} --initial-password-hash {{ item.hashed_password | quote }}{% if item.is_staff %} --staff{% endif %}{% if item.is_superuser %} --superuser{% endif %}"
args:
chdir: "{{ demo_edxapp_code_dir }}"
become_user: "{{ common_web_user }}"
with_items: "{{ demo_test_and_staff_users }}"
with_items: "{{ demo_test_admin_and_staff_users }}"
when: demo_checkout.changed
- name: enroll test users in the demo course
......
---
SQLITE_FIX_TMP_DIR: "/var/tmp/sqlite_fix"
PYSQLITE_URL: "https://codeload.github.com/ghaering/pysqlite/tar.gz/2.8.3"
PYSQLITE_CREATED_PATH: "pysqlite-2.8.3"
PYSQLITE_TMP_PATH: "{{ SQLITE_FIX_TMP_DIR }}/{{ PYSQLITE_CREATED_PATH }}"
SQLITE_AUTOCONF_URL: "https://www.sqlite.org/2016/sqlite-autoconf-3140100.tar.gz"
SQLITE_AUTOCONF_CREATED_PATH: "sqlite-autoconf-3140100"
SQLITE_TMP_PATH: "{{ SQLITE_FIX_TMP_DIR }}/{{ SQLITE_AUTOCONF_CREATED_PATH }}"
---
- name: Create sqlite fix temp directory
file:
path: "{{ SQLITE_FIX_TMP_DIR }}"
state: directory
mode: 0775
when: devstack is defined and devstack
tags:
- devstack
- devstack:install
# Tasks to download and upgrade pysqlite to prevent segfaults when testing in devstack
- name: Download and unzip sqlite autoconf update
unarchive:
src: "{{ SQLITE_AUTOCONF_URL }}"
dest: "{{ SQLITE_FIX_TMP_DIR }}"
remote_src: yes
when: devstack is defined and devstack
tags:
- devstack
- devstack:install
- name: Download and unzip pysqlite update
unarchive:
src: "{{ PYSQLITE_URL }}"
dest: "{{ SQLITE_FIX_TMP_DIR }}"
remote_src: yes
when: devstack is defined and devstack
tags:
- devstack
- devstack:install
# Copy module doesn't support recursive dir copies for remote_src: yes
- name: Copy sqlite autoconf into pysqlite update dir
command: "cp -av . {{ PYSQLITE_TMP_PATH }}/"
args:
chdir: "{{ SQLITE_TMP_PATH }}"
when: devstack is defined and devstack
tags:
- devstack
- devstack:install
- name: Build and install pysqlite update
command: "python setup.py build_static install"
args:
chdir: "{{ PYSQLITE_TMP_PATH }}"
when: devstack is defined and devstack
tags:
- devstack
- devstack:install
- name: Clean up pysqlite install artifacts
file:
state: absent
path: "{{ SQLITE_FIX_TMP_DIR }}/"
when: devstack is defined and devstack
tags:
- devstack
- devstack:install
......@@ -11,6 +11,7 @@
# Defaults for role discovery
#
DISCOVERY_GIT_IDENTITY: !!null
#
# vars are namespaced with the module name.
......@@ -21,6 +22,9 @@ discovery_gunicorn_port: 8381
discovery_environment:
DISCOVERY_CFG: "{{ COMMON_CFG_DIR }}/{{ discovery_service_name }}.yml"
discovery_user: "{{ discovery_service_name }}"
discovery_home: "{{ COMMON_APP_DIR }}/{{ discovery_service_name }}"
discovery_code_dir: "{{ discovery_home }}/{{ discovery_service_name }}"
#
# OS packages
......@@ -55,7 +59,20 @@ DISCOVERY_URL_ROOT: 'http://discovery:{{ DISCOVERY_NGINX_PORT }}'
DISCOVERY_LOGOUT_URL: '{{ DISCOVERY_URL_ROOT }}/logout/'
DISCOVERY_SECRET_KEY: 'Your secret key here'
DISCOVERY_LANGUAGE_CODE: 'en-us'
DISCOVERY_LANGUAGE_CODE: 'en'
## Configuration for django-parler package. For more information visit
## https://django-parler.readthedocs.io/en/latest/configuration.html#parler-languages
DISCOVERY_PARLER_DEFAULT_LANGUAGE_CODE: '{{ DISCOVERY_LANGUAGE_CODE }}'
DISCOVERY_PARLER_LANGUAGES:
  1:
    - code: 'en'
  default:
    fallbacks:
      - '{{ DISCOVERY_PARLER_DEFAULT_LANGUAGE_CODE }}'
    hide_untranslated: 'False'
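## For example, to serve Spanish translations alongside English, an override
## might look like this (illustrative only, not a default):
## DISCOVERY_PARLER_LANGUAGES:
##   1:
##     - code: 'en'
##     - code: 'es'
##   default:
##     fallbacks:
##       - 'en'
##     hide_untranslated: 'False'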
DISCOVERY_DEFAULT_PARTNER_ID: 1
DISCOVERY_SESSION_EXPIRE_AT_BROWSER_CLOSE: false
......@@ -94,10 +111,21 @@ DISCOVERY_EMAIL_HOST_PASSWORD: ''
DISCOVERY_PUBLISHER_FROM_EMAIL: !!null
DISCOVERY_OPENEXCHANGERATES_API_KEY: ''
DISCOVERY_GUNICORN_EXTRA: ''
DISCOVERY_EXTRA_APPS: []
DISCOVERY_REPOS:
- PROTOCOL: "{{ COMMON_GIT_PROTOCOL }}"
DOMAIN: "{{ COMMON_GIT_MIRROR }}"
PATH: "{{ COMMON_GIT_PATH }}"
REPO: 'course-discovery.git'
VERSION: "{{ DISCOVERY_VERSION }}"
DESTINATION: "{{ discovery_code_dir }}"
SSH_KEY: "{{ DISCOVERY_GIT_IDENTITY }}"
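# For example, to deploy discovery from a fork (illustrative values only):
# DISCOVERY_REPOS:
#   - PROTOCOL: 'https'
#     DOMAIN: 'github.com'
#     PATH: 'my-org'
#     REPO: 'course-discovery.git'
#     VERSION: 'my-branch'
#     DESTINATION: "{{ discovery_code_dir }}"
#     SSH_KEY: "{{ DISCOVERY_GIT_IDENTITY }}"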
discovery_service_config_overrides:
ELASTICSEARCH_URL: '{{ DISCOVERY_ELASTICSEARCH_URL }}'
ELASTICSEARCH_INDEX_NAME: '{{ DISCOVERY_ELASTICSEARCH_INDEX_NAME }}'
......@@ -121,5 +149,11 @@ discovery_service_config_overrides:
PUBLISHER_FROM_EMAIL: '{{ DISCOVERY_PUBLISHER_FROM_EMAIL }}'
OPENEXCHANGERATES_API_KEY: '{{ DISCOVERY_OPENEXCHANGERATES_API_KEY }}'
LANGUAGE_CODE: '{{ DISCOVERY_LANGUAGE_CODE }}'
PARLER_DEFAULT_LANGUAGE_CODE: '{{ DISCOVERY_PARLER_DEFAULT_LANGUAGE_CODE }}'
PARLER_LANGUAGES: '{{ DISCOVERY_PARLER_LANGUAGES }}'
# See edx_django_service_automated_users for an example of what this should be
DISCOVERY_AUTOMATED_USERS: {}
......@@ -20,9 +20,10 @@
# }
dependencies:
- role: edx_django_service
edx_django_service_repo: 'course-discovery'
edx_django_service_version: '{{ DISCOVERY_VERSION }}'
edx_django_service_repos: '{{ DISCOVERY_REPOS }}'
edx_django_service_name: '{{ discovery_service_name }}'
edx_django_service_user: '{{ discovery_user }}'
edx_django_service_home: '{{ COMMON_APP_DIR }}/{{ discovery_service_name }}'
edx_django_service_config_overrides: '{{ discovery_service_config_overrides }}'
edx_django_service_debian_pkgs_extra: '{{ discovery_debian_pkgs }}'
edx_django_service_gunicorn_port: '{{ discovery_gunicorn_port }}'
......
......@@ -5,9 +5,10 @@ docker_tools_deps_deb_pkgs:
- ca-certificates
- python-pip
docker_apt_keyserver: "hkp://ha.pool.sks-keyservers.net:80"
docker_apt_key_id: "58118E89F3A912897C070ADBF76221572C52609D"
docker_repo: "deb https://apt.dockerproject.org/repo ubuntu-xenial main"
docker_apt_key_url: "https://download.docker.com/linux/ubuntu/gpg"
docker_repos:
- "deb [arch=amd64] https://download.docker.com/linux/ubuntu {{ ansible_distribution_release }} stable"
- "deb [arch=amd64] https://download.docker.com/linux/ubuntu {{ ansible_distribution_release }} edge"
docker_group: "docker"
docker_users: []
......@@ -29,24 +29,23 @@
- name: add docker apt key
apt_key:
keyserver: "{{ docker_apt_keyserver }}"
id: "{{ docker_apt_key_id }}"
url: "{{ docker_apt_key_url }}"
tags:
- install
- install:configuration
- name: add docker repo
apt_repository:
repo: "{{ docker_repo }}"
repo: "{{ item }}"
with_items: "{{ docker_repos }}"
tags:
- install
- install:configuration
- name: install docker-engine
apt:
name: "docker-engine"
name: "docker-ce"
update_cache: yes
cache_valid_time: "{{ cache_valid_time }}"
tags:
- install
- install:system-requirements
......
......@@ -20,6 +20,8 @@ ECOMMERCE_PIP_EXTRA_ARGS: "-i {{ COMMON_PYPI_MIRROR_URL }}"
ECOMMERCE_NGINX_PORT: "18130"
ECOMMERCE_SSL_NGINX_PORT: 48130
ECOMMERCE_MEMCACHE: [ 'localhost:11211' ]
ECOMMERCE_DEFAULT_DB_NAME: 'ecommerce'
ECOMMERCE_DATABASE_USER: "ecomm001"
ECOMMERCE_DATABASE_PASSWORD: "password"
......@@ -44,65 +46,81 @@ ECOMMERCE_DATABASES:
ECOMMERCE_VERSION: "master"
ECOMMERCE_DJANGO_SETTINGS_MODULE: "ecommerce.settings.production"
ECOMMERCE_OAUTH_URL_ROOT: '{{ EDXAPP_LMS_ROOT_URL | default("http://127.0.0.1:8000") }}/oauth2'
ECOMMERCE_OIDC_LOGOUT_URL: '{{ EDXAPP_LMS_ROOT_URL | default("http://127.0.0.1:8000") }}/logout'
ECOMMERCE_SESSION_EXPIRE_AT_BROWSER_CLOSE: false
ECOMMERCE_SECRET_KEY: 'Your secret key here'
ECOMMERCE_TIME_ZONE: 'UTC'
ECOMMERCE_LANGUAGE_CODE: 'en-us'
ECOMMERCE_LANGUAGE_CODE: 'en'
ECOMMERCE_LANGUAGE_COOKIE_NAME: 'openedx-language-preference'
ECOMMERCE_EDX_API_KEY: 'PUT_YOUR_API_KEY_HERE' # This should match the value set for edxapp
ECOMMERCE_ECOMMERCE_URL_ROOT: 'http://localhost:8002'
ECOMMERCE_LOGOUT_URL: '{{ ECOMMERCE_ECOMMERCE_URL_ROOT }}/logout/'
ECOMMERCE_LMS_URL_ROOT: 'http://127.0.0.1:8000'
ECOMMERCE_JWT_SECRET_KEY: '{{ EDXAPP_JWT_SECRET_KEY | default("lms-secret") }}'
ECOMMERCE_JWT_ALGORITHM: 'HS256'
ECOMMERCE_JWT_VERIFY_EXPIRATION: true
ECOMMERCE_JWT_DECODE_HANDLER: 'ecommerce.extensions.api.handlers.jwt_decode_handler'
ECOMMERCE_JWT_ISSUERS:
- '{{ ECOMMERCE_OAUTH_URL_ROOT }}'
- '{{ COMMON_JWT_ISSUER }}'
- 'ecommerce_worker' # Must match the value of JWT_ISSUER configured for the ecommerce worker.
ECOMMERCE_JWT_LEEWAY: 1
# NOTE: We use an array of keys to support multiple keys when, for example,
# we rotate keys. This ensures we continue to accept JWTs signed with the old key
# while migrating to the new key.
ECOMMERCE_JWT_SECRET_KEYS:
- '{{ ECOMMERCE_JWT_SECRET_KEY }}'
- '{{ COMMON_JWT_SECRET_KEY }}'
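# A sketch of how a consumer can honor several keys during a rotation
# (hypothetical pseudocode, not the actual ecommerce decode handler):
#   for secret in JWT_SECRET_KEYS:
#       try:
#           return jwt.decode(token, secret)
#       except jwt.InvalidTokenError:
#           continue
#   raise jwt.InvalidTokenError('token matched no configured key')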
# Used to automatically configure OAuth2 Client
ECOMMERCE_SOCIAL_AUTH_EDX_OIDC_KEY : 'ecommerce-key'
ECOMMERCE_SOCIAL_AUTH_EDX_OIDC_SECRET : 'ecommerce-secret'
ECOMMERCE_SOCIAL_AUTH_REDIRECT_IS_HTTPS: false
ECOMMERCE_SOCIAL_AUTH_EDX_OIDC_ISSUER: '{{ ECOMMERCE_OAUTH_URL_ROOT }}'
# Settings for affiliate cookie tracking
ECOMMERCE_AFFILIATE_COOKIE_NAME: '{{ EDXAPP_AFFILIATE_COOKIE_NAME | default("dev_affiliate_id") }}'
ECOMMERCE_OSCAR_FROM_EMAIL: 'oscar@example.com'
# NOTE: The contents of the certificates should be set in private configuration
ecommerce_apple_pay_merchant_certificate_directory: '/edx/etc/ssl'
ecommerce_apple_pay_merchant_certificate_filename: 'apple_pay_merchant.pem'
ecommerce_apple_pay_merchant_certificate_path: '{{ ecommerce_apple_pay_merchant_certificate_directory }}/{{ ecommerce_apple_pay_merchant_certificate_filename }}'
ECOMMERCE_APPLE_PAY_MERCHANT_CERTIFICATE: |
Your PEM file, containing a public and private key,
should be set in private configuration. This is how you
implement a multi-line string in YAML.
ECOMMERCE_APPLE_PAY_MERCHANT_ID_DOMAIN_ASSOCIATION: |
This value should also be in private configuration. It, too,
will span multiple lines.
ECOMMERCE_APPLE_PAY_MERCHANT_IDENTIFIER: 'merchant.com.example'
ECOMMERCE_APPLE_PAY_COUNTRY_CODE: 'US'
# CyberSource related
ECOMMERCE_CYBERSOURCE_PROFILE_ID: 'SET-ME-PLEASE'
ECOMMERCE_CYBERSOURCE_MERCHANT_ID: 'SET-ME-PLEASE'
ECOMMERCE_CYBERSOURCE_ACCESS_KEY: 'SET-ME-PLEASE'
ECOMMERCE_CYBERSOURCE_SECRET_KEY: 'SET-ME-PLEASE'
ECOMMERCE_CYBERSOURCE_SOP_ACCESS_KEY: 'SET-ME-PLEASE'
ECOMMERCE_CYBERSOURCE_SOP_PROFILE_ID: 'SET-ME-PLEASE'
ECOMMERCE_CYBERSOURCE_SOP_SECRET_KEY: 'SET-ME-PLEASE'
ECOMMERCE_CYBERSOURCE_SOP_PAYMENT_PAGE_URL: 'https://testsecureacceptance.cybersource.com/silent/pay'
ECOMMERCE_CYBERSOURCE_TRANSACTION_KEY: 'SET-ME-PLEASE'
ECOMMERCE_CYBERSOURCE_PAYMENT_PAGE_URL: 'https://set-me-please'
ECOMMERCE_CYBERSOURCE_RECEIPT_PAGE_URL: '{{ ECOMMERCE_LMS_URL_ROOT }}/commerce/checkout/receipt/'
ECOMMERCE_CYBERSOURCE_CANCEL_PAGE_URL: '{{ ECOMMERCE_LMS_URL_ROOT }}/commerce/checkout/cancel/'
ECOMMERCE_CYBERSOURCE_SOAP_API_URL: 'https://set-me-please'
ECOMMERCE_OSCAR_FROM_EMAIL: 'oscar@example.com'
# PayPal related
ECOMMERCE_PAYPAL_MODE: 'SET-ME-PLEASE'
ECOMMERCE_CYBERSOURCE_PAYMENT_PAGE_URL: 'https://testsecureacceptance.cybersource.com/pay'
ECOMMERCE_CYBERSOURCE_RECEIPT_PAGE_URL: '/checkout/receipt/'
ECOMMERCE_CYBERSOURCE_CANCEL_PAGE_URL: '/checkout/cancel-checkout/'
ECOMMERCE_CYBERSOURCE_SEND_LEVEL_2_3_DETAILS: true
ECOMMERCE_CYBERSOURCE_SOAP_API_URL: 'https://ics2wstest.ic3.com/commerce/1.x/transactionProcessor/CyberSourceTransaction_1.140.wsdl'
# PayPal
ECOMMERCE_PAYPAL_MODE: 'sandbox'
ECOMMERCE_PAYPAL_CLIENT_ID: 'SET-ME-PLEASE'
ECOMMERCE_PAYPAL_CLIENT_SECRET: 'SET-ME-PLEASE'
ECOMMERCE_PAYPAL_RECEIPT_URL: '{{ ECOMMERCE_LMS_URL_ROOT }}/commerce/checkout/receipt/'
ECOMMERCE_PAYPAL_CANCEL_URL: '{{ ECOMMERCE_LMS_URL_ROOT }}/commerce/checkout/cancel/'
ECOMMERCE_PAYPAL_ERROR_URL: '{{ ECOMMERCE_LMS_URL_ROOT }}/commerce/checkout/error/'
ECOMMERCE_PAYPAL_RECEIPT_URL: '/checkout/receipt/'
ECOMMERCE_PAYPAL_CANCEL_URL: '/checkout/cancel-checkout/'
ECOMMERCE_PAYPAL_ERROR_URL: '/checkout/error/'
ECOMMERCE_PAYMENT_PROCESSOR_CONFIG:
edx:
cybersource:
profile_id: '{{ ECOMMERCE_CYBERSOURCE_PROFILE_ID }}'
merchant_id: '{{ ECOMMERCE_CYBERSOURCE_MERCHANT_ID }}'
profile_id: '{{ ECOMMERCE_CYBERSOURCE_PROFILE_ID }}'
access_key: '{{ ECOMMERCE_CYBERSOURCE_ACCESS_KEY }}'
secret_key: '{{ ECOMMERCE_CYBERSOURCE_SECRET_KEY }}'
transaction_key: '{{ ECOMMERCE_CYBERSOURCE_TRANSACTION_KEY }}'
......@@ -110,6 +128,17 @@ ECOMMERCE_PAYMENT_PROCESSOR_CONFIG:
receipt_page_url: '{{ ECOMMERCE_CYBERSOURCE_RECEIPT_PAGE_URL }}'
cancel_page_url: '{{ ECOMMERCE_CYBERSOURCE_CANCEL_PAGE_URL }}'
soap_api_url: '{{ ECOMMERCE_CYBERSOURCE_SOAP_API_URL }}'
send_level_2_3_details: '{{ ECOMMERCE_CYBERSOURCE_SEND_LEVEL_2_3_DETAILS }}'
sop_profile_id: '{{ ECOMMERCE_CYBERSOURCE_SOP_PROFILE_ID }}'
sop_access_key: '{{ ECOMMERCE_CYBERSOURCE_SOP_ACCESS_KEY }}'
sop_secret_key: '{{ ECOMMERCE_CYBERSOURCE_SOP_SECRET_KEY }}'
sop_payment_page_url: '{{ ECOMMERCE_CYBERSOURCE_SOP_PAYMENT_PAGE_URL }}'
# NOTE: These are simple placeholders meant to show what keys are needed for Apple Pay. These values
# should be overwritten in private configuration.
apple_pay_merchant_identifier: '{{ ECOMMERCE_APPLE_PAY_MERCHANT_IDENTIFIER }}'
apple_pay_merchant_id_domain_association: '{{ ECOMMERCE_APPLE_PAY_MERCHANT_ID_DOMAIN_ASSOCIATION }}'
apple_pay_merchant_id_certificate_path: '{{ ecommerce_apple_pay_merchant_certificate_path }}'
apple_pay_country_code: '{{ ECOMMERCE_APPLE_PAY_COUNTRY_CODE }}'
paypal:
mode: '{{ ECOMMERCE_PAYPAL_MODE }}'
client_id: '{{ ECOMMERCE_PAYPAL_CLIENT_ID }}'
......@@ -146,6 +175,7 @@ ECOMMERCE_SERVICE_CONFIG:
SECRET_KEY: '{{ ECOMMERCE_SECRET_KEY }}'
TIME_ZONE: '{{ ECOMMERCE_TIME_ZONE }}'
LANGUAGE_COOKIE_NAME: '{{ ECOMMERCE_LANGUAGE_COOKIE_NAME }}'
LANGUAGE_CODE: '{{ ECOMMERCE_LANGUAGE_CODE }}'
EDX_API_KEY: '{{ ECOMMERCE_EDX_API_KEY }}'
OSCAR_FROM_EMAIL: '{{ ECOMMERCE_OSCAR_FROM_EMAIL }}'
......@@ -159,7 +189,7 @@ ECOMMERCE_SERVICE_CONFIG:
COMMERCE_API_URL: '{{ ECOMMERCE_LMS_URL_ROOT }}/api/commerce/v1/'
LMS_DASHBOARD_URL: '{{ ECOMMERCE_LMS_URL_ROOT }}/dashboard'
JWT_AUTH:
JWT_SECRET_KEY: '{{ ECOMMERCE_JWT_SECRET_KEY }}'
JWT_SECRET_KEY: '{{ COMMON_JWT_SECRET_KEY }}'
JWT_ALGORITHM: '{{ ECOMMERCE_JWT_ALGORITHM }}'
JWT_VERIFY_EXPIRATION: '{{ ECOMMERCE_JWT_VERIFY_EXPIRATION }}'
JWT_LEEWAY: '{{ ECOMMERCE_JWT_LEEWAY }}'
......@@ -169,10 +199,10 @@ ECOMMERCE_SERVICE_CONFIG:
SOCIAL_AUTH_EDX_OIDC_KEY: '{{ ECOMMERCE_SOCIAL_AUTH_EDX_OIDC_KEY }}'
SOCIAL_AUTH_EDX_OIDC_SECRET: '{{ ECOMMERCE_SOCIAL_AUTH_EDX_OIDC_SECRET }}'
SOCIAL_AUTH_EDX_OIDC_ID_TOKEN_DECRYPTION_KEY: '{{ ECOMMERCE_SOCIAL_AUTH_EDX_OIDC_SECRET }}'
SOCIAL_AUTH_EDX_OIDC_URL_ROOT: '{{ ECOMMERCE_OAUTH_URL_ROOT }}'
SOCIAL_AUTH_EDX_OIDC_LOGOUT_URL: '{{ ECOMMERCE_OIDC_LOGOUT_URL }}'
SOCIAL_AUTH_EDX_OIDC_URL_ROOT: '{{ COMMON_OAUTH_URL_ROOT }}'
SOCIAL_AUTH_EDX_OIDC_LOGOUT_URL: '{{ COMMON_OAUTH_LOGOUT_URL }}'
SOCIAL_AUTH_REDIRECT_IS_HTTPS: '{{ ECOMMERCE_SOCIAL_AUTH_REDIRECT_IS_HTTPS }}'
SOCIAL_AUTH_EDX_OIDC_ISSUER: '{{ ECOMMERCE_SOCIAL_AUTH_EDX_OIDC_ISSUER }}'
SOCIAL_AUTH_EDX_OIDC_ISSUER: '{{ COMMON_JWT_ISSUER }}'
AFFILIATE_COOKIE_KEY: '{{ ECOMMERCE_AFFILIATE_COOKIE_NAME }}'
STATIC_ROOT: "{{ COMMON_DATA_DIR }}/{{ ecommerce_service_name }}/staticfiles"
......@@ -192,6 +222,11 @@ ECOMMERCE_SERVICE_CONFIG:
ENABLE_COMPREHENSIVE_THEMING: "{{ ECOMMERCE_ENABLE_COMPREHENSIVE_THEMING }}"
DEFAULT_SITE_THEME: "{{ ECOMMERCE_DEFAULT_SITE_THEME }}"
CACHES:
default:
BACKEND: 'django.core.cache.backends.memcached.MemcachedCache'
KEY_PREFIX: 'ecommerce'
LOCATION: '{{ ECOMMERCE_MEMCACHE }}'
ECOMMERCE_REPOS:
- PROTOCOL: "{{ COMMON_GIT_PROTOCOL }}"
......
......@@ -9,7 +9,7 @@
#
##
# Role includes for role ecommerce
#
#
dependencies:
- common
- supervisor
......@@ -25,6 +25,6 @@ dependencies:
- role: edx_themes
theme_users:
- "{{ ecommerce_user }}"
when: "{{ ECOMMERCE_ENABLE_COMPREHENSIVE_THEMING }}"
when: ECOMMERCE_ENABLE_COMPREHENSIVE_THEMING
- oraclejdk
......@@ -68,24 +68,6 @@
- install
- install:app-requirements
# This is a hacked fix for the fact that the table `thumbnail_kvstore` already exists in
# some environments, which therefore don't need the newly introduced third-party migration
# that creates it, so we fake the migration.
# This is required for the Ginkgo release.
# TODO: Delete this task for the Hawthorn release.
- name: fake thumbnails
shell: >
table_exists=`mysql -uroot -ss -e "SELECT EXISTS(SELECT * FROM information_schema.tables WHERE table_schema = '{{ ECOMMERCE_DEFAULT_DB_NAME }}' AND table_name = 'thumbnail_kvstore')"`;
if [ "$table_exists" -eq "1" ]; then {{ ecommerce_venv_dir }}/bin/python ./manage.py migrate thumbnail 0001 --fake; fi;
args:
chdir: "{{ ecommerce_code_dir }}"
become_user: "{{ ecommerce_user }}"
environment: "{{ ecommerce_environment }}"
when: migrate_db is defined and migrate_db|lower == "yes"
tags:
- migrate
- migrate:db
- name: Migrate
shell: >
DB_MIGRATION_USER='{{ COMMON_MYSQL_MIGRATE_USER }}'
......@@ -170,6 +152,28 @@
- install
- install:configuration
- name: Create Apple Pay certificates directory
file:
path: "{{ ecommerce_apple_pay_merchant_certificate_directory }}"
state: directory
owner: "{{ supervisor_user }}"
group: "{{ common_web_user }}"
tags:
- install
- install:configuration
- name: Write Apple Pay merchant certificates
copy:
content: "{{ ECOMMERCE_APPLE_PAY_MERCHANT_CERTIFICATE }}"
dest: "{{ ecommerce_apple_pay_merchant_certificate_path }}"
owner: "{{ supervisor_user }}"
group: "{{ common_web_user }}"
mode: "0644"
no_log: true
tags:
- install
- install:configuration
- name: Set up the ecommerce env file
template:
src: "./{{ ecommerce_home }}/{{ ecommerce_service_name }}_env.j2"
......
......@@ -16,7 +16,9 @@ edx_django_service_name_devstack_logs:
- '{{ supervisor_log_dir }}/{{ edx_django_service_name }}-stdout.log'
- '{{ supervisor_log_dir }}/{{ edx_django_service_name }}-stderr.log'
edx_django_service_git_protocol: '{{ COMMON_GIT_PROTOCOL }}'
edx_django_service_git_domain: '{{ COMMON_GIT_MIRROR }}'
edx_django_service_git_path: '{{ COMMON_GIT_PATH }}'
edx_django_service_version: 'master'
edx_django_service_git_identity: null
edx_django_service_django_settings_module: null
......@@ -76,9 +78,9 @@ edx_django_service_basic_auth_exempted_paths: '{{ edx_django_service_basic_auth_
edx_django_service_newrelic_appname: '{{ COMMON_ENVIRONMENT }}-{{ COMMON_DEPLOYMENT }}-{{ edx_django_service_name }}'
edx_django_service_repos:
- PROTOCOL: '{{ COMMON_GIT_PROTOCOL }}'
DOMAIN: '{{ COMMON_GIT_MIRROR }}'
PATH: '{{ COMMON_GIT_PATH }}'
- PROTOCOL: '{{ edx_django_service_git_protocol }}'
DOMAIN: '{{ edx_django_service_git_domain }}'
PATH: '{{ edx_django_service_git_path }}'
REPO: '{{ edx_django_service_repo }}.git'
VERSION: '{{ edx_django_service_version }}'
DESTINATION: '{{ edx_django_service_code_dir }}'
......
......@@ -232,6 +232,7 @@
owner: root
group: "{{ common_web_user }}"
mode: 0640
when: nginx_app_dir is defined
notify: reload nginx
tags:
- install
......@@ -244,6 +245,7 @@
state: link
owner: root
group: root
when: nginx_app_dir is defined
notify: reload nginx
tags:
- install
......
......@@ -17,6 +17,12 @@
edx_service_name: edx_service
edx_service_repos: []
# A few roles include this role as a meta dependency but don't need a config
# file written. This allows them to omit passing a config, and the tasks will
# skip writing one out entirely.
edx_service_config: {}
#
# OS packages
#
......
......@@ -95,7 +95,7 @@
src: "config.yml.j2"
dest: "{{ COMMON_CFG_DIR }}/{{ edx_service_name }}.yml"
mode: "0644"
when: edx_service_config is defined
when: edx_service_config
tags:
- install
- install:configuration
......
......@@ -79,6 +79,7 @@ EDXAPP_MYSQL_PASSWORD: 'password'
EDXAPP_MYSQL_PASSWORD_READ_ONLY: 'password'
EDXAPP_MYSQL_PASSWORD_ADMIN: 'password'
EDXAPP_MYSQL_OPTIONS: {}
EDXAPP_MYSQL_ATOMIC_REQUESTS: True
EDXAPP_MYSQL_REPLICA_DB_NAME: "{{ EDXAPP_MYSQL_DB_NAME }}"
EDXAPP_MYSQL_REPLICA_USER: "{{ EDXAPP_MYSQL_USER }}"
EDXAPP_MYSQL_REPLICA_PASSWORD: "{{ EDXAPP_MYSQL_PASSWORD }}"
......@@ -135,7 +136,6 @@ EDXAPP_COMMENTS_SERVICE_KEY: 'password'
EDXAPP_EDXAPP_SECRET_KEY: "DUMMY KEY CHANGE BEFORE GOING TO PRODUCTION"
EDXAPP_ANALYTICS_API_KEY: ""
EDXAPP_LTI_USER_EMAIL_DOMAIN: "lti.example.com"
# 900s, or 15 mins
EDXAPP_LTI_AGGREGATE_SCORE_PASSBACK_DELAY: 900
......@@ -162,8 +162,10 @@ EDXAPP_PARTNER_SUPPORT_EMAIL: ''
EDXAPP_AUDIT_CERT_CUTOFF_DATE: null
EDXAPP_PLATFORM_NAME: 'Your Platform Name Here'
EDXAPP_PLATFORM_DESCRIPTION: 'Your Platform Description Here'
EDXAPP_STUDIO_NAME: 'Studio'
EDXAPP_STUDIO_SHORT_NAME: 'Studio'
EDXAPP_ANALYTICS_DASHBOARD_NAME: "{{ EDXAPP_PLATFORM_NAME }} Insights"
EDXAPP_CAS_SERVER_URL: ""
EDXAPP_CAS_EXTRA_LOGIN_PARAMS: ""
......@@ -233,6 +235,14 @@ EDXAPP_PDF_RECEIPT_LOGO_PATH: ""
EDXAPP_SOCIAL_AUTH_OAUTH_SECRETS: ""
EDXAPP_ACE_CHANNEL_SAILTHRU_API_KEY: ""
EDXAPP_ACE_CHANNEL_SAILTHRU_API_SECRET: ""
EDXAPP_ACE_ENABLED_CHANNELS: []
EDXAPP_ACE_ENABLED_POLICIES: []
EDXAPP_ACE_CHANNEL_SAILTHRU_DEBUG: True
EDXAPP_ACE_CHANNEL_SAILTHRU_TEMPLATE_NAME: !!null
EDXAPP_ACE_ROUTING_KEY: 'edx.lms.core.low'
# Display a language selector in the LMS/CMS header.
EDXAPP_SHOW_HEADER_LANGUAGE_SELECTOR: false
......@@ -245,7 +255,6 @@ EDXAPP_FEATURES:
ENABLE_INSTRUCTOR_ANALYTICS: false
PREVIEW_LMS_BASE: "{{ EDXAPP_PREVIEW_LMS_BASE }}"
ENABLE_GRADE_DOWNLOADS: true
USE_CUSTOM_THEME: "{{ edxapp_use_custom_theme }}"
ENABLE_MKTG_SITE: "{{ EDXAPP_ENABLE_MKTG_SITE }}"
AUTOMATIC_AUTH_FOR_TESTING: "{{ EDXAPP_ENABLE_AUTO_AUTH }}"
ENABLE_THIRD_PARTY_AUTH: "{{ EDXAPP_ENABLE_THIRD_PARTY_AUTH }}"
......@@ -270,7 +279,6 @@ EDXAPP_FEATURES:
SHOW_HEADER_LANGUAGE_SELECTOR: "{{ EDXAPP_SHOW_HEADER_LANGUAGE_SELECTOR }}"
SHOW_FOOTER_LANGUAGE_SELECTOR: "{{ EDXAPP_SHOW_FOOTER_LANGUAGE_SELECTOR }}"
EDXAPP_BOOK_URL: ""
# This needs to be set to localhost
# if xqueue is run on the same server
# as the lms (it's sent in the request)
......@@ -278,7 +286,6 @@ EDXAPP_SITE_NAME: 'localhost'
EDXAPP_LMS_SITE_NAME: "{{ EDXAPP_SITE_NAME }}"
EDXAPP_CMS_SITE_NAME: 'localhost'
EDXAPP_MEDIA_URL: "/media"
EDXAPP_ANALYTICS_SERVER_URL: ""
EDXAPP_FEEDBACK_SUBMISSION_EMAIL: ""
EDXAPP_CELERY_BROKER_HOSTNAME: ""
EDXAPP_LOGGING_ENV: 'sandbox'
......@@ -302,6 +309,9 @@ EDXAPP_RATE_LIMITED_USER_AGENTS: []
EDXAPP_LANG: 'en_US.UTF-8'
EDXAPP_LANGUAGE_CODE: 'en'
EDXAPP_LANGUAGE_COOKIE: 'openedx-language-preference'
EDXAPP_CERTIFICATE_TEMPLATE_LANGUAGES:
'en': 'English'
'es': 'Español'
EDXAPP_TIME_ZONE: 'America/New_York'
EDXAPP_HELP_TOKENS_BOOKS:
......@@ -425,6 +435,11 @@ EDXAPP_INSTALL_PRIVATE_REQUIREMENTS: false
# - name: git+https://git.myproject.org/MyProject#egg=MyProject
EDXAPP_EXTRA_REQUIREMENTS: []
# List of custom middleware classes that edxapp should use to process
# incoming HTTP requests. Should be a list of plain strings that fully
# qualify Python classes or functions that can be used as Django middleware.
EDXAPP_EXTRA_MIDDLEWARE_CLASSES: []
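# For example (hypothetical class path, not shipped with edx-platform):
# EDXAPP_EXTRA_MIDDLEWARE_CLASSES:
#   - 'my_plugin.middleware.TenantHeaderMiddleware'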
EDXAPP_GOOGLE_ANALYTICS_ACCOUNT: "None"
EDXAPP_OPTIMIZELY_PROJECT_ID: "None"
......@@ -481,18 +496,18 @@ EDXAPP_VIRTUAL_UNIVERSITIES: []
# lms: <num workers>
# cms: <num workers>
EDXAPP_WORKERS: !!null
EDXAPP_ANALYTICS_DATA_TOKEN: ""
EDXAPP_ANALYTICS_DATA_URL: ""
# Dashboard URL, assumes that the insights role is installed locally
EDXAPP_ANALYTICS_DASHBOARD_URL: "http://localhost:18110/courses"
EDXAPP_REGISTRATION_EXTRA_FIELDS:
confirm_email: "hidden"
level_of_education: "optional"
gender: "optional"
year_of_birth: "optional"
mailing_address: "hidden"
goals: "optional"
honor_code: "required"
terms_of_service: "hidden"
city: "hidden"
country: "required"
......@@ -527,6 +542,8 @@ EDXAPP_CELERY_WORKERS:
monitor: False
max_tasks_per_child: 1
EDXAPP_RECALCULATE_GRADES_ROUTING_KEY: 'edx.lms.core.default'
EDXAPP_POLICY_CHANGE_GRADES_ROUTING_KEY: 'edx.lms.core.default'
EDXAPP_BULK_EMAIL_ROUTING_KEY_SMALL_JOBS: 'edx.lms.core.low'
EDXAPP_LMS_CELERY_QUEUES: "{{ edxapp_workers|selectattr('service_variant', 'equalto', 'lms')|map(attribute='queue')|map('regex_replace', '(.*)', 'edx.lms.core.\\1')|list }}"
EDXAPP_CMS_CELERY_QUEUES: "{{ edxapp_workers|selectattr('service_variant', 'equalto', 'cms')|map(attribute='queue')|map('regex_replace', '(.*)', 'edx.cms.core.\\1')|list }}"
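# For example, if edxapp_workers defines lms workers with queues 'default' and
# 'high', EDXAPP_LMS_CELERY_QUEUES evaluates to
# ['edx.lms.core.default', 'edx.lms.core.high'].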
......@@ -534,6 +551,8 @@ EDXAPP_DEFAULT_CACHE_VERSION: "1"
EDXAPP_OAUTH_ENFORCE_SECURE: True
EDXAPP_OAUTH_EXPIRE_CONFIDENTIAL_CLIENT_DAYS: 365
EDXAPP_OAUTH_EXPIRE_PUBLIC_CLIENT_DAYS: 30
# This turns on deletion of access tokens, refresh tokens, and grants when consumed (not bulk deletions)
EDXAPP_OAUTH_DELETE_EXPIRED: True
# Directory for edxapp application configuration files
EDXAPP_CFG_DIR: "{{ COMMON_CFG_DIR }}/edxapp"
......@@ -609,6 +628,12 @@ EDXAPP_PROFILE_IMAGE_SECRET_KEY: placeholder_secret_key
EDXAPP_PROFILE_IMAGE_MAX_BYTES: 1048576
EDXAPP_PROFILE_IMAGE_MIN_BYTES: 100
EDXAPP_PROFILE_IMAGE_SIZES_MAP:
full: 500
large: 120
medium: 50
small: 30
EDXAPP_PARSE_KEYS: {}
# In a production environment when using separate clusters, you'll
......@@ -664,6 +689,28 @@ EDXAPP_SESSION_SAVE_EVERY_REQUEST: false
EDXAPP_SESSION_COOKIE_SECURE: false
EDXAPP_VIDEO_IMAGE_MAX_AGE: 31536000
# This is django storage configuration for Video Image settings.
# You can configure S3 or Swift in lms/envs/common.py
EDXAPP_VIDEO_IMAGE_SETTINGS:
VIDEO_IMAGE_MAX_BYTES: 2097152
VIDEO_IMAGE_MIN_BYTES: 2048
STORAGE_KWARGS:
location: "{{ edxapp_media_dir }}/"
base_url: "{{ EDXAPP_MEDIA_URL }}/"
DIRECTORY_PREFIX: 'video-images/'
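# For example, to back video images with S3 via django-storages instead of the
# local filesystem (illustrative override; the bucket name is a placeholder):
# EDXAPP_VIDEO_IMAGE_SETTINGS:
#   VIDEO_IMAGE_MAX_BYTES: 2097152
#   VIDEO_IMAGE_MIN_BYTES: 2048
#   STORAGE_KWARGS:
#     bucket: 'my-video-images-bucket'
#   DIRECTORY_PREFIX: 'video-images/'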
EDXAPP_VIDEO_TRANSCRIPTS_MAX_AGE: 31536000
# This is django storage configuration for Video Transcripts settings.
EDXAPP_VIDEO_TRANSCRIPTS_SETTINGS:
VIDEO_TRANSCRIPTS_MAX_BYTES: 3145728
STORAGE_KWARGS:
location: "{{ edxapp_media_dir }}/"
base_url: "{{ EDXAPP_MEDIA_URL }}/"
DIRECTORY_PREFIX: 'video-transcripts/'
# Course Block Structures
EDXAPP_BLOCK_STRUCTURES_SETTINGS:
# Delay, in seconds, after a new edit of a course is published
......@@ -682,15 +729,39 @@ EDXAPP_BLOCK_STRUCTURES_SETTINGS:
# Configuration settings needed for the LMS to communicate with the Enterprise service.
EDXAPP_ENTERPRISE_API_URL: "{{ EDXAPP_LMS_ROOT_URL }}/enterprise/api/v1"
EDXAPP_ENTERPRISE_SERVICE_WORKER_EMAIL: "enterprise_worker@example.com"
EDXAPP_ENTERPRISE_SERVICE_WORKER_USERNAME: "enterprise_worker"
EDXAPP_ENTERPRISE_COURSE_ENROLLMENT_AUDIT_MODES:
- audit
- honor
EDXAPP_ENTERPRISE_ENROLLMENT_API_URL: "{{ EDXAPP_LMS_ROOT_URL }}/api/enrollment/v1/"
# The default value of this needs to be a 16-character string
EDXAPP_ENTERPRISE_REPORTING_SECRET: '0000000000000000'
EDXAPP_ENTERPRISE_SUPPORT_URL: ''
EDXAPP_ENTERPRISE_TAGLINE: ''
# The assigned ICP license number for display in the platform footer
EDXAPP_ICP_LICENSE: !!null
# Base Cookie Domain to share cookie across edx domains
EDXAPP_BASE_COOKIE_DOMAIN: "{{ EDXAPP_LMS_SITE_NAME }}"
# The minimum and maximum length for the account password field
EDXAPP_PASSWORD_MIN_LENGTH: 2
EDXAPP_PASSWORD_MAX_LENGTH: 75
# The age at which a learner no longer requires parental consent, or None
EDXAPP_PARENTAL_CONSENT_AGE_LIMIT: 13
# Scorm Xblock configurations
EDXAPP_SCORM_PKG_STORAGE_DIR: !!null
EDXAPP_SCORM_PLAYER_LOCAL_STORAGE_ROOT: !!null
#-------- Everything below this line is internal to the role ------------
# Use YAML references (& and *) and the hash merge key <<: to factor out shared settings
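# For example:
#   _base_settings: &base_settings
#     TIME_ZONE: 'UTC'
#   service_config:
#     <<: *base_settings
#     NAME: 'service'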
......@@ -810,7 +881,7 @@ edxapp_databases:
PASSWORD: "{{ EDXAPP_MYSQL_PASSWORD }}"
HOST: "{{ EDXAPP_MYSQL_HOST }}"
PORT: "{{ EDXAPP_MYSQL_PORT }}"
ATOMIC_REQUESTS: True
ATOMIC_REQUESTS: "{{ EDXAPP_MYSQL_ATOMIC_REQUESTS }}"
CONN_MAX_AGE: "{{ EDXAPP_MYSQL_CONN_MAX_AGE }}"
OPTIONS: "{{ EDXAPP_MYSQL_OPTIONS }}"
student_module_history:
......@@ -826,7 +897,6 @@ edxapp_databases:
edxapp_generic_auth_config: &edxapp_generic_auth
EVENT_TRACKING_SEGMENTIO_EMIT_WHITELIST: "{{ EDXAPP_EVENT_TRACKING_SEGMENTIO_EMIT_WHITELIST }}"
ECOMMERCE_API_SIGNING_KEY: "{{ EDXAPP_ECOMMERCE_API_SIGNING_KEY }}"
ANALYTICS_DATA_TOKEN: "{{ EDXAPP_ANALYTICS_DATA_TOKEN }}"
DEFAULT_FILE_STORAGE: "{{ EDXAPP_DEFAULT_FILE_STORAGE }}"
AWS_ACCESS_KEY_ID: "{{ EDXAPP_AWS_ACCESS_KEY_ID }}"
AWS_SECRET_ACCESS_KEY: "{{ EDXAPP_AWS_SECRET_ACCESS_KEY }}"
......@@ -865,7 +935,6 @@ edxapp_generic_auth_config: &edxapp_generic_auth
ADDITIONAL_OPTIONS: "{{ EDXAPP_CONTENTSTORE_ADDITIONAL_OPTS }}"
DOC_STORE_CONFIG: *edxapp_generic_default_docstore
DATABASES: "{{ edxapp_databases }}"
ANALYTICS_API_KEY: "{{ EDXAPP_ANALYTICS_API_KEY }}"
EMAIL_HOST_USER: "{{ EDXAPP_EMAIL_HOST_USER }}"
EMAIL_HOST_PASSWORD: "{{ EDXAPP_EMAIL_HOST_PASSWORD }}"
YOUTUBE_API_KEY: "{{ EDXAPP_YOUTUBE_API_KEY }}"
......@@ -908,7 +977,6 @@ generic_env_config: &edxapp_generic_env
OAUTH_OIDC_ISSUER: "{{ EDXAPP_LMS_ISSUER }}"
XBLOCK_FS_STORAGE_BUCKET: "{{ EDXAPP_XBLOCK_FS_STORAGE_BUCKET }}"
XBLOCK_FS_STORAGE_PREFIX: "{{ EDXAPP_XBLOCK_FS_STORAGE_PREFIX }}"
ANALYTICS_DATA_URL: "{{ EDXAPP_ANALYTICS_DATA_URL }}"
ANALYTICS_DASHBOARD_URL: '{{ EDXAPP_ANALYTICS_DASHBOARD_URL }}'
CELERY_BROKER_VHOST: "{{ EDXAPP_CELERY_BROKER_VHOST }}"
CELERY_BROKER_USE_SSL: "{{ EDXAPP_CELERY_BROKER_USE_SSL }}"
......@@ -932,9 +1000,10 @@ generic_env_config: &edxapp_generic_env
LMS_BASE: "{{ EDXAPP_LMS_BASE }}"
CMS_BASE: "{{ EDXAPP_CMS_BASE }}"
LMS_ROOT_URL: "{{ EDXAPP_LMS_ROOT_URL }}"
BOOK_URL: "{{ EDXAPP_BOOK_URL }}"
PARTNER_SUPPORT_EMAIL: "{{ EDXAPP_PARTNER_SUPPORT_EMAIL }}"
PLATFORM_NAME: "{{ EDXAPP_PLATFORM_NAME }}"
PLATFORM_DESCRIPTION: "{{ EDXAPP_PLATFORM_DESCRIPTION }}"
ANALYTICS_DASHBOARD_NAME: "{{ EDXAPP_ANALYTICS_DASHBOARD_NAME }}"
STUDIO_NAME: "{{ EDXAPP_STUDIO_NAME }}"
STUDIO_SHORT_NAME: "{{ EDXAPP_STUDIO_SHORT_NAME }}"
CERT_QUEUE: 'certificates'
......@@ -963,11 +1032,11 @@ generic_env_config: &edxapp_generic_env
MEDIA_URL: "{{ EDXAPP_MEDIA_URL }}/"
MEDIA_ROOT: "{{ edxapp_media_dir }}/"
ANALYTICS_SERVER_URL: "{{ EDXAPP_ANALYTICS_SERVER_URL }}"
FEEDBACK_SUBMISSION_EMAIL: "{{ EDXAPP_FEEDBACK_SUBMISSION_EMAIL }}"
TIME_ZONE: "{{ EDXAPP_TIME_ZONE }}"
LANGUAGE_CODE: "{{ EDXAPP_LANGUAGE_CODE }}"
LANGUAGE_COOKIE: "{{ EDXAPP_LANGUAGE_COOKIE }}"
CERTIFICATE_TEMPLATE_LANGUAGES: "{{ EDXAPP_CERTIFICATE_TEMPLATE_LANGUAGES }}"
MKTG_URL_LINK_MAP: "{{ EDXAPP_MKTG_URL_LINK_MAP }}"
MKTG_URLS: "{{ EDXAPP_MKTG_URLS }}"
SUPPORT_SITE_LINK: "{{ EDXAPP_SUPPORT_SITE_LINK }}"
......@@ -1019,7 +1088,6 @@ generic_env_config: &edxapp_generic_env
SESSION_COOKIE_DOMAIN: "{{ EDXAPP_SESSION_COOKIE_DOMAIN }}"
SESSION_COOKIE_NAME: "{{ EDXAPP_SESSION_COOKIE_NAME }}"
COMMENTS_SERVICE_KEY: "{{ EDXAPP_COMMENTS_SERVICE_KEY }}"
THEME_NAME: "{{ edxapp_theme_name }}"
TECH_SUPPORT_EMAIL: "{{ EDXAPP_TECH_SUPPORT_EMAIL }}"
CONTACT_EMAIL: "{{ EDXAPP_CONTACT_EMAIL }}"
BUGS_EMAIL: "{{ EDXAPP_BUGS_EMAIL }}"
......@@ -1043,6 +1111,10 @@ generic_env_config: &edxapp_generic_env
REGISTRATION_EXTRA_FIELDS: "{{ EDXAPP_REGISTRATION_EXTRA_FIELDS }}"
XBLOCK_SETTINGS: "{{ EDXAPP_XBLOCK_SETTINGS }}"
EDXMKTG_USER_INFO_COOKIE_NAME: "{{ EDXAPP_EDXMKTG_USER_INFO_COOKIE_NAME }}"
VIDEO_IMAGE_MAX_AGE: "{{ EDXAPP_VIDEO_IMAGE_MAX_AGE }}"
VIDEO_IMAGE_SETTINGS: "{{ EDXAPP_VIDEO_IMAGE_SETTINGS }}"
VIDEO_TRANSCRIPTS_MAX_AGE: "{{ EDXAPP_VIDEO_TRANSCRIPTS_MAX_AGE }}"
VIDEO_TRANSCRIPTS_SETTINGS: "{{ EDXAPP_VIDEO_TRANSCRIPTS_SETTINGS }}"
BLOCK_STRUCTURES_SETTINGS: "{{ EDXAPP_BLOCK_STRUCTURES_SETTINGS }}"
# Deprecated, maintained for backward compatibility
......@@ -1067,6 +1139,12 @@ generic_env_config: &edxapp_generic_env
HELP_TOKENS_BOOKS: "{{ EDXAPP_HELP_TOKENS_BOOKS }}"
# License for serving content in China
ICP_LICENSE: "{{ EDXAPP_ICP_LICENSE }}"
# Base Cookie Domain to share cookie across edx domains
BASE_COOKIE_DOMAIN: "{{ EDXAPP_BASE_COOKIE_DOMAIN }}"
POLICY_CHANGE_GRADES_ROUTING_KEY: "{{ EDXAPP_POLICY_CHANGE_GRADES_ROUTING_KEY }}"
PROCTORING_SETTINGS: "{{ EDXAPP_PROCTORING_SETTINGS }}"
EXTRA_MIDDLEWARE_CLASSES: "{{ EDXAPP_EXTRA_MIDDLEWARE_CLASSES }}"
lms_auth_config:
<<: *edxapp_generic_auth
......@@ -1101,6 +1179,9 @@ lms_auth_config:
render_template: 'edxmako.shortcuts.render_to_string'
PROCTORING_BACKEND_PROVIDER: "{{ EDXAPP_PROCTORING_BACKEND_PROVIDER }}"
SOCIAL_AUTH_OAUTH_SECRETS: "{{ EDXAPP_SOCIAL_AUTH_OAUTH_SECRETS }}"
ACE_CHANNEL_SAILTHRU_API_KEY: "{{ EDXAPP_ACE_CHANNEL_SAILTHRU_API_KEY }}"
ACE_CHANNEL_SAILTHRU_API_SECRET: "{{ EDXAPP_ACE_CHANNEL_SAILTHRU_API_SECRET }}"
ENTERPRISE_REPORTING_SECRET: "{{ EDXAPP_ENTERPRISE_REPORTING_SECRET }}"
lms_env_config:
......@@ -1108,6 +1189,7 @@ lms_env_config:
OAUTH_ENFORCE_SECURE: "{{ EDXAPP_OAUTH_ENFORCE_SECURE }}"
OAUTH_EXPIRE_CONFIDENTIAL_CLIENT_DAYS: "{{ EDXAPP_OAUTH_EXPIRE_CONFIDENTIAL_CLIENT_DAYS }}"
OAUTH_EXPIRE_PUBLIC_CLIENT_DAYS: "{{ EDXAPP_OAUTH_EXPIRE_PUBLIC_CLIENT_DAYS }}"
OAUTH_DELETE_EXPIRED: "{{ EDXAPP_OAUTH_DELETE_EXPIRED }}"
PAID_COURSE_REGISTRATION_CURRENCY: "{{ EDXAPP_PAID_COURSE_REGISTRATION_CURRENCY }}"
GIT_REPO_DIR: "{{ EDXAPP_GIT_REPO_DIR }}"
SITE_NAME: "{{ EDXAPP_LMS_SITE_NAME }}"
......@@ -1123,11 +1205,11 @@ lms_env_config:
PROFILE_IMAGE_BACKEND: "{{ EDXAPP_PROFILE_IMAGE_BACKEND }}"
PROFILE_IMAGE_MIN_BYTES: "{{ EDXAPP_PROFILE_IMAGE_MIN_BYTES }}"
PROFILE_IMAGE_MAX_BYTES: "{{ EDXAPP_PROFILE_IMAGE_MAX_BYTES }}"
PROFILE_IMAGE_SIZES_MAP: "{{ EDXAPP_PROFILE_IMAGE_SIZES_MAP }}"
EDXNOTES_PUBLIC_API: "{{ EDXAPP_EDXNOTES_PUBLIC_API }}"
EDXNOTES_INTERNAL_API: "{{ EDXAPP_EDXNOTES_INTERNAL_API }}"
LTI_USER_EMAIL_DOMAIN: "{{ EDXAPP_LTI_USER_EMAIL_DOMAIN }}"
LTI_AGGREGATE_SCORE_PASSBACK_DELAY: "{{ EDXAPP_LTI_AGGREGATE_SCORE_PASSBACK_DELAY }}"
PROCTORING_SETTINGS: "{{ EDXAPP_PROCTORING_SETTINGS }}"
CREDIT_HELP_LINK_URL: "{{ EDXAPP_CREDIT_HELP_LINK_URL }}"
MAILCHIMP_NEW_USER_LIST_ID: "{{ EDXAPP_MAILCHIMP_NEW_USER_LIST_ID }}"
CONTACT_MAILING_ADDRESS: "{{ EDXAPP_CONTACT_MAILING_ADDRESS }}"
......@@ -1137,9 +1219,21 @@ lms_env_config:
API_DOCUMENTATION_URL: "{{ EDXAPP_API_DOCUMENTATION_URL }}"
AUTH_DOCUMENTATION_URL: "{{ EDXAPP_AUTH_DOCUMENTATION_URL }}"
RECALCULATE_GRADES_ROUTING_KEY: "{{ EDXAPP_RECALCULATE_GRADES_ROUTING_KEY }}"
BULK_EMAIL_ROUTING_KEY_SMALL_JOBS: "{{ EDXAPP_BULK_EMAIL_ROUTING_KEY_SMALL_JOBS }}"
CELERY_QUEUES: "{{ EDXAPP_LMS_CELERY_QUEUES }}"
ALTERNATE_WORKER_QUEUES: "cms"
ENTERPRISE_COURSE_ENROLLMENT_AUDIT_MODES: "{{ EDXAPP_ENTERPRISE_COURSE_ENROLLMENT_AUDIT_MODES }}"
PASSWORD_MIN_LENGTH: "{{ EDXAPP_PASSWORD_MIN_LENGTH }}"
PASSWORD_MAX_LENGTH: "{{ EDXAPP_PASSWORD_MAX_LENGTH }}"
ENTERPRISE_ENROLLMENT_API_URL: "{{ EDXAPP_ENTERPRISE_ENROLLMENT_API_URL }}"
ENTERPRISE_SUPPORT_URL: "{{ EDXAPP_ENTERPRISE_SUPPORT_URL }}"
PARENTAL_CONSENT_AGE_LIMIT: "{{ EDXAPP_PARENTAL_CONSENT_AGE_LIMIT }}"
ACE_ENABLED_CHANNELS: "{{ EDXAPP_ACE_ENABLED_CHANNELS }}"
ACE_ENABLED_POLICIES: "{{ EDXAPP_ACE_ENABLED_POLICIES }}"
ACE_CHANNEL_SAILTHRU_DEBUG: "{{ EDXAPP_ACE_CHANNEL_SAILTHRU_DEBUG }}"
ACE_CHANNEL_SAILTHRU_TEMPLATE_NAME: "{{ EDXAPP_ACE_CHANNEL_SAILTHRU_TEMPLATE_NAME }}"
ACE_ROUTING_KEY: "{{ EDXAPP_ACE_ROUTING_KEY }}"
ENTERPRISE_TAGLINE: "{{ EDXAPP_ENTERPRISE_TAGLINE }}"
cms_auth_config:
<<: *edxapp_generic_auth
......@@ -1200,12 +1294,10 @@ worker_core_mult:
cms: 2
# Stanford-style Theming
# Turn theming on and off with edxapp_use_custom_theme
# Set theme name with edxapp_theme_name
# Stanford, for example, uses edxapp_theme_name: 'stanford'
#
# TODO: change variables to ALL-CAPS, since they are meant to be externally overridden
edxapp_use_custom_theme: false
edxapp_theme_name: ""
edxapp_theme_source_repo: 'https://{{ COMMON_GIT_MIRROR }}/Stanford-Online/edx-theme.git'
edxapp_theme_version: 'master'
......@@ -1220,6 +1312,7 @@ github_requirements_file: "{{ edxapp_code_dir }}/requirements/edx/github.txt"
custom_requirements_file: "{{ edxapp_code_dir }}/requirements/edx/custom.txt"
local_requirements_file: "{{ edxapp_code_dir }}/requirements/edx/local.txt"
base_requirements_file: "{{ edxapp_code_dir }}/requirements/edx/base.txt"
django_requirements_file: "{{ edxapp_code_dir }}/requirements/edx/django.txt"
post_requirements_file: "{{ edxapp_code_dir }}/requirements/edx/post.txt"
paver_requirements_file: "{{ edxapp_code_dir }}/requirements/edx/paver.txt"
private_requirements_file: "{{ edxapp_code_dir }}/requirements/edx/edx-private.txt"
......@@ -1239,6 +1332,7 @@ edxapp_requirements_files:
- "{{ custom_requirements_file }}"
- "{{ local_requirements_file }}"
- "{{ base_requirements_file }}"
- "{{ django_requirements_file }}"
- "{{ post_requirements_file }}"
- "{{ paver_requirements_file }}"
......@@ -1290,3 +1384,10 @@ edxapp_cms_variant: cms
# Worker Settings
worker_django_settings_module: '{{ EDXAPP_SETTINGS }}'
# Add default service worker users
SERVICE_WORKER_USERS:
- email: "{{ EDXAPP_ENTERPRISE_SERVICE_WORKER_EMAIL }}"
username: "{{ EDXAPP_ENTERPRISE_SERVICE_WORKER_USERNAME }}"
is_staff: true
is_superuser: false
......@@ -7,4 +7,4 @@ dependencies:
- role: edx_themes
theme_users:
- "{{ edxapp_user }}"
when: "{{ EDXAPP_ENABLE_COMPREHENSIVE_THEMING }}"
when: EDXAPP_ENABLE_COMPREHENSIVE_THEMING
......@@ -300,7 +300,7 @@
- install:app-requirements
- name: compiling all py files in the edx-platform repo
shell: "{{ edxapp_venv_bin }}/python -m compileall -q -x .git/.* {{ edxapp_code_dir }}"
shell: "{{ edxapp_venv_bin }}/python -m compileall -q -x '.git/.*|node_modules/.*' {{ edxapp_code_dir }}"
become_user: "{{ edxapp_user }}"
tags:
- install
......@@ -417,3 +417,14 @@
become_user: "{{ common_web_user }}"
tags:
- manage
- name: create service worker users
shell: "{{ edxapp_venv_bin }}/python ./manage.py lms --settings={{ edxapp_settings }} --service-variant lms manage_user {{ item.username}} {{ item.email }} --unusable-password {% if item.is_staff %} --staff{% endif %}"
args:
chdir: "{{ edxapp_code_dir }}"
become_user: "{{ common_web_user }}"
with_items: "{{ SERVICE_WORKER_USERS }}"
when: CREATE_SERVICE_WORKER_USERS
tags:
- manage
- manage:db
......@@ -109,16 +109,6 @@
- install
- install:base
# adding chris-lea nodejs repo
# TODO: 16.04
- name: add ppas for current versions of nodejs
apt_repository:
repo: "{{ edxapp_chrislea_ppa }}"
when: ansible_distribution_release == 'precise'
tags:
- install
- install:base
- name: install system packages on which LMS and CMS rely
apt:
name: "{{ item }}"
......
......@@ -11,6 +11,7 @@ edxapp_requirements_files:
- "{{ custom_requirements_file }}"
- "{{ local_requirements_file }}"
- "{{ base_requirements_file }}"
- "{{ django_requirements_file }}"
- "{{ post_requirements_file }}"
- "{{ paver_requirements_file }}"
- "{{ development_requirements_file }}"
......
......@@ -45,7 +45,10 @@ FORUM_USE_TCP: false
# wait this long before attempting to restart it
FORUM_RESTART_DELAY: 60
forum_environment:
# Set to rebuild the forum ElasticSearch index from the database.
FORUM_REBUILD_INDEX: false
forum_base_env: &forum_base_env
RBENV_ROOT: "{{ forum_rbenv_root }}"
GEM_HOME: "{{ forum_gem_root }}"
GEM_PATH: "{{ forum_gem_root }}"
......@@ -65,8 +68,18 @@ forum_environment:
LISTEN_HOST: "{{ FORUM_LISTEN_HOST }}"
LISTEN_PORT: "{{ FORUM_LISTEN_PORT }}"
forum_env:
<<: *forum_base_env
devstack_forum_env:
<<: *forum_base_env
RACK_ENV: "development"
SINATRA_ENV: "development"
SEARCH_SERVER: "http://edx.devstack.elasticsearch:9200/"
MONGOHQ_URL: "mongodb://cs_comments_service:password@edx.devstack.mongo:27017/cs_comments_service"
forum_user: "forum"
forum_ruby_version: "1.9.3-p551"
forum_ruby_version: "2.4.1"
forum_source_repo: "https://github.com/edx/cs_comments_service.git"
forum_version: "master"
......
......@@ -54,7 +54,7 @@
- name: install comments service bundle
shell: "bundle install --deployment --path {{ forum_gem_root }} chdir={{ forum_code_dir }}"
become_user: "{{ forum_user }}"
environment: "{{ forum_environment }}"
environment: "{{ forum_base_env }}"
notify: restart the forum service
tags:
- install
......@@ -65,12 +65,23 @@
args:
chdir: "{{ forum_code_dir }}"
become_user: "{{ forum_user }}"
environment: "{{ forum_environment }}"
environment: "{{ forum_base_env }}"
when: migrate_db is defined and migrate_db|lower == "yes"
tags:
- migrate
- migrate:db
- name: rebuild elasticsearch indexes
command: "{{ forum_code_dir }}/bin/rake search:rebuild_index"
args:
chdir: "{{ forum_code_dir }}"
become_user: "{{ forum_user }}"
environment: "{{ forum_base_env }}"
when: migrate_db is defined and migrate_db|lower == "yes" and FORUM_REBUILD_INDEX|bool
tags:
- migrate
- migrate:db
# call supervisorctl update. this reloads
# the supervisorctl config and restarts
# the services if any of the configurations
......
......@@ -44,11 +44,27 @@
- install
- install:base
- name: setup the forum env
- name: setup the forum env for stage/prod
template:
src: forum_env.j2
dest: "{{ forum_app_dir }}/forum_env"
owner: "{{ forum_user }}"
owner: "{{ forum_user }}"
group: "{{ common_web_user }}"
mode: 0644
notify:
- restart the forum service
tags:
- install
- install:base
- install:configuration
with_items:
- "{{ forum_env }}"
- name: setup the forum env for devstack
template:
src: forum_env.j2
dest: "{{ forum_app_dir }}/devstack_forum_env"
owner: "{{ forum_user }}"
group: "{{ common_web_user }}"
mode: 0644
notify:
......@@ -56,6 +72,9 @@
tags:
- install
- install:base
when: devstack is defined and devstack
with_items:
- "{{ devstack_forum_env }}"
- name: create {{ forum_data_dir }}
file:
......@@ -67,7 +86,7 @@
tags:
- install
- install:base
- include: deploy.yml
tags:
tags:
- deploy
# {{ ansible_managed }}
{% for name,value in forum_environment.items() -%}
{% for name,value in item.items() -%}
{%- if value -%}
export {{ name }}="{{ value }}"
{% endif %}
{% endif %}
{%- endfor %}
eval "$(rbenv init -)"
......@@ -96,3 +96,11 @@
tags:
- install
- install:code
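# git clean -xdf removes untracked files including ignored ones (-x),
# recurses into untracked directories (-d), and forces the removal (-f),
# leaving a pristine checkout of each repo.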
- name: Run git clean after checking out code
shell: cd {{ item.DESTINATION }} && git clean -xdf
become: true
with_items: "{{ GIT_REPOS }}"
tags:
- install
- install:code
......@@ -13,16 +13,13 @@
GO_SERVER_SERVICE_NAME: "go-server"
GO_SERVER_USER: "go"
GO_SERVER_GROUP: "{{ GO_SERVER_USER }}"
GO_SERVER_VERSION: "17.1.0-4511"
GO_SERVER_VERSION: "17.10.0-5380"
GO_SERVER_HOME: "/var/lib/go-server"
GO_SERVER_CONF_HOME: "/etc/go"
GO_SERVER_PLUGIN_DIR: "{{ GO_SERVER_HOME }}/plugins/external/"
# Java version settings
GO_SERVER_ORACLEJDK_VERSION: "8u65"
GO_SERVER_ORACLEJDK_BASE: "jdk1.8.0_65"
GO_SERVER_ORACLEJDK_BUILD: "b17"
GO_SERVER_ORACLEJDK_LINK: "/usr/lib/jvm/java-8-oracle"
#Openjdk PPA Apt source
openjdk_apt_source: "ppa:openjdk-r/ppa"
# java tuning
GO_SERVER_JAVA_HOME: "{{ GO_SERVER_ORACLEJDK_LINK }}"
......@@ -31,7 +28,7 @@ GO_SERVER_JAVA_HOME: "{{ GO_SERVER_ORACLEJDK_LINK }}"
GO_SERVER_APT_SOURCE: "deb https://download.gocd.io /"
GO_SERVER_APT_KEY_URL: "https://download.gocd.io/GOCD-GPG-KEY.asc"
GO_SERVER_APT_NAME: "go-server"
GO_SERVER_APT_PKGS: ["apache2-utils"]
GO_SERVER_APT_PKGS: ["apache2-utils","openjdk-8-jdk"]
# gocd-oauth-login
GO_SERVER_OAUTH_LOGIN_VERSION: "1.2"
......
......@@ -18,11 +18,3 @@
# my_role_var0: "foo"
# my_role_var1: "bar"
# }
dependencies:
- role: oraclejdk
tags: java
oraclejdk_version: "{{ GO_SERVER_ORACLEJDK_VERSION }}"
oraclejdk_base: "{{ GO_SERVER_ORACLEJDK_BASE }}"
oraclejdk_build: "{{ GO_SERVER_ORACLEJDK_BUILD }}"
oraclejdk_link: "{{ GO_SERVER_ORACLEJDK_LINK }}"
......@@ -40,10 +40,9 @@
url: "{{ GO_SERVER_APT_KEY_URL }}"
state: present
- name: install go-server using apt-get
apt:
name: "{{ GO_SERVER_APT_NAME }}={{ GO_SERVER_VERSION }}"
update_cache: yes
- name: install openjdk ppa repository
apt_repository:
repo: "{{ openjdk_apt_source }}"
state: present
- name: install other needed system packages
......@@ -54,6 +53,12 @@
cache_valid_time: 3600
with_items: "{{ GO_SERVER_APT_PKGS }}"
- name: install go-server using apt-get
apt:
name: "{{ GO_SERVER_APT_NAME }}={{ GO_SERVER_VERSION }}"
update_cache: yes
state: present
- name: create go-server plugin directory
file:
path: "{{ GO_SERVER_PLUGIN_DIR }}"
......
......@@ -88,3 +88,6 @@ esac
# Remove the tarball.
rm -f "$gocd_backup_location"
# Remove backup from serverBackups folder
rm -rf "$(dirname $backup_path)/${backup_dir_name}"
......@@ -51,6 +51,8 @@ INSIGHTS_THEME_SCSS: 'sass/themes/open-edx.scss'
INSIGHTS_RESEARCH_URL: 'https://www.edx.org/research-pedagogy'
INSIGHTS_OPEN_SOURCE_URL: 'http://set-me-please'
INSIGHTS_DOMAIN: 'insights'
# Comma-delimited list of field names to include in the Learner List CSV download
# e.g., "username,segments,cohort,engagements.videos_viewed,last_updated"
# Default (null) includes all available fields, in alphabetical order
......@@ -79,6 +81,13 @@ INSIGHTS_LMS_COURSE_SHORTCUT_BASE_URL: "URL_FOR_LMS_COURSE_LIST_PAGE"
INSIGHTS_SESSION_EXPIRE_AT_BROWSER_CLOSE: false
INSIGHTS_CDN_DOMAIN: !!null
INSIGHTS_CORS_ORIGIN_WHITELIST_EXTRA: []
INSIGHTS_CORS_ORIGIN_WHITELIST_DEFAULT:
- "{{ INSIGHTS_DOMAIN }}"
INSIGHTS_CORS_ORIGIN_WHITELIST: "{{ INSIGHTS_CORS_ORIGIN_WHITELIST_DEFAULT + INSIGHTS_CORS_ORIGIN_WHITELIST_EXTRA }}"
#
# This block of config is dropped into /edx/etc/insights.yml
# and is read in by analytics_dashboard/settings/production.py
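# e.g. (illustrative) the rendered file is plain YAML consumed at startup,
# along the lines of:
#   STATIC_ROOT: /edx/var/insights/staticfiles
#   SESSION_EXPIRE_AT_BROWSER_CLOSE: false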
......@@ -119,7 +128,6 @@ INSIGHTS_CONFIG:
# static file config
STATICFILES_DIRS: ["{{ insights_static_path }}"]
STATIC_ROOT: "{{ COMMON_DATA_DIR }}/{{ insights_service_name }}/staticfiles"
THEME_SCSS: '{{ INSIGHTS_THEME_SCSS }}'
RESEARCH_URL: '{{ INSIGHTS_RESEARCH_URL }}'
OPEN_SOURCE_URL: '{{ INSIGHTS_OPEN_SOURCE_URL }}'
# db config
......@@ -136,6 +144,8 @@ INSIGHTS_CONFIG:
SESSION_EXPIRE_AT_BROWSER_CLOSE: "{{ INSIGHTS_SESSION_EXPIRE_AT_BROWSER_CLOSE }}"
CMS_COURSE_SHORTCUT_BASE_URL: "{{ INSIGHTS_CMS_COURSE_SHORTCUT_BASE_URL }}"
LEARNER_API_LIST_DOWNLOAD_FIELDS: "{{ INSIGHTS_LEARNER_API_LIST_DOWNLOAD_FIELDS }}"
# CDN url to serve assets from
CDN_DOMAIN: "{{ INSIGHTS_CDN_DOMAIN }}"
INSIGHTS_NEWRELIC_APPNAME: "{{ COMMON_ENVIRONMENT }}-{{ COMMON_DEPLOYMENT }}-analytics-api"
INSIGHTS_PIP_EXTRA_ARGS: "-i {{ COMMON_PYPI_MIRROR_URL }}"
......@@ -166,6 +176,7 @@ insights_environment:
DJANGO_SETTINGS_MODULE: "analytics_dashboard.settings.production"
ANALYTICS_DASHBOARD_CFG: "{{ COMMON_CFG_DIR }}/{{ insights_service_name }}.yml"
PATH: "{{ insights_nodeenv_bin }}:{{ insights_venv_dir }}/bin:{{ ansible_env.PATH }}"
THEME_SCSS: '{{ INSIGHTS_THEME_SCSS }}'
insights_service_name: insights
......@@ -207,7 +218,5 @@ insights_debian_pkgs:
- gettext
insights_release_specific_debian_pkgs:
precise:
- openjdk-7-jdk
xenial:
- openjdk-8-jdk
......@@ -58,17 +58,7 @@
production: yes
state: latest
become_user: "{{ insights_user }}"
tags:
- install
- install:app-requirements
environment: "{{ insights_environment }}"
- name: install bower dependencies
shell: ". {{ insights_venv_dir }}/bin/activate && . {{ insights_nodeenv_bin }}/activate && {{ insights_node_bin }}/bower install --production --config.interactive=false"
args:
chdir: "{{ insights_code_dir }}"
become_user: "{{ insights_user }}"
tags:
- install
- install:app-requirements
......@@ -84,11 +74,12 @@
- migrate
- migrate:db
- name: run r.js optimizer
shell: ". {{ insights_nodeenv_bin }}/activate && {{ insights_node_bin }}/r.js -o build.js"
- name: run webpack
shell: ". {{ insights_nodeenv_bin }}/activate && {{ insights_node_bin }}/webpack --config webpack.prod.config.js"
args:
chdir: "{{ insights_code_dir }}"
become_user: "{{ insights_user }}"
environment: "{{ insights_environment }}"
tags:
- assets
- assets:gather
......@@ -101,7 +92,6 @@
environment: "{{ insights_environment }}"
with_items:
- "collectstatic --noinput"
- "compress"
tags:
- assets
- assets:gather
......
......@@ -30,16 +30,11 @@ JENKINS_ADMIN_AWS_CREDENTIALS: !!null
jenkins_admin_role_name: jenkins_admin
jenkins_admin_version: "1.630"
# repo for nodejs
jenkins_chrislea_ppa: "ppa:chris-lea/node.js"
jenkins_admin_version: "1.658"
#
# OS packages
#
jenkins_admin_debian_repos:
- "deb http://cosmos.cites.illinois.edu/pub/ubuntu/ precise-backports main universe"
jenkins_admin_debian_pkgs:
# These are copied from the edxapp
# role so that we can create virtualenvs
......@@ -56,7 +51,6 @@ jenkins_admin_debian_pkgs:
# libopenblas-base, it will cause
# problems for numpy
- gfortran
- libatlas3gf-base
- liblapack-dev
- g++
- libxml2-dev
......@@ -78,7 +72,6 @@ jenkins_admin_debian_pkgs:
- libpng12-dev
# for status.edx.org
- ruby
- ruby1.9.1
# for check-migrations
- mysql-client
# for aws cli scripting
......@@ -93,57 +86,10 @@ jenkins_admin_gem_pkgs:
jenkins_admin_redhat_pkgs: []
jenkins_admin_plugins:
- { name: "greenballs", version: "1.14" }
- { name: "rebuild", version: "1.21" }
- { name: "build-user-vars-plugin", version: "1.1" }
- { name: "matrix-auth", version: "1.2" }
- { name: "matrix-project", version: "1.3" }
- { name: "mailer", version: "1.9" }
- { name: "build-user-vars-plugin", version: "1.3" }
- { name: "credentials", version: "1.15" }
- { name: "ssh-credentials", version: "1.7.1" }
- { name: "ssh-agent", version: "1.4.1" }
- { name: "token-macro", version: "1.10" }
- { name: "parameterized-trigger", version: "2.25" }
- { name: "multiple-scms", version: "0.3" }
- { name: "maven-plugin", version: "2.5" }
- { name: "copy-project-link", version: "1.2" }
- { name: "scriptler", version: "2.6.1" }
- { name: "rebuild", version: "1.21" }
- { name: "ssh-slaves", version: "1.6" }
- { name: "translation", version: "1.11" }
- { name: "dynamicparameter", version: "0.2.0" }
- { name: "hipchat", version: "0.1.6" }
- { name: "throttle-concurrents", version: "1.8.3" }
- { name: "mask-passwords", version: "2.7.2" }
- { name: "jquery", version: "1.7.2-1" }
- { name: "dashboard-view", version: "2.9.4" }
- { name: "build-pipeline-plugin", version: "1.4.3" }
- { name: "s3", version: "0.6" }
- { name: "tmpcleaner", version: "1.1" }
- { name: "jobConfigHistory", version: "2.8" }
- { name: "build-timeout", version: "1.14" }
- { name: "next-build-number", version: "1.1" }
- { name: "nested-view", version: "1.14" }
- { name: "timestamper", version: "1.5.14" }
- { name: "github-api", version: "1.55" }
- { name: "postbuild-task", version: "1.8" }
- { name: "notification", version: "1.5" }
- { name: "copy-to-slave", version: "1.4.3" }
- { name: "github", version: "1.9.1" }
- { name: "copyartifact", version: "1.31" }
- { name: "shiningpanda", version: "0.21" }
- { name: "htmlpublisher", version: "1.3" }
- { name: "github-oauth", version: "0.20" }
- { name: "build-name-setter", version: "1.3" }
- { name: "jenkins-flowdock-plugin", version: "1.1.3" }
- { name: "simple-parameterized-builds-report", version: "1.3" }
- { name: "git-client", version: "1.19.0"}
- { name: "git", version: "2.4.0"}
jenkins_admin_plugins: [] # Plugins installed manually, not tracked here.
jenkins_admin_jobs:
- 'backup-jenkins'
# See templates directory for potential basic jobs you could add to your jenkins.
jenkins_admin_jobs: []
# Supervisor related settings
jenkins_supervisor_user: "{{ jenkins_user }}"
......
#!/bin/bash -x
# This script monitors two NATs and routes traffic to the backup NAT
# if the primary fails.
set -e
# Health Check variables
Num_Pings=3
Ping_Timeout=2
Wait_Between_Pings=2
Wait_for_Instance_Stop=60
Wait_for_Instance_Start=300
ID_UPDATE_INTERVAL=150
send_message() {
message_file=/var/tmp/message-$$.json
message_string=$1
if [ -z "$message_string" ]; then
message_string="Unknown error for $VPC_NAME NAT monitor"
fi
message_body=$2
cat << EOF > $message_file
{"Subject":{"Data":"$message_string"},"Body":{"Text":{"Data": "$message_body"}}}
EOF
echo `date` "-- $message_body"
BASE_PROFILE=$AWS_DEFAULT_PROFILE
export AWS_DEFAULT_PROFILE=$AWS_MAIL_PROFILE
aws ses send-email --from $NAT_MONITOR_FROM_EMAIL --to $NAT_MONITOR_TO_EMAIL --message file://$message_file
export AWS_DEFAULT_PROFILE=$BASE_PROFILE
}
trap send_message ERR SIGHUP SIGINT SIGTERM
# Determine the NAT instance private IP so we can ping the other NAT instance, take over
# its route, and restart it. Requires EC2 DescribeInstances, ReplaceRoute, and Start/StopInstances
# permissions. The following example EC2 Roles policy will authorize these commands:
# {
# "Statement": [
# {
# "Action": [
# "ec2:DescribeInstances",
# "ec2:CreateRoute",
# "ec2:ReplaceRoute",
# "ec2:StartInstances",
# "ec2:StopInstances"
# ],
# "Effect": "Allow",
# "Resource": "*"
# }
# ]
# }
COUNTER=0
echo `date` "-- Running NAT monitor"
while true; do
# Re-check the IDs and IPs periodically.
# This is useful in case the primary NAT changes by some
# means other than this script.
if [ $COUNTER -eq 0 ]; then
# NAT instance variables
PRIMARY_NAT_ID=`aws ec2 describe-route-tables --filters Name=tag:aws:cloudformation:stack-name,Values=$VPC_NAME Name=tag:aws:cloudformation:logical-id,Values=PrivateRouteTable | jq '.RouteTables[].Routes[].InstanceId|strings' -r`
BACKUP_NAT_ID=`aws ec2 describe-instances --filters Name=tag:aws:cloudformation:stack-name,Values=$VPC_NAME Name=tag:aws:cloudformation:logical-id,Values=NATDevice,BackupNATDevice | jq '.Reservations[].Instances[].InstanceId' -r | grep -v $PRIMARY_NAT_ID`
NAT_RT_ID=`aws ec2 describe-route-tables --filters Name=tag:aws:cloudformation:stack-name,Values=$VPC_NAME Name=tag:aws:cloudformation:logical-id,Values=PrivateRouteTable | jq '.RouteTables[].RouteTableId' -r`
# Get the primary NAT instance's IP
PRIMARY_NAT_IP=`aws ec2 describe-instances --instance-ids $PRIMARY_NAT_ID | jq -r ".Reservations[].Instances[].PrivateIpAddress"`
BACKUP_NAT_IP=`aws ec2 describe-instances --instance-ids $BACKUP_NAT_ID | jq -r ".Reservations[].Instances[].PrivateIpAddress"`
let "COUNTER += 1"
let "COUNTER %= $ID_UPDATE_INTERVAL"
fi
# Check the health of both instances.
primary_pingresult=`ping -c $Num_Pings -W $Ping_Timeout $PRIMARY_NAT_IP| grep time= | wc -l`
if [ "$primary_pingresult" == "0" ]; then
backup_pingresult=`ping -c $Num_Pings -W $Ping_Timeout $BACKUP_NAT_IP| grep time= | wc -l`
if [ "$backup_pingresult" == "0" ]; then
send_message "Error monitoring NATs for $VPC_NAME." "ERROR -- Both NATs($PRIMARY_NAT_ID and $BACKUP_NAT_ID) were unreachable."
else # Backup NAT is healthy.
send_message "Primary $VPC_NAME NAT failed ping" "-- NAT($PRIMARY_NAT_ID) heartbeat failed, consider using $BACKUP_NAT_ID for $NAT_RT_ID default route
Command for re-routing:
aws ec2 replace-route --route-table-id $NAT_RT_ID --destination-cidr-block 0.0.0.0/0 --instance-id $BACKUP_NAT_ID"
fi
else
echo `date` "-- PRIMARY NAT ($PRIMARY_NAT_ID $PRIMARY_NAT_IP) reports healthy to pings"
sleep $Wait_Between_Pings
fi
done
......@@ -24,6 +24,9 @@ dependencies:
- role: jenkins_master
jenkins_plugins: "{{ jenkins_admin_plugins }}"
jenkins_version: "{{ jenkins_admin_version }}"
jenkins_deb_url: "https://pkg.jenkins.io/debian/binary/jenkins_{{ jenkins_version }}_all.deb"
jenkins_custom_plugins: []
jenkins_bundled_plugins: []
- role: supervisor
supervisor_app_dir: "{{ jenkins_supervisor_app_dir }}"
supervisor_data_dir: "{{ jenkins_supervisor_data_dir }}"
......
......@@ -33,13 +33,6 @@
- fail: msg="JENKINS_ADMIN_S3_PROFILE.secret_key is not defined."
when: JENKINS_ADMIN_S3_PROFILE.secret_key is not defined
- name: add admin specific apt repositories
apt_repository:
repo: "{{ item }}"
state: "present"
update_cache: "yes"
with_items: "{{ jenkins_admin_debian_repos }}"
- name: create the scripts directory
file:
path: "{{ jenkins_admin_scripts_dir }}"
......@@ -114,7 +107,7 @@
group: "{{ jenkins_group }}"
mode: 0755
state: directory
with_items: jenkins_admin_jobs
with_items: "{{ jenkins_admin_jobs }}"
- name: create admin job config files
template:
......@@ -123,12 +116,7 @@
owner: "{{ jenkins_user }}"
group: "{{ jenkins_group }}"
mode: 0644
with_items: jenkins_admin_jobs
# adding chris-lea nodejs repo
- name: add ppas for current versions of nodejs
apt_repository:
repo: "{{ jenkins_chrislea_ppa }}"
with_items: "{{ jenkins_admin_jobs }}"
- name: install system packages for edxapp virtualenvs
apt:
......@@ -153,11 +141,10 @@
state: present
version: "{{ item.version }}"
user_install: no
with_items: jenkins_admin_gem_pkgs
with_items: "{{ jenkins_admin_gem_pkgs }}"
- name: get s3 one time url
local_action:
module: "s3"
s3:
bucket: "{{ JENKINS_ADMIN_BACKUP_BUCKET }}"
object: "{{ JENKINS_ADMIN_BACKUP_S3_KEY }}"
mode: "geturl"
......@@ -168,7 +155,7 @@
- name: download s3 backup
get_url:
url: "{{ s3_one_time_url.url }}"
dest: "/tmp/{{ JENKINS_ADMIN_BACKUP_S3_KEY | basename }}"
dest: "/tmp/jenkins_backup.tar.gz"
mode: 0644
owner: "{{ jenkins_user }}"
when: JENKINS_ADMIN_BACKUP_BUCKET is defined and JENKINS_ADMIN_BACKUP_S3_KEY is defined
......@@ -192,8 +179,4 @@
service:
name: "jenkins"
state: "started"
when: JENKINS_ADMIN_BACKUP_BUCKET and JENKINS_BACKUP_S3_KEY
- include: nat_monitor.yml
tags:
- nat-monitor
when: JENKINS_ADMIN_BACKUP_BUCKET is defined and JENKINS_ADMIN_BACKUP_S3_KEY is defined
---
# NAT monitors should be defined as a list of dictionaries
# e.g.
# NAT_MONITORS:
# - vpc_name: 'loadtest-edx'
# region: 'us-east-1'
# deployment: 'edx'
#
# To receive e-mails, SES should be set up in the AWS account
# defined by JENKINS_ADMIN_MAIL_PROFILE, and the from address
# should be verified:
# JENKINS_ADMIN_MAIL_PROFILE: 'aws_account_name'
# JENKINS_ADMIN_FROM_EMAIL: 'admin@example.com'
# JENKINS_ADMIN_TO_EMAIL: 'alert@example.com'
- fail: msg="NAT_MONITORS is not defined."
when: NAT_MONITORS is not defined
- name: upload the monitor script
copy:
dest="{{ jenkins_admin_scripts_dir }}/nat-monitor.sh"
src="nat-monitor.sh"
owner="{{ jenkins_user }}"
group="{{ jenkins_group }}"
mode="755"
become_user: "{{ jenkins_user }}"
notify: restart nat monitor
- name: create a supervisor config
template:
src="nat-monitor.conf.j2" dest="{{ jenkins_supervisor_available_dir }}/nat-monitor.conf"
owner="{{ jenkins_user }}"
group="{{ jenkins_group }}"
become_user: "{{ jenkins_user }}"
notify: restart nat monitor
- name: enable the supervisor config
file:
src="{{ jenkins_supervisor_available_dir }}/nat-monitor.conf"
dest="{{ jenkins_supervisor_cfg_dir }}/nat-monitor.conf"
state=link
force=yes
mode=0644
become_user: "{{ jenkins_user }}"
when: not disable_edx_services
notify: restart nat monitor
- name: update supervisor configuration
shell: "{{ jenkins_supervisor_ctl }} -c {{ jenkins_supervisor_cfg }} update"
register: supervisor_update
changed_when: supervisor_update.stdout is defined and supervisor_update.stdout != ""
when: not disable_edx_services
# Have to use shell here because the Ansible supervisorctl module
# doesn't support process groups.
- name: ensure nat monitor is started
shell: "{{ jenkins_supervisor_ctl }} -c {{ jenkins_supervisor_cfg }} start nat_monitor:*"
when: not disable_edx_services
{% for m in NAT_MONITORS %}
[program:nat_monitor_{{ m.vpc_name|replace('-','_') }}]
environment=VPC_NAME="{{ m.vpc_name }}",AWS_DEFAULT_REGION="{{ m.region }}",AWS_DEFAULT_PROFILE="{{ m.deployment }}",AWS_MAIL_PROFILE="{{ JENKINS_ADMIN_MAIL_PROFILE }}",NAT_MONITOR_FROM_EMAIL="{{ JENKINS_ADMIN_FROM_EMAIL }}",NAT_MONITOR_TO_EMAIL="{{ JENKINS_ADMIN_TO_EMAIL }}"
user={{ jenkins_supervisor_service_user }}
directory={{ jenkins_admin_scripts_dir }}
stdout_logfile={{ jenkins_supervisor_log_dir }}/%(program_name)s-stdout.log
stderr_logfile={{ jenkins_supervisor_log_dir }}/%(program_name)s-stderr.log
command={{ jenkins_admin_scripts_dir }}/nat-monitor.sh
killasgroup=true
stopasgroup=true
{% endfor %}
[group:nat_monitor]
programs={%- for m in NAT_MONITORS %}nat_monitor_{{ m.vpc_name|replace('-','_') }}{%- if not loop.last %},{%- endif %}{%- endfor %}
build_jenkins_user_uid: 1002
build_jenkins_group_gid: 1004
build_jenkins_version: jenkins_2.60.3
build_jenkins_jvm_args: '-Djava.awt.headless=true -Xmx8192m'
build_jenkins_configuration_scripts:
- 1addJarsToClasspath.groovy
- 2checkInstalledPlugins.groovy
- 3importCredentials.groovy
- 3mainConfiguration.groovy
- 3setGlobalProperties.groovy
- 3shutdownCLI.groovy
- 4configureEc2Plugin.groovy
- 4configureGHOAuth.groovy
- 4configureGHPRB.groovy
- 4configureGit.groovy
- 4configureGithub.groovy
- 4configureHipChat.groovy
- 4configureJobConfigHistory.groovy
- 4configureMailerPlugin.groovy
- 4configureMaskPasswords.groovy
- 5createLoggers.groovy
# plugins
build_jenkins_plugins_list:
- name: 'antisamy-markup-formatter'
version: '1.3'
group: 'org.jenkins-ci.plugins'
- name: 'script-security'
version: '1.27'
group: 'org.jenkins-ci.plugins'
- name: 'mailer'
version: '1.16'
group: 'org.jenkins-ci.plugins'
- name: 'cvs'
version: '2.12'
group: 'org.jenkins-ci.plugins'
- name: 'ldap'
version: '1.11'
group: 'org.jenkins-ci.plugins'
- name: 'windows-slaves'
version: '1.0'
group: 'org.jenkins-ci.plugins'
- name: 'ant'
version: '1.2'
group: 'org.jenkins-ci.plugins'
- name: 'matrix-auth'
version: '1.2'
group: 'org.jenkins-ci.plugins'
- name: 'matrix-project'
version: '1.7'
group: 'org.jenkins-ci.plugins'
- name: 'credentials'
version: '2.1.8'
group: 'org.jenkins-ci.plugins'
- name: 'ssh-credentials'
version: '1.11'
group: 'org.jenkins-ci.plugins'
- name: 'external-monitor-job'
version: '1.4'
group: 'org.jenkins-ci.plugins'
- name: 'translation'
version: '1.12'
group: 'org.jenkins-ci.plugins'
- name: 'subversion'
version: '2.5'
group: 'org.jenkins-ci.plugins'
- name: 'junit'
version: '1.3'
group: 'org.jenkins-ci.plugins'
- name: 'pam-auth'
version: '1.2'
group: 'org.jenkins-ci.plugins'
- name: 'maven-plugin'
version: '2.8'
group: 'org.jenkins-ci.main'
- name: 'ssh-slaves'
version: '1.20'
group: 'org.jenkins-ci.plugins'
- name: 'javadoc'
version: '1.3'
group: 'org.jenkins-ci.plugins'
- name: 'ansicolor'
version: '0.4.1'
group: 'org.jenkins-ci.plugins'
- name: 'bouncycastle-api'
version: '2.16.0'
group: 'org.jenkins-ci.plugins'
- name: 'build-flow-plugin'
version: '0.17'
group: 'com.cloudbees.plugins'
- name: 'build-flow-test-aggregator'
version: '1.0'
group: 'org.zeroturnaround.jenkins'
- name: 'build-flow-toolbox-plugin'
version: '0.1'
group: 'org.jenkins-ci.plugins'
- name: 'buildgraph-view'
version: '1.1.1'
group: 'org.jenkins-ci.plugins'
- name: 'build-name-setter'
version: '1.3'
group: 'org.jenkins-ci.plugins'
- name: 'build-timeout'
version: '1.14.1'
group: 'org.jenkins-ci.plugins'
- name: 'build-user-vars-plugin'
version: '1.5'
group: 'org.jenkins-ci.plugins'
- name: 'cobertura'
version: '1.9.6'
group: 'org.jenkins-ci.plugins'
- name: 'copyartifact'
version: '1.32.1'
group: 'org.jenkins-ci.plugins'
- name: 'credentials-binding'
version: '1.7'
group: 'org.jenkins-ci.plugins'
- name: 'ec2'
version: '1.28'
group: 'org.jenkins-ci.plugins'
- name: 'envinject'
version: '2.0'
group: 'org.jenkins-ci.plugins'
- name: 'exclusive-execution'
version: '0.8'
group: 'org.jenkins-ci.plugins'
- name: 'flexible-publish'
version: '0.15.2'
group: 'org.jenkins-ci.plugins'
- name: 'ghprb'
version: '1.36.0'
group: 'org.jenkins-ci.plugins'
- name: 'github'
version: '1.26.0'
group: 'com.coravy.hudson.plugins.github'
- name: 'github-oauth'
version: '0.24'
group: 'org.jenkins-ci.plugins'
- name: 'gradle'
version: '1.24'
group: 'org.jenkins-ci.plugins'
- name: 'groovy'
version: '2.0'
group: 'org.jenkins-ci.plugins'
- name: 'groovy-postbuild'
version: '2.2'
group: 'org.jvnet.hudson.plugins'
- name: 'hipchat'
version: '0.1.9'
group: 'org.jvnet.hudson.plugins'
- name: 'hockeyapp'
version: '1.2.1'
group: 'org.jenkins-ci.plugins'
- name: 'htmlpublisher'
version: '1.10'
group: 'org.jenkins-ci.plugins'
- name: 'jobConfigHistory'
version: '2.10'
group: 'org.jenkins-ci.plugins'
- name: 'job-dsl'
version: '1.45'
group: 'org.jenkins-ci.plugins'
- name: 'mask-passwords'
version: '2.8'
group: 'org.jenkins-ci.plugins'
- name: 'monitoring'
version: '1.56.0'
group: 'org.jvnet.hudson.plugins'
- name: 'multiple-scms'
version: '0.6'
group: 'org.jenkins-ci.plugins'
- name: 'nodelabelparameter'
version: '1.7.2'
group: 'org.jenkins-ci.plugins'
- name: 'parameterized-trigger'
version: '2.25'
group: 'org.jenkins-ci.plugins'
- name: 'PrioritySorter'
version: '2.9'
group: 'org.jenkins-ci.plugins'
- name: 'rebuild'
version: '1.25'
group: 'com.sonyericsson.hudson.plugins.rebuild'
- name: 'run-condition'
version: '1.0'
group: 'org.jenkins-ci.plugins'
- name: 'shiningpanda'
version: '0.21'
group: 'org.jenkins-ci.plugins'
- name: 'ssh-agent'
version: '1.14'
group: 'org.jenkins-ci.plugins'
- name: 'text-finder'
version: '1.10'
group: 'org.jenkins-ci.plugins'
- name: 'thinBackup'
version: '1.7.4'
group: 'org.jvnet.hudson.plugins'
- name: 'timestamper'
version: '1.5.15'
group: 'org.jenkins-ci.plugins'
- name: 'violations'
version: '0.7.11'
group: 'org.jenkins-ci.plugins'
- name: 'xunit'
version: '1.93'
group: 'org.jenkins-ci.plugins'
# ghprb
build_jenkins_ghprb_white_list_phrase: '.*[Aa]dd\W+to\W+whitelist.*'
build_jenkins_ghprb_ok_phrase: '.*ok\W+to\W+test.*'
build_jenkins_ghprb_retest_phrase: '.*jenkins\W+run\W+all.*'
build_jenkins_ghprb_skip_phrase: '.*\[[Ss]kip\W+ci\].*'
build_jenkins_ghprb_cron_schedule: 'H/5 * * * *'
# github
JENKINS_GITHUB_CONFIG: ''
# hipchat
build_jenkins_hipchat_room: 'testeng'
# ec2
build_jenkins_instance_cap: '250'
# seed
build_jenkins_seed_name: 'manually_seed_one_job'
# logs
build_jenkins_log_list:
- LOG_RECORDER: 'Ghprb'
LOGGERS:
- name: 'org.jenkinsci.plugins.ghprb.GhprbPullRequest'
log_level: 'ALL'
- name: 'org.jenkinsci.plugins.ghprb.GhprbRootAction'
log_level: 'ALL'
- name: 'org.jenkinsci.plugins.ghprb.GhprbRepository'
log_level: 'ALL'
- name: 'org.jenkinsci.plugins.ghprb.GhprbGitHub'
log_level: 'ALL'
- name: 'org.jenkinsci.plugins.ghprb.Ghprb'
log_level: 'ALL'
- name: 'org.jenkinsci.plugins.ghprb.GhprbTrigger'
log_level: 'ALL'
- name: 'org.jenkinsci.plugins.ghprb.GhprbBuilds'
log_level: 'ALL'
- LOG_RECORDER: 'GithubPushLogs'
LOGGERS:
- name: 'com.cloudbees.jenkins.GitHubPushTrigger'
log_level: 'ALL'
- name: 'org.jenkinsci.plugins.github.webhook.WebhookManager'
log_level: 'ALL'
- name: 'com.cloudbees.jenkins.GitHubWebHook'
log_level: 'ALL'
- name: 'hudson.plugins.git.GitSCM'
log_level: 'ALL'
# job config history
build_jenkins_history_max_days: '15'
build_jenkins_history_exclude_pattern: 'queue|nodeMonitors|UpdateCenter|global-build-stats|GhprbTrigger'
---
dependencies:
- common
- role: jenkins_common
jenkins_common_version: '{{ build_jenkins_version }}'
jenkins_common_user_uid: '{{ build_jenkins_user_uid }}'
jenkins_common_group_gid: '{{ build_jenkins_group_gid }}'
jenkins_common_jvm_args: '{{ build_jenkins_jvm_args }}'
jenkins_common_configuration_scripts: '{{ build_jenkins_configuration_scripts }}'
jenkins_common_template_files: '{{ build_jenkins_template_files }}'
jenkins_common_plugins_list: '{{ build_jenkins_plugins_list }}'
jenkins_common_ghprb_white_list_phrase: '{{ build_jenkins_ghprb_white_list_phrase }}'
jenkins_common_ghprb_ok_phrase: '{{ build_jenkins_ghprb_ok_phrase }}'
jenkins_common_ghprb_retest_phrase: '{{ build_jenkins_ghprb_retest_phrase }}'
jenkins_common_ghprb_skip_phrase: '{{ build_jenkins_ghprb_skip_phrase }}'
jenkins_common_ghprb_cron_schedule: '{{ build_jenkins_ghprb_cron_schedule }}'
jenkins_common_github_configs: '{{ JENKINS_GITHUB_CONFIG }}'
jenkins_common_hipchat_room: '{{ build_jenkins_hipchat_room }}'
jenkins_common_instance_cap: '{{ build_jenkins_instance_cap }}'
jenkins_common_seed_name: '{{ build_jenkins_seed_name }}'
jenkins_common_log_list: '{{ build_jenkins_log_list }}'
jenkins_common_history_max_days: '{{ build_jenkins_history_max_days }}'
jenkins_common_history_exclude_pattern: '{{ build_jenkins_history_exclude_pattern }}'
jenkins_common_server_name: '{{ JENKINS_SERVER_NAME }}'
jenkins_common_user: jenkins
jenkins_common_group: jenkins
jenkins_common_home: /var/lib/jenkins
jenkins_common_config_path: '{{ jenkins_common_home }}/init-configs'
jenkins_common_port: 8080
jenkins_common_version: jenkins_1.651.3
jenkins_common_war_source: https://s3.amazonaws.com/edx-testeng-tools/jenkins
jenkins_common_nginx_port: 80
jenkins_common_protocol_https: true
JENKINS_SERVER_NAME: jenkins.example.org
jenkins_common_debian_pkgs:
- nginx
- git
- curl
- maven
- daemon
- psmisc
jenkins_common_configuration_git_url: https://github.com/edx/jenkins-configuration.git
jenkins_common_jenkins_configuration_branch: master
jenkins_common_configuration_src_path: src/main/groovy
jenkins_common_git_home: '{{ jenkins_common_home }}/git'
jenkins_common_configuration_scripts: []
jenkins_common_non_plugin_template_files:
- credentials
- ec2_config
- ghprb_config
- git_config
- github_config
- hipchat_config
- job_config_history
- log_config
- mailer_config
- main_config
- mask_passwords_config
- properties_config
- security
- seed_config
# Jenkins default config values
jenkins_common_jvm_args: ''
# main
jenkins_common_main_system_message: ''
jenkins_common_main_num_executors: 1
jenkins_common_main_labels:
- 'dsl-seed-runner'
- 'backup-runner'
jenkins_common_main_quiet_period: 5
jenkins_common_main_scm_retry: 2
jenkins_common_main_disable_remember: true
jenkins_common_main_env_vars:
- NAME: 'BROWSERMOB_PROXY_PORT'
VALUE: '9090'
- NAME: 'GITHUB_OWNER_WHITELIST'
VALUE: '{{ JENKINS_MAIN_GITHUB_OWNER_WHITELIST }}'
jenkins_common_main_executable: '/bin/bash'
jenkins_common_formatter_type: 'rawhtml'
jenkins_common_disable_syntax_highlighting: false
# system properties
jenkins_common_system_properties:
- KEY: "hudson.footerURL"
VALUE: "http://www.example.com"
JENKINS_MAIN_URL: 'https://jenkins.example.org/'
JENKINS_MAIN_ADMIN_EMAIL: 'jenkins <admin@example.org>'
# plugins
jenkins_common_plugins_list: []
# ec2
jenkins_common_use_instance_profile_for_creds: false
jenkins_common_instance_cap: ''
JENKINS_EC2_PRIVATE_KEY: ''
JENKINS_EC2_REGION: ''
JENKINS_EC2_ACCESS_KEY_ID: ''
JENKINS_EC2_SECRET_ACCESS_KEY: ''
JENKINS_EC2_AMIS: []
# ghprb
jenkins_common_ghprb_server: 'https://api.github.com'
jenkins_common_ghprb_request_testing: ''
jenkins_common_ghprb_white_list_phrase: ''
jenkins_common_ghprb_ok_phrase: ''
jenkins_common_ghprb_retest_phrase: ''
jenkins_common_ghprb_skip_phrase: ''
jenkins_common_ghprb_cron_schedule: ''
jenkins_common_ghprb_use_comments: false
jenkins_common_ghprb_use_detailed_comments: false
jenkins_common_ghprb_manage_webhooks: false
jenkins_common_ghprb_failure_as: 'failure'
jenkins_common_ghprb_auto_close_fails: false
jenkins_common_ghprb_display_errors: false
jenkins_common_ghprb_github_auth: ''
jenkins_common_ghprb_simple_status: ''
jenkins_common_ghprb_publish_jenkins_url: ''
jenkins_common_ghprb_build_log_lines:
jenkins_common_ghprb_results:
- STATUS: 'FAILURE'
MESSAGE: 'Test FAILed.'
- STATUS: 'SUCCESS'
MESSAGE: 'Test PASSed.'
JENKINS_GHPRB_ADMIN_LIST: []
JENKINS_GHPRB_CREDENTIAL_ID: ''
JENKINS_GHPRB_SHARED_SECRET: ''
# credentials
JENKINS_SECRET_FILES_LIST: []
JENKINS_USERNAME_PASSWORD_LIST: []
JENKINS_SECRET_TEXT_LIST: []
JENKINS_CERTIFICATES_LIST: []
JENKINS_MASTER_SSH_LIST: []
JENKINS_CUSTOM_SSH_LIST: []
# security
jenkins_common_security_scopes: 'read:org,user:email'
JENKINS_SECURITY_CLIENT_ID: ''
JENKINS_SECURITY_CLIENT_SECRET: ''
JENKINS_SECURITY_GROUPS: []
# git
JENKINS_GIT_NAME: 'jenkins'
JENKINS_GIT_EMAIL: 'jenkins@example.com'
# github
jenkins_common_github_configs:
- CREDENTIAL_ID: ''
MANAGE_HOOKS: false
USE_CUSTOM_API_URL: false
GITHUB_API_URL: ''
CACHE_SIZE: 20
# hipchat
jenkins_common_hipchat_room: ''
jenkins_common_hipchat_v2_enabled: true
JENKINS_HIPCHAT_API_TOKEN: ''
# seed
jenkins_common_seed_name: 'seed_job'
jenkins_common_seed_path: '{{ jenkins_common_config_path }}/xml/seed_job.xml'
# logs
jenkins_common_log_list:
- LOG_RECORDER: 'Sample Log'
LOGGERS:
- name: 'org.jenkinsci.plugins.example.Class'
log_level: 'ALL'
# job config history
jenkins_common_history_root: ''
jenkins_common_history_max_entries: ''
jenkins_common_history_max_days: ''
jenkins_common_history_max_entries_page: ''
jenkins_common_history_skip_duplicates: true
jenkins_common_history_exclude_pattern: ''
jenkins_common_history_save_module_config: false
jenkins_common_history_show_build_badges: 'always'
jenkins_common_history_excluded_users: ''
# mailer
jenkins_common_mailer_port: 465
jenkins_common_mailer_use_ssl: true
jenkins_common_mailer_char_set: 'UTF-8'
JENKINS_MAILER_SMTP_SERVER: ''
JENKINS_MAILER_REPLY_TO_ADDRESS: 'jenkins'
JENKINS_MAILER_DEFAULT_SUFFIX: '@example.com'
JENKINS_MAILER_SMTP_AUTH_USERNAME: ''
JENKINS_MAILER_SMTP_AUTH_PASSWORD: ''
# mask passwords
JENKINS_MASK_PASSWORDS_CLASSES: []
JENKINS_MASK_PASSWORDS_PAIRS: []
# This confirms that mongo is running and is accessible on localhost
# It could expose internal network problems, in which case the worker should not be used
# Mongo seems to spend a bit of time starting.
i=0
while [ $i -lt 45 ]; do
mongo --quiet --eval 'db.getMongo().getDBNames()' 2>/dev/null 1>&2
if [ $? -eq 0 ]; then
break
else
sleep 2
i=$((i+1))
fi
done
mongo --quiet --eval 'db.getMongo().getDBNames()'
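# The exit status of this final check is what callers should inspect; a
# non-zero status after the ~90 second retry window above means mongo never
# became reachable and the worker should not be used.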
<?xml version='1.0' encoding='UTF-8'?>
<project>
<actions/>
<description>Run one dsl job at a time.</description>
<keepDependencies>false</keepDependencies>
<properties>
<jenkins.model.BuildDiscarderProperty>
<strategy class="hudson.tasks.LogRotator">
<daysToKeep>-1</daysToKeep>
<numToKeep>20</numToKeep>
<artifactDaysToKeep>-1</artifactDaysToKeep>
<artifactNumToKeep>-1</artifactNumToKeep>
</strategy>
</jenkins.model.BuildDiscarderProperty>
<hudson.model.ParametersDefinitionProperty>
<parameterDefinitions>
<hudson.model.StringParameterDefinition>
<name>DSL_SCRIPT</name>
<description>Path to dsl script to run, from the root of the https://github.com/edx/jenkins-job-dsl repo (e.g. sample/jobs/sampleJob.groovy)</description>
<defaultValue>sample/jobs/sampleJob.groovy</defaultValue>
</hudson.model.StringParameterDefinition>
<hudson.model.StringParameterDefinition>
<name>BRANCH</name>
<description>Branch of jenkins-job-dsl repo to use</description>
<defaultValue>*/master</defaultValue>
</hudson.model.StringParameterDefinition>
</parameterDefinitions>
</hudson.model.ParametersDefinitionProperty>
</properties>
<scm class="hudson.plugins.git.GitSCM" plugin="git@2.2.4">
<configVersion>2</configVersion>
<userRemoteConfigs>
<hudson.plugins.git.UserRemoteConfig>
<url>https://github.com/edx/jenkins-job-dsl.git</url>
</hudson.plugins.git.UserRemoteConfig>
</userRemoteConfigs>
<branches>
<hudson.plugins.git.BranchSpec>
<name>${BRANCH}</name>
</hudson.plugins.git.BranchSpec>
</branches>
<doGenerateSubmoduleConfigurations>false</doGenerateSubmoduleConfigurations>
<submoduleCfg class="list"/>
<extensions/>
</scm>
<canRoam>true</canRoam>
<disabled>false</disabled>
<blockBuildWhenDownstreamBuilding>false</blockBuildWhenDownstreamBuilding>
<blockBuildWhenUpstreamBuilding>false</blockBuildWhenUpstreamBuilding>
<triggers/>
<concurrentBuild>false</concurrentBuild>
<builders>
<hudson.tasks.Shell>
<command>#!/usr/bin/env bash
# exit if the user-supplied parameter does not exist
if [ ! -e "${DSL_SCRIPT}" ]; then
echo &quot;DSL Script &apos;${DSL_SCRIPT}&apos; does not exist. Please try again.&quot;
exit 1
fi
</command>
</hudson.tasks.Shell>
<hudson.plugins.gradle.Gradle plugin="gradle@1.24">
<description>Run the libs and assemble gradle tasks.</description>
<switches></switches>
<tasks>libs
assemble</tasks>
<rootBuildScriptDir></rootBuildScriptDir>
<buildFile></buildFile>
<gradleName>(Default)</gradleName>
<useWrapper>true</useWrapper>
<makeExecutable>true</makeExecutable>
<fromRootBuildScriptDir>true</fromRootBuildScriptDir>
<useWorkspaceAsHome>true</useWorkspaceAsHome>
</hudson.plugins.gradle.Gradle>
<javaposse.jobdsl.plugin.ExecuteDslScripts plugin="job-dsl@1.45">
<targets>${DSL_SCRIPT}</targets>
<usingScriptText>false</usingScriptText>
<ignoreExisting>false</ignoreExisting>
<removedJobAction>IGNORE</removedJobAction>
<removedViewAction>IGNORE</removedViewAction>
<lookupStrategy>JENKINS_ROOT</lookupStrategy>
<additionalClasspath>lib/snakeyaml-1.17.jar
src/main/groovy</additionalClasspath>
</javaposse.jobdsl.plugin.ExecuteDslScripts>
</builders>
<publishers/>
<buildWrappers/>
</project>
---
dependencies:
- common
- role: nginx
nginx_app_dir: "/etc/nginx"
nginx_log_dir: "/var/log/nginx"
nginx_data_dir: "{{ nginx_app_dir }}"
nginx_conf_dir: "{{ nginx_app_dir }}/conf.d"
nginx_sites_available_dir: "{{ nginx_app_dir }}/sites-available"
nginx_sites_enabled_dir: "{{ nginx_app_dir }}/sites-enabled"
nginx_server_static_dir: "{{ nginx_data_dir }}/server-static"
nginx_htpasswd_file: "{{ nginx_app_dir }}/nginx.htpasswd"
nginx_default_sites: "jenkins"
nginx_template_dir: "etc/nginx/sites-available"
nginx_sites: jenkins
jenkins_nginx_port: "{{ jenkins_common_nginx_port }}"
jenkins_server_name: "{{ JENKINS_SERVER_NAME }}"
jenkins_port: "{{ jenkins_common_port }}"
jenkins_protocol_https: "{{ jenkins_common_protocol_https }}"
tags: jenkins:promote-to-production
- role: oraclejdk
tags: java
---
- name: Install jenkins specific system packages
apt:
name: '{{ item }}'
state: present
update_cache: yes
with_items: '{{ jenkins_common_debian_pkgs }}'
tags:
- jenkins
- install
- install:system-requirements
- name: Create jenkins group with specified gid
group:
name: '{{ jenkins_common_group }}'
gid: '{{ jenkins_common_group_gid }}'
state: present
when: jenkins_common_group_gid is defined
tags:
- install
- install:system-requirements
- name: Create jenkins group
group:
name: '{{ jenkins_common_group }}'
state: present
when: jenkins_common_group_gid is not defined or not jenkins_common_group_gid
tags:
- install
- install:system-requirements
- name: Create the jenkins user with specified uid and add to the group
user:
name: '{{ jenkins_common_user }}'
append: yes
uid: '{{ jenkins_common_user_uid }}'
groups: '{{ jenkins_common_group }}'
when: jenkins_common_user_uid is defined
tags:
- install
- install:system-requirements
- name: Create the jenkins user and add to the group
user:
name: '{{ jenkins_common_user }}'
append: yes
groups: '{{ jenkins_common_group }}'
when: jenkins_common_user_uid is not defined or not jenkins_common_user_uid
tags:
- install
- install:system-requirements
- name: Create necessary folders
file:
path: '{{ item }}'
state: directory
owner: '{{ jenkins_common_user }}'
group: '{{ jenkins_common_group }}'
with_items:
- /usr/share/jenkins
- '{{ jenkins_common_home }}/init.groovy.d'
- '{{ jenkins_common_config_path }}'
- '{{ jenkins_common_home }}/utils'
- '{{ jenkins_common_home }}/plugins'
- '{{ jenkins_common_git_home }}'
- /var/log/jenkins
- /var/cache/jenkins
tags:
- install
- install:base
- name: Download Jenkins war file
get_url:
url: '{{ jenkins_common_war_source }}/{{ jenkins_common_version }}.war'
dest: /usr/share/jenkins/jenkins.war
owner: '{{ jenkins_common_user }}'
group: '{{ jenkins_common_group }}'
force: yes
tags:
- install
- install:app-requirements
- name: Add Jenkins systemd configuration
template:
src: "etc/systemd/system/jenkins.service.j2"
dest: "/etc/systemd/system/jenkins.service"
tags:
- install
- install:system-requirements
- name: Configure logrotate for jenkins application log
template:
src: "etc/logrotate.d/jenkins_log.j2"
dest: "/etc/logrotate.d/jenkins"
tags:
- install
- install:system-requirements
- name: Add env vars
template:
src: "jenkins-env.sh.j2"
dest: "/etc/profile.d/jenkins-env.sh"
owner: root
group: root
mode: "0755"
tags:
- install
- install:base
- name: Download jenkins-configuration repo
git:
repo: '{{ jenkins_common_configuration_git_url }}'
dest: '{{ jenkins_common_git_home }}/jenkins-configuration'
version: '{{ jenkins_common_jenkins_configuration_branch }}'
become: true
become_user: '{{ jenkins_common_user }}'
tags:
- install
- install:base
- install:jenkins-configuration
- name: Run gradle libs
shell: './gradlew libs'
args:
chdir: '{{ jenkins_common_git_home }}/jenkins-configuration'
environment:
UTILS_PATH: '{{ jenkins_common_home }}/utils'
JENKINS_VERSION: '{{ jenkins_common_version }}'
become: true
become_user: '{{ jenkins_common_user }}'
tags:
- install
- install:base
- install:jenkins-configuration
- name: Copy init scripts into init.groovy.d
command: 'cp {{ jenkins_common_git_home }}/jenkins-configuration/{{ jenkins_common_configuration_src_path }}/{{ item }} {{ jenkins_common_home }}/init.groovy.d/'
with_items: '{{ jenkins_common_configuration_scripts }}'
become: true
become_user: '{{ jenkins_common_user }}'
tags:
- install
- install:base
- install:jenkins-configuration
- name: Create jenkins config sub folders
file:
path: '{{ item }}'
state: directory
owner: '{{ jenkins_common_user }}'
group: '{{ jenkins_common_group }}'
with_items:
- '{{ jenkins_common_config_path }}/credentials'
- '{{ jenkins_common_config_path }}/ec2'
- '{{ jenkins_common_config_path }}/xml'
tags:
- install
- install:base
- name: Copy non plugins template files
template:
src: '{{ role_path }}/templates/config/{{ item }}.yml.j2'
dest: '{{ jenkins_common_config_path }}/{{ item }}.yml'
owner: '{{ jenkins_common_user }}'
group: '{{ jenkins_common_group }}'
with_items: '{{ jenkins_common_non_plugin_template_files }}'
register: templates_copied
tags:
- install
- install:base
- install:jenkins-configuration
- name: Update Github OAUTH settings when promoting jenkins instance to production
template:
src: '{{ role_path }}/templates/config/security.yml.j2'
dest: '{{ jenkins_common_config_path }}/security.yml'
owner: '{{ jenkins_common_user }}'
group: '{{ jenkins_common_group }}'
when: '"security" in jenkins_common_non_plugin_template_files and templates_copied is not defined'
tags:
- jenkins:promote-to-production
- name: Copy plugins.yml config file
template:
src: '{{ role_path }}/templates/config/plugins.yml.j2'
dest: '{{ jenkins_common_config_path }}/plugins.yml'
owner: '{{ jenkins_common_user }}'
group: '{{ jenkins_common_group }}'
tags:
- install
- install:base
- install:plugins
- install:jenkins-configuration
- name: Copy ec2 config files
template:
src: '{{ item }}'
dest: '{{ jenkins_common_config_path }}/ec2/'
owner: '{{ jenkins_common_user }}'
group: '{{ jenkins_common_group }}'
with_fileglob:
- '{{ role_path }}/files/ec2/*'
tags:
- install
- install:base
- install:jenkins-configuration
- name: Copy xml config files
template:
src: '{{ item }}'
dest: '{{ jenkins_common_config_path }}/xml/'
owner: '{{ jenkins_common_user }}'
group: '{{ jenkins_common_group }}'
with_fileglob:
- '{{ role_path }}/files/xml/*'
tags:
- install
- install:base
- install:jenkins-configuration
- name: Run plugins.gradle
shell: './gradlew -b plugins.gradle plugins'
args:
chdir: '{{ jenkins_common_git_home }}/jenkins-configuration'
environment:
PLUGIN_OUTPUT_DIR: '{{ jenkins_common_home }}/plugins'
PLUGIN_CONFIG: '{{ jenkins_common_config_path }}/plugins.yml'
become: true
become_user: '{{ jenkins_common_user }}'
tags:
- install
- install:base
- install:plugins
- install:jenkins-configuration
- name: Copy secret file credentials
copy:
content: "{{ item.content }}"
dest: '{{ jenkins_common_config_path }}/credentials/{{ item.name }}'
with_items: '{{ JENKINS_SECRET_FILES_LIST }}'
no_log: yes
tags:
- install
- install:base
- install:jenkins-configuration
- name: Copy ssh key credentials
copy:
content: "{{ item.content }}"
dest: '{{ jenkins_common_config_path }}/credentials/{{ item.name }}'
owner: '{{ jenkins_common_user }}'
group: '{{ jenkins_common_group }}'
with_items: '{{ JENKINS_CUSTOM_SSH_LIST }}'
no_log: yes
tags:
- install
- install:base
- install:jenkins-configuration
- name: Copy ec2 key
copy:
content: '{{ JENKINS_EC2_PRIVATE_KEY }}'
dest: '{{ jenkins_common_config_path }}/ec2/id_rsa'
owner: '{{ jenkins_common_user }}'
group: '{{ jenkins_common_group }}'
no_log: yes
tags:
- install
- install:base
- install:jenkins-configuration
- name: Start Jenkins Service
systemd:
name: jenkins
daemon_reload: yes
state: restarted
tags:
- manage
- manage:start
- install:plugins
- install:jenkins-configuration
- jenkins:promote-to-production
---
{% for file in JENKINS_SECRET_FILES_LIST %}
- credentialType: 'secretFile'
scope: '{{ file.scope }}'
name: '{{ file.name }}'
path: 'credentials/{{ file.name }}'
description: '{{ file.description }}'
id: '{{ file.id }}'
{% endfor %}
{% for userPass in JENKINS_USERNAME_PASSWORD_LIST %}
- credentialType: 'usernamePassword'
scope: '{{ userPass.scope }}'
username: '{{ userPass.username }}'
password: '{{ userPass.password }}'
description: '{{ userPass.description }}'
id: '{{ userPass.id }}'
{% endfor %}
{% for text in JENKINS_SECRET_TEXT_LIST %}
- credentialType: 'secretText'
scope: '{{ text.scope }}'
secretText: '{{ text.secretText }}'
description: '{{ text.description }}'
id: '{{ text.id }}'
{% endfor %}
{% for cert in JENKINS_CERTIFICATES_LIST %}
- credentialType: 'certificate'
scope: '{{ cert.scope }}'
path: '{{ cert.path }}'
password: '{{ cert.password }}'
description: '{{ cert.description }}'
id: '{{ cert.id }}'
{% endfor %}
{% for master_ssh in JENKINS_MASTER_SSH_LIST %}
- credentialType: 'ssh'
scope: '{{ master_ssh.scope }}'
username: '{{ master_ssh.username }}'
isJenkinsMasterSsh: true
passphrase: '{{ master_ssh.passphrase }}'
description: '{{ master_ssh.description }}'
id: '{{ master_ssh.id }}'
{% endfor %}
{% for custom_ssh in JENKINS_CUSTOM_SSH_LIST %}
- credentialType: 'ssh'
scope: '{{ custom_ssh.scope }}'
username: '{{ custom_ssh.username }}'
isJenkinsMasterSsh: false
path: 'credentials/{{ custom_ssh.name }}'
passphrase: '{{ custom_ssh.passphrase }}'
description: '{{ custom_ssh.description }}'
id: '{{ custom_ssh.id }}'
{% endfor %}
---
CLOUDS:
- NAME: '{{ JENKINS_EC2_REGION }}'
ACCESS_KEY_ID: '{{ JENKINS_EC2_ACCESS_KEY_ID }}'
SECRET_ACCESS_KEY: '{{ JENKINS_EC2_SECRET_ACCESS_KEY }}'
USE_INSTANCE_PROFILE_FOR_CREDS: {{ jenkins_common_use_instance_profile_for_creds }}
REGION: '{{ JENKINS_EC2_REGION }}'
EC2_PRIVATE_KEY_PATH: '{{ jenkins_common_config_path }}/ec2/id_rsa'
INSTANCE_CAP: '{{ jenkins_common_instance_cap }}'
AMIS:
{% for ami in JENKINS_EC2_AMIS %}
- AMI_ID: '{{ ami.AMI_ID }}'
AVAILABILITY_ZONE: '{{ ami.AVAILABILITY_ZONE }}'
SPOT_CONFIG:
SPOT_MAX_BID_PRICE: '{{ ami.SPOT_CONFIG.SPOT_MAX_BID_PRICE }}'
SPOT_INSTANCE_BID_TYPE: '{{ ami.SPOT_CONFIG.SPOT_INSTANCE_BID_TYPE }}'
SECURITY_GROUPS: '{{ ami.SECURITY_GROUPS }}'
REMOTE_FS_ROOT: '{{ ami.REMOTE_FS_ROOT }}'
SSH_PORT: '{{ ami.SSH_PORT }}'
INSTANCE_TYPE: '{{ ami.INSTANCE_TYPE }}'
LABEL_STRING: '{{ ami.LABEL_STRING }}'
MODE: '{{ ami.MODE }}'
DESCRIPTION: '{{ ami.DESCRIPTION }}'
INIT_SCRIPT_PATH: '{{ ami.INIT_SCRIPT_PATH }}'
TEMP_DIR: '{{ ami.TEMP_DIR }}'
USER_DATA: '{{ ami.USER_DATA }}'
NUM_EXECUTORS: '{{ ami.NUM_EXECUTORS }}'
REMOTE_ADMIN: '{{ ami.REMOTE_ADMIN }}'
ROOT_COMMAND_PREFIX: '{{ ami.ROOT_COMMAND_PREFIX }}'
JVM_OPTIONS: '{{ ami.JVM_OPTIONS }}'
STOP_ON_TERMINATE: {{ ami.STOP_ON_TERMINATE }}
SUBNET_ID: '{{ ami.SUBNET_ID }}'
TAGS:
{% for tag in ami.TAGS %}
- NAME: '{{ tag.NAME }}'
VALUE: '{{ tag.VALUE }}'
{% endfor %}
IDLE_TERMINATION_MINUTES: '{{ ami.IDLE_TERMINATION_MINUTES }}'
USE_PRIVATE_DNS_NAME: {{ ami.USE_PRIVATE_DNS_NAME }}
INSTANCE_CAP: '{{ ami.INSTANCE_CAP }}'
IAM_INSTANCE_PROFILE: '{{ ami.IAM_INSTANCE_PROFILE }}'
USE_EPHEMERAL_DEVICES: {{ ami.USE_EPHEMERAL_DEVICES }}
LAUNCH_TIMEOUT: '{{ ami.LAUNCH_TIMEOUT }}'
{% endfor %}
---
SERVER_API_URL: '{{ jenkins_common_ghprb_server }}'
ADMIN_LIST:
{% for admin in JENKINS_GHPRB_ADMIN_LIST %}
- '{{ admin }}'
{% endfor %}
REQUEST_TESTING_PHRASE: '{{ jenkins_common_ghprb_request_testing }}'
WHITE_LIST_PHRASE: '{{ jenkins_common_ghprb_white_list_phrase }}'
OK_PHRASE: '{{ jenkins_common_ghprb_ok_phrase }}'
RETEST_PHRASE: '{{ jenkins_common_ghprb_retest_phrase }}'
SKIP_PHRASE: '{{ jenkins_common_ghprb_skip_phrase }}'
CRON_SCHEDULE: '{{ jenkins_common_ghprb_cron_schedule }}'
USE_COMMENTS: {{ jenkins_common_ghprb_use_comments }}
USE_DETAILED_COMMENTS: {{ jenkins_common_ghprb_use_detailed_comments }}
MANAGE_WEBHOOKS: {{ jenkins_common_ghprb_manage_webhooks }}
UNSTABLE_AS: '{{ jenkins_common_ghprb_failure_as }}'
AUTO_CLOSE_FAILED_PRS: {{ jenkins_common_ghprb_auto_close_fails }}
DISPLAY_ERRORS_DOWNSTREAM: {{ jenkins_common_ghprb_display_errors }}
BLACK_LIST_LABELS:
{% for blacklist in JENKINS_GHPRB_BLACK_LIST %}
- '{{ blacklist }}'
{% endfor %}
WHITE_LIST_LABELS:
{% for whitelist in JENKINS_GHPRB_WHITE_LIST %}
- '{{ whitelist }}'
{% endfor %}
GITHUB_AUTH: '{{ jenkins_common_ghprb_github_auth }}'
SIMPLE_STATUS: '{{ jenkins_common_ghprb_simple_status }}'
PUBLISH_JENKINS_URL: '{{ jenkins_common_ghprb_publish_jenkins_url }}'
BUILD_LOG_LINES_TO_DISPLAY: {{ jenkins_common_ghprb_build_log_lines }}
RESULT_MESSAGES:
{% for message in jenkins_common_ghprb_results %}
- STATUS: '{{ message.STATUS }}'
MESSAGE: '{{ message.MESSAGE }}'
{% endfor %}
CREDENTIALS_ID: '{{ JENKINS_GHPRB_CREDENTIAL_ID }}'
SHARED_SECRET: '{{ JENKINS_GHPRB_SHARED_SECRET }}'
---
NAME: '{{ JENKINS_GIT_NAME }}'
EMAIL: '{{ JENKINS_GIT_EMAIL }}'
---
{% for config in jenkins_common_github_configs %}
- CREDENTIAL_ID: '{{ config.CREDENTIAL_ID }}'
MANAGE_HOOKS: '{{ config.MANAGE_HOOKS }}'
USE_CUSTOM_API_URL: '{{ config.USE_CUSTOM_API_URL }}'
API_URL: '{{ config.GITHUB_API_URL }}'
CACHE_SIZE: {{ config.CACHE_SIZE }}
{% endfor %}
---
API_TOKEN: '{{ JENKINS_HIPCHAT_API_TOKEN }}'
ROOM: '{{ jenkins_common_hipchat_room }}'
V2_ENABLED: {{ jenkins_common_hipchat_v2_enabled }}
---
HISTORY_ROOT_DIR: '{{ jenkins_common_history_root }}'
MAX_HISTORY_ENTRIES: '{{ jenkins_common_history_max_entries }}'
MAX_DAYS_TO_KEEP_ENTRIES: '{{ jenkins_common_history_max_days }}'
MAX_ENTRIES_PER_PAGE: '{{ jenkins_common_history_max_entries_page }}'
SKIP_DUPLICATE_HISTORY: '{{ jenkins_common_history_skip_duplicates }}'
EXCLUDE_PATTERN: '{{ jenkins_common_history_exclude_pattern }}'
SAVE_MODULE_CONFIGURATION: '{{ jenkins_common_history_save_module_config }}'
SHOW_BUILD_BADGES: '{{ jenkins_common_history_show_build_badges }}'
EXCLUDED_USERS: '{{ jenkins_common_history_excluded_users }}'
---
{% for recorder in jenkins_common_log_list %}
- LOG_RECORDER: '{{ recorder.LOG_RECORDER }}'
LOGGERS:
{% for log in recorder.LOGGERS %}
- name: '{{ log.name }}'
log_level: '{{ log.log_level }}'
{% endfor %}
{% endfor %}
---
SMTP_SERVER: '{{ JENKINS_MAILER_SMTP_SERVER }}'
REPLY_TO_ADDRESS: '{{ JENKINS_MAILER_REPLY_TO_ADDRESS }}'
DEFAULT_SUFFIX: '{{ JENKINS_MAILER_DEFAULT_SUFFIX }}'
SMTP_AUTH_USERNAME: '{{ JENKINS_MAILER_SMTP_AUTH_USERNAME }}'
SMTP_AUTH_PASSWORD: '{{ JENKINS_MAILER_SMTP_AUTH_PASSWORD }}'
SMTP_PORT: '{{ jenkins_common_mailer_port }}'
USE_SSL: '{{ jenkins_common_mailer_use_ssl }}'
CHAR_SET: '{{ jenkins_common_mailer_char_set }}'
---
MAIN:
WORKSPACE_ROOT_DIR: '${ITEM_ROOTDIR}/workspace'
BUILD_RECORD_ROOT_DIR: '${ITEM_ROOTDIR}/builds'
SYSTEM_MESSAGE: '{{ jenkins_common_main_system_message }}'
NUMBER_OF_EXECUTORS: {{ jenkins_common_main_num_executors }}
LABELS:
{% for label in jenkins_common_main_labels %}
- '{{ label }}'
{% endfor %}
USAGE: 'EXCLUSIVE'
QUIET_PERIOD: {{ jenkins_common_main_quiet_period }}
SCM_RETRY_COUNT: {{ jenkins_common_main_scm_retry }}
DISABLE_REMEMBER_ME: {{ jenkins_common_main_disable_remember }}
GLOBAL_PROPERTIES:
ENVIRONMENT_VARIABLES:
{% for env in jenkins_common_main_env_vars %}
- NAME: '{{ env.NAME }}'
VALUE: '{{ env.VALUE }}'
{% endfor %}
TOOL_LOCATIONS:
LOCATION:
URL: '{{ JENKINS_MAIN_URL }}'
ADMIN_EMAIL: '{{ JENKINS_MAIN_ADMIN_EMAIL }}'
SHELL:
EXECUTABLE: '{{ jenkins_common_main_executable }}'
FORMATTER:
FORMATTER_TYPE: '{{ jenkins_common_formatter_type }}'
DISABLE_SYNTAX_HIGHLIGHTING: {{ jenkins_common_disable_syntax_highlighting }}
CLI:
CLI_ENABLED: false
---
MASKED_PARAMETER_CLASSES:
{% for class in JENKINS_MASK_PASSWORDS_CLASSES %}
- '{{ class }}'
{% endfor %}
NAME_PASSWORD_PAIRS:
{% for pair in JENKINS_MASK_PASSWORDS_PAIRS %}
- NAME: '{{ pair.NAME }}'
PASSWORD: '{{ pair.PASSWORD }}'
{% endfor %}
---
{% for plugin in jenkins_common_plugins_list %}
- name: '{{ plugin.name }}'
version: '{{ plugin.version }}'
group: '{{ plugin.group }}'
{% endfor %}
---
{% for key_value in jenkins_common_system_properties %}
- KEY: '{{ key_value.KEY }}'
VALUE: "{{ key_value.VALUE }}"
{% endfor %}
---
OAUTH_SETTINGS:
GITHUB_WEB_URI: 'https://github.com'
GITHUB_API_URI: 'https://api.github.com'
CLIENT_ID: '{{ JENKINS_SECURITY_CLIENT_ID }}'
CLIENT_SECRET: '{{ JENKINS_SECURITY_CLIENT_SECRET }}'
SCOPES: '{{ jenkins_common_security_scopes }}'
SECURITY_GROUPS:
{% for group in JENKINS_SECURITY_GROUPS %}
- NAME: '{{ group.NAME }}'
PERMISSIONS:
{% for permission in group.PERMISSIONS %}
- {{ permission }}
{% endfor %}
USERS:
{% for user in group.USERS %}
- {{ user }}
{% endfor %}
{% endfor %}
---
NAME: '{{ jenkins_common_seed_name }}'
XML_PATH: '{{ jenkins_common_seed_path }}'
# Put in place by ansible
/var/log/jenkins/*jenkins.log {
weekly
copytruncate
missingok
rotate 52
compress
delaycompress
notifempty
}
[Unit]
Description=Jenkins
[Service]
Type=forking
Environment="JENKINS_HOME={{ jenkins_common_home }}"
Environment="JENKINS_CONFIG_PATH={{ jenkins_common_config_path }}"
PassEnvironment=JENKINS_HOME JENKINS_CONFIG_PATH
User=jenkins
Group=jenkins
ExecStart=/usr/bin/java \
{{ jenkins_common_jvm_args }} \
-jar /usr/share/jenkins/jenkins.war \
--daemon \
--logfile=/var/log/jenkins/jenkins.log \
--webroot=/var/cache/jenkins \
--httpPort={{ jenkins_common_port }} \
--ajp13Port=-1
[Install]
WantedBy=multi-user.target
export JENKINS_HOME='{{ jenkins_common_home }}'
export JENKINS_CONFIG_PATH='{{ jenkins_common_config_path }}'
export JENKINS_VERSION='{{ jenkins_common_version }}'
export JENKINS_WAR_SOURCE='{{ jenkins_common_war_source }}'
......@@ -7,7 +7,7 @@ jenkins_nginx_port: 80
jenkins_protocol_https: true
jenkins_version: "1.638"
jenkins_deb_url: "http://pkg.jenkins-ci.org/debian/binary/jenkins_{{ jenkins_version }}_all.deb"
jenkins_deb_url: "https://pkg.jenkins.io/debian-stable/binary/jenkins_{{ jenkins_version }}_all.deb"
jenkins_deb: "jenkins_{{ jenkins_version }}_all.deb"
# Jenkins jvm args are set when starting the Jenkins service, e.g., "-Xmx1024m"
jenkins_jvm_args: ""
......
---
dependencies:
- common
- nginx
- role: oraclejdk
tags: java
oraclejdk_version: "7u51"
oraclejdk_base: "jdk1.7.0_51"
oraclejdk_build: "b13"
oraclejdk_link: "/usr/lib/jvm/java-7-oracle"
......@@ -14,7 +14,3 @@ jenkins_debian_pkgs:
# packer direct download URL
packer_url: "https://releases.hashicorp.com/packer/0.8.6/packer_0.8.6_linux_amd64.zip"
# custom firefox
custom_firefox_version: 42.0
custom_firefox_url: "https://ftp.mozilla.org/pub/firefox/releases/{{ custom_firefox_version }}/linux-x86_64/en-US/firefox-{{ custom_firefox_version }}.tar.bz2"
......@@ -61,19 +61,3 @@
# done with it
- name: Remove shallow-clone
file: path={{ jenkins_home }}/shallow-clone state=absent
# Although firefox is installed through the browsers role, install
# a newer copy under the jenkins home directory. This will allow
# platform pull requests to use a custom firefox path to a different
# version
- name: Install custom firefox to jenkins home
get_url:
url: "{{ custom_firefox_url }}"
dest: "{{ jenkins_home }}/firefox-{{ custom_firefox_version }}.tar.bz2"
- name: unpack custom firefox version
unarchive:
src: "{{ jenkins_home }}/firefox-{{ custom_firefox_version }}.tar.bz2"
dest: "{{ jenkins_home }}"
creates: "{{ jenkins_home }}/firefox"
copy: no
......@@ -40,6 +40,7 @@
keypair: "{{ keypair }}"
group: "{{ security_group }}"
instance_type: "{{ instance_type }}"
instance_initiated_shutdown_behavior: "{{ instance_initiated_shutdown_behavior }}"
image: "{{ ami }}"
vpc_subnet_id: "{{ vpc_subnet_id }}"
assign_public_ip: yes
......@@ -66,6 +67,10 @@
ttl: 300
record: "{{ dns_name }}.{{ dns_zone }}"
value: "{{ item.public_dns_name }}"
register: task_result
until: task_result|succeeded
retries: 5
delay: 30
with_items: "{{ ec2.instances }}"
- name: Add DNS names for services
......@@ -78,6 +83,10 @@
ttl: 300
record: "{{ item[1] }}-{{ dns_name }}.{{ dns_zone }}"
value: "{{ item[0].public_dns_name }}"
register: task_result
until: task_result|succeeded
retries: 5
delay: 30
with_nested:
- "{{ ec2.instances }}"
- ['studio', 'ecommerce', 'preview', 'discovery', 'credentials']
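# Route53 updates are rate-limited and eventually consistent, so the DNS tasks
# above register their results and retry up to 5 times, 30 seconds apart,
# before failing the play.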
......
......@@ -62,6 +62,13 @@ localdev_accounts:
repo: "credentials"
}
- {
user: "{{ discovery_user|default('None') }}",
home: "{{ discovery_home|default('None') }}",
env: "discovery_env",
repo: "discovery"
}
# Helpful system packages for local dev
local_dev_pkgs:
- vim
......
......@@ -10,9 +10,11 @@
##
# Defaults for role mariadb
#
MARIADB_APT_KEY_XENIAL_ID: '0xF1656F24C74CD1D8'
MARIADB_APT_KEY_ID: '0xcbcb082a1bb943db'
# Note: version is determined by repo
MARIADB_REPO: "deb http://mirrors.syringanetworks.net/mariadb/repo/10.0/ubuntu precise main"
MARIADB_REPO: "deb http://mirrors.syringanetworks.net/mariadb/repo/10.0/ubuntu {{ ansible_distribution_release }} main"
MARIADB_CREATE_DBS: yes
MARIADB_CLUSTERED: no
......@@ -29,69 +31,107 @@ MARIADB_HAPROXY_HOSTS:
MARIADB_LISTEN_ALL: false
MARIADB_DATABASES:
- "{{ EDXAPP_MYSQL_DB_NAME|default('edxapp') }}"
- "{{ XQUEUE_MYSQL_DB_NAME|default('xqueue') }}"
MARIADB_ANALYTICS_DATABASES:
- "{{ ANALYTICS_API_CONFIG['DATABASES']['default']['NAME']|default('analytics-api') }}"
- "{{ ANALYTICS_API_CONFIG['DATABASES']['reports']['NAME']|default('reports') }}"
- {
db: "{{ ECOMMERCE_DEFAULT_DB_NAME | default(None) }}",
encoding: "utf8"
}
- {
db: "{{ INSIGHTS_DATABASE_NAME | default(None) }}",
encoding: "utf8"
}
- {
db: "{{ XQUEUE_MYSQL_DB_NAME | default(None) }}",
encoding: "utf8"
}
- {
db: "{{ EDXAPP_MYSQL_DB_NAME | default(None) }}",
encoding: "utf8"
}
- {
db: "{{ EDXAPP_MYSQL_CSMH_DB_NAME | default(None) }}",
encoding: "utf8"
}
- {
db: "{{ EDX_NOTES_API_MYSQL_DB_NAME | default(None) }}",
encoding: "utf8"
}
- {
db: "{{ PROGRAMS_DEFAULT_DB_NAME | default(None) }}",
encoding: "utf8"
}
- {
db: "{{ ANALYTICS_API_DEFAULT_DB_NAME | default(None) }}",
encoding: "utf8"
}
- {
db: "{{ ANALYTICS_API_REPORTS_DB_NAME | default(None) }}",
encoding: "utf8"
}
- {
db: "{{ CREDENTIALS_DEFAULT_DB_NAME | default(None) }}",
encoding: "utf8"
}
- {
db: "{{ DISCOVERY_DEFAULT_DB_NAME | default(None) }}",
encoding: "utf8"
}
- {
db: "{{ HIVE_METASTORE_DATABASE_NAME | default(None) }}",
encoding: "latin1"
}
MARIADB_USERS:
- name: "{{ EDXAPP_MYSQL_USER|default('edxapp001') }}"
pass: "{{ EDXAPP_MYSQL_PASSWORD|default('password') }}"
priv: "{{ EDXAPP_MYSQL_DB_NAME|default('edxapp') }}.*:ALL"
host: "{{ MARIADB_HOST_PRIV }}"
- name: "{{ XQUEUE_MYSQL_USER|default('xqueue001') }}"
pass: "{{ XQUEUE_MYSQL_PASSWORD|default('password') }}"
priv: "{{ XQUEUE_MYSQL_DB_NAME|default('xqueue') }}.*:ALL"
host: "{{ MARIADB_HOST_PRIV }}"
- name: "{{ COMMON_MYSQL_MIGRATE_USER|default('migrate') }}"
pass: "{{ COMMON_MYSQL_MIGRATE_PASSWORD|default('password') }}"
priv: "{{ EDXAPP_MYSQL_DB_NAME|default('edxapp') }}.*:ALL"
host: "{{ MARIADB_HOST_PRIV }}"
- name: "{{ COMMON_MYSQL_MIGRATE_USER|default('migrate') }}"
pass: "{{ COMMON_MYSQL_MIGRATE_PASSWORD|default('password') }}"
priv: "{{ XQUEUE_MYSQL_DB_NAME|default('xqueue') }}.*:ALL"
host: "{{ MARIADB_HOST_PRIV }}"
- name: "{{ COMMON_MYSQL_READ_ONLY_USER|default('read_only') }}"
pass: "{{ COMMON_MYSQL_READ_ONLY_PASS|default('password') }}"
priv: "*.*:SELECT"
host: "{{ MARIADB_HOST_PRIV }}"
- name: "{{ COMMON_MYSQL_ADMIN_USER|default('admin') }}"
pass: "{{ COMMON_MYSQL_ADMIN_PASS|default('password') }}"
priv: "*.*:CREATE USER"
host: "{{ MARIADB_HOST_PRIV }}"
- name: "{{ EDX_NOTES_API_MYSQL_DB_USER|default('notes001') }}"
pass: "{{ EDX_NOTES_API_MYSQL_DB_PASS|default('secret') }}"
priv: "{{ EDX_NOTES_API_MYSQL_DB_NAME|default('edx-notes-api') }}.*:ALL"
host: "{{ MARIADB_HOST_PRIV }}"
MARIADB_ANALYTICS_USERS:
- name: "{{ ANALYTICS_API_CONFIG['DATABASES']['default']['USER']|default('api001') }}"
pass: "{{ ANALYTICS_API_CONFIG['DATABASES']['default']['PASSWORD']|default('password') }}"
priv: "{{ ANALYTICS_API_CONFIG['DATABASES']['default']['NAME'] }}.*:ALL/reports.*:SELECT"
host: "{{ MARIADB_HOST_PRIV }}"
- name: "{{ ANALYTICS_API_CONFIG['DATABASES']['reports']['USER']|default('reports001') }}"
pass: "{{ ANALYTICS_API_CONFIG['DATABASES']['reports']['PASSWORD']|default('password') }}"
priv: "{{ ANALYTICS_API_CONFIG['DATABASES']['reports']['NAME'] }}.*:SELECT"
host: "{{ MARIADB_HOST_PRIV }}"
- name: "{{ COMMON_MYSQL_MIGRATE_USER|default('migrate') }}"
pass: "{{ COMMON_MYSQL_MIGRATE_PASSWORD|default('password') }}"
priv: "{{ ANALYTICS_API_CONFIG['DATABASES']['default']['NAME']|default('analytics-api') }}.*:ALL"
host: "{{ MARIADB_HOST_PRIV }}"
- name: "{{ COMMON_MYSQL_MIGRATE_USER|default('migrate') }}"
pass: "{{ COMMON_MYSQL_MIGRATE_PASSWORD|default('password') }}"
priv: "{{ ANALYTICS_API_CONFIG['DATABASES']['reports']['NAME']|default('reports') }}.*:ALL"
host: "{{ MARIADB_HOST_PRIV }}"
- {
db: "{{ ECOMMERCE_DEFAULT_DB_NAME | default(None) }}",
user: "{{ ECOMMERCE_DATABASE_USER | default(None) }}",
pass: "{{ ECOMMERCE_DATABASE_PASSWORD | default(None) }}"
}
- {
db: "{{ INSIGHTS_DATABASE_NAME | default(None) }}",
user: "{{ INSIGHTS_MYSQL_USER | default(None) }}",
pass: "{{ INSIGHTS_MYSQL_USER | default(None) }}"
}
- {
db: "{{ XQUEUE_MYSQL_DB_NAME | default(None) }}",
user: "{{ XQUEUE_MYSQL_USER | default(None) }}",
pass: "{{ XQUEUE_MYSQL_PASSWORD | default(None) }}"
}
- {
db: "{{ EDXAPP_MYSQL_DB_NAME | default(None) }}",
user: "{{ EDXAPP_MYSQL_USER | default(None) }}",
pass: "{{ EDXAPP_MYSQL_PASSWORD | default(None) }}"
}
- {
db: "{{ EDXAPP_MYSQL_CSMH_DB_NAME | default(None) }}",
user: "{{ EDXAPP_MYSQL_CSMH_USER | default(None) }}",
pass: "{{ EDXAPP_MYSQL_CSMH_PASSWORD | default(None) }}"
}
- {
db: "{{ PROGRAMS_DEFAULT_DB_NAME | default(None) }}",
user: "{{ PROGRAMS_DATABASE_USER | default(None) }}",
pass: "{{ PROGRAMS_DATABASE_PASSWORD | default(None) }}"
}
- {
db: "{{ ANALYTICS_PIPELINE_OUTPUT_DATABASE_NAME | default(None) }}",
user: "{{ ANALYTICS_PIPELINE_OUTPUT_DATABASE_USER | default(None) }}",
pass: "{{ ANALYTICS_PIPELINE_OUTPUT_DATABASE_PASSWORD | default(None) }}"
}
- {
db: "{{ HIVE_METASTORE_DATABASE_NAME | default(None) }}",
user: "{{ HIVE_METASTORE_DATABASE_USER | default(None) }}",
pass: "{{ HIVE_METASTORE_DATABASE_PASSWORD | default(None) }}"
}
- {
db: "{{ CREDENTIALS_DEFAULT_DB_NAME | default(None) }}",
user: "{{ CREDENTIALS_MYSQL_USER | default(None) }}",
pass: "{{ CREDENTIALS_MYSQL_PASSWORD | default(None) }}"
}
- {
db: "{{ DISCOVERY_DEFAULT_DB_NAME | default(None) }}",
user: "{{ DISCOVERY_MYSQL_USER | default(None) }}",
pass: "{{ DISCOVERY_MYSQL_PASSWORD | default(None) }}"
}
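# Additional databases follow the same {db, user, pass} shape; keys left as
# None are skipped by the create tasks below. For example, a deployer adding a
# database for a hypothetical IDA (a sketch; the JOURNALS_* vars are
# illustrative only) would append:
#
#  - {
#      db: "{{ JOURNALS_DEFAULT_DB_NAME | default(None) }}",
#      user: "{{ JOURNALS_MYSQL_USER | default(None) }}",
#      pass: "{{ JOURNALS_MYSQL_PASSWORD | default(None) }}"
#    }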
#
# OS packages
......
......@@ -17,17 +17,17 @@
- name: setup bootstrap on primary
lineinfile:
dest: "/etc/mysql/conf.d/galera.cnf"
regexp: "^wsrep_cluster_address=gcomm://{{ hostvars.keys()|sort|join(',') }}$"
regexp: "^wsrep_cluster_address=gcomm://{{ groups[group_names[0]]|sort|join(',') }}$"
line: "wsrep_cluster_address=gcomm://"
when: ansible_hostname == hostvars[hostvars.keys()[0]].ansible_hostname and not mariadb_bootstrap.stat.exists
when: inventory_hostname == hostvars[groups[group_names[0]][0]].inventory_hostname and not mariadb_bootstrap.stat.exists
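# (groups[group_names[0]][0] resolves to the first host of the group this play
# targets; unlike hostvars.keys(), inventory group order is stable across runs,
# so the same node is always chosen for bootstrap.)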
- name: fetch debian.cnf file so start-stop will work properly
fetch:
src: /etc/mysql/debian.cnf
dest: /tmp/debian.cnf
src: "/etc/mysql/debian.cnf"
dest: "/tmp/debian.cnf"
fail_on_missing: yes
flat: yes
when: ansible_hostname == hostvars[hostvars.keys()[0]].ansible_hostname and not mariadb_bootstrap.stat.exists
when: inventory_hostname == hostvars[groups[group_names[0]][0]].inventory_hostname and not mariadb_bootstrap.stat.exists
register: mariadb_new_debian_cnf
- name: copy fetched file to other cluster members
......@@ -53,5 +53,10 @@
# This is needed for mysql-check in haproxy or other mysql monitor
# scripts to prevent haproxy checks exceeding `max_connect_errors`.
- name: create haproxy monitor user
command: "mysql -e \"INSERT INTO mysql.user (Host,User) values ('{{ item }}','{{ MARIADB_HAPROXY_USER }}'); FLUSH PRIVILEGES;\""
with_items: "{{ MARIADB_HAPROXY_HOSTS }}"
mysql_user:
name: "{{ MARIADB_HAPROXY_USER }}"
host: "{{ item }}"
password: ""
priv: "*.*:USAGE,RELOAD"
state: present
with_items: "{{ MARIADB_HAPROXY_HOSTS }}"
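# For reference, the matching check on the haproxy side would look something
# like this (a sketch, not managed by this role; the backend line is
# illustrative):
#
#   option mysql-check user {{ MARIADB_HAPROXY_USER }}
#   server mariadb1 192.168.1.10:3306 check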
......@@ -28,7 +28,13 @@
- name: Add mariadb apt key
apt_key:
url: "{{ COMMON_UBUNTU_APT_KEYSERVER }}{{ MARIADB_APT_KEY_ID }}"
when: ansible_distribution_release != 'xenial'
- name: Add Xenial mariadb apt key
apt_key:
url: "{{ COMMON_UBUNTU_APT_KEYSERVER }}{{ MARIADB_APT_KEY_XENIAL_ID }}"
when: ansible_distribution_release == 'xenial'
- name: add the mariadb repo to the sources list
apt_repository:
repo: "{{ MARIADB_REPO }}"
......@@ -57,38 +63,69 @@
- name: start everything
service: name=mysql state=started
- name: create all databases
- name: create databases
mysql_db:
db: "{{ item }}"
db: "{{ item.db }}"
state: present
encoding: utf8
encoding: "{{ item.encoding }}"
when: item != None and item != '' and MARIADB_CREATE_DBS|bool
with_items: "{{ MARIADB_DATABASES }}"
when: MARIADB_CREATE_DBS|bool
- name: create all analytics dbs
mysql_db:
db: "{{ item }}"
state: present
encoding: utf8
with_items: "{{ MARIADB_ANALYTICS_DATABASES }}"
when: MARIADB_CREATE_DBS|bool and ANALYTICS_API_CONFIG is defined
- name: create all users/privs
- name: create database users
mysql_user:
name: "{{ item.name }}"
name: "{{ item.user }}"
password: "{{ item.pass }}"
priv: "{{ item.priv }}"
host: "{{ item.host }}"
priv: "{{ item.db }}.*:SELECT,INSERT,UPDATE,DELETE"
host: "{{ MARIADB_HOST_PRIV }}"
append_privs: yes
when: item.db != None and item.db != ''
with_items: "{{ MARIADB_USERS }}"
when: MARIADB_CREATE_DBS|bool
- name: create all analytics users/privs
- name: setup the migration db user
mysql_user:
name: "{{ item.name }}"
password: "{{ item.pass }}"
priv: "{{ item.priv }}"
host: "{{ item.host }}"
name: "{{ COMMON_MYSQL_MIGRATE_USER }}"
password: "{{ COMMON_MYSQL_MIGRATE_PASS }}"
priv: "{{ item.db }}.*:ALL"
host: "{{ MARIADB_HOST_PRIV }}"
append_privs: yes
with_items: "{{ MARIADB_ANALYTICS_USERS }}"
when: MARIADB_CREATE_DBS|bool and ANALYTICS_API_CONFIG is defined
when: item.db != None and item.db != ''
with_items: "{{ MARIADB_DATABASES }}"
- name: create api user for the analytics api
mysql_user:
name: "api001"
password: "{{ ANALYTICS_API_DATABASES.default.PASSWORD }}"
priv: '{{ ANALYTICS_API_DATABASES.default.NAME }}.*:SELECT,INSERT,UPDATE,DELETE/reports.*:SELECT'
host: "{{ MARIADB_HOST_PRIV }}"
when: ANALYTICS_API_SERVICE_CONFIG is defined
- name: create read-only reports user for the analytics-api
mysql_user:
name: reports001
password: "{{ ANALYTICS_API_DATABASES.reports.PASSWORD }}"
priv: '{{ ANALYTICS_API_DATABASES.reports.NAME }}.*:SELECT'
host: "{{ MARIADB_HOST_PRIV }}"
when: ANALYTICS_API_SERVICE_CONFIG is defined
- name: setup the edx-notes-api db user
mysql_user:
name: "{{ EDX_NOTES_API_MYSQL_DB_USER }}"
password: "{{ EDX_NOTES_API_MYSQL_DB_PASS }}"
priv: "{{ EDX_NOTES_API_MYSQL_DB_NAME }}.*:SELECT,INSERT,UPDATE,DELETE"
host: "{{ MARIADB_HOST_PRIV }}"
when: EDX_NOTES_API_MYSQL_DB_USER is defined
- name: setup the read-only db user
mysql_user:
name: "{{ COMMON_MYSQL_READ_ONLY_USER }}"
password: "{{ COMMON_MYSQL_READ_ONLY_PASS }}"
priv: "*.*:SELECT"
host: "{{ MARIADB_HOST_PRIV }}"
- name: setup the admin db user
mysql_user:
name: "{{ COMMON_MYSQL_ADMIN_USER }}"
password: "{{ COMMON_MYSQL_ADMIN_PASS }}"
priv: "*.*:CREATE USER"
host: "{{ MARIADB_HOST_PRIV }}"
......@@ -14,6 +14,10 @@ mongo_log_dir: "{{ COMMON_LOG_DIR }}/mongo"
mongo_journal_dir: "{{ COMMON_DATA_DIR }}/mongo/mongodb/journal"
mongo_user: mongodb
# The MONGODB_REPO variable should use {{ ansible_distribution_release }}
# instead of hard coding a release name. Since we are already accidentally
# running precise binaries on trusty, we are going to leave this alone for
# mongo 3.0 and remedy it when we upgrade to mongo 3.2
MONGODB_REPO: "deb http://repo.mongodb.org/apt/ubuntu precise/mongodb-org/3.0 multiverse"
MONGODB_APT_KEY: "7F0CEB10"
MONGODB_APT_KEYSERVER: "keyserver.ubuntu.com"
......
// Add super user
conn = new Mongo();
db = conn.getDB("admin");
db.createUser(
{
"user": "{{ MONGO_ADMIN_USER }}",
"pwd": "{{ MONGO_ADMIN_PASSWORD }}",
"roles": ["root"]
}
);
mongo_logappend: true
#This way, when mongod receives a SIGUSR1, it'll close and reopen its log file handle
mongo_logrotate: reopen
mongo_version: 3.2.16
mongo_port: "27017"
mongo_extra_conf: ''
mongo_key_file: '/etc/mongodb_key'
pymongo_version: 3.2.2
mongo_data_dir: "{{ COMMON_DATA_DIR }}/mongo"
mongo_log_dir: "{{ COMMON_LOG_DIR }}/mongo"
mongo_journal_dir: "{{ COMMON_DATA_DIR }}/mongo/mongodb/journal"
mongo_user: mongodb
MONGODB_REPO: "deb http://repo.mongodb.org/apt/ubuntu {{ ansible_distribution_release }}/mongodb-org/3.2 multiverse"
MONGODB_APT_KEY: "7F0CEB10"
MONGODB_APT_KEYSERVER: "keyserver.ubuntu.com"
mongodb_debian_pkgs:
- "mongodb-org={{ mongo_version }}"
- "mongodb-org-server={{ mongo_version }}"
- "mongodb-org-shell={{ mongo_version }}"
- "mongodb-org-mongos={{ mongo_version }}"
- "mongodb-org-tools={{ mongo_version }}"
# Vars Meant to be overridden
MONGO_ADMIN_USER: 'admin'
MONGO_ADMIN_PASSWORD: 'password'
MONGO_USERS:
- user: cs_comments_service
password: password
database: cs_comments_service
roles: readWrite
- user: edxapp
password: password
database: edxapp
roles: readWrite
# This default setting is appropriate for a single machine installation
# This will need to be overridden for setups where mongo is on its own server
# and/or you are configuring mongo replication. If the override value is
# 0.0.0.0 mongo will listen on all IPs. The value may also be set to a
# specific IP.
MONGO_BIND_IP: 127.0.0.1
MONGO_REPL_SET: "rs0"
MONGO_AUTH: true
MONGO_CLUSTER_KEY: "CHANGEME"
# Cluster member configuration
# Fed directly into mongodb_replica_set module
MONGO_RS_CONFIG:
_id: '{{ MONGO_REPL_SET }}'
members:
- host: '127.0.0.1'
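# For a real cluster this is overridden with one entry per member, e.g.
# (a sketch; the hosts are illustrative):
# MONGO_RS_CONFIG:
#   _id: '{{ MONGO_REPL_SET }}'
#   members:
#     - host: '10.0.0.10:27017'
#     - host: '10.0.0.11:27017'
#     - host: '10.0.0.12:27017'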
# Storage engine options in 3.2: "mmapv1" or "wiredTiger"
# 3.2 and 3.4 default to wiredTiger
MONGO_STORAGE_ENGINE: "wiredTiger"
# List of dictionaries as described in the mount_ebs role's default
# for the volumes.
# Useful if you want to store your mongo data and/or journal on separate
# disks from the root volume. By default, they will end up mongo_data_dir
# on the root disk.
MONGO_VOLUMES: []
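# e.g. (a sketch, assuming the mount_ebs volume format; the device name is
# illustrative):
# MONGO_VOLUMES:
#   - device: '/dev/xvdb'
#     mount: '{{ COMMON_DATA_DIR }}/mongo'
#     options: 'defaults,noatime'
#     fstype: 'ext4'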
# WiredTiger takes a number of optional configuration settings
# which can be defined as a yaml structure in your secure configuration.
MONGO_STORAGE_ENGINE_OPTIONS: !!null
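# e.g. (a sketch) to pin the WiredTiger cache instead of letting mongod size
# it from available RAM; this is rendered under the storage: stanza of
# mongod.conf by the template below:
# MONGO_STORAGE_ENGINE_OPTIONS:
#   wiredTiger:
#     engineConfig:
#       cacheSizeGB: 4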
mongo_logpath: "{{ mongo_log_dir }}/mongodb.log"
mongo_dbpath: "{{ mongo_data_dir }}/mongodb"
# In environments that do not require durability (devstack / Jenkins)
# you can disable the journal to reduce disk usage
mongo_enable_journal: true
MONGO_LOG_SERVERSTATUS: true
[Unit]
Description="Disable Transparent Hugepage before MongoDB boots"
Before=mongod.service
[Service]
Type=oneshot
ExecStart=/bin/bash -c 'echo never > /sys/kernel/mm/transparent_hugepage/enabled'
ExecStart=/bin/bash -c 'echo never > /sys/kernel/mm/transparent_hugepage/defrag'
[Install]
RequiredBy=mongod.service
---
dependencies:
- common
- role: mount_ebs
volumes: "{{ MONGO_VOLUMES }}"
---
- name: Add disable transparent huge pages systemd service (http://docs.mongodb.org/manual/tutorial/transparent-huge-pages/)
copy:
src: etc/systemd/system/disable-transparent-hugepages.service
dest: "/etc/systemd/system/disable-transparent-hugepages.service"
owner: root
group: root
mode: 0644
tags:
- "hugepages"
- "install"
- "install:configuration"
- name: Enable/start disable transparent huge pages service (http://docs.mongodb.org/manual/tutorial/transparent-huge-pages/)
service:
name: disable-transparent-hugepages
enabled: yes
state: started
tags:
- "hugepages"
- "manage"
- "manage:start"
- name: install python pymongo for mongo_user ansible module
pip:
name: pymongo
state: present
version: "{{ pymongo_version }}"
extra_args: "-i {{ COMMON_PYPI_MIRROR_URL }}"
tags:
- "install"
- "install:app-requirements"
- name: add the mongodb signing key
apt_key:
id: "{{ MONGODB_APT_KEY }}"
keyserver: "{{ MONGODB_APT_KEYSERVER }}"
state: present
tags:
- "install"
- "install:app-requirements"
- name: add the mongodb repo to the sources list
apt_repository:
repo: "{{ MONGODB_REPO }}"
state: present
tags:
- "install"
- "install:app-requirements"
- "mongo_packages"
- name: install mongo server and recommends
apt:
pkg: "{{ item }}"
state: present
install_recommends: yes
force: yes
update_cache: yes
with_items: "{{ mongodb_debian_pkgs }}"
tags:
- "install"
- "install:app-requirements"
- "mongo_packages"
- name: create mongo dirs
file:
path: "{{ item }}"
state: directory
owner: "{{ mongo_user }}"
group: "{{ mongo_user }}"
with_items:
- "{{ mongo_data_dir }}"
- "{{ mongo_dbpath }}"
- "{{ mongo_log_dir }}"
- "{{ mongo_journal_dir }}"
tags:
- "install"
- "install:app-configuration"
- name: add serverStatus logging script
template:
src: "log-mongo-serverStatus.sh.j2"
dest: "{{ COMMON_BIN_DIR }}/log-mongo-serverStatus.sh"
owner: "{{ mongo_user }}"
group: "{{ mongo_user }}"
mode: 0700
when: MONGO_LOG_SERVERSTATUS
tags:
- "install"
- "install:app-configuration"
- name: add serverStatus logging script to cron
cron:
name: mongostat logging job
minute: "*/3"
job: /edx/bin/log-mongo-serverStatus.sh >> {{ mongo_log_dir }}/serverStatus.log 2>&1
become: yes
when: MONGO_LOG_SERVERSTATUS
tags:
- "install"
- "install:app-configuration"
# This will error when run on a new replica set, so we ignore_errors
# and connect anonymously next.
- name: determine if there is a replica set already
mongodb_rs_status:
host: "{{ ansible_lo['ipv4']['address'] }}"
username: "{{ MONGO_ADMIN_USER }}"
password: "{{ MONGO_ADMIN_PASSWORD }}"
run_once: true
register: authed_replica_set_already_configured
ignore_errors: true
tags:
- "manage"
- "manage:db-replication"
- name: Try checking the replica set with no user/pass in case this is a new box
mongodb_rs_status:
host: "{{ ansible_lo['ipv4']['address'] }}"
run_once: true
register: unauthed_replica_set_already_configured
when: authed_replica_set_already_configured.failed is defined
ignore_errors: true
tags:
- "manage"
- "manage:db-replication"
# We use these in the templates but also to control a whole bunch of logic
- name: set facts that default to not initializing a replica set
set_fact:
initialize_replica_set: false
skip_replica_set: false
tags:
- "install"
- "install:app-configuration"
- "update_mongod_conf"
- "manage"
- "manage:db-replication"
# If either auth or unauthed access comes back with a replica set, we
# do not want to initialize one. Since initialization requires a bunch
# of extra templating and restarting, it's not something we want to do on
# existing boxes.
- name: flag that a replica set needs to be initialized
set_fact:
initialize_replica_set: true
skip_replica_set: true
when: authed_replica_set_already_configured.status is not defined
and unauthed_replica_set_already_configured.status is not defined
tags:
- "manage"
- "manage:db-replication"
- name: warn about unconfigured replica sets
debug: msg="You do not appear to have a Replica Set configured, deploying one for you"
when: initialize_replica_set
tags:
- "manage"
- "manage:db-replication"
- name: copy mongodb key file
copy:
content: "{{ MONGO_CLUSTER_KEY }}"
dest: "{{ mongo_key_file }}"
mode: 0600
owner: mongodb
group: mongodb
register: update_mongod_key
tags:
- "manage"
- "manage:db-replication"
- "mongodb_key"
# If skip_replica_set is true, this template will not contain a replica set stanza
# because of the fact above.
- name: copy configuration template
template:
src: mongod.conf.j2
dest: /etc/mongod.conf
backup: yes
register: update_mongod_conf
tags:
- "install"
- "install:app-configuration"
- "manage"
- "manage:db-replication"
- "update_mongod_conf"
- name: install logrotate configuration
template:
src: mongo_logrotate.j2
dest: /etc/logrotate.d/hourly/mongo
tags:
- "install"
- "install:app-configuration"
- "logrotate"
- name: restart mongo service if we changed our configuration
service:
name: mongod
state: restarted
when: update_mongod_conf.changed or update_mongod_key.changed
tags:
- "manage"
- "manage:start"
- "manage:db-replication"
- name: wait for mongo server to start
wait_for:
port: 27017
delay: 2
tags:
- "manage"
- "manage:start"
- "manage:db-replication"
# We only try passwordless superuser creation when
# we're initializing the replica set and need to use
# the localhost exemption to create a user who will be
# able to initialize the replica set.
# We can only create the users on one machine, the one
# where we will initialize the replica set. If we
# create users on multiple hosts, then they will fail
# to come into the replica set.
- name: create super user
mongodb_user:
name: "{{ MONGO_ADMIN_USER }}"
password: "{{ MONGO_ADMIN_PASSWORD }}"
database: admin
roles: root
when: initialize_replica_set
run_once: true
tags:
- "manage"
- "manage:db-replication"
# Now that the localhost exemption has been used to create the superuser, we need
# to add replica set to our configuration. This will never happen if we detected
# a replica set in the 'determine if there is a replica set already' task.
- name: Unset our skip initializing replica set fact so that mongod.conf gets a replica set
set_fact:
skip_replica_set: false
when: initialize_replica_set
tags:
- "manage"
- "manage:db-replication"
- name: re-copy configuration template with replica set enabled
template:
src: mongod.conf.j2
dest: /etc/mongod.conf
backup: yes
when: initialize_replica_set
tags:
- "manage"
- "manage:db-replication"
- name: restart mongo service
service:
name: mongod
state: restarted
when: initialize_replica_set
tags:
- "manage"
- "manage:db-replication"
- name: wait for mongo server to start
wait_for:
port: 27017
delay: 2
when: initialize_replica_set
tags:
- "manage"
- "manage:db-replication"
- name: configure replica set
mongodb_replica_set:
username: "{{ MONGO_ADMIN_USER }}"
password: "{{ MONGO_ADMIN_PASSWORD }}"
rs_config: "{{ MONGO_RS_CONFIG }}"
run_once: true
register: replset_status
tags:
- "manage"
- "manage:db"
- "manage:db-replication"
- "manage:db-replication-configuration"
# During initial replica set configuration, it can take a few seconds to elect
# a primary and for all members to reflect that status. During that window,
# user creation or other writes can fail. The best wait/check seems to be to
# repeatedly check the replica set status until we see a PRIMARY in the results.
- name: Wait for the replica set to update and (if needed) elect a primary
mongodb_rs_status:
host: "{{ ansible_lo['ipv4']['address'] }}"
username: "{{ MONGO_ADMIN_USER }}"
password: "{{ MONGO_ADMIN_PASSWORD }}"
register: status
until: status.status is defined and 'PRIMARY' in status.status.members|map(attribute='stateStr')|list
retries: 5
delay: 2
run_once: true
tags:
- "manage"
- "manage:db"
- "manage:db-replication"
- name: create mongodb users in a replica set
mongodb_user:
database: "{{ item.database }}"
login_database: 'admin'
login_user: "{{ MONGO_ADMIN_USER }}"
login_password: "{{ MONGO_ADMIN_PASSWORD }}"
name: "{{ item.user }}"
password: "{{ item.password }}"
roles: "{{ item.roles }}"
state: present
replica_set: "{{ MONGO_REPL_SET }}"
with_items: "{{ MONGO_USERS }}"
run_once: true
tags:
- "manage"
- "manage:db"
- "manage:db-users"
- "manage:db-replication"
- name: ensure mongo starts at boot time
service:
name: mongod
enabled: yes
tags:
- "manage"
- "manage:start"
#!/usr/bin/env bash
# Using JSON.stringify forces output of normal JSON, as opposed to Mongo's weird non-compliant extended JSON
/usr/bin/mongo -u {{ MONGO_ADMIN_USER }} --authenticationDatabase admin -p '{{ MONGO_ADMIN_PASSWORD }}' --quiet <<< 'JSON.stringify(db.serverStatus())'
{{ mongo_log_dir }}/serverStatus.log {
create
compress
copytruncate
delaycompress
dateext
dateformat -%Y%m%d-%s
missingok
notifempty
daily
rotate 90
size 1M
}
{{ mongo_log_dir }}/mongodb.log {
create
compress
copytruncate
delaycompress
dateext
dateformat -%Y%m%d-%s
missingok
notifempty
daily
rotate 90
size 1M
postrotate
/usr/bin/killall -USR1 mongod
endscript
}
# {{ ansible_managed }}
# mongodb.conf
storage:
# Where to store the data.
dbPath: {{ mongo_dbpath }}
# Storage Engine
engine: {{ MONGO_STORAGE_ENGINE }}
# Enable journaling, http://www.mongodb.org/display/DOCS/Journaling
journal:
{% if mongo_enable_journal %}
enabled: true
{% else %}
enabled: false
{% endif %}
{% if MONGO_STORAGE_ENGINE_OPTIONS %}
{{ MONGO_STORAGE_ENGINE_OPTIONS | to_nice_yaml }}
{% endif %}
systemLog:
#where to log
destination: file
path: "{{ mongo_logpath }}"
{% if mongo_logappend %}
logAppend: true
{% else %}
logAppend: false
{% endif %}
logRotate: {{ mongo_logrotate }}
{% if not skip_replica_set %}
replication:
replSetName: {{ MONGO_REPL_SET }}
security:
authorization: {{ MONGO_AUTH | ternary("enabled", "disabled") }}
keyFile: {{ mongo_key_file }}
{% endif %}
net:
bindIp: {{ MONGO_BIND_IP }}
port: {{ mongo_port }}
{{ mongo_extra_conf }}
---
MONGO_CLIENT_MONGODB_APT_KEY: "7F0CEB10"
MONGO_CLIENT_MONGODB_APT_KEYSERVER: "keyserver.ubuntu.com"
MONGO_CLIENT_MONGODB_REPO: "deb http://repo.mongodb.org/apt/ubuntu xenial/mongodb-org/3.2 multiverse"
mongo_client_version: 3.2.12
mongo_client_debian_pkgs:
- "mongodb-org-shell={{ mongo_client_version }}"
- "mongodb-org-tools={{ mongo_client_version }}"
---
- name: add the mongodb signing key
apt_key:
id: "{{ MONGO_CLIENT_MONGODB_APT_KEY }}"
keyserver: "{{ MONGO_CLIENT_MONGODB_APT_KEYSERVER }}"
state: present
tags:
- install
- install:system-requirements
- name: add the mongodb repo to the sources list
apt_repository:
repo: "{{ MONGO_CLIENT_MONGODB_REPO }}"
state: present
tags:
- install
- install:system-requirements
- name: install mongo shell
apt:
pkg: "{{ item }}"
state: present
install_recommends: yes
force: yes
update_cache: yes
with_items: "{{ mongo_client_debian_pkgs }}"
tags:
- install
- install:system-requirements
......@@ -2,13 +2,14 @@
base_url: "https://cloud.mongodb.com/download/agent"
pkg_arch: "amd64"
pkg_format: "deb"
os_version: "ubuntu1604"
agents:
- agent: mongodb-mms-monitoring-agent
version: "5.7.0.368-1"
version: "6.0.0.381-1"
config: "/etc/mongodb-mms/monitoring-agent.config"
dir: "monitoring"
- agent: mongodb-mms-backup-agent
version: "5.4.0.493-1"
version: "5.8.0.655-1"
config: "/etc/mongodb-mms/backup-agent.config"
dir: "backup"
......@@ -10,18 +10,33 @@
msg: "MMSAPIKEY is required"
when: MMSAPIKEY is not defined
- name: download mongo mms agent
- name: download trusty mongo mms agent
get_url:
url: "{{ base_url }}/{{ item.dir }}/{{ item.agent }}_{{ item.version }}_{{ pkg_arch }}.{{ pkg_format }}"
dest: "/tmp/{{ item.agent }}-{{ item.version }}.{{ pkg_format }}"
register: download_mms_deb
with_items: "{{ agents }}"
when: ansible_distribution_release == 'trusty'
- name: download xenial mongo mms agent
get_url:
url: "{{ base_url }}/{{ item.dir }}/{{ item.agent }}_{{ item.version }}_{{ pkg_arch }}.{{ os_version }}.{{ pkg_format }}"
dest: "/tmp/{{ item.agent }}-{{ item.version }}.{{ pkg_format }}"
register: download_mms_deb
with_items: "{{ agents }}"
when: ansible_distribution_release == 'xenial'
- name: install mongo mms agent
apt:
deb: "/tmp/{{ item.agent }}-{{ item.version }}.deb"
when: download_mms_deb.changed
notify: restart mms
with_items: "{{ agents }}"
- name: add group ID to monitoring-agent.config
lineinfile:
dest: "{{ item.config }}"
regexp: "^mmsGroupId="
line: "mmsGroupId={{ MMSGROUPID }}"
with_items: "{{ agents }}"
- name: add key to monitoring-agent.config
......@@ -29,10 +44,9 @@
dest: "{{ item.config }}"
regexp: "^mmsApiKey="
line: "mmsApiKey={{ MMSAPIKEY }}"
notify: restart mms
with_items: "{{ agents }}"
- name: start mms service
- name: start mms service
service:
name: "{{ item.agent }}"
state: started
......
......@@ -13,3 +13,7 @@
volumes: []
UNMOUNT_DISKS: false
# WARNING! FORCE_REFORMAT_DISKS will cause your volumes to always be reformatted
# even if all the volume's attributes already match what you've defined in volumes[]
# Enable this flag at your own risk with an abundance of caution
FORCE_REFORMAT_DISKS: false
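# Both guard flags are meant to be passed explicitly at runtime rather than
# committed to config, e.g. (a sketch; playbook and inventory names are
# illustrative):
#   ansible-playbook -i inventory.ini mongo_3_2.yml -e 'UNMOUNT_DISKS=true' -e 'FORCE_REFORMAT_DISKS=true'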
......@@ -23,9 +23,19 @@
src: "{{ (ansible_mounts | selectattr('device', 'equalto', item.device) | first | default({'device': None})).device }}"
fstype: "{{ (ansible_mounts | selectattr('device', 'equalto', item.device) | first | default({'fstype': None})).fstype }}"
state: unmounted
when: "{{ UNMOUNT_DISKS and (ansible_mounts | selectattr('device', 'equalto', item.device) | first | default({'fstype': None})).fstype != item.fstype }}"
when: "UNMOUNT_DISKS and (ansible_mounts | selectattr('device', 'equalto', item.device) | first | default({'fstype': None})).fstype != item.fstype"
with_items: "{{ volumes }}"
# If there are disks we want to be unmounting, but we can't because UNMOUNT_DISKS is false
# that is an errorable condition, since it will cause the format step to fail
- name: Check that we don't want to unmount disks to change fstype when UNMOUNT_DISKS is false
fail: msg="Found disks mounted with the wrong filesystem type, but can't unmount them. This role will need to be re-run with -e 'UNMOUNT_DISKS=True' if you believe that is safe."
when:
"not UNMOUNT_DISKS and
volumes | selectattr('device', 'equalto', item.device) | list | length != 0 and
(volumes | selectattr('device', 'equalto', item.device) | first).fstype != item.fstype"
with_items: "{{ ansible_mounts }}"
# Noop & reports "ok" if fstype is correct
# Errors if fstype is wrong and disk is mounted (hence above task)
- name: Create filesystem
......@@ -33,7 +43,7 @@
dev: "{{ item.device }}"
fstype: "{{ item.fstype }}"
# Necessary because AWS gives some ephemeral disks the wrong fstype by default
force: true
force: "{{ FORCE_REFORMAT_DISKS }}"
with_items: "{{ volumes }}"
# This can fail if one volume is mounted on a child directory as another volume
......@@ -57,7 +67,7 @@
# If there are disks we want to be unmounting, but we can't because UNMOUNT_DISKS is false
# that is an errorable condition, since it can easily allow us to double mount a disk.
- name: Check that we don't want to unmount disks when UNMOUNT_DISKS is false
- name: Check that we don't want to unmount disks to change mountpoint when UNMOUNT_DISKS is false
fail: msg="Found disks mounted in the wrong place, but can't unmount them. This role will need to be re-run with -e 'UNMOUNT_DISKS=True' if you believe that is safe."
when:
not UNMOUNT_DISKS and
......
......@@ -33,22 +33,6 @@
# Thought that instead of performing all those steps to get the repo, why not directly use this repo line:
# `deb http://repo.mysql.com/apt/ubuntu/ precise mysql-5.6`. I just picked this line, used it directly, and it worked for us.
- name: Add MySQL community apt key
apt_key:
id: 8C718D3B5072E1F5
keyserver: "{{ COMMON_EDX_PPA_KEY_SERVER }}"
state: present
when: ansible_distribution_release == 'precise'
# Despite ondrej's ppa having 12.04 support, we would need to do shenanigans and uninstalling
# to switch back cleanly without publishing a new base devstack box. Easier to just clean this
# up with Ficus.
- name: Install MySQL from their community PPA
apt_repository:
repo: "deb http://repo.mysql.com/apt/ubuntu/ precise mysql-5.6"
update_cache: yes
when: ansible_distribution_release == 'precise'
- name: Install mysql-5.6 and dependencies
apt:
name: "{{ item }}"
......
......@@ -17,17 +17,25 @@
# vars are namespaced with the module name.
#
NEO4J_SERVER_NAME: "localhost"
NEO4J_AUTH_ENABLED: "true"
neo4j_gpg_key_url: https://debian.neo4j.org/neotechnology.gpg.key
neo4j_apt_repository: "deb http://debian.neo4j.org/repo stable/"
neo4j_version: "3.0.3"
neo4j_defaults_file: "/etc/default/neo4j"
neo4j_version: "3.2.2"
neo4j_server_config_file: "/etc/neo4j/neo4j.conf"
neo4j_wrapper_config_file: "/etc/neo4j/neo4j-wrapper.conf"
neo4j_https_port: 7473 # default in package is 7473
neo4j_http_port: 7474 # default in package is 7474
neo4j_listen_address: "0.0.0.0"
neo4j_heap_max_size: "3000"
neo4j_heap_max_size: "6000m"
neo4j_page_cache_size: "6000m"
neo4j_log_dir: "/var/log/neo4j"
# Properties file settings
neo4j_https_settings_key: "dbms.connector.https.address"
neo4j_http_settings_key: "dbms.connector.http.address"
neo4j_https_settings_key: "dbms.connector.https.listen_address"
neo4j_http_settings_key: "dbms.connector.http.listen_address"
# Deprecated files to delete
deprecated_neo4j_wrapper_config_file: "/etc/neo4j/neo4j-wrapper.conf"
deprecated_neo4j_https_settings_key: "dbms.connector.https.address"
deprecated_neo4j_http_settings_key: "dbms.connector.http.address"
......@@ -2,7 +2,7 @@
dependencies:
- common
- role: oraclejdk
oraclejdk_version: "8u60"
oraclejdk_base: "jdk1.8.0_60"
oraclejdk_build: "b27"
oraclejdk_version: "8u131"
oraclejdk_base: "jdk1.8.0_131"
oraclejdk_build: "b11"
oraclejdk_link: "/usr/lib/jvm/java-8-oracle"
......@@ -37,6 +37,14 @@
- install
- install:system-requirements
- name: remove deprecated config file
file:
state: absent
path: "{{ deprecated_neo4j_wrapper_config_file }}"
tags:
- install
- install:base
- name: install neo4j
apt:
name: "neo4j={{neo4j_version}}"
......@@ -45,17 +53,48 @@
- install
- install:base
- name: enable or disable authentication
lineinfile:
dest: "{{ neo4j_server_config_file }}"
regexp: "dbms.security.auth_enabled="
line: "dbms.security.auth_enabled={{ NEO4J_AUTH_ENABLED }}"
tags:
- install
- install:configuration
- name: set neo4j page cache size
lineinfile:
dest: "{{ neo4j_server_config_file }}"
regexp: "dbms.memory.pagecache.size="
line: "dbms.memory.pagecache.size={{ neo4j_page_cache_size }}"
tags:
- install
- install:configuration
- name: set neo4j heap size
lineinfile:
dest: "{{ neo4j_wrapper_config_file }}"
regexp: "dbms.memory.heap.max_size="
line: "dbms.memory.heap.max_size={{ neo4j_heap_max_size }}"
dest: "{{ neo4j_server_config_file }}"
regexp: "{{ item }}="
line: "{{ item }}={{ neo4j_heap_max_size }}"
with_items:
- "dbms.memory.heap.max_size"
- "dbms.memory.heap.initial_size"
tags:
- install
- install:configuration
- name: allow format migration (when updating neo4j versions)
lineinfile:
dest: "{{ neo4j_server_config_file }}"
regexp: "dbms.allow_format_migration="
line: "dbms.allow_format_migration=true"
tags:
- install
- install:configuration
- name: set to listen on specific port for https
lineinfile:
create: yes
dest: "{{ neo4j_server_config_file }}"
regexp: "{{ neo4j_https_settings_key }}="
line: "{{ neo4j_https_settings_key }}={{ neo4j_listen_address }}:{{ neo4j_https_port }}"
......@@ -65,6 +104,7 @@
- name: set to listen on specific port for http
lineinfile:
create: yes
dest: "{{ neo4j_server_config_file }}"
regexp: "{{ neo4j_http_settings_key }}="
line: "{{ neo4j_http_settings_key }}={{ neo4j_listen_address }}:{{ neo4j_http_port }}"
......@@ -72,9 +112,50 @@
- install
- install:configuration
- name: remove deprecated listen address lines
lineinfile:
state: absent
dest: "{{ neo4j_server_config_file }}"
regexp: "{{ item }}"
with_items:
- "{{ deprecated_neo4j_https_settings_key }}"
- "{{ deprecated_neo4j_http_settings_key }}"
tags:
- install
- install:configuration
- name: Create neo4j logging dir
file:
path: "{{ neo4j_log_dir }}"
state: directory
owner: neo4j
mode: "0755"
tags:
- install
- install:base
- name: Create neo4j default file
file:
path: "{{ neo4j_defaults_file }}"
state: touch
owner: neo4j
mode: "0755"
tags:
- install
- install:base
- name: set max open files to 40000
lineinfile:
dest: "{{ neo4j_defaults_file }}"
regexp: "#NEO4J_ULIMIT_NOFILE=40000"
line: "NEO4J_ULIMIT_NOFILE=40000"
tags:
- install
- install:base
- name: restart neo4j
service:
name: neo4j
service:
name: neo4j
state: restarted
tags:
- manage
......
#
# edX Configuration
#
# github: https://github.com/edx/configuration
# wiki: https://openedx.atlassian.net/wiki/display/OpenOPS
# code style: https://openedx.atlassian.net/wiki/display/OpenOPS/Ansible+Code+Conventions
# license: https://github.com/edx/configuration/blob/master/LICENSE.TXT
#
##
# Defaults for role newrelic_infrastructure
#
#
# vars are namespaced with the module name.
#
---
NEWRELIC_INFRASTRUCTURE_LICENSE_KEY: "SPECIFY_KEY_HERE"
NEWRELIC_INFRASTRUCTURE_DEBIAN_REPO: 'deb https://download.newrelic.com/infrastructure_agent/linux/apt {{ ansible_distribution_release }} main'
NEWRELIC_INFRASTRUCTURE_DEBIAN_KEY_URL: 'https://download.newrelic.com/infrastructure_agent/gpg/newrelic-infra.gpg'
# Any extra config you want to specify
# https://docs.newrelic.com/docs/infrastructure/new-relic-infrastructure/configuration/infrastructure-config-file-template-newrelic-infrayml
NEWRELIC_INFRASTRUCTURE_EXTRA_CONFIG: ''
---
#
# edX Configuration
#
# github: https://github.com/edx/configuration
# wiki: https://openedx.atlassian.net/wiki/display/OpenOPS
# code style: https://openedx.atlassian.net/wiki/display/OpenOPS/Ansible+Code+Conventions
# license: https://github.com/edx/configuration/blob/master/LICENSE.TXT
#
# newrelic_infrastructure
#
# Overview:
#
# Installs the New Relic Infrastructure service https://newrelic.com/infrastructure
##
# Dependencies:
#
# Example play:
# roles:
# - common
# - newrelic_infrastructure
#
#
- name: install license key
template:
src: etc/newrelic-infra.j2
dest: /etc/newrelic-infra.yml
mode: 0600
register: license_key_file
tags:
- install
- install:configuration
- name: Add apt key for New Relic Infrastructure
apt_key:
url: "{{ NEWRELIC_INFRASTRUCTURE_DEBIAN_KEY_URL }}"
state: present
tags:
- install
- install:app-requirements
- name: Install apt repository for New Relic Infrastructure
apt_repository:
repo: "{{ NEWRELIC_INFRASTRUCTURE_DEBIAN_REPO }}"
state: present
update_cache: yes
tags:
- install
- install:app-requirements
- name: Install New Relic Infrastructure agent
apt:
name: "newrelic-infra"
tags:
- install
- install:code
- name: Restart the infrastructure agent if the license key changes
service:
name: newrelic-infra
state: restarted
when: license_key_file.changed
tags:
- install
- install:configuration
license_key: {{ NEWRELIC_INFRASTRUCTURE_LICENSE_KEY }}
{{ NEWRELIC_INFRASTRUCTURE_EXTRA_CONFIG }}
......@@ -48,18 +48,43 @@ NGINX_LOG_FORMAT_NAME: 'p_combined'
# headers to reflect the properties of the incoming request.
NGINX_SET_X_FORWARDED_HEADERS: False
# Increasing these values allows studio to process more complex operations.
# Default timeouts limit CMS connections to 60 seconds.
NGINX_CMS_PROXY_CONNECT_TIMEOUT: !!null
NGINX_CMS_PROXY_SEND_TIMEOUT: !!null
NGINX_CMS_PROXY_READ_TIMEOUT: !!null
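# e.g. (a sketch) to give long-running Studio operations such as large course
# imports ten minutes instead of the nginx default of 60s:
# NGINX_CMS_PROXY_CONNECT_TIMEOUT: '75s'
# NGINX_CMS_PROXY_SEND_TIMEOUT: '600s'
# NGINX_CMS_PROXY_READ_TIMEOUT: '600s'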
NGINX_SERVER_ERROR_IMG: 'https://upload.wikimedia.org/wikipedia/commons/thumb/1/11/Pendleton_Sinking_Ship.jpg/640px-Pendleton_Sinking_Ship.jpg'
NGINX_SERVER_ERROR_IMG_ALT: ''
NGINX_SERVER_ERROR_LANG: 'en'
NGINX_SERVER_ERROR_STYLE_H1: 'font-family: "Helvetica Neue",Helvetica,Roboto,Arial,sans-serif; margin-bottom: .3em; font-size: 2.0em; line-height: 1.25em; text-rendering: optimizeLegibility; font-weight: bold; color: #000000;'
NGINX_SERVER_ERROR_STYLE_P_H2: 'font-family: "Helvetica Neue",Helvetica,Roboto,Arial,sans-serif; margin-bottom: .3em; line-height: 1.25em; text-rendering: optimizeLegibility; font-weight: bold; font-size: 1.8em; color: #5b5e63;'
NGINX_SERVER_ERROR_STYLE_P: 'font-family: Georgia,Cambria,"Times New Roman",Times,serif; margin: auto; margin-bottom: 1em; font-weight: 200; line-height: 1.4em; font-size: 1.1em; max-width: 80%;'
NGINX_SERVER_ERROR_STYLE_DIV: 'margin: auto; width: 800px; text-align: center; padding:20px 0px 0px 0px;'
NGINX_SERVER_HTML_FILES:
- file: rate-limit.html
lang: "{{ NGINX_SERVER_ERROR_LANG }}"
title: 'Rate limit exceeded'
msg: 'If you think you have encountered this message in error, please let us know at <a href="mailto:{{ EDXAPP_TECH_SUPPORT_EMAIL|default("technical@example.com") }}">{{ EDXAPP_TECH_SUPPORT_EMAIL|default("technical@example.com") }}</a>'
img: "{{ NGINX_SERVER_ERROR_IMG }}"
img_alt: "{{ NGINX_SERVER_ERROR_IMG_ALT }}"
heading: 'Uh oh, we are having some server issues..'
style_h1: "{{ NGINX_SERVER_ERROR_STYLE_H1 }}"
style_p_h2: "{{ NGINX_SERVER_ERROR_STYLE_P_H2 }}"
style_p: "{{ NGINX_SERVER_ERROR_STYLE_P }}"
style_div: "{{ NGINX_SERVER_ERROR_STYLE_DIV }}"
- file: server-error.html
lang: "{{ NGINX_SERVER_ERROR_LANG }}"
title: 'Server error'
msg: 'We have been notified of the error, if it persists please let us know at <a href="mailto:{{ EDXAPP_TECH_SUPPORT_EMAIL|default("technical@example.com") }}">{{ EDXAPP_TECH_SUPPORT_EMAIL|default("technical@example.com") }}</a>'
img: "{{ NGINX_SERVER_ERROR_IMG }}"
img_alt: "{{ NGINX_SERVER_ERROR_IMG_ALT }}"
heading: 'Uh oh, we are having some server issues..'
style_h1: "{{ NGINX_SERVER_ERROR_STYLE_H1 }}"
style_p_h2: "{{ NGINX_SERVER_ERROR_STYLE_P_H2 }}"
style_p: "{{ NGINX_SERVER_ERROR_STYLE_P }}"
style_div: "{{ NGINX_SERVER_ERROR_STYLE_DIV }}"
NGINX_APT_REPO: deb http://nginx.org/packages/ubuntu/ {{ ansible_distribution_release }} nginx
......@@ -153,3 +178,8 @@ NGINX_CREATE_HTPASSWD_FILE: >
XQUEUE_ENABLE_BASIC_AUTH|bool or
XSERVER_ENABLE_BASIC_AUTH|bool
}}
# Extra settings to add to site configuration for Studio
NGINX_EDXAPP_CMS_APP_EXTRA: ""
# Extra settings to add to site configuration for LMS
NGINX_EDXAPP_LMS_APP_EXTRA: ""
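# Both variables are injected verbatim into the proxied app location blocks,
# so any valid nginx directives work, e.g. (a sketch) raising the upload limit
# only for Studio:
# NGINX_EDXAPP_CMS_APP_EXTRA: |
#   client_max_body_size 100M;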
......@@ -111,6 +111,7 @@
with_items:
- { src: 'etc/nginx/nginx.conf.j2', dest: '/etc/nginx/nginx.conf', group: '{{ common_web_user }}', mode: "0644" }
- { src: 'edx/app/nginx/sites-available/edx-release.j2', dest: '{{ nginx_sites_available_dir }}/edx-release', group: 'root', mode: "0600" }
- { src: 'edx/app/nginx/sites-available/maps.j2', dest: '{{ nginx_sites_available_dir }}/maps', group: 'root', mode: "0600" }
notify: restart nginx
tags:
- install
......@@ -131,12 +132,15 @@
- name: Creating link for common nginx configuration
file:
src: "{{ nginx_sites_available_dir }}/edx-release"
dest: "{{ nginx_sites_enabled_dir }}/edx-release"
src: "{{ nginx_sites_available_dir }}/{{ item }}"
dest: "{{ nginx_sites_enabled_dir }}/{{ item }}"
state: link
owner: root
group: root
notify: reload nginx
with_items:
- "edx-release"
- "maps"
tags:
- install
- install:configuration
......
......@@ -104,6 +104,17 @@ error_page {{ k }} {{ v }};
proxy_redirect off;
proxy_pass http://cms-backend;
{% if NGINX_CMS_PROXY_CONNECT_TIMEOUT %}
proxy_connect_timeout {{ NGINX_CMS_PROXY_CONNECT_TIMEOUT }};
{% endif %}
{% if NGINX_CMS_PROXY_SEND_TIMEOUT %}
proxy_send_timeout {{ NGINX_CMS_PROXY_SEND_TIMEOUT }};
{% endif %}
{% if NGINX_CMS_PROXY_READ_TIMEOUT %}
proxy_read_timeout {{ NGINX_CMS_PROXY_READ_TIMEOUT }};
{% endif %}
{{ NGINX_EDXAPP_CMS_APP_EXTRA }}
}
location / {
......@@ -123,6 +134,13 @@ error_page {{ k }} {{ v }};
try_files $uri @proxy_to_cms_app;
}
# The api is accessed using OAUTH2 which
# uses the authorization header so we can't have
# basic auth on it as well.
location /api {
try_files $uri @proxy_to_cms_app;
}
{% include "robots.j2" %}
{% include "static-files.j2" %}
......
......@@ -40,14 +40,14 @@ server {
{% if NGINX_REDIRECT_TO_HTTPS %}
{% if NGINX_HTTPS_REDIRECT_STRATEGY == "scheme" %}
# Redirect http to https over single instance
if ($scheme != "https")
{
if ($scheme != "https")
{
set $do_redirect_to_https "true";
}
{% elif NGINX_HTTPS_REDIRECT_STRATEGY == "forward_for_proto" %}
# Forward to HTTPS if we're an HTTP request... and the server is behind ELB
if ($http_x_forwarded_proto = "http")
# Forward to HTTPS if we're an HTTP request... and the server is behind ELB
if ($http_x_forwarded_proto = "http")
{
set $do_redirect_to_https "true";
}
......@@ -81,6 +81,11 @@ server {
try_files $uri @proxy_to_app;
}
# Allow access for Apple Pay domain validation
location /.well-known/apple-developer-merchantid-domain-association {
try_files $uri @proxy_to_app;
}
{% include "robots.j2" %}
location @proxy_to_app {
......
{% if EDXAPP_SCORM_PKG_STORAGE_DIR %}
location ~ ^/{{ EDXAPP_MEDIA_URL }}/{{ EDXAPP_SCORM_PKG_STORAGE_DIR }}/(?P<file>.*) {
add_header 'Access-Control-Allow-Origin' $cors_origin;
add_header 'Access-Control-Allow-Credentials' 'true';
add_header 'Access-Control-Allow-Methods' 'GET, POST, OPTIONS';
root {{ edxapp_media_dir }}/{{ EDXAPP_SCORM_PKG_STORAGE_DIR }};
try_files /$file =404;
expires 604800s;
}
{% endif %}
......@@ -4,6 +4,19 @@ upstream insights_app_server {
{% endfor %}
}
# The Origin request header indicates where a fetch originates from. It doesn't include any path information,
# but only the server name (e.g. https://www.example.com).
# See https://developer.mozilla.org/en-US/docs/Web/HTTP/Headers/Origin for details.
#
# Here we set the value that is included in the Access-Control-Allow-Origin response header. If the origin is one
# of our known hosts--served via HTTP or HTTPS--we allow for CORS. Otherwise, we set the "null" value, disallowing CORS.
map $http_origin $cors_origin {
default "null";
{% for host in INSIGHTS_CORS_ORIGIN_WHITELIST %}
"~*^https?:\/\/{{ host|replace('.', '\.') }}$" $http_origin;
{% endfor %}
}
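# With INSIGHTS_CORS_ORIGIN_WHITELIST: ['insights.example.com'] the map above
# renders as (a sketch):
#
#   map $http_origin $cors_origin {
#       default "null";
#       "~*^https?:\/\/insights\.example\.com$" $http_origin;
#   }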
server {
listen {{ INSIGHTS_NGINX_PORT }} default_server;
......@@ -20,6 +33,13 @@ server {
location ~ ^/static/(?P<file>.*) {
root {{ COMMON_DATA_DIR }}/{{ insights_service_name }};
add_header Cache-Control "max-age=31536000";
add_header 'Access-Control-Allow-Origin' $cors_origin;
add_header 'Access-Control-Allow-Methods' 'HEAD, GET, OPTIONS';
# Inform downstream caches to take certain headers into account when reading/writing to cache.
add_header 'Vary' 'Accept-Encoding,Origin';
try_files /staticfiles/$file =404;
}
......
......@@ -43,6 +43,23 @@ geo $http_x_forwarded_for $embargo {
}
{%- endif %}
{% if EDXAPP_CORS_ORIGIN_WHITELIST|length > 0 %}
# The Origin request header indicates where a fetch originates from. It doesn't include any path information,
# but only the server name (e.g. https://www.example.com).
# See https://developer.mozilla.org/en-US/docs/Web/HTTP/Headers/Origin for details.
#
# Here we set the value that is included in the Access-Control-Allow-Origin response header. If the origin is one
# of our known hosts--served via HTTP or HTTPS--we allow for CORS. Otherwise, we set the "null" value, disallowing CORS.
map $http_origin $cors_origin {
default "null";
{% for host in EDXAPP_CORS_ORIGIN_WHITELIST %}
"~*^https?:\/\/{{ host|replace('.', '\.') }}$" $http_origin;
{% endfor %}
}
{% endif %}
server {
# LMS configuration file for nginx, templated by ansible
......@@ -149,6 +166,8 @@ error_page {{ k }} {{ v }};
proxy_redirect off;
proxy_pass http://lms-backend;
{{ NGINX_EDXAPP_LMS_APP_EXTRA }}
}
location / {
......@@ -197,6 +216,11 @@ error_page {{ k }} {{ v }};
try_files $uri @proxy_to_lms_app;
}
# Consent API
location /consent/api {
try_files $uri @proxy_to_lms_app;
}
# Need a separate location for the image uploads endpoint to limit upload sizes
location ~ ^/api/profile_images/[^/]*/[^/]*/upload$ {
try_files $uri @proxy_to_lms_app;
......@@ -275,5 +299,6 @@ location ~ ^{{ EDXAPP_MEDIA_URL }}/(?P<file>.*) {
{% include "robots.j2" %}
{% include "static-files.j2" %}
{% include "extra_locations_lms.j2" ignore missing %}
}
# nginx maps are defined at the top level and are global
# cache header for static files
map $status $cache_header_long_lived {
default "max-age=315360000";
404 "no-cache";
}
map $status $cache_header_short_lived {
default "max-age=300";
404 "no-cache";
}
{% if EDXAPP_SCORM_PLAYER_LOCAL_STORAGE_ROOT %}
# within scorm/, override the default "return 403" for these file types;
# the named capture feeds the try_files below
location ~ "^/static/scorm/(?P<file>.*(\.xml|\.json))$" {
try_files /{{ EDXAPP_SCORM_PLAYER_LOCAL_STORAGE_ROOT }}/$file =404;
}
location ~ "/scorm/(?P<file>.*)" {
add_header 'Access-Control-Allow-Origin' $cors_origin;
add_header 'Access-Control-Allow-Credentials' 'true';
add_header 'Access-Control-Allow-Methods' 'GET, POST, OPTIONS';
try_files /{{ EDXAPP_SCORM_PLAYER_LOCAL_STORAGE_ROOT }}/$file =404;
}
{% endif %}
......@@ -16,14 +16,14 @@
# http://www.red-team-design.com/firefox-doesnt-allow-cross-domain-fonts-by-default
location ~ "/static/(?P<collected>.*\.[0-9a-f]{12}\.(eot|otf|ttf|woff|woff2)$)" {
expires max;
add_header "Cache-Control" $cache_header_long_lived always;
add_header Access-Control-Allow-Origin *;
try_files /staticfiles/$collected /course_static/$collected =404;
}
# Set django-pipelined files to maximum cache time
location ~ "/static/(?P<collected>.*\.[0-9a-f]{12}\..*)" {
expires max;
add_header "Cache-Control" $cache_header_long_lived always;
# Without this try_files, files that have been run through
# django-pipeline return 404s
try_files /staticfiles/$collected /course_static/$collected =404;
......@@ -31,15 +31,17 @@
# Set django-pipelined files for studio to maximum cache time
location ~ "/static/(?P<collected>[0-9a-f]{7}/.*)" {
expires max;
add_header "Cache-Control" $cache_header_long_lived always;
# Without this try_files, files that have been run through
# django-pipeline return 404s
try_files /staticfiles/$collected /course_static/$collected =404;
}
# Expire other static files immediately (there should be very few / none of these)
expires epoch;
{% include "static-files-extra.j2" ignore missing %}
# Non-hashed files (there should be very few / none of these)
add_header "Cache-Control" $cache_header_short_lived always;
}
<!DOCTYPE html>
<html lang="{{ item.lang }}">
<head>
<meta charset="utf-8">
<meta http-equiv="X-UA-Compatible" content="IE=edge,chrome=1">
......@@ -6,37 +6,18 @@
<meta name="description" content="">
<meta name="viewport" content="width=device-width, initial-scale=1">
<style media="screen" type="text/css">
h1, h2{
font-family: "Helvetica Neue",Helvetica,Roboto,Arial,sans-serif;
margin-bottom: .3em;
font-size: 2.0em;
line-height: 1.25em;
text-rendering: optimizeLegibility;
font-weight: bold;
color: #000000;
}
h2 {
font-size: 1.8em;
color: #5b5e63;
}
p {
font-family: Georgia,Cambria,"Times New Roman",Times,serif;
margin: auto;
margin-bottom: 1em;
font-weight: 200;
line-height: 1.4em;
font-size: 1.1em;
max-width: 80%;
}
h1 { {{ item.style_h1 }} }
.p-h2 { {{ item.style_p_h2 }} }
p { {{ item.style_p }} }
</style>
</head>
<body>
<div style="margin: auto; width: 800px; text-align: center; padding:20px 0px 0px 0px;">
<main style="{{ item.style_div }}">
<h1>{{ item.heading }}</h1>
<img src="{{ item.img}}" alt="">
<h2>{{ item.title }}</h2>
<p>{{ item.msg }}
</div
<img src="{{ item.img }}" alt="{{ item.img_alt }}">
<p class="p-h2">{{ item.title }}</p>
<p>{{ item.msg }}</p>
</main>
</body>
</html>
---
NOTIFIER_WEB_USER: "www-data"
NOTIFIER_VENV_DIR: "{{ notifier_app_dir }}/virtualenvs/notifier"
NOTIFIER_DB_DIR: "{{ notifier_app_dir }}/db"
NOTIFIER_SOURCE_REPO: "https://github.com/edx/notifier.git"
NOTIFIER_CODE_DIR: "{{ notifier_app_dir }}/src"
NOTIFIER_VERSION: "master"
......@@ -12,6 +11,14 @@ NOTIFIER_DIGEST_TASK_INTERVAL: "1440"
NOTIFIER_FORUM_DIGEST_TASK_BATCH_SIZE: "5"
NOTIFIER_FORUM_DIGEST_TASK_RATE_LIMIT: "60/m"
NOTIFIER_DB_DIR: "{{ notifier_app_dir }}/db" # Deprecated: use NOTIFIER_DATABASE_NAME instead
NOTIFIER_DATABASE_NAME: "{{ NOTIFIER_DB_DIR }}/notifier.db"
NOTIFIER_DATABASE_ENGINE: "django.db.backends.sqlite3"
NOTIFIER_DATABASE_USER: ""
NOTIFIER_DATABASE_PASSWORD: ""
NOTIFIER_DATABASE_HOST: ""
NOTIFIER_DATABASE_PORT: ""
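# e.g. (a sketch) pointing the notifier at a MySQL database instead of the
# default sqlite3 file; host and credentials are illustrative:
# NOTIFIER_DATABASE_ENGINE: "django.db.backends.mysql"
# NOTIFIER_DATABASE_NAME: "notifier"
# NOTIFIER_DATABASE_USER: "notifier001"
# NOTIFIER_DATABASE_PASSWORD: "password"
# NOTIFIER_DATABASE_HOST: "db.example.com"
# NOTIFIER_DATABASE_PORT: "3306"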
NOTIFIER_THEME_NAME: ""
NOTIFIER_THEME_REPO: ""
NOTIFIER_THEME_VERSION: "master"
......@@ -90,7 +97,12 @@ notifier_env_vars:
EMAIL_SENDER_POSTAL_ADDRESS: "{{ NOTIFIER_EMAIL_SENDER_POSTAL_ADDRESS }}"
NOTIFIER_LANGUAGE: "{{ NOTIFIER_LANGUAGE }}"
NOTIFIER_ENV: "{{ NOTIFIER_ENV }}"
NOTIFIER_DB_DIR: "{{ NOTIFIER_DB_DIR }}"
NOTIFIER_DATABASE_NAME: "{{ NOTIFIER_DATABASE_NAME }}"
NOTIFIER_DATABASE_ENGINE: "{{ NOTIFIER_DATABASE_ENGINE }}"
NOTIFIER_DATABASE_USER: "{{ NOTIFIER_DATABASE_USER }}"
NOTIFIER_DATABASE_PASSWORD: "{{ NOTIFIER_DATABASE_PASSWORD }}"
NOTIFIER_DATABASE_HOST: "{{ NOTIFIER_DATABASE_HOST }}"
NOTIFIER_DATABASE_PORT: "{{ NOTIFIER_DATABASE_PORT }}"
EMAIL_BACKEND: "{{ NOTIFIER_EMAIL_BACKEND }}"
EMAIL_HOST: "{{ NOTIFIER_EMAIL_HOST }}"
EMAIL_PORT: "{{ NOTIFIER_EMAIL_PORT }}"
......
......@@ -103,7 +103,7 @@
chdir: "{{ NOTIFIER_CODE_DIR }}"
become: true
become_user: "{{ notifier_user }}"
environment: notifier_env_vars
environment: "{{ notifier_env_vars }}"
when: migrate_db is defined and migrate_db|lower == "yes"
tags:
- "install"
......
......@@ -20,6 +20,7 @@
apt:
name: "{{ item }}"
state: present
update_cache: yes
with_items: "{{ notifier_debian_pkgs }}"
tags:
- "install"
......
---
oauth2_proxy_app_dir: "{{ COMMON_APP_DIR }}/oauth2_proxy"
oauth2_proxy_conf_dir: "{{ COMMON_CFG_DIR }}/oauth2_proxy"
oauth2_proxy_user: "oauth2_proxy"
# We define this tuple here separately because we need it to download the right tarball. The release tarball names
# bake in both the version number -- which doesn't always match the actual Git tag the release was cut from -- and the
# Go version, so it's nearly impossible to build a valid URL from `oauth2_proxy_version` alone.
oauth2_proxy_version: "2.2.0"
oauth2_proxy_version_tuple: "2.2.0.linux-amd64.go1.8.1"
oauth2_proxy_pkg_name: "oauth2_proxy-{{ oauth2_proxy_version_tuple }}"
oauth2_proxy_release_url: "https://github.com/bitly/oauth2_proxy/releases/download/v2.2/{{ oauth2_proxy_pkg_name }}.tar.gz"
oauth2_proxy_release_sha256: "1c16698ed0c85aa47aeb80e608f723835d9d1a8b98bd9ae36a514826b3acce56"
oauth2_proxy_listen_port: 4180
oauth2_proxy_listen_addr: "0.0.0.0"
oauth2_proxy_upstreams: ["localhost:80"] # List of address:port values acting as upstreams/backends.
oauth2_proxy_request_logging: true
oauth2_proxy_pass_basic_auth: true # Pass Basic Authorization header to upstream(s).
oauth2_proxy_pass_user_headers: true # Passes X-Forwarded-User and X-Forwarded-Email to upstream(s).
oauth2_proxy_pass_host_header: true # Pass original Host header to upstream(s). If false, Host header will come from upstream address.
oauth2_proxy_pass_access_token: true # Pass OAuth access token via X-Forwarded-Access-Token header to upstream(s).
oauth2_proxy_email_domains: ["example.com"] # Which e-mail domains, if any, to validate for. Needed for things like validating a specific G Suite apps domain, etc.
oauth2_proxy_provider: "google" # OAuth provider type.
oauth2_proxy_client_id: "CHANGEME-OAUTH2-CLIENT-ID" # OAuth client ID.
oauth2_proxy_client_secret: "CHANGEME-OAUTH2-CLIENT-SECRET" # OAuth client secret.
oauth2_proxy_custom_templates_dir: "" # Directory having template overrides for the login/error pages.
oauth2_proxy_cookie_name: "_oauth2_proxy" # Client-side browser cookie name.
oauth2_proxy_cookie_secret: "CHANGEME-COOKIE-SECRET" # Cookie encryption secret.
oauth2_proxy_cookie_domain: "example.com" # Domain pattern for this cookie.
oauth2_proxy_cookie_expire: "168h" # How long before the cookie expires. (168h = 7 days)
oauth2_proxy_cookie_refresh: "4h" # How long since cookie issuance (and since last refresh) to validate existing OAuth token.
oauth2_proxy_cookie_secure: true # Whether or not cookie is HTTPS only.
oauth2_proxy_cookie_httponly: true # Whether or not cookie is browser-only (i.e. Javascript can't access it)
oauth2_proxy_services:
- { service: "oauth2_proxy", host: "localhost", port: "{{ oauth2_proxy_listen_port }}" }
oauth2_proxy_config:
http_address: "{{ oauth2_proxy_listen_addr }}:{{ oauth2_proxy_listen_port }}"
upstreams: "{{ oauth2_proxy_upstreams }}"
request_logging: "{{ oauth2_proxy_request_logging }}"
pass_basic_auth: "{{ oauth2_proxy_pass_basic_auth }}"
pass_user_headers: "{{ oauth2_proxy_pass_user_headers }}"
pass_host_header: "{{ oauth2_proxy_pass_host_header }}"
pass_access_token: "{{ oauth2_proxy_pass_access_token }}"
email_domains: "{{ oauth2_proxy_email_domains }}"
provider: "{{ oauth2_proxy_provider }}"
client_id: "{{ oauth2_proxy_client_id }}"
client_secret: "{{ oauth2_proxy_client_secret }}"
custom_templates_dir: "{{ oauth2_proxy_custom_templates_dir }}"
cookie_name: "{{ oauth2_proxy_cookie_name }}"
cookie_secret: "{{ oauth2_proxy_cookie_secret }}"
cookie_domain: "{{ oauth2_proxy_cookie_domain }}"
cookie_expire: "{{ oauth2_proxy_cookie_expire }}"
cookie_refresh: "{{ oauth2_proxy_cookie_refresh }}"
cookie_secure: "{{ oauth2_proxy_cookie_secure }}"
cookie_httponly: "{{ oauth2_proxy_cookie_httponly }}"
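# When rendered through oauth2_proxy.cfg.j2 (not shown here), this mapping
# becomes the flat key = value file oauth2_proxy reads, roughly (a sketch of a
# few keys):
#
#   http_address = "0.0.0.0:4180"
#   upstreams = ["localhost:80"]
#   provider = "google"
#   email_domains = ["example.com"]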
---
dependencies:
- role: common
tags:
- always # We want to make sure the role always runs, otherwise the system isn't in a state to install Python/Supervisord.
- config-encoders
- supervisor
---
- name: create the supervisor config
template:
src: oauth2_proxy_supervisor.conf.j2
dest: "{{ supervisor_available_dir }}/oauth2_proxy.conf"
owner: "{{ supervisor_user }}"
group: "{{ supervisor_user }}"
mode: 0644
become_user: "{{ supervisor_user }}"
register: oauth2_proxy_supervisor
tags:
- install
- install:configuration
- name: enable the supervisor config
file:
src: "{{ supervisor_available_dir }}/oauth2_proxy.conf"
dest: "{{ supervisor_cfg_dir }}/oauth2_proxy.conf"
owner: "{{ supervisor_user }}"
state: link
force: yes
mode: 0644
become_user: "{{ supervisor_user }}"
register: oauth2_proxy_supervisor
tags:
- install
- install:configuration
- name: download oauth2_proxy release
get_url:
url: "{{ oauth2_proxy_release_url }}"
dest: "/tmp/oauth2_proxy.tar.gz"
force: yes
sha256sum: "{{ oauth2_proxy_release_sha256 }}"
tags:
- install
- install:configuration
- name: extract the oauth2_proxy release
unarchive:
src: "/tmp/oauth2_proxy.tar.gz"
dest: "/tmp"
remote_src: True
tags:
- install
- install:configuration
- name: move the oauth2_proxy binary into place
command: "mv /tmp/{{ oauth2_proxy_pkg_name }}/oauth2_proxy {{ oauth2_proxy_app_dir }}/"
tags:
- install
- install:configuration
- name: update oauth2_proxy configuration
template:
src: oauth2_proxy.cfg.j2
dest: "{{ oauth2_proxy_conf_dir }}/oauth2_proxy.cfg"
owner: "{{ oauth2_proxy_user }}"
group: "{{ common_web_group }}"
mode: 0644
tags:
- install
- install:configuration
- name: update supervisor configuration
shell: "{{ supervisor_ctl }} -c {{ supervisor_cfg }} update"
register: supervisor_update
changed_when: supervisor_update.stdout is defined and supervisor_update.stdout != ""
when: not disable_edx_services
tags:
- manage
- manage:start
- manage:update
- name: ensure oauth2_proxy is started
supervisorctl:
name: oauth2_proxy
supervisorctl_path: "{{ supervisor_ctl }}"
config: "{{ supervisor_cfg }}"
state: started
tags:
- manage
- manage:start
- include: test.yml
tags:
- deploy
- include: tag_ec2.yml
when: COMMON_TAG_EC2_INSTANCE
tags:
- deploy
- set_fact:
oauth2_proxy_installed: true
---
# oauth2_proxy
#
# Dependencies:
#
# * common
- name: create application user
user:
name: "{{ oauth2_proxy_user }}"
home: "{{ oauth2_proxy_app_dir }}"
createhome: yes
shell: /bin/false
generate_ssh_key: yes
tags:
- install
- install:base
- name: set oauth2_proxy app dir permissions
file:
path: "{{ oauth2_proxy_app_dir }}"
state: directory
owner: "{{ oauth2_proxy_user }}"
group: "{{ common_web_group }}"
tags:
- install
- install:base
- name: set oauth2_proxy conf dir permissions
file:
path: "{{ oauth2_proxy_conf_dir }}"
state: directory
owner: "{{ oauth2_proxy_user }}"
group: "{{ common_web_group }}"
tags:
- install
- install:base
- include: deploy.yml
tags:
- deploy
---
- name: get instance information
action: ec2_facts
- name: tag instance
ec2_tag:
resource: "{{ ansible_ec2_instance_id }}"
region: "{{ ansible_ec2_placement_region }}"
tags:
"version:oauth2_proxy" : "{{ oauth2_proxy_version }} {{ oauth2_proxy_release_sha256 }}"
---
- name: test that the required service are listening
wait_for:
port: "{{ item.port }}"
host: "{{ item.host }}"
timeout: 30
with_items: "{{ oauth2_proxy_services }}"
[program:oauth2_proxy]
command={{ oauth2_proxy_app_dir }}/oauth2_proxy -config {{ oauth2_proxy_conf_dir }}/oauth2_proxy.cfg
priority=999
user={{ oauth2_proxy_user }}
stdout_logfile={{ supervisor_log_dir }}/%(program_name)s-stdout.log
stderr_logfile={{ supervisor_log_dir }}/%(program_name)s-stderr.log
stopsignal=QUIT
......@@ -259,19 +259,25 @@
- "install"
- "install:app-configuration"
- set_fact:
permissions: "{{ permissions|default([])+[{'vhost':item,'configure_priv':'.*','read_priv':'.*','write_priv':'.*'}] }}"
with_items:
- "{{ RABBITMQ_VHOSTS }}"
tags:
- users
- maintenance
- "manage"
- "manage:app-users"
- name: Add admin users
rabbitmq_user:
user: "{{ item[0].name }}"
password: "{{ item[0].password }}"
read_priv: '.*'
write_priv: '.*'
configure_priv: '.*'
user: "{{ item.name }}"
password: "{{ item.password }}"
tags: "administrator"
state: present
vhost: "{{ item[1] }}"
with_nested:
state: "{{ item.state | default('present') }}"
permissions: "{{ permissions }}"
with_items:
- "{{rabbitmq_auth_config.admins}}"
- "{{ RABBITMQ_VHOSTS }}"
when: "'admins' in rabbitmq_auth_config"
tags:
- users
......
......@@ -28,6 +28,8 @@
with_items:
- "systemctl disable apt-daily.service"
- "systemctl disable apt-daily.timer"
- "systemctl disable apt-daily-upgrade.timer"
ignore_errors: true
- name: Disable unattended-upgrades
file:
......
......@@ -10,13 +10,13 @@
#
#
# Tasks for role splunk-server
#
#
# Overview:
#
#
#
# Dependencies:
#
#
#
# Example play:
#
#
......@@ -44,7 +44,7 @@
owner: splunk
group: splunk
mode: "0400"
when: "{{ SPLUNK_SSL_CERT is defined and SPLUNK_SSL_CERT | length > 0 }}"
when: SPLUNK_SSL_CERT is defined and SPLUNK_SSL_CERT | length > 0
with_together:
- [forwarder.pem, cacert.pem]
- ["{{ SPLUNK_SSL_CERT }}", "{{ SPLUNK_SSL_ROOT_CA }}"]
......@@ -150,9 +150,9 @@
- install:configuration
- name: restart splunk
service:
service:
name: splunk
state: restarted
state: restarted
tags:
- "install"
- "install:configuration"
......
......@@ -47,6 +47,8 @@ SPLUNKFORWARDER_SERVERS:
# entire fleet. #
###############################################################################
SPLUNKFORWARDER_HOST_VALUE: !!null
SPLUNKFORWARDER_LOG_ITEMS:
- source: '{{ COMMON_LOG_DIR }}/lms'
recursive: true
......@@ -76,6 +78,10 @@ SPLUNKFORWARDER_LOG_ITEMS:
recursive: true
index: '{{ COMMON_ENVIRONMENT }}-{{ COMMON_DEPLOYMENT }}'
sourcetype: 'rabbitmq'
- source: '/var/log/neo4j'
recursive: true
index: '{{ COMMON_ENVIRONMENT }}-{{ COMMON_DEPLOYMENT }}'
sourcetype: 'neo4j'
#
# OS packages
......
......@@ -37,6 +37,8 @@
dest: "/tmp/{{ SPLUNKFORWARDER_DEB }}"
url: "{{ SPLUNKFORWARDER_PACKAGE_URL }}"
register: download_deb
until: download_deb|succeeded
retries: 5
- name: Install splunk forwarder
shell: "gdebi -nq /tmp/{{ SPLUNKFORWARDER_DEB }}"
......@@ -115,7 +117,7 @@
owner: splunk
group: splunk
mode: "0400"
when: "{{ item.ssl_cert is defined }}"
when: item.ssl_cert is defined
with_items: "{{ SPLUNKFORWARDER_SERVERS }}"
- name: Write root CA to disk
......@@ -125,7 +127,7 @@
owner: splunk
group: splunk
mode: "0400"
when: "{{ item.ssl_cert is defined }}"
when: item.ssl_cert is defined
with_items: "{{ SPLUNKFORWARDER_SERVERS }}"
- name: Create inputs and outputs configuration
......
# {{ ansible_managed }}
{% if SPLUNKFORWARDER_HOST_VALUE %}
[default]
host = {{ SPLUNKFORWARDER_HOST_VALUE }}
{% endif %}
{% for loggable in SPLUNKFORWARDER_LOG_ITEMS%}
[monitor://{{ loggable.source }}]
blacklist = \.(gz)$
{% if loggable.blacklist is defined %}
blacklist = {{ loggable.blacklist }}
{% else %}
blacklist = ((\.(gz))|\d)$
{% endif %}
{% if loggable.recursive | default(False) %}
{# There's a bug in which "recursive" must be unset for logs to be forwarded #}
{# See https://answers.splunk.com/answers/420901/splunk-not-matching-files-with-wildcard-in-monitor.html #}
......
......@@ -115,36 +115,18 @@
- install
- install:base
# 12.04, 14.04, etc.
# 14.04
- name: Create supervisor upstart job
template:
src: "etc/init/supervisor-upstart.conf.j2"
dest: "/etc/init/{{ supervisor_service }}.conf"
owner: root
group: root
when: ansible_distribution_release == 'precise' or ansible_distribution_release == 'trusty'
when: ansible_distribution_release == 'trusty'
tags:
- install
- install:base
# This script is AWS-specific: on instance startup it looks up the
# instance's tags and enables services based on the 'services' tag.
# TODO: on 16.04 this cannot simply be dropped; enabling needs to move
# somewhere else. It also should not live here -- it belongs in the aws
# role if it is AWS-specific.
- name: create pre_supervisor upstart job
template:
src: "etc/init/pre_supervisor.conf.j2"
dest: "/etc/init/pre_supervisor.conf"
owner: root
group: root
when: >
supervisor_service == "supervisor" and disable_edx_services and not devstack
and (ansible_distribution_release == 'precise' or ansible_distribution_release == 'trusty')
tags:
- to-remove
- aws-specific
# NB: with systemd, pre_supervisor is a pre-task for supervisor, not a separate service
- name: Create supervisor systemd job
template:
......
description "Tasks before supervisord"
start on runlevel [2345]
task
{% if credentials_code_dir is defined %}
{% set credentials_command = "--credentials-env " + credentials_home + "/credentials_env --credentials-code-dir " + credentials_code_dir + " --credentials-python " + COMMON_BIN_DIR + "/python.credentials" %}
{% else %}
{% set credentials_command = "" %}
{% endif %}
{% if discovery_code_dir is defined %}
{% set discovery_command = "--discovery-env " + discovery_home + "/discovery_env --discovery-code-dir " + discovery_code_dir + " --discovery-python " + COMMON_BIN_DIR + "/python.discovery" %}
{% else %}
{% set discovery_command = "" %}
{% endif %}
exec {{ supervisor_venv_dir }}/bin/python {{ supervisor_app_dir }}/pre_supervisor_checks.py --available={{ supervisor_available_dir }} --enabled={{ supervisor_cfg_dir }} {% if SUPERVISOR_HIPCHAT_API_KEY is defined %}--hipchat-api-key {{ SUPERVISOR_HIPCHAT_API_KEY }} --hipchat-room {{ SUPERVISOR_HIPCHAT_ROOM }} {% endif %} {% if edxapp_code_dir is defined %}--edxapp-python {{ COMMON_BIN_DIR }}/python.edxapp --edxapp-code-dir {{ edxapp_code_dir }} --edxapp-env {{ edxapp_app_dir }}/edxapp_env{% endif %} {% if xqueue_code_dir is defined %}--xqueue-code-dir {{ xqueue_code_dir }} --xqueue-python {{ COMMON_BIN_DIR }}/python.xqueue {% endif %} {% if ecommerce_code_dir is defined %}--ecommerce-env {{ ecommerce_home }}/ecommerce_env --ecommerce-code-dir {{ ecommerce_code_dir }} --ecommerce-python {{ COMMON_BIN_DIR }}/python.ecommerce {% endif %} {% if insights_code_dir is defined %}--insights-env {{ insights_home }}/insights_env --insights-code-dir {{ insights_code_dir }} --insights-python {{ COMMON_BIN_DIR }}/python.insights {% endif %} {% if analytics_api_code_dir is defined %}--analytics-api-env {{ analytics_api_home }}/analytics_api_env --analytics-api-code-dir {{ analytics_api_code_dir }} --analytics-api-python {{ COMMON_BIN_DIR }}/python.analytics_api {% endif %} {{ discovery_command }} {{ credentials_command }}
......@@ -28,12 +28,17 @@ PermissionsStartOnly=true
User={{ supervisor_service_user }}
Type=forking
TimeoutStartSec=432000
TimeoutSec=432000
ExecStart={{ supervisor_venv_dir }}/bin/supervisord --configuration {{ supervisor_cfg }}
ExecReload={{ supervisor_venv_dir }}/bin/supervisorctl reload
ExecStop={{ supervisor_venv_dir }}/bin/supervisorctl shutdown
# Trust supervisor to kill all its children
# Otherwise systemd will see that ExecStop ^ comes back synchronously and say "Oh, I can kill everyone in this cgroup"
# https://www.freedesktop.org/software/systemd/man/systemd.service.html#ExecStop=
# https://www.freedesktop.org/software/systemd/man/systemd.kill.html
KillMode=none
[Install]
WantedBy=multi-user.target
......@@ -44,9 +44,9 @@ case "$1" in
"bokchoy")
# Run some of the bok-choy tests
paver test_bokchoy -t discussion/test_discussion.py:DiscussionTabSingleThreadTest
paver test_bokchoy -t studio/test_studio_outline.py:WarningMessagesTest.test_unreleased_published_locked --fasttest
paver test_bokchoy -t lms/test_lms_matlab_problem.py:MatlabProblemTest --fasttest
paver test_bokchoy -t discussion/test_discussion.py::DiscussionTabSingleThreadTest
paver test_bokchoy -t studio/test_studio_outline.py::WarningMessagesTest::test_unreleased_published_locked --fasttest
paver test_bokchoy -t lms/test_lms_matlab_problem.py::MatlabProblemTest --fasttest
;;
"lettuce")
......
......@@ -14,7 +14,7 @@ jenkins_tools_plugins:
- { name: "ssh-credentials", version: "1.12" }
- { name: "ssh-agent", version: "1.13" }
- { name: "bouncycastle-api", version: "1.648.3" }
- { name: "token-macro", version: "1.12.1" }
- { name: "token-macro", version: "2.1" }
- { name: "parameterized-trigger", version: "2.32" }
- { name: "conditional-buildstep", version: "1.3.5" }
- { name: "run-condition", version: "0.10" }
......@@ -44,8 +44,10 @@ jenkins_tools_plugins:
- { name: "plain-credentials", version: "1.2" }
- { name: "github-oauth", version: "0.24" }
- { name: "gradle", version: "1.25" }
- { name: "credentials-binding", version: "1.9" }
- { name: "credentials-binding", version: "1.10" }
- { name: "envinject", version: "1.92.1" }
- { name: "email-ext", version: "2.57.2" }
- { name: "text-finder", version: "1.10"}
# matrix-auth is now pinned to avoid Jenkins overriding
# 1.3 and later requires icon-shim
......@@ -66,3 +68,13 @@ jenkins_tools_bundled_plugins:
- "pam-auth"
- "ssh-credentials"
- "ssh-slaves"
jenkins_tools_debian_pkgs:
- nginx
- git
- maven
- daemon
- python-pycurl
- psmisc
- mysql-client-core-5.6
- ruby-sass
......@@ -6,6 +6,14 @@ dependencies:
- role: jenkins_master
jenkins_plugins: "{{ jenkins_tools_plugins }}"
jenkins_version: "{{ jenkins_tools_version }}"
jenkins_deb_url: "http://pkg.jenkins-ci.org/debian-stable/binary/jenkins_{{ jenkins_version }}_all.deb"
jenkins_deb_url: "https://pkg.jenkins.io/debian-stable/binary/jenkins_{{ jenkins_version }}_all.deb"
jenkins_custom_plugins: []
jenkins_bundled_plugins: "{{ jenkins_tools_bundled_plugins }}"
jenkins_debian_pkgs: "{{ jenkins_tools_debian_pkgs }}"
# Needed to be able to build docker images. Used by Docker Image Builder Jobs.
- role: docker-tools
docker_users:
- '{{ jenkins_user }}'
- role: mongo_client
---
# The deadsnakes PPA is required to install python3.5 on Precise and Trusty.
# The deadsnakes PPA is required to install python3.5 on Trusty.
# Xenial comes with python3.5 installed.
- name: add deadsnakes repository
apt_repository:
repo: "ppa:fkrull/deadsnakes"
when: ansible_distribution_release == 'precise' or ansible_distribution_release == 'trusty'
when: ansible_distribution_release == 'trusty'
tags:
- install
- install:system-requirements
......@@ -15,7 +15,7 @@
with_items:
- python3.5
- python3.5-dev
when: ansible_distribution_release == 'precise' or ansible_distribution_release == 'trusty'
when: ansible_distribution_release == 'trusty'
tags:
- install
- install:system-requirements
......@@ -126,9 +126,11 @@
url: "https://github.com/{{ item.name }}.keys"
return_content: true
# We don't care if absent users lack ssh keys
when: item.get('state', 'present') == 'present'
when: item.get('state', 'present') == 'present' and item.github is defined
with_items: "{{ user_info }}"
register: github_users_return
until: github_users_return|succeeded
retries: 5
- name: Print warning if github user(s) missing ssh key
debug:
......@@ -150,6 +152,9 @@
exclusive: yes
key: "https://github.com/{{ item.name }}.keys"
when: item.github is defined and item.get('state', 'present') == 'present'
register: task_result
until: task_result|succeeded
retries: 5
with_items: "{{ user_info }}"
- name: Create bashrc file for normal users
......
......@@ -4,7 +4,6 @@
# The current process is described here: https://openedx.atlassian.net/wiki/x/dQArCQ
#
ANALYTICS_API_EMAIL_HOST_PASSWORD: !!null
ANALYTICS_PIPELINE_OUTPUT_DATABASE_PASSWORD: !!null
ANALYTICS_SCHEDULE_MASTER_SSH_CREDENTIAL_PASSPHRASE: !!null
COMMON_HTPASSWD_PASS: !!null
......
......@@ -100,7 +100,6 @@
# SUBDOMAIN_COURSE_LISTINGS: false
# PREVIEW_LMS_BASE: "{{ EDXAPP_PREVIEW_LMS_BASE }}"
# ENABLE_GRADE_DOWNLOADS: true
# USE_CUSTOM_THEME: "{{ edxapp_use_custom_theme }}"
# ENABLE_MKTG_SITE: "{{ EDXAPP_ENABLE_MKTG_SITE }}"
# AUTOMATIC_AUTH_FOR_TESTING: "{{ EDXAPP_ENABLE_AUTO_AUTH }}"
# ENABLE_THIRD_PARTY_AUTH: "{{ EDXAPP_ENABLE_THIRD_PARTY_AUTH }}"
......
# Example ansible commands
# Three node replica set
# ansible-playbook -i '203.0.113.12,203.0.113.20,203.0.113.68' -u ubuntu edx-east/mongo_3_2.yml -e@sample_vars/test-mongo.yml
# Single node
# ansible-playbook -i '203.0.113.12' -u ubuntu edx-east/mongo_3_2.yml -e@sample_vars/test-mongo.yml
# Passwords and replication keys in this file are examples and must be changed.
# You must change any variable with the string "CHANGEME" in it
MONGO_HEARTBEAT_TIMEOUT_SECS: 3
EDXAPP_MONGO_HOSTS: "{{ MONGO_RS_CONFIG.members|map(attribute='host')|list }}"
MONGO_VOLUMES:
- device: /dev/xvdb
mount: /edx/var/mongo
options: "defaults,noatime"
fstype: ext4
- device: /dev/xvdc
mount: /edx/var/mongo/mongodb/journal
options: "defaults,noatime"
fstype: ext4
##### edx-secure/ansible/vars/stage-edx.yml #####
MONGO_ADMIN_USER: 'admin'
MONGO_ADMIN_PASSWORD: 'CHANGEME_794jtB7zLIvDjHGu2gD6wKUU'
MONGO_MONITOR_USER: 'cloud-manager'
MONGO_MONITOR_PASSWORD: 'CHANGEME_7DJ9FTWHJx4TCSPxSmx1k3DD'
MONGO_BACKUP_USER: 'backup'
MONGO_BACKUP_PASSWORD: 'CHANGEME_XbJA3LouKV5QDv2NQixnOrQj'
MONGO_REPL_SET: 'test-repl-set'
MONGO_RS_CONFIG:
_id: '{{ MONGO_REPL_SET }}'
members:
# Must use private IPs here; the mongo role assumes internal IPs when checking whether a node is in this list
- host: '203.0.113.12'
- host: '203.0.113.20'
- host: '203.0.113.68'
MONGO_CLUSTER_KEY: |
CHANGEME/CHANGE/ME/CHANGE/ME9YeSrVDYxont1rDh2nBAEGB30PhwG9ghtPY
c1QUc2etVfMnE9vbUhLimU/Xb4j4yLRDurOTi8eYoE8eAvAquLalcz7URMuw8Qt3
fIyFa3wSXyE04rpsoBrpG53HwwFrN3pra3x4YPs8g77v50V56gfwaStNJ3KPpa5w
RukdFXnCUPRyONSJEYwjPzI2WucnAZqlDYre6qjxL+6hCjZ4vS/RPgfoHGTUQ62W
9k2TiWar/c1nL6rZvGhGJHFmZalyL9pJ4SAaYoFPhCmcHusyzjlM8p27AsyJwDyr
kSI/JPBLMLDoiLUAPHGz1jrGM+iOgTilmfPVy+0UVc9Bf2H4Vs1zKJpUM2RNAPJ7
S9DzB6q8WtRothbEtwnppWojceid202uLEYCpqhCcH6LR0lTcyJiXCRyHAtue813
5Djv1m3Z8p2z6B+3ab7CDq+WV9OrBI7+eynnwYGgp4eIHQNNSb1/x/8TeiVMQYyJ
ONj4PbgVwsdhL+RUuVqCzjK0F4B4FOSSKXbu07L4F/PALqVugH/YebAUAJVo027r
ca669FSrQ8q6Jgx3M1mCoZkp23CVt3B28+EwpyABh6cwxIrTIvxU6cvxX8M2piz+
63nKUKoStNhmRA0EGfbY9WRmk1RNlC2jVJAvvJUnNXnouNF2DGV4pRNGlb7yfS+n
S+3ZZpUDpTLx36CWGPJ1ZpwuZ0p5JPbCSW6gpFZqGFZsQERg6L8Q9FkwESnbfw+V
oDiVJlClJA2AFXMnAt9q1dhM7OVBj12x9YI5yf1Lw0vVLb7JDmWI7IGaibyxtjFi
jO4bAEl4RZu3364nFH/nVf6kV2S29pAREMqxbcR5O75OuHFN9cqG7BhYClg+5mWg
mGKLLgpXsJxd6bMGjxH1uc30E2qbU1mkrW29Ocl5DFuXevK2dxVj71ZiYESIUg87
KRdC8S3Mljym9ruu4nDC3Sk4xLLuUGp/yD2O0B0dZTfYOJdt
COMMON_MONGO_READ_ONLY_USER: 'read_only'
COMMON_MONGO_READ_ONLY_PASS: "CHANGEME correct horse battery staple"
EDXAPP_MONGO_PASSWORD: 'CHANGEME_H8uoZEZJun9BeR5u8mMyA4yh'
EDXAPP_MONGO_USER: 'edxapp003'
FORUM_MONGO_USER: "comments001"
FORUM_MONGO_PASSWORD: "CHANGEME_j5fhX0pOwEL1S5WUFZkbZAyZ"
login_host: "{{ EDXAPP_MONGO_HOSTS[1] }}"
repl_set: "{{ EDXAPP_MONGO_REPLICA_SET }}"
MONGO_USERS:
- user: "{{ EDXAPP_MONGO_USER }}"
password: "{{ EDXAPP_MONGO_PASSWORD }}"
database: "{{ EDXAPP_MONGO_DB_NAME }}"
roles: readWrite
- user: "{{ COMMON_MONGO_READ_ONLY_USER }}"
password: "{{ COMMON_MONGO_READ_ONLY_PASS }}"
database: "{{ EDXAPP_MONGO_DB_NAME }}"
roles:
- { db: "{{ EDXAPP_MONGO_DB_NAME }}", role: "read" }
- { db: "admin", role: "clusterMonitor" }
- user: "{{ MONGO_MONITOR_USER }}"
password: "{{ MONGO_MONITOR_PASSWORD }}"
database: "admin"
roles: clusterMonitor
- user: "{{ MONGO_BACKUP_USER }}"
password: "{{ MONGO_BACKUP_PASSWORD }}"
database: "admin"
roles: backup
EDXAPP_MONGO_DB_NAME: 'test-mongo-db'
EDXAPP_MONGO_PORT: 27017
EDXAPP_MONGO_REPLICA_SET: '{{ MONGO_REPL_SET }}'
......@@ -8,7 +8,6 @@
- "cluster1"
- "cluster2"
- "cluster3"
MONGO_CLUSTERED: yes
MONGO_CLUSTER_KEY: 'password'
ELASTICSEARCH_CLUSTERED: yes
MARIADB_CLUSTERED: yes
......
......@@ -5,6 +5,7 @@
vars:
migrate_db: 'yes'
devstack: true
edx_django_service_is_devstack: true
disable_edx_services: true
mongo_enable_journal: false
EDXAPP_NO_PREREQ_INSTALL: 0
......@@ -17,6 +18,7 @@
COMMON_SECURITY_UPDATES: true
SECURITY_UPGRADE_ON_ANSIBLE: true
MONGO_AUTH: false
DISCOVERY_URL_ROOT: 'http://localhost:{{ DISCOVERY_NGINX_PORT }}'
vars_files:
- roles/edxapp/vars/devstack.yml
roles:
......@@ -26,7 +28,7 @@
- mysql
- edxlocal
- memcache
- mongo
- mongo_3_2
- role: rabbitmq
rabbitmq_ip: 127.0.0.1
- edxapp
......@@ -36,6 +38,7 @@
- ecommerce
- role: ecomworker
ECOMMERCE_WORKER_BROKER_HOST: 127.0.0.1
- discovery
- role: notifier
NOTIFIER_DIGEST_TASK_INTERVAL: "5"
- browsers
......
......@@ -2,7 +2,7 @@ ansible==2.2.0.0
PyYAML==3.12
Jinja2==2.8
MarkupSafe==0.23
boto==2.33.0
boto==2.48.0
ecdsa==0.11
paramiko==2.0.2
pycrypto==2.6.1
......@@ -17,9 +17,10 @@ requests==2.9.1
datadog==0.8.0
networkx==1.11
pathlib2==2.1.0
boto3==1.4.4
# Needed for the mongo_* modules (playbooks/library/mongo_*)
pymongo==3.1
pymongo==3.2.2
# Needed for the mysql_db module
MySQL-python==1.2.5
# Needed for SES limits check job
boto3==1.4.4
awscli==1.11.58
......@@ -49,9 +49,11 @@ To modify configuration file:
6. Wait for Travis CI to run the builds.
7. Upon completion, examine the Travis CI logs to find where your Dockerfile
was built (search for "docker build -t"). Find the amount of time the build
took by comparing the output of the date command before the build command
starts and the date command after the build command completes.
was built (search for "docker build -t"). Your Dockerfile should be built
by one of the build jobs with "MAKE_TARGET=docker.test.shard". Find the
amount of time the build took by comparing the output of the date command
before the build command starts and the date command after the build
command completes.
8. Round the build time to a whole number of minutes, and add it to the
configuration/util/parsefiles\_config.yml file (for example, a hypothetical
image whose build takes about four minutes would get a "- my_image: 4"
entry under "weights:").
......
import boto3
import argparse
import sys
import yaml
from pprint import pprint
def find_active_instances(cluster_file, region):
"""
Determines whether each cluster in the given file has at least one ASG and at least one active instance.
Input:
cluster_file: a YAML file containing a list of triples that specify the clusters to monitor.
The keys of each entry in the list are 'env', 'deployment', and 'cluster', specifying the environment, deployment,
and cluster to find ASGs and active instances for.
"""
with open(cluster_file, 'r') as f:
cluster_map = yaml.safe_load(f)
asg = boto3.client('autoscaling', region)
all_groups = asg.describe_auto_scaling_groups()
# maps each environment/deployment/cluster triple (as a string key) to the list of ASGs that match it
all_matching_asgs = {}
# all the triples for which an autoscaling group does not exist
not_matching_triples = []
# check if there exists at least one ASG for each triple
for triple in cluster_map:
#the asgs that match this particular triple
cluster_asgs = []
for g in all_groups['AutoScalingGroups']:
match_env = False
match_deployment = False
match_cluster = False
for tag in g['Tags']:
if tag['Key'] == 'environment' and tag['Value'] == triple['env']:
match_env = True
if tag['Key'] == 'deployment' and tag['Value'] == triple['deployment']:
match_deployment = True
if tag['Key'] == 'cluster' and tag['Value'] == triple['cluster']:
match_cluster = True
if match_env and match_cluster and match_deployment:
cluster_asgs += [g]
if not cluster_asgs:
not_matching_triples += [triple]
else:
triple_str = triple['env'] + '-' + triple['deployment'] + '-' + triple['cluster']
all_matching_asgs[triple_str] = cluster_asgs
#The triples that have no active instances
no_active_instances_triples = []
#check that each triple has at least one active instance in at least one of its ASG's
for triple in all_matching_asgs:
asgs = all_matching_asgs[triple]
triple_has_active_instances = False
for asg in asgs:
for instance in asg['Instances']:
if instance['LifecycleState'] == 'InService':
triple_has_active_instances = True
if not triple_has_active_instances:
no_active_instances_triples += [triple]
if no_active_instances_triples or not_matching_triples:
if not_matching_triples:
print('Fail. There are no autoscaling groups found for the following cluster(s):')
pprint(not_matching_triples)
if no_active_instances_triples:
print("Fail. There are no active instances for the following cluster(s)")
for triple in no_active_instances_triples:
print('environment: ' + triple.split('-')[0])
print('deployment: ' + triple.split('-')[1])
print('cluster: ' + triple.split('-')[2])
print('----')
sys.exit(1)
print("Success. ASG's with active instances found for all of the cluster triples.")
sys.exit(0)
if __name__=="__main__":
parser = argparse.ArgumentParser()
parser.add_argument('-f', '--file', help='Yaml file of env/deployment/cluster triples that we want to find active instances for', required=True)
parser.add_argument('-r', '--region', help="Region that we want to find ASGs and active instances in", default='us-east-1')
args = parser.parse_args()
find_active_instances(args.file, args.region)
Course Permutation Tool for Developer Environments
##################################################
This is a tool to add default courses to developer environments, specifically
devstack and sandboxes. The goal is for developers to have access to courses whose
metadata matches production, and to provide a way to generate course
permutations. It will consist of a permutations JSON file, which
includes permutation options and default values, and a Python script that will
generate the final file that gets passed into a course creation script (a
hypothetical sketch of such a script follows the JSON below).
More info to come once finalized.
{
"permutation_data": {
"start": [
"past",
"future",
null
],
"end": [
"past",
"future",
null
],
"seats": [
[
"audit"
],
[
"verified"
],
[
"audit",
"verified"
],
[],
null
],
"display_name": [
"International Project Management",
"Cybersecurity Fundamentals",
"",
null
],
"mobile_available": [
true,
false,
null
]
},
"default_data": {
"start": "past",
"end": "future",
"seats": [
{
"type": [
"audit",
"verified"
],
"upgrade_deadline": "future"
}
],
"display_name": "International Project Management",
"mobile_available": true
}
}
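A minimal sketch of how the generator script mentioned in the README above
might consume this file. This is a hypothetical illustration only: the real
script is not part of this change, and the filename and the treatment of
null as "keep the default" are assumptions.

# permutation_sketch.py -- hypothetical illustration, not the real generator
import itertools
import json

with open('course_permutations.json') as f:  # filename is an assumption
    config = json.load(f)

fields = sorted(config['permutation_data'])
options = [config['permutation_data'][field] for field in fields]

courses = []
for combo in itertools.product(*options):
    course = dict(config['default_data'])  # start from the default values
    for field, value in zip(fields, combo):
        if value is not None:  # assume null means "keep the default"
            course[field] = value
    courses.append(course)

# 3 starts x 3 ends x 5 seat options x 4 names x 3 mobile flags = 540 permutations
print(json.dumps(courses, indent=2))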
#!/usr/bin/env bash
set -euo pipefail
#
# Thin wrapper around logstash. You will first have to install logstash. Simply
# downloading the tar.gz from their site is sufficient. Note that logstash may have
# different JVM version requirements than what is available on your machine.
#
# https://www.elastic.co/products/logstash
#
# Assumes that logstash is in your path.
#
# Copies an index from an elasticsearch source server to a target server.
# The target server can be the same as the source.
#
# Usage:
# copy-index.sh SOURCE_SERVER SOURCE_INDEX TARGET_SERVER TARGET_INDEX [WORKERS]
#
# Example:
# ./copy-index.sh http://localhost source_index http://localhost target_index
#
SOURCE_SERVER=$1
SOURCE_INDEX=$2
TARGET_SERVER=$3
TARGET_INDEX=$4
WORKERS="${5:-6}"
read -d '' filter <<EOF || true #read won't find its delimiter and exit with status 1, this is intentional
input {
elasticsearch {
hosts => "$SOURCE_SERVER"
index => "$SOURCE_INDEX" #content for forums
scroll => "12h" #must be as long as the run takes to complete
scan => true #scan through all indexes efficiently
docinfo => true #necessary to move document_type and document_id over
}
}
output {
elasticsearch {
hosts => "$TARGET_SERVER"
index => "$TARGET_INDEX" #same as above
manage_template => false
document_type => "%{[@metadata][_type]}"
document_id => "%{[@metadata][_id]}"
}
stdout {
codec => "dots" #Print a dot when stuff gets moved so we know it's working
}
}
filter {
mutate {
remove_field => ["@timestamp", "@version"] #these fields get added by logstash for some reason
}
}
EOF
logstash -w "$WORKERS" -e "$filter"
#!/usr/bin/env bash
set -euo pipefail
#
# Thin wrapper around rake search:catchup for cs_comment_service (forums).
#
# Reindexes documents created since WINDOW ago.
# If SLEEP_TIME is set to any number greater than 0, loops indefinitely. Since re-
# indexing can only yield correct results, the only risk of setting WINDOW too large
# is poor performance.
#
# Usage:
# source ../forum_env; ./incremental-reindex.sh INDEX [WINDOW] [SLEEP_TIME] [BATCH_SIZE]
#
# Args:
# INDEX The index to re-index
# WINDOW Number of minutes ago to re-index from
# SLEEP_TIME Number of seconds to sleep between re-indexing
# BATCH_SIZE Number of documents to index per batch
#
# Example:
# ./incremental-reindex.sh content 30
#
INDEX="$1"
WINDOW="${2:-5}"
SLEEP_TIME="${3:-60}"
BATCH_SIZE="${4:-500}"
if [ "$SLEEP_TIME" -ge "$((WINDOW * 60))" ]; then
echo 'ERROR: SLEEP_TIME must not be longer than WINDOW, or else documents may be missed.'
exit 1
fi
while : ; do
echo "reindexing documents newer than $WINDOW minutes..."
rake search:catchup["$WINDOW","$INDEX","$BATCH_SIZE"]
echo "done. Sleeping $SLEEP_TIME seconds..."
sleep "$SLEEP_TIME"
[ "$SLEEP_TIME" -le 0 ] && break
done
deepdiff==3.1.0
elasticsearch==0.4.5
# -*- coding: utf-8 -*-
"""
Verifies that an index was correctly copied from one ES host to another.
"""
import itertools
import pprint
import random
from deepdiff import DeepDiff
from elasticsearch import Elasticsearch
from elasticsearch.helpers import scan
from argparse import ArgumentParser
description = """
Compare two Elasticsearch indices
"""
SCAN_ITER_STEP = 50
SCAN_MATCH_THRESHOLD = .9
RANDOM_CHECK_SIZE = 10
RANDOM_CHECKS_BEFORE_RESET = 100
def parse_args():
"""
Parse the arguments for the script.
"""
parser = ArgumentParser(description=description)
parser.add_argument(
'-o', '--old', dest='old', required=True, nargs=2,
help='Hostname and index of old ES host, e.g. https://localhost:9200 content'
)
parser.add_argument(
'-n', '--new', dest='new', required=True, nargs=2,
help='Hostname and index of new ES host, e.g. https://localhost:9200 content'
)
parser.add_argument(
'-s', '--scan', dest='scan', action="store_true",
help='Run a full scan comparison instead of a random selection.'
)
parser.add_argument(
'-c', '--check-percentage', dest='check_percentage', type=float, default=.1,
help='Percentage of randomly found docs to check between old and new indices (default: .1)'
)
return parser.parse_args()
def grouper(iterable, n):
"""
Collect data into fixed-length chunks or blocks
from the itertools recipes list: https://docs.python.org/3/library/itertools.html#recipes
"""
# grouper('ABCDEFG', 3) --> ABC DEF GNN (izip_longest pads the final chunk with None)
args = [iter(iterable)] * n
return itertools.izip_longest(*args)
def docs_match(old_doc, new_doc):
"""
Return True if the docs match, ignoring the ignorable fields
Args:
old_doc: a dict of an elasticsearch doc from the old cluster
new_doc: a dict of an elasticsearch doc from the new cluster
"""
"""
example doc:
{'dictionary_item_added': {
"root['_source']['_id']",
"root['_source']['abuse_flaggers']",
"root['_source']['anonymous']",
"root['_source']['anonymous_to_peers']",
"root['_source']['at_position_list']",
"root['_source']['author_username']",
"root['_source']['closed']",
"root['_source']['comment_count']",
"root['_source']['historical_abuse_flaggers']",
"root['_source']['pinned']",
"root['_source']['thread_type']",
"root['_source']['visible']",
"root['_source']['votes']",
"root['found']"},
'dictionary_item_removed': {
"root['_source']['id']",
"root['_source']['thread_id']",
"root['_source']['votes_point']",
"root['exists']"},
'values_changed': {
"root['_index']": {
'new_value': u'content_20170324145539907',
'old_value': u'content_20151207225034'},
"root['_source']['body']": {
'new_value': u'encryption neglect hypothesize polluters wining pitiably prophetess apostrophe foretelling assignments diaphragms trustees scroll scruffs shrivels characterizes digraph lasted sharked rewind chamoix charier protoplasm rapports isolated upbraid mortgaged cuddled indefinitely sinful insaner slenderized cemetery deject soundly preventable',
'old_value': u'embellishing orbitals complying alternation welching sepulchered grate blench placenta landslide dependance hurdle predicted chaplet earsplitting assess awol necrosis freeways skipper delicatessen sponsorship bellboys antiseptics gabardines admittedly screechier professional roughness educations nutting valences iridescence deductions'},
"root['_source']['title']": {
'new_value': u'southpaw afterward playgoers roughed requites arrived byplay ninetieth textural rental foreclosing',
   'old_value': u'guttersnipes corduroys ghostly discourtesies'},
"root['_source']['updated_at']": {
'new_value': u'2017-03-29T18:51:19Z',
   'old_value': u'2017-03-28T12:58:02Z'},
"root['_version']": {
'new_value': 20,
'old_value': 1}}}
"""
ignorable_fields = [
"root['exists']",
"root['found']",
"root['_index']",
"root['updated_at']",
"root['_version']",
"root['_score']",
]
diff_types = ['dictionary_item_added', 'dictionary_item_removed', 'values_changed']
diff_doc = DeepDiff(old_doc, new_doc)
if 'values_changed' not in diff_doc:
diff_doc['values_changed'] = set()
#if this fails something is horribly wrong
if set(diff_doc.keys()) != set(diff_types):
print 'ERROR: expected to be diffing dictionaries, got something else! id: {}'.format(
new_doc['_id'])
for diff_type in diff_types:
for field in ignorable_fields:
if diff_type in diff_doc:
#values_changed is a set, the other two are dicts
if isinstance(diff_doc[diff_type], set):
diff_doc[diff_type].discard(field)
else:
diff_doc[diff_type].pop(field, None)
return all(len(diff_doc[diff_type]) == 0 for diff_type in diff_types)
def find_matching_ids(es, index, ids, docs):
"""
Finds out how many of the ids in the given ids are in the given index in the given
ES deployment.
We also compare documents to ensure that those still match, skipping a few fields
(see docs_match() for which ones).
Args:
es - Elasticsearch instance corresponding to the cluster we want to check
index - name of the index that we want to check
ids - a list of dictionaries of the form {'_id': <id>} of the ids we want to check.
docs - a dictionary of the form {'<id>': document}, where "document"s are full ES docs
"""
body = {'docs': ids}
search_result = es.mget(index=index, body=body)
matching = 0
for elt in search_result['docs']:
# Checks whether or not there was a document matching the id at all.
# 'exists' is 0.9.x
# 'found' is 1.5.x
if elt.get('exists', False) or elt.get('found', False):
if docs_match(docs[elt['_id']], elt):
matching += 1
else:
print 'FAILURE: Documents with id {id} do not match: '.format(
id=elt['_id']
) + repr({'diff': DeepDiff(docs[elt['_id']], elt), 'new': elt, 'old': docs[elt['_id']]})
else:
print 'ERROR: Document with id {id} missing: {doc}'.format(
id=elt['_id'], doc=docs[elt['_id']]
)
return matching
def scan_documents(old_es, new_es, old_index, new_index):
"""
Scan for matching documents
In order to match the two indices without having to deal with ordering issues,
we pull a set of documents from the old ES index, and then try to find matching
documents with the same _id in the new ES index. This process is batched to avoid
making individual network calls to the new ES index.
"""
matching = 0
total = 0
old_iter = scan(old_es, index=old_index)
for old_elts in grouper(old_iter, SCAN_ITER_STEP):
old_elt_ids = []
old_elt_docs = {}
for elt in old_elts:
if elt is not None:
old_elt_ids.append({'_id': elt['_id']})
old_elt_docs[elt['_id']] = elt
matching += find_matching_ids(new_es, new_index, old_elt_ids, old_elt_docs)
total += len(old_elt_ids)
if total % 100 == 0:
print 'processed {} items'.format(total)
ratio = float(matching)/total
print "{}: scanned documents matching ({} out of {}, {:.6}%)".format(
'OK' if ratio > SCAN_MATCH_THRESHOLD else 'FAILURE', matching, total, ratio * 100
)
def random_checks(old_es, new_es, old_index, new_index, total_document_count, check_percentage):
"""
Check random documents
This randomly samples documents to spot-check whether
data was moved over correctly. Runs a lot faster than the full scan.
"""
total = 0
matching = 0
current_offset = -1
while float(total) / total_document_count < check_percentage:
# We only want to page a certain amount before regenerating a new set of
# random documents.
if current_offset > RANDOM_CHECKS_BEFORE_RESET or current_offset < 0:
seed = random.randint(0, 1000)
current_offset = 0
body = {
'size': RANDOM_CHECK_SIZE,
'from': current_offset,
'query': {
'function_score': {
'functions': [{
'random_score': {
'seed': seed
}
}]
}
}
}
results = old_es.search(
index=old_index, body=body
)
ids = []
docs = {}
for elt in results['hits']['hits']:
ids.append({'_id': elt['_id']})
docs[elt['_id']] = elt
matching += find_matching_ids(new_es, new_index, ids, docs)
num_elts = len(ids)
total += num_elts
current_offset += num_elts
if total % 100 == 0:
print 'processed {} items'.format(total)
ratio = float(matching) / total
print "{}: random documents matching ({} out of {}, {}%)".format(
'OK' if ratio > SCAN_MATCH_THRESHOLD else 'FAILURE', matching, total, int(ratio * 100)
)
def check_mappings(old_mapping, new_mapping):
"""
Verify that the two mappings match in terms of keys and properties
Args:
- old_mapping (dict) - the mappings from the older ES
- new_mapping(dict) - the mappings from the newer ES
"""
deep_diff = DeepDiff(old_mapping, new_mapping)
if deep_diff != {}:
print "FAILURE: Index mappings do not match"
pprint.pprint(deep_diff)
else:
print "OK: Index mappings match"
def main():
"""
Run the verification.
"""
args = parse_args()
old_es = Elasticsearch([args.old[0]])
new_es = Elasticsearch([args.new[0]])
old_index = args.old[1]
new_index = args.new[1]
old_stats = old_es.indices.stats(index=old_index)['indices'].values()[0]['primaries']
new_stats = new_es.indices.stats(index=new_index)['indices'].values()[0]['primaries']
#compare document count
old_count = old_stats['docs']['count']
new_count = new_stats['docs']['count']
print "{}: Document count ({} = {})".format(
'OK' if old_count == new_count else 'FAILURE', old_count, new_count
)
old_size = old_stats['store']['size_in_bytes']
new_size = new_stats['store']['size_in_bytes']
print "{}: Index size ({} = {})".format(
'OK' if old_count == new_count else 'FAILURE', old_size, new_size
)
def get_mappings(es, index):
# for 1.5.x, there is an extra 'mappings' field that holds the mappings.
mappings = es.indices.get_mapping(index=index).values()[0]
new_style = mappings.get('mappings', None)
return new_style if new_style is not None else mappings
# Verify that the mappings match between old and new
old_mapping = get_mappings(old_es, old_index)
new_mapping = get_mappings(new_es, new_index)
check_mappings(old_mapping, new_mapping)
if args.scan:
scan_documents(old_es, new_es, old_index, new_index)
else:
random_checks(old_es, new_es, old_index, new_index, new_count, args.check_percentage)
"""
index.stats()
elasticsearch.scroll()
use without scan during downtime
elasticsearch.helpers.scan is an iterator (whew)
sample first, then full validation
is old subset of new?
is number of edits small?
no numeric ids
can use random scoring?
{"size": 1, "query": {"function_score": {"functions":[{"random_score": {"seed": 123456}}]}}}
use that with scroll and check some number
can't use scroll with sorting. Maybe just keep changing the seed?
It's kinda slow, but probably fine
get `size` at a time
are random sorts going to get the same docs on both clusters?
Alternative: random score with score cutoff? Or script field and search/cutoff
Might also be able to use track_scores with scan&scroll on 1.5 and a score cutoff
"""
if __name__ == '__main__':
main()
......@@ -47,7 +47,6 @@ VIRTUAL_ENV="/tmp/bootstrap"
PYTHON_BIN="${VIRTUAL_ENV}/bin"
ANSIBLE_DIR="/tmp/ansible"
CONFIGURATION_DIR="/tmp/configuration"
EDX_PPA="deb http://ppa.edx.org precise main"
EDX_PPA_KEY_SERVER="keyserver.ubuntu.com"
EDX_PPA_KEY_ID="B41E5E3969464050"
......@@ -70,10 +69,7 @@ if [[ $(id -u) -ne 0 ]] ;then
exit 1;
fi
if grep -q 'Precise Pangolin' /etc/os-release
then
SHORT_DIST="precise"
elif grep -q 'Trusty Tahr' /etc/os-release
if grep -q 'Trusty Tahr' /etc/os-release
then
SHORT_DIST="trusty"
elif grep -q 'Xenial Xerus' /etc/os-release
......@@ -82,7 +78,7 @@ then
else
cat << EOF
This script is only known to work on Ubuntu Precise, Trusty and Xenial,
This script is only known to work on Ubuntu Trusty and Xenial,
exiting. If you are interested in helping make installation possible
on other platforms, let us know.
......
#!/bin/bash
##
## Installs the pre-requisites for running edX on a single Ubuntu 12.04
## Installs the pre-requisites for running edX on a single Ubuntu 16.04
## instance. This script is provided as a convenience and any of these
## steps could be executed manually.
##
......@@ -54,7 +54,6 @@ VERSION_VARS=(
ECOMMERCE_WORKER_VERSION
)
EXTRA_VARS="-e SANDBOX_ENABLE_ECOMMERCE=True $EXTRA_VARS"
for var in ${VERSION_VARS[@]}; do
# Each variable can be overridden by a similarly-named environment variable,
# or OPENEDX_RELEASE, if provided.
......@@ -90,4 +89,4 @@ sudo -H pip install -r requirements.txt
##
## Run the edx_sandbox.yml playbook in the configuration/playbooks directory
##
cd /var/tmp/configuration/playbooks && sudo -E ansible-playbook -c local ./edx_sandbox.yml -i "localhost," $EXTRA_VARS
cd /var/tmp/configuration/playbooks && sudo -E ansible-playbook -c local ./edx_sandbox.yml -i "localhost," $EXTRA_VARS "$@"
......@@ -127,9 +127,9 @@ fi
if [[ -z $ami ]]; then
if [[ $server_type == "full_edx_installation" ]]; then
ami="ami-dca185ca"
ami="ami-dd9d81a6"
elif [[ $server_type == "ubuntu_16.04" || $server_type == "full_edx_installation_from_scratch" ]]; then
ami="ami-20631a36"
ami="ami-1d4e7a66"
fi
fi
......@@ -137,6 +137,10 @@ if [[ -z $instance_type ]]; then
instance_type="t2.large"
fi
if [[ -z $instance_initiated_shutdown_behavior ]]; then
instance_initiated_shutdown_behavior="terminate"
fi
if [[ -z $enable_newrelic ]]; then
enable_newrelic="false"
fi
......@@ -157,6 +161,10 @@ if [[ -z $edx_demo_course ]]; then
edx_demo_course="false"
fi
if [[ -z $enable_automatic_auth_for_testing ]]; then
enable_automatic_auth_for_testing="false"
fi
if [[ -z $enable_client_profiling ]]; then
enable_client_profiling="false"
fi
......@@ -183,7 +191,6 @@ edx_ansible_source_repo: ${configuration_source_repo}
edx_platform_repo: ${edx_platform_repo}
EDXAPP_PLATFORM_NAME: $sandbox_platform_name
EDXAPP_COMPREHENSIVE_THEME_DIRS: $edxapp_comprehensive_theme_dirs
EDXAPP_STATIC_URL_BASE: $static_url_base
EDXAPP_LMS_NGINX_PORT: 80
......@@ -265,10 +272,12 @@ COMMON_USER_INFO:
USER_CMD_PROMPT: '[$name_tag] '
COMMON_ENABLE_NEWRELIC_APP: $enable_newrelic
COMMON_ENABLE_DATADOG: $enable_datadog
COMMON_OAUTH_BASE_URL: "https://${deploy_host}"
FORUM_NEW_RELIC_ENABLE: $enable_newrelic
ENABLE_PERFORMANCE_COURSE: $performance_course
ENABLE_DEMO_TEST_COURSE: $demo_test_course
ENABLE_EDX_DEMO_COURSE: $edx_demo_course
EDXAPP_ENABLE_AUTO_AUTH: $enable_automatic_auth_for_testing
EDXAPP_NEWRELIC_LMS_APPNAME: sandbox-${dns_name}-edxapp-lms
EDXAPP_NEWRELIC_CMS_APPNAME: sandbox-${dns_name}-edxapp-cms
EDXAPP_NEWRELIC_WORKERS_APPNAME: sandbox-${dns_name}-edxapp-workers
......@@ -306,6 +315,7 @@ security_group: $security_group
ami: $ami
region: $region
zone: $zone
instance_initiated_shutdown_behavior: $instance_initiated_shutdown_behavior
instance_tags:
environment: $environment
github_username: $github_username
......
......@@ -88,14 +88,6 @@ if [[ ! -z "$configurationprivaterepo" ]]; then
fi
fi
configurationinternal_params=""
if [[ ! -z "$configurationinternalrepo" ]]; then
configurationinternal_params="--configuration-internal-repo $configurationinternalrepo"
if [[ ! -z "$configurationinternalversion" ]]; then
configurationinternal_params="$configurationinternal_params --configuration-internal-version $configurationinternalversion"
fi
fi
hipchat_params=""
if [[ ! -z "$hipchat_room_id" ]] && [[ ! -z "$hipchat_api_token" ]]; then
hipchat_params="--hipchat-room-id $hipchat_room_id --hipchat-api-token $hipchat_api_token"
......@@ -135,4 +127,16 @@ cd util/vpc-tools/
echo "$vars" > /var/tmp/$BUILD_ID-extra-vars.yml
cat /var/tmp/$BUILD_ID-extra-vars.yml
configuration_internal_var="configuration_internal_version"
configurationinternalversion=$(grep "$configuration_internal_var" "/var/tmp/$BUILD_ID-extra-vars.yml" | awk -F: '{print $2}')
configurationinternal_params=""
if [[ ! -z "$configurationinternalrepo" ]]; then
configurationinternal_params="--configuration-internal-repo $configurationinternalrepo"
if [[ ! -z "$configurationinternalversion" ]]; then
configurationinternal_params="$configurationinternal_params --configuration-internal-version $configurationinternalversion"
fi
fi
python -u abbey.py -p $play -t m3.large -d $deployment -e $environment $base_params $blessed_params $playbookdir_params --vars /var/tmp/$BUILD_ID-extra-vars.yml -c $BUILD_NUMBER --configuration-version $configuration --configuration-secure-version $configuration_secure -k $jenkins_admin_ec2_key --configuration-secure-repo $jenkins_admin_configuration_secure_repo $configurationprivate_params $configurationinternal_params $hipchat_params $cleanup_params $notification_params $datadog_params $region_params $identity_params
#!/usr/bin/python3
# This script is used by the monitoring/check-seslimits Jenkins job
import boto3
import argparse
import sys
# Copied from https://stackoverflow.com/a/41153081
class ExtendAction(argparse.Action):
def __call__(self, parser, namespace, values, option_string=None):
items = getattr(namespace, self.dest) or []
items.extend(values)
setattr(namespace, self.dest, items)
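# With action=ExtendAction, repeated -r/--region flags accumulate into a single
# list instead of the last one winning; e.g. (hypothetical invocation)
#   check-seslimits.py -c 80 -r us-east-1 us-west-2 -r eu-west-1
# yields regions == ['us-east-1', 'us-west-2', 'eu-west-1'].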
if __name__ == "__main__":
parser = argparse.ArgumentParser()
parser.add_argument('-c', '--critical', required=True, type=float,
help="Critical threshold in percentage")
parser.add_argument('-w', '--warning', required=False, type=float,
help="Warning threshold in percentage (Optional)")
parser.add_argument('-r', '--region', dest='regions', nargs='+',
action=ExtendAction, required=True,
help="AWS regions to check")
args = parser.parse_args()
if args.warning and args.warning >= args.critical:
warn_str = "Warning threshold ({})".format(args.warning)
crit_str = "Critical threshold ({})".format(args.critical)
print("ERROR: {} >= {}".format(warn_str, crit_str))
sys.exit(1)
exit_code = 0
session = boto3.session.Session()
for region in args.regions:
ses = session.client('ses', region_name=region)
data = ses.get_send_quota()
limit = data["Max24HourSend"]
current = data["SentLast24Hours"]
percent = current / limit * 100  # thresholds are given in percent
level = None
if percent >= args.critical:
level = "CRITICAL"
elif args.warning and percent >= args.warning:
level = "WARNING"
if level:
print("{} {}/{} ({}%) - {}".format(region, current, limit, percent,
level))
exit_code += 1
sys.exit(exit_code)
"""
CloudFlare API
https://api.cloudflare.com/#zone-analytics-dashboard
"""
import requests
import argparse
import sys
CLOUDFLARE_API_ENDPOINT = "https://api.cloudflare.com/client/v4/"
def calculate_cache_hit_rate(zone_id, auth_key, email, threshold):
HEADERS = {"Accept": "application/json",
"X-Auth-Key": auth_key,
"X-Auth-Email": email}
# "since" is expressed in minutes: -59 covers the past hour. We can go
# beyond that as well; for example, for the last 15 hours it would be -899.
PARAMS = {"since": "-59", "continuous": "true"}
res = requests.get(CLOUDFLARE_API_ENDPOINT + "zones/" + zone_id
+ "/analytics/dashboard", headers=HEADERS,
params=PARAMS)
try:
data = res.json()
all_req = float(data["result"]["timeseries"][0]["requests"]["all"])
cached_req = float(data["result"]["timeseries"][0]["requests"]["cached"])
current_cache_hit_rate = cached_req / all_req * 100
if current_cache_hit_rate < threshold:
sys.exit(1)
except Exception as error:
print("JSON Error: {} \n Content returned from API call: {}".format(error, res.text))
if __name__ == "__main__":
parser = argparse.ArgumentParser()
parser.add_argument('-z', '--zone', required=True,
help="Cloudflare's Zone ID")
parser.add_argument('-k', '--auth_key', required=True,
help="Authentication Key")
parser.add_argument('-e', '--email', required=True,
help="email to use for authentication for CloudFlare API")
parser.add_argument('-t', '--threshold', required=True, type=float,
help="Cache hit rate threshold (in percent) below which the script exits non-zero")
args = parser.parse_args()
calculate_cache_hit_rate(args.zone, args.auth_key, args.email, args.threshold)
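# Example invocation (script name and values are placeholders); the script
# exits 1 when the cache hit rate over the past hour is below the threshold:
#   python cloudflare_cache_hit_rate.py -z $ZONE_ID -k $AUTH_KEY -e ops@example.com -t 80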
# Needed for CloudFlare cache hit rate job
requests==2.9.1
......@@ -7,6 +7,7 @@
"venv_dir": "/edx/app/edx_ansible/venvs/edx_ansible",
"ami": "{{env `JENKINS_WORKER_AMI`}}",
"test_platform_version": "{{env `TEST_PLATFORM_VERSION`}}",
"security_group": "{{env `AWS_SECURITY_GROUP`}}",
"delete_or_keep": "{{env `DELETE_OR_KEEP_AMI`}}",
"remote_branch": "{{env `REMOTE_BRANCH`}}"
},
......@@ -21,7 +22,7 @@
"ssh_username": "ubuntu",
"ami_description": "jenkins worker",
"iam_instance_profile": "jenkins-worker",
"security_group_id": "sg-75af5e18",
"security_group_id": "{{user `security_group`}}",
"tags": {
"delete_or_keep": "{{user `delete_or_keep`}}"
}
......
......@@ -5,6 +5,7 @@
"playbook_remote_dir": "/tmp/packer-edx-playbooks",
"venv_dir": "/edx/app/edx_ansible/venvs/edx_ansible",
"ami": "{{env `JENKINS_WORKER_AMI`}}",
"security_group": "{{env `AWS_SECURITY_GROUP`}}",
"delete_or_keep": "{{env `DELETE_OR_KEEP_AMI`}}",
"remote_branch": "{{env `REMOTE_BRANCH`}}"
},
......@@ -19,7 +20,7 @@
"ssh_username": "ubuntu",
"ami_description": "jenkins worker android",
"iam_instance_profile": "jenkins-worker",
"security_group_id": "sg-75af5e18",
"security_group_id": "{{user `security_group`}}",
"tags": {
"delete_or_keep": "{{user `delete_or_keep`}}"
}
......
......@@ -6,6 +6,7 @@
"playbook_remote_dir": "/tmp/packer-edx-playbooks",
"venv_dir": "/edx/app/edx_ansible/venvs/edx_ansible",
"ami": "{{env `JENKINS_WORKER_AMI`}}",
"security_group": "{{env `AWS_SECURITY_GROUP`}}",
"delete_or_keep": "{{env `DELETE_OR_KEEP_AMI`}}",
"remote_branch": "{{env `REMOTE_BRANCH`}}"
},
......@@ -20,7 +21,7 @@
"ssh_username": "ubuntu",
"ami_description": "jenkins worker loadtest driver",
"iam_instance_profile": "jenkins-worker",
"security_group_id": "sg-75af5e18",
"security_group_id": "{{user `security_group`}}",
"tags": {
"delete_or_keep": "{{user `delete_or_keep`}}"
}
......
......@@ -2,10 +2,12 @@
"variables": {
"aws_access_key": "{{env `AWS_ACCESS_KEY_ID`}}",
"aws_secret_key": "{{env `AWS_SECRET_ACCESS_KEY`}}",
"new_relic_key": "{{env `NEW_RELIC_KEY`}}",
"playbook_remote_dir": "/tmp/packer-edx-playbooks",
"venv_dir": "/edx/app/edx_ansible/venvs/edx_ansible",
"ami": "{{env `JENKINS_WORKER_AMI`}}",
"test_platform_version": "{{env `TEST_PLATFORM_VERSION`}}",
"security_group": "{{env `AWS_SECURITY_GROUP`}}",
"delete_or_keep": "{{env `DELETE_OR_KEEP_AMI`}}",
"remote_branch": "{{env `REMOTE_BRANCH`}}"
},
......@@ -20,7 +22,7 @@
"ssh_username": "ubuntu",
"ami_description": "jenkins worker",
"iam_instance_profile": "jenkins-worker",
"security_group_id": "sg-75af5e18",
"security_group_id": "{{user `security_group`}}",
"tags": {
"delete_or_keep": "{{user `delete_or_keep`}}"
}
......@@ -50,6 +52,7 @@
"command": ". {{user `venv_dir`}}/bin/activate && ansible-playbook",
"inventory_groups": "jenkins_worker",
"extra_arguments": [
"-e \"NEWRELIC_LICENSE_KEY={{user `new_relic_key`}}\"",
"-vvv"
]
}]
......
......@@ -5,6 +5,7 @@
"playbook_remote_dir": "/tmp/packer-edx-playbooks",
"venv_dir": "/edx/app/edx_ansible/venvs/edx_ansible",
"ami": "{{env `JENKINS_WORKER_AMI`}}",
"security_group": "{{env `AWS_SECURITY_GROUP`}}",
"delete_or_keep": "{{env `DELETE_OR_KEEP_AMI`}}",
"remote_branch": "{{env `REMOTE_BRANCH`}}"
},
......@@ -19,7 +20,7 @@
"ssh_username": "ubuntu",
"ami_description": "jenkins worker sitespeedio",
"iam_instance_profile": "jenkins-worker",
"security_group_id": "sg-75af5e18",
"security_group_id": "{{user `security_group`}}",
"tags": {
"delete_or_keep": "{{user `delete_or_keep`}}"
}
......
......@@ -2,6 +2,7 @@
"variables": {
"aws_access_key": "{{env `AWS_ACCESS_KEY_ID`}}",
"aws_secret_key": "{{env `AWS_SECRET_ACCESS_KEY`}}",
"security_group": "{{env `AWS_SECURITY_GROUP`}}",
"ami": "{{env `WEBPAGETEST_BASE_AMI`}}"
},
"builders": [{
......@@ -15,7 +16,7 @@
"ssh_username": "ubuntu",
"ami_description": "webpagetest",
"iam_instance_profile": "jenkins-worker",
"security_group_id": "sg-75af5e18"
"security_group_id": "{{user `security_group`}}"
}],
"provisioners": [{
"type": "shell",
......
......@@ -9,9 +9,11 @@ import argparse
TRAVIS_BUILD_DIR = os.environ.get("TRAVIS_BUILD_DIR")
DOCKER_PATH_ROOT = pathlib2.Path(TRAVIS_BUILD_DIR, "docker", "build")
DOCKER_PLAYS_PATH = pathlib2.Path(TRAVIS_BUILD_DIR, "docker", "plays")
CONFIG_FILE_PATH = pathlib2.Path(TRAVIS_BUILD_DIR, "util", "parsefiles_config.yml")
LOGGER = logging.getLogger(__name__)
def build_graph(git_dir, roles_dirs, aws_play_dirs, docker_play_dirs):
"""
Builds a dependency graph that shows relationships between roles and playbooks.
......@@ -149,6 +151,7 @@ def _open_yaml_file(file_str):
LOGGER.error("error in configuration file: %s" % str(exc))
sys.exit(1)
def change_set_to_roles(files, git_dir, roles_dirs, playbooks_dirs, graph):
"""
Converts change set consisting of a number of files to the roles that they represent/contain.
......@@ -181,8 +184,9 @@ def change_set_to_roles(files, git_dir, roles_dirs, playbooks_dirs, graph):
items.add(_get_role_name_from_file(file_path))
return items
def get_plays(files, git_dir, playbooks_dirs):
"""
"""
Determines which files in the change set are aws playbooks
files: A list of files modified by a commit range.
......@@ -210,7 +214,8 @@ def get_plays(files, git_dir, playbooks_dirs):
plays.add(_get_playbook_name_from_file(file_path))
return plays
def _get_playbook_name_from_file(path):
"""
Gets name of playbook from the filepath, which is the last part of the filepath.
......@@ -220,7 +225,7 @@ def _get_playbook_name_from_file(path):
"""
# get last part of filepath
return path.stem
def _get_role_name_from_file(path):
"""
......@@ -235,6 +240,7 @@ def _get_role_name_from_file(path):
# name of role is the next part of the file path after "roles"
return dirs[dirs.index("roles")+1]
def get_dependencies(roles, graph):
"""
Determines all roles dependent on set of roles and returns set containing both.
......@@ -257,6 +263,7 @@ def get_dependencies(roles, graph):
return items
def get_docker_plays(roles, graph):
"""Gets all docker plays that contain at least role in common with roles."""
......@@ -291,6 +298,7 @@ def get_docker_plays(roles, graph):
return items
def filter_docker_plays(plays, repo_path):
"""Filters out docker plays that do not have a Dockerfile."""
......@@ -306,6 +314,7 @@ def filter_docker_plays(plays, repo_path):
return items
def _get_role_name(role):
"""
Resolves a role name from either a simple declaration or a dictionary style declaration.
......@@ -330,6 +339,66 @@ def _get_role_name(role):
LOGGER.warning("role %s could not be resolved to a role name." % role)
return None
def _get_modified_dockerfiles(files, git_dir):
"""
Return the names of Docker images whose files under the docker/build directory changed
:param files: list of changed file paths from the commit range, relative to the repo root
:param git_dir: path to the root of the git repository
:return: set of image names whose Dockerfile or build context changed
"""
items = set()
candidate_files = {f for f in DOCKER_PATH_ROOT.glob("**/*")}
for f in files:
file_path = pathlib2.Path(git_dir, f)
if file_path in candidate_files:
play = _get_play_name(file_path)
if play is not None:
items.add(play)
return items
def get_modified_dockerfiles_plays(files, git_dir):
"""
Return the names of plays whose files under the docker/plays directory changed
:param files: list of changed file paths from the commit range, relative to the repo root
:param git_dir: path to the root of the git repository
:return: set of play names for changed playbooks under docker/plays
"""
items = set()
candidate_files = {f for f in DOCKER_PLAYS_PATH.glob("*.yml")}
for f in files:
file_path = pathlib2.Path(git_dir, f)
if file_path in candidate_files:
items.add(_get_playbook_name_from_file(file_path))
return items
def _get_play_name(path):
"""
Gets name of play from the filepath, which is the token
after either "docker/build" in the file path.
Input:
path: A path to the changed file under docker/build dir
"""
# attempt to extract Docker image name from file path; splits the path of a file over
# "docker/build/", because the first token after "docker/build/" is the image name
suffix = (str(path)).split(str(os.path.join('docker', 'build', '')))
# if file path contains "docker/build/"
if len(suffix) > 1:
# split suffix over separators to file path components separately
suffix_parts = suffix[1].split(os.sep)
# first token will be image name; <repo>/docker/build/<image>/...
return suffix_parts[0]
return None
def arg_parse():
parser = argparse.ArgumentParser(description = 'Given a commit range, analyze Ansible dependencies between roles and playbooks '
......@@ -387,5 +456,12 @@ if __name__ == '__main__':
# filter out docker plays without a Dockerfile
docker_plays = filter_docker_plays(docker_plays, TRAVIS_BUILD_DIR)
# prints Docker plays
print " ".join(str(play) for play in docker_plays)
# Add playbooks to the list whose docker file has been modified
modified_docker_files = _get_modified_dockerfiles(change_set, TRAVIS_BUILD_DIR)
# Add plays to the list which got changed in docker/plays directory
docker_plays_dir = get_modified_dockerfiles_plays(change_set, TRAVIS_BUILD_DIR)
all_plays = docker_plays | modified_docker_files | docker_plays_dir
print " ".join(all_plays)
......@@ -17,7 +17,6 @@ weights:
- nginx: 1
- xqueue: 2
- trusty-common: 5
- precise-common: 4
- xenial-common: 6
- ecommerce: 6
- rabbitmq: 2
......@@ -29,3 +28,4 @@ weights:
- ecomworker: 4
- notes: 2
- notifier: 2
- mongo: 1
......@@ -146,7 +146,7 @@ def parse_args():
group = parser.add_mutually_exclusive_group()
group.add_argument('-b', '--base-ami', required=False,
help="ami to use as a base ami",
default="ami-0568456c")
default="ami-cd0f5cb6")
group.add_argument('--blessed', action='store_true',
help="Look up blessed ami for env-dep-play.",
default=False)
......@@ -330,7 +330,6 @@ fi
VIRTUAL_ENV_VERSION="15.0.2"
PIP_VERSION="8.1.2"
SETUPTOOLS_VERSION="24.0.3"
EDX_PPA="deb http://ppa.edx.org precise main"
EDX_PPA_KEY_SERVER="keyserver.ubuntu.com"
EDX_PPA_KEY_ID="B41E5E3969464050"
......@@ -353,10 +352,7 @@ if [[ $(id -u) -ne 0 ]] ;then
exit 1;
fi
if grep -q 'Precise Pangolin' /etc/os-release
then
SHORT_DIST="precise"
elif grep -q 'Trusty Tahr' /etc/os-release
if grep -q 'Trusty Tahr' /etc/os-release
then
SHORT_DIST="trusty"
elif grep -q 'Xenial Xerus' /etc/os-release
......@@ -365,7 +361,7 @@ then
else
cat << EOF
This script is only known to work on Ubuntu Precise, Trusty and Xenial,
This script is only known to work on Ubuntu Trusty and Xenial,
exiting. If you are interested in helping make installation possible
on other platforms, let us know.
......@@ -391,7 +387,7 @@ apt-get install -y software-properties-common python-software-properties
add-apt-repository -y ppa:git-core/ppa
# For older distributions we need to install a PPA for Python 2.7.10
if [[ "precise" = "$SHORT_DIST" || "trusty" = "$SHORT_DIST" ]]; then
if [[ "trusty" = "$SHORT_DIST" ]]; then
# Add python PPA
apt-key adv --keyserver "$EDX_PPA_KEY_SERVER" --recv-keys "$EDX_PPA_KEY_ID"
......@@ -427,12 +423,8 @@ pip install virtualenv=="$VIRTUAL_ENV_VERSION"
# python3 is required for certain other things
# (currently xqwatcher so it can run python2 and 3 grader code,
# but potentially more in the future). It's not available on Ubuntu 12.04,
# but in those cases we don't need it anyways.
if [[ -n "$(apt-cache search --names-only '^python3-pip$')" ]]; then
/usr/bin/apt-get update
/usr/bin/apt-get install -y python3-pip python3-dev
fi
# but potentially more in the future).
/usr/bin/apt-get install -y python3-pip python3-dev
# this is missing on 14.04 (base package on 12.04)
# we need to do this on any build, since the above apt-get
......@@ -725,6 +717,9 @@ def create_ami(instance_id, name, description):
conf_secure_tag = "{} {}".format(args.configuration_secure_repo, args.configuration_secure_version)
img.add_tag("version:configuration_secure", conf_secure_tag)
time.sleep(AWS_API_WAIT_TIME)
conf_internal_tag = "{} {}".format(args.configuration_internal_repo, args.configuration_internal_version)
img.add_tag("version:configuration_internal", conf_internal_tag)
time.sleep(AWS_API_WAIT_TIME)
img.add_tag("cache_id", args.cache_id)
time.sleep(AWS_API_WAIT_TIME)
......
......@@ -17,12 +17,9 @@ It relies on some component applying the proper tags and performing pre-retireme
"""
import argparse
import boto
import boto.ec2
import boto.sqs
import boto3
import json
import subprocess
from boto.sqs.message import RawMessage
import logging
import os
from distutils import spawn
......@@ -35,40 +32,37 @@ class LifecycleHandler:
INSTANCE_TERMINATION = 'autoscaling:EC2_INSTANCE_TERMINATING'
TEST_NOTIFICATION = 'autoscaling:TEST_NOTIFICATION'
NUM_MESSAGES = 10
WAIT_TIME_SECONDS = 10
WAIT_TIME_SECONDS = 1
VISIBILITY_TIMEOUT = 10
def __init__(self, profile, queue, hook, dry_run, bin_directory=None):
def __init__(self, region, queue, hook, dry_run, bin_directory=None):
logging.basicConfig(level=logging.INFO)
self.queue = queue
self.hook = hook
self.profile = profile
self.region = region
if bin_directory:
os.environ["PATH"] = bin_directory + os.pathsep + os.environ["PATH"]
self.aws_bin = spawn.find_executable('aws')
self.python_bin = spawn.find_executable('python')
self.region = os.environ.get('AWS_REGION','us-east-1')
self.base_cli_command ="{python_bin} {aws_bin} ".format(
python_bin=self.python_bin,
aws_bin=self.aws_bin)
if self.profile:
self.base_cli_command += "--profile {profile} ".format(profile=self.profile)
if self.region:
self.base_cli_command += "--region {region} ".format(region=self.region)
self.dry_run = dry_run
self.ec2_con = boto.ec2.connect_to_region(self.region)
self.sqs_con = boto.sqs.connect_to_region(self.region)
self.dry_run = dry_run  # use the constructor argument, not the module-level args
self.ec2_con = boto3.client('ec2',region_name=self.region)
self.sqs_con = boto3.client('sqs',region_name=self.region)
def process_lifecycle_messages(self):
queue = self.sqs_con.get_queue(self.queue)
queue_url = self.sqs_con.get_queue_url(QueueName=self.queue)['QueueUrl']
queue = boto3.resource('sqs', region_name=self.region).Queue(queue_url)
# Needed to get unencoded message for ease of processing
queue.set_message_class(RawMessage)
for sqs_message in queue.get_messages(LifecycleHandler.NUM_MESSAGES,
wait_time_seconds=LifecycleHandler.WAIT_TIME_SECONDS):
body = json.loads(sqs_message.get_body_encoded())
for sqs_message in self.sqs_con.receive_message(QueueUrl=queue_url, MaxNumberOfMessages=LifecycleHandler.NUM_MESSAGES, VisibilityTimeout=LifecycleHandler.VISIBILITY_TIMEOUT,
WaitTimeSeconds=LifecycleHandler.WAIT_TIME_SECONDS).get('Messages', []):
body = json.loads(sqs_message['Body'])
as_message = json.loads(body['Message'])
logging.info("Proccessing message {message}.".format(message=as_message))
......@@ -113,7 +107,7 @@ class LifecycleHandler:
def delete_sqs_message(self, queue, sqs_message, as_message, dry_run):
if not dry_run:
logging.info("Deleting message with body {message}".format(message=as_message))
self.sqs_con.delete_message(queue, sqs_message)
self.sqs_con.delete_message(QueueUrl=queue.url, ReceiptHandle=sqs_message['ReceiptHandle'])
else:
logging.info("Would have deleted message with body {message}".format(message=as_message))
......@@ -154,10 +148,12 @@ class LifecycleHandler:
"""
Simple boto call to get the instance based on the instance-id
"""
instances = self.ec2_con.get_only_instances([instance_id])
reservations = self.ec2_con.describe_instances(InstanceIds=[instance_id]).get('Reservations', [])
instances = []
if len(reservations) == 1:
instances = reservations[0].get('Instances', [])
if len(instances) == 1:
return self.ec2_con.get_only_instances([instance_id])[0]
return instances[0]  # avoid a redundant second describe_instances call
else:
return None
......@@ -167,9 +163,13 @@ class LifecycleHandler:
with the value 'true'
"""
instance = self.get_ec2_instance_by_id(instance_id)
tags_dict = {}
if instance:
if 'safe_to_retire' in instance.tags and instance.tags['safe_to_retire'].lower() == 'true':
tags_dict = {}
for t in instance['Tags']:
tags_dict[t['Key']] = t['Value']
if 'safe_to_retire' in tags_dict and tags_dict['safe_to_retire'].lower() == 'true':
logging.info("Instance with id {id} is safe to retire.".format(id=instance_id))
return True
else:
......@@ -184,9 +184,9 @@ class LifecycleHandler:
if __name__=="__main__":
parser = argparse.ArgumentParser()
parser.add_argument('-p', '--profile',
help='The boto profile to use '
'per line.',default=None)
parser.add_argument('-r', '--region',
help='The AWS region to use', default='us-east-1')
parser.add_argument('-b', '--bin-directory', required=False, default=None,
help='The bin directory of the virtual env '
'from which to run the AWS cli (optional)')
......@@ -201,5 +201,5 @@ if __name__=="__main__":
parser.set_defaults(dry_run=False)
args = parser.parse_args()
lh = LifecycleHandler(args.profile, args.queue, args.hook, args.dry_run, args.bin_directory)
lh = LifecycleHandler(args.region, args.queue, args.hook, args.dry_run, args.bin_directory)
lh.process_lifecycle_messages()
......@@ -66,10 +66,6 @@ Vagrant.configure(VAGRANTFILE_API_VERSION) do |config|
config.vm.provider :virtualbox do |vb|
vb.customize ["modifyvm", :id, "--memory", MEMORY.to_s]
vb.customize ["modifyvm", :id, "--cpus", CPU_COUNT.to_s]
# Allow DNS to work for Ubuntu 12.10 host
# http://askubuntu.com/questions/238040/how-do-i-fix-name-service-for-vagrant-client
vb.customize ["modifyvm", :id, "--natdnshostresolver1", "on"]
end
......
......@@ -20,8 +20,8 @@ SCRIPT
Vagrant.configure(VAGRANTFILE_API_VERSION) do |config|
config.vm.box = "precise64"
config.vm.box_url = "http://files.vagrantup.com/precise64.box"
config.vm.box = "xenial64"
config.vm.box_url = "http://files.vagrantup.com/xenial64.box"
# Turn off shared folders
#config.vm.synced_folder ".", "/vagrant", id: "vagrant-root", disabled: true
......
......@@ -26,6 +26,7 @@ VERSION_VARS = [
'NOTIFIER_VERSION',
'ECOMMERCE_VERSION',
'ECOMMERCE_WORKER_VERSION',
'DISCOVERY_VERSION',
]
Vagrant.configure(VAGRANTFILE_API_VERSION) do |config|
......@@ -51,6 +52,7 @@ Vagrant.configure(VAGRANTFILE_API_VERSION) do |config|
config.vm.network :forwarded_port, guest: 8110, host: 8110 # Insights
config.vm.network :forwarded_port, guest: 50070, host: 50070 # HDFS Admin UI
config.vm.network :forwarded_port, guest: 8088, host: 8088 # Hadoop Resource Manager
config.vm.network :forwarded_port, guest: 18381, host: 18381 # Course discovery
end
config.ssh.insert_key = true
......@@ -63,10 +65,6 @@ Vagrant.configure(VAGRANTFILE_API_VERSION) do |config|
config.vm.provider :virtualbox do |vb|
vb.customize ["modifyvm", :id, "--memory", MEMORY.to_s]
vb.customize ["modifyvm", :id, "--cpus", CPU_COUNT.to_s]
# Allow DNS to work for Ubuntu 12.10 host
# http://askubuntu.com/questions/238040/how-do-i-fix-name-service-for-vagrant-client
vb.customize ["modifyvm", :id, "--natdnshostresolver1", "on"]
end
# Make LC_ALL default to en_US.UTF-8 instead of en_US.
......
......@@ -2,7 +2,7 @@ Vagrant.require_version ">= 1.8.7"
VAGRANTFILE_API_VERSION = "2"
MEMORY = 4096
MEMORY = 6144
CPU_COUNT = 2
vm_guest_ip = "192.168.33.10"
......@@ -21,8 +21,11 @@ VERSION_VARS = [
'xqueue_version',
'demo_version',
'NOTIFIER_VERSION',
'INSIGHTS_VERSION',
'ANALYTICS_API_VERSION',
'ECOMMERCE_VERSION',
'ECOMMERCE_WORKER_VERSION',
'DISCOVERY_VERSION',
]
Vagrant.configure(VAGRANTFILE_API_VERSION) do |config|
......@@ -38,10 +41,6 @@ Vagrant.configure(VAGRANTFILE_API_VERSION) do |config|
config.vm.provider :virtualbox do |vb|
vb.customize ["modifyvm", :id, "--memory", MEMORY.to_s]
vb.customize ["modifyvm", :id, "--cpus", CPU_COUNT.to_s]
# Allow DNS to work for Ubuntu 12.10 host
# http://askubuntu.com/questions/238040/how-do-i-fix-name-service-for-vagrant-client
vb.customize ["modifyvm", :id, "--natdnshostresolver1", "on"]
end
# Make LC_ALL default to en_US.UTF-8 instead of en_US.
......
......@@ -7,8 +7,8 @@ CPU_COUNT = 2
Vagrant.configure("2") do |config|
config.vm.box = "precise64"
config.vm.box_url = "http://files.vagrantup.com/precise64.box"
config.vm.box = "xenial64"
config.vm.box_url = "http://files.vagrantup.com/xenial64.box"
config.vm.network :private_network, ip: "192.168.33.20"
config.vm.network :forwarded_port, guest: 8080, host: 8080
......@@ -18,10 +18,6 @@ Vagrant.configure("2") do |config|
# You can adjust this to the amount of CPUs your system has available
vb.customize ["modifyvm", :id, "--cpus", CPU_COUNT.to_s]
# Allow DNS to work for Ubuntu 12.10 host
# http://askubuntu.com/questions/238040/how-do-i-fix-name-service-for-vagrant-client
vb.customize ["modifyvm", :id, "--natdnshostresolver1", "on"]
end
config.vm.provision :ansible do |ansible|
......
......@@ -4,8 +4,8 @@ CPU_COUNT = 2
Vagrant.configure("2") do |config|
config.vm.box = "precise64"
config.vm.box_url = "http://files.vagrantup.com/precise64.box"
config.vm.box = "xenial64"
config.vm.box_url = "http://files.vagrantup.com/xenial64.box"
config.vm.network :private_network, ip: "192.168.33.20"
config.vm.network :forwarded_port, guest: 8080, host: 8080
......@@ -15,10 +15,6 @@ Vagrant.configure("2") do |config|
# You can adjust this to the amount of CPUs your system has available
vb.customize ["modifyvm", :id, "--cpus", CPU_COUNT.to_s]
# Allow DNS to work for Ubuntu 12.10 host
# http://askubuntu.com/questions/238040/how-do-i-fix-name-service-for-vagrant-client
vb.customize ["modifyvm", :id, "--natdnshostresolver1", "on"]
end
config.vm.provision :ansible do |ansible|
......
......@@ -159,10 +159,6 @@ Vagrant.configure(VAGRANTFILE_API_VERSION) do |config|
vb.customize ["modifyvm", :id, "--memory", MEMORY.to_s]
vb.customize ["modifyvm", :id, "--cpus", CPU_COUNT.to_s]
# Allow DNS to work for Ubuntu 12.10 host
# http://askubuntu.com/questions/238040/how-do-i-fix-name-service-for-vagrant-client
vb.customize ["modifyvm", :id, "--natdnshostresolver1", "on"]
# Virtio is faster, but the box needs to have support for it. We didn't
# have support in the boxes before Ficus.
if !(boxname.include?("dogwood") || boxname.include?("eucalyptus"))
......
......@@ -21,6 +21,7 @@ VERSION_VARS = [
'NOTIFIER_VERSION',
'ECOMMERCE_VERSION',
'ECOMMERCE_WORKER_VERSION',
'DISCOVERY_VERSION',
]
MOUNT_DIRS = {
......@@ -28,7 +29,7 @@ MOUNT_DIRS = {
:themes => {:repo => "themes", :local => "/edx/app/edxapp/themes", :owner => "edxapp"},
:forum => {:repo => "cs_comments_service", :local => "/edx/app/forum/cs_comments_service", :owner => "forum"},
:ecommerce => {:repo => "ecommerce", :local => "/edx/app/ecommerce/ecommerce", :owner => "ecommerce"},
:ecommerce_worker => {:repo => "ecommerce-worker", :local => "/edx/app/ecommerce_worker/ecommerce_worker", :owner => "ecommerce_worker"},
:ecommerce_worker => {:repo => "ecommerce-worker", :local => "/edx/app/ecommerce_worker/ecommerce_worker", :owner => "ecomworker"},
# This src directory won't have useful permissions. You can set them from the
# vagrant user in the guest OS. "sudo chmod 0777 /edx/src" is useful.
:src => {:repo => "src", :local => "/edx/src", :owner => "root"},
......@@ -42,11 +43,17 @@ end
# to a name and a file path, which are used for retrieving
# a Vagrant box from the internet.
openedx_releases = {
"open-release/ginkgo.master" => "ginkgo-devstack-2017-07-14",
"open-release/ginkgo.1rc1" => "ginkgo-devstack-2017-07-14",
"open-release/ginkgo.1" => "ginkgo-devstack-2017-07-14",
"open-release/ficus.master" => "ficus-devstack-2017-02-07",
"open-release/ficus.1rc1" => "ficus-devstack-2017-01-11",
"open-release/ficus.1rc3" => "ficus-devstack-2017-02-07",
"open-release/ficus.1rc4" => "ficus-devstack-2017-02-07",
"open-release/ficus.1" => "ficus-devstack-2017-02-07",
"open-release/ficus.2" => "ficus-devstack-2017-02-07",
"open-release/ficus.3" => "ficus-devstack-2017-02-07",
"open-release/eucalyptus.master" => "eucalyptus-devstack-2016-09-01",
"open-release/eucalyptus.1rc2" => "eucalyptus-devstack-2016-08-19",
......@@ -63,7 +70,7 @@ openedx_releases = {
# Cypress is deprecated and unsupported
# Birch is deprecated and unsupported
}
openedx_releases.default = "devstack-periodic-2017-06-12"
openedx_releases.default = "master-devstack-2017-09-14"
openedx_release = ENV['OPENEDX_RELEASE']
boxname = ENV['OPENEDX_BOXNAME']
......@@ -134,6 +141,7 @@ Vagrant.configure(VAGRANTFILE_API_VERSION) do |config|
config.vm.network :forwarded_port, guest: 9876, host: 9876 # ORA2 Karma tests
config.vm.network :forwarded_port, guest: 50070, host: 50070 # HDFS Admin UI
config.vm.network :forwarded_port, guest: 8088, host: 8088 # Hadoop Resource Manager
config.vm.network :forwarded_port, guest: 18381, host: 18381 # Course discovery
end
config.ssh.insert_key = true
......@@ -158,10 +166,6 @@ Vagrant.configure(VAGRANTFILE_API_VERSION) do |config|
vb.customize ["modifyvm", :id, "--memory", MEMORY.to_s]
vb.customize ["modifyvm", :id, "--cpus", CPU_COUNT.to_s]
# Allow DNS to work for Ubuntu 12.10 host
# http://askubuntu.com/questions/238040/how-do-i-fix-name-service-for-vagrant-client
vb.customize ["modifyvm", :id, "--natdnshostresolver1", "on"]
# Virtio is faster, but the box needs to have support for it. We didn't
# have support in the boxes before Ficus.
if !(boxname.include?("dogwood") || boxname.include?("eucalyptus"))
......
Vagrant.require_version ">= 1.8.7"
unless Vagrant.has_plugin?("vagrant-hostsupdater")
raise "Please install the vagrant-hostsupdater plugin by running `vagrant plugin install vagrant-hostsupdater`"
end
VAGRANTFILE_API_VERSION = "2"
MEMORY = 4096
MEMORY = 6144
CPU_COUNT = 2
# map the name of the git branch that we use for a release
# to a name and a file path, which are used for retrieving
# a Vagrant box from the internet.
openedx_releases = {
"open-release/ginkgo.1rc1" => "ginkgo-fullstack-2017-07-14",
"open-release/ginkgo.1" => "ginkgo-fullstack-2017-08-14",
"open-release/ficus.1rc3" => "ficus-fullstack-2017-02-07",
"open-release/ficus.1rc4" => "ficus-fullstack-2017-02-15",
"open-release/ficus.1" => "ficus-fullstack-2017-02-15",
"open-release/ficus.2" => "ficus-fullstack-2017-03-28",
"open-release/ficus.3" => "ficus-fullstack-2017-04-20",
"open-release/eucalyptus/1rc1" => "eucalyptus-fullstack-1rc1",
"open-release/eucalyptus.1rc2" => "eucalyptus-fullstack-2016-08-19",
......@@ -36,7 +44,7 @@ openedx_releases = {
# Cypress is deprecated and unsupported
# Birch is deprecated and unsupported
}
openedx_releases.default = "ficus-fullstack-2017-02-15"
openedx_releases.default = "ginkgo-fullstack-2017-08-14"
openedx_release = ENV['OPENEDX_RELEASE']
boxname = ENV['OPENEDX_BOXNAME']
......@@ -72,10 +80,6 @@ Vagrant.configure(VAGRANTFILE_API_VERSION) do |config|
vb.customize ["modifyvm", :id, "--memory", MEMORY.to_s]
vb.customize ["modifyvm", :id, "--cpus", CPU_COUNT.to_s]
# Allow DNS to work for Ubuntu 12.10 host
# http://askubuntu.com/questions/238040/how-do-i-fix-name-service-for-vagrant-client
vb.customize ["modifyvm", :id, "--natdnshostresolver1", "on"]
# Virtio is faster, but the box needs to have support for it. We didn't
# have support in the boxes before Ficus.
if !(boxname.include?("dogwood") || boxname.include?("eucalyptus"))
......