Unverified commit e381d949 by José Antonio González, committed by GitHub

Merge branch 'master' into proversity/course-visibility-in-catalog

parents 8c7cde0d 3c0a6040
@@ -30,3 +30,6 @@ playbooks/edx-east/travis-test.yml
## Ansible Artifacts
*.retry
### VisualStudioCode ###
.vscode/*
- Role: edxapp
  - Added `EDXAPP_DEFAULT_COURSE_VISIBILITY_IN_CATALOG` setting (defaults to `both`).
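In an Ansible vars file, overriding that default looks like this (a sketch: the variable name comes from the entry above, while the alternative value `about` is an assumption about what edx-platform accepts, not something this changelog states):

```yaml
# Hypothetical override file; 'about' is an assumed alternative
# to the documented default of 'both'.
EDXAPP_DEFAULT_COURSE_VISIBILITY_IN_CATALOG: 'about'
```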
- Role: edxapp
  - Added `EDXAPP_DEFAULT_MOBILE_AVAILABLE` setting (defaults to `false`).
  - Added `EDX_PLATFORM_REVISION` (set from `edx_platform_version`). This is for
    edx-platform debugging purposes, and replaces calling dealer.git at startup.
- Role: veda_pipeline_worker
  - New role to run all (`deliver`, `ingest`, `youtubecallback`) [video pipeline workers](https://github.com/edx/edx-video-pipeline/blob/master/bin/).
- Role: veda_ffmpeg
  - New role added to compile ffmpeg for the video pipeline. It will be used as a dependency for video pipeline roles.
- Role: edxapp
  - Added `EDXAPP_BRANCH_IO_KEY` to configure branch.io journey app banners.
- Role: ecomworker
  - Added `ECOMMERCE_WORKER_BROKER_TRANSPORT` with a default value of `amqp` to remain backwards compatible with RabbitMQ. Set it to `redis` if you wish to use Redis instead of RabbitMQ as the queue for the ecommerce worker.
- Role: ecommerce
  - Added `ECOMMERCE_BROKER_TRANSPORT` with a default value of `amqp` to remain backwards compatible with RabbitMQ. Set it to `redis` if you wish to use Redis instead of RabbitMQ as the queue for ecommerce.
- Role: credentials
  - This role now depends on the edx_django_service role. Settings are all the same, but nearly all of the tasks are performed by the edx_django_service role.
- Role: veda_delivery_worker
  - New role added to run the [video delivery worker](https://github.com/edx/edx-video-pipeline/blob/master/bin/deliver).
- Role: veda_web_frontend
  - New role added for [edx-video-pipeline](https://github.com/edx/edx-video-pipeline).
- Role: edxapp
  - Added `EDXAPP_LMS_INTERNAL_ROOT_URL` setting (defaults to `EDXAPP_LMS_ROOT_URL`).
@@ -15,14 +41,21 @@
  your configuration to set `EDXAPP_CELERY_BROKER_TRANSPORT` explicitly.
- Role: edxapp
  - Added `EDXAPP_MONGO_REPLICA_SET`, which is required to use
    pymongo.MongoReplicaSetClient in PyMongo 2.9.1. This should be set to the
    name of your replica set. This setting causes the
    `EDXAPP_*_READ_PREFERENCE` settings below to be used.
  - Added `EDXAPP_MONGO_CMS_READ_PREFERENCE` with a default value of `PRIMARY`.
  - Added `EDXAPP_MONGO_LMS_READ_PREFERENCE` with a default value of
    `SECONDARY_PREFERRED` to distribute the read workload across the replica
    set for replicated docstores and contentstores.
  - Added `EDXAPP_LMS_SPLIT_DOC_STORE_READ_PREFERENCE` with a default value of
    `EDXAPP_MONGO_LMS_READ_PREFERENCE`.
  - Added `EDXAPP_LMS_DRAFT_DOC_STORE_CONFIG` with a default value of
    `EDXAPP_MONGO_CMS_READ_PREFERENCE`, to enforce consistency between
    Studio and the LMS Preview modes.
  - Removed `EDXAPP_CONTENTSTORE_ADDITIONAL_OPTS`, since there is no longer a
    notion of options common to the content store.
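Pulled together, a replica-set deployment might set the new variables like this (a sketch: `rs0` is a placeholder replica set name, and the read-preference values follow pymongo's `SECONDARY_PREFERRED` spelling as an assumption):

```yaml
# Hypothetical vars for a replicated docstore; 'rs0' is a placeholder.
EDXAPP_MONGO_REPLICA_SET: 'rs0'
EDXAPP_MONGO_CMS_READ_PREFERENCE: 'PRIMARY'
EDXAPP_MONGO_LMS_READ_PREFERENCE: 'SECONDARY_PREFERRED'
```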
- Role: nginx
  - Modified `lms.j2`, `cms.j2`, `credentials.j2`, `edx_notes_api.j2` and `insights.j2` to enable HTTP Strict Transport Security.
...
@@ -9,26 +9,19 @@
FROM edxops/xenial-common:latest
MAINTAINER edxops
USER root
CMD ["/edx/app/supervisor/venvs/supervisor/bin/supervisord", "-n", "--configuration", "/edx/app/supervisor/supervisord.conf"]
ADD . /edx/app/edx_ansible/edx_ansible
WORKDIR /edx/app/edx_ansible/edx_ansible/docker/plays
COPY docker/build/credentials/ansible_overrides.yml /
COPY docker/build/devstack/ansible_overrides.yml /devstack/ansible_overrides.yml
RUN sudo /edx/app/edx_ansible/venvs/edx_ansible/bin/ansible-playbook credentials.yml \
    -c local -i "127.0.0.1," \
    -t "install,assets,devstack" \
    --extra-vars="@/ansible_overrides.yml" \
    --extra-vars="@/devstack/ansible_overrides.yml"
EXPOSE 18150
---
COMMON_GIT_PATH: 'edx'
CREDENTIALS_VERSION: 'master'
COMMON_MYSQL_MIGRATE_USER: '{{ CREDENTIALS_MYSQL_USER }}'
COMMON_MYSQL_MIGRATE_PASS: '{{ CREDENTIALS_MYSQL_PASSWORD }}'
CREDENTIALS_MYSQL_HOST: 'edx.devstack.mysql'
CREDENTIALS_DJANGO_SETTINGS_MODULE: 'credentials.settings.devstack'
CREDENTIALS_GUNICORN_EXTRA: '--reload'
CREDENTIALS_MEMCACHE: ['edx.devstack.memcached:11211']
CREDENTIALS_EXTRA_APPS: ['credentials.apps.edx_credentials_extensions']
CREDENTIALS_URL_ROOT: 'http://localhost:18150'
edx_django_service_is_devstack: true
# NOTE: The creation of demo data requires database access,
# which we don't have when making new images.
credentials_create_demo_data: false
FROM edxops/go-agent:latest
# Install necessary modules for running make requirements in edx-mktg
# Using rvm so we can control the ruby version installed. This also installs gem 2.6.12
RUN bash -c '\curl -sSL https://get.rvm.io | bash -s -- --ignore-dotfiles && \
usermod -aG rvm go && source /etc/profile.d/rvm.sh && \
rvm install ruby-2.4.1 && gem install bundler -v 1.16.0'
# Installs node 8.9.3 and npm 5.5.1 as of 12/13/17. Unlikely to change much since node 9 is a stable version for other OS
RUN curl -sL https://deb.nodesource.com/setup_8.x | sudo -E bash - && \
apt-get update && apt-get install -y nodejs
# Install php
RUN apt-get update && apt-get install -y \
php5-common \
php5-cli
# Install drush (drupal shell) for access to Drupal commands/Acquia
RUN php -r "readfile('http://files.drush.org/drush.phar');" > drush && \
chmod +x drush && \
sudo mv drush /usr/local/bin
# !!!!NOTICE!!!! ---- Runner of this pipeline take heed!! You must replace acquia_github_key.pem with the REAL key
# material that can checkout private github repositories used as pipeline materials. The key material here is faked and
# is only used to pass CI!
# setup the acquia github identity
ADD docker/build/go-agent-marketing/files/acquia_github_key.pem /var/go/.ssh/acquia_github_key
RUN chmod 600 /var/go/.ssh/acquia_github_key && \
chown go:go /var/go/.ssh/acquia_github_key
Usage
#####
Start the container with this:
``docker run -ti -e GO_SERVER=your.go.server.ip_or_host edx/go-agent-marketing``
If you need to start a few GoCD agents together, you can of course use the
shell to do that. Start a few agents in the background, like this:
``for each in 1 2 3; do docker run -d --link angry_feynman:go-server edx/go-agent-marketing; done``
Getting into the container
##########################
Sometimes, you need a shell inside the container (to create test repositories,
etc). docker provides an easy way to do that:
``docker exec -i -t CONTAINER-ID /bin/bash``
To check the agent logs, you can do this:
``docker exec -i -t CONTAINER-ID tail -f /var/log/go-agent/go-agent.log``
Agent Configuration
###################
The go-agent expects its configuration to be found at
``/var/lib/go-agent/config/``. Sharing the configuration between containers is
done by mounting a volume at this location that contains any configuration
files necessary.
**Example docker run command:**
``docker run -ti -v /tmp/go-agent/conf:/var/lib/go-agent/config -e GO_SERVER=gocd.sandbox.edx.org 718d75c467c0 bash``
`How to setup auto registration for remote agents`_
Building and Uploading the container to ECS
###########################################
- Build and tag the go-agent docker image
- Follow the README in the go-agent directory to build and tag for go-agent-marketing.
- Copy the Acquia GitHub private key to this path:
- ``docker/build/go-agent-marketing/files/acquia_github_key.pem``
- A dummy key is in the repo file.
- The actual private key is kept in LastPass - see DevOps for access.
- WARNING: Do *NOT* commit/push the real private key to the public
configuration repo!
- Create image
- This must be run from the root of the configuration repository
- ``docker build -f docker/build/go-agent-marketing/Dockerfile .``
- or
- ``make docker.test.go-agent-marketing``
- Log docker in to AWS
- Assume the role of the account you wish to log in to
- ``source assume_role.sh <account name>``
- ``sh -c `aws ecr get-login --region us-east-1` ``
- You might need to remove the ``-e`` option returned by that command in
order to successfully login.
- Tag image
- ``docker tag <image_id> ############.dkr.ecr.us-east-1.amazonaws.com/prod-tools-goagent-marketing:latest``
- ``docker tag <image_id> ############.dkr.ecr.us-east-1.amazonaws.com/prod-tools-goagent-marketing:<version_number>``
- upload:
- ``docker push ############.dkr.ecr.us-east-1.amazonaws.com/edx/release-pipeline/prod-tools-goagent-marketing:latest``
- ``docker push ############.dkr.ecr.us-east-1.amazonaws.com/edx/release-pipeline/prod-tools-goagent-marketing:<version_number>``
.. _How to setup auto registration for remote agents: https://docs.go.cd/current/advanced_usage/agent_auto_register.html
-----BEGIN RSA PRIVATE KEY-----
This file is junk, replace with the real key when
building the container.
-----END RSA PRIVATE KEY-----
@@ -3,12 +3,12 @@ Usage
Start the container with this:
``docker run -ti -e GO_SERVER=your.go.server.ip_or_host edx/go-agent``
If you need to start a few GoCD agents together, you can of course use the
shell to do that. Start a few agents in the background, like this:
``for each in 1 2 3; do docker run -d --link angry_feynman:go-server edx/go-agent; done``
Getting into the container
##########################
@@ -53,6 +53,11 @@ Building and Uploading the container to ECS
- or
- ``make docker.test.go-agent``
- Tag image for the go-agent-marketing Dockerfile
  - *REQUIRED for go-agent-marketing Dockerfile*
  - ``docker tag <image_id> edxops/go-agent``
- Log docker in to AWS
- Assume the role of the account you wish to log in to
...
FROM edxops/xenial-common
MAINTAINER edxops
USER root
RUN apt-get update
ADD . /edx/app/edx_ansible/edx_ansible
COPY docker/build/xqwatcher/ansible_overrides.yml /
WORKDIR /edx/app/edx_ansible/edx_ansible/docker/plays
RUN /edx/app/edx_ansible/venvs/edx_ansible/bin/ansible-playbook harstorage.yml \
-i '127.0.0.1,' -c local \
-t "install:base,install:configuration,install:app-requirements,install:code" \
-e@/ansible_overrides.yml
WORKDIR /edx/app/harstorage/harstorage
CMD ["/edx/app/harstorage/venvs/harstorage/bin/paster", "serve", "--daemon", "/edx/app/harstorage/venvs/harstorage/edx/etc/harstorage/production.ini"]
- name: Deploy credentials
  hosts: all
  become: True
  gather_facts: True
@@ -6,8 +6,7 @@
  serial_count: 1
  serial: "{{ serial_count }}"
  roles:
    - role: nginx
      nginx_default_sites:
        - credentials
    - credentials
- name: Deploy Harstorage
  hosts: all
  become: True
  gather_facts: True
  roles:
    - docker
    - mongo
    - harstorage
@@ -7,12 +7,10 @@
ENABLE_NEWRELIC: False
CLUSTER_NAME: 'credentials'
roles:
- role: nginx
  nginx_default_sites:
    - credentials
- aws
- credentials
- role: datadog
  when: COMMON_ENABLE_DATADOG
...
@@ -14,7 +14,6 @@
  - xqueue
  - xserver
  - analytics_api
nginx_default_sites:
  - lms
- mysql
@@ -26,6 +25,7 @@
- edxapp
- testcourses
- { role: 'rabbitmq', rabbitmq_ip: '127.0.0.1' }
- role: redis
- oraclejdk
- elasticsearch
- forum
@@ -37,6 +37,7 @@
- analytics_api
- ecommerce
- credentials
- veda_web_frontend
- oauth_client_setup
- role: datadog
  when: COMMON_ENABLE_DATADOG
...
- name: Deploy Harstorage
  hosts: all
  become: True
  gather_facts: True
  vars:
    nginx_default_sites:
      - harstorage
  roles:
    - aws
    - mongo
    - nginx
    - harstorage
@@ -11,7 +11,7 @@
become: True
gather_facts: True
vars:
  COMMON_ENABLE_DATADOG: False
  COMMON_ENABLE_SPLUNKFORWARDER: True
  COMMON_ENABLE_NEWRELIC: True
  COMMON_SECURITY_UPDATES: yes
@@ -42,6 +42,14 @@
  crcSalt: '<SOURCE>'
  blacklist: coverage|private|subset|specific|custom|special|\.gz$
- source: '/var/lib/jenkins/jobs/edx-platform-*/builds/*/archive/test_root/log/timing.*.log'
  index: 'testeng'
  recursive: true
  sourcetype: 'json_timing_log'
  followSymlink: false
  crcSalt: '<SOURCE>'
  blacklist: coverage|private|subset|specific|custom|special|\.gz$
- source: '/var/log/jenkins/jenkins.log'
  index: 'testeng'
  recursive: false
...
@@ -43,12 +43,6 @@
  sourcetype: 'json_timing_log'
  followSymlink: false
- source: '/var/log/jenkins/jenkins.log'
  index: 'testeng'
  recursive: false
...
@@ -26,5 +26,4 @@
- memcache
- mongo_3_2
- browsers
- jenkins_worker
@@ -9,5 +9,6 @@
- "roles/ecommerce/defaults/main.yml"
- "roles/credentials/defaults/main.yml"
- "roles/discovery/defaults/main.yml"
- "roles/veda_web_frontend/defaults/main.yml"
roles:
- oauth_client_setup
- name: Deploy redis
  hosts: all
  become: True
  gather_facts: True
  roles:
    - aws
    - redis
    - role: datadog
      when: COMMON_ENABLE_DATADOG
    - role: splunkforwarder
      when: COMMON_ENABLE_SPLUNKFORWARDER
    - role: newrelic
      when: COMMON_ENABLE_NEWRELIC
    - role: newrelic_infrastructure
      when: COMMON_ENABLE_NEWRELIC_INFRASTRUCTURE
@@ -6,9 +6,4 @@
gather_facts: True
vars:
roles:
- splunk-server
@@ -2,4 +2,7 @@
hosts: all
gather_facts: True
roles:
- aws
- veda_delivery_worker
- role: splunkforwarder
  when: COMMON_ENABLE_SPLUNKFORWARDER
- name: Deploy edX VEDA Encode Worker
  hosts: all
  become: True
  gather_facts: True
  roles:
    - veda_encode_worker
- name: Deploy edX VEDA pipeline Worker
  hosts: all
  become: True
  gather_facts: True
  roles:
    - aws
    - veda_pipeline_worker
    - role: splunkforwarder
      when: COMMON_ENABLE_SPLUNKFORWARDER
- name: Deploy edX Video Pipeline Web Frontend
  hosts: all
  become: True
  gather_facts: True
  roles:
    - aws
    - role: nginx
      nginx_default_sites:
        - veda_web_frontend
    - role: veda_web_frontend
    - role: splunkforwarder
      when: COMMON_ENABLE_SPLUNKFORWARDER
# TODO! Add new relic instrumentation once all the other pieces of video pipeline are in place.
@@ -28,6 +28,8 @@
  when: elb_pre_post
roles:
- aws
- role: automated
  AUTOMATED_USERS: "{{ XQUEUE_AUTOMATED_USERS | default({}) }}"
- role: nginx
  nginx_sites:
    - xqueue
...
@@ -23,6 +23,7 @@
NGINX_SET_X_FORWARDED_HEADERS: True
DISCOVERY_URL_ROOT: 'http://localhost:{{ DISCOVERY_NGINX_PORT }}'
ecommerce_create_demo_data: true
credentials_create_demo_data: true
roles:
- role: swapfile
  SWAPFILE_SIZE: 4GB
...
@@ -70,6 +70,7 @@
become_user: "{{ analytics_api_user }}"
environment: "{{ analytics_api_environment }}"
when: migrate_db is defined and migrate_db|lower == "yes"
run_once: yes
tags:
  - migrate
  - migrate:db
...
@@ -60,6 +60,7 @@
sudo_user: "{{ '{{' }} {{ role_name }}_user }}"
environment: "{{ '{{' }} {{ role_name }}_migration_environment }}"
when: migrate_db is defined and migrate_db|lower == "yes"
run_once: yes
tags:
  - migrate
  - migrate:db
...
@@ -26,9 +26,9 @@
# EDXAPP_AUTOMATED_USERS:
#   ecom:
#     sudo_commands:
#       - command: "/edx/app/edxapp/venvs/edxapp/bin/python /edx/app/edxapp/edx-platform/manage.py lms showmigrations --settings=aws"
#         sudo_user: "edxapp"
#       - command: "/edx/app/edxapp/venvs/edxapp/bin/python /edx/app/edxapp/edx-platform/manage.py cms showmigrations --settings=aws"
#         sudo_user: "edxapp"
#     authorized_keys:
#       - 'ssh-rsa <REDACTED> ecom+admin@example.com'
@@ -62,7 +62,7 @@
    mode: "0440"
    validate: 'visudo -cf %s'
  with_dict: "{{ AUTOMATED_USERS }}"
- name: Create .ssh directory
  file:
    path: "/home/{{ item.key }}/.ssh"
@@ -71,7 +71,7 @@
    owner: "{{ item.key }}"
    group: "{{ item.key }}"
  with_dict: "{{ AUTOMATED_USERS }}"
- name: Build authorized_keys file
  template:
    src: "home/automator/.ssh/authorized_keys.j2"
@@ -80,7 +80,7 @@
    owner: "{{ item.key }}"
    group: "{{ item.key }}"
  with_dict: "{{ AUTOMATED_USERS }}"
- name: Build known_hosts file
  file:
    path: "/home/{{ item.key }}/.ssh/known_hosts"
...
# browsermob-proxy
browsermob_proxy_version: '2.0.0'
browsermob_proxy_url: 'https://github.com/lightbody/browsermob-proxy/releases/download/browsermob-proxy-{{ browsermob_proxy_version }}/browsermob-proxy-{{ browsermob_proxy_version }}-bin.zip'
#!/bin/sh
/etc/browsermob-proxy/bin/browsermob-proxy $*
# Install browsermob-proxy, which is used for page performance testing with bok-choy
---
- name: get zip file
  get_url:
    url: "{{ browsermob_proxy_url }}"
    dest: "/var/tmp/browsermob-proxy-{{ browsermob_proxy_version }}.zip"
  register: download_browsermob_proxy
- name: unzip into /var/tmp/
  shell: "unzip /var/tmp/browsermob-proxy-{{ browsermob_proxy_version }}.zip"
  args:
    chdir: "/var/tmp"
  when: download_browsermob_proxy.changed
- name: move to /etc/browsermob-proxy/
  shell: "mv /var/tmp/browsermob-proxy-{{ browsermob_proxy_version }} /etc/browsermob-proxy"
  when: download_browsermob_proxy.changed
- name: change permissions of main script
  file:
    path: "/etc/browsermob-proxy/bin/browsermob-proxy"
    mode: 0755
  when: download_browsermob_proxy.changed
- name: add wrapper script /usr/local/bin/browsermob-proxy
  copy:
    src: browsermob-proxy
    dest: /usr/local/bin/browsermob-proxy
  when: download_browsermob_proxy.changed
- name: change permissions of wrapper script
  file:
    path: /usr/local/bin/browsermob-proxy
    mode: 0755
  when: download_browsermob_proxy.changed
@@ -98,8 +98,8 @@ COMMON_ENABLE_NEWRELIC_INFRASTRUCTURE: False
COMMON_ENABLE_NEWRELIC_APP: False
COMMON_ENABLE_MINOS: False
COMMON_TAG_EC2_INSTANCE: False
common_boto_version: '2.48.0'
common_node_version: '8.9.3'
common_redhat_pkgs:
  - ntp
  - lynx
...
@@ -9,24 +9,34 @@
#
##
# Role includes for role credentials
#
dependencies:
  - role: edx_django_service
    edx_django_service_version: '{{ CREDENTIALS_VERSION }}'
    edx_django_service_name: '{{ credentials_service_name }}'
    edx_django_service_config_overrides: '{{ credentials_service_config_overrides }}'
    edx_django_service_debian_pkgs_extra: '{{ credentials_debian_pkgs }}'
    edx_django_service_gunicorn_port: '{{ credentials_gunicorn_port }}'
    edx_django_service_django_settings_module: '{{ CREDENTIALS_DJANGO_SETTINGS_MODULE }}'
    edx_django_service_environment_extra: '{{ credentials_environment }}'
    edx_django_service_gunicorn_extra: '{{ CREDENTIALS_GUNICORN_EXTRA }}'
    edx_django_service_nginx_port: '{{ CREDENTIALS_NGINX_PORT }}'
    edx_django_service_ssl_nginx_port: '{{ CREDENTIALS_SSL_NGINX_PORT }}'
    edx_django_service_language_code: '{{ CREDENTIALS_LANGUAGE_CODE }}'
    edx_django_service_secret_key: '{{ CREDENTIALS_SECRET_KEY }}'
    edx_django_service_staticfiles_storage: '{{ CREDENTIALS_STATICFILES_STORAGE }}'
    edx_django_service_media_storage_backend: '{{ CREDENTIALS_MEDIA_STORAGE_BACKEND }}'
    edx_django_service_memcache: '{{ CREDENTIALS_MEMCACHE }}'
    edx_django_service_default_db_host: '{{ CREDENTIALS_MYSQL_HOST }}'
    edx_django_service_default_db_name: '{{ CREDENTIALS_DEFAULT_DB_NAME }}'
    edx_django_service_default_db_atomic_requests: false
    edx_django_service_db_user: '{{ CREDENTIALS_MYSQL_USER }}'
    edx_django_service_db_password: '{{ CREDENTIALS_MYSQL_PASSWORD }}'
    edx_django_service_social_auth_edx_oidc_key: '{{ CREDENTIALS_SOCIAL_AUTH_EDX_OIDC_KEY }}'
    edx_django_service_social_auth_edx_oidc_secret: '{{ CREDENTIALS_SOCIAL_AUTH_EDX_OIDC_SECRET }}'
    edx_django_service_social_auth_redirect_is_https: '{{ CREDENTIALS_SOCIAL_AUTH_REDIRECT_IS_HTTPS }}'
    edx_django_service_extra_apps: '{{ CREDENTIALS_EXTRA_APPS }}'
    edx_django_service_session_expire_at_browser_close: '{{ CREDENTIALS_SESSION_EXPIRE_AT_BROWSER_CLOSE }}'
    edx_django_service_automated_users: '{{ CREDENTIALS_AUTOMATED_USERS }}'
    edx_django_service_cors_whitelist: '{{ CREDENTIALS_CORS_ORIGIN_WHITELIST }}'
    edx_django_service_post_migrate_commands: '{{ credentials_post_migrate_commands }}'
@@ -11,7 +11,7 @@
#
# Tasks for role credentials
#
# Overview: This role's tasks come from edx_django_service.
#
#
# Dependencies:
@@ -20,233 +20,3 @@
# Example play:
#
#
- name: add gunicorn configuration file
  template:
    src: edx/app/credentials/credentials_gunicorn.py.j2
    dest: "{{ credentials_home }}/credentials_gunicorn.py"
  become_user: "{{ credentials_user }}"
  tags:
    - install
    - install:configuration

- name: add deadsnakes repository
  apt_repository:
    repo: "ppa:fkrull/deadsnakes"
  tags:
    - install
    - install:system-requirements

- name: install python3.5
  apt:
    name: "{{ item }}"
  with_items:
    - python3.5
    - python3.5-dev
  tags:
    - install
    - install:system-requirements

- name: build virtualenv
  command: "virtualenv --python=python3.5 {{ credentials_venv_dir }}"
  args:
    creates: "{{ credentials_venv_dir }}/bin/pip"
  become_user: "{{ credentials_user }}"
  tags:
    - install
    - install:system-requirements

- name: install nodenv
  pip:
    name: "nodeenv"
    version: "1.1.2"
    # NOTE (CCB): Using the "virtualenv" option here doesn't seem to work.
    executable: "{{ credentials_venv_dir }}/bin/pip"
  become_user: "{{ credentials_user }}"
  tags:
    - install
    - install:system-requirements

- name: create nodeenv
  shell: "{{ credentials_venv_dir }}/bin/nodeenv {{ credentials_nodeenv_dir }} --node={{ credentials_node_version }} --prebuilt --force"
  become_user: "{{ credentials_user }}"
  tags:
    - install
    - install:system-requirements

- name: install application requirements
  command: make production-requirements
  args:
    chdir: "{{ credentials_code_dir }}"
  become_user: "{{ credentials_user }}"
  environment: "{{ credentials_environment }}"
  tags:
    - install
    - install:app-requirements

- name: install development requirements
  command: make requirements
  args:
    chdir: "{{ credentials_code_dir }}"
  become_user: "{{ credentials_user }}"
  environment: "{{ credentials_environment }}"
  tags:
    - devstack
    - devstack:install

- name: migrate database
  command: make migrate
  args:
    chdir: "{{ credentials_code_dir }}"
  become_user: "{{ credentials_user }}"
  environment: "{{ credentials_migration_environment }}"
  when: migrate_db is defined and migrate_db|lower == "yes"
  tags:
    - migrate
    - migrate:db

# var should have more permissive permissions than the rest
- name: create credentials var dirs
  file:
    path: "{{ item }}"
    state: directory
    mode: 0775
    owner: "{{ credentials_user }}"
    group: "{{ common_web_group }}"
  with_items:
    - "{{ CREDENTIALS_MEDIA_ROOT }}"
  tags:
    - install
    - install:base

- name: write out the supervisor wrapper
  template:
    src: "edx/app/credentials/credentials.sh.j2"
    dest: "{{ credentials_home }}/{{ credentials_service_name }}.sh"
    mode: 0650
    owner: "{{ supervisor_user }}"
    group: "{{ common_web_user }}"
  tags:
    - install
    - install:configuration

- name: write supervisord config
  template:
    src: "edx/app/supervisor/conf.d.available/credentials.conf.j2"
    dest: "{{ supervisor_available_dir }}/{{ credentials_service_name }}.conf"
    owner: "{{ supervisor_user }}"
    group: "{{ common_web_user }}"
    mode: 0644
  tags:
    - install
    - install:configuration

- name: write devstack script
  template:
    src: "edx/app/credentials/devstack.sh.j2"
    dest: "{{ credentials_home }}/devstack.sh"
    owner: "{{ supervisor_user }}"
    group: "{{ common_web_user }}"
    mode: 0744
  tags:
    - devstack
    - devstack:install

- name: setup the credentials env file
  template:
    src: "./{{ credentials_home }}/{{ credentials_service_name }}_env.j2"
    dest: "{{ credentials_home }}/credentials_env"
    owner: "{{ credentials_user }}"
    group: "{{ credentials_user }}"
    mode: 0644
  tags:
    - install
    - install:configuration

- name: enable supervisor script
  file:
    src: "{{ supervisor_available_dir }}/{{ credentials_service_name }}.conf"
    dest: "{{ supervisor_cfg_dir }}/{{ credentials_service_name }}.conf"
    state: link
    force: yes
  when: not disable_edx_services
  tags:
    - install
    - install:configuration

- name: update supervisor configuration
  command: "{{ supervisor_ctl }} -c {{ supervisor_cfg }} update"
  when: not disable_edx_services
  tags:
    - manage
    - manage:start

- name: create symlinks from the venv bin dir
  file:
    src: "{{ credentials_venv_dir }}/bin/{{ item }}"
    dest: "{{ COMMON_BIN_DIR }}/{{ item.split('.')[0] }}.credentials"
    state: link
  with_items:
    - python
    - pip
    - django-admin.py
  tags:
    - install
    - install:app-requirements

- name: create symlinks from the repo dir
  file:
    src: "{{ credentials_code_dir }}/{{ item }}"
    dest: "{{ COMMON_BIN_DIR }}/{{ item.split('.')[0] }}.credentials"
    state: link
  with_items:
    - manage.py
  tags:
    - install
    - install:app-requirements

- name: run collectstatic
  command: make static
  args:
    chdir: "{{ credentials_code_dir }}"
  become_user: "{{ credentials_user }}"
  environment: "{{ credentials_environment }}"
  tags:
    - assets
    - assets:gather

- name: restart the application
  supervisorctl:
    state: restarted
    supervisorctl_path: "{{ supervisor_ctl }}"
    config: "{{ supervisor_cfg }}"
    name: "{{ credentials_service_name }}"
  when: not disable_edx_services
  become_user: "{{ supervisor_service_user }}"
  tags:
    - manage
    - manage:start

- name: Copying nginx configs for credentials
  template:
    src: edx/app/nginx/sites-available/credentials.j2
    dest: "{{ nginx_sites_available_dir }}/credentials"
    owner: root
    group: "{{ common_web_user }}"
    mode: 0640
  notify: reload nginx
  tags:
    - install
    - install:vhosts

- name: Creating nginx config links for credentials
  file:
    src: "{{ nginx_sites_available_dir }}/credentials"
    dest: "{{ nginx_sites_enabled_dir }}/credentials"
    state: link
    owner: root
    group: root
  notify: reload nginx
  tags:
    - install
    - install:vhosts
#!/usr/bin/env bash
# {{ ansible_managed }}
{% set credentials_venv_bin = credentials_home + "/venvs/" + credentials_service_name + "/bin" %}
{% if COMMON_ENABLE_NEWRELIC_APP %}
{% set executable = credentials_venv_bin + '/newrelic-admin run-program ' + credentials_venv_bin + '/gunicorn' %}
{% else %}
{% set executable = credentials_venv_bin + '/gunicorn' %}
{% endif %}
{% if COMMON_ENABLE_NEWRELIC_APP %}
export NEW_RELIC_APP_NAME="{{ CREDENTIALS_NEWRELIC_APPNAME }}"
export NEW_RELIC_LICENSE_KEY="{{ NEWRELIC_LICENSE_KEY }}"
{% endif -%}
source {{ credentials_home }}/credentials_env
{{ executable }} -c {{ credentials_home }}/credentials_gunicorn.py {{ CREDENTIALS_GUNICORN_EXTRA }} credentials.wsgi:application
"""
gunicorn configuration file: http://docs.gunicorn.org/en/develop/configure.html
{{ ansible_managed }}
"""
timeout = {{ credentials_gunicorn_timeout }}
bind = "{{ credentials_gunicorn_host }}:{{ credentials_gunicorn_port }}"
pythonpath = "{{ credentials_code_dir }}"
workers = {{ CREDENTIALS_GUNICORN_WORKERS }}
worker_class = "{{ CREDENTIALS_GUNICORN_WORKER_CLASS }}"
{{ CREDENTIALS_GUNICORN_EXTRA_CONF }}
#!/usr/bin/env bash
# {{ ansible_managed }}
source {{ credentials_home }}/credentials_env
COMMAND=$1
case $COMMAND in
start)
{% set credentials_venv_bin = credentials_home + "/venvs/" + credentials_service_name + "/bin" %}
{{ supervisor_venv_bin }}/supervisord --configuration {{ supervisor_cfg }}
# Needed to run bower as root. See explanation around 'credentials_user=root'
echo '{ "allow_root": true }' > /root/.bowerrc
cd /edx/app/edx_ansible/edx_ansible/docker/plays
/edx/app/edx_ansible/venvs/edx_ansible/bin/ansible-playbook credentials.yml -c local -i '127.0.0.1,' \
-t 'install:app-requirements,assets:gather,devstack,migrate' \
--extra-vars="migrate_db=yes" \
--extra-vars="@/ansible_overrides.yml" \
--extra-vars="credentials_user=root" # Needed when sharing the volume with the host machine because node/bower drops
# everything in the code directory by default. So we get issues with permissions
# on folders owned by the developer.
# Need to start supervisord and nginx manually because systemd is hard to run on docker
# http://developers.redhat.com/blog/2014/05/05/running-systemd-within-docker-container/
# Both daemonize by default
nginx
/edx/app/supervisor/venvs/supervisor/bin/supervisord --configuration /edx/app/supervisor/supervisord.conf
# Docker requires an active foreground task. Tail the logs to appease Docker and
# provide useful output for development.
cd {{ supervisor_log_dir }}
tail -f {{ credentials_service_name }}-stderr.log -f {{ credentials_service_name }}-stdout.log
;;
open)
cd {{ credentials_code_dir }}/
. {{ credentials_venv_bin }}/activate
/bin/bash
;;
esac
#
# {{ ansible_managed }}
#
{% if nginx_default_sites is defined and "credentials" in nginx_default_sites %}
{% set default_site = "default_server" %}
{% else %}
{% set default_site = "" %}
{% endif %}
upstream credentials_app_server {
{% for host in NGINX_CREDENTIALS_GUNICORN_HOSTS %}
server {{ host }}:{{ credentials_gunicorn_port }} fail_timeout=0;
{% endfor %}
}
# The Origin request header indicates where a fetch originates from. It doesn't include any path information,
# but only the server name (e.g. https://www.example.com).
# See https://developer.mozilla.org/en-US/docs/Web/HTTP/Headers/Origin for details.
#
# Here we set the value that is included in the Access-Control-Allow-Origin response header. If the origin is one
# of our known hosts--served via HTTP or HTTPS--we allow for CORS. Otherwise, we set the "null" value, disallowing CORS.
map $http_origin $cors_origin {
default "null";
{% for host in CREDENTIALS_CORS_ORIGIN_WHITELIST %}
"~*^https?:\/\/{{ host|replace('.', '\.') }}$" $http_origin;
{% endfor %}
}
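The `map` block above compiles each whitelisted host into a case-insensitive regex (the nginx `~*` modifier) anchored to the scheme and hostname, echoing known origins back and falling through to `"null"` for everything else. A rough shell sketch of that matching behaviour, using `courses.example.com` as a made-up whitelist entry:

```shell
# Hypothetical whitelist entry; real values come from CREDENTIALS_CORS_ORIGIN_WHITELIST.
pattern='^https?://courses\.example\.com$'

# Mimic the nginx map: known origins are echoed back, everything else gets "null".
matches() {
    printf '%s' "$1" | grep -qiE "$pattern" && echo "$1" || echo "null"
}

matches "https://courses.example.com"   # origin echoed back, CORS allowed
matches "http://courses.example.com"    # plain HTTP origins are also allowed
matches "https://evil.example.net"      # unknown origin falls through to "null"
```

The `-i` flag stands in for nginx's case-insensitive `~*` match.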
server {
server_name {{ CREDENTIALS_HOSTNAME }};
{% if NGINX_ENABLE_SSL %}
listen {{ CREDENTIALS_NGINX_PORT }} {{ default_site }};
listen {{ CREDENTIALS_SSL_NGINX_PORT }} ssl;
ssl_certificate /etc/ssl/certs/{{ NGINX_SSL_CERTIFICATE|basename }};
ssl_certificate_key /etc/ssl/private/{{ NGINX_SSL_KEY|basename }};
# request the browser to use SSL for all connections
add_header Strict-Transport-Security "max-age=31536000; includeSubDomains";
{% else %}
listen {{ CREDENTIALS_NGINX_PORT }} {{ default_site }};
{% endif %}
location ~ ^{{ CREDENTIALS_MEDIA_URL }}(?P<file>.*) {
root {{ CREDENTIALS_MEDIA_ROOT }};
try_files /$file =404;
}
location ~ ^{{ CREDENTIALS_STATIC_URL }}(?P<file>.*) {
root {{ CREDENTIALS_STATIC_ROOT }};
add_header Cache-Control "max-age=31536000";
add_header 'Access-Control-Allow-Origin' $cors_origin;
# Inform downstream caches to take certain headers into account when reading/writing to cache.
add_header 'Vary' 'Accept-Encoding,Origin';
try_files /$file =404;
}
location / {
try_files $uri @proxy_to_app;
}
{% if NGINX_ROBOT_RULES|length > 0 %}
location /robots.txt {
root {{ nginx_app_dir }};
try_files $uri /robots.txt =404;
}
{% endif %}
location @proxy_to_app {
proxy_set_header X-Forwarded-Proto $http_x_forwarded_proto;
proxy_set_header X-Forwarded-Port $http_x_forwarded_port;
proxy_set_header X-Forwarded-For $http_x_forwarded_for;
proxy_set_header Host $http_host;
proxy_redirect off;
proxy_pass http://credentials_app_server;
}
# Forward to HTTPS if we're an HTTP request...
if ($http_x_forwarded_proto = "http") {
set $do_redirect "true";
}
# Run our actual redirect...
if ($do_redirect = "true") {
rewrite ^ https://$host$request_uri? permanent;
}
}
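The two `if` blocks above implement an HTTP-to-HTTPS redirect keyed off the `X-Forwarded-Proto` header set by an upstream load balancer. The decision logic, sketched in shell with a made-up hostname:

```shell
# Hypothetical helper mirroring the nginx config: redirect when the forwarded
# protocol is plain HTTP, otherwise hand the request to the app server upstream.
decide() {
    proto="$1"; host="$2"; uri="$3"
    if [ "$proto" = "http" ]; then
        echo "redirect: https://${host}${uri}"
    else
        echo "proxy_pass: credentials_app_server"
    fi
}

decide http  credentials.example.com /api/v2/credentials/
decide https credentials.example.com /api/v2/credentials/
```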
@@ -145,11 +145,10 @@ ECOMMERCE_STATICFILES_STORAGE: 'ecommerce.theming.storage.ThemeStorage'
 # Celery
 ECOMMERCE_BROKER_USERNAME: 'celery'
 ECOMMERCE_BROKER_PASSWORD: 'celery'
-# Used as the default RabbitMQ IP.
 ECOMMERCE_BROKER_HOST: '{{ ansible_default_ipv4.address }}'
-# Used as the default RabbitMQ port.
 ECOMMERCE_BROKER_PORT: 5672
-ECOMMERCE_BROKER_URL: 'amqp://{{ ECOMMERCE_BROKER_USERNAME }}:{{ ECOMMERCE_BROKER_PASSWORD }}@{{ ECOMMERCE_BROKER_HOST }}:{{ ECOMMERCE_BROKER_PORT }}'
+ECOMMERCE_BROKER_TRANSPORT: 'amqp'
+ECOMMERCE_BROKER_URL: '{{ ECOMMERCE_BROKER_TRANSPORT }}://{{ ECOMMERCE_BROKER_USERNAME }}:{{ ECOMMERCE_BROKER_PASSWORD }}@{{ ECOMMERCE_BROKER_HOST }}:{{ ECOMMERCE_BROKER_PORT }}'
 ECOMMERCE_DISCOVERY_SERVICE_URL: 'http://localhost:8008'
 ECOMMERCE_ENTERPRISE_URL: '{{ ECOMMERCE_LMS_URL_ROOT }}'
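The effect of the new transport variable is easiest to see by expanding the URL template by hand. A sketch with illustrative values (the host and credentials below are placeholders, not deployment defaults):

```shell
# Illustrative values only; a real deployment supplies its own.
ECOMMERCE_BROKER_TRANSPORT='amqp'
ECOMMERCE_BROKER_USERNAME='celery'
ECOMMERCE_BROKER_PASSWORD='celery'
ECOMMERCE_BROKER_HOST='10.0.0.1'
ECOMMERCE_BROKER_PORT='5672'

# Same composition as the Jinja template above.
ECOMMERCE_BROKER_URL="${ECOMMERCE_BROKER_TRANSPORT}://${ECOMMERCE_BROKER_USERNAME}:${ECOMMERCE_BROKER_PASSWORD}@${ECOMMERCE_BROKER_HOST}:${ECOMMERCE_BROKER_PORT}"
echo "$ECOMMERCE_BROKER_URL"   # amqp://celery:celery@10.0.0.1:5672

# Switching the queue backend to redis only requires changing the transport:
ECOMMERCE_BROKER_TRANSPORT='redis'
echo "${ECOMMERCE_BROKER_TRANSPORT}://${ECOMMERCE_BROKER_USERNAME}:${ECOMMERCE_BROKER_PASSWORD}@${ECOMMERCE_BROKER_HOST}:${ECOMMERCE_BROKER_PORT}"
```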
@@ -11,6 +11,10 @@
 # Role includes for role ecommerce
 #
 dependencies:
+  - role: edx_themes
+    theme_users:
+      - '{{ ecommerce_user }}'
+    when: ECOMMERCE_ENABLE_COMPREHENSIVE_THEMING
   - role: edx_django_service
     edx_django_service_version: '{{ ECOMMERCE_VERSION }}'
     edx_django_service_name: '{{ ecommerce_service_name }}'
@@ -40,7 +44,4 @@ dependencies:
     edx_django_service_basic_auth_exempted_paths_extra:
       - payment
       - \.well-known/apple-developer-merchantid-domain-association
-  - role: edx_themes
-    theme_users:
-      - '{{ ecommerce_user }}'
-    when: ECOMMERCE_ENABLE_COMPREHENSIVE_THEMING
@@ -33,8 +33,9 @@ ECOMMERCE_WORKER_BROKER_PASSWORD: 'celery'
 ECOMMERCE_WORKER_BROKER_HOST: '{{ ansible_default_ipv4.address }}'
 # Used as the default RabbitMQ port.
 ECOMMERCE_WORKER_BROKER_PORT: 5672
+ECOMMERCE_WORKER_BROKER_TRANSPORT: 'amqp'
 # Default broker URL. See http://celery.readthedocs.org/en/latest/configuration.html#broker-url.
-ECOMMERCE_WORKER_BROKER_URL: 'amqp://{{ ECOMMERCE_WORKER_BROKER_USERNAME }}:{{ ECOMMERCE_WORKER_BROKER_PASSWORD }}@{{ ECOMMERCE_WORKER_BROKER_HOST }}:{{ ECOMMERCE_WORKER_BROKER_PORT }}'
+ECOMMERCE_WORKER_BROKER_URL: '{{ ECOMMERCE_WORKER_BROKER_TRANSPORT }}://{{ ECOMMERCE_WORKER_BROKER_USERNAME }}:{{ ECOMMERCE_WORKER_BROKER_PASSWORD }}@{{ ECOMMERCE_WORKER_BROKER_HOST }}:{{ ECOMMERCE_WORKER_BROKER_PORT }}'
 ECOMMERCE_WORKER_CONCURRENCY: 4
 # END CELERY
@@ -14,7 +14,7 @@ IFS=","
 <repo> - must be one of edx-platform, edx-workers, xqueue, cs_comments_service, credentials, xserver, configuration,
          read-only-certificate-code, edx-analytics-data-api, edx-ora2, insights, ecommerce, course_discovery,
-         notifier
+         notifier, video_web_frontend, video_delivery_worker, veda_pipeline_worker, video_encode_worker
 <version> - can be a commit or tag
 EO
@@ -61,6 +61,10 @@ repos_to_cmd["insights"]="$edx_ansible_cmd insights.yml -e 'INSIGHTS_VERSION=$2'
 repos_to_cmd["ecommerce"]="$edx_ansible_cmd ecommerce.yml -e 'ECOMMERCE_VERSION=$2'"
 repos_to_cmd["discovery"]="$edx_ansible_cmd discovery.yml -e 'DISCOVERY_VERSION=$2'"
 repos_to_cmd["notifier"]="$edx_ansible_cmd notifier.yml -e 'NOTIFIER_VERSION=$2'"
+repos_to_cmd["video_web_frontend"]="$edx_ansible_cmd veda_web_frontend.yml -e 'VEDA_WEB_FRONTEND_VERSION=$2'"
+repos_to_cmd["video_delivery_worker"]="$edx_ansible_cmd veda_delivery_worker.yml -e 'VEDA_DELIVERY_WORKER_VERSION=$2'"
+repos_to_cmd["veda_pipeline_worker"]="$edx_ansible_cmd veda_pipeline_worker.yml -e 'VEDA_PIPELINE_WORKER_VERSION=$2'"
+repos_to_cmd["video_encode_worker"]="$edx_ansible_cmd veda_encode_worker.yml -e 'VEDA_ENCODE_WORKER_VERSION=$2'"
 if [[ -z $1 || -z $2 ]]; then
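The update script dispatches on the repo name through a bash associative array. A trimmed sketch of that pattern (the command strings here are shortened stand-ins for the real `$edx_ansible_cmd` invocations):

```shell
#!/usr/bin/env bash
# Keys mirror the update script; values are stand-ins for full ansible-playbook commands.
declare -A repos_to_cmd
repos_to_cmd["video_delivery_worker"]="ansible-playbook veda_delivery_worker.yml"
repos_to_cmd["video_encode_worker"]="ansible-playbook veda_encode_worker.yml"

repo="video_delivery_worker"
if [[ -n "${repos_to_cmd[$repo]:-}" ]]; then
    # Look up and run the command registered for this repo.
    echo "would run: ${repos_to_cmd[$repo]}"
else
    echo "unknown repo: $repo" >&2
    exit 1
fi
```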
@@ -191,7 +191,7 @@ edx_django_service_config: '{{ edx_django_service_config_default|combine(edx_dja
 edx_django_service_automated_users:
   automated_user:
     sudo_commands:
-      - command: '{{ edx_django_service_venv_dir }}/python {{ edx_django_service_code_dir }}/manage.py migrate --list'
+      - command: '{{ edx_django_service_venv_dir }}/python {{ edx_django_service_code_dir }}/manage.py showmigrations'
         sudo_user: '{{ edx_django_service_user }}'
     authorized_keys:
       - 'SSH authorized key'
@@ -111,6 +111,7 @@
   become_user: "{{ edx_django_service_user }}"
   environment: "{{ edx_django_service_migration_environment }}"
   when: migrate_db is defined and migrate_db|lower == "yes"
+  run_once: yes
   tags:
     - migrate
     - migrate:db
@@ -123,6 +124,7 @@
   environment: "{{ edx_django_service_environment }}"
   with_items: '{{ edx_django_service_post_migrate_commands }}'
   when: migrate_db is defined and migrate_db|lower == "yes" and item.when | bool
+  run_once: yes
  tags:
    - migrate
    - migrate:db
@@ -67,6 +67,7 @@
   environment:
     EDXNOTES_CONFIG_ROOT: "{{ COMMON_CFG_DIR }}"
   when: migrate_db is defined and migrate_db|lower == "yes"
+  run_once: yes
   tags:
     - migrate
     - migrate:db
@@ -87,6 +87,14 @@
     - install
     - install:code

+# Download a theme and apply small modifications like SASS changes
+# To enable/disable this, set SIMPLETHEME_ENABLE_DEPLOY
+# https://github.com/ansible/ansible/issues/19472 prevents including the
+# role conditionally
+- name: Install a theme through simpletheme
+  include_role:
+    name: "simple_theme"
+
 - name: Stat each requirements file with Github URLs to ensure it exists
   stat:
     path: "{{ item }}"
@@ -152,11 +160,11 @@
   # Need to use shell rather than pip so that we can maintain the context of our current working directory; some
   # requirements are pathed relative to the edx-platform repo. Using the pip from inside the virtual environment implicitly
   # installs everything into that virtual environment.
-  shell: "{{ edxapp_venv_dir }}/bin/pip install {{ COMMON_PIP_VERBOSITY }} -i {{ COMMON_PYPI_MIRROR_URL }} --exists-action w -r {{ item }}"
+  shell: "{{ edxapp_venv_dir }}/bin/pip install {{ COMMON_PIP_VERBOSITY }} -i {{ COMMON_PYPI_MIRROR_URL }} --exists-action w {{ item.extra_args|default('') }} {{ item.name }}"
   args:
     chdir: "{{ edxapp_code_dir }}"
   with_items:
-    - "{{ private_requirements_file }}"
+    - "{{ EDXAPP_PRIVATE_REQUIREMENTS }}"
   become_user: "{{ edxapp_user }}"
   environment:
     GIT_SSH: "{{ edxapp_git_ssh }}"
@@ -239,12 +247,11 @@
     - install
     - install:app-requirements

+#install with the shell command instead of the ansible npm module so we don't accidentally re-write package.json
 - name: install node dependencies
-  npm:
-    executable: "{{ edxapp_nodeenv_bin }}/npm"
-    path: "{{ edxapp_code_dir }}"
-    production: "{{ edxapp_npm_production }}"
-    state: latest
+  shell: "{{ edxapp_nodeenv_bin }}/npm install"
+  args:
+    chdir: "{{ edxapp_code_dir }}"
   environment: "{{ edxapp_environment }}"
   become_user: "{{ edxapp_user }}"
   tags:
@@ -129,6 +129,7 @@
 - name: migrate
   command: "{{ COMMON_BIN_DIR }}/edxapp-migrate-{{ item }}"
   when: migrate_db is defined and migrate_db|lower == "yes" and COMMON_MYSQL_MIGRATE_PASS and item != "lms-preview"
+  run_once: yes
   environment:
     DB_MIGRATION_USER: "{{ COMMON_MYSQL_MIGRATE_USER }}"
     DB_MIGRATION_PASS: "{{ COMMON_MYSQL_MIGRATE_PASS }}"
@@ -4,8 +4,25 @@ if [[ -z "${NO_EDXAPP_SUDO:-}" ]]; then
     SUDO='sudo -E -u {{ edxapp_user }} env "PATH=$PATH"'
 fi

+remove_unwanted_args () {
+    ARGS=("")
+    args_to_remove="(--list|--noinput)"
+    for var in "$@"; do
+        # Ignore known unneeded arguments
+        if [[ "$var" =~ $args_to_remove ]]; then
+            continue
+        fi
+        ARGS+=("$var")
+    done
+}
+
 {% for db in cms_auth_config.DATABASES.keys() %}
 {%- if db != 'read_replica' %}
-${SUDO:-} {{ edxapp_venv_bin }}/python manage.py cms migrate --database {{ db }} --noinput --settings $EDX_PLATFORM_SETTINGS $@
+if [[ $@ =~ .*--list.* ]]; then
+    remove_unwanted_args $@
+    ${SUDO:-} {{ edxapp_venv_bin }}/python manage.py cms showmigrations --database {{ db }} --settings $EDX_PLATFORM_SETTINGS ${ARGS[@]}
+else
+    ${SUDO:-} {{ edxapp_venv_bin }}/python manage.py cms migrate --database {{ db }} --noinput --settings $EDX_PLATFORM_SETTINGS $@
+fi
 {% endif %}
 {% endfor %}
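The `--list` branch above exists because `manage.py migrate --list` was replaced by the `showmigrations` command in newer Django, and `showmigrations` does not accept `--list` or `--noinput`; the wrapper strips those flags before delegating. The helper can be exercised on its own outside the Jinja template:

```shell
# Same filtering logic as the template above, runnable standalone.
remove_unwanted_args () {
    ARGS=("")
    args_to_remove="(--list|--noinput)"
    for var in "$@"; do
        # Ignore known unneeded arguments
        if [[ "$var" =~ $args_to_remove ]]; then
            continue
        fi
        ARGS+=("$var")
    done
}

remove_unwanted_args --list --database default --noinput
# ARGS keeps its seed empty element, so the joined result has a leading space.
echo "[${ARGS[*]}]"   # [ --database default]
```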
@@ -4,8 +4,25 @@ if [[ -z "${NO_EDXAPP_SUDO:-}" ]]; then
     SUDO='sudo -E -u {{ edxapp_user }} env "PATH=$PATH"'
 fi

+remove_unwanted_args () {
+    ARGS=("")
+    args_to_remove="(--list|--noinput)"
+    for var in "$@"; do
+        # Ignore known unneeded arguments
+        if [[ "$var" =~ $args_to_remove ]]; then
+            continue
+        fi
+        ARGS+=("$var")
+    done
+}
+
 {% for db in lms_auth_config.DATABASES.keys() %}
 {%- if db != 'read_replica' %}
-${SUDO:-} {{ edxapp_venv_bin }}/python manage.py lms migrate --database {{ db }} --noinput --settings $EDX_PLATFORM_SETTINGS $@
+if [[ $@ =~ .*--list.* ]]; then
+    remove_unwanted_args $@
+    ${SUDO:-} {{ edxapp_venv_bin }}/python manage.py lms showmigrations --database {{ db }} --settings $EDX_PLATFORM_SETTINGS ${ARGS[@]}
+else
+    ${SUDO:-} {{ edxapp_venv_bin }}/python manage.py lms migrate --database {{ db }} --noinput --settings $EDX_PLATFORM_SETTINGS $@
+fi
 {% endif %}
 {% endfor %}
@@ -14,6 +14,7 @@ edxlocal_databases:
 - "{{ ANALYTICS_API_REPORTS_DB_NAME | default(None) }}"
 - "{{ CREDENTIALS_DEFAULT_DB_NAME | default(None) }}"
 - "{{ DISCOVERY_DEFAULT_DB_NAME | default(None) }}"
+- "{{ VEDA_WEB_FRONTEND_DEFAULT_DB_NAME | default(None) }}"

 edxlocal_database_users:
   - {
@@ -61,3 +62,8 @@ edxlocal_database_users:
     user: "{{ DISCOVERY_MYSQL_USER | default(None) }}",
     pass: "{{ DISCOVERY_MYSQL_PASSWORD | default(None) }}"
   }
+  - {
+    db: "{{ VEDA_WEB_FRONTEND_DEFAULT_DB_NAME | default(None) }}",
+    user: "{{ VEDA_WEB_FRONTEND_MYSQL_USER | default(None) }}",
+    pass: "{{ VEDA_WEB_FRONTEND_MYSQL_PASSWORD | default(None) }}"
+  }
@@ -7,6 +7,7 @@ FLOWER_BROKER_HOST: "127.0.0.1"
 FLOWER_BROKER_PORT: 5672
 FLOWER_ADDRESS: "0.0.0.0"
 FLOWER_PORT: "5555"
+FLOWER_BROKER_TRANSPORT: 'amqp'
 FLOWER_OAUTH2_KEY: "A Client ID from Google's OAUTH2 provider"
 FLOWER_OAUTH2_SECRET: "A Client Secret from Google's OAUTH2 provider"
@@ -23,11 +24,14 @@ flower_venv_dir: "{{ flower_app_dir }}/venvs/flower"
 flower_venv_bin: "{{ flower_venv_dir }}/bin"

 flower_python_reqs:
-  - "flower==0.8.3"
+  # Celery version must match version used by edx-platform
+  - "celery==3.1.18"
+  - "flower==0.9.2"
+  - "redis==2.10.6"

 flower_deploy_path: "{{ flower_venv_bin }}:/usr/local/sbin:/usr/local/bin:/usr/bin:/sbin:/bin"
-flower_broker: "amqp://{{ FLOWER_BROKER_USERNAME }}:{{ FLOWER_BROKER_PASSWORD }}@{{ FLOWER_BROKER_HOST }}:{{ FLOWER_BROKER_PORT }}"
+flower_broker: "{{ FLOWER_BROKER_TRANSPORT }}://{{ FLOWER_BROKER_USERNAME }}:{{ FLOWER_BROKER_PASSWORD }}@{{ FLOWER_BROKER_HOST }}:{{ FLOWER_BROKER_PORT }}"

 flower_environment:
   PATH: "{{ flower_deploy_path }}"
@@ -53,6 +53,7 @@
     dest: "{{ supervisor_available_dir }}/{{ FLOWER_USER }}.conf"
     owner: "{{ supervisor_user }}"
     group: "{{ supervisor_user }}"
+    mode: 0644
   become_user: "{{ supervisor_user }}"
   notify:
     - restart flower
@@ -67,6 +67,7 @@
   become_user: "{{ forum_user }}"
   environment: "{{ forum_base_env }}"
   when: migrate_db is defined and migrate_db|lower == "yes"
+  run_once: yes
   tags:
     - migrate
     - migrate:db
@@ -78,6 +79,7 @@
   become_user: "{{ forum_user }}"
   environment: "{{ forum_base_env }}"
   when: migrate_db is defined and migrate_db|lower == "yes" and FORUM_REBUILD_INDEX|bool
+  run_once: yes
   tags:
     - migrate
     - migrate:db
---
#
# edX Configuration
#
# code style: https://openedx.atlassian.net/wiki/display/OpenOPS/Ansible+Code+Conventions
harprofiler_role_name: harprofiler
harprofiler_user: "harprofiler"
harprofiler_github_url: https://github.com/edx/harprofiler
harprofiler_version: master
harprofiler_dir: /edx/app/harprofiler
harprofiler_venv_dir: "{{ harprofiler_dir }}/venvs/harprofiler"
harprofiler_validation_script: validate_harprofiler_install.sh
---
dependencies:
- common
- browsers
- oraclejdk
- browsermob-proxy
---
# Installs the harprofiler
- name: create harprofiler user
user:
name: "{{ harprofiler_user }}"
createhome: no
home: "{{ harprofiler_dir }}"
shell: /bin/bash
- name: create harprofiler repo
file:
path: "{{ harprofiler_dir }}"
state: directory
owner: "{{ harprofiler_user }}"
group: "{{ common_web_group }}"
mode: 0755
- name: check out the harprofiler
git:
dest: "{{ harprofiler_dir }}"
repo: "{{ harprofiler_github_url }}"
version: "{{ harprofiler_version }}"
accept_hostkey: yes
become_user: "{{ harprofiler_user }}"
- name: set bashrc for harprofiler user
template:
src: bashrc.j2
dest: "{{ harprofiler_dir }}/.bashrc"
owner: "{{ harprofiler_user }}"
mode: 0755
- name: install requirements
pip:
requirements: "{{ harprofiler_dir }}/requirements.txt"
virtualenv: "{{ harprofiler_venv_dir }}"
become_user: "{{ harprofiler_user }}"
- name: update config file
# harprofiler ships with a default config file. Doing a line-replace for the default
# configuration that does not match what this machine will have
lineinfile:
dest: "{{ harprofiler_dir }}/config.yaml"
regexp: "browsermob_dir"
line: "browsermob_dir: /usr/local"
state: present
- name: create validation shell script
template:
owner: "{{ harprofiler_user }}"
src: validate_harprofiler_install.sh.j2
dest: "{{ harprofiler_dir }}/{{ harprofiler_validation_script }}"
mode: 0755
become_user: "{{ harprofiler_user }}"
- name: test install
shell: "./{{ harprofiler_validation_script }}"
args:
chdir: "{{ harprofiler_dir }}"
become_user: "{{ harprofiler_user }}"
export DISPLAY=:1
source {{ harprofiler_venv_dir }}/bin/activate
#!/usr/bin/env bash
# This script confirms that harprofiler can successfully run on the
# target machine.
source {{ harprofiler_venv_dir }}/bin/activate
cd {{ harprofiler_dir }}
python harprofiler.py
---
#
# edX Configuration
#
# github: https://github.com/edx/configuration
# wiki: https://github.com/edx/configuration/wiki
# code style: https://github.com/edx/configuration/wiki/Ansible-Coding-Conventions
# license: https://github.com/edx/configuration/blob/master/LICENSE.TXT
#
##
# Defaults for role harstorage
#
#
# vars are namespaced with the module name.
#
harstorage_role_name: harstorage
harstorage_user: '{{ harstorage_role_name }}'
harstorage_home: '{{ COMMON_APP_DIR }}/{{ harstorage_role_name }}'
harstorage_code_dir: '{{ harstorage_home }}/{{ harstorage_role_name }}'
harstorage_venv_dir: '{{ harstorage_home }}/venvs/{{ harstorage_role_name }}'
harstorage_bin_dir: '{{ harstorage_home }}/bin'
harstorage_etc: '/edx/etc/harstorage'
# Source Code
HARSTORAGE_REPOS:
- PROTOCOL: https
DOMAIN: github.com
PATH: edx
REPO: harstorage
VERSION: e0d/update-requirements
DESTINATION: '{{ harstorage_code_dir }}'
#
# OS packages
#
harstorage_debian_pkgs:
- lib32stdc++6
harstorage_pagespeed_binary: "https://storage.googleapis.com/google-code-archive-downloads/v2/code.google.com/harstorage/pagespeed_bin"
harstorage_python_pkgs:
- { name: "pylons", version: "1.0.2"}
- { name: "webob", version: "1.5.1"}
- { name: "pymongo", version: "3.2.1"}
- { name: "PasteScript", version: "1.7.5"}
harstorage_redhat_pkgs: []
harstorage_port: "5000"
harstorage_host: "0.0.0.0"
harstorage_version: "1.0"
# mongo packages
mongo_port: "27017"
mongo_repl_set: "repl1"
mongo_admin_user: "admin"
mongo_admin_password: "admin"
harstorage_gunicorn_hosts:
- 127.0.0.1
harstorage_gunicorn_port: '{{ harstorage_port }}'
HARSTORAGE_HOSTNAME: '~^((stage|prod)-)?harstorage.*'
HARSTORAGE_NGINX_PORT: 18170
HARSTORAGE_SSL_NGINX_PORT: 48170
---
#
# edX Configuration
#
# github: https://github.com/edx/configuration
# wiki: https://github.com/edx/configuration/wiki
# code style: https://github.com/edx/configuration/wiki/Ansible-Coding-Conventions
# license: https://github.com/edx/configuration/blob/master/LICENSE.TXT
#
##
# Role includes for role harstorage
#
# Example:
#
# dependencies:
# - {
# role: my_role
# my_role_var0: "foo"
# my_role_var1: "bar"
# }
dependencies:
- common
- supervisor
- role: edx_service
edx_service_name: "{{ harstorage_role_name }}"
edx_service_repos: "{{ HARSTORAGE_REPOS }}"
edx_service_user: "{{ harstorage_user }}"
edx_service_home: "{{ harstorage_home }}"
edx_service_packages:
debian: "{{ harstorage_debian_pkgs }}"
redhat: "{{ harstorage_redhat_pkgs }}"
---
#
# edX Configuration
#
# github: https://github.com/edx/configuration
# wiki: https://github.com/edx/configuration/wiki
# code style: https://github.com/edx/configuration/wiki/Ansible-Coding-Conventions
# license: https://github.com/edx/configuration/blob/master/LICENSE.TXT
#
#
#
# Tasks for role harstorage
#
# Overview:
#
#
# Dependencies:
#
#
# Example play:
#
#
- name: install python packages
pip:
name: "{{ item.name }}"
version: "{{ item.version }}"
virtualenv: "{{ harstorage_venv_dir }}"
virtualenv_command: virtualenv
tags:
- install
- install:app-requirements
become_user: "{{ harstorage_user }}"
with_items: "{{ harstorage_python_pkgs }}"
- name: create directories
file:
path: "{{ item }}"
owner: "{{ harstorage_user }}"
group: "{{ harstorage_user }}"
state: directory
mode: 0755
tags:
- install
- install:configuration
with_items:
- "{{ harstorage_etc }}"
- "{{ harstorage_bin_dir }}"
- name: ensure common web user can write to /edx/var/harstorage
file:
path: "{{ COMMON_DATA_DIR }}/{{ harstorage_user }}"
state: directory
mode: 0775
tags:
- install
- install:configuration
- name: download pagespeed
get_url:
url: "{{ harstorage_pagespeed_binary }}"
dest: "{{ harstorage_bin_dir }}"
mode: "0755"
owner: "{{ harstorage_user }}"
- name: setup the harstorage production.ini file
template:
src: '.{{ harstorage_etc }}/production.ini.j2'
dest: '{{ harstorage_etc }}/production.ini'
owner: '{{ harstorage_user }}'
group: '{{ harstorage_user }}'
mode: 0644
tags:
- install
- install:configuration
- name: install harstorage
command: "{{ harstorage_venv_dir }}/bin/python ./setup.py install"
args:
chdir: "{{ harstorage_code_dir }}"
tags:
- install
- install:code
- name: apply config
command: "{{ harstorage_venv_dir }}/bin/paster setup-app {{ harstorage_etc }}/production.ini"
args:
chdir: "{{ harstorage_code_dir }}"
tags:
- install
- install:configuration
- name: write supervisor wrapper script
template:
src: edx/app/harstorage/harstorage.sh.j2
dest: "{{ harstorage_home }}/{{ harstorage_role_name }}.sh"
mode: 0650
owner: "{{ supervisor_user }}"
group: "{{ common_web_user }}"
- name: write supervisord config
template:
src: edx/app/supervisor/conf.d.available/harstorage.conf.j2
dest: "{{ supervisor_available_dir }}/{{ harstorage_role_name }}.conf"
owner: "{{ supervisor_user }}"
group: "{{ common_web_user }}"
mode: 0644
- name: enable supervisor script
file:
src: "{{ supervisor_available_dir }}/{{ harstorage_role_name }}.conf"
dest: "{{ supervisor_cfg_dir }}/{{ harstorage_role_name }}.conf"
state: link
force: yes
when: not disable_edx_services
- name: update supervisor configuration
shell: "{{ supervisor_ctl }} -c {{ supervisor_cfg }} update"
when: not disable_edx_services
- name: Copying nginx configs for harstorage
template:
src: "edx/app/nginx/sites-available/harstorage.j2"
dest: "{{ nginx_sites_available_dir }}/harstorage"
owner: root
group: "{{ common_web_user }}"
mode: 0640
notify: reload nginx
tags:
- install
- install:vhosts
- name: Creating nginx config links for harstorage
file:
src: "{{ nginx_sites_available_dir }}/harstorage"
dest: "{{ nginx_sites_enabled_dir }}/harstorage"
state: link
owner: root
group: root
notify: reload nginx
tags:
- install
- install:vhosts
#
# {{ ansible_managed }}
#
{% if nginx_default_sites is defined and "harstorage" in nginx_default_sites %}
{% set default_site = "default_server" %}
{% else %}
{% set default_site = "" %}
{% endif %}
upstream harstorage_app_server {
{% for host in harstorage_gunicorn_hosts %}
server {{ host }}:{{ harstorage_gunicorn_port }} fail_timeout=0;
{% endfor %}
}
server {
server_name {{ HARSTORAGE_HOSTNAME }};
{% if NGINX_ENABLE_SSL %}
listen {{ HARSTORAGE_NGINX_PORT }} {{ default_site }};
listen {{ HARSTORAGE_SSL_NGINX_PORT }} ssl;
ssl_certificate /etc/ssl/certs/{{ NGINX_SSL_CERTIFICATE|basename }};
ssl_certificate_key /etc/ssl/private/{{ NGINX_SSL_KEY|basename }};
# request the browser to use SSL for all connections
add_header Strict-Transport-Security "max-age=31536000; includeSubDomains";
{% else %}
listen {{ HARSTORAGE_NGINX_PORT }} {{ default_site }};
{% endif %}
location ~ ^/static/(?P<file>.*) {
root {{ COMMON_DATA_DIR }}/{{ harstorage_role_name }};
try_files /staticfiles/$file =404;
}
location / {
try_files $uri @proxy_to_app;
}
{% if NGINX_ROBOT_RULES|length > 0 %}
location /robots.txt {
root {{ nginx_app_dir }};
try_files $uri /robots.txt =404;
}
{% endif %}
location @proxy_to_app {
{% if NGINX_SET_X_FORWARDED_HEADERS %}
proxy_set_header X-Forwarded-Proto $scheme;
proxy_set_header X-Forwarded-Port $server_port;
proxy_set_header X-Forwarded-For $remote_addr;
{% else %}
proxy_set_header X-Forwarded-Proto $http_x_forwarded_proto;
proxy_set_header X-Forwarded-Port $http_x_forwarded_port;
proxy_set_header X-Forwarded-For $http_x_forwarded_for;
{% endif %}
proxy_set_header Host $http_host;
proxy_redirect off;
proxy_pass http://harstorage_app_server;
}
# Forward to HTTPS if we're an HTTP request...
if ($http_x_forwarded_proto = "http") {
set $do_redirect "true";
}
# Run our actual redirect...
if ($do_redirect = "true") {
rewrite ^ https://$host$request_uri? permanent;
}
}
#
# harstorage - Pylons development environment configuration
#
# The %(here)s variable will be replaced with the parent directory of this file
#
[DEFAULT]
debug = false
[server:main]
use = egg:Paste#http
host = {{ harstorage_host }}
port = {{ harstorage_port }}
[app:main]
use = egg:harstorage
full_stack = true
static_files = true
temp_store = {{ COMMON_DATA_DIR }}/{{ harstorage_user }}
bin_store = {{ harstorage_bin_dir }}
ps_enabled = true
static_version = {{ harstorage_version }}
mongo_replicate = false
mongo_replset = {{ mongo_repl_set }}
mongo_host = localhost
mongo_port = {{ mongo_port }}
mongo_db = {{ harstorage_role_name }}
mongo_auth = false
mongo_user = {{ mongo_admin_user }}
mongo_pswd = {{ mongo_admin_password }}
cache_dir = {{ COMMON_DATA_DIR }}/{{ harstorage_user }}
beaker.session.key = harstorage
beaker.session.secret = somesecret
# Logging configuration
[loggers]
keys = root
[handlers]
keys = console
[formatters]
keys = generic
[logger_root]
level = INFO
handlers = console
[handler_console]
class = StreamHandler
args = (sys.stderr,)
level = NOTSET
formatter = generic
[formatter_generic]
format = %(asctime)s,%(msecs)03d %(levelname)-5.5s [%(name)s] [%(threadName)s] %(message)s
datefmt = %H:%M:%S
@@ -70,6 +70,7 @@
   become_user: "{{ insights_user }}"
   environment: "{{ insights_environment }}"
   when: migrate_db is defined and migrate_db|lower == "yes"
+  run_once: yes
   tags:
     - migrate
     - migrate:db
......
@@ -223,7 +223,7 @@ JENKINS_GITHUB_CONFIG: ''
 build_jenkins_hipchat_room: 'testeng'
 # ec2
-build_jenkins_instance_cap: '250'
+build_jenkins_instance_cap: '500'
 # seed
 build_jenkins_seed_name: 'manually_seed_one_job'
......
@@ -53,8 +53,6 @@ jenkins_common_main_quiet_period: 5
 jenkins_common_main_scm_retry: 2
 jenkins_common_main_disable_remember: true
 jenkins_common_main_env_vars:
-  - NAME: 'BROWSERMOB_PROXY_PORT'
-    VALUE: '9090'
   - NAME: 'GITHUB_OWNER_WHITELIST'
     VALUE: '{{ JENKINS_MAIN_GITHUB_OWNER_WHITELIST }}'
 jenkins_common_main_executable: '/bin/bash'
......
@@ -15,5 +15,3 @@ jenkins_debian_pkgs:
 # packer direct download URL
 packer_url: "https://releases.hashicorp.com/packer/0.8.6/packer_0.8.6_linux_amd64.zip"
-jenkins_worker_key_url: null
@@ -9,10 +9,6 @@ dependencies:
 - role: edxapp_common
   when: platform_worker is defined
-# dependencies for sitespeed worker
-- role: sitespeedio
-  when: sitespeed_worker is defined
 # dependencies for android worker
 - role: android_sdk
   when: android_worker is defined
......
@@ -2,7 +2,6 @@
 # jenkins
 #
 # Provision a Jenkins worker instance.
-# - When sitespeed_worker is set, only apply the configuration necessary for running sitespeed.io
 # - When platform_worker is set, the resulting instance can run edx-platform tests
 # All jenkins workers
@@ -22,7 +21,5 @@
 - include: test.yml
 - include: test_platform_worker.yml
   when: platform_worker is defined
-- include: test_sitespeed_worker.yml
-  when: sitespeed_worker is defined
 - include: test_android_worker.yml
   when: android_worker is defined
 ---
-# Requests library is required for both the github status
-# script, as well as the sitespeed cookie script.
+# Requests library is required for the github status script.
 - name: Install requests Python library
   pip: name=requests state=present
@@ -21,7 +21,7 @@
   user: "{{ jenkins_user }}"
   state: present
   key: "{{ jenkins_worker_key_url }}"
-  when: jenkins_worker_key_url
+  when: jenkins_worker_key_url is defined
   ignore_errors: yes
 - name: Set key permissions
......
@@ -16,8 +16,7 @@
 ### Tests ###
 # Firefox has a specific version, not the latest. This test also ensures it was not
-# pulled in via dependency or misuse/clobbering due to the sitespeed variable, which uses
-# the latest firefox.
+# pulled in via dependency or misuse/clobbering.
 - name: Verify firefox version
   shell: firefox --version
   register: firefox_version
......
---
# Tests for this role
### Tests ###
# Sitespeed workers should have the latest version of firefox
# Lite test. Ensures we are not using
# the version of firefox specified in a different file.
- name: Verify firefox version
shell: firefox --version
register: firefox_version
- assert:
that:
- "'28.0' not in firefox_version.stdout"
@@ -89,7 +89,7 @@
   delay: 30
   with_nested:
     - "{{ ec2.instances }}"
-    - ['studio', 'ecommerce', 'preview', 'discovery', 'credentials']
+    - ['studio', 'ecommerce', 'preview', 'discovery', 'credentials', 'veda']
 - name: Add new instance to host group
   local_action:
......
@@ -15,7 +15,9 @@ mongo_journal_dir: "{{ COMMON_DATA_DIR }}/mongo/mongodb/journal"
 mongo_user: mongodb
 MONGODB_REPO: "deb http://repo.mongodb.org/apt/ubuntu {{ ansible_distribution_release }}/mongodb-org/3.2 multiverse"
-MONGODB_APT_KEY: "7F0CEB10"
+# Key id taken from https://docs.mongodb.com/v3.2/tutorial/install-mongodb-on-ubuntu/
+# Changes with each major mongo release, so must be updated with Mongo upgrade
+MONGODB_APT_KEY: "EA312927"
 MONGODB_APT_KEYSERVER: "keyserver.ubuntu.com"
 mongodb_debian_pkgs:
......
@@ -49,7 +49,7 @@ NGINX_LOG_FORMAT_NAME: 'p_combined'
 # headers to reflect the properties of the incoming request.
 NGINX_SET_X_FORWARDED_HEADERS: False
 # Increasing these values allows studio to process more complex operations.
 # Default timeouts limit CMS connections to 60 seconds.
 NGINX_CMS_PROXY_CONNECT_TIMEOUT: !!null
@@ -111,8 +111,6 @@ NGINX_EDXAPP_ERROR_PAGES:
   "504": "{{ nginx_default_error_page }}"
 CMS_HOSTNAME: '~^((stage|prod)-)?studio.*'
-ECOMMERCE_HOSTNAME: '~^((stage|prod)-)?ecommerce.*'
-CREDENTIALS_HOSTNAME: '~^((stage|prod)-)?credentials.*'
 nginx_template_dir: "edx/app/nginx/sites-available"
......
#
# {{ ansible_managed }}
#
{% if "credentials" in nginx_default_sites %}
{% set default_site = "default_server" %}
{% else %}
{% set default_site = "" %}
{% endif %}
upstream credentials_app_server {
{% for host in NGINX_CREDENTIALS_GUNICORN_HOSTS %}
server {{ host }}:{{ credentials_gunicorn_port }} fail_timeout=0;
{% endfor %}
}
server {
server_name {{ CREDENTIALS_HOSTNAME }};
{% if NGINX_ENABLE_SSL %}
listen {{ CREDENTIALS_NGINX_PORT }} {{ default_site }};
listen {{ CREDENTIALS_SSL_NGINX_PORT }} ssl;
{% include "common-settings.j2" %}
ssl_certificate /etc/ssl/certs/{{ NGINX_SSL_CERTIFICATE|basename }};
ssl_certificate_key /etc/ssl/private/{{ NGINX_SSL_KEY|basename }};
{% else %}
listen {{ CREDENTIALS_NGINX_PORT }} {{ default_site }};
{% endif %}
{% if NGINX_ENABLE_SSL or NGINX_REDIRECT_TO_HTTPS %}
# request the browser to use SSL for all connections
add_header Strict-Transport-Security "max-age={{ NGINX_HSTS_MAX_AGE }}";
{% endif %}
# Prevent invalid display of courseware in IE 10+ with high privacy settings
add_header P3P '{{ NGINX_P3P_MESSAGE }}';
location ~ ^/static/(?P<file>.*) {
root {{ COMMON_DATA_DIR }}/{{ credentials_service_name }};
try_files /staticfiles/$file =404;
}
location / {
try_files $uri @proxy_to_app;
}
{% include "robots.j2" %}
location @proxy_to_app {
{% if NGINX_SET_X_FORWARDED_HEADERS %}
proxy_set_header X-Forwarded-Proto $scheme;
proxy_set_header X-Forwarded-Port $server_port;
proxy_set_header X-Forwarded-For $remote_addr;
{% else %}
proxy_set_header X-Forwarded-Proto $http_x_forwarded_proto;
proxy_set_header X-Forwarded-Port $http_x_forwarded_port;
proxy_set_header X-Forwarded-For $http_x_forwarded_for;
{% endif %}
# newrelic-specific header records the time when nginx handles a request.
proxy_set_header X-Queue-Start "t=${msec}";
proxy_set_header Host $http_host;
proxy_redirect off;
proxy_pass http://credentials_app_server;
}
# Nginx does not support nested conditions or OR conditions, so
# there is an unfortunate mix of conditionals here.
{% if NGINX_REDIRECT_TO_HTTPS %}
{% if NGINX_HTTPS_REDIRECT_STRATEGY == "scheme" %}
# Redirect http to https over single instance
if ($scheme != "https")
{
set $do_redirect_to_https "true";
}
{% elif NGINX_HTTPS_REDIRECT_STRATEGY == "forward_for_proto" %}
# Forward to HTTPS if we're an HTTP request... and the server is behind ELB
if ($http_x_forwarded_proto = "http")
{
set $do_redirect_to_https "true";
}
{% endif %}
# Execute the actual redirect
if ($do_redirect_to_https = "true")
{
return 301 https://$host$request_uri;
}
{% endif %}
}
@@ -45,6 +45,13 @@ oauth_client_setup_oauth2_clients:
     secret: "{{ DISCOVERY_SOCIAL_AUTH_EDX_OIDC_SECRET | default('None') }}",
     logout_uri: "{{ DISCOVERY_LOGOUT_URL | default('None') }}"
   }
+  - {
+    name: "{{ veda_web_frontend_service_name | default('None') }}",
+    url_root: "{{ VEDA_WEB_FRONTEND_OAUTH2_URL | default('None') }}",
+    id: "{{ VEDA_WEB_FRONTEND_SOCIAL_AUTH_EDX_OIDC_KEY | default('None') }}",
+    secret: "{{ VEDA_WEB_FRONTEND_SOCIAL_AUTH_EDX_OIDC_SECRET | default('None') }}",
+    logout_uri: "{{ VEDA_WEB_FRONTEND_LOGOUT_URL | default('None') }}"
+  }
 #
 # OS packages
......
@@ -4,6 +4,10 @@
 - name: Update apt-get
   raw: apt-get update -qq
+  register: python_update_result
+  until: python_update_result.rc == 0
+  retries: 10
+  delay: 10
 - name: Install packages
   raw: "apt-get install -qq {{ item }}"
......
@@ -19,7 +19,6 @@ REDIS_MAX_MEMORY_POLICY: "noeviction"
 # vars are namespaced with the module name.
 #
 redis_role_name: redis
-redis_ppa: "ppa:chris-lea/redis-server"
 redis_user: redis
 redis_group: redis
......
@@ -21,16 +21,12 @@
 #
 #
-- name: Add the redis ppa
-  apt_repository:
-    repo: "{{ redis_ppa }}"
-    state: present
 - name: Install redis system packages
   apt:
     name: "{{ item }}"
     install_recommends: yes
     state: present
+    update_cache: yes
   with_items: "{{ redis_debian_pkgs }}"
   notify:
     - reload redis
......
@@ -35,51 +35,17 @@
 # include /path/to/local.conf
 # include /path/to/other.conf
-################################## NETWORK #####################################
-# By default, if no "bind" configuration directive is specified, Redis listens
-# for connections from all the network interfaces available on the server.
-# It is possible to listen to just one or multiple selected interfaces using
-# the "bind" configuration directive, followed by one or more IP addresses.
-#
-# Examples:
-#
-# bind 192.168.1.100 10.0.0.1
-# bind 127.0.0.1 ::1
-#
-# ~~~ WARNING ~~~ If the computer running Redis is directly exposed to the
-# internet, binding to all the interfaces is dangerous and will expose the
-# instance to everybody on the internet. So by default we uncomment the
-# following bind directive, that will force Redis to listen only into
-# the IPv4 loopback interface address (this means Redis will be able to
-# accept connections only from clients running into the same computer it
-# is running).
-#
-# IF YOU ARE SURE YOU WANT YOUR INSTANCE TO LISTEN TO ALL THE INTERFACES
-# JUST COMMENT THE FOLLOWING LINE.
-# ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
-bind {{ REDIS_BIND_IP }}
-# Protected mode is a layer of security protection, in order to avoid that
-# Redis instances left open on the internet are accessed and exploited.
-#
-# When protected mode is on and if:
-#
-# 1) The server is not binding explicitly to a set of addresses using the
-#    "bind" directive.
-# 2) No password is configured.
-#
-# The server only accepts connections from clients connecting from the
-# IPv4 and IPv6 loopback addresses 127.0.0.1 and ::1, and from Unix domain
-# sockets.
-#
-# By default protected mode is enabled. You should disable it only if
-# you are sure you want clients from other hosts to connect to Redis
-# even if no authentication is configured, nor a specific set of interfaces
-# are explicitly listed using the "bind" directive.
-protected-mode yes
-# Accept connections on the specified port, default is 6379 (IANA #815344).
+################################ GENERAL #####################################
+# By default Redis does not run as a daemon. Use 'yes' if you need it.
+# Note that Redis will write a pid file in /var/run/redis.pid when daemonized.
+daemonize yes
+# When running daemonized, Redis writes a pid file in /var/run/redis.pid by
+# default. You can specify a custom pid file location here.
+pidfile /var/run/redis/redis-server.pid
+# Accept connections on the specified port, default is 6379.
 # If port 0 is specified Redis will not listen on a TCP socket.
 port 6379
@@ -92,14 +58,22 @@ port 6379
 # in order to get the desired effect.
 tcp-backlog 511
-# Unix socket.
+# By default Redis listens for connections from all the network interfaces
+# available on the server. It is possible to listen to just one or multiple
+# interfaces using the "bind" configuration directive, followed by one or
+# more IP addresses.
+#
+# Examples:
+#
+# bind 192.168.1.100 10.0.0.1
+bind {{ REDIS_BIND_IP }}
 # Specify the path for the Unix socket that will be used to listen for
 # incoming connections. There is no default, so Redis will not listen
 # on a unix socket when not specified.
 #
-# unixsocket /tmp/redis.sock
-unixsocketperm 755
+# unixsocket /var/run/redis/redis.sock
+# unixsocketperm 700
 # Close the connection after a client is idle for N seconds (0 to disable)
 timeout 0
@@ -117,38 +91,9 @@ timeout 0
 # Note that to close the connection the double of the time is needed.
 # On other kernels the period depends on the kernel configuration.
 #
-# A reasonable value for this option is 300 seconds, which is the new
-# Redis default starting with Redis 3.2.1.
+# A reasonable value for this option is 60 seconds.
 tcp-keepalive 0
-################################# GENERAL #####################################
-# By default Redis does not run as a daemon. Use 'yes' if you need it.
-# Note that Redis will write a pid file in /var/run/redis.pid when daemonized.
-daemonize yes
-# If you run Redis from upstart or systemd, Redis can interact with your
-# supervision tree. Options:
-#   supervised no      - no supervision interaction
-#   supervised upstart - signal upstart by putting Redis into SIGSTOP mode
-#   supervised systemd - signal systemd by writing READY=1 to $NOTIFY_SOCKET
-#   supervised auto    - detect upstart or systemd method based on
-#                        UPSTART_JOB or NOTIFY_SOCKET environment variables
-# Note: these supervision methods only signal "process is ready."
-#       They do not enable continuous liveness pings back to your supervisor.
-supervised no
-# If a pid file is specified, Redis writes it where specified at startup
-# and removes it at exit.
-#
-# When the server runs non daemonized, no pid file is created if none is
-# specified in the configuration. When the server is daemonized, the pid file
-# is used even if not specified, defaulting to "/var/run/redis.pid".
-#
-# Creating a pid file is best effort: if Redis is not able to create it
-# nothing bad happens, the server will start and run normally.
-pidfile /var/run/redis/redis.pid
 # Specify the server verbosity level.
 # This can be one of:
 # debug (a lot of information, useful for development/testing)
@@ -160,11 +105,11 @@ loglevel notice
 # Specify the log file name. Also the empty string can be used to force
 # Redis to log on the standard output. Note that if you use standard
 # output for logging but daemonize, logs will be sent to /dev/null
-logfile ""
+logfile /var/log/redis/redis-server.log
 # To enable logging to the system logger, just set 'syslog-enabled' to yes,
 # and optionally update the other syslog parameters to suit your needs.
-syslog-enabled yes
+# syslog-enabled no
 # Specify the syslog identity.
 # syslog-ident redis
@@ -234,7 +179,7 @@ rdbcompression yes
 rdbchecksum yes
 # The filename where to dump the DB
-dbfilename redis.rdb
+dbfilename dump.rdb
 # The working directory.
 #
@@ -435,35 +380,6 @@ slave-priority 100
 # By default min-slaves-to-write is set to 0 (feature disabled) and
 # min-slaves-max-lag is set to 10.
-# A Redis master is able to list the address and port of the attached
-# slaves in different ways. For example the "INFO replication" section
-# offers this information, which is used, among other tools, by
-# Redis Sentinel in order to discover slave instances.
-# Another place where this info is available is in the output of the
-# "ROLE" command of a master.
-#
-# The listed IP and address normally reported by a slave is obtained
-# in the following way:
-#
-#   IP: The address is auto detected by checking the peer address
-#   of the socket used by the slave to connect with the master.
-#
-#   Port: The port is communicated by the slave during the replication
-#   handshake, and is normally the port that the slave is using to
-#   listen for connections.
-#
-# However when port forwarding or Network Address Translation (NAT) is
-# used, the slave may be actually reachable via different IP and port
-# pairs. The following two options can be used by a slave in order to
-# report to its master a specific set of IP and port, so that both INFO
-# and ROLE will report those values.
-#
-# There is no need to use both the options if you need to override just
-# the port or the IP address.
-#
-# slave-announce-ip 5.5.5.5
-# slave-announce-port 1234
 ################################## SECURITY ###################################
 # Require clients to issue AUTH <PASSWORD> before processing any other
@@ -918,36 +834,11 @@ notify-keyspace-events ""
 hash-max-ziplist-entries 512
 hash-max-ziplist-value 64
-# Lists are also encoded in a special way to save a lot of space.
-# The number of entries allowed per internal list node can be specified
-# as a fixed maximum size or a maximum number of elements.
-# For a fixed maximum size, use -5 through -1, meaning:
-# -5: max size: 64 Kb  <-- not recommended for normal workloads
-# -4: max size: 32 Kb  <-- not recommended
-# -3: max size: 16 Kb  <-- probably not recommended
-# -2: max size: 8 Kb   <-- good
-# -1: max size: 4 Kb   <-- good
-# Positive numbers mean store up to _exactly_ that number of elements
-# per list node.
-# The highest performing option is usually -2 (8 Kb size) or -1 (4 Kb size),
-# but if your use case is unique, adjust the settings as necessary.
-list-max-ziplist-size -2
-# Lists may also be compressed.
-# Compress depth is the number of quicklist ziplist nodes from *each* side of
-# the list to *exclude* from compression. The head and tail of the list
-# are always uncompressed for fast push/pop operations. Settings are:
-# 0: disable all list compression
-# 1: depth 1 means "don't start compressing until after 1 node into the list,
-#    going from either the head or tail"
-#    So: [head]->node->node->...->node->[tail]
-#    [head], [tail] will always be uncompressed; inner nodes will compress.
-# 2: [head]->[next]->node->node->...->node->[prev]->[tail]
-#    2 here means: don't compress head or head->next or tail->prev or tail,
-#    but compress all nodes between them.
-# 3: [head]->[next]->[next]->node->node->...->node->[prev]->[prev]->[tail]
-#    etc.
-list-compress-depth 0
+# Similarly to hashes, small lists are also encoded in a special way in order
+# to save a lot of space. The special representation is only used when
+# you are under the following limits:
+list-max-ziplist-entries 512
+list-max-ziplist-value 64
 # Sets have a special encoding in just one case: when a set is composed
 # of just strings that happen to be integers in radix 10 in the range
......
@@ -39,3 +39,6 @@ server_utils_debian_pkgs:
   - netcat
 server_utils_redhat_pkgs: []
+SERVER_UTILS_EDX_PPA_KEY_ID: "69464050"
+SERVER_UTILS_EDX_PPA_KEY_SERVER: "keyserver.ubuntu.com"
@@ -21,6 +21,14 @@
 #
 #
+- name: Check for expired edx key
+  # the output is piped through grep, so this needs the shell module,
+  # not command (command does not interpret pipes)
+  shell: "apt-key list | grep {{ SERVER_UTILS_EDX_PPA_KEY_ID }}"
+  register: ppa_key_status
+- name: Renew expired edx key
+  command: "apt-key adv --keyserver {{ SERVER_UTILS_EDX_PPA_KEY_SERVER }} --recv-keys {{ SERVER_UTILS_EDX_PPA_KEY_ID }}"
+  when: "'expired' in ppa_key_status.stdout"
 - name: Install ubuntu system packages
   apt:
     name: "{{ item }}"
......
Simple theme
############
This role allows you to deploy a basic theme at deploy time. The theme can be
customized via ansible variables in the following ways:
- to redefine SASS variables (like colors)
- to include some static files provided in a local directory (e.g. logo)
- to download some static files from URLs (e.g. logo, favicon)
- in addition the theme can be based on an existing theme from a repository
This role will be included by edxapp. The main use case involves deploying a
theme as part of deploying an instance. The new theme will be enabled when
the instance starts.
Configuration
*************
- The theme name for the deployed theme will be the one specified in EDXAPP_DEFAULT_SITE_THEME
- The theme will be deployed to a directory of that name.
You have the option to use a skeleton theme. This is the base theme that will be
copied to the target machine, and modified afterwards via the customizations
applied by this role's variables.
Example: if you have a theme in https://github.com/open-craft/edx-theme/tree/harvard-dcex:
- Set EDXAPP_COMPREHENSIVE_THEME_SOURCE_REPO: "https://github.com/open-craft/edx-theme/"
- and EDXAPP_COMPREHENSIVE_THEME_VERSION: "harvard-dcex"
If you don't use a skeleton theme, the deployed theme will just contain the SASS
variables definitions you provide through the other variables, and the static files
you provide. For simple changes like colors+logo+image this will be enough.
Static files (like logo and favicon) will be added from the following sources and in
the following order:
- If neither a skeleton theme nor static files are provided, the theme will have no static files
- If a skeleton theme was provided, its static files will be used
- Local files from SIMPLETHEME_STATIC_FILES_DIR will be copied, replacing previous ones
- Files from SIMPLETHEME_STATIC_FILES_URLS will be downloaded, replacing previous ones
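
As a minimal sketch of how the two sources combine (the paths and URL below are
hypothetical examples, not defaults of this role): local files are copied first,
then downloads overwrite any file with the same destination path.

```yaml
# Hypothetical example -- directory and URL are placeholders.
SIMPLETHEME_ENABLE_DEPLOY: true
# Copied first: e.g. files/my_static/lms/images/logo.png -> lms/static/images/logo.png
SIMPLETHEME_STATIC_FILES_DIR: "{{ role_path }}/files/my_static"
# Downloaded afterwards, replacing any previously copied file at the same dest
SIMPLETHEME_STATIC_FILES_URLS:
  - url: https://example.com/favicon.ico
    dest: lms/static/images/favicon.ico
```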
Testing
*******
The intended use of this role is to be run as part of deploy, not after it.
There are other cases in which you may want to run the role independently (after
the instance is running):
- When testing this role.
- If you plan to use it to deploy theme changes. Be aware that this will
overwrite the old theme.
You can use ansible-playbook to test this role independently.
It requires you to pass more variables manually, because they are only
available when running inside the "edxapp" role. For instance, you might need
to pass edxapp_user (e.g. "vagrant" if you test inside devstack).
Example script to test this role, to be run from devstack, from "vagrant" user:
- export PYTHONUNBUFFERED=1
- source /edx/app/edx_ansible/venvs/edx_ansible/bin/activate
- cd /edx/app/edx_ansible/edx_ansible/playbooks
- ansible-playbook -i localhost, -c local run_role.yml \
    -e role=simple_theme \
    -e configuration_version=master \
    -e edx_platform_version=master \
    -e EDXAPP_DEFAULT_SITE_THEME=mytheme2 \
    -e '{"SIMPLETHEME_SASS_OVERRIDES": [{"variable": "main-color", "value":"#823456"}, {"variable": "action-primary-bg", "value":"$main-color"}]}' \
    -e EDXAPP_COMPREHENSIVE_THEME_SOURCE_REPO="https://github.com/open-craft/edx-theme/" \
    -e EDXAPP_COMPREHENSIVE_THEME_VERSION="harvard-dcex" \
    -e edxapp_user=vagrant \
    -e common_web_group=www-data \
    -e SIMPLETHEME_ENABLE_DEPLOY=true \
    -e '{"SIMPLETHEME_STATIC_FILES_URLS": [{"url": "http://docs.ansible.com/ansible/latest/_static/images/logo_invert.png", "dest":"lms/static/images/logo.png"}, {"url": "http://docs.ansible.com/favicon.ico", "dest":"lms/static/images/favicon.ico"}]}' \
    -e '{"EDXAPP_COMPREHENSIVE_THEME_DIRS":["/edx/var/edxapp/themes"], "EDXAPP_ENABLE_COMPREHENSIVE_THEMING": true}'
Or, if you want to test the task as part of the deployment, change to role=edxapp,
and add --tags some-custom-tag-that-you-should-add-to-the-task
---
#
# edX Configuration
#
# github: https://github.com/edx/configuration
# wiki: https://openedx.atlassian.net/wiki/display/OpenOPS
# code style: https://openedx.atlassian.net/wiki/display/OpenOPS/Ansible+Code+Conventions
# license: https://github.com/edx/configuration/blob/master/LICENSE.TXT
#
#
# Simple theme. Creates a basic theme at deploy time.
#
# See documentation in README.rst
#
# This file contains the variables you'll need to pass to the role, and some
# example values.
# Skeleton theme. Check README.rst
# EDXAPP_COMPREHENSIVE_THEME_SOURCE_REPO
# EDXAPP_COMPREHENSIVE_THEME_VERSION
# Enable/disable deploy
# This flag isn't at the edxapp role because of https://github.com/ansible/ansible/issues/19472
SIMPLETHEME_ENABLE_DEPLOY: False
# This variable holds the main path where the role will copy files.
# EDXAPP_COMPREHENSIVE_THEME_DIRS.0 can be
# "/edx/var/edxapp/themes" or
# "/edx/app/edxapp/themes" or
# "/edx/app/edxapp/themes/edx-platform"
# or any other.
# If you have more than one theme dir, you'll need to override this internal variable
simpletheme_folder: "{{ EDXAPP_COMPREHENSIVE_THEME_DIRS.0 }}/{{ EDXAPP_DEFAULT_SITE_THEME }}"
# Define SASS variables
# Apart from giving direct values like '#123456', you may give values that use
# previously defined variables, like '$some-variable', as long as this variable
# is earlier in the list.
#
# Sample configuration:
# SIMPLETHEME_SASS_OVERRIDES:
# - variable: main-color
# value: '#123456'
# - variable: action-primary-bg
# value: '$main-color'
# - variable: action-primary-fg
# value: '#fff'
# - variable: link-color
# value: 'red'
# - variable: action-secondary-bg
# value: '#07f'
#
SIMPLETHEME_SASS_OVERRIDES: []
# Files from the specified directory will be copied to the static/ directory.
# This is mainly designed to include images and JS.
# Expected file structure is e.g.
# - lms
# - images
# - logo.png
# - favicon.ico
# - js
# - myscript.js
# - cms
# - images
# - logo.png
#
# Paths will be transformed like this:
# lms/images/logo.png → lms/static/images/logo.png
# lms/js/myscript.js → lms/static/js/myscript.js
# etc.
#
# Sample:
# SIMPLETHEME_STATIC_FILES_DIR: "{{ role_path }}/files/example_static_dir"
SIMPLETHEME_STATIC_FILES_DIR: ""
# These files will be downloaded and included in the static directory after the
# files from SIMPLETHEME_STATIC_FILES_DIR have been copied.
# Local paths must be relative, e.g. "lms/static/images/favicon.ico"
# Example which downloads logo and favicon:
# SIMPLETHEME_STATIC_FILES_URLS:
# - url: http://docs.ansible.com/ansible/latest/_static/images/logo_invert.png
# dest: lms/static/images/logo.png
# - url: http://docs.ansible.com/favicon.ico
# dest: lms/static/images/favicon.ico
SIMPLETHEME_STATIC_FILES_URLS: []
# This fragment will be inserted in _lms-overrides and will affect all pages
# Sample:
# SIMPLETHEME_EXTRA_SASS: |
# .header-global h1.logo a img {
# height: 50px;
# }
# .header-global.slim h2 {
# width: 60% !important;
# }
# .wrapper-footer {
# border-top: 3px solid $main-color;
# }
SIMPLETHEME_EXTRA_SASS: ""
@import 'lms/static/sass/discussion/lms-discussion-main';
@import '../lms-overrides';

@import 'lms/static/sass/lms-course';
@import 'lms-overrides';

@import 'lms/static/sass/lms-main-v1';
@import 'lms-overrides';

@import 'lms/static/sass/lms-main-v2';
@import 'lms-overrides';
# code style: https://openedx.atlassian.net/wiki/display/OpenOPS/Ansible+Code+Conventions
# license: https://github.com/edx/configuration/blob/master/LICENSE.TXT
#
##
# Role includes for role sitespeedio
#
dependencies:
- common
- role: oraclejdk
oraclejdk_version: "8u60"
oraclejdk_base: "jdk1.8.0_60"
oraclejdk_build: "b27"
oraclejdk_link: "/usr/lib/jvm/java-8-oracle"
dependencies: []
---
#
# edX Configuration
#
# github: https://github.com/edx/configuration
# wiki: https://openedx.atlassian.net/wiki/display/OpenOPS
# code style: https://openedx.atlassian.net/wiki/display/OpenOPS/Ansible+Code+Conventions
# license: https://github.com/edx/configuration/blob/master/LICENSE.TXT
#
#
#
# Simple theme
#
# See documentation in README.rst
# Require comprehensive theming
# EDXAPP_COMPREHENSIVE_THEME_DIRS.0 is usually "/edx/app/edxapp/themes"
- name: Require comprehensive theming to be enabled
  assert:
    that:
      - "EDXAPP_COMPREHENSIVE_THEME_DIRS | length > 0"
      - "EDXAPP_ENABLE_COMPREHENSIVE_THEMING"
    msg: "Simple-theme deployment requires comprehensive theming to be enabled"
- name: Require the theme name to be set
  assert:
    that:
      - "EDXAPP_DEFAULT_SITE_THEME != ''"
    msg: "Simple-theme needs to know the name of the deployed theme. Pass it in EDXAPP_DEFAULT_SITE_THEME"
- name: Check whether theme directory already exists
stat: path="{{ simpletheme_folder }}"
register: simpletheme_folder_stat
# Note that if a theme already exists in the destination directory, it won't be
# deleted or redownloaded. It would be better to redownload, but for that we
# need https://github.com/ansible/ansible-modules-core/issues/5292 to be fixed,
# or to implement a workaround.
- block:
- name: Download skeleton theme
git:
repo: "{{ EDXAPP_COMPREHENSIVE_THEME_SOURCE_REPO }}"
dest: "{{ simpletheme_folder }}"
version: "{{ EDXAPP_COMPREHENSIVE_THEME_VERSION | default('master') }}"
# force: yes # Disabled due to ansible bug, see above
# Done in a separate step because "git:" doesn't have owner/group parameters
- name: Adjust owner/group of downloaded skeleton theme
file:
dest: "{{ simpletheme_folder }}"
owner: "{{ edxapp_user }}"
group: "{{ common_web_group }}"
recurse: yes
when: EDXAPP_COMPREHENSIVE_THEME_SOURCE_REPO != "" and not simpletheme_folder_stat.stat.exists
# If no skeleton theme is provided, we still need some SASS files to include our SASS partials
- block:
- name: Create default skeleton (dirs)
file:
path: "{{ simpletheme_folder }}/{{ item.path }}"
state: directory
owner: "{{ edxapp_user }}"
group: "{{ common_web_group }}"
with_filetree: "../files/default_skeleton"
when: item.state == 'directory'
- name: Create default skeleton (files)
copy:
src: "{{ item.src }}"
dest: "{{ simpletheme_folder }}/{{ item.path }}"
owner: "{{ edxapp_user }}"
group: "{{ common_web_group }}"
with_filetree: "../files/default_skeleton"
when: item.state != 'directory'
when: EDXAPP_COMPREHENSIVE_THEME_SOURCE_REPO == "" and not simpletheme_folder_stat.stat.exists
# These are directories to hold the compiled templates included in this role
- name: Create directory to hold the theme and styles
file:
path: "{{ simpletheme_folder }}/{{ item }}"
state: directory
owner: "{{ edxapp_user }}"
group: "{{ common_web_group }}"
with_items:
- "."
- "lms/static/sass/partials/base"
- name: Compile the templates
template:
src: "{{ item }}.j2"
dest: "{{ simpletheme_folder }}/{{ item }}"
owner: "{{ edxapp_user }}"
group: "{{ common_web_group }}"
with_items:
# List of files from ./templates to be processed
- "lms/static/sass/partials/base/_variables.scss"
- "lms/static/sass/_lms-overrides.scss"
# Copying static files is done in two steps: create directories + copy files
# (while rewriting their paths to add "static/"). There could be a one-step solution,
# e.g. requesting with_filetree with depth 1 (if that is possible in Ansible).
# Note: with_fileglob doesn't take directories, but with_filetree does.
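# For illustration, the regex_replace in the tasks below rewrites relative paths
# like these (sample paths, not taken from any real theme):
#   lms/images/logo.png  ->  lms/static/images/logo.png
#   cms/js/myscript.js   ->  cms/static/js/myscript.js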
- block:
- name: Create directories for static files to be copied
file:
path: "{{ simpletheme_folder }}/{{ item.path | regex_replace('^([^/]+)/(.+)$','\\1/static/\\2') }}"
state: directory
owner: "{{ edxapp_user }}"
group: "{{ common_web_group }}"
with_filetree: "{{ SIMPLETHEME_STATIC_FILES_DIR }}"
when: item.state == 'directory'
- name: Copy static files (adding "static/")
copy:
src: "{{ item.src }}"
dest: "{{ simpletheme_folder }}/{{ item.path | regex_replace('^([^/]+)/(.+)$','\\1/static/\\2') }}"
owner: "{{ edxapp_user }}"
group: "{{ common_web_group }}"
with_filetree: "{{ SIMPLETHEME_STATIC_FILES_DIR }}"
when: item.state != 'directory'
when: SIMPLETHEME_STATIC_FILES_DIR != ""
# Downloading remote files is done in two steps: create directories + download each file.
# This step runs after the static files from SIMPLETHEME_STATIC_FILES_DIR have been
# copied, so downloaded files may overwrite the previously copied static files.
- block:
- name: Create directories for static files to be downloaded
file:
path: "{{ simpletheme_folder }}/{{ item.dest | dirname }}"
state: directory
owner: "{{ edxapp_user }}"
group: "{{ common_web_group }}"
with_items: "{{ SIMPLETHEME_STATIC_FILES_URLS }}"
- name: Download static files to be included in theme
get_url:
url: "{{ item.url }}"
dest: "{{ simpletheme_folder }}/{{ item.dest }}"
force: yes
owner: "{{ edxapp_user }}"
group: "{{ common_web_group }}"
with_items: "{{ SIMPLETHEME_STATIC_FILES_URLS }}"