Commit 848fd291 by Filippo Panessa

Merge conflict

parents 11ad0661 911ce6a3
......@@ -4,6 +4,6 @@ Configuration Pull Request
Make sure that the following steps are done before merging
- [ ] @devops team member has commented with :+1:
- [ ] are you adding any new default values that need to be overriden when this goes live?
- [ ] are you adding any new default values that need to be overridden when this goes live?
- [ ] Open a ticket (DEVOPS) to make sure that they have been added to secure vars.
- [ ] Add an entry to the CHANGELOG.
# Travis CI configuration file for running tests
language: python
python:
- "2.7"
branches:
only:
- master
python:
- "2.7"
- master
services:
- docker
......
......@@ -201,3 +201,10 @@
- Changed MONGO_STORAGE_ENGINE to default to wiredTiger, which is the default in 3.2 and 3.4 and what edX suggests be used even on 3.0.
If you have an mmapv1 3.0 install, override MONGO_STORAGE_ENGINE to be mmapv1, which was the old default.
- Ready for deploying Mongo 3.2
- Role: edxapp
- Added `EDXAPP_CELERY_BROKER_USE_SSL` to allow configuring celery to use TLS.
- Role: xqueue
- Added `XQUEUE_RABBITMQ_VHOST` to allow configuring the xqueue RabbitMQ virtual host.
- Added `XQUEUE_RABBITMQ_PORT` and `XQUEUE_RABBITMQ_TLS` to allow configuring the RabbitMQ port and enabling TLS, respectively.
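For illustration, a minimal sketch of how the settings introduced above might be overridden in a vars file (the values are hypothetical, not defaults from this repo):

```yaml
MONGO_STORAGE_ENGINE: 'mmapv1'        # only for legacy mmapv1 3.0 installs
EDXAPP_CELERY_BROKER_USE_SSL: true    # celery connects to the broker over TLS
XQUEUE_RABBITMQ_VHOST: '/xqueue'      # hypothetical virtual host name
XQUEUE_RABBITMQ_PORT: 5671            # conventional amqps port when TLS is on
XQUEUE_RABBITMQ_TLS: true
```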
......@@ -26,7 +26,9 @@ test: docker.test
pkg: docker.pkg
clean:
clean: docker.clean
docker.clean:
rm -rf .build
docker.test.shard: $(foreach image,$(shell echo $(images) | python util/balancecontainers.py $(SHARDS) | awk 'NR%$(SHARDS)==$(SHARD)'),$(docker_test)$(image))
......
FROM edxops/precise-common:latest
FROM edxops/xenial-common:latest
MAINTAINER edxops
RUN apt-get update
......
......@@ -19,3 +19,6 @@ ANALYTICS_API_DATABASES:
PASSWORD: 'password'
HOST: "db.{{ DOCKER_TLD }}"
PORT: '3306'
# Change this if you want to build a specific version of the ANALYTICS_API
ANALYTICS_API_VERSION: 'master'
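As a sketch, the version can be pinned to any branch, tag, or SHA in the analytics API repo (the tag below is hypothetical):

```yaml
ANALYTICS_API_VERSION: 'release-tag'  # hypothetical; defaults to 'master'
```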
......@@ -7,7 +7,7 @@
# This allows the dockerfile to update /edx/app/edx_ansible/edx_ansible
# with the currently checked-out configuration repo.
FROM edxops/trusty-common:latest
FROM edxops/xenial-common:latest
MAINTAINER edxops
ENV DISCOVERY_VERSION=master
......
# To build this Dockerfile:
#
# From the root of configuration:
#
# docker build -f docker/build/docker-tools/Dockerfile .
#
# This allows the dockerfile to update /edx/app/edx_ansible/edx_ansible
# with the currently checked-out configuration repo.
FROM edxops/xenial-common:latest
MAINTAINER edxops
ENV PROGRAMS_VERSION=master
ENV REPO_OWNER=edx
ADD . /edx/app/edx_ansible/edx_ansible
WORKDIR /edx/app/edx_ansible/edx_ansible/docker/plays
COPY docker/build/docker-tools/ansible_overrides.yml /
RUN /edx/app/edx_ansible/venvs/edx_ansible/bin/ansible-playbook docker-tools.yml \
-c local -i '127.0.0.1,' \
-t 'install'
RUN which docker
RUN which docker-compose
......@@ -42,6 +42,9 @@ RUN apt-get update && apt-get install -y \
php5-common \
php5-cli
# Install dependencies needed for Ansible 2.x
RUN apt-get update && apt-get install -y libffi-dev libssl-dev
# Install drush (drupal shell) for access to Drupal commands/Acquia
RUN php -r "readfile('http://files.drush.org/drush.phar');" > drush && \
chmod +x drush && \
......@@ -56,7 +59,19 @@ RUN /bin/bash /tmp/docker/docker_install.sh
RUN usermod -aG docker go
# Assign the go user root privileges
RUN printf "\ngo ALL=(ALL:ALL) NOPASSWD: /usr/bin/pip\n" >> /etc/sudoers
RUN printf "\ngo ALL=(ALL:ALL) NOPASSWD: /usr/bin/pip, /usr/local/bin/pip\n" >> /etc/sudoers
# Upgrade pip and setuptools. Needed for Ansible 2.x
# Must upgrade to latest before pinning to work around bug
# https://github.com/pypa/pip/issues/3862
RUN \
pip install --upgrade pip && \
#pip may have moved from /usr/bin/ to /usr/local/bin/. This clears bash's path cache.
hash -r && \
pip install --upgrade pip==8.1.2 && \
# upgrade setuptools early to avoid no distribution errors
pip install --upgrade setuptools==24.0.3
# Install AWS command-line interface - for AWS operations in a go-agent task.
RUN pip install awscli
......
......@@ -29,6 +29,11 @@ necessary.
## Building and Uploading the container to ECS
* Copy the go-agent GitHub private key to this path:
- ```docker/build/go-agent/files/go_github_key.pem```
- The file checked in to the repo contains only a dummy key.
- The actual private key is kept in LastPass - see DevOps for access.
- WARNING: Do *NOT* commit/push the real private key to the public configuration repo!
* Create image
- This must be run from the root of the configuration repository
- ```docker build -f docker/build/go-agent/Dockerfile .```
......@@ -36,9 +41,10 @@ necessary.
- ```make docker.test.go-agent```
* Log docker in to AWS
- ```sh -c `aws ecr get-login --region us-east-1` ```
- You might need to remove the `-e` option returned by that command in order to log in successfully.
* Tag image
- ```docker tag -f <image_id> ############.dkr.ecr.us-east-1.amazonaws.com/release-pipeline:latest```
- ```docker tag -f <image_id> ############.dkr.ecr.us-east-1.amazonaws.com/release-pipeline:<version_number>```
- ```docker tag <image_id> ############.dkr.ecr.us-east-1.amazonaws.com/prod-tools-goagent:latest```
- ```docker tag <image_id> ############.dkr.ecr.us-east-1.amazonaws.com/prod-tools-goagent:<version_number>```
* Upload the image:
- ```docker push ############.dkr.ecr.us-east-1.amazonaws.com/edx/release-pipeline/go-agent/python:latest```
- ```docker push ############.dkr.ecr.us-east-1.amazonaws.com/edx/release-pipeline/go-agent/python:<version_number>```
\ No newline at end of file
- ```docker push ############.dkr.ecr.us-east-1.amazonaws.com/edx/release-pipeline/prod-tools-goagent:latest```
- ```docker push ############.dkr.ecr.us-east-1.amazonaws.com/edx/release-pipeline/prod-tools-goagent:<version_number>```
\ No newline at end of file
FROM edxops/precise-common:latest
FROM edxops/xenial-common:latest
MAINTAINER edxops
ADD . /edx/app/edx_ansible/edx_ansible
......
......@@ -4,7 +4,7 @@ DOCKER_TLD: "edx"
# In addition, on systemd systems and with newer rsyslogd,
# there may be issues with /dev/log existing
# http://www.projectatomic.io/blog/2014/09/running-syslog-within-a-docker-container/
PROGRAMS_DJANGO_SETTINGS_MODULE: programs.settings.local
PROGRAMS_DJANGO_SETTINGS_MODULE: programs.settings.devstack
PROGRAMS_DATABASES:
# rw user
default:
......
......@@ -2,8 +2,6 @@
DOCKER_TLD: "xqueue"
CONFIGURATION_REPO: "https://github.com/edx/configuration.git"
CONFIGURATION_VERSION: "hack2015/docker"
XQUEUE_SYSLOG_SERVER: "localhost"
XQUEUE_RABBITMQ_HOSTNAME: "rabbit.{{ DOCKER_TLD }}"
XQUEUE_MYSQL_HOST: "db.{{ DOCKER_TLD }}"
- name: build a VM with docker-tools
hosts: all
sudo: True
gather_facts: True
roles:
- docker
- docker-tools
......@@ -9,9 +9,10 @@ try:
import hipchat
except ImportError:
hipchat = None
from ansible.plugins.callback import CallbackBase
class CallbackModule(object):
class CallbackModule(CallbackBase):
"""Send status updates to a HipChat channel during playbook execution.
This plugin makes use of the following environment variables:
......
......@@ -28,9 +28,10 @@ except ImportError:
else:
import boto.sqs
from boto.exception import NoAuthHandlerFound
from ansible.plugins.callback import CallbackBase
class CallbackModule(object):
class CallbackModule(CallbackBase):
"""
This Ansible callback plugin sends task events
to SQS.
......
......@@ -238,7 +238,7 @@ class CallbackModule(CallbackBase):
Record the start of a play.
"""
self.playbook_name, _ = splitext(
basename(self.play.playbook.filename)
basename(self.play.get_name())
)
self.playbook_timestamp = Timestamp()
......
......@@ -12,3 +12,4 @@ ansible_managed=This file is created and updated by ansible, edit at your peril
[ssh_connection]
ssh_args=-o ControlMaster=auto -o ControlPersist=60s -o ControlPath="~/.ansible/tmp/ansible-ssh-%h-%p-%r" -o ServerAliveInterval=30
retries=5
\ No newline at end of file
......@@ -13,11 +13,13 @@
# - APPLICATION_NAME - The name of the application that we are migrating.
# - APPLICATION_USER - user which is meant to run the application
# - ARTIFACT_PATH - the path where the migration artifacts should be copied after completion
# - DB_MIGRATION_USER - the database username
# - DB_MIGRATION_PASS - the database password
#
# Other variables:
# - HIPCHAT_TOKEN - API token to send messages to hipchat
# - HIPCHAT_ROOM - ID or name of the room to send the notification
# - HIPCHAT_URL - URL of the hipchat API (defaults to v1 of the api)
#
# Other variables:
# - migration_plan - the filename where the unapplied migration YAML output is stored
# - migration_result - the filename where the migration output is saved
# - SUB_APPLICATION_NAME - used for migrations in edxapp {lms|cms}, must be specified
......@@ -59,7 +61,7 @@
shell: '{{ COMMAND_PREFIX }} {{ SUB_APPLICATION_NAME }} show_unapplied_migrations --database "{{ item }}" --output_file "{{ temp_output_dir.stdout }}/{{ item }}_{{ migration_plan }}" --settings "{{ EDX_PLATFORM_SETTINGS }}"'
become_user: "{{ APPLICATION_USER }}"
when: APPLICATION_NAME == "edxapp" and item != "read_replica"
with_items: edxapp_databases.keys()
with_items: "{{ edxapp_databases.keys() }}"
- name: migrate to apply any unapplied migrations
shell: '{{ COMMAND_PREFIX }} run_migrations --output_file "{{ temp_output_dir.stdout }}/{{ migration_result }}"'
......@@ -70,7 +72,7 @@
shell: '{{ COMMAND_PREFIX }} {{ SUB_APPLICATION_NAME }} run_migrations --database "{{ item }}" --settings "{{ EDX_PLATFORM_SETTINGS }}" --output_file "{{ temp_output_dir.stdout }}/{{ migration_result }}"'
become_user: "{{ APPLICATION_USER }}"
when: APPLICATION_NAME == "edxapp" and item != "read_replica"
with_items: edxapp_databases.keys()
with_items: "{{ edxapp_databases.keys() }}"
- name: List all migration files
action: "command ls -1 {{ temp_output_dir.stdout }}"
......
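This hunk, and many of the hunks below, make the same Ansible 2.x fix: bare variable names in `with_items` are deprecated, so loop sources are wrapped in full Jinja expressions. A minimal sketch of the pattern, assuming a hypothetical `my_databases` variable:

```yaml
- debug:
    msg: "{{ item }}"
  # Ansible 1.x accepted `with_items: my_databases`; 2.x expects the quoted Jinja form
  with_items: "{{ my_databases }}"
```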
......@@ -13,25 +13,27 @@
keyfile: "/home/{{ owner }}/.ssh/authorized_keys"
serial: "{{ serial_count }}"
tasks:
- fail: msg="You must pass in a public_key"
- fail:
msg: "You must pass in a public_key"
when: public_key is not defined
- fail: msg="public does not exist in secrets"
- fail:
msg: "public does not exist in secrets"
when: ubuntu_public_keys[public_key] is not defined
- command: mktemp
register: mktemp
- name: Validate the public key before we add it to authorized_keys
copy: >
content="{{ ubuntu_public_keys[public_key] }}"
dest={{ mktemp.stdout }}
copy:
content: "{{ ubuntu_public_keys[public_key] }}"
dest: "{{ mktemp.stdout }}"
# This tests the public key and will not continue if it does not look valid
- command: ssh-keygen -l -f {{ mktemp.stdout }}
- file: >
path={{ mktemp.stdout }}
state=absent
- lineinfile: >
dest={{ keyfile }}
line="{{ ubuntu_public_keys[public_key] }}"
- file: >
path={{ keyfile }}
owner={{ owner }}
mode=0600
- file:
path: "{{ mktemp.stdout }}"
state: absent
- lineinfile:
dest: "{{ keyfile }}"
line: "{{ ubuntu_public_keys[public_key] }}"
- file:
path: "{{ keyfile }}"
owner: "{{ owner }}"
mode: 0600
......@@ -14,7 +14,8 @@
serial: "{{ serial_count }}"
pre_tasks:
- action: ec2_facts
- debug: var="{{ ansible_ec2_instance_id }}"
- debug:
var: "{{ ansible_ec2_instance_id }}"
when: elb_pre_post
- name: Instance De-register
local_action: ec2_elb
......@@ -26,8 +27,9 @@
become: False
when: elb_pre_post
tasks:
- debug: msg="{{ ansible_ec2_local_ipv4 }}"
with_items: list.results
- debug:
var: "{{ ansible_ec2_local_ipv4 }}"
with_items: "{{ list.results }}"
- command: rabbitmqctl stop_app
- command: rabbitmqctl join_cluster rabbit@ip-{{ hostvars.keys()[0]|replace('.', '-') }}
when: hostvars.keys()[0] != ansible_ec2_local_ipv4
......@@ -39,10 +41,9 @@
local_action: ec2_elb
args:
instance_id: "{{ ansible_ec2_instance_id }}"
ec2_elbs: "{{ item }}"
ec2_elbs: "{{ ec2_elbs }}"
region: us-east-1
state: present
wait_timeout: 60
with_items: ec2_elbs
become: False
when: elb_pre_post
......@@ -47,11 +47,10 @@
local_action: ec2_elb
args:
instance_id: "{{ ansible_ec2_instance_id }}"
ec2_elbs: "{{ item }}"
ec2_elbs: "{{ ec2_elbs }}"
region: us-east-1
state: present
wait_timeout: 60
with_items: ec2_elbs
become: False
when: elb_pre_post
#
......
......@@ -13,9 +13,9 @@
# is called it will use the new MYSQL connection
# info.
- name: Update RDS to point to the sandbox clone
lineinfile: >
dest=/edx/app/edx_ansible/server-vars.yml
line="{{ item }}"
lineinfile:
dest: /edx/app/edx_ansible/server-vars.yml
line: "{{ item }}"
with_items:
- "EDXAPP_MYSQL_HOST: {{ EDXAPP_MYSQL_HOST }}"
- "EDXAPP_MYSQL_DB_NAME: {{ EDXAPP_MYSQL_DB_NAME }}"
......@@ -24,9 +24,9 @@
tags: update_edxapp_mysql_host
- name: Update mongo to point to the sandbox mongo clone
lineinfile: >
dest=/edx/app/edx_ansible/server-vars.yml
line="{{ item }}"
lineinfile:
dest: /edx/app/edx_ansible/server-vars.yml
line: "{{ item }}"
with_items:
- "EDXAPP_MONGO_HOSTS: {{ EDXAPP_MONGO_HOSTS }}"
- "EDXAPP_MONGO_DB_NAME: {{ EDXAPP_MONGO_DB_NAME }}"
......@@ -35,6 +35,5 @@
tags: update_edxapp_mysql_host
- name: call update on edx-platform
shell: >
/edx/bin/update edx-platform {{ edxapp_version }}
shell: "/edx/bin/update edx-platform {{ edxapp_version }}"
tags: update_edxapp_mysql_host
......@@ -53,27 +53,27 @@
- MySQL-python
- name: create mysql databases
mysql_db: >
db={{ item.name}}
state={{ item.state }}
encoding={{ item.encoding }}
login_host={{ item.login_host }}
login_user={{ item.login_user }}
login_password={{ item.login_password }}
with_items: databases
mysql_db:
db: "{{ item.name}}"
state: "{{ item.state }}"
encoding: "{{ item.encoding }}"
login_host: "{{ item.login_host }}"
login_user: "{{ item.login_user }}"
login_password: "{{ item.login_password }}"
with_items: "{{ databases }}"
tags:
- dbs
- name: create mysql users and assign privileges
mysql_user: >
name="{{ item.name }}"
priv="{{ '/'.join(item.privileges) }}"
password="{{ item.password }}"
host={{ item.host }}
login_host={{ item.login_host }}
login_user={{ item.login_user }}
login_password={{ item.login_password }}
append_privs=yes
with_items: database_users
mysql_user:
name: "{{ item.name }}"
priv: "{{ '/'.join(item.privileges) }}"
password: "{{ item.password }}"
host: "{{ item.host }}"
login_host: "{{ item.login_host }}"
login_user: "{{ item.login_user }}"
login_password: "{{ item.login_password }}"
append_privs: yes
with_items: "{{ database_users }}"
tags:
- users
......@@ -41,4 +41,4 @@
roles: "{{ item.roles }}"
state: present
replica_set: "{{ repl_set }}"
with_items: MONGO_USERS
with_items: "{{ MONGO_USERS }}"
......@@ -21,7 +21,14 @@
dns_zone: sandbox.edx.org
name_tag: sandbox-temp
elb: false
vpc_subnet_id: subnet-cd867aba
ec2_vpc_subnet_id: subnet-cd867aba
instance_userdata: |
#!/bin/bash
set -x
set -e
export RUN_ANSIBLE=false;
wget https://raw.githubusercontent.com/edx/configuration/{{ configuration_version }}/util/install/ansible-bootstrap.sh -O - | bash;
launch_wait_time: 5
roles:
- role: launch_ec2
keypair: "{{ keypair }}"
......@@ -34,23 +41,27 @@
dns_name: "{{ dns_name }}"
dns_zone: "{{ dns_zone }}"
zone: "{{ zone }}"
vpc_subnet_id: "{{ vpc_subnet_id }}"
vpc_subnet_id: "{{ ec2_vpc_subnet_id }}"
assign_public_ip: yes
terminate_instance: true
instance_profile_name: sandbox
user_data: "{{ instance_userdata }}"
launch_ec2_wait_time: "{{ launch_wait_time }}"
- name: Configure instance(s)
hosts: launched
become: True
gather_facts: True
gather_facts: False
vars:
elb: false
elb: False
pre_tasks:
- name: Wait for cloud-init to finish
wait_for: >
path=/var/log/cloud-init.log
timeout=15
search_regex="final-message"
wait_for:
path: /var/log/cloud-init.log
timeout: 15
search_regex: "final-message"
- name: gather_facts
setup: ""
vars_files:
- roles/edxapp/defaults/main.yml
- roles/xqueue/defaults/main.yml
......
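The ordering in this hunk is worth spelling out: fact gathering is disabled at play start because the freshly launched instance may still be booting, the play then waits for cloud-init to log its final message, and only afterwards gathers facts explicitly. A condensed sketch of the same pattern:

```yaml
- hosts: launched
  gather_facts: False            # the host may not be ready yet
  pre_tasks:
    - name: Wait for cloud-init to finish
      wait_for:
        path: /var/log/cloud-init.log
        timeout: 15
        search_regex: "final-message"
    - name: gather_facts
      setup: ""                  # gather facts once boot has completed
```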
# Usage: ansible-playbook -i localhost, edx_service.yml -e@<PATH TO>/edx-secure/cloud_migrations/edx_service.yml -e@<PATH TO>/<DEPLOYMENT>-secure/cloud_migrations/vpcs/<ENVIRONMENT>-<DEPLOYMENT>.yml -e@<PATH TO>/edx-secure/cloud_migrations/idas/<CLUSTER>.yml
---
- name: Build application artifacts
hosts: all
connection: local
gather_facts: False
vars:
state: "present"
tasks:
- name: Manage IAM Role and Profile
ec2_iam_role:
profile: "{{ profile }}"
state: "{{ state }}"
instance_profile_name: "{{ instance_profile_name }}"
role_name: "{{ role_name }}"
policies: "{{ role_policies }}"
tags:
- iam
- name: Manage ELB security group
ec2_group_local:
profile: "{{ profile }}"
description: "{{ elb_security_group.description }}"
name: "{{ elb_security_group.name }}"
vpc_id: "{{ vpc_id }}"
region: "{{ aws_region }}"
rules: "{{ elb_security_group.rules }}"
tags: "{{ elb_security_group.tags }}"
register: elb_sec_group
when: elbs is defined
tags:
- elb
- name: Set Base Security Rules
set_fact:
service_security_group_rules: "{{ service_security_group.rules }}"
when: service_port is not defined
- name: Merge Base and Service Port Security Rules
set_fact:
service_security_group_rules: "{{ service_security_group.rules + service_port_rules }}"
when: service_port is defined
- name: Manage service security group
ec2_group_local:
profile: "{{ profile }}"
description: "{{ service_security_group.description }}"
name: "{{ service_security_group.name }}"
vpc_id: "{{ vpc_id }}"
region: "{{ aws_region }}"
rules: "{{ service_security_group_rules }}"
tags: "{{ service_security_group.tags }}"
register: service_sec_group
- name: Set public Base ACLs
set_fact:
service_public_acl_rules: "{{ public_acls.rules }}"
when: service_port is not defined
- name: Merge public Base and Service Port ACLs
set_fact:
service_public_acl_rules: "{{ public_acls.rules + service_port_public_acls }}"
when: service_port is defined
- name: Manage Public ACLs
ec2_acl:
profile: "{{ profile }}"
name: "{{ public_acls.name }}"
vpc_id: "{{ vpc_id }}"
state: "{{ state }}"
region: "{{ aws_region }}"
rules: "{{ service_public_acl_rules }}"
register: created_public_acls
- name: Set private Base ACLs
set_fact:
service_private_acl_rules: "{{ private_acls.rules }}"
when: service_port is not defined
- name: Merge private Base and Service Port ACLs
set_fact:
service_private_acl_rules: "{{ private_acls.rules + service_port_private_acls }}"
when: service_port is defined
- name: Manage Private ACLs
ec2_acl:
profile: "{{ profile }}"
name: "{{ private_acls.name }}"
vpc_id: "{{ vpc_id }}"
state: "{{ state }}"
region: "{{ aws_region }}"
rules: "{{ service_private_acl_rules }}"
register: created_private_acls
- name: Merge created ACLs
set_fact:
created_acls: "{{ created_public_acls.results | default([]) + created_private_acls.results | default([]) }}"
- name: Apply function to acl_data
util_map:
function: 'zip_to_dict'
input: "{{ created_acls }}"
args:
- "name"
- "id"
register: acl_data
- name: Manage Service Subnets
ec2_subnet:
profile: "{{ profile }}"
state: "{{ state }}"
region: "{{ aws_region }}"
name: "{{ item.name }}"
vpc_id: "{{ vpc_id }}"
cidr_block: "{{ item.cidr }}"
az: "{{ item.az }}"
route_table_id: "{{ item.route_table_id }}"
tags: "{{ item.tags }}"
register: created_service_subnets
with_items: service_subnets
#
# Stubbed out
# For now we'll be using an existing route table
#
# - name: Manage Route Table
# ec2_rt:
# state: "{{ state }}"
# region: "{{ aws_region }}"
# name: "{{ rt.name }}"
# vpc_id: "{{ vpc_id }}"
# destination_cidr: "{{ rt.destination_cidr }}"
# target: "local" # simplifying generalization of instnace-id, gateway-id or local
#
- name: Manage Private ELB Subnets
ec2_subnet:
profile: "{{ profile }}"
state: "{{ state }}"
region: "{{ aws_region }}"
name: "{{ item.name }}"
vpc_id: "{{ vpc_id }}"
cidr_block: "{{ item.cidr }}"
az: "{{ item.az }}"
route_table_id: "{{ item.route_table_id }}"
tags: "{{ item.tags }}"
register: created_elb_private_subnets
with_items: elb_private_subnets
when: private_elb_subnet_1 is defined and private_elb_subnet_2 is defined
tags:
- elb
- name: Check that internal ELBs have subnets
fail: msg="If you set an elb scheme to 'internal' you must also define private_elb_subnet_1 and private_elb_subnet_2"
when: private_elb_subnet_1 is not defined and private_elb_subnet_2 is not defined and elbs is defined and 'internal' in elbs|map(attribute='scheme')|list
- name: Manage ELB
ec2_elb_lb:
profile: "{{ profile }}"
region: "{{ aws_region }}"
scheme: "{{ item.scheme }}"
name: "{{ item.name}}"
state: "{{ state }}"
security_group_ids: "{{ elb_sec_group.group_id }}"
subnets: "{{ created_elb_private_subnets.results|map(attribute='subnet_id')| list if ( item.scheme == 'internal' ) else elb_subnets}}"
health_check: "{{ elb_healthcheck }}"
listeners: "{{ elb_listeners }}"
cross_az_load_balancing: "{{ elb_enable_cross_zone_loadbalancing }}"
connection_draining_timeout: "{{ elb_draining_timeout }}"
register: created_elbs
with_items: elbs
when: elbs is defined
tags:
- elb
- name: Setup ELB DNS
route53:
profile: "{{ profile }}"
command: "create"
zone: "{{ dns_zone_name }}"
record: "{{ item.elb.name }}.{{ dns_zone_name }}"
type: "A"
value: "{{ item.elb.dns_name }}"
alias: true
alias_hosted_zone_id: "{{ item.elb.hosted_zone_id }}"
overwrite: true
with_items: created_elbs.results
when: elbs is defined
tags:
- elb
#
# Service related components
#
- name: Manage the launch configuration
ec2_lc:
profile: "{{ profile }}"
region: "{{ aws_region }}"
name: "{{ service_config.name }}"
image_id: "{{ service_config.ami }}"
key_name: "{{ service_config.key_name }}"
security_groups: "{{ service_sec_group.group_id }}"
instance_type: "{{ service_config.instance_type }}"
instance_profile_name: "{{ instance_profile_name }}"
volumes: "{{ service_config.volumes }}"
instance_monitoring: "{{ service_config.detailed_monitoring }}"
when: auto_scaling_service
#
# Hack alert, this registers a string in the global namespace
# of just the subnet ids for the service that were created above
#
- debug: msg="{{ created_service_subnets.results|map(attribute='subnet_id')| list | join(',') }}"
register: service_vpc_zone_identifier_string
- name: Transform tags into list dict format for the modules that expect it
util_map:
function: zip_to_listdict
input: "{{ asg_instance_tags }}"
args: ['key', 'value']
register: listdict_asg_instance_tags
- debug: msg="Instance Tags:{{ listdict_asg_instance_tags }}"
- name: Manage ASG
ec2_asg:
profile: "{{ profile }}"
region: "{{ aws_region }}"
name: "{{ asg_name }}"
launch_config_name: "{{ service_config.name }}"
availability_zones: "{{ aws_availability_zones }}"
min_size: "{{ asg_min_size }}"
max_size: "{{ asg_max_size }}"
desired_capacity: "{{ asg_desired_capacity }}"
vpc_zone_identifier: "{{ service_vpc_zone_identifier_string.msg }}"
tags: "{{ listdict_asg_instance_tags.function_output }}"
load_balancers: "{% if elb is defined %}{{ created_elbs.results|map(attribute='elb.name')|list }}{% else %}[]{% endif %}"
register: asg
when: auto_scaling_service
- name: Manage scaling policies
ec2_scaling_policy:
state: "{{ item.state }}"
profile: "{{ item.profile }}"
region: "{{ item.region }}"
name: "{{ item.name }}"
adjustment_type: "{{ item.adjustment_type }}"
asg_name: "{{ item.asg_name }}"
scaling_adjustment: "{{ item.scaling_adjustment }}"
min_adjustment_step: "{{ item.min_adjustment_step }}"
cooldown: "{{ item.cooldown }}"
with_items: scaling_policies
register: created_policies
when: auto_scaling_service
- name: Apply function to policy data
util_map:
function: 'zip_to_dict'
input: "{{ created_policies.results }}"
args:
- "name"
- "arn"
register: policy_data
when: auto_scaling_service
- name: Manage metric alarms
ec2_metric_alarm:
profile: "{{ profile }}"
state: "{{ item.state }}"
region: "{{ aws_region }}"
name: "{{ item.name }}"
metric: "{{ item.metric }}"
namespace: "{{ item.namespace }}"
statistic: "{{ item.statistic }}"
comparison: "{{ item.comparison }}"
threshold: "{{ item.threshold }}"
period: "{{ item.period }}"
evaluation_periods: "{{ item.evaluation_periods }}"
unit: "{{ item.unit }}"
description: "{{ item.description }}"
dimensions: "{{ item.dimensions }}"
alarm_actions: "{{ policy_data.function_output[item.target_policy] }}"
with_items: metric_alarms
when: auto_scaling_service
- name: Transform tags into dict format for the modules that expect it
util_map:
function: zip_to_dict
input: "{{ asg_instance_tags }}"
args: ['key', 'value']
register: reformatted_asg_instance_tags
- name: See if instances already exist
ec2_lookup:
region: "{{ aws_region }}"
tags: "{{ reformatted_asg_instance_tags.function_output }}"
register: potential_existing_instances
- name: Compare requested instances vs. current instances
when: not auto_scaling_service and (potential_existing_instances.instances|length > create_instances | int)
fail: msg="This playbook will not shrink the number of instances. {{create_instances }} requested. There are currently {{ potential_existing_instances.instances|length }} instances that match this tag."
# This task will create the number of instances requested (create_instances parameter).
# By default, it will create instances equaling the number of subnets specified.
# Modulo logic explained: the subnet used is the instance number modulo the number of subnets,
# so that instances are balanced across subnets. (A worked sketch follows this play.)
- name: Manage instances
ec2:
profile: "{{ profile }}"
region: "{{ aws_region }}"
wait: "yes"
group_id:
- "{{ service_sec_group.group_id }}"
key_name: "{{ service_config.key_name }}"
vpc_subnet_id: "{{ created_service_subnets.results[item | int % created_service_subnets.results | length].subnet_id }}"
instance_type: "{{ service_config.instance_type }}"
instance_tags: "{{ reformatted_asg_instance_tags.function_output }}"
image: "{{ service_config.ami }}"
instance_profile_name: "{{ instance_profile_name }}"
volumes: "{{ service_config.volumes }}"
ebs_optimized: "{{ service_config.ebs_optimized }}"
monitoring: "{{ detailed_monitoring }}"
with_sequence: count={% if not auto_scaling_service %}{{ (create_instances | int - potential_existing_instances.instances|length) | default(created_service_subnets.results | length) }}{% else %}0{% endif %}
when: not auto_scaling_service and (potential_existing_instances.instances|length < create_instances | int)
register: created_instances
- name: Add new instances to host group
add_host:
hostname: "{{ item.1.private_ip }}"
instance_id: "{{ item.1.id }}"
groups: created_instances_group
#might need ansible_ssh_private_key_file and/or ansible_ssh_user
ansible_ssh_user: ubuntu
volumes: "{{ service_config.volumes }}"
with_subelements:
- created_instances.results | default({})
- instances
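The modulo balancing in the Manage instances task above can be seen in isolation with a small sketch (hypothetical counts: 3 subnets, 5 instances; `with_sequence` yields items "1" through "5"):

```yaml
# item values 1..5 map to subnet indices 1,2,0,1,2 via item|int % 3
- debug:
    msg: "instance {{ item }} -> subnet index {{ item | int % 3 }}"
  with_sequence: count=5
```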
- name: Configure launched instances
hosts: created_instances_group
gather_facts: False
become: True
tasks:
#Wait in this play so it can multiplex across all launched hosts
- name: Wait for hosts to be ready
become: False
local_action:
module: wait_for
host: "{{ inventory_hostname }}"
port: 22
#Must wait for the instance to be ready before gathering facts
- name: Gather facts
setup:
- name: Unmount all specified disks that are currently mounted
mount:
name: "{{ item[0].mount }}"
src: "{{ item[0].device }}"
fstype: "{{ item[0].fstype }}"
state: absent
when: item[1].device_name == item[0].device
with_nested:
- ansible_mounts
- volumes
#Must use force=yes because AWS gives some ephemeral disks the wrong fstype and mounts them by default.
#Since we don't do this task if any prior instances were found in the ec2_lookup task, it's safe to force.
- name: Create filesystems
filesystem:
dev: "{{ item.device_name }}"
fstype: ext4
force: yes
with_items: volumes
- name: Mount disks
mount:
fstype: ext4
name: "{{ item.mount }}"
src: "{{ item.device_name }}"
state: mounted
fstype: "{{ item.fstype | default('ext4') }}"
opts: "{{ item.options | default('defaults') }}"
with_items: volumes
#Currently only supported in non-asg mode, when auto_scaling_service==false
#<http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/enhanced-networking.html#enable-enhanced-networking>
#Done with local actions to avoid blocking on iteration
- name: Enable enhanced networking
hosts: created_instances_group
gather_facts: False
tasks:
- name: Shut down instances
local_action:
module: ec2
instance_ids: "{{ instance_id }}"
state: stopped
region: "{{ aws_region }}"
wait: yes
when: enhanced_networking == true
- name: Set enhanced networking instance attribute
local_action:
module: shell aws --profile {{ profile }} ec2 modify-instance-attribute --instance-id {{ instance_id }} --sriov-net-support simple
when: enhanced_networking == true
- name: Start instances
local_action:
module: ec2
instance_ids: "{{ instance_id }}"
state: running
region: "{{ aws_region }}"
wait: yes
when: enhanced_networking == true
---
- name: Build service RDS instances
hosts: all
connection: local
# Needed for timestamps
gather_facts: True
roles:
- edx_service_rds
---
# Sample command: ansible-playbook -c local -i localhost, edx_vpc.yml -e@/Users/feanil/src/edx-secure/cloud_migrations/vpcs/test.yml -vvv
- name: Create a simple empty vpc
hosts: all
connection: local
gather_facts: False
vars:
vpc_state: present
roles:
- edx_vpc
......@@ -8,9 +8,9 @@
- edxapp
tasks:
- name: migrate lms
shell: >
chdir={{ edxapp_code_dir }}
python manage.py lms migrate --database {{ item }} --noinput {{ db_dry_run }} --settings=aws
shell: "python manage.py lms migrate --database {{ item }} --noinput {{ db_dry_run }} --settings=aws"
args:
chdir: "{{ edxapp_code_dir }}"
environment:
DB_MIGRATION_USER: "{{ COMMON_MYSQL_MIGRATE_USER }}"
DB_MIGRATION_PASS: "{{ COMMON_MYSQL_MIGRATE_PASS }}"
......@@ -21,9 +21,9 @@
tags:
- always
- name: migrate cms
shell: >
chdir={{ edxapp_code_dir }}
python manage.py cms migrate --database {{ item }} --noinput {{ db_dry_run }} --settings=aws
shell: "python manage.py cms migrate --database {{ item }} --noinput {{ db_dry_run }} --settings=aws"
args:
chdir: "{{ edxapp_code_dir }}"
environment:
DB_MIGRATION_USER: "{{ COMMON_MYSQL_MIGRATE_USER }}"
DB_MIGRATION_PASS: "{{ COMMON_MYSQL_MIGRATE_PASS }}"
......
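These migrate tasks show another recurring conversion in this commit: inline `chdir=` parameters inside a folded `shell: >` block move into an explicit `args:` section. A generic sketch (the command and path are illustrative, not from this diff):

```yaml
- name: run a management command
  shell: "python manage.py lms showmigrations --settings=aws"
  args:
    chdir: "{{ edxapp_code_dir }}"
```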
......@@ -12,7 +12,8 @@
pre_tasks:
- action: ec2_facts
when: elb_pre_post
- debug: var="{{ ansible_ec2_instance_id }}"
- debug:
var: ansible_ec2_instance_id
when: elb_pre_post
- name: Instance De-register
local_action: ec2_elb
......@@ -29,16 +30,16 @@
- oraclejdk
- elasticsearch
post_tasks:
- debug: var="{{ ansible_ec2_instance_id }}"
- debug:
var: ansible_ec2_instance_id
when: elb_pre_post
- name: Register instance in the elb
local_action: ec2_elb
args:
instance_id: "{{ ansible_ec2_instance_id }}"
ec2_elbs: "{{ item }}"
ec2_elbs: "{{ ec2_elbs }}"
region: us-east-1
state: present
wait_timeout: 60
with_items: ec2_elbs
become: False
when: elb_pre_post
......@@ -14,11 +14,11 @@
- name: stop certs service
service: name="certificates" state="stopped"
- name: checkout code
git_2_0_1: >
repo="{{ repo_url }}"
dest="{{ repo_path }}"
version="{{ certificates_version }}"
accept_hostkey=yes
git:
repo: "{{ repo_url }}"
dest: "{{ repo_path }}"
version: "{{ certificates_version }}"
accept_hostkey: yes
environment:
GIT_SSH: "{{ git_ssh_script }}"
- name: install requirements
......@@ -29,11 +29,11 @@
# Need to do this because the www-data user is not properly set up
# and can't run ssh.
- name: change owner to www-data
file: >
path="{{ repo_path }}"
owner="www-data"
group="www-data"
recurse=yes
state="directory"
file:
path: "{{ repo_path }}"
owner: "www-data"
group: "www-data"
recurse: yes
state: "directory"
- name: start certs service
service: name="certificates" state="started"
......@@ -79,6 +79,8 @@
manage_path: /edx/bin/manage.edxapp
ignore_user_creation_errors: no
deployment_settings: "{{ EDXAPP_SETTINGS | default('aws') }}"
vars_files:
- roles/common_vars/defaults/main.yml
tasks:
- name: Manage groups
shell: >
......@@ -86,7 +88,9 @@
manage_group {{ item.name | quote }}
{% if item.get('permissions', []) | length %}--permissions {{ item.permissions | default([]) | map('quote') | join(' ') }}{% endif %}
{% if item.get('remove') %}--remove{% endif %}
with_items: django_groups
with_items: "{{ django_groups }}"
become: true
become_user: "{{ common_web_user }}"
- name: Manage users
shell: >
......@@ -98,6 +102,8 @@
{% if item.get('staff') %}--staff{% endif %}
{% if item.get('unusable_password') %}--unusable-password{% endif %}
{% if item.get('initial_password_hash') %}--initial-password-hash {{ item.initial_password_hash | quote }}{% endif %}
with_items: django_users
with_items: "{{ django_users }}"
register: manage_users_result
failed_when: (manage_users_result | failed) and not (ignore_user_creation_errors | bool)
become: true
become_user: "{{ common_web_user }}"
......@@ -72,7 +72,7 @@
install_recommends: yes
force: yes
update_cache: yes
with_items: mongodb_debian_pkgs
with_items: "{{ mongodb_debian_pkgs }}"
- name: wait for mongo server to start
wait_for:
port: 27017
......
......@@ -48,7 +48,7 @@
install_recommends: yes
force: yes
update_cache: yes
with_items: mongodb_debian_pkgs
with_items: "{{ mongodb_debian_pkgs }}"
- name: wait for mongo server to start
wait_for:
port: 27017
......
......@@ -9,5 +9,6 @@
- "roles/ecommerce/defaults/main.yml"
- "roles/programs/defaults/main.yml"
- "roles/credentials/defaults/main.yml"
- "roles/discovery/defaults/main.yml"
roles:
- oauth_client_setup
......@@ -46,9 +46,7 @@
dest: "{{ xblock_config_temp_directory.stdout }}/{{ file | basename }}"
register: xblock_config_file
- name: Manage xblock configurations
shell: >
{{ python_path }} {{ manage_path }} lms --settings=aws
populate_model -f {{ xblock_config_file.dest | quote }} -u {{ user }}
shell: "{{ python_path }} {{ manage_path }} lms --settings=aws populate_model -f {{ xblock_config_file.dest | quote }} -u {{ user }}"
register: command_result
changed_when: "'Import complete, 0 new entries created' not in command_result.stdout"
- debug: msg="{{ command_result.stdout }}"
......
......@@ -17,7 +17,8 @@
pre_tasks:
- action: ec2_facts
when: elb_pre_post
- debug: var="{{ ansible_ec2_instance_id }}"
- debug:
var: ansible_ec2_instance_id
when: elb_pre_post
- name: Instance De-register
local_action: ec2_elb
......@@ -32,16 +33,16 @@
- aws
- rabbitmq
post_tasks:
- debug: var="{{ ansible_ec2_instance_id }}"
- debug:
var: ansible_ec2_instance_id
when: elb_pre_post
- name: Register instance in the elb
local_action: ec2_elb
args:
instance_id: "{{ ansible_ec2_instance_id }}"
ec2_elbs: "{{ item }}"
ec2_elbs: "{{ ec2_elbs }}"
region: us-east-1
state: present
wait_timeout: 60
with_items: ec2_elbs
become: False
when: elb_pre_post
......@@ -17,22 +17,21 @@
register: mktemp
# This command will fail if it returns zero lines, which prevents
# the last key from being removed
- shell: >
grep -Fv '{{ ubuntu_public_keys[public_key] }}' {{ keyfile }} > {{ mktemp.stdout }}
- shell: >
while read line; do ssh-keygen -lf /dev/stdin <<<$line; done <{{ mktemp.stdout }}
executable=/bin/bash
- shell: "grep -Fv '{{ ubuntu_public_keys[public_key] }}' {{ keyfile }} > {{ mktemp.stdout }}"
- shell: "while read line; do ssh-keygen -lf /dev/stdin <<<$line; done <{{ mktemp.stdout }}"
args:
executable: /bin/bash
register: keycheck
- fail: msg="public key check failed!"
when: keycheck.stderr != ""
- command: cp {{ mktemp.stdout }} {{ keyfile }}
- file: >
path={{ keyfile }}
owner={{ owner }}
mode=0600
- file: >
path={{ mktemp.stdout }}
state=absent
- file:
path: "{{ keyfile }}"
owner: "{{ owner }}"
mode: 0600
- file:
path: "{{ mktemp.stdout }}"
state: absent
- shell: wc -l < {{ keyfile }}
register: line_count
- fail: msg="There should only be one line in ubuntu's authorized_keys"
......
......@@ -7,6 +7,6 @@
- roles/supervisor/defaults/main.yml
tasks:
- name: supervisor | restart supervisor
service: >
name={{ supervisor_service }}
state=restarted
service:
name: "{{ supervisor_service }}"
state: restarted
......@@ -12,8 +12,8 @@
- name: Set hostname
hostname: name={{ hostname_fqdn.split('.')[0] }}
- name: Update /etc/hosts
lineinfile: >
dest=/etc/hosts
regexp="^127\.0\.1\.1"
line="127.0.1.1{{'\t'}}{{ hostname_fqdn.split('.')[0] }}{{'\t'}}{{ hostname_fqdn }}{{'\t'}}localhost"
state=present
lineinfile:
dest: /etc/hosts
regexp: "^127\\.0\\.1\\.1"
line: "127.0.1.1{{ '\t' }}{{ hostname_fqdn.split('.')[0] }}{{ '\t' }}{{ hostname_fqdn }}{{ '\t' }}localhost"
state: present
......@@ -11,7 +11,8 @@
pre_tasks:
- action: ec2_facts
when: elb_pre_post
- debug: var="{{ ansible_ec2_instance_id }}"
- debug:
var: "{{ ansible_ec2_instance_id }}"
when: elb_pre_post
- name: Instance De-register
local_action: ec2_elb
......@@ -25,16 +26,16 @@
tasks:
- shell: echo "test"
post_tasks:
- debug: var="{{ ansible_ec2_instance_id }}"
- debug:
var: "{{ ansible_ec2_instance_id }}"
when: elb_pre_post
- name: Register instance in the elb
local_action: ec2_elb
args:
instance_id: "{{ ansible_ec2_instance_id }}"
ec2_elbs: "{{ item }}"
ec2_elbs: "{{ ec2_elbs }}"
region: us-east-1
state: present
wait_timeout: 60
with_items: ec2_elbs
become: False
when: elb_pre_post
......@@ -14,7 +14,8 @@
pre_tasks:
- action: ec2_facts
when: elb_pre_post
- debug: var="{{ ansible_ec2_instance_id }}"
- debug:
var: "{{ ansible_ec2_instance_id }}"
when: elb_pre_post
- name: Instance De-register
local_action: ec2_elb
......@@ -38,16 +39,16 @@
- role: newrelic
when: COMMON_ENABLE_NEWRELIC
post_tasks:
- debug: var="{{ ansible_ec2_instance_id }}"
- debug:
var: "{{ ansible_ec2_instance_id }}"
when: elb_pre_post
- name: Register instance in the elb
local_action: ec2_elb
args:
instance_id: "{{ ansible_ec2_instance_id }}"
ec2_elbs: "{{ item }}"
ec2_elbs: "{{ ec2_elbs }}"
region: us-east-1
state: present
wait_timeout: 60
with_items: ec2_elbs
become: False
when: elb_pre_post
......@@ -40,8 +40,6 @@
- role: mongo
when: "'localhost' in EDXAPP_MONGO_HOSTS"
- { role: 'rabbitmq', rabbitmq_ip: '127.0.0.1' }
- role: aws
when: EDXAPP_SETTINGS == 'aws'
- { role: 'edxapp', celery_worker: True }
- edxapp
- role: ecommerce
......
......@@ -96,22 +96,10 @@ def main():
aws_secret_key=dict(aliases=['ec2_secret_key', 'secret_key'],
no_log=True),
aws_access_key=dict(aliases=['ec2_access_key', 'access_key']),
tags=dict(default=None),
tags=dict(default=None, type='dict'),
)
)
tags_param = module.params.get('tags')
tags = {}
if isinstance(tags_param, list):
for item in module.params.get('tags'):
for k,v in item.iteritems():
tags[k] = v
elif isinstance(tags_param, dict):
tags = tags_param
else:
module.fail_json(msg="Invalid format for tags")
aws_secret_key = module.params.get('aws_secret_key')
aws_access_key = module.params.get('aws_access_key')
region = module.params.get('region')
......@@ -137,7 +125,7 @@ def main():
instances = []
instance_ids = []
for res in ec2.get_all_instances(filters={'tag:' + tag: value
for tag, value in tags.iteritems()}):
for tag, value in module.params.get('tags').iteritems()}):
for inst in res.instances:
if inst.state == "running":
instances.append({k: v for k, v in inst.__dict__.iteritems()
......
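With `tags` now declared as `type='dict'`, callers pass a plain mapping rather than a list of single-entry dicts. A hedged usage sketch (tag names and values hypothetical):

```yaml
- ec2_lookup:
    region: us-east-1
    tags:
      Name: webserver
      environment: prod
  register: matching_instances
```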
......@@ -66,7 +66,7 @@ tasks:
- name: tag my launched instances
local_action: ec2_tag resource={{ item.id }} region=eu-west-1 state=present
with_items: ec2.instances
with_items: "{{ ec2.instances }}"
args:
tags:
Name: webserver
......@@ -76,7 +76,7 @@ tasks:
tasks:
- name: tag my instance
local_action: ec2_ntag resource={{ item.id }} region=us-east-1 state=present
with_items: ec2.instances
with_items: "{{ ec2.instances }}"
args:
tags:
- Name: "{{ some_variable }}"
......@@ -101,7 +101,7 @@ def main():
argument_spec = ec2_argument_spec()
argument_spec.update(dict(
resource = dict(required=True),
tags = dict(),
tags = dict(required=False, type='list'),
state = dict(default='present', choices=['present', 'absent', 'list']),
)
)
......
#!/usr/bin/python
# -*- coding: utf-8 -*-
# (c) 2012, Michael DeHaan <michael.dehaan@gmail.com>
#
# This file is part of Ansible
#
# Ansible is free software: you can redistribute it and/or modify
# it under the terms of the GNU General Public License as published by
# the Free Software Foundation, either version 3 of the License, or
# (at your option) any later version.
#
# Ansible is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License
# along with Ansible. If not, see <http://www.gnu.org/licenses/>.
DOCUMENTATION = '''
---
module: git
author:
- "Ansible Core Team"
- "Michael DeHaan"
version_added: "0.0.1"
short_description: Deploy software (or files) from git checkouts
description:
- Manage I(git) checkouts of repositories to deploy files or software.
options:
repo:
required: true
aliases: [ name ]
description:
- git, SSH, or HTTP protocol address of the git repository.
dest:
required: true
description:
- Absolute path of where the repository should be checked out to.
This parameter is required, unless C(clone) is set to C(no)
This change was made in version 1.8.3. Prior to this version,
the C(dest) parameter was always required.
version:
required: false
default: "HEAD"
description:
- What version of the repository to check out. This can be the
full 40-character I(SHA-1) hash, the literal string C(HEAD), a
branch name, or a tag name.
accept_hostkey:
required: false
default: "no"
choices: [ "yes", "no" ]
version_added: "1.5"
description:
- if C(yes), adds the hostkey for the repo url if not already
added. If ssh_opts contains "-o StrictHostKeyChecking=no",
this parameter is ignored.
ssh_opts:
required: false
default: None
version_added: "1.5"
description:
- Creates a wrapper script and exports the path as GIT_SSH
which git then automatically uses to override ssh arguments.
An example value could be "-o StrictHostKeyChecking=no"
key_file:
required: false
default: None
version_added: "1.5"
description:
- Specify an optional private key file to use for the checkout.
reference:
required: false
default: null
version_added: "1.4"
description:
- Reference repository (see "git clone --reference ...")
remote:
required: false
default: "origin"
description:
- Name of the remote.
refspec:
required: false
default: null
version_added: "1.9"
description:
- Add an additional refspec to be fetched.
If version is set to a I(SHA-1) not reachable from any branch
or tag, this option may be necessary to specify the ref containing
the I(SHA-1).
Uses the same syntax as the 'git fetch' command.
An example value could be "refs/meta/config".
force:
required: false
default: "no"
choices: [ "yes", "no" ]
version_added: "0.7"
description:
- If C(yes), any modified files in the working
repository will be discarded. Prior to 0.7, this was always
'yes' and could not be disabled. Prior to 1.9, the default was
`yes`
depth:
required: false
default: null
version_added: "1.2"
description:
- Create a shallow clone with a history truncated to the specified
number or revisions. The minimum possible value is C(1), otherwise
ignored.
clone:
required: false
default: "yes"
choices: [ "yes", "no" ]
version_added: "1.9"
description:
- If C(no), do not clone the repository if it does not exist locally
update:
required: false
default: "yes"
choices: [ "yes", "no" ]
version_added: "1.2"
description:
- If C(no), do not retrieve new revisions from the origin repository
executable:
required: false
default: null
version_added: "1.4"
description:
- Path to git executable to use. If not supplied,
the normal mechanism for resolving binary paths will be used.
bare:
required: false
default: "no"
choices: [ "yes", "no" ]
version_added: "1.4"
description:
- if C(yes), repository will be created as a bare repo, otherwise
it will be a standard repo with a workspace.
recursive:
required: false
default: "yes"
choices: [ "yes", "no" ]
version_added: "1.6"
description:
- if C(no), repository will be cloned without the --recursive
option, skipping sub-modules.
track_submodules:
required: false
default: "no"
choices: ["yes", "no"]
version_added: "1.8"
description:
- if C(yes), submodules will track the latest commit on their
master branch (or other branch specified in .gitmodules). If
C(no), submodules will be kept at the revision specified by the
main project. This is equivalent to specifying the --remote flag
to git submodule update.
verify_commit:
required: false
default: "no"
choices: ["yes", "no"]
version_added: "2.0"
description:
- if C(yes), when cloning or checking out a C(version) verify the
signature of a GPG signed commit. This requires C(git) version>=2.1.0
to be installed. The commit MUST be signed and the public key MUST
be trusted in the GPG trustdb.
requirements:
- git (the command line tool)
notes:
- "If the task seems to be hanging, first verify remote host is in C(known_hosts).
SSH will prompt user to authorize the first contact with a remote host. To avoid this prompt,
one solution is to add the remote host public key in C(/etc/ssh/ssh_known_hosts) before calling
the git module, with the following command: ssh-keyscan -H remote_host.com >> /etc/ssh/ssh_known_hosts."
'''
EXAMPLES = '''
# Example git checkout from Ansible Playbooks
- git: repo=git://foosball.example.org/path/to/repo.git
dest=/srv/checkout
version=release-0.22
# Example read-write git checkout from github
- git: repo=ssh://git@github.com/mylogin/hello.git dest=/home/mylogin/hello
# Example just ensuring the repo checkout exists
- git: repo=git://foosball.example.org/path/to/repo.git dest=/srv/checkout update=no
# Example just get information about the repository whether or not it has
# already been cloned locally.
- git: repo=git://foosball.example.org/path/to/repo.git dest=/srv/checkout clone=no update=no
# Example checkout a github repo and use refspec to fetch all pull requests
- git: repo=https://github.com/ansible/ansible-examples.git dest=/src/ansible-examples refspec=+refs/pull/*:refs/heads/*
'''
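The examples above use the legacy inline `key=value` form; for consistency with the YAML-dict conversions made elsewhere in this commit, the first checkout could equally be written as:

```yaml
- git:
    repo: git://foosball.example.org/path/to/repo.git
    dest: /srv/checkout
    version: release-0.22
```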
import re
import tempfile
def get_submodule_update_params(module, git_path, cwd):
#or: git submodule [--quiet] update [--init] [-N|--no-fetch]
#[-f|--force] [--rebase] [--reference <repository>] [--merge]
#[--recursive] [--] [<path>...]
params = []
# run a bad submodule command to get valid params
cmd = "%s submodule update --help" % (git_path)
rc, stdout, stderr = module.run_command(cmd, cwd=cwd)
lines = stderr.split('\n')
update_line = None
for line in lines:
if 'git submodule [--quiet] update ' in line:
update_line = line
if update_line:
update_line = update_line.replace('[','')
update_line = update_line.replace(']','')
update_line = update_line.replace('|',' ')
parts = shlex.split(update_line)
for part in parts:
if part.startswith('--'):
part = part.replace('--', '')
params.append(part)
return params
def write_ssh_wrapper():
module_dir = get_module_path()
try:
# make sure we have full permission to the module_dir, which
# may not be the case if we're sudo'ing to a non-root user
if os.access(module_dir, os.W_OK|os.R_OK|os.X_OK):
fd, wrapper_path = tempfile.mkstemp(prefix=module_dir + '/')
else:
raise OSError
except (IOError, OSError):
fd, wrapper_path = tempfile.mkstemp()
fh = os.fdopen(fd, 'w+b')
template = """#!/bin/sh
if [ -z "$GIT_SSH_OPTS" ]; then
BASEOPTS=""
else
BASEOPTS=$GIT_SSH_OPTS
fi
if [ -z "$GIT_KEY" ]; then
ssh $BASEOPTS "$@"
else
ssh -i "$GIT_KEY" $BASEOPTS "$@"
fi
"""
fh.write(template)
fh.close()
st = os.stat(wrapper_path)
os.chmod(wrapper_path, st.st_mode | stat.S_IEXEC)
return wrapper_path
def set_git_ssh(ssh_wrapper, key_file, ssh_opts):
if os.environ.get("GIT_SSH"):
del os.environ["GIT_SSH"]
os.environ["GIT_SSH"] = ssh_wrapper
if os.environ.get("GIT_KEY"):
del os.environ["GIT_KEY"]
if key_file:
os.environ["GIT_KEY"] = key_file
if os.environ.get("GIT_SSH_OPTS"):
del os.environ["GIT_SSH_OPTS"]
if ssh_opts:
os.environ["GIT_SSH_OPTS"] = ssh_opts
def get_version(module, git_path, dest, ref="HEAD"):
''' samples the version of the git repo '''
cmd = "%s rev-parse %s" % (git_path, ref)
rc, stdout, stderr = module.run_command(cmd, cwd=dest)
sha = stdout.rstrip('\n')
return sha
def get_submodule_versions(git_path, module, dest, version='HEAD'):
cmd = [git_path, 'submodule', 'foreach', git_path, 'rev-parse', version]
(rc, out, err) = module.run_command(cmd, cwd=dest)
if rc != 0:
module.fail_json(msg='Unable to determine hashes of submodules')
submodules = {}
subm_name = None
for line in out.splitlines():
if line.startswith("Entering '"):
subm_name = line[10:-1]
elif len(line.strip()) == 40:
if subm_name is None:
module.fail_json()
submodules[subm_name] = line.strip()
subm_name = None
else:
module.fail_json(msg='Unable to parse submodule hash line: %s' % line.strip())
if subm_name is not None:
module.fail_json(msg='Unable to find hash for submodule: %s' % subm_name)
return submodules
def clone(git_path, module, repo, dest, remote, depth, version, bare,
reference, refspec, verify_commit):
''' makes a new git repo if it does not already exist '''
dest_dirname = os.path.dirname(dest)
try:
os.makedirs(dest_dirname)
except:
pass
cmd = [ git_path, 'clone' ]
if bare:
cmd.append('--bare')
else:
cmd.extend([ '--origin', remote ])
if is_remote_branch(git_path, module, dest, repo, version) \
or is_remote_tag(git_path, module, dest, repo, version):
cmd.extend([ '--branch', version ])
if depth:
cmd.extend([ '--depth', str(depth) ])
if reference:
cmd.extend([ '--reference', str(reference) ])
cmd.extend([ repo, dest ])
module.run_command(cmd, check_rc=True, cwd=dest_dirname)
if bare:
if remote != 'origin':
module.run_command([git_path, 'remote', 'add', remote, repo], check_rc=True, cwd=dest)
if refspec:
module.run_command([git_path, 'fetch', remote, refspec], check_rc=True, cwd=dest)
if verify_commit:
verify_commit_sign(git_path, module, dest, version)
def has_local_mods(module, git_path, dest, bare):
if bare:
return False
cmd = "%s status -s" % (git_path)
rc, stdout, stderr = module.run_command(cmd, cwd=dest)
lines = stdout.splitlines()
lines = filter(lambda c: not re.search('^\\?\\?.*$', c), lines)
return len(lines) > 0
def reset(git_path, module, dest):
'''
Resets the index and working tree to HEAD.
Discards any changes to tracked files in working
tree since that commit.
'''
cmd = "%s reset --hard HEAD" % (git_path,)
return module.run_command(cmd, check_rc=True, cwd=dest)
def get_remote_head(git_path, module, dest, version, remote, bare):
cloning = False
cwd = None
tag = False
if remote == module.params['repo']:
cloning = True
else:
cwd = dest
if version == 'HEAD':
if cloning:
# cloning the repo, just get the remote's HEAD version
cmd = '%s ls-remote %s -h HEAD' % (git_path, remote)
else:
head_branch = get_head_branch(git_path, module, dest, remote, bare)
cmd = '%s ls-remote %s -h refs/heads/%s' % (git_path, remote, head_branch)
elif is_remote_branch(git_path, module, dest, remote, version):
cmd = '%s ls-remote %s -h refs/heads/%s' % (git_path, remote, version)
elif is_remote_tag(git_path, module, dest, remote, version):
tag = True
cmd = '%s ls-remote %s -t refs/tags/%s*' % (git_path, remote, version)
else:
# appears to be a sha1. return as-is since we apparently
# cannot check for a specific sha1 on the remote
return version
(rc, out, err) = module.run_command(cmd, check_rc=True, cwd=cwd)
if len(out) < 1:
module.fail_json(msg="Could not determine remote revision for %s" % version)
if tag:
# Find the dereferenced tag if this is an annotated tag.
for tag in out.split('\n'):
if tag.endswith(version + '^{}'):
out = tag
break
elif tag.endswith(version):
out = tag
rev = out.split()[0]
return rev
def is_remote_tag(git_path, module, dest, remote, version):
cmd = '%s ls-remote %s -t refs/tags/%s' % (git_path, remote, version)
(rc, out, err) = module.run_command(cmd, check_rc=True, cwd=dest)
if version in out:
return True
else:
return False
def get_branches(git_path, module, dest):
branches = []
cmd = '%s branch -a' % (git_path,)
(rc, out, err) = module.run_command(cmd, cwd=dest)
if rc != 0:
module.fail_json(msg="Could not determine branch data - received %s" % out)
for line in out.split('\n'):
branches.append(line.strip())
return branches
def get_tags(git_path, module, dest):
tags = []
cmd = '%s tag' % (git_path,)
(rc, out, err) = module.run_command(cmd, cwd=dest)
if rc != 0:
module.fail_json(msg="Could not determine tag data - received %s" % out)
for line in out.split('\n'):
tags.append(line.strip())
return tags
def is_remote_branch(git_path, module, dest, remote, version):
cmd = '%s ls-remote %s -h refs/heads/%s' % (git_path, remote, version)
(rc, out, err) = module.run_command(cmd, check_rc=True, cwd=dest)
if version in out:
return True
else:
return False
def is_local_branch(git_path, module, dest, branch):
branches = get_branches(git_path, module, dest)
lbranch = '%s' % branch
if lbranch in branches:
return True
elif '* %s' % branch in branches:
return True
else:
return False
def is_not_a_branch(git_path, module, dest):
branches = get_branches(git_path, module, dest)
for b in branches:
if b.startswith('* ') and ('no branch' in b or 'detached from' in b):
return True
return False
def get_head_branch(git_path, module, dest, remote, bare=False):
'''
Determine what branch HEAD is associated with. This is partly
taken from lib/ansible/utils/__init__.py. It finds the correct
path to .git/HEAD and reads from that file the branch that HEAD is
associated with. In the case of a detached HEAD, this will look
up the branch in .git/refs/remotes/<remote>/HEAD.
'''
if bare:
repo_path = dest
else:
repo_path = os.path.join(dest, '.git')
# Check if the .git is a file. If it is a file, it means that we are in a submodule structure.
if os.path.isfile(repo_path):
try:
gitdir = yaml.safe_load(open(repo_path)).get('gitdir')
# There is a possibility that the .git file has an absolute path.
if os.path.isabs(gitdir):
repo_path = gitdir
else:
repo_path = os.path.join(repo_path.split('.git')[0], gitdir)
except (IOError, AttributeError):
return ''
# Read .git/HEAD for the name of the branch.
# If we're in a detached HEAD state, look up the branch associated with
# the remote HEAD in .git/refs/remotes/<remote>/HEAD
f = open(os.path.join(repo_path, "HEAD"))
if is_not_a_branch(git_path, module, dest):
f.close()
f = open(os.path.join(repo_path, 'refs', 'remotes', remote, 'HEAD'))
branch = f.readline().split('/')[-1].rstrip("\n")
f.close()
return branch
def set_remote_url(git_path, module, repo, dest, remote):
''' updates repo from remote sources '''
commands = [("set a new url %s for %s" % (repo, remote), [git_path, 'remote', 'set-url', remote, repo])]
for (label,command) in commands:
(rc,out,err) = module.run_command(command, cwd=dest)
if rc != 0:
module.fail_json(msg="Failed to %s: %s %s" % (label, out, err))
def fetch(git_path, module, repo, dest, version, remote, bare, refspec):
''' updates repo from remote sources '''
set_remote_url(git_path, module, repo, dest, remote)
commands = []
fetch_str = 'download remote objects and refs'
if bare:
refspecs = ['+refs/heads/*:refs/heads/*', '+refs/tags/*:refs/tags/*']
if refspec:
refspecs.append(refspec)
commands.append((fetch_str, [git_path, 'fetch', remote] + refspecs))
else:
# unlike in bare mode, there's no way to combine the
# additional refspec with the default git fetch behavior,
# so use two commands
commands.append((fetch_str, [git_path, 'fetch', remote]))
refspecs = ['+refs/tags/*:refs/tags/*']
if refspec:
refspecs.append(refspec)
commands.append((fetch_str, [git_path, 'fetch', remote] + refspecs))
for (label,command) in commands:
(rc,out,err) = module.run_command(command, cwd=dest)
if rc != 0:
module.fail_json(msg="Failed to %s: %s %s" % (label, out, err))
def submodules_fetch(git_path, module, remote, track_submodules, dest):
changed = False
if not os.path.exists(os.path.join(dest, '.gitmodules')):
# no submodules
return changed
gitmodules_file = open(os.path.join(dest, '.gitmodules'), 'r')
for line in gitmodules_file:
# Check for new submodules
if not changed and line.strip().startswith('path'):
path = line.split('=', 1)[1].strip()
# Check that dest/path/.git exists
if not os.path.exists(os.path.join(dest, path, '.git')):
changed = True
# add the submodule repo's hostkey
if line.strip().startswith('url'):
repo = line.split('=', 1)[1].strip()
if module.params['ssh_opts'] is not None:
if not "-o StrictHostKeyChecking=no" in module.params['ssh_opts']:
add_git_host_key(module, repo, accept_hostkey=module.params['accept_hostkey'])
else:
add_git_host_key(module, repo, accept_hostkey=module.params['accept_hostkey'])
# Check for updates to existing modules
if not changed:
# Fetch updates
begin = get_submodule_versions(git_path, module, dest)
cmd = [git_path, 'submodule', 'foreach', git_path, 'fetch']
(rc, out, err) = module.run_command(cmd, check_rc=True, cwd=dest)
if rc != 0:
module.fail_json(msg="Failed to fetch submodules: %s" % out + err)
if track_submodules:
# Compare against submodule HEAD
### FIXME: determine this from .gitmodules
version = 'master'
after = get_submodule_versions(git_path, module, dest, '%s/%s'
% (remote, version))
if begin != after:
changed = True
else:
# Compare against the superproject's expectation
cmd = [git_path, 'submodule', 'status']
(rc, out, err) = module.run_command(cmd, check_rc=True, cwd=dest)
if rc != 0:
module.fail_json(msg='Failed to retrieve submodule status: %s' % out + err)
for line in out.splitlines():
if line[0] != ' ':
changed = True
break
return changed
def submodule_update(git_path, module, dest, track_submodules):
''' init and update any submodules '''
# get the valid submodule params
params = get_submodule_update_params(module, git_path, dest)
# skip submodule commands if .gitmodules is not present
if not os.path.exists(os.path.join(dest, '.gitmodules')):
return (0, '', '')
cmd = [ git_path, 'submodule', 'sync' ]
(rc, out, err) = module.run_command(cmd, check_rc=True, cwd=dest)
if 'remote' in params and track_submodules:
cmd = [ git_path, 'submodule', 'update', '--init', '--recursive', '--remote' ]
else:
cmd = [ git_path, 'submodule', 'update', '--init', '--recursive' ]
(rc, out, err) = module.run_command(cmd, cwd=dest)
if rc != 0:
module.fail_json(msg="Failed to init/update submodules: %s" % out + err)
return (rc, out, err)
def switch_version(git_path, module, dest, remote, version, verify_commit):
cmd = ''
if version != 'HEAD':
if is_remote_branch(git_path, module, dest, remote, version):
if not is_local_branch(git_path, module, dest, version):
cmd = "%s checkout --track -b %s %s/%s" % (git_path, version, remote, version)
else:
(rc, out, err) = module.run_command("%s checkout --force %s" % (git_path, version), cwd=dest)
if rc != 0:
module.fail_json(msg="Failed to checkout branch %s" % version)
cmd = "%s reset --hard %s/%s" % (git_path, remote, version)
else:
cmd = "%s checkout --force %s" % (git_path, version)
else:
branch = get_head_branch(git_path, module, dest, remote)
(rc, out, err) = module.run_command("%s checkout --force %s" % (git_path, branch), cwd=dest)
if rc != 0:
module.fail_json(msg="Failed to checkout branch %s" % branch)
cmd = "%s reset --hard %s" % (git_path, remote)
(rc, out1, err1) = module.run_command(cmd, cwd=dest)
if rc != 0:
if version != 'HEAD':
module.fail_json(msg="Failed to checkout %s" % (version))
else:
module.fail_json(msg="Failed to checkout branch %s" % (branch))
if verify_commit:
verify_commit_sign(git_path, module, dest, version)
return (rc, out1, err1)
def verify_commit_sign(git_path, module, dest, version):
cmd = "%s verify-commit %s" % (git_path, version)
(rc, out, err) = module.run_command(cmd, cwd=dest)
if rc != 0:
module.fail_json(msg='Failed to verify GPG signature of commit/tag "%s"' % version)
return (rc, out, err)
# ===========================================
def main():
module = AnsibleModule(
argument_spec = dict(
dest=dict(),
repo=dict(required=True, aliases=['name']),
version=dict(default='HEAD'),
remote=dict(default='origin'),
refspec=dict(default=None),
reference=dict(default=None),
force=dict(default='no', type='bool'),
depth=dict(default=None, type='int'),
clone=dict(default='yes', type='bool'),
update=dict(default='yes', type='bool'),
verify_commit=dict(default='no', type='bool'),
accept_hostkey=dict(default='no', type='bool'),
key_file=dict(default=None, required=False),
ssh_opts=dict(default=None, required=False),
executable=dict(default=None),
bare=dict(default='no', type='bool'),
recursive=dict(default='yes', type='bool'),
track_submodules=dict(default='no', type='bool'),
),
supports_check_mode=True
)
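# A minimal task using this module might look like the following sketch
# (repo URL and paths are illustrative):
#
#   - git:
#       repo: https://github.com/example/repo.git
#       dest: /edx/app/example/repo
#       version: master
#       accept_hostkey: yes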
dest = module.params['dest']
repo = module.params['repo']
version = module.params['version']
remote = module.params['remote']
refspec = module.params['refspec']
force = module.params['force']
depth = module.params['depth']
update = module.params['update']
allow_clone = module.params['clone']
bare = module.params['bare']
verify_commit = module.params['verify_commit']
reference = module.params['reference']
git_path = module.params['executable'] or module.get_bin_path('git', True)
key_file = module.params['key_file']
ssh_opts = module.params['ssh_opts']
# We screenscrape a huge amount of git commands so use C locale anytime we
# call run_command()
module.run_command_environ_update = dict(LANG='C', LC_ALL='C', LC_MESSAGES='C', LC_CTYPE='C')
gitconfig = None
if not dest and allow_clone:
module.fail_json(msg="the destination directory must be specified unless clone=no")
elif dest:
dest = os.path.abspath(os.path.expanduser(dest))
if bare:
gitconfig = os.path.join(dest, 'config')
else:
gitconfig = os.path.join(dest, '.git', 'config')
# make sure the key_file path is expanded for ~ and $HOME
if key_file is not None:
key_file = os.path.abspath(os.path.expanduser(key_file))
# create a wrapper script and export
# GIT_SSH=<path> as an environment variable
# for git to use the wrapper script
ssh_wrapper = None
if key_file or ssh_opts:
ssh_wrapper = write_ssh_wrapper()
set_git_ssh(ssh_wrapper, key_file, ssh_opts)
module.add_cleanup_file(path=ssh_wrapper)
# add the git repo's hostkey
if module.params['ssh_opts'] is not None:
if not "-o StrictHostKeyChecking=no" in module.params['ssh_opts']:
add_git_host_key(module, repo, accept_hostkey=module.params['accept_hostkey'])
else:
add_git_host_key(module, repo, accept_hostkey=module.params['accept_hostkey'])
recursive = module.params['recursive']
track_submodules = module.params['track_submodules']
rc, out, err, status = (0, None, None, None)
before = None
local_mods = False
repo_updated = None
if (dest and not os.path.exists(gitconfig)) or (not dest and not allow_clone):
# if there is no git configuration, do a clone operation unless:
# * the user requested no clone (they just want info)
# * we're doing a check mode test
# In those cases we do an ls-remote
if module.check_mode or not allow_clone:
remote_head = get_remote_head(git_path, module, dest, version, repo, bare)
module.exit_json(changed=True, before=before, after=remote_head)
# there's no git config, so clone
clone(git_path, module, repo, dest, remote, depth, version, bare, reference, refspec, verify_commit)
repo_updated = True
elif not update:
# Just return having found a repo already in the dest path;
# this does no checking that the repo is the actual repo
# requested.
before = get_version(module, git_path, dest)
module.exit_json(changed=False, before=before, after=before)
else:
# else do a pull
local_mods = has_local_mods(module, git_path, dest, bare)
before = get_version(module, git_path, dest)
if local_mods:
# failure should happen regardless of check mode
if not force:
module.fail_json(msg="Local modifications exist in repository (force=no).")
# if force and in non-check mode, do a reset
if not module.check_mode:
reset(git_path, module, dest)
# exit if already at desired sha version
set_remote_url(git_path, module, repo, dest, remote)
remote_head = get_remote_head(git_path, module, dest, version, remote, bare)
if before == remote_head:
if local_mods:
module.exit_json(changed=True, before=before, after=remote_head,
msg="Local modifications exist")
elif is_remote_tag(git_path, module, dest, repo, version):
# if the remote is a tag and we have the tag locally, exit early
if version in get_tags(git_path, module, dest):
repo_updated = False
else:
# if the remote is a branch and we have the branch locally, exit early
if version in get_branches(git_path, module, dest):
repo_updated = False
if repo_updated is None:
if module.check_mode:
module.exit_json(changed=True, before=before, after=remote_head)
fetch(git_path, module, repo, dest, version, remote, bare, refspec)
repo_updated = True
# switch to version specified regardless of whether
# we got new revisions from the repository
if not bare:
switch_version(git_path, module, dest, remote, version, verify_commit)
# Deal with submodules
submodules_updated = False
if recursive and not bare:
submodules_updated = submodules_fetch(git_path, module, remote, track_submodules, dest)
if module.check_mode:
if submodules_updated:
module.exit_json(changed=True, before=before, after=remote_head, submodules_changed=True)
else:
module.exit_json(changed=False, before=before, after=remote_head)
if submodules_updated:
# Switch to version specified
submodule_update(git_path, module, dest, track_submodules)
# determine if we changed anything
after = get_version(module, git_path, dest)
changed = False
if before != after or local_mods or submodules_updated:
changed = True
# cleanup the wrapper script
if ssh_wrapper:
try:
os.remove(ssh_wrapper)
except OSError:
# No need to fail if the file already doesn't exist
pass
module.exit_json(changed=changed, before=before, after=after)
# import module snippets
from ansible.module_utils.basic import *
from ansible.module_utils.known_hosts import *
if __name__ == '__main__':
main()
#!/usr/bin/python
# -*- coding: utf-8 -*-
# (c) 2016, Brian Coca <bcoca@ansible.com>
#
# This file is part of Ansible
#
# Ansible is free software: you can redistribute it and/or modify
# it under the terms of the GNU General Public License as published by
# the Free Software Foundation, either version 3 of the License, or
# (at your option) any later version.
#
# Ansible is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License
# along with Ansible. If not, see <http://www.gnu.org/licenses/>.
DOCUMENTATION = '''
module: systemd
author:
- "Ansible Core Team"
version_added: "2.2"
short_description: Manage services.
description:
- Controls systemd services on remote hosts.
options:
name:
required: true
description:
- Name of the service.
aliases: ['unit', 'service']
state:
required: false
default: null
choices: [ 'started', 'stopped', 'restarted', 'reloaded' ]
description:
- C(started)/C(stopped) are idempotent actions that will not run commands unless necessary.
C(restarted) will always bounce the service. C(reloaded) will always reload.
enabled:
required: false
choices: [ "yes", "no" ]
default: null
description:
- Whether the service should start on boot. B(At least one of state and enabled is required.)
masked:
required: false
choices: [ "yes", "no" ]
default: null
description:
- Whether the unit should be masked or not. A masked unit is impossible to start.
daemon_reload:
required: false
default: no
choices: [ "yes", "no" ]
description:
- Run daemon-reload before doing any other operations, to make sure systemd has read any changes.
aliases: ['daemon-reload']
user:
required: false
default: no
choices: [ "yes", "no" ]
description:
- Run systemctl against the service manager of the calling user, rather than the service manager
of the system.
notes:
- One option other than name is required.
requirements:
- A system managed by systemd
'''
EXAMPLES = '''
# Example action to start service httpd, if not running
- systemd:
state: started
name: httpd
# Example action to stop service cron on debian, if running
- systemd:
name: cron
state: stopped
# Example action to restart service cron on centos, in all cases, also issue daemon-reload to pick up config changes
- systemd:
state: restarted
daemon_reload: yes
name: crond
# Example action to reload service httpd, in all cases
- systemd:
name: httpd
state: reloaded
# Example action to enable service httpd and ensure it is not masked
- systemd:
name: httpd
enabled: yes
masked: no
# Example action to enable a timer for dnf-automatic
- systemd:
name: dnf-automatic.timer
state: started
enabled: True
'''
RETURN = '''
status:
description: A dictionary with the key=value pairs returned from `systemctl show`
returned: success
type: complex
sample: {
"ActiveEnterTimestamp": "Sun 2016-05-15 18:28:49 EDT",
"ActiveEnterTimestampMonotonic": "8135942",
"ActiveExitTimestampMonotonic": "0",
"ActiveState": "active",
"After": "auditd.service systemd-user-sessions.service time-sync.target systemd-journald.socket basic.target system.slice",
"AllowIsolate": "no",
"Before": "shutdown.target multi-user.target",
"BlockIOAccounting": "no",
"BlockIOWeight": "1000",
"CPUAccounting": "no",
"CPUSchedulingPolicy": "0",
"CPUSchedulingPriority": "0",
"CPUSchedulingResetOnFork": "no",
"CPUShares": "1024",
"CanIsolate": "no",
"CanReload": "yes",
"CanStart": "yes",
"CanStop": "yes",
"CapabilityBoundingSet": "18446744073709551615",
"ConditionResult": "yes",
"ConditionTimestamp": "Sun 2016-05-15 18:28:49 EDT",
"ConditionTimestampMonotonic": "7902742",
"Conflicts": "shutdown.target",
"ControlGroup": "/system.slice/crond.service",
"ControlPID": "0",
"DefaultDependencies": "yes",
"Delegate": "no",
"Description": "Command Scheduler",
"DevicePolicy": "auto",
"EnvironmentFile": "/etc/sysconfig/crond (ignore_errors=no)",
"ExecMainCode": "0",
"ExecMainExitTimestampMonotonic": "0",
"ExecMainPID": "595",
"ExecMainStartTimestamp": "Sun 2016-05-15 18:28:49 EDT",
"ExecMainStartTimestampMonotonic": "8134990",
"ExecMainStatus": "0",
"ExecReload": "{ path=/bin/kill ; argv[]=/bin/kill -HUP $MAINPID ; ignore_errors=no ; start_time=[n/a] ; stop_time=[n/a] ; pid=0 ; code=(null) ; status=0/0 }",
"ExecStart": "{ path=/usr/sbin/crond ; argv[]=/usr/sbin/crond -n $CRONDARGS ; ignore_errors=no ; start_time=[n/a] ; stop_time=[n/a] ; pid=0 ; code=(null) ; status=0/0 }",
"FragmentPath": "/usr/lib/systemd/system/crond.service",
"GuessMainPID": "yes",
"IOScheduling": "0",
"Id": "crond.service",
"IgnoreOnIsolate": "no",
"IgnoreOnSnapshot": "no",
"IgnoreSIGPIPE": "yes",
"InactiveEnterTimestampMonotonic": "0",
"InactiveExitTimestamp": "Sun 2016-05-15 18:28:49 EDT",
"InactiveExitTimestampMonotonic": "8135942",
"JobTimeoutUSec": "0",
"KillMode": "process",
"KillSignal": "15",
"LimitAS": "18446744073709551615",
"LimitCORE": "18446744073709551615",
"LimitCPU": "18446744073709551615",
"LimitDATA": "18446744073709551615",
"LimitFSIZE": "18446744073709551615",
"LimitLOCKS": "18446744073709551615",
"LimitMEMLOCK": "65536",
"LimitMSGQUEUE": "819200",
"LimitNICE": "0",
"LimitNOFILE": "4096",
"LimitNPROC": "3902",
"LimitRSS": "18446744073709551615",
"LimitRTPRIO": "0",
"LimitRTTIME": "18446744073709551615",
"LimitSIGPENDING": "3902",
"LimitSTACK": "18446744073709551615",
"LoadState": "loaded",
"MainPID": "595",
"MemoryAccounting": "no",
"MemoryLimit": "18446744073709551615",
"MountFlags": "0",
"Names": "crond.service",
"NeedDaemonReload": "no",
"Nice": "0",
"NoNewPrivileges": "no",
"NonBlocking": "no",
"NotifyAccess": "none",
"OOMScoreAdjust": "0",
"OnFailureIsolate": "no",
"PermissionsStartOnly": "no",
"PrivateNetwork": "no",
"PrivateTmp": "no",
"RefuseManualStart": "no",
"RefuseManualStop": "no",
"RemainAfterExit": "no",
"Requires": "basic.target",
"Restart": "no",
"RestartUSec": "100ms",
"Result": "success",
"RootDirectoryStartOnly": "no",
"SameProcessGroup": "no",
"SecureBits": "0",
"SendSIGHUP": "no",
"SendSIGKILL": "yes",
"Slice": "system.slice",
"StandardError": "inherit",
"StandardInput": "null",
"StandardOutput": "journal",
"StartLimitAction": "none",
"StartLimitBurst": "5",
"StartLimitInterval": "10000000",
"StatusErrno": "0",
"StopWhenUnneeded": "no",
"SubState": "running",
"SyslogLevelPrefix": "yes",
"SyslogPriority": "30",
"TTYReset": "no",
"TTYVHangup": "no",
"TTYVTDisallocate": "no",
"TimeoutStartUSec": "1min 30s",
"TimeoutStopUSec": "1min 30s",
"TimerSlackNSec": "50000",
"Transient": "no",
"Type": "simple",
"UMask": "0022",
"UnitFileState": "enabled",
"WantedBy": "multi-user.target",
"Wants": "system.slice",
"WatchdogTimestampMonotonic": "0",
"WatchdogUSec": "0",
}
'''
import os
import glob
from ansible.module_utils.basic import *
# ===========================================
# Main control flow
def main():
# init
module = AnsibleModule(
argument_spec = dict(
name = dict(required=True, type='str', aliases=['unit', 'service']),
state = dict(choices=[ 'started', 'stopped', 'restarted', 'reloaded'], type='str'),
enabled = dict(type='bool'),
masked = dict(type='bool'),
daemon_reload= dict(type='bool', default=False, aliases=['daemon-reload']),
user= dict(type='bool', default=False),
),
supports_check_mode=True,
required_one_of=[['state', 'enabled', 'masked', 'daemon_reload']],
)
# initialize
systemctl = module.get_bin_path('systemctl')
if module.params['user']:
systemctl = systemctl + " --user"
unit = module.params['name']
rc = 0
out = err = ''
result = {
'name': unit,
'changed': False,
'status': {},
}
# Run daemon-reload first, if requested
if module.params['daemon_reload']:
(rc, out, err) = module.run_command("%s daemon-reload" % (systemctl))
if rc != 0:
module.fail_json(msg='failure %d during daemon-reload: %s' % (rc, err))
#TODO: check if service exists
(rc, out, err) = module.run_command("%s show '%s'" % (systemctl, unit))
if rc != 0:
module.fail_json(msg='failure %d running systemctl show for %r: %s' % (rc, unit, err))
# load return of systemctl show into dictionary for easy access and return
k = None
multival = []
for line in out.split('\n'): # systemd can have multiline values delimited with {}
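# For illustration, 'systemctl show' emits 'Key=value' lines such as
# 'Id=crond.service'; brace-delimited values like
# 'ExecStart={ path=/usr/sbin/crond ; argv[]=... }' may wrap across several
# lines, which the multival buffer below stitches back together.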
if line.strip():
if k is None:
if '=' in line:
k,v = line.split('=', 1)
if v.lstrip().startswith('{'):
if not v.rstrip().endswith('}'):
multival.append(line)
continue
result['status'][k] = v.strip()
k = None
else:
if line.rstrip().endswith('}'):
result['status'][k] = '\n'.join(multival).strip()
multival = []
k = None
else:
multival.append(line)
if 'LoadState' in result['status'] and result['status']['LoadState'] == 'not-found':
module.fail_json(msg='Could not find the requested service %r: %s' % (unit, err))
elif 'LoadError' in result['status']:
module.fail_json(msg="Failed to get the service status '%s': %s" % (unit, result['status']['LoadError']))
# mask/unmask the service, if requested
if module.params['masked'] is not None:
masked = (result['status']['LoadState'] == 'masked')
# Change?
if masked != module.params['masked']:
result['changed'] = True
if module.params['masked']:
action = 'mask'
else:
action = 'unmask'
if not module.check_mode:
(rc, out, err) = module.run_command("%s %s '%s'" % (systemctl, action, unit))
if rc != 0:
module.fail_json(msg="Unable to %s service %s: %s" % (action, unit, err))
# Enable/disable service startup at boot if requested
if module.params['enabled'] is not None:
# do we need to enable the service?
enabled = False
(rc, out, err) = module.run_command("%s is-enabled '%s'" % (systemctl, unit))
# check systemctl result or if it is a init script
if rc == 0:
enabled = True
elif rc == 1:
# Deals with init scripts
# if both init script and unit file exist stdout should have enabled/disabled, otherwise use rc entries
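# e.g. for unit 'cron', the glob '/etc/rc?.d/S??cron' matches SysV start
# links such as /etc/rc2.d/S20cron (illustrative path)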
initscript = '/etc/init.d/' + unit
if os.path.exists(initscript) and os.access(initscript, os.X_OK) and \
(not out.startswith('disabled') or bool(glob.glob('/etc/rc?.d/S??' + unit))):
enabled = True
# default to current state
result['enabled'] = enabled
# Change enable/disable if needed
if enabled != module.params['enabled']:
result['changed'] = True
if module.params['enabled']:
action = 'enable'
else:
action = 'disable'
if not module.check_mode:
(rc, out, err) = module.run_command("%s %s '%s'" % (systemctl, action, unit))
if rc != 0:
module.fail_json(msg="Unable to %s service %s: %s" % (action, unit, err))
result['enabled'] = not enabled
if module.params['state'] is not None:
# default to desired state
result['state'] = module.params['state']
# What is current service state?
if 'ActiveState' in result['status']:
action = None
if module.params['state'] == 'started':
if result['status']['ActiveState'] != 'active':
action = 'start'
result['changed'] = True
elif module.params['state'] == 'stopped':
if result['status']['ActiveState'] == 'active':
action = 'stop'
result['changed'] = True
else:
action = module.params['state'][:-2] # remove 'ed' from restarted/reloaded
result['state'] = 'started'
result['changed'] = True
if action:
if not module.check_mode:
(rc, out, err) = module.run_command("%s %s '%s'" % (systemctl, action, unit))
if rc != 0:
module.fail_json(msg="Unable to %s service %s: %s" % (action, unit, err))
else:
# this should not happen?
module.fail_json(msg="Service is in unknown state", status=result['status'])
module.exit_json(**result)
if __name__ == '__main__':
main()
......@@ -24,7 +24,7 @@
apt:
name: "{{ item }}"
state: present
with_items: ad_hoc_reporting_debian_pkgs
with_items: "{{ ad_hoc_reporting_debian_pkgs }}"
tags:
- install:system-requirements
......@@ -58,7 +58,7 @@
name: "{{ item }}"
state: present
extra_args: "-i {{ COMMON_PYPI_MIRROR_URL }}"
with_items: ad_hoc_reporting_pip_pkgs
with_items: "{{ ad_hoc_reporting_pip_pkgs }}"
tags:
- install:app-requirements
......@@ -92,7 +92,7 @@
- scripts
- scripts:mysql
- install:code
with_items: AD_HOC_REPORTING_REPLICA_DB_HOSTS
with_items: "{{ AD_HOC_REPORTING_REPLICA_DB_HOSTS }}"
# These templates rely on there being a global
# read_only mongo user, you must override the default
......
......@@ -27,3 +27,6 @@
##
# Defaults for role add_user
#
#
#
dirs: []
......@@ -65,8 +65,7 @@
owner: "{{ item.owner }}"
group: "{{ item.group }}"
mode: "{{ item.mode | default('0755') }}"
with_items: dirs
when: dirs is defined
with_items: "{{ dirs }}"
tags:
- install
- install:base
......@@ -12,7 +12,7 @@
notify: restart alton
- name: Checkout the code
git_2_0_1:
git:
dest: "{{ alton_code_dir }}"
repo: "{{ alton_source_repo }}"
version: "{{ alton_version }}"
......
......@@ -33,42 +33,40 @@
#
- name: setup the analytics_api env file
template: >
src="edx/app/analytics_api/analytics_api_env.j2"
dest="{{ analytics_api_home }}/analytics_api_env"
owner={{ analytics_api_user }}
group={{ analytics_api_user }}
mode=0644
template:
src: "edx/app/analytics_api/analytics_api_env.j2"
dest: "{{ analytics_api_home }}/analytics_api_env"
owner: "{{ analytics_api_user }}"
group: "{{ analytics_api_user }}"
mode: 0644
tags:
- install
- install:configuration
- name: "add gunicorn configuration file"
template: >
src=edx/app/analytics_api/analytics_api_gunicorn.py.j2
dest={{ analytics_api_home }}/analytics_api_gunicorn.py
template:
src: edx/app/analytics_api/analytics_api_gunicorn.py.j2
dest: "{{ analytics_api_home }}/analytics_api_gunicorn.py"
become_user: "{{ analytics_api_user }}"
tags:
- install
- install:configuration
- name: install application requirements
pip: >
requirements="{{ analytics_api_requirements_base }}/{{ item }}"
virtualenv="{{ analytics_api_home }}/venvs/{{ analytics_api_service_name }}"
state=present
pip:
requirements: "{{ analytics_api_requirements_base }}/{{ item }}"
virtualenv: "{{ analytics_api_home }}/venvs/{{ analytics_api_service_name }}"
state: present
become_user: "{{ analytics_api_user }}"
with_items: analytics_api_requirements
with_items: "{{ analytics_api_requirements }}"
tags:
- install
- install:app-requirements
- name: migrate
shell: >
chdir={{ analytics_api_code_dir }}
DB_MIGRATION_USER='{{ COMMON_MYSQL_MIGRATE_USER }}'
DB_MIGRATION_PASS='{{ COMMON_MYSQL_MIGRATE_PASS }}'
{{ analytics_api_home }}/venvs/{{ analytics_api_service_name }}/bin/python ./manage.py migrate --noinput
shell: "DB_MIGRATION_USER='{{ COMMON_MYSQL_MIGRATE_USER }}' DB_MIGRATION_PASS='{{ COMMON_MYSQL_MIGRATE_PASS }}' {{ analytics_api_home }}/venvs/{{ analytics_api_service_name }}/bin/python ./manage.py migrate --noinput"
args:
chdir: "{{ analytics_api_code_dir }}"
become_user: "{{ analytics_api_user }}"
environment: "{{ analytics_api_environment }}"
when: migrate_db is defined and migrate_db|lower == "yes"
......@@ -77,9 +75,9 @@
- migrate:db
- name: run collectstatic
shell: >
chdir={{ analytics_api_code_dir }}
{{ analytics_api_home }}/venvs/{{ analytics_api_service_name }}/bin/python manage.py collectstatic --noinput
shell: "{{ analytics_api_home }}/venvs/{{ analytics_api_service_name }}/bin/python manage.py collectstatic --noinput"
args:
chdir: "{{ analytics_api_code_dir }}"
become_user: "{{ analytics_api_user }}"
environment: "{{ analytics_api_environment }}"
tags:
......@@ -87,40 +85,44 @@
- assets:gather
- name: create api users
shell: >
chdir={{ analytics_api_code_dir }}
{{ analytics_api_home }}/venvs/{{ analytics_api_service_name }}/bin/python manage.py set_api_key {{ item.key }} {{ item.value }}
shell: "{{ analytics_api_home }}/venvs/{{ analytics_api_service_name }}/bin/python manage.py set_api_key {{ item.key }} {{ item.value }}"
args:
chdir: "{{ analytics_api_code_dir }}"
become_user: "{{ analytics_api_user }}"
environment: "{{ analytics_api_environment }}"
with_dict: ANALYTICS_API_USERS
with_dict: "{{ ANALYTICS_API_USERS }}"
tags:
- manage
- manage:app-users
- name: write out the supervisor wrapper
template: >
src=edx/app/analytics_api/analytics_api.sh.j2
dest={{ analytics_api_home }}/{{ analytics_api_service_name }}.sh
mode=0650 owner={{ supervisor_user }} group={{ common_web_user }}
template:
src: edx/app/analytics_api/analytics_api.sh.j2
dest: "{{ analytics_api_home }}/{{ analytics_api_service_name }}.sh"
mode: 0650
owner: "{{ supervisor_user }}"
group: "{{ common_web_user }}"
tags:
- install
- install:configuration
- name: write supervisord config
template: >
src=edx/app/supervisor/conf.d.available/analytics_api.conf.j2
dest="{{ supervisor_available_dir }}/{{ analytics_api_service_name }}.conf"
owner={{ supervisor_user }} group={{ common_web_user }} mode=0644
template:
src: edx/app/supervisor/conf.d.available/analytics_api.conf.j2
dest: "{{ supervisor_available_dir }}/{{ analytics_api_service_name }}.conf"
owner: "{{ supervisor_user }}"
group: "{{ common_web_user }}"
mode: 0644
tags:
- install
- install:configuration
- name: enable supervisor script
file: >
src={{ supervisor_available_dir }}/{{ analytics_api_service_name }}.conf
dest={{ supervisor_cfg_dir }}/{{ analytics_api_service_name }}.conf
state=link
force=yes
file:
src: "{{ supervisor_available_dir }}/{{ analytics_api_service_name }}.conf"
dest: "{{ supervisor_cfg_dir }}/{{ analytics_api_service_name }}.conf"
state: link
force: yes
when: not disable_edx_services
tags:
- install
......@@ -134,10 +136,10 @@
- manage:start
- name: create symlinks from the venv bin dir
file: >
src="{{ analytics_api_home }}/venvs/{{ analytics_api_service_name }}/bin/{{ item }}"
dest="{{ COMMON_BIN_DIR }}/{{ item.split('.')[0] }}.analytics_api"
state=link
file:
src: "{{ analytics_api_home }}/venvs/{{ analytics_api_service_name }}/bin/{{ item }}"
dest: "{{ COMMON_BIN_DIR }}/{{ item.split('.')[0] }}.analytics_api"
state: link
with_items:
- python
- pip
......@@ -147,10 +149,10 @@
- install:base
- name: create symlinks from the repo dir
file: >
src="{{ analytics_api_code_dir }}/{{ item }}"
dest="{{ COMMON_BIN_DIR }}/{{ item.split('.')[0] }}.analytics_api"
state=link
file:
src: "{{ analytics_api_code_dir }}/{{ item }}"
dest: "{{ COMMON_BIN_DIR }}/{{ item.split('.')[0] }}.analytics_api"
state: link
with_items:
- manage.py
tags:
......@@ -158,11 +160,11 @@
- install:base
- name: restart analytics_api
supervisorctl: >
state=restarted
supervisorctl_path={{ supervisor_ctl }}
config={{ supervisor_cfg }}
name={{ analytics_api_service_name }}
supervisorctl:
state: restarted
supervisorctl_path: "{{ supervisor_ctl }}"
config: "{{ supervisor_cfg }}"
name: "{{ analytics_api_service_name }}"
when: not disable_edx_services
become_user: "{{ supervisor_service_user }}"
tags:
......
......@@ -11,12 +11,17 @@
# Defaults for role analytics_pipeline
#
ANALYTICS_PIPELINE_OUTPUT_DATABASE_USER: pipeline001
ANALYTICS_PIPELINE_OUTPUT_DATABASE_PASSWORD: password
ANALYTICS_PIPELINE_OUTPUT_DATABASE_HOST: localhost
ANALYTICS_PIPELINE_OUTPUT_DATABASE_PORT: 3306
ANALYTICS_PIPELINE_OUTPUT_DATABASE_NAME: "{{ ANALYTICS_API_REPORTS_DB_NAME }}"
ANALYTICS_PIPELINE_OUTPUT_DATABASE:
username: pipeline001
password: password
host: localhost
port: 3306
username: "{{ ANALYTICS_PIPELINE_OUTPUT_DATABASE_USER }}"
password: "{{ ANALYTICS_PIPELINE_OUTPUT_DATABASE_PASSWORD }}"
host: "{{ ANALYTICS_PIPELINE_OUTPUT_DATABASE_HOST }}"
port: "{{ ANALYTICS_PIPELINE_OUTPUT_DATABASE_PORT }}"
ANALYTICS_PIPELINE_INPUT_DATABASE:
username: "{{ COMMON_MYSQL_READ_ONLY_USER }}"
......
......@@ -89,7 +89,7 @@
- install:configuration
- name: Util library source checked out
git_2_0_1:
git:
repo: "{{ analytics_pipeline_util_library.repo }}"
dest: "{{ analytics_pipeline_util_library.path }}"
version: "{{ analytics_pipeline_util_library.version }}"
......
......@@ -3,13 +3,13 @@
#
# Tasks for role {{ role_name }}
#
#
# Overview:
#
#
#
# Dependencies:
#
#
#
# Example play:
#
#
......@@ -149,7 +149,7 @@
tags:
- install
- install:app-requirements
- name: run collectstatic
command: make static
args:
......@@ -161,7 +161,7 @@
- assets:gather
- name: restart the application
supervisorctl:
supervisorctl:
state: restarted
supervisorctl_path: "{{ '{{' }} supervisor_ctl }}"
config: "{{ '{{' }} supervisor_cfg }}"
......@@ -173,20 +173,24 @@
- manage:start
- name: Copying nginx configs for {{ role_name }}
template: >
src=edx/app/nginx/sites-available/{{ role_name }}.j2
dest={{ '{{' }} nginx_sites_available_dir }}/{{ role_name }}
owner=root group={{ '{{' }} common_web_user }} mode=0640
template:
src: "edx/app/nginx/sites-available/{{ role_name }}.j2"
dest: "{{ '{{' }} nginx_sites_available_dir }}/{{ role_name }}"
owner: root
group: "{{ '{{' }} common_web_user }}"
mode: 0640
notify: reload nginx
tags:
- install
- install:vhosts
- name: Creating nginx config links for {{ role_name }}
file: >
src={{ '{{' }} nginx_sites_available_dir }}/{{ role_name }}
dest={{ '{{' }} nginx_sites_enabled_dir }}/{{ role_name }}
state=link owner=root group=root
file:
src: "{{ '{{' }} nginx_sites_available_dir }}/{{ role_name }}"
dest: "{{ '{{' }} nginx_sites_enabled_dir }}/{{ role_name }}"
state: link
owner: root
group: root
notify: reload nginx
tags:
- install
......
......@@ -23,41 +23,41 @@
- name: install antivirus system packages
apt: pkg={{ item }} install_recommends=yes state=present
with_items: antivirus_debian_pkgs
with_items: "{{ antivirus_debian_pkgs }}"
- name: create antivirus scanner user
user: >
name="{{ antivirus_user }}"
home="{{ antivirus_app_dir }}"
createhome=no
shell=/bin/false
user:
name: "{{ antivirus_user }}"
home: "{{ antivirus_app_dir }}"
createhome: no
shell: /bin/false
- name: create antivirus app and data dirs
file: >
path="{{ item }}"
state=directory
owner="{{ antivirus_user }}"
group="{{ antivirus_user }}"
file:
path: "{{ item }}"
state: directory
owner: "{{ antivirus_user }}"
group: "{{ antivirus_user }}"
with_items:
- "{{ antivirus_app_dir }}"
- "{{ antivirus_app_dir }}/data"
- name: install antivirus s3 scanner script
template: >
src=s3_bucket_virus_scan.sh.j2
dest={{ antivirus_app_dir }}/s3_bucket_virus_scan.sh
mode=0555
owner={{ antivirus_user }}
group={{ antivirus_user }}
template:
src: s3_bucket_virus_scan.sh.j2
dest: "{{ antivirus_app_dir }}/s3_bucket_virus_scan.sh"
mode: "0555"
owner: "{{ antivirus_user }}"
group: "{{ antivirus_user }}"
- name: install antivirus s3 scanner cronjob
cron: >
name="antivirus-{{ item }}"
job="{{ antivirus_app_dir }}/s3_bucket_virus_scan.sh -b '{{ item }}' -m '{{ ANTIVIRUS_MAILTO }}' -f '{{ ANTIVIRUS_MAILFROM }}'"
backup=yes
cron_file=antivirus-{{ item }}
user={{ antivirus_user }}
hour="*"
minute="0"
day="*"
with_items: ANTIVIRUS_BUCKETS
cron:
name: "antivirus-{{ item }}"
job: "{{ antivirus_app_dir }}/s3_bucket_virus_scan.sh -b '{{ item }}' -m '{{ ANTIVIRUS_MAILTO }}' -f '{{ ANTIVIRUS_MAILFROM }}'"
backup: yes
cron_file: "antivirus-{{ item }}"
user: "{{ antivirus_user }}"
hour: "*"
minute: "0"
day: "*"
with_items: "{{ ANTIVIRUS_BUCKETS }}"
---
#
# edX Configuration
#
# github: https://github.com/edx/configuration
# wiki: https://openedx.atlassian.net/wiki/display/OpenOPS
# code style: https://openedx.atlassian.net/wiki/display/OpenOPS/Ansible+Code+Conventions
# license: https://github.com/edx/configuration/blob/master/LICENSE.TXT
#
##
# Defaults for role asqatasun
#
ASQATASUN_LOCALE: 'en_US.UTF-8'
ASQATASUN_DATABASE_NAME: 'asqatasun'
ASQATASUN_DATABASE_USER: 'asqatasun'
ASQATASUN_DATABASE_PASSWORD: 'changeme'
ASQATASUN_DATABASE_HOST: 'localhost'
ASQATASUN_DATABASE_ENCODING: 'utf8'
ASQATASUN_DATABASE_COLLATION: 'utf8_general_ci'
ASQATASUN_URL: 'http://localhost:8080/asqatasun/'
ASQATASUN_ADMIN_EMAIL: 'admin@example.com'
ASQATASUN_ADMIN_PASSWORD: 'changeme'
asqatasun_debian_pkgs:
- wget
- bzip2
- openjdk-7-jre
- unzip
- mysql-server
- libmysql-java
- python-mysqldb
- libtomcat7-java
- tomcat7
- libspring-instrument-java
- xvfb
- libdbus-glib-1-2
- mailutils
- postfix
locale: "{{ ASQATASUN_LOCALE }}"
asqatasun_download_link: "http://download.asqatasun.org/asqatasun-latest.tar.gz"
# Asqatasun version that you want to install. Get the full list of releases
# by clicking on the releases tab of the GitHub main interface.
asqatasun_version: "asqatasun-4.0.0-rc.1"
# Go to the appropriate link below to find your desired ESR Firefox
# For 32-bit architecture
# http://download-origin.cdn.mozilla.net/pub/firefox/releases/31.4.0esr/linux-i686/
# For 64-bit architecture
# http://download-origin.cdn.mozilla.net/pub/firefox/releases/31.4.0esr/linux-x86_64/
# Default is en-US in our example
fixfox_esr_link: "http://download-origin.cdn.mozilla.net/pub/firefox/releases/31.4.0esr/linux-x86_64/en-US/firefox-31.4.0esr.tar.bz2"
# MySQL variables for Asqatasun
default_character_set: "utf8"
collation_server: "utf8_general_ci"
init_connect: "SET NAMES utf8"
character_set_server: "utf8"
mysql_max_allowed_packet: "64M"
asqatasun_parameters:
db_name: "{{ ASQATASUN_DATABASE_NAME }}"
db_user: "{{ ASQATASUN_DATABASE_USER }}"
db_password: "{{ ASQATASUN_DATABASE_PASSWORD }}"
db_host: "{{ ASQATASUN_DATABASE_HOST }}"
db_encoding: "{{ ASQATASUN_DATABASE_ENCODING }}"
db_collation: "{{ ASQATASUN_DATABASE_COLLATION }}"
url: "{{ ASQATASUN_URL }}"
admin_email: "{{ ASQATASUN_ADMIN_EMAIL }}"
admin_passwd: "{{ ASQATASUN_ADMIN_PASSWORD }}"
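# Example deployment override, with illustrative values -- at minimum the
# database and admin passwords above should never ship as 'changeme':
# ASQATASUN_DATABASE_HOST: 'db.example.com'
# ASQATASUN_DATABASE_PASSWORD: 'a-real-secret'
# ASQATASUN_ADMIN_PASSWORD: 'another-real-secret'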
---
#
# edX Configuration
#
# github: https://github.com/edx/configuration
# wiki: https://openedx.atlassian.net/wiki/display/OpenOPS
# code style: https://openedx.atlassian.net/wiki/display/OpenOPS/Ansible+Code+Conventions
# license: https://github.com/edx/configuration/blob/master/LICENSE.TXT
#
# Tasks for role asqatasun
#
# Overview:
#
# Install Asqatasun, an open-source web site analyzer
# used for web accessibility (a11y) and Search Engine Optimization (SEO)
#
# ansible-playbook -i 'asqatasun.example.com,' ./asqatasun.yml -e@/ansible/vars/deployment.yml -e@/ansible/vars/env-deployment.yml
#
- name: Set Postfix options
debconf:
name: postfix
question: "{{ item.question }}"
value: "{{ item.value }} "
vtype: "string"
with_items:
- { question: "postfix/mailname", value: " " }
- { question: "postfix/main_mailer_type", value: "Satellite system" }
tags:
- install
- install:configuration
- name: Update locale Setting
locale_gen:
name: "{{ locale }}"
state: present
register: set_locale
tags:
- install
- install:base
- name: Reconfigure locale
command: dpkg-reconfigure locales
when: set_locale.changed
- name: Install the Asqatasun Prerequisites
apt:
name: "{{ item }}"
update_cache: yes
state: installed
with_items: "{{ asqatasun_debian_pkgs }}"
tags:
- install
- install:base
- name: Copy the asqatasun.cnf template to /etc/mysql/conf.d
template:
dest: /etc/mysql/conf.d/asqatasun.cnf
src: etc/mysql/conf.d/asqatasun.cnf.j2
owner: root
group: root
when: "'{{ asqatasun_parameters.db_host }}' == 'localhost'"
register: my_cnf
tags:
- install
- install:configuration
- name: Restart MySQL
service:
name: mysql
state: restarted
when: my_cnf.changed
- name: Create a soft link for tomcat jar and mysql connector
file:
dest: "{{ item.dest }}"
src: "{{ item.src }}"
state: link
with_items:
- { src: '/usr/share/java/spring3-instrument-tomcat.jar', dest: '/usr/share/tomcat7/lib/spring3-instrument-tomcat.jar' }
- { src: '/usr/share/java/mysql-connector-java.jar', dest: '/usr/share/tomcat7/lib/mysql-connector-java.jar'}
tags:
- install
- install:configuration
- name: Copy the xvfb template to /etc/init.d
template:
dest: /etc/init.d/xvfb
src: etc/init.d/xvfb.j2
owner: root
group: root
mode: "0755"
register: xvfb
tags:
- install
- install:config
- name: Restart xvfb
service:
name: xvfb
pattern: /etc/init.d/xvfb
state: restarted
enabled: yes
when: xvfb.changed
tags:
- install
- install:config
- name: Download the latest ESR Firefox
get_url:
url: "{{ fixfox_esr_link }}"
dest: "/tmp/{{ fixfox_esr_link | basename }}"
tags:
- install
- install:base
- name: Unpack the downloaded Firefox tarball
unarchive:
src: "/tmp/{{ fixfox_esr_link | basename }}"
dest: /opt
copy: no
tags:
- install
- install:base
- name: Download the latest Asqatasun tarball
get_url:
url: "{{ asqatasun_download_link }}"
dest: "/tmp/{{ asqatasun_download_link | basename }}"
tags:
- install
- install:base
- name: Unpack the downloaded Asqatasun tarball
unarchive:
src: "/tmp/{{ asqatasun_download_link | basename }}"
dest: "/tmp/"
copy: no
tags:
- install
- install:base
- name: Create MySQL database for Asqatasun
mysql_db:
name: "{{ asqatasun_parameters.db_name }}"
state: present
encoding: "{{ asqatasun_parameters.db_encoding }}"
collation: "{{ asqatasun_parameters.db_collation }}"
tags:
- migrate
- migrate:db
- name: Create MySQL user for Asqatasun
mysql_user:
name: "{{ asqatasun_parameters.db_user }}"
password: "{{ asqatasun_parameters.db_password }}"
host: "{{ asqatasun_parameters.db_host }}"
priv: "{{ asqatasun_parameters.db_name }}.*:ALL"
state: present
tags:
- migrate
- migrate:db
- name: Check that asqatasun app is running
shell: >
/bin/ps aux | grep -i asqatasun
register: asqatasun_app
changed_when: no
tags:
- install
- install:base
- name: Install Asqatasun
shell: >
/bin/echo "yes" | ./install.sh --database-user "{{ asqatasun_parameters.db_user }}" \
--database-passwd "{{ asqatasun_parameters.db_password }}" \
--database-db "{{ asqatasun_parameters.db_name }}" \
--database-host "{{ asqatasun_parameters.db_host }}" \
--asqatasun-url http://localhost:8080/asqatasun/ \
--tomcat-webapps /var/lib/tomcat7/webapps/ \
--tomcat-user tomcat7 \
--asqa-admin-email "{{ asqatasun_parameters.admin_email }}" \
--asqa-admin-passwd "{{ asqatasun_parameters.admin_passwd }}" \
--firefox-esr-binary-path /opt/firefox-esr/firefox
--display-port ":99"
args:
chdir: "/tmp/{{ asqatasun_version }}.i386"
when: "asqatasun_app.stdout.find('/etc/asqatasun') == -1"
register: asqatasun_install
tags:
- install
- install:base
- name: Restart tomcat7
service:
name: tomcat7
state: restarted
when: asqatasun_install.changed
#!/bin/sh
### BEGIN INIT INFO
# Provides: xvfb
# Required-Start: $remote_fs $syslog
# Required-Stop: $remote_fs $syslog
# Default-Start: 2 3 4 5
# Default-Stop: 0 1 6
# Short-Description: XVFB - Virtual X server display
# Description: XVFB - Virtual X server display
### END INIT INFO
# Author: Matthieu Faure <mfaure@asqatasun.org>
# Do NOT "set -e"
# TODO: improve with help from /etc/init.d/skeleton
RUN_AS_USER=tomcat7
OPTS=":99 -screen 1 1024x768x24 -nolisten tcp"
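# OPTS breaks down as: display :99 (matching the --display-port passed to the
# Asqatasun installer), one virtual screen at 1024x768 with 24-bit colour,
# and -nolisten tcp to refuse remote X connections.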
XVFB_DIR=/usr/bin
PIDFILE=/var/run/xvfb
case $1 in
start)
start-stop-daemon --chuid $RUN_AS_USER -b --start --exec $XVFB_DIR/Xvfb --make-pidfile --pidfile $PIDFILE -- $OPTS &
;;
stop)
start-stop-daemon --stop --user $RUN_AS_USER --pidfile $PIDFILE
rm -f $PIDFILE
;;
restart)
if start-stop-daemon --test --stop --user $RUN_AS_USER --pidfile $PIDFILE >/dev/null; then
$0 stop
fi;
$0 start
;;
*)
echo "Usage: $0 (start|restart|stop)"
exit 1
;;
esac
exit 0
[client]
default-character-set={{ default_character_set }}
[mysql]
default-character-set={{ default_character_set }}
[mysqld]
collation-server = {{ collation_server }}
init-connect={{ "\'" + init_connect + "\'" }}
character-set-server = {{ character_set_server }}
max_allowed_packet = {{ mysql_max_allowed_packet }}
......@@ -102,8 +102,5 @@
file:
path: "{{ item.item }}"
mode: "0644"
when: >
vagrant_home_dir.stat.exists == false and
ansible_distribution in common_debian_variants and
item.stat.exists
with_items: motd_files_exist.results
when: vagrant_home_dir.stat.exists == False and ansible_distribution in common_debian_variants and item.stat.exists
with_items: "{{ motd_files_exist.results }}"
# Install browsermob-proxy, which is used for page performance testing with bok-choy
---
- name: get zip file
get_url: >
url={{ browsermob_proxy_url }}
dest=/var/tmp/browsermob-proxy-{{ browsermob_proxy_version }}.zip
get_url:
url: "{{ browsermob_proxy_url }}"
dest: "/var/tmp/browsermob-proxy-{{ browsermob_proxy_version }}.zip"
register: download_browsermob_proxy
- name: unzip into /var/tmp/
shell: >
unzip /var/tmp/browsermob-proxy-{{ browsermob_proxy_version }}.zip
chdir=/var/tmp
shell: "unzip /var/tmp/browsermob-proxy-{{ browsermob_proxy_version }}.zip"
args:
chdir: "/var/tmp"
when: download_browsermob_proxy.changed
- name: move to /etc/browsermob-proxy/
shell: >
mv /var/tmp/browsermob-proxy-{{ browsermob_proxy_version }} /etc/browsermob-proxy
shell: "mv /var/tmp/browsermob-proxy-{{ browsermob_proxy_version }} /etc/browsermob-proxy"
when: download_browsermob_proxy.changed
- name: change permissions of main script
file: >
path=/etc/browsermob-proxy/bin/browsermob-proxy
mode=0755
file:
path: "/etc/browsermob-proxy/bin/browsermob-proxy"
mode: 0755
when: download_browsermob_proxy.changed
- name: add wrapper script /usr/local/bin/browsermob-proxy
copy: >
src=browsermob-proxy
dest=/usr/local/bin/browsermob-proxy
copy:
src: browsermob-proxy
dest: /usr/local/bin/browsermob-proxy
when: download_browsermob_proxy.changed
- name: change permissions of wrapper script
file: >
path=/usr/local/bin/browsermob-proxy
mode=0755
file:
path: /usr/local/bin/browsermob-proxy
mode: 0755
when: download_browsermob_proxy.changed
......@@ -8,12 +8,12 @@
- name: download browser debian packages from S3
get_url: dest="/tmp/{{ item.name }}" url="{{ item.url }}"
register: download_deb
with_items: browser_s3_deb_pkgs
with_items: "{{ browser_s3_deb_pkgs }}"
- name: install browser debian packages
shell: gdebi -nq /tmp/{{ item.name }}
when: download_deb.changed
with_items: browser_s3_deb_pkgs
with_items: "{{ browser_s3_deb_pkgs }}"
# Because the source location has been deprecated, we need to
# ensure it does not interfere with subsequent apt commands
......@@ -50,15 +50,15 @@
- "chromedriver.stat.mode == '0755'"
- name: download PhantomJS
get_url: >
url={{ phantomjs_url }}
dest=/var/tmp/{{ phantomjs_tarfile }}
get_url:
url: "{{ phantomjs_url }}"
dest: "/var/tmp/{{ phantomjs_tarfile }}"
register: download_phantom_js
- name: unpack the PhantomJS tarfile
shell: >
tar -xjf /var/tmp/{{ phantomjs_tarfile }}
chdir=/var/tmp
shell: "tar -xjf /var/tmp/{{ phantomjs_tarfile }}"
args:
chdir: "/var/tmp"
when: download_phantom_js.changed
- name: move PhantomJS binary to /usr/local
......
......@@ -30,7 +30,7 @@
file:
path: "{{ cassandra_data_dir_prefix }}/{{ item }}"
state: directory
with_items: cassandra_data_dirs
with_items: "{{ cassandra_data_dirs }}"
- name: Mount ephemeral disks
mount:
......@@ -49,7 +49,7 @@
path: "{{ cassandra_data_dir_prefix }}/{{ item }}"
owner: "{{ cassandra_user }}"
group: "{{ cassandra_group }}"
with_items: cassandra_data_dirs
with_items: "{{ cassandra_data_dirs }}"
- name: Add the datastax repository apt-key
apt_key:
......
......@@ -3,10 +3,12 @@
template:
src: "{{ item.src }}"
dest: "{{ certs_app_dir }}/{{ item.dest }}"
owner: "{{ certs_user }}"
group: "{{ common_web_user }}"
mode: "0640"
with_items:
- { src: 'certs.env.json.j2', dest: 'env.json' }
- { src: 'certs.auth.json.j2', dest: 'auth.json' }
become_user: "{{ certs_user }}"
- name: Writing supervisor script for certificates
template:
......@@ -44,7 +46,7 @@
when: CERTS_GIT_IDENTITY != "none"
- name: "Checkout certificates repo into {{ certs_code_dir }}"
git_2_0_1:
git:
dest: "{{ certs_code_dir }}"
repo: "{{ CERTS_REPO }}"
version: "{{ certs_version }}"
......@@ -56,7 +58,7 @@
when: CERTS_GIT_IDENTITY != "none"
- name: Checkout certificates repo into {{ certs_code_dir }}
git_2_0_1:
git:
dest: "{{ certs_code_dir }}"
repo: "{{ CERTS_REPO }}"
version: "{{ certs_version }}"
......
......@@ -4,3 +4,4 @@
# role depends. This is to allow sharing vars without creating
# side-effects. Any vars required by this role should be added to
# common_vars/defaults/main.yml
#
......@@ -3,7 +3,7 @@
fail:
msg: "Configuration Sources Checking (COMMON_EXTRA_CONFIGURATION_SOURCES_CHECKING) is enabled, you must define {{ item }}"
when: COMMON_EXTRA_CONFIGURATION_SOURCES_CHECKING and ({{ item }} is not defined or {{ item }} != True)
with_items: COMMON_EXTRA_CONFIGURATION_SOURCES
with_items: "{{ COMMON_EXTRA_CONFIGURATION_SOURCES }}"
tags:
- "install"
- "install:configuration"
......
......@@ -230,7 +230,6 @@ credentials_log_dir: "{{ COMMON_LOG_DIR }}/{{ credentials_service_name }}"
credentials_requirements_base: "{{ credentials_code_dir }}/requirements"
credentials_requirements:
- production.txt
- optional.txt
#
# OS packages
......
......@@ -10,13 +10,13 @@
#
#
# Tasks for role credentials
#
#
# Overview:
#
#
#
# Dependencies:
#
#
#
# Example play:
#
#
......@@ -43,9 +43,9 @@
- install:app-requirements
- name: create nodeenv
shell: >
creates={{ credentials_nodeenv_dir }}
{{ credentials_venv_dir }}/bin/nodeenv {{ credentials_nodeenv_dir }} --prebuilt
shell: "{{ credentials_venv_dir }}/bin/nodeenv {{ credentials_nodeenv_dir }} --prebuilt"
args:
creates: "{{ credentials_nodeenv_dir }}"
become_user: "{{ credentials_user }}"
tags:
- install
......@@ -74,9 +74,12 @@
# var should have more permissive permissions than the rest
- name: create credentials var dirs
file: >
path="{{ item }}" state=directory mode=0775
owner="{{ credentials_user }}" group="{{ common_web_group }}"
file:
path: "{{ item }}"
state: directory
mode: 0775
owner: "{{ credentials_user }}"
group: "{{ common_web_group }}"
with_items:
- "{{ CREDENTIALS_MEDIA_ROOT }}"
tags:
......@@ -180,7 +183,7 @@
- assets:gather
- name: restart the application
supervisorctl:
supervisorctl:
state: restarted
supervisorctl_path: "{{ supervisor_ctl }}"
config: "{{ supervisor_cfg }}"
......@@ -192,20 +195,24 @@
- manage:start
- name: Copying nginx configs for credentials
template: >
src=edx/app/nginx/sites-available/credentials.j2
dest={{ nginx_sites_available_dir }}/credentials
owner=root group={{ common_web_user }} mode=0640
template:
src: edx/app/nginx/sites-available/credentials.j2
dest: "{{ nginx_sites_available_dir }}/credentials"
owner: root
group: "{{ common_web_user }}"
mode: 0640
notify: reload nginx
tags:
- install
- install:vhosts
- name: Creating nginx config links for credentials
file: >
src={{ nginx_sites_available_dir }}/credentials
dest={{ nginx_sites_enabled_dir }}/credentials
state=link owner=root group=root
file:
src: "{{ nginx_sites_available_dir }}/credentials"
dest: "{{ nginx_sites_enabled_dir }}/credentials"
state: link
owner: root
group: root
notify: reload nginx
tags:
- install
......
---
DATADOG_API_KEY: "SPECIFY_KEY_HERE"
datadog_agent_version: '1:5.1.1-546'
datadog_agent_version: '1:5.10.1-1'
datadog_apt_key: "0x226AE980C7A7DA52"
datadog_debian_pkgs:
......
---
- name: check out the demo course
git_2_0_1: >
dest={{ demo_code_dir }} repo={{ demo_repo }} version={{ demo_version }}
accept_hostkey=yes
git:
dest: "{{ demo_code_dir }}"
repo: "{{ demo_repo }}"
version: "{{ demo_version }}"
accept_hostkey: yes
become_user: "{{ demo_edxapp_user }}"
register: demo_checkout
- name: import demo course
shell: >
{{ demo_edxapp_venv_bin }}/python ./manage.py cms --settings=aws import {{ demo_edxapp_course_data_dir }} {{ demo_code_dir }}
chdir={{ demo_edxapp_code_dir }}
shell: "{{ demo_edxapp_venv_bin }}/python ./manage.py cms --settings=aws import {{ demo_edxapp_course_data_dir }} {{ demo_code_dir }}"
args:
chdir: "{{ demo_edxapp_code_dir }}"
become_user: "{{ common_web_user }}"
when: demo_checkout.changed
- name: create some test users
shell: >
{{ demo_edxapp_venv_bin }}/python ./manage.py lms --settings=aws --service-variant lms manage_user {{ item.username}} {{ item.email }} --initial-password-hash {{ item.hashed_password | quote }}
chdir={{ demo_edxapp_code_dir }}
shell: "{{ demo_edxapp_venv_bin }}/python ./manage.py lms --settings=aws --service-variant lms manage_user {{ item.username}} {{ item.email }} --initial-password-hash {{ item.hashed_password | quote }}"
args:
chdir: "{{ demo_edxapp_code_dir }}"
become_user: "{{ common_web_user }}"
with_items: demo_test_users
with_items: "{{ demo_test_users }}"
when: demo_checkout.changed
- name: create staff user
shell: >
{{ demo_edxapp_venv_bin }}/python ./manage.py lms --settings=aws --service-variant lms manage_user staff staff@example.com --initial-password-hash {{ demo_hashed_password | quote }} --staff
chdir={{ demo_edxapp_code_dir }}
shell: "{{ demo_edxapp_venv_bin }}/python ./manage.py lms --settings=aws --service-variant lms manage_user staff staff@example.com --initial-password-hash {{ demo_hashed_password | quote }} --staff"
args:
chdir: "{{ demo_edxapp_code_dir }}"
become_user: "{{ common_web_user }}"
when:
- demo_checkout.changed
- DEMO_CREATE_STAFF_USER
- name: enroll test users in the demo course
shell: >
{{ demo_edxapp_venv_bin }}/python ./manage.py lms --settings=aws --service-variant lms enroll_user_in_course -e {{ item.email }} -c {{ demo_course_id }}
chdir={{ demo_edxapp_code_dir }}
shell: "{{ demo_edxapp_venv_bin }}/python ./manage.py lms --settings=aws --service-variant lms enroll_user_in_course -e {{ item.email }} -c {{ demo_course_id }}"
args:
chdir: "{{ demo_edxapp_code_dir }}"
become_user: "{{ common_web_user }}"
with_items:
- "{{ demo_test_users }}"
......@@ -43,15 +45,15 @@
- name: add test users to the certificate whitelist
shell: >
{{ demo_edxapp_venv_bin }}/python ./manage.py lms --settings=aws --service-variant lms cert_whitelist -a {{ item.email }} -c {{ demo_course_id }}
chdir={{ demo_edxapp_code_dir }}
with_items: demo_test_users
shell: "{{ demo_edxapp_venv_bin }}/python ./manage.py lms --settings=aws --service-variant lms cert_whitelist -a {{ item.email }} -c {{ demo_course_id }}"
args:
chdir: "{{ demo_edxapp_code_dir }}"
with_items: "{{ demo_test_users }}"
when: demo_checkout.changed
- name: seed the forums for the demo course
shell: >
{{ demo_edxapp_venv_bin }}/python ./manage.py lms --settings=aws seed_permissions_roles {{ demo_course_id }}
chdir={{ demo_edxapp_code_dir }}
with_items: demo_test_users
shell: "{{ demo_edxapp_venv_bin }}/python ./manage.py lms --settings=aws seed_permissions_roles {{ demo_course_id }}"
args:
chdir: "{{ demo_edxapp_code_dir }}"
with_items: "{{ demo_test_users }}"
when: demo_checkout.changed
......@@ -31,8 +31,10 @@
# - demo
- name: create demo app and data dirs
file: >
path="{{ demo_app_dir }}" state=directory
owner="{{ demo_edxapp_user }}" group="{{ common_web_group }}"
file:
path: "{{ demo_app_dir }}"
state: directory
owner: "{{ demo_edxapp_user }}"
group: "{{ common_web_group }}"
- include: deploy.yml tags=deploy
......@@ -77,9 +77,9 @@
- devstack:install
- name: create nodeenv
shell: >
creates={{ discovery_nodeenv_dir }}
{{ discovery_venv_dir }}/bin/nodeenv {{ discovery_nodeenv_dir }} --node={{ discovery_node_version }} --prebuilt
shell: "{{ discovery_venv_dir }}/bin/nodeenv {{ discovery_nodeenv_dir }} --node={{ discovery_node_version }} --prebuilt"
args:
creates: "{{ discovery_nodeenv_dir }}"
become_user: "{{ discovery_user }}"
tags:
- install
......@@ -94,9 +94,9 @@
- install:app-requirements
- name: install bower dependencies
shell: >
chdir={{ discovery_code_dir }}
. {{ discovery_nodeenv_bin }}/activate && {{ discovery_node_bin }}/bower install --production --config.interactive=false
shell: ". {{ discovery_nodeenv_bin }}/activate && {{ discovery_node_bin }}/bower install --production --config.interactive=false"
args:
chdir: "{{ discovery_code_dir }}"
become_user: "{{ discovery_user }}"
tags:
- install
......
......@@ -7,15 +7,28 @@ COMMAND=$1
case $COMMAND in
start)
{% set discovery_venv_bin = discovery_home + "/venvs/" + discovery_service_name + "/bin" %}
{% set discovery_venv_bin = discovery_venv_dir + "/bin" %}
{{ supervisor_venv_bin }}/supervisord --configuration {{ supervisor_cfg }}
# Needed to run bower as root. See explanation around 'discovery_user=root'
echo '{ "allow_root": true }' > /root/.bowerrc
cd /edx/app/edx_ansible/edx_ansible/docker/plays
ansible-playbook discovery.yml -c local -i '127.0.0.1,' \
-t 'install:app-requirements,assets:gather,devstack,migrate,manage:start' \
/edx/app/edx_ansible/venvs/edx_ansible/bin/ansible-playbook discovery.yml -c local -i '127.0.0.1,' \
-t 'install:app-requirements,assets:gather,devstack,migrate' \
--extra-vars="migrate_db=yes" \
--extra-vars="@/ansible_overrides.yml"
--extra-vars="@/ansible_overrides.yml" \
--extra-vars="discovery_user=root" # Needed when sharing the volume with the host machine because node/bower drops
# everything in the code directory by default. So we get issues with permissions
# on folders owned by the developer.
# Need to start supervisord and nginx manually because systemd is hard to run on docker
# http://developers.redhat.com/blog/2014/05/05/running-systemd-within-docker-container/
# Both daemonize by default
nginx
/edx/app/supervisor/venvs/supervisor/bin/supervisord --configuration /edx/app/supervisor/supervisord.conf
# Docker requires an active foreground task. Tail the logs to appease Docker and
# provide useful output for development.
......
cache_valid_time: 3600
docker_tools_deps_deb_pkgs:
- apt-transport-https
- ca-certificates
- python-pip
docker_apt_keyserver: "hkp://ha.pool.sks-keyservers.net:80"
docker_apt_key_id: "58118E89F3A912897C070ADBF76221572C52609D"
docker_repo: "deb https://apt.dockerproject.org/repo ubuntu-xenial main"
docker_group: "docker"
docker_users: []
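# Example override (illustrative user name): add the deploy user to the
# docker group so it can talk to the docker daemon without sudo
# docker_users:
#   - ubuntu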
# Install docker-engine and docker-compose
# Add users to docker group
---
- name: add docker group
group:
name: "{{ docker_group }}"
tags:
- install
- install:base
- name: add users to docker group
user:
name: "{{ item }}"
groups: "{{ docker_group }}"
append: yes
with_items: "{{ docker_users }}"
tags:
- install
- install:base
- name: install package dependencies
apt:
name: "{{ docker_tools_deps_deb_pkgs }}"
update_cache: yes
cache_valid_time: "{{ cache_valid_time }}"
tags:
- install
- install:system-requirements
- name: add docker apt key
apt_key:
keyserver: "{{ docker_apt_keyserver }}"
id: "{{ docker_apt_key_id }}"
tags:
- install
- install:configuration
- name: add docker repo
apt_repository:
repo: "{{ docker_repo }}"
tags:
- install
- install:configuration
- name: install docker-engine
apt:
name: "docker-engine"
update_cache: yes
cache_valid_time: "{{ cache_valid_time }}"
tags:
- install
- install:system-requirements
- name: start docker service
service:
name: docker
enabled: yes
state: started
tags:
- install
- install:configuration
- name: install docker-compose
pip:
name: "docker-compose"
tags:
- install
- install:system-requirements
......@@ -21,16 +21,20 @@ ECOMMERCE_NGINX_PORT: "18130"
ECOMMERCE_SSL_NGINX_PORT: 48130
ECOMMERCE_DEFAULT_DB_NAME: 'ecommerce'
ECOMMERCE_DATABASE_USER: "ecomm001"
ECOMMERCE_DATABASE_PASSWORD: "password"
ECOMMERCE_DATABASE_HOST: "localhost"
ECOMMERCE_DATABASE_PORT: 3306
ECOMMERCE_DATABASES:
# rw user
default:
ENGINE: 'django.db.backends.mysql'
NAME: '{{ ECOMMERCE_DEFAULT_DB_NAME }}'
USER: 'ecomm001'
PASSWORD: 'password'
HOST: 'localhost'
PORT: '3306'
USER: '{{ ECOMMERCE_DATABASE_USER }}'
PASSWORD: '{{ ECOMMERCE_DATABASE_PASSWORD }}'
HOST: '{{ ECOMMERCE_DATABASE_HOST }}'
PORT: '{{ ECOMMERCE_DATABASE_PORT }}'
ATOMIC_REQUESTS: true
CONN_MAX_AGE: 60
......@@ -51,7 +55,7 @@ ECOMMERCE_JWT_DECODE_HANDLER: 'ecommerce.extensions.api.handlers.jwt_decode_hand
ECOMMERCE_JWT_ISSUERS:
- '{{ ECOMMERCE_LMS_URL_ROOT }}/oauth2'
- 'ecommerce_worker' # Must match the value of JWT_ISSUER configured for the ecommerce worker.
ECOMMERCE_JWT_LEEWAY: 1
# NOTE: We have an array of keys to allow for support of multiple when, for example,
# we change keys. This will ensure we continue to operate with JWTs issued signed with the old key
# while migrating to the new key.
......@@ -149,7 +153,7 @@ ECOMMERCE_SERVICE_CONFIG:
JWT_SECRET_KEY: '{{ ECOMMERCE_JWT_SECRET_KEY }}'
JWT_ALGORITHM: '{{ ECOMMERCE_JWT_ALGORITHM }}'
JWT_VERIFY_EXPIRATION: '{{ ECOMMERCE_JWT_VERIFY_EXPIRATION }}'
JWT_LEEWAY: 1
JWT_LEEWAY: '{{ ECOMMERCE_JWT_LEEWAY }}'
JWT_DECODE_HANDLER: '{{ ECOMMERCE_JWT_DECODE_HANDLER }}'
JWT_ISSUERS: '{{ ECOMMERCE_JWT_ISSUERS }}'
JWT_SECRET_KEYS: '{{ ECOMMERCE_JWT_SECRET_KEYS }}'
......
......@@ -84,11 +84,9 @@
- migrate:db
- name: Populate countries
shell: >
chdir={{ ecommerce_code_dir }}
DB_MIGRATION_USER={{ COMMON_MYSQL_MIGRATE_USER }}
DB_MIGRATION_PASS={{ COMMON_MYSQL_MIGRATE_PASS }}
{{ ecommerce_venv_dir }}/bin/python ./manage.py oscar_populate_countries
shell: "DB_MIGRATION_USER={{ COMMON_MYSQL_MIGRATE_USER }} DB_MIGRATION_PASS={{ COMMON_MYSQL_MIGRATE_PASS }} {{ ecommerce_venv_dir }}/bin/python ./manage.py oscar_populate_countries"
args:
chdir: "{{ ecommerce_code_dir }}"
become_user: "{{ ecommerce_user }}"
environment: "{{ ecommerce_environment }}"
when: migrate_db is defined and migrate_db|lower == "yes"
......
......@@ -16,7 +16,7 @@
virtualenv: '{{ ecommerce_worker_home }}/venvs/{{ ecommerce_worker_service_name }}'
state: present
become_user: '{{ ecommerce_worker_user }}'
with_items: ecommerce_worker_requirements
with_items: "{{ ecommerce_worker_requirements }}"
- name: write out the supervisor wrapper
template:
......
---
- name: Git checkout edx_ansible repo into edx_ansible_code_dir
git_2_0_1:
git:
dest: "{{ edx_ansible_code_dir }}"
repo: "{{ edx_ansible_source_repo }}"
version: "{{ configuration_version }}"
......
......@@ -51,7 +51,7 @@
state: present
extra_args: "--exists-action w"
become_user: "{{ edx_notes_api_user }}"
with_items: edx_notes_api_requirements
with_items: "{{ edx_notes_api_requirements }}"
- name: Migrate
shell: >
......
......@@ -16,6 +16,7 @@
#
edx_service_name: edx_service
edx_service_repos: []
#
# OS packages
#
......
......@@ -99,6 +99,7 @@
tags:
- install
- install:configuration
- install:app-configuration
- name: Install a bunch of system packages on which edx_service relies
apt:
......@@ -126,18 +127,19 @@
action: ec2_facts
tags:
- to-remove
#old syntax - should be fixed
- name: Tag instance
ec2_tag_local: resource={{ ansible_ec2_instance_id }} region={{ ansible_ec2_placement_region }}
ec2_tag_local:
args:
resource: "{{ ansible_ec2_instance_id }}"
region: "{{ ansible_ec2_placement_region }}"
tags:
- Name: version:{{ edx_service_name }}
- Name: "version:{{ edx_service_name }}"
Value: "{{ item.0.DOMAIN }}/{{ item.0.PATH }}/{{ item.0.REPO }} {{ item.1.after |truncate(7,True,'') }}"
when: item.1.after is defined and COMMON_TAG_EC2_INSTANCE and edx_service_repos is defined
with_together:
- edx_service_repos
- code_checkout.results
- "{{ edx_service_repos }}"
- "{{ code_checkout.results }}"
tags:
- to-remove
......
---
#
# edX Configuration
#
# github: https://github.com/edx/configuration
# wiki: https://openedx.atlassian.net/wiki/display/OpenOPS
# code style: https://openedx.atlassian.net/wiki/display/OpenOPS/Ansible+Code+Conventions
# license: https://github.com/edx/configuration/blob/master/LICENSE.TXT
#
##
# Defaults for role edx_service_rds
#
#
# vars are namespaced with the module name.
#
edx_service_rds_role_name: edx_service_rds
E_D_C: "prod-sample-app"
EDX_SERVICE_RDS_INSTANCE_SIZE: 10
EDX_SERVICE_RDS_INSTANCE_TYPE: "db.m1.small"
EDX_SERVICE_RDS_ROOT_USER: "root"
# no unicode (e.g., c cedilla) in passwords
EDX_SERVICE_RDS_ROOT_PASSWORD: "plus_ca_change"
EDX_SERVICE_RDS_ENGINE: "MySQL"
EDX_SERVICE_RDS_ENGINE_VERSION: "5.6.22"
EDX_SERVICE_RDS_PARAM_GROUP_ENGINE: "mysql5.6"
# will vary depending upon engine, examples assume
# MySQL 5.6
EDX_SERVICE_RDS_PARAM_GROUP_PARAMS:
character_set_client: "utf8"
character_set_connection: "utf8"
character_set_database: "utf8"
character_set_filesystem: "utf8"
character_set_results: "utf8"
character_set_server: "utf8"
collation_connection: "utf8_unicode_ci"
collation_server: "utf8_unicode_ci"
EDX_SERVICE_RDS_MULTI_AZ: No
EDX_SERVICE_RDS_MAINT_WINDOW: "Mon:00:00-Mon:01:15"
EDX_SERVICE_RDS_BACKUP_DAYS: 30
EDX_SERVICE_RDS_BACKUP_WINDOW: "02:00-03:00"
EDX_SERVICE_RDS_SUBNET_1_AZ: "us-east-1c"
EDX_SERVICE_RDS_SUBNET_1_CIDR: "{{ vpc_class_b }}.50.0/24"
EDX_SERVICE_RDS_SUBNET_2_AZ: "us-east-1d"
EDX_SERVICE_RDS_SUBNET_2_CIDR: "{{ vpc_class_b }}.51.0/24"
# The defaults are permissive, override
EDX_SERVICE_RDS_SECURITY_GROUP:
name: "{{ e_d_c }}-rds-sg"
description: "RDS ingress and egress."
rules:
- proto: "tcp"
from_port: "3306"
to_port: "3306"
cidr_ip: "0.0.0.0/0"
rules_egress:
- proto: "tcp"
from_port: "3306"
to_port: "3306"
cidr_ip: "0.0.0.0/0"
# The defaults are permissive, override
EDX_SERVICE_RDS_VPC_DB_ACL:
name: "{{ e_d_c }}-db"
rules:
- number: "100"
type: "ingress"
protocol: "tcp"
from_port: 3306
to_port: 3306
cidr_block: "0.0.0.0/0"
rule_action: "allow"
- number: "100"
type: "egress"
protocol: "all"
from_port: 0
to_port: 65535
cidr_block: "0.0.0.0/0"
rule_action: "allow"
EDX_SERVICE_RDS_VPC_DB_ROUTE_TABLE:
- cidr: "{{ vpc_class_b }}.0.0/16"
gateway: 'local'
# typically override the all-caps vars, but it may
# be convenient to override the entire structure
# if you are spanning more than two subnets
edx_service_rds_vpc_db_subnets:
- name: "{{ E_D_C }}-db-{{ EDX_SERVICE_RDS_SUBNET_1_AZ }}"
cidr: "{{ EDX_SERVICE_RDS_SUBNET_1_CIDR }}"
az: "{{ EDX_SERVICE_RDS_SUBNET_1_AZ }}"
- name: "{{ E_D_C }}-db-{{ EDX_SERVICE_RDS_SUBNET_2_AZ }}"
cidr: "{{ EDX_SERVICE_RDS_SUBNET_2_CIDR }}"
az: "{{ EDX_SERVICE_RDS_SUBNET_2_AZ }}"
edx_service_rds_state: "present"
edx_service_rds_db:
state: "{{ edx_service_rds_state }}"
name: "{{ E_D_C }}-primary"
size: "{{ EDX_SERVICE_RDS_INSTANCE_SIZE }}"
instance_type: "{{ EDX_SERVICE_RDS_INSTANCE_TYPE }}"
root_user: "{{ EDX_SERVICE_RDS_ROOT_USER }}"
root_password: "{{ EDX_SERVICE_RDS_ROOT_PASSWORD }}"
engine: "{{ EDX_SERVICE_RDS_ENGINE }}"
engine_version: "{{ EDX_SERVICE_RDS_ENGINE_VERSION }}"
multi_az: "{{ EDX_SERVICE_RDS_MULTI_AZ }}"
maint_window: "{{ EDX_SERVICE_RDS_MAINT_WINDOW }}"
backup_days: "{{ EDX_SERVICE_RDS_BACKUP_DAYS }}"
backup_window: "{{ EDX_SERVICE_RDS_BACKUP_WINDOW }}"
param_group:
name: "{{ E_D_C}}"
engine: "{{ EDX_SERVICE_RDS_PARAM_GROUP_ENGINE }}"
params: "{{ EDX_SERVICE_RDS_PARAM_GROUP_PARAMS }}"
#
# OS packages
#
edx_service_rds_debian_pkgs: []
edx_service_rds_redhat_pkgs: []
---
#
# edX Configuration
#
# github: https://github.com/edx/configuration
# wiki: https://openedx.atlassian.net/wiki/display/OpenOPS
# code style: https://openedx.atlassian.net/wiki/display/OpenOPS/Ansible+Code+Conventions
# license: https://github.com/edx/configuration/blob/master/LICENSE.TXT
#
#
#
# Tasks for role edx_service_rds
#
# Overview:
#
# Creates a VPC RDS instance and related network infrastructure, e.g.,
# subnets, subnet groups, and ACLs, as well as an instance-specific
# parameter group.
#
# NB: When using a boto profile other than the default, you will need
# to export AWS_PROFILE because some tasks do not properly process
# the profile argument.
#
# NB: You should currently not use this play for deleting databases, as
# the final snapshot functionality doesn't work properly in the Ansible
# module. First, it defaults to not taking a final snapshot, and
# when you specify one, it throws a key error.
#
# Dependencies:
#
# Assumes a working VPC, ideally created via the edx_vpc role, as that
# role produces configuration output that this role requires,
# such as the VPC, route table, and subnet IDs.
#
# Example play:
#
# export AWS_PROFILE=sandbox
# ansible-playbook -i 'localhost,' edx_service_rds.yml -e@/path/to/secure-repo/cloud_migrations/vpcs/vpc-file.yml -e@/path/to/secure-repo/cloud_migrations/dbs/e-d-c-rds.yml
#
# TODO:
# - handle db deletes and updates
# - handle DNS updates, consider that a different profile may be required for this.
#
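# A minimal sketch of the vars file passed via -e@ above; the exact
# contents live in the secure repo, so these names and values are
# illustrative assumptions only:
#
# profile: sandbox
# aws_region: us-east-1
# vpc_id: vpc-0123456789abcdef0
# vpc_class_b: "10.2"
# e_d_c: "stage-sample-app"
# env: "stage"
# deployment: "sample"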
- name: create database route table
ec2_rt:
profile: "{{ profile }}"
vpc_id: "{{ vpc_id }}"
region: "{{ aws_region }}"
state: "{{ edx_service_rds_state }}"
name: "{{ e_d_c }}-db"
routes: "{{ EDX_SERVICE_RDS_VPC_DB_ROUTE_TABLE }}"
register: created_db_rt
- name: create db network acl
ec2_acl:
profile: "{{ profile }}"
name: "{{ EDX_SERVICE_RDS_VPC_DB_ACL.name }}"
vpc_id: "{{ vpc_id }}"
state: "{{ edx_service_rds_state }}"
region: "{{ aws_region }}"
rules: "{{ EDX_SERVICE_RDS_VPC_DB_ACL.rules }}"
register: created_db_acl
- name: create db subnets
ec2_subnet:
profile: "{{ profile }}"
vpc_id: "{{ vpc_id }}"
region: "{{ aws_region }}"
state: "{{ edx_service_rds_state }}"
name: "{{ item.name }}"
cidr: "{{ item.cidr }}"
az: "{{ item.az }}"
route_table_id: "{{ created_db_rt.id }}"
network_acl_id: "{{ created_db_acl.id }}"
with_items: "{{ edx_service_rds_vpc_db_subnets }}"
register: created_db_subnets
- name: Apply function to subnet data
util_map:
function: 'zip_to_list'
input: "{{ created_db_subnets.results }}"
args:
- "subnet_id"
register: subnet_data
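# A sketch of what zip_to_list produces here (assuming each result in
# created_db_subnets.results carries a subnet_id):
#   [{subnet_id: subnet-aaa, ...}, {subnet_id: subnet-bbb, ...}]
# becomes ['subnet-aaa', 'subnet-bbb'], which feeds the subnet group below.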
- name: create RDS subnet group
rds_subnet_group:
state: "{{ edx_service_rds_state }}"
profile: "{{ profile }}"
region: "{{ aws_region }}"
name: "{{ e_d_c }}"
description: "{{ e_d_c }}"
subnets: "{{ subnet_data.function_output }}"
- name: create RDS security group
ec2_group:
profile: "{{ profile }}"
vpc_id: "{{ vpc_id }}"
state: "{{ edx_service_rds_state }}"
region: "{{ aws_region }}"
name: "{{ EDX_SERVICE_RDS_SECURITY_GROUP.name }}"
rules: "{{ EDX_SERVICE_RDS_SECURITY_GROUP.rules }}"
description: "{{ EDX_SERVICE_RDS_SECURITY_GROUP.description }}"
rules_egress: "{{ EDX_SERVICE_RDS_SECURITY_GROUP.rules_egress }}"
register: created_rds_security_group
- name: create instance parameter group
rds_param_group:
state: "{{ edx_service_rds_state }}"
region: "{{ aws_region }}"
name: "{{ edx_service_rds_db.param_group.name }}"
description: "{{ edx_service_rds_db.param_group.name }}"
engine: "{{ edx_service_rds_db.param_group.engine }}"
params: "{{ edx_service_rds_db.param_group.params }}"
register: created_param_group
#
# Create the database
#
- name: Create service database
rds:
command: "create"
region: "{{ aws_region }}"
instance_name: "{{ edx_service_rds_db.name }}"
db_engine: "{{ edx_service_rds_db.engine }}"
engine_version: "{{ edx_service_rds_db.engine_version }}"
size: "{{ edx_service_rds_db.size }}"
instance_type: "{{ edx_service_rds_db.instance_type }}"
username: "{{ edx_service_rds_db.root_user }}"
password: "{{ edx_service_rds_db.root_password }}"
subnet: "{{ e_d_c }}"
vpc_security_groups: "{{ created_rds_security_group.group_id }}"
multi_zone: "{{ edx_service_rds_db.multi_az }}"
maint_window: "{{ edx_service_rds_db.maint_window }}"
backup_window: "{{ edx_service_rds_db.backup_window }}"
backup_retention: "{{ edx_service_rds_db.backup_days }}"
parameter_group: "{{ edx_service_rds_db.param_group.name }}"
tags:
Environment: "{{ env }}"
Application: "{{ deployment }}"
when: edx_service_rds_db.state == 'present'
register: created_db
#
# Delete the database. The module needs debugging before this
# fully works.
#
- name: Delete service database
rds:
command: "delete"
region: "{{ aws_region }}"
instance_name: "{{ edx_service_rds_db.name }}"
# bug in the module related to final snapshots
#snapshot: "{{ edx_service_rds_db.name }}-final-{{ ansible_date_time.epoch }}"
snapshot: "red-blue"
when: edx_service_rds_db.state == 'absent'
......@@ -50,7 +50,7 @@
shell: /bin/bash
groups: "{{ themes_group }}"
append: yes
with_items: theme_users
with_items: "{{ theme_users }}"
when: theme_users is defined
- name: update .bashrc to set umask value
......
---
#
# edX Configuration
#
# github: https://github.com/edx/configuration
# wiki: https://openedx.atlassian.net/wiki/display/OpenOPS
# code style: https://openedx.atlassian.net/wiki/display/OpenOPS/Ansible+Code+Conventions
# license: https://github.com/edx/configuration/blob/master/LICENSE.TXT
#
##
# Defaults for role edx_vpc
#
#
# vars are namespaced with the module name.
#
vpc_role_name: vpc
#
# OS packages
#
vpc_debian_pkgs: []
vpc_redhat_pkgs: []
---
#
# edX Configuration
#
# github: https://github.com/edx/configuration
# wiki: https://openedx.atlassian.net/wiki/display/OpenOPS
# code style: https://openedx.atlassian.net/wiki/display/OpenOPS/Ansible+Code+Conventions
# license: https://github.com/edx/configuration/blob/master/LICENSE.TXT
#
#
#
# Tasks for role edx_vpc
#
# Overview:
# This role creates an opinionated VPC for containing a cluster of edX services.
#
# It currently assumes that we will be multi-AZ, with a single NAT, and all
# traffic going over that NAT. A public subnet, and both public and private
# route tables, are created by default and can be used by new services in this
# vpc. The public subnet should house ELBs, and any newly created private subnets
# can use the existing private route table so that private machines
# can reach the internet.
#
#
# Example play:
#
# ansible-playbook -c local -i localhost, edx_vpc.yml -e@/Users/feanil/src/edx-secure/cloud_migrations/vpcs/test.yml
# DO NOT use the subnet or route table sections of this command.
# They will delete any subnets or route tables not defined here, which is
# probably not what you want, since other services were added
# to the vpc whose subnets and route tables are not enumerated here.
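# A minimal sketch of the vars file passed via -e@ above; the real file
# lives in the secure repo, so these names and values are illustrative
# assumptions only:
#
# vpc_aws_profile: sandbox
# vpc_aws_region: us-east-1
# vpc_name: test
# vpc_cidr: "10.2.0.0/16"
# vpc_class_b: "10.2"
# vpc_state: present
# (plus vpc_tags, vpc_keypair, vpc_nat_ami_id, and the subnet and
# route-table structures referenced by the tasks below)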
- name: create a vpc
local_action:
profile: "{{ vpc_aws_profile }}"
module: "ec2_vpc_local"
resource_tags: "{{ vpc_tags }}"
cidr_block: "{{ vpc_cidr }}"
region: "{{ vpc_aws_region }}"
state: "{{ vpc_state }}"
internet_gateway: yes
wait: yes
register: created_vpc
# A default network ACL is created when a vpc is created, so each VPC
# should already have one, but we create one here that allows access to the
# outside world using the internet gateway.
- name: create public network acl
ec2_acl:
profile: "{{ vpc_aws_profile }}"
name: "{{ vpc_public_acl.name }}"
vpc_id: "{{ created_vpc.vpc_id }}"
state: "present"
region: "{{ vpc_aws_region }}"
rules: "{{ vpc_public_acl.rules }}"
register: created_public_acl
- name: create public route table
ec2_rt:
profile: "{{ vpc_aws_profile }}"
vpc_id: "{{ created_vpc.vpc_id }}"
region: "{{ vpc_aws_region }}"
state: "present"
name: "{{ vpc_name }}-public"
routes: "{{ vpc_public_route_table }}"
register: created_public_rt
- name: create public subnets
ec2_subnet:
profile: "{{ vpc_aws_profile }}"
vpc_id: "{{ created_vpc.vpc_id }}"
region: "{{ vpc_aws_region }}"
state: "present"
name: "{{ item.name }}"
cidr: "{{ item.cidr }}"
az: "{{ item.az }}"
route_table_id: "{{ created_public_rt.id }}"
network_acl_id: "{{ created_public_acl.id }}"
with_items: "{{ vpc_public_subnets }}"
register: created_public_subnets
- name: create NAT security group
ec2_group:
profile: "{{ vpc_aws_profile }}"
vpc_id: "{{ created_vpc.vpc_id }}"
state: "present"
region: "{{ vpc_aws_region }}"
name: "{{ nat_security_group.name }}"
rules: "{{ nat_security_group.rules }}"
description: "{{ nat_security_group.description }}"
rules_egress: "{{ nat_security_group.rules_egress }}"
register: created_nat_security_group
- name: check to see if we already have a nat instance
local_action:
module: "ec2_lookup"
region: "{{ vpc_aws_region }}"
tags:
- Name: "{{ vpc_name }}-nat-instance"
register: nat_instance
- name: create nat instance
local_action:
module: "ec2"
state: "present"
wait: yes
source_dest_check: false
region: "{{ vpc_aws_region }}"
profile: "{{ vpc_aws_profile }}"
group_id: "{{ created_nat_security_group.group_id }}"
key_name: "{{ vpc_keypair }}"
vpc_subnet_id: "{{ created_public_subnets.results[0].subnet_id }}"
instance_type: "{{ vpc_nat_instance_type }}"
instance_tags:
Name: "{{ vpc_name }}-nat-instance"
image: "{{ vpc_nat_ami_id }}"
register: new_nat_instance
when: nat_instance.instances|length == 0
# We need to do this instead of registering the output of the above
# command because if the above command gets skipped, the output does
# not contain information about the instance.
- name: lookup the created nat_instance
local_action:
module: "ec2_lookup"
region: "{{ vpc_aws_region }}"
tags:
- Name: "{{ vpc_name }}-nat-instance"
register: nat_instance
- name: assign eip to nat
ec2_eip:
profile: "{{ vpc_aws_profile }}"
region: "{{ vpc_aws_region }}"
instance_id: "{{ nat_instance.instances[0].id }}"
in_vpc: true
reuse_existing_ip_allowed: true
when: new_nat_instance.changed
- name: create private route table
ec2_rt:
profile: "{{ vpc_aws_profile }}"
vpc_id: "{{ created_vpc.vpc_id }}"
region: "{{ vpc_aws_region }}"
state: "present"
name: "{{ vpc_name }}-private"
routes: "{{ vpc_private_route_table }}"
register: created_private_rt
- name: output a vpc_config for using to build services
local_action:
module: template
src: "vpc_config.yml.j2"
dest: "~/{{ e_d }}.yml"
#
# Configuration for the environment-deployment
#
profile: "{{ vpc_aws_profile }}"
vpc_id: "{{ created_vpc.vpc_id }}"
vpc_cidr: "{{ vpc_cidr }}"
vpc_class_b: "{{ vpc_class_b }}"
env: "{{ vpc_environment }}"
deployment: "{{ vpc_deployment }}"
e_d_c: "{{ vpc_environment }}-{{ vpc_deployment }}-{{ '{{' }} cluster {{ '}}' }}"
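# The '{{ '{{' }} cluster {{ '}}' }}' escaping above renders a literal
# "{{ cluster }}" into the generated file, e.g. (illustrative):
#   e_d_c: "stage-edx-{{ cluster }}"
# so consumers of the generated config can substitute cluster later.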
aws_region: "{{ vpc_aws_region }}"
aws_availability_zones:
{% for subnet in vpc_public_subnets %}
- {{ subnet.az }}
{% endfor %}
# Should this be service-specific?
ssl_cert: "{{ vpc_ssl_cert }}"
# used for ELB
public_route_table: "{{ created_public_rt.id }}"
# used for service subnet
private_route_table: "{{ created_private_rt.id }}"
instance_key_name: "{{ vpc_keypair }}"
# subject to change. TODO: provide the correct var for the ENI
nat_device: "{{ nat_instance.instances[0].id }}"
public_subnet_1: "{{ vpc_public_subnets[0].cidr }}"
public_subnet_2: "{{ vpc_public_subnets[1].cidr }}"
# /28 per AZ NEEDED?
# private_subnet_1: "{{ vpc_class_b }}.110.16/28"
# private_subnet_2: "{{ vpc_class_b }}.120.16/28"
elb_subnets:
{% for subnet in created_public_subnets.results %}
- "{{ subnet.subnet_id }}"
{% endfor %}
#
# Do not use vars in policies :(
# Should be specific to the service right?
role_policies: []
# - name: "{{ '{{ ' + 'e_d_c' + '}}' }}-s3-policy"
# document: |
# {
# "Statement":[
# {
# "Effect":"Allow",
# "Action":["s3:*"],
# "Resource":["arn:aws:s3:::edx-stage-edx"]
# }
# ]
# }
# - name: "{{ '{{ ' + 'e_d_c' + '}}' }}-create-instance-tags"
# document: |
# {
# "Statement": [
# {
# "Effect": "Allow",
# "Action": ["ec2:CreateTags"],
# "Resource": ["arn:aws:ec2:us-east-1:xxxxxxxxxxxx:instance/*"]
# }
# ]
# }
# - name: "{{ '{{ ' + 'e_d_c' + '}}' }}-describe-ec2"
# document: |
# {"Statement":[
# {"Resource":"*",
# "Action":["ec2:DescribeInstances","ec2:DescribeTags","ec2:DescribeVolumes"],
# "Effect":"Allow"}]}
......@@ -44,6 +44,7 @@ EDXAPP_AWS_ACCESS_KEY_ID: "None"
EDXAPP_AWS_SECRET_ACCESS_KEY: "None"
EDXAPP_AWS_QUERYSTRING_AUTH: false
EDXAPP_AWS_STORAGE_BUCKET_NAME: "SET-ME-PLEASE (ex. bucket-name)"
EDXAPP_IMPORT_EXPORT_BUCKET: "SET-ME-PLEASE (ex. bucket-name)"
EDXAPP_AWS_S3_CUSTOM_DOMAIN: "SET-ME-PLEASE (ex. bucket-name.s3.amazonaws.com)"
EDXAPP_SWIFT_USERNAME: "None"
EDXAPP_SWIFT_KEY: "None"
......@@ -55,7 +56,6 @@ EDXAPP_SWIFT_REGION_NAME: "None"
EDXAPP_SWIFT_USE_TEMP_URLS: false
EDXAPP_SWIFT_TEMP_URL_KEY: "None"
EDXAPP_SWIFT_TEMP_URL_DURATION: 1800 # seconds
EDXAPP_USE_SWIFT_STORAGE: false
EDXAPP_DEFAULT_FILE_STORAGE: "django.core.files.storage.FileSystemStorage"
EDXAPP_XQUEUE_BASIC_AUTH: [ "{{ COMMON_HTPASSWD_USER }}", "{{ COMMON_HTPASSWD_PASS }}" ]
EDXAPP_XQUEUE_DJANGO_AUTH:
......@@ -134,6 +134,7 @@ EDXAPP_ZENDESK_API_KEY: ""
EDXAPP_CELERY_USER: 'celery'
EDXAPP_CELERY_PASSWORD: 'celery'
EDXAPP_CELERY_BROKER_VHOST: ""
EDXAPP_CELERY_BROKER_USE_SSL: false
EDXAPP_VIDEO_CDN_URLS:
EXAMPLE_COUNTRY_CODE: "http://example.com/edx/video?s3_url="
......@@ -498,8 +499,8 @@ EDXAPP_CELERY_WORKERS:
monitor: False
max_tasks_per_child: 1
EDXAPP_RECALCULATE_GRADES_ROUTING_KEY: 'edx.lms.core.default'
EDXAPP_LMS_CELERY_QUEUES: "{{ edxapp_workers|selectattr('service_variant', 'equalto', 'lms')|map(attribute='queue')|map('regex_replace', '(.*)', 'edx.lms.core.\\\\1')|list }}"
EDXAPP_CMS_CELERY_QUEUES: "{{ edxapp_workers|selectattr('service_variant', 'equalto', 'cms')|map(attribute='queue')|map('regex_replace', '(.*)', 'edx.cms.core.\\\\1')|list }}"
EDXAPP_LMS_CELERY_QUEUES: "{{ edxapp_workers|selectattr('service_variant', 'equalto', 'lms')|map(attribute='queue')|map('regex_replace', '(.*)', 'edx.lms.core.\\1')|list }}"
EDXAPP_CMS_CELERY_QUEUES: "{{ edxapp_workers|selectattr('service_variant', 'equalto', 'cms')|map(attribute='queue')|map('regex_replace', '(.*)', 'edx.cms.core.\\1')|list }}"
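# For example (illustrative values only): with
#   edxapp_workers:
#     - {queue: default, service_variant: lms}
#     - {queue: high, service_variant: lms}
#     - {queue: default, service_variant: cms}
# EDXAPP_LMS_CELERY_QUEUES renders to ['edx.lms.core.default', 'edx.lms.core.high']
# and EDXAPP_CMS_CELERY_QUEUES to ['edx.cms.core.default'].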
EDXAPP_DEFAULT_CACHE_VERSION: "1"
EDXAPP_OAUTH_ENFORCE_SECURE: True
......@@ -639,10 +640,12 @@ edxapp_venvs_dir: "{{ edxapp_app_dir }}/venvs"
edxapp_venv_dir: "{{ edxapp_venvs_dir }}/edxapp"
edxapp_venv_bin: "{{ edxapp_venv_dir }}/bin"
edxapp_nodeenv_dir: "{{ edxapp_app_dir }}/nodeenvs/edxapp"
edxapp_node_bin: "{{ edxapp_nodeenv_dir }}/bin"
edxapp_node_version: "0.10.37"
edxapp_nodeenv_bin: "{{ edxapp_nodeenv_dir }}/bin"
edxapp_node_version: "6.9.2"
# This is where node installs modules, not node itself
edxapp_node_bin: "{{ edxapp_code_dir }}/node_modules/.bin"
edxapp_user: edxapp
edxapp_deploy_path: "{{ edxapp_venv_bin }}:{{ edxapp_code_dir }}/bin:{{ edxapp_node_bin }}:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
edxapp_deploy_path: "{{ edxapp_venv_bin }}:{{ edxapp_code_dir }}/bin:{{ edxapp_node_bin }}:{{ edxapp_nodeenv_bin }}:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
edxapp_staticfile_dir: "{{ edxapp_data_dir }}/staticfiles"
edxapp_media_dir: "{{ edxapp_data_dir }}/media"
edxapp_course_static_dir: "{{ edxapp_data_dir }}/course_static"
......@@ -799,8 +802,6 @@ edxapp_generic_auth_config: &edxapp_generic_auth
generic_cache_config: &default_generic_cache
BACKEND: 'django.core.cache.backends.memcached.MemcachedCache'
KEY_FUNCTION: 'util.memcache.safe_key'
KEY_PREFIX: 'default'
LOCATION: "{{ EDXAPP_MEMCACHE }}"
generic_env_config: &edxapp_generic_env
ECOMMERCE_PUBLIC_URL_ROOT: "{{ EDXAPP_ECOMMERCE_PUBLIC_URL_ROOT }}"
......@@ -821,6 +822,7 @@ generic_env_config: &edxapp_generic_env
ANALYTICS_DATA_URL: "{{ EDXAPP_ANALYTICS_DATA_URL }}"
ANALYTICS_DASHBOARD_URL: '{{ EDXAPP_ANALYTICS_DASHBOARD_URL }}'
CELERY_BROKER_VHOST: "{{ EDXAPP_CELERY_BROKER_VHOST }}"
CELERY_BROKER_USE_SSL: "{{ EDXAPP_CELERY_BROKER_USE_SSL }}"
PAYMENT_SUPPORT_EMAIL: "{{ EDXAPP_PAYMENT_SUPPORT_EMAIL }}"
ZENDESK_URL: "{{ EDXAPP_ZENDESK_URL }}"
COURSES_WITH_UNSAFE_CODE: "{{ EDXAPP_COURSES_WITH_UNSAFE_CODE }}"
......@@ -884,23 +886,29 @@ generic_env_config: &edxapp_generic_env
default:
<<: *default_generic_cache
KEY_PREFIX: 'default'
LOCATION: "{{ EDXAPP_MEMCACHE }}"
VERSION: "{{ EDXAPP_DEFAULT_CACHE_VERSION }}"
general:
<<: *default_generic_cache
KEY_PREFIX: 'general'
LOCATION: "{{ EDXAPP_MEMCACHE }}"
mongo_metadata_inheritance:
<<: *default_generic_cache
KEY_PREFIX: 'mongo_metadata_inheritance'
TIMEOUT: 300
LOCATION: "{{ EDXAPP_MEMCACHE }}"
staticfiles:
<<: *default_generic_cache
KEY_PREFIX: "{{ ansible_hostname|default('staticfiles') }}_general"
LOCATION: "{{ EDXAPP_MEMCACHE }}"
configuration:
<<: *default_generic_cache
KEY_PREFIX: "{{ ansible_hostname|default('configuration') }}"
LOCATION: "{{ EDXAPP_MEMCACHE }}"
celery:
<<: *default_generic_cache
KEY_PREFIX: 'celery'
LOCATION: "{{ EDXAPP_MEMCACHE }}"
TIMEOUT: "7200"
course_structure_cache:
<<: *default_generic_cache
......@@ -1029,6 +1037,7 @@ lms_env_config:
DOC_LINK_BASE_URL: "{{ EDXAPP_LMS_DOC_LINK_BASE_URL }}"
RECALCULATE_GRADES_ROUTING_KEY: "{{ EDXAPP_RECALCULATE_GRADES_ROUTING_KEY }}"
CELERY_QUEUES: "{{ EDXAPP_LMS_CELERY_QUEUES }}"
ALTERNATE_WORKER_QUEUES: "cms"
cms_auth_config:
<<: *edxapp_generic_auth
......@@ -1060,6 +1069,8 @@ cms_env_config:
GIT_REPO_EXPORT_DIR: "{{ EDXAPP_GIT_REPO_EXPORT_DIR }}"
DOC_LINK_BASE_URL: "{{ EDXAPP_CMS_DOC_LINK_BASE_URL }}"
CELERY_QUEUES: "{{ EDXAPP_CMS_CELERY_QUEUES }}"
ALTERNATE_WORKER_QUEUES: "lms"
COURSE_IMPORT_EXPORT_BUCKET: "{{ EDXAPP_IMPORT_EXPORT_BUCKET }}"
# install dir for the edx-platform repo
edxapp_code_dir: "{{ edxapp_app_dir }}/edx-platform"
......
......@@ -8,5 +8,3 @@ dependencies:
theme_users:
- "{{ edxapp_user }}"
when: "{{ EDXAPP_ENABLE_COMPREHENSIVE_THEMING }}"
- role: openstack
when: "{{ EDXAPP_USE_SWIFT_STORAGE }}"
......@@ -45,7 +45,7 @@
# Do A Checkout
- name: checkout edx-platform repo into {{ edxapp_code_dir }}
git_2_0_1:
git:
dest: "{{ edxapp_code_dir }}"
repo: "{{ edx_platform_repo }}"
version: "{{ edx_platform_version }}"
......@@ -72,7 +72,7 @@
# (yes, lowercase) to a Stanford-style theme and set
# edxapp_theme_name (again, lowercase) to its name.
- name: checkout Stanford-style theme
git_2_0_1:
git:
dest: "{{ edxapp_app_dir }}/themes/{{ edxapp_theme_name }}"
repo: "{{ edxapp_theme_source_repo }}"
version: "{{ edxapp_theme_version }}"
......@@ -110,10 +110,10 @@
- install:app-requirements
- name: Create the virtualenv to install the Python requirements
command: >
virtualenv {{ edxapp_venv_dir }}
chdir={{ edxapp_code_dir }}
creates={{ edxapp_venv_dir }}/bin/pip
command: "virtualenv {{ edxapp_venv_dir }}"
args:
chdir: "{{ edxapp_code_dir }}"
creates: "{{ edxapp_venv_dir }}/bin/pip"
become_user: "{{ edxapp_user }}"
environment: "{{ edxapp_environment }}"
tags:
......@@ -134,9 +134,9 @@
# Need to use command rather than pip so that we can maintain the context of our current working directory; some
# requirements are pathed relative to the edx-platform repo. Using the pip from inside the virtual environment implicitly
# installs everything into that virtual environment.
command: >
{{ edxapp_venv_dir }}/bin/pip install {{ COMMON_PIP_VERBOSITY }} -i {{ COMMON_PYPI_MIRROR_URL }} --exists-action w -r {{ item.item }}
chdir={{ edxapp_code_dir }}
command: "{{ edxapp_venv_dir }}/bin/pip install {{ COMMON_PIP_VERBOSITY }} -i {{ COMMON_PYPI_MIRROR_URL }} --exists-action w -r {{ item.item }}"
args:
chdir: "{{ edxapp_code_dir }}"
become_user: "{{ edxapp_user }}"
environment: "{{ edxapp_environment }}"
when: item.stat.exists
......@@ -151,9 +151,9 @@
# Need to use shell rather than pip so that we can maintain the context of our current working directory; some
# requirements are pathed relative to the edx-platform repo. Using the pip from inside the virtual environment implicitly
# installs everything into that virtual environment.
shell: >
{{ edxapp_venv_dir }}/bin/pip install {{ COMMON_PIP_VERBOSITY }} -i {{ COMMON_PYPI_MIRROR_URL }} --exists-action w -r {{ item }}
chdir={{ edxapp_code_dir }}
shell: "{{ edxapp_venv_dir }}/bin/pip install {{ COMMON_PIP_VERBOSITY }} -i {{ COMMON_PYPI_MIRROR_URL }} --exists-action w -r {{ item }}"
args:
chdir: "{{ edxapp_code_dir }}"
with_items:
- "{{ private_requirements_file }}"
become_user: "{{ edxapp_user }}"
......@@ -172,7 +172,7 @@
extra_args: "--exists-action w {{ item.extra_args|default('') }}"
virtualenv: "{{ edxapp_venv_dir }}"
state: present
with_items: EDXAPP_EXTRA_REQUIREMENTS
with_items: "{{ EDXAPP_EXTRA_REQUIREMENTS }}"
become_user: "{{ edxapp_user }}"
tags:
- install
......@@ -197,9 +197,9 @@
# Need to use shell rather than pip so that we can maintain the context of our current working directory; some
# requirements are pathed relative to the edx-platform repo. Using the pip from inside the virtual environment implicitly
# installs everything into that virtual environment.
shell: >
{{ edxapp_venv_dir }}/bin/pip install {{ COMMON_PIP_VERBOSITY }} -i {{ COMMON_PYPI_MIRROR_URL }} --exists-action w -r {{ item }}
chdir={{ edxapp_code_dir }}
shell: "{{ edxapp_venv_dir }}/bin/pip install {{ COMMON_PIP_VERBOSITY }} -i {{ COMMON_PYPI_MIRROR_URL }} --exists-action w -r {{ item }}"
args:
chdir: "{{ edxapp_code_dir }}"
with_items:
- "{{ sandbox_base_requirements }}"
- "{{ sandbox_local_requirements }}"
......@@ -211,8 +211,7 @@
- install:app-requirements
- name: create nodeenv
shell: >
{{ edxapp_venv_dir }}/bin/nodeenv {{ edxapp_nodeenv_dir }} --node={{ edxapp_node_version }} --prebuilt
shell: "{{ edxapp_venv_dir }}/bin/nodeenv {{ edxapp_nodeenv_dir }} --node={{ edxapp_node_version }} --prebuilt"
args:
creates: "{{ edxapp_nodeenv_dir }}"
tags:
......@@ -223,8 +222,7 @@
# This needs to be done as root since npm is weird about
# chown - https://github.com/npm/npm/issues/3565
- name: Set the npm registry
shell: >
npm config set registry '{{ COMMON_NPM_MIRROR_URL }}'
shell: "npm config set registry '{{ COMMON_NPM_MIRROR_URL }}'"
args:
creates: "{{ edxapp_app_dir }}/.npmrc"
environment: "{{ edxapp_environment }}"
......@@ -244,7 +242,7 @@
- name: install node dependencies
npm:
executable: "{{ edxapp_node_bin }}/npm"
executable: "{{ edxapp_nodeenv_bin }}/npm"
path: "{{ edxapp_code_dir }}"
production: yes
environment: "{{ edxapp_environment }}"
......@@ -279,9 +277,9 @@
- install:app-requirements
- name: code sandbox | Install sandbox requirements into sandbox venv
shell: >
{{ edxapp_sandbox_venv_dir }}/bin/pip install -i {{ COMMON_PYPI_MIRROR_URL }} --exists-action w -r {{ item }}
chdir={{ edxapp_code_dir }}
shell: "{{ edxapp_sandbox_venv_dir }}/bin/pip install -i {{ COMMON_PYPI_MIRROR_URL }} --exists-action w -r {{ item }}"
args:
chdir: "{{ edxapp_code_dir }}"
with_items:
- "{{ sandbox_local_requirements }}"
- "{{ sandbox_post_requirements }}"
......
......@@ -3,27 +3,35 @@
template:
src: "{{ item[0] }}.{{ item[1] }}.json.j2"
dest: "{{ edxapp_app_dir }}/{{ item[0] }}.{{ item[1] }}.json"
become_user: "{{ edxapp_user }}"
with_nested:
owner: "{{ edxapp_user }}"
group: "{{ common_web_group }}"
mode: 0640
become: true
with_nested:
- "{{ service_variants_enabled }}"
- [ 'env', 'auth' ]
tags:
- install
- install:configuration
- edxapp_cfg
- install:app-configuration
- edxapp_cfg # Old deprecated tag, will remove when possible
- name: create auth and application yaml config
template:
src: "{{ item[0] }}.{{ item[1] }}.yaml.j2"
dest: "{{ EDXAPP_CFG_DIR }}/{{ item[0] }}.{{ item[1] }}.yaml"
become_user: "{{ edxapp_user }}"
owner: "{{ edxapp_user }}"
group: "{{ common_web_group }}"
mode: 0640
become: true
with_nested:
- "{{ service_variants_enabled }}"
- [ 'env', 'auth' ]
tags:
- install
- install:configuration
- edxapp_cfg
- install:app-configuration
- edxapp_cfg # Old deprecated tag, will remove when possible
# write the supervisor scripts for the service variants
- name: "writing {{ item }} supervisor script"
......@@ -32,6 +40,7 @@
dest: "{{ supervisor_available_dir }}/{{ item }}.conf"
owner: "{{ supervisor_user }}"
group: "{{ supervisor_user }}"
mode: 0644
become_user: "{{ supervisor_user }}"
with_items: "{{ service_variants_enabled }}"
tags:
......@@ -45,6 +54,7 @@
dest: "{{ supervisor_available_dir }}/{{ item }}"
owner: "{{ supervisor_user }}"
group: "{{ supervisor_user }}"
mode: 0644
become_user: "{{ supervisor_user }}"
with_items:
- edxapp.conf
......@@ -57,6 +67,7 @@
template:
src: "{{ item }}_gunicorn.py.j2"
dest: "{{ edxapp_app_dir }}/{{ item }}_gunicorn.py"
mode: 0644
become_user: "{{ edxapp_user }}"
with_items: "{{ service_variants_enabled }}"
tags:
......
......@@ -10,7 +10,7 @@ command={{ executable }} -c {{ edxapp_app_dir }}/cms_gunicorn.py {{ EDXAPP_CMS_G
user={{ common_web_user }}
directory={{ edxapp_code_dir }}
environment={% if COMMON_ENABLE_NEWRELIC_APP %}NEW_RELIC_APP_NAME={{ EDXAPP_NEWRELIC_CMS_APPNAME }},NEW_RELIC_LICENSE_KEY={{ NEWRELIC_LICENSE_KEY }},{% endif -%}PORT={{ edxapp_cms_gunicorn_port }},ADDRESS={{ edxapp_cms_gunicorn_host }},LANG={{ EDXAPP_LANG }},DJANGO_SETTINGS_MODULE={{ EDXAPP_CMS_ENV }},SERVICE_VARIANT="cms",ALTERNATE_WORKER_QUEUES="lms"
environment={% if COMMON_ENABLE_NEWRELIC_APP %}NEW_RELIC_APP_NAME={{ EDXAPP_NEWRELIC_CMS_APPNAME }},NEW_RELIC_LICENSE_KEY={{ NEWRELIC_LICENSE_KEY }},{% endif -%}PORT={{ edxapp_cms_gunicorn_port }},ADDRESS={{ edxapp_cms_gunicorn_host }},LANG={{ EDXAPP_LANG }},DJANGO_SETTINGS_MODULE={{ EDXAPP_CMS_ENV }},SERVICE_VARIANT="cms"
stdout_logfile={{ supervisor_log_dir }}/%(program_name)s-stdout.log
stderr_logfile={{ supervisor_log_dir }}/%(program_name)s-stderr.log
killasgroup=true
......
......@@ -10,7 +10,7 @@ command={{ executable }} -c {{ edxapp_app_dir }}/lms_gunicorn.py lms.wsgi
user={{ common_web_user }}
directory={{ edxapp_code_dir }}
environment={% if COMMON_ENABLE_NEWRELIC_APP %}NEW_RELIC_APP_NAME={{ EDXAPP_NEWRELIC_LMS_APPNAME }},NEW_RELIC_LICENSE_KEY={{ NEWRELIC_LICENSE_KEY }},NEW_RELIC_CONFIG_FILE={{ edxapp_app_dir }}/newrelic.ini,{% endif -%} PORT={{ edxapp_lms_gunicorn_port }},ADDRESS={{ edxapp_lms_gunicorn_host }},LANG={{ EDXAPP_LANG }},DJANGO_SETTINGS_MODULE={{ EDXAPP_LMS_ENV }},SERVICE_VARIANT="lms",ALTERNATE_WORKER_QUEUES="cms",PATH="{{ edxapp_deploy_path }}"
environment={% if COMMON_ENABLE_NEWRELIC_APP %}NEW_RELIC_APP_NAME={{ EDXAPP_NEWRELIC_LMS_APPNAME }},NEW_RELIC_LICENSE_KEY={{ NEWRELIC_LICENSE_KEY }},NEW_RELIC_CONFIG_FILE={{ edxapp_app_dir }}/newrelic.ini,{% endif -%} PORT={{ edxapp_lms_gunicorn_port }},ADDRESS={{ edxapp_lms_gunicorn_host }},LANG={{ EDXAPP_LANG }},DJANGO_SETTINGS_MODULE={{ EDXAPP_LMS_ENV }},SERVICE_VARIANT="lms",PATH="{{ edxapp_deploy_path }}"
stdout_logfile={{ supervisor_log_dir }}/%(program_name)s-stdout.log
stderr_logfile={{ supervisor_log_dir }}/%(program_name)s-stderr.log
killasgroup=true
......
......@@ -19,13 +19,13 @@ edxlocal_databases:
edxlocal_database_users:
- {
db: "{{ ECOMMERCE_DEFAULT_DB_NAME | default(None) }}",
user: "{{ ECOMMERCE_DATABASES.default.USER | default(None) }}",
pass: "{{ ECOMMERCE_DATABASES.default.PASSWORD | default(None) }}"
user: "{{ ECOMMERCE_DATABASE_USER | default(None) }}",
pass: "{{ ECOMMERCE_DATABASE_PASSWORD | default(None) }}"
}
- {
db: "{{ INSIGHTS_DATABASE_NAME | default(None) }}",
user: "{{ INSIGHTS_DATABASES.default.USER | default(None) }}",
pass: "{{ INSIGHTS_DATABASES.default.PASSWORD | default(None) }}"
user: "{{ INSIGHTS_MYSQL_USER | default(None) }}",
pass: "{{ INSIGHTS_MYSQL_USER | default(None) }}"
}
- {
db: "{{ XQUEUE_MYSQL_DB_NAME | default(None) }}",
......@@ -44,18 +44,18 @@ edxlocal_database_users:
}
- {
db: "{{ PROGRAMS_DEFAULT_DB_NAME | default(None) }}",
user: "{{ PROGRAMS_DATABASES.default.USER | default(None) }}",
pass: "{{ PROGRAMS_DATABASES.default.PASSWORD | default(None) }}"
user: "{{ PROGRAMS_DATABASE_USER | default(None) }}",
pass: "{{ PROGRAMS_DATABASE_PASSWORD | default(None) }}"
}
- {
db: "{{ ANALYTICS_PIPELINE_OUTPUT_DATABASE_NAME | default(None) }}",
user: "{{ ANALYTICS_PIPELINE_OUTPUT_DATABASE.username }}",
pass: "{{ ANALYTICS_PIPELINE_OUTPUT_DATABASE.password }}"
user: "{{ ANALYTICS_PIPELINE_OUTPUT_DATABASE_USER | default(None) }}",
pass: "{{ ANALYTICS_PIPELINE_OUTPUT_DATABASE_PASSWORD | default(None) }}"
}
- {
db: "{{ HIVE_METASTORE_DATABASE_NAME | default(None) }}",
user: "{{ HIVE_METASTORE_DATABASE.user | default(None) }}",
pass: "{{ HIVE_METASTORE_DATABASE.password | default(None) }}"
user: "{{ HIVE_METASTORE_DATABASE_USER | default(None) }}",
pass: "{{ HIVE_METASTORE_DATABASE_PASSWORD | default(None) }}"
}
- {
db: "{{ CREDENTIALS_DEFAULT_DB_NAME | default(None) }}",
......
......@@ -21,30 +21,27 @@
#
#
- name: download elasticsearch plugin
shell: >
./npi fetch {{ ELASTICSEARCH_MONITOR_PLUGIN }} -y
shell: "./npi fetch {{ ELASTICSEARCH_MONITOR_PLUGIN }} -y"
args:
chdir: "{{ NEWRELIC_NPI_PREFIX }}"
creates: "{{ NEWRELIC_NPI_PREFIX }}/plugins/{{ ELASTICSEARCH_MONITOR_PLUGIN }}.compressed"
become_user: "{{ NEWRELIC_USER }}"
- name: prepare elasticsearch plugin
shell: >
./npi prepare {{ ELASTICSEARCH_MONITOR_PLUGIN }} -n
shell: "./npi prepare {{ ELASTICSEARCH_MONITOR_PLUGIN }} -n"
args:
chdir: "{{ NEWRELIC_NPI_PREFIX }}"
become_user: "{{ NEWRELIC_USER }}"
- name: configure elasticsearch plugin
template: >
src=plugins/me.snov.newrelic-elasticsearch/newrelic-elasticsearch-plugin-1.4.1/config/plugin.json.j2
dest={{ NEWRELIC_NPI_PREFIX }}/plugins/{{ ELASTICSEARCH_MONITOR_PLUGIN }}/newrelic-elasticsearch-plugin-1.4.1/config/plugin.json
owner={{ NEWRELIC_USER }}
mode=0644
template:
src: "plugins/me.snov.newrelic-elasticsearch/newrelic-elasticsearch-plugin-1.4.1/config/plugin.json.j2"
dest: "{{ NEWRELIC_NPI_PREFIX }}/plugins/{{ ELASTICSEARCH_MONITOR_PLUGIN }}/newrelic-elasticsearch-plugin-1.4.1/config/plugin.json"
owner: "{{ NEWRELIC_USER }}"
mode: 0644
- name: register/start elasticsearch plugin
shell: >
./npi add-service {{ ELASTICSEARCH_MONITOR_PLUGIN }} --start --user={{ NEWRELIC_USER }}
shell: "./npi add-service {{ ELASTICSEARCH_MONITOR_PLUGIN }} --start --user={{ NEWRELIC_USER }}"
args:
chdir: "{{ NEWRELIC_NPI_PREFIX }}"
become_user: "root"
......
......@@ -27,7 +27,7 @@
- name: Test for enhanced networking
local_action:
module: shell aws --profile {{ profile }} ec2 describe-instance-attribute --instance-id {{ ansible_ec2_instance_id }} --attribute sriovNetSupport
module: shell aws ec2 describe-instance-attribute --instance-id {{ ansible_ec2_instance_id }} --attribute sriovNetSupport
changed_when: False
become: False
register: enhanced_networking_raw
......@@ -56,7 +56,7 @@
- name: Set enhanced networking instance attribute
local_action:
module: shell aws --profile {{ profile }} ec2 modify-instance-attribute --instance-id {{ ansible_ec2_instance_id }} --sriov-net-support simple
module: shell aws ec2 modify-instance-attribute --instance-id {{ ansible_ec2_instance_id }} --sriov-net-support simple
when: supports_enhanced_networking and has_ixgbevf_kernel_module and not enhanced_networking_already_on
- name: Start instances
......
......@@ -39,7 +39,7 @@
- install:configuration
- name: git checkout forum repo into {{ forum_code_dir }}
git_2_0_1:
git:
dest: "{{ forum_code_dir }}"
repo: "{{ forum_source_repo }}"
version: "{{ forum_version }}"
......
......@@ -30,37 +30,39 @@
---
- name: install pip packages
pip: name={{ item }} state=present
with_items: gh_mirror_pip_pkgs
with_items: "{{ gh_mirror_pip_pkgs }}"
- name: install debian packages
apt: >
pkg={{ ",".join(gh_mirror_debian_pkgs) }}
state=present
update_cache=yes
apt:
pkg: '{{ ",".join(gh_mirror_debian_pkgs) }}'
state: present
update_cache: yes
- name: create gh_mirror user
user: >
name={{ gh_mirror_user }}
state=present
user:
name: "{{ gh_mirror_user }}"
state: present
- name: create the gh_mirror data directory
file: >
path={{ gh_mirror_data_dir }}
state=directory
owner={{ gh_mirror_user }}
group={{ gh_mirror_group }}
file:
path: "{{ gh_mirror_data_dir }}"
state: directory
owner: "{{ gh_mirror_user }}"
group: "{{ gh_mirror_group }}"
- name: create the gh_mirror app directory
file: >
path={{ gh_mirror_app_dir }}
state=directory
file:
path: "{{ gh_mirror_app_dir }}"
state: directory
- name: create org config
template: src=orgs.yml.j2 dest={{ gh_mirror_app_dir }}/orgs.yml
- name: copying sync scripts
copy: src={{ item }} dest={{ gh_mirror_app_dir }}/{{ item }}
with_items: gh_mirror_app_files
copy:
src: "{{ item }}"
dest: "{{ gh_mirror_app_dir }}/{{ item }}"
with_items: "{{ gh_mirror_app_files }}"
- name: creating cron job to update repos
cron:
......
......@@ -23,8 +23,8 @@
- name: Set git fetch.prune to ignore deleted remote refs
shell: git config --global fetch.prune true
become_user: "{{ repo_owner }}"
when: GIT_REPOS is defined
no_log: true
when: repo_owner is defined and GIT_REPOS|length > 0
tags:
- install
- install:code
......@@ -33,7 +33,7 @@
fail:
msg: '{{ item.PROTOCOL }} must be "https" or "ssh"'
when: (item.PROTOCOL != "https") and (item.PROTOCOL != "ssh") and GIT_REPOS is defined
with_items: GIT_REPOS
with_items: "{{ GIT_REPOS }}"
no_log: true
tags:
- install
......@@ -48,14 +48,14 @@
group: "{{ repo_group }}"
mode: "0600"
when: item.PROTOCOL == "ssh" and GIT_REPOS is defined
with_items: GIT_REPOS
with_items: "{{ GIT_REPOS }}"
no_log: true
tags:
- install
- install:code
- name: Checkout code over ssh
git_2_0_1:
git:
repo: "git@{{ item.DOMAIN }}:{{ item.PATH }}/{{ item.REPO }}"
dest: "{{ item.DESTINATION }}"
version: "{{ item.VERSION }}"
......@@ -64,21 +64,21 @@
become_user: "{{ repo_owner }}"
register: code_checkout
when: item.PROTOCOL == "ssh" and GIT_REPOS is defined
with_items: GIT_REPOS
with_items: "{{ GIT_REPOS }}"
no_log: true
tags:
- install
- install:code
- name: Checkout code over https
git_2_0_1:
git:
repo: "https://{{ item.DOMAIN }}/{{ item.PATH }}/{{ item.REPO }}"
dest: "{{ item.DESTINATION }}"
version: "{{ item.VERSION }}"
become_user: "{{ repo_owner }}"
register: code_checkout
when: item.PROTOCOL == "https" and GIT_REPOS is defined
with_items: GIT_REPOS
with_items: "{{ GIT_REPOS }}"
no_log: true
tags:
- install
......@@ -89,7 +89,7 @@
dest: "{{ git_home }}/.ssh/{{ item.REPO }}"
state: absent
when: item.PROTOCOL == "ssh" and GIT_REPOS is defined
with_items: GIT_REPOS
with_items: "{{ GIT_REPOS }}"
no_log: true
tags:
- install
......
......@@ -15,9 +15,9 @@
#
#
- name: restart gitreload
supervisorctl: >
name=gitreload
supervisorctl_path={{ supervisor_ctl }}
config={{ supervisor_cfg }}
state=restarted
supervisorctl:
name: gitreload
supervisorctl_path: "{{ supervisor_ctl }}"
config: "{{ supervisor_cfg }}"
state: restarted
when: not disable_edx_services
# Tasks to run if cloning repos to edx-platform.
- name: clone all course repos
git_2_0_1: dest={{ GITRELOAD_REPODIR }}/{{ item.name }} repo={{ item.url }} version={{ item.commit }}
git: dest={{ GITRELOAD_REPODIR }}/{{ item.name }} repo={{ item.url }} version={{ item.commit }}
become_user: "{{ common_web_user }}"
with_items: GITRELOAD_REPOS
with_items: "{{ GITRELOAD_REPOS }}"
- name: do import of courses
shell: >
executable=/bin/bash
chdir="{{ edxapp_code_dir }}"
SERVICE_VARIANT=lms {{ edxapp_venv_bin }}/python manage.py lms --settings=aws git_add_course {{ item.url }} {{ GITRELOAD_REPODIR }}/{{ item.name }}
shell: "SERVICE_VARIANT=lms {{ edxapp_venv_bin }}/python manage.py lms --settings=aws git_add_course {{ item.url }} {{ GITRELOAD_REPODIR }}/{{ item.name }}"
args:
executable: "/bin/bash"
chdir: "{{ edxapp_code_dir }}"
become_user: "{{ common_web_user }}"
with_items: GITRELOAD_REPOS
with_items: "{{ GITRELOAD_REPOS }}"
- name: change ownership on repos for access by edxapp and www-data
file: >
path={{ GITRELOAD_REPODIR }}
state=directory
owner={{ common_web_user }}
owner={{ common_web_group }}
recurse=yes
file:
path: "{{ GITRELOAD_REPODIR }}"
state: directory
owner: "{{ common_web_user }}"
owner: "{{ common_web_group }}"
recurse: yes
- name: change group on repos if using devstack
file: >
path={{ GITRELOAD_REPODIR }}
state=directory
group={{ edxapp_user }}
recurse=yes
file:
path: "{{ GITRELOAD_REPODIR }}"
state: directory
group: "{{ edxapp_user }}"
recurse: yes
when: devstack
- name: change mode on repos when using devstack
command: chmod -R o=rwX,g=srwX,o=rX {{ GITRELOAD_REPODIR }}
command: "chmod -R o=rwX,g=srwX,o=rX {{ GITRELOAD_REPODIR }}"
when: devstack
- name: create ssh dir for the content repos key
file: path=~/.ssh state=directory mode=0700
file:
path: "~/.ssh"
state: "directory"
mode: "0700"
become_user: "{{ common_web_user }}"
- name: install ssh key for the content repos
copy: content="{{ GITRELOAD_GIT_IDENTITY }}" dest=~/.ssh/id_rsa mode=0600
copy:
content: "{{ GITRELOAD_GIT_IDENTITY }}"
dest: "~/.ssh/id_rsa"
mode: "0600"
become_user: "{{ common_web_user }}"
- include: course_pull.yml
......@@ -11,35 +17,44 @@
tags: course_pull
- name: install gitreload
pip: >
name=git+{{ gitreload_repo }}@{{ gitreload_version }}#egg=gitreload
virtualenv={{ gitreload_venv }}
extra_args="--exists-action w"
pip:
name: "git+{{ gitreload_repo }}@{{ gitreload_version }}#egg=gitreload"
virtualenv: "{{ gitreload_venv }}"
extra_args: "--exists-action w"
become_user: "{{ gitreload_user }}"
notify: restart gitreload
- name: copy configuration
template: src=edx/app/gitreload/gr.env.json.j2 dest={{ gitreload_dir }}/gr.env.json
template:
src: "edx/app/gitreload/gr.env.json.j2"
dest: "{{ gitreload_dir }}/gr.env.json"
become_user: "{{ gitreload_user }}"
notify: restart gitreload
- name: "add gunicorn configuration file"
template: >
src=edx/app/gitreload/gitreload_gunicorn.py.j2 dest={{ gitreload_dir }}/gitreload_gunicorn.py
template:
src: "edx/app/gitreload/gitreload_gunicorn.py.j2"
dest: "{{ gitreload_dir }}/gitreload_gunicorn.py"
become_user: "{{ gitreload_user }}"
notify: restart gitreload
- name: "writing supervisor script"
template: >
src=edx/app/supervisor/conf.available.d/gitreload.conf.j2 dest={{ supervisor_available_dir }}/gitreload.conf
owner={{ supervisor_user }} group={{ common_web_user }} mode=0644
template:
src: "edx/app/supervisor/conf.available.d/gitreload.conf.j2"
dest: "{{ supervisor_available_dir }}/gitreload.conf"
owner: "{{ supervisor_user }}"
group: "{{ common_web_user }}"
mode: "0644"
- name: "enable supervisor script"
file: >
src={{ supervisor_available_dir }}/gitreload.conf
dest={{ supervisor_cfg_dir }}/gitreload.conf
owner={{ supervisor_user }} group={{ common_web_user }} mode=0644
state=link force=yes
file:
src: "{{ supervisor_available_dir }}/gitreload.conf"
dest: "{{ supervisor_cfg_dir }}/gitreload.conf"
owner: "{{ supervisor_user }}"
group: "{{ common_web_user }}"
mode: "0644"
state: link
force: "yes"
when: not disable_edx_services
# call supervisorctl update. this reloads
......@@ -54,9 +69,9 @@
when: not disable_edx_services
- name: ensure gitreload is started
supervisorctl: >
name=gitreload
supervisorctl_path={{ supervisor_ctl }}
config={{ supervisor_cfg }}
state=started
supervisorctl:
name: gitreload
supervisorctl_path: "{{ supervisor_ctl }}"
config: "{{ supervisor_cfg }}"
state: started
when: not disable_edx_services
......@@ -38,45 +38,45 @@
- deploy
- name: create gitreload user
user: >
name="{{ gitreload_user }}"
home="{{ gitreload_dir }}"
createhome=no
shell=/bin/false
user:
name: "{{ gitreload_user }}"
home: "{{ gitreload_dir }}"
createhome: no
shell: /bin/false
- name: ensure home folder exists
file: >
path={{ gitreload_dir }}
state=directory
owner={{ gitreload_user }}
group={{ gitreload_user }}
file:
path: "{{ gitreload_dir }}"
state: directory
owner: "{{ gitreload_user }}"
group: "{{ gitreload_user }}"
- name: ensure repo dir exists
file: >
path={{ GITRELOAD_REPODIR }}
state=directory
owner={{ common_web_user }}
group={{ common_web_group }}
file:
path: "{{ GITRELOAD_REPODIR }}"
state: directory
owner: "{{ common_web_user }}"
group: "{{ common_web_group }}"
- name: grab ssh host keys
shell: ssh-keyscan {{ item }}
become_user: "{{ common_web_user }}"
with_items: GITRELOAD_HOSTS
with_items: "{{ GITRELOAD_HOSTS }}"
register: gitreload_repo_host_keys
- name: add host keys if needed to known_hosts
lineinfile: >
create=yes
dest=~/.ssh/known_hosts
line="{{ item.stdout }}"
lineinfile:
create: yes
dest: ~/.ssh/known_hosts
line: "{{ item.stdout }}"
become_user: "{{ common_web_user }}"
with_items: gitreload_repo_host_keys.results
with_items: "{{ gitreload_repo_host_keys.results }}"
- name: create a symlink for venv python
file: >
src="{{ gitreload_venv_bin }}/{{ item }}"
dest={{ COMMON_BIN_DIR }}/{{ item }}.gitreload
state=link
file:
src: "{{ gitreload_venv_bin }}/{{ item }}"
dest: "{{ COMMON_BIN_DIR }}/{{ item }}.gitreload"
state: "link"
with_items:
- python
- pip
......
......@@ -24,7 +24,7 @@
# Ignoring error below so that we can move the data folder and have it be a link
- name: all | create folders
file: path={{ item.path }} state=directory
with_items: gluster_volumes
with_items: "{{ gluster_volumes }}"
when: >
"{{ ansible_default_ipv4.address }}" in "{{ gluster_peers|join(' ') }}"
ignore_errors: yes
......@@ -32,52 +32,52 @@
- name: primary | create peers
command: gluster peer probe {{ item }}
with_items: gluster_peers
with_items: "{{ gluster_peers }}"
when: ansible_default_ipv4.address == gluster_primary_ip
tags: gluster
- name: primary | create volumes
command: gluster volume create {{ item.name }} replica {{ item.replicas }} transport tcp {% for server in gluster_peers %}{{ server }}:{{ item.path }} {% endfor %}
with_items: gluster_volumes
with_items: "{{ gluster_volumes }}"
when: ansible_default_ipv4.address == gluster_primary_ip
ignore_errors: yes # There should be better error checking here
tags: gluster
- name: primary | start volumes
command: gluster volume start {{ item.name }}
with_items: gluster_volumes
with_items: "{{ gluster_volumes }}"
when: ansible_default_ipv4.address == gluster_primary_ip
ignore_errors: yes # There should be better error checking here
tags: gluster
- name: primary | set security
command: gluster volume set {{ item.name }} auth.allow {{ item.security }}
with_items: gluster_volumes
with_items: "{{ gluster_volumes }}"
when: ansible_default_ipv4.address == gluster_primary_ip
tags: gluster
- name: primary | set performance cache
command: gluster volume set {{ item.name }} performance.cache-size {{ item.cache_size }}
with_items: gluster_volumes
with_items: "{{ gluster_volumes }}"
when: ansible_default_ipv4.address == gluster_primary_ip
tags: gluster
- name: all | mount volume
mount: >
name={{ item.mount_location }}
src={{ gluster_primary_ip }}:{{ item.name }}
fstype=glusterfs
state=mounted
opts=defaults,_netdev
with_items: gluster_volumes
mount:
name: "{{ item.mount_location }}"
src: "{{ gluster_primary_ip }}:{{ item.name }}"
fstype: glusterfs
state: mounted
opts: defaults,_netdev
with_items: "{{ gluster_volumes }}"
tags: gluster
# This required due to an annoying bug in Ubuntu and gluster where it tries to mount the system
# before the network stack is up and can't lookup 127.0.0.1
- name: all | sleep mount
lineinfile: >
dest=/etc/rc.local
line='sleep 5; /bin/mount -a'
regexp='sleep 5; /bin/mount -a'
insertbefore='exit 0'
lineinfile:
dest: /etc/rc.local
line: 'sleep 5; /bin/mount -a'
regexp: 'sleep 5; /bin/mount -a'
insertbefore: 'exit 0'
tags: gluster
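# After this task, the tail of /etc/rc.local looks like (illustrative):
#   sleep 5; /bin/mount -a
#   exit 0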
......@@ -37,13 +37,13 @@
state: present
update_cache: true
cache_valid_time: 3600
with_items: GO_SERVER_BACKUP_APT_PKGS
with_items: "{{ GO_SERVER_BACKUP_APT_PKGS }}"
- name: install required python packages
pip:
name: "{{ item }}"
state: present
with_items: GO_SERVER_BACKUP_PIP_PKGS
with_items: "{{ GO_SERVER_BACKUP_PIP_PKGS }}"
- name: create the temp directory
file:
......
......@@ -52,7 +52,7 @@
state: present
update_cache: true
cache_valid_time: 3600
with_items: GO_SERVER_APT_PKGS
with_items: "{{ GO_SERVER_APT_PKGS }}"
- name: create go-server plugin directory
file:
......@@ -76,20 +76,17 @@
- { url: "{{ GO_SERVER_GITHUB_PR_PLUGIN_JAR_URL }}", md5: "{{ GO_SERVER_GITHUB_PR_PLUGIN_MD5 }}" }
- name: generate line for go-server password file for admin user
command: >
/usr/bin/htpasswd -nbs "{{ GO_SERVER_ADMIN_USERNAME }}" "{{ GO_SERVER_ADMIN_PASSWORD }}"
command: "/usr/bin/htpasswd -nbs \"{{ GO_SERVER_ADMIN_USERNAME }}\" \"{{ GO_SERVER_ADMIN_PASSWORD }}\""
register: admin_user_password_line
when: GO_SERVER_ADMIN_USERNAME and GO_SERVER_ADMIN_PASSWORD
- name: generate line for go-server password file for backup user
command: >
/usr/bin/htpasswd -nbs "{{ GO_SERVER_BACKUP_USERNAME }}" "{{ GO_SERVER_BACKUP_PASSWORD }}"
command: "/usr/bin/htpasswd -nbs \"{{ GO_SERVER_BACKUP_USERNAME }}\" \"{{ GO_SERVER_BACKUP_PASSWORD }}\""
register: backup_user_password_line
when: GO_SERVER_BACKUP_USERNAME and GO_SERVER_BACKUP_PASSWORD
- name: generate line for go-server password file for gomatic user
command: >
/usr/bin/htpasswd -nbs "{{ GO_SERVER_GOMATIC_USERNAME }}" "{{ GO_SERVER_GOMATIC_PASSWORD }}"
command: "/usr/bin/htpasswd -nbs \"{{ GO_SERVER_GOMATIC_USERNAME }}\" \"{{ GO_SERVER_GOMATIC_PASSWORD }}\""
register: gomatic_user_password_line
when: GO_SERVER_GOMATIC_USERNAME and GO_SERVER_GOMATIC_PASSWORD
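# Each register above captures one htpasswd line; -nbs writes an SHA1
# entry to stdout, e.g. (hash illustrative):
#   admin:{SHA}qUqP5cyxm6YcTAhz05Hph5gvu9M=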
......
......@@ -23,68 +23,84 @@
#
- name: install system packages
apt: >
pkg={{ item }}
state=present
with_items: hadoop_common_debian_pkgs
apt:
pkg: "{{ item }}"
state: present
with_items: "{{ hadoop_common_debian_pkgs }}"
- name: ensure group exists
group: name={{ hadoop_common_group }} system=yes state=present
group:
name: "{{ hadoop_common_group }}"
system: yes
state: present
- name: ensure user exists
user: >
name={{ hadoop_common_user }}
group={{ hadoop_common_group }}
home={{ HADOOP_COMMON_USER_HOME }} createhome=yes
shell=/bin/bash system=yes generate_ssh_key=yes
state=present
user:
name: "{{ hadoop_common_user }}"
group: "{{ hadoop_common_group }}"
home: "{{ HADOOP_COMMON_USER_HOME }}"
createhome: yes
shell: /bin/bash
system: yes
generate_ssh_key: yes
state: present
- name: own key authorized
file: >
src={{ HADOOP_COMMON_USER_HOME }}/.ssh/id_rsa.pub
dest={{ HADOOP_COMMON_USER_HOME }}/.ssh/authorized_keys
owner={{ hadoop_common_user }} group={{ hadoop_common_group }} state=link
file:
src: "{{ HADOOP_COMMON_USER_HOME }}/.ssh/id_rsa.pub"
dest: "{{ HADOOP_COMMON_USER_HOME }}/.ssh/authorized_keys"
owner: "{{ hadoop_common_user }}"
group: "{{ hadoop_common_group }}"
state: link
- name: ssh configured
template: >
src=hadoop_user_ssh_config.j2
dest={{ HADOOP_COMMON_USER_HOME }}/.ssh/config
mode=0600 owner={{ hadoop_common_user }} group={{ hadoop_common_group }}
template:
src: hadoop_user_ssh_config.j2
dest: "{{ HADOOP_COMMON_USER_HOME }}/.ssh/config"
mode: 0600
owner: "{{ hadoop_common_user }}"
group: "{{ hadoop_common_group }}"
- name: ensure user is in sudoers
lineinfile: >
dest=/etc/sudoers state=present
regexp='^%hadoop ALL\=' line='%hadoop ALL=(ALL) NOPASSWD:ALL'
validate='visudo -cf %s'
lineinfile:
dest: /etc/sudoers
state: present
regexp: '^%hadoop ALL\='
line: '%hadoop ALL=(ALL) NOPASSWD:ALL'
validate: 'visudo -cf %s'
- name: check if downloaded and extracted
stat: path={{ HADOOP_COMMON_HOME }}
register: extracted_hadoop_dir
- name: distribution downloaded
get_url: >
url={{ hadoop_common_dist.url }}
sha256sum={{ hadoop_common_dist.sha256sum }}
dest={{ hadoop_common_temporary_dir }}
get_url:
url: "{{ hadoop_common_dist.url }}"
sha256sum: "{{ hadoop_common_dist.sha256sum }}"
dest: "{{ hadoop_common_temporary_dir }}"
when: not extracted_hadoop_dir.stat.exists
- name: distribution extracted
shell: >
chdir={{ HADOOP_COMMON_USER_HOME }}
tar -xzf {{ hadoop_common_temporary_dir }}/{{ hadoop_common_dist.filename }} && chown -R {{ hadoop_common_user }}:{{ hadoop_common_group }} hadoop-{{ HADOOP_COMMON_VERSION }}
shell: "tar -xzf {{ hadoop_common_temporary_dir }}/{{ hadoop_common_dist.filename }} && chown -R {{ hadoop_common_user }}:{{ hadoop_common_group }} hadoop-{{ HADOOP_COMMON_VERSION }}"
args:
chdir: "{{ HADOOP_COMMON_USER_HOME }}"
when: not extracted_hadoop_dir.stat.exists
- name: versioned directory symlink created
file: >
src={{ HADOOP_COMMON_USER_HOME }}/hadoop-{{ HADOOP_COMMON_VERSION }}
dest={{ HADOOP_COMMON_HOME }}
owner={{ hadoop_common_user }} group={{ hadoop_common_group }} state=link
file:
src: "{{ HADOOP_COMMON_USER_HOME }}/hadoop-{{ HADOOP_COMMON_VERSION }}"
dest: "{{ HADOOP_COMMON_HOME }}"
owner: "{{ hadoop_common_user }}"
group: "{{ hadoop_common_group }}"
state: link
- name: configuration installed
template: >
src={{ item }}.j2
dest={{ HADOOP_COMMON_CONF_DIR }}/{{ item }}
mode=0640 owner={{ hadoop_common_user }} group={{ hadoop_common_group }}
template:
src: "{{ item }}.j2"
dest: "{{ HADOOP_COMMON_CONF_DIR }}/{{ item }}"
mode: 0640
owner: "{{ hadoop_common_user }}"
group: "{{ hadoop_common_group }}"
with_items:
- hadoop-env.sh
- mapred-site.xml
......@@ -93,79 +109,84 @@
- yarn-site.xml
- name: upstart scripts installed
template: >
src={{ item }}.j2
dest=/etc/init/{{ item }}
mode=0640 owner=root group=root
template:
src: "{{ item }}.j2"
dest: "/etc/init/{{ item }}"
mode: 0640
owner: root
group: root
with_items:
- hdfs.conf
- yarn.conf
- name: hadoop env file exists
file: >
path={{ hadoop_common_env }} state=touch
owner={{ hadoop_common_user }} group={{ hadoop_common_group }}
file:
path: "{{ hadoop_common_env }}"
state: touch
owner: "{{ hadoop_common_user }}"
group: "{{ hadoop_common_group }}"
- name: env vars sourced in bashrc
lineinfile: >
dest={{ HADOOP_COMMON_USER_HOME }}/.bashrc
state=present
regexp="^. {{ hadoop_common_env }}"
line=". {{ hadoop_common_env }}"
insertbefore=BOF
lineinfile:
dest: "{{ HADOOP_COMMON_USER_HOME }}/.bashrc"
state: present
regexp: "^. {{ hadoop_common_env }}"
line: ". {{ hadoop_common_env }}"
insertbefore: BOF
- name: env vars sourced in hadoop env
lineinfile: >
dest={{ hadoop_common_env }} state=present
regexp="^. {{ HADOOP_COMMON_CONF_DIR }}/hadoop-env.sh" line=". {{ HADOOP_COMMON_CONF_DIR }}/hadoop-env.sh"
lineinfile:
dest: "{{ hadoop_common_env }}"
state: present
regexp: "^. {{ HADOOP_COMMON_CONF_DIR }}/hadoop-env.sh"
line: ". {{ HADOOP_COMMON_CONF_DIR }}/hadoop-env.sh"
- name: check if native libraries need to be built
stat: path={{ HADOOP_COMMON_USER_HOME }}/.native_libs_built
register: native_libs_built
- name: protobuf downloaded
get_url: >
url={{ hadoop_common_protobuf_dist.url }}
sha256sum={{ hadoop_common_protobuf_dist.sha256sum }}
dest={{ hadoop_common_temporary_dir }}
get_url:
url: "{{ hadoop_common_protobuf_dist.url }}"
sha256sum: "{{ hadoop_common_protobuf_dist.sha256sum }}"
dest: "{{ hadoop_common_temporary_dir }}"
when: not native_libs_built.stat.exists
- name: protobuf extracted
shell: >
chdir={{ hadoop_common_temporary_dir }}
tar -xzf {{ hadoop_common_protobuf_dist.filename }}
shell: "tar -xzf {{ hadoop_common_protobuf_dist.filename }}"
args:
chdir: "{{ hadoop_common_temporary_dir }}"
when: not native_libs_built.stat.exists
- name: protobuf installed
shell: >
chdir={{ hadoop_common_temporary_dir }}/protobuf-{{ HADOOP_COMMON_PROTOBUF_VERSION }}
./configure --prefix=/usr/local && make && make install
shell: "./configure --prefix=/usr/local && make && make install"
args:
chdir: "{{ hadoop_common_temporary_dir }}/protobuf-{{ HADOOP_COMMON_PROTOBUF_VERSION }}"
when: not native_libs_built.stat.exists
- name: native lib source downloaded
get_url: >
url={{ hadoop_common_native_dist.url }}
sha256sum={{ hadoop_common_native_dist.sha256sum }}
dest={{ hadoop_common_temporary_dir }}/{{ hadoop_common_native_dist.filename }}
get_url:
url: "{{ hadoop_common_native_dist.url }}"
sha256sum: "{{ hadoop_common_native_dist.sha256sum }}"
dest: "{{ hadoop_common_temporary_dir }}/{{ hadoop_common_native_dist.filename }}"
when: not native_libs_built.stat.exists
- name: native lib source extracted
shell: >
chdir={{ hadoop_common_temporary_dir }}
tar -xzf {{ hadoop_common_native_dist.filename }}
shell: "tar -xzf {{ hadoop_common_native_dist.filename }}"
args:
chdir: "{{ hadoop_common_temporary_dir }}"
when: not native_libs_built.stat.exists
- name: native lib built
shell: >
chdir={{ hadoop_common_temporary_dir }}/hadoop-common-release-{{ HADOOP_COMMON_VERSION }}/hadoop-common-project
mvn package -X -Pnative -DskipTests
shell: "mvn package -X -Pnative -DskipTests"
args:
chdir: "{{ hadoop_common_temporary_dir }}/hadoop-common-release-{{ HADOOP_COMMON_VERSION }}/hadoop-common-project"
environment:
LD_LIBRARY_PATH: /usr/local/lib
when: not native_libs_built.stat.exists
- name: old native libs renamed
shell: >
mv {{ HADOOP_COMMON_HOME }}/lib/native/{{ item.name }} {{ HADOOP_COMMON_HOME }}/lib/native/{{ item.new_name }}
shell: "mv {{ HADOOP_COMMON_HOME }}/lib/native/{{ item.name }} {{ HADOOP_COMMON_HOME }}/lib/native/{{ item.new_name }}"
with_items:
- { name: libhadoop.a, new_name: libhadoop32.a }
- { name: libhadoop.so, new_name: libhadoop32.so }
......@@ -173,9 +194,9 @@
when: not native_libs_built.stat.exists
- name: new native libs installed
shell: >
chdir={{ hadoop_common_temporary_dir }}/hadoop-common-release-{{ HADOOP_COMMON_VERSION }}/hadoop-common-project/hadoop-common/target/native/target/usr/local/lib
chown {{ hadoop_common_user }}:{{ hadoop_common_group }} {{ item }} && cp {{ item }} {{ HADOOP_COMMON_HOME }}/lib/native/{{ item }}
shell: "chown {{ hadoop_common_user }}:{{ hadoop_common_group }} {{ item }} && cp {{ item }} {{ HADOOP_COMMON_HOME }}/lib/native/{{ item }}"
args:
chdir: "{{ hadoop_common_temporary_dir }}/hadoop-common-release-{{ HADOOP_COMMON_VERSION }}/hadoop-common-project/hadoop-common/target/native/target/usr/local/lib"
with_items:
- libhadoop.a
- libhadoop.so
......@@ -183,13 +204,17 @@
when: not native_libs_built.stat.exists
- name: native lib marker touched
file: >
path={{ HADOOP_COMMON_USER_HOME }}/.native_libs_built
owner={{ hadoop_common_user }} group={{ hadoop_common_group }} state=touch
file:
path: "{{ HADOOP_COMMON_USER_HOME }}/.native_libs_built"
owner: "{{ hadoop_common_user }}"
group: "{{ hadoop_common_group }}"
state: touch
when: not native_libs_built.stat.exists
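# For illustration: the .native_libs_built marker makes this build chain
# one-shot; every task above is guarded by the stat of that marker, so
# re-running the role skips the protobuf and maven builds entirely.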
- name: service directory exists
file: >
path={{ HADOOP_COMMON_SERVICES_DIR }}
mode=0750 owner={{ hadoop_common_user }} group={{ hadoop_common_group }}
state=directory
file:
path: "{{ HADOOP_COMMON_SERVICES_DIR }}"
mode: "0750"
owner: "{{ hadoop_common_user }}"
group: "{{ hadoop_common_group }}"
state: directory
......@@ -22,9 +22,11 @@
notify: restart haproxy
- name: Server configuration file
template: >
src={{ haproxy_template_dir }}/haproxy.cfg.j2 dest=/etc/haproxy/haproxy.cfg
owner=root group=root mode=0644
template:
src: "{{ haproxy_template_dir }}/haproxy.cfg.j2 dest=/etc/haproxy/haproxy.cfg"
owner: root
group: root
mode: 0644
notify: reload haproxy
- name: Enabled in default
......
---
# Installs the harprofiler
- name: create harprofiler user
user: >
name="{{ harprofiler_user }}"
createhome=no
home={{ harprofiler_dir }}
shell=/bin/bash
user:
name: "{{ harprofiler_user }}"
createhome: no
home: "{{ harprofiler_dir }}"
shell: /bin/bash
- name: create harprofiler repo
file: >
path={{ harprofiler_dir }} state=directory
owner="{{ harprofiler_user }}" group="{{ common_web_group }}"
mode=0755
file:
path: "{{ harprofiler_dir }}"
state: directory
owner: "{{ harprofiler_user }}"
group: "{{ common_web_group }}"
mode: 0755
- name: check out the harprofiler
git_2_0_1: >
dest={{ harprofiler_dir }}
repo={{ harprofiler_github_url }} version={{ harprofiler_version }}
accept_hostkey=yes
git:
dest: "{{ harprofiler_dir }}"
repo: "{{ harprofiler_github_url }}"
version: "{{ harprofiler_version }}"
accept_hostkey: yes
become_user: "{{ harprofiler_user }}"
- name: set bashrc for harprofiler user
template: >
src=bashrc.j2 dest="{{ harprofiler_dir }}/.bashrc" owner="{{ harprofiler_user }}"
mode=0755
template:
src: bashrc.j2
dest: "{{ harprofiler_dir }}/.bashrc"
owner: "{{ harprofiler_user }}"
mode: 0755
- name: install requirements
pip: >
requirements="{{ harprofiler_dir }}/requirements.txt" virtualenv="{{ harprofiler_venv_dir }}"
pip:
requirements: "{{ harprofiler_dir }}/requirements.txt"
virtualenv: "{{ harprofiler_venv_dir }}"
become_user: "{{ harprofiler_user }}"
- name: update config file
# harprofiler ships with a default config file. Do a line-replace for the
# default configuration values that do not match what this machine will have.
lineinfile: >
dest={{ harprofiler_dir }}/config.yaml
regexp="browsermob_dir"
line="browsermob_dir: /usr/local"
state=present
lineinfile:
dest: "{{ harprofiler_dir }}/config.yaml"
regexp: "browsermob_dir"
line: "browsermob_dir: /usr/local"
state: present
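# A hedged illustration (not part of this role): the same line-replace pattern
# can pin any other shipped default in config.yaml, e.g. a hypothetical
# results-directory key:
# - name: point harprofiler output at a custom dir (illustrative only)
#   lineinfile:
#     dest: "{{ harprofiler_dir }}/config.yaml"
#     regexp: "^har_dir"
#     line: "har_dir: {{ harprofiler_dir }}/results"
#     state: present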
- name: create validation shell script
template:
......@@ -47,8 +53,8 @@
mode: 0755
become_user: "{{ harprofiler_user }}"
- name: test install
shell: >
./{{ harprofiler_validation_script }} chdir={{ harprofiler_dir }}
shell: "./{{ harprofiler_validation_script }}"
args:
chdir: "{{ harprofiler_dir }}"
become_user: "{{ harprofiler_user }}"
......@@ -31,7 +31,7 @@
- install
- install:app-requirements
become_user: "{{ harstorage_user }}"
with_items: harstorage_python_pkgs
with_items: "{{ harstorage_python_pkgs }}"
- name: create directories
file:
......
......@@ -18,12 +18,17 @@ HIVE_CONF: "{{ HIVE_HOME }}/conf"
HIVE_LIB: "{{ HIVE_HOME }}/lib"
HIVE_METASTORE_DATABASE_NAME: edx_hive_metastore
HIVE_METASTORE_DATABASE_USER: edx_hive
HIVE_METASTORE_DATABASE_PASSWORD: edx
HIVE_METASTORE_DATABASE_HOST: 127.0.0.1
HIVE_METASTORE_DATABASE_PORT: 3306
HIVE_METASTORE_DATABASE:
user: edx_hive
password: edx
user: "{{ HIVE_METASTORE_DATABASE_USER }}"
password: "{{ HIVE_METASTORE_DATABASE_PASSWORD }}"
name: "{{ HIVE_METASTORE_DATABASE_NAME }}"
host: 127.0.0.1
port: 3306
host: "{{ HIVE_METASTORE_DATABASE_HOST }}"
port: "{{ HIVE_METASTORE_DATABASE_PORT }}"
#
......
......@@ -21,63 +21,71 @@
- name: check if downloaded and extracted
stat: path={{ HIVE_HOME }}
stat:
path: "{{ HIVE_HOME }}"
register: extracted_dir
- name: distribution downloaded
get_url: >
url={{ hive_dist.url }}
sha256sum={{ hive_dist.sha256sum }}
dest={{ hive_temporary_dir }}
get_url:
url: "{{ hive_dist.url }}"
sha256sum: "{{ hive_dist.sha256sum }}"
dest: "{{ hive_temporary_dir }}"
when: not extracted_dir.stat.exists
- name: distribution extracted
shell: >
chdir={{ HADOOP_COMMON_USER_HOME }}
tar -xzf {{ hive_temporary_dir }}/{{ hive_dist.filename }} && chown -R {{ hadoop_common_user }}:{{ hadoop_common_group }} hive-{{ HIVE_VERSION }}-bin
shell: "tar -xzf {{ hive_temporary_dir }}/{{ hive_dist.filename }} && chown -R {{ hadoop_common_user }}:{{ hadoop_common_group }} hive-{{ HIVE_VERSION }}-bin"
args:
chdir: "{{ HADOOP_COMMON_USER_HOME }}"
when: not extracted_dir.stat.exists
- name: versioned directory symlink created
file: >
src={{ HADOOP_COMMON_USER_HOME }}/hive-{{ HIVE_VERSION }}-bin
dest={{ HIVE_HOME }}
owner={{ hadoop_common_user }} group={{ hadoop_common_group }} state=link
file:
src: "{{ HADOOP_COMMON_USER_HOME }}/hive-{{ HIVE_VERSION }}-bin"
dest: "{{ HIVE_HOME }}"
owner: "{{ hadoop_common_user }}"
group: "{{ hadoop_common_group }}"
state: link
- name: hive mysql connector distribution downloaded
get_url: >
url={{ hive_mysql_connector_dist.url }}
sha256sum={{ hive_mysql_connector_dist.sha256sum }}
dest={{ hive_temporary_dir }}
get_url:
url: "{{ hive_mysql_connector_dist.url }}"
sha256sum: "{{ hive_mysql_connector_dist.sha256sum }}"
dest: "{{ hive_temporary_dir }}"
when: not extracted_dir.stat.exists
- name: hive mysql connector distribution extracted
shell: >
chdir={{ hive_temporary_dir }}
tar -xzf {{ hive_temporary_dir }}/{{ hive_mysql_connector_dist.filename }}
shell: "tar -xzf {{ hive_temporary_dir }}/{{ hive_mysql_connector_dist.filename }}"
args:
chdir: "{{ hive_temporary_dir }}"
when: not extracted_dir.stat.exists
- name: hive lib exists
file: >
path={{ HIVE_LIB }}
owner={{ hadoop_common_user }} group={{ hadoop_common_group }} state=directory
file:
path: "{{ HIVE_LIB }}"
owner: "{{ hadoop_common_user }}"
group: "{{ hadoop_common_group }}"
state: directory
- name: hive mysql connector installed
shell: >
chdir=/{{ hive_temporary_dir }}/mysql-connector-java-{{ HIVE_MYSQL_CONNECTOR_VERSION }}
cp mysql-connector-java-{{ HIVE_MYSQL_CONNECTOR_VERSION }}-bin.jar {{ HIVE_LIB }} &&
chown {{ hadoop_common_user }}:{{ hadoop_common_group }} {{ HIVE_LIB }}/mysql-connector-java-{{ HIVE_MYSQL_CONNECTOR_VERSION }}-bin.jar
shell: "cp mysql-connector-java-{{ HIVE_MYSQL_CONNECTOR_VERSION }}-bin.jar {{ HIVE_LIB }} && chown {{ hadoop_common_user }}:{{ hadoop_common_group }} {{ HIVE_LIB }}/mysql-connector-java-{{ HIVE_MYSQL_CONNECTOR_VERSION }}-bin.jar"
args:
chdir: "/{{ hive_temporary_dir }}/mysql-connector-java-{{ HIVE_MYSQL_CONNECTOR_VERSION }}"
when: not extracted_dir.stat.exists
- name: configuration installed
template: >
src={{ item }}.j2
dest={{ HIVE_CONF }}/{{ item }}
mode=0640 owner={{ hadoop_common_user }} group={{ hadoop_common_group }}
template:
src: "{{ item }}.j2"
dest: "{{ HIVE_CONF }}/{{ item }}"
mode: 0640
owner: "{{ hadoop_common_user }}"
group: "{{ hadoop_common_group }}"
with_items:
- hive-env.sh
- hive-site.xml
- name: env vars sourced in hadoop env
lineinfile: >
dest={{ hadoop_common_env }} state=present
regexp="^. {{ HIVE_CONF }}/hive-env.sh" line=". {{ HIVE_CONF }}/hive-env.sh"
lineinfile:
dest: "{{ hadoop_common_env }}"
state: present
regexp: "^. {{ HIVE_CONF }}/hive-env.sh"
line: ". {{ HIVE_CONF }}/hive-env.sh"
......@@ -61,15 +61,20 @@ INSIGHTS_LEARNER_API_LIST_DOWNLOAD_FIELDS: !!null
INSIGHTS_DATABASE_NAME: 'dashboard'
INSIGHTS_DATABASE_USER: rosencrantz
INSIGHTS_DATABASE_PASSWORD: secret
INSIGHTS_DATABASE_HOST: 127.0.0.1
INSIGHTS_DATABASE_PORT: 3306
INSIGHTS_DATABASES:
# rw user
default:
ENGINE: 'django.db.backends.mysql'
NAME: '{{ INSIGHTS_DATABASE_NAME }}'
USER: 'rosencrantz'
PASSWORD: 'secret'
HOST: '127.0.0.1'
PORT: '3306'
USER: '{{ INSIGHTS_DATABASE_USER }}'
PASSWORD: '{{ INSIGHTS_DATABASE_PASSWORD }}'
HOST: "{{ INSIGHTS_DATABASE_HOST }}"
PORT: '{{ INSIGHTS_DATABASE_PORT }}'
INSIGHTS_LMS_COURSE_SHORTCUT_BASE_URL: "URL_FOR_LMS_COURSE_LIST_PAGE"
......@@ -201,8 +206,9 @@ insights_debian_pkgs:
- 'libmysqlclient-dev'
- 'build-essential'
- gettext
- openjdk-7-jdk
insights_redhat_pkgs:
- 'community-mysql-devel'
- openjdk-7-jdk
insights_release_specific_debian_pkgs:
precise:
- openjdk-7-jdk
xenial:
- openjdk-8-jdk
......@@ -20,6 +20,5 @@ dependencies:
edx_service_user: "{{ insights_user }}"
edx_service_home: "{{ insights_home }}"
edx_service_packages:
debian: "{{ insights_debian_pkgs }}"
redhat: "{{ insights_redhat_pkgs }}"
debian: "{{ insights_debian_pkgs + insights_release_specific_debian_pkgs[ansible_distribution_release] }}"
redhat: []
......@@ -10,43 +10,44 @@
#
#
# Tasks for role insights
#
#
# Overview:
#
#
#
# Dependencies:
#
#
#
# Example play:
#
#
- name: setup the insights env file
template: >
src="edx/app/insights/insights_env.j2"
dest="{{ insights_app_dir }}/insights_env"
owner={{ insights_user }}
group={{ insights_user }}
mode=0644
template:
src: "edx/app/insights/insights_env.j2"
dest: "{{ insights_app_dir }}/insights_env"
owner: "{{ insights_user }}"
group: "{{ insights_user }}"
mode: 0644
tags:
- install
- install:configuration
- name: install application requirements
pip: >
requirements="{{ insights_requirements_base }}/{{ item }}"
virtualenv="{{ insights_venv_dir }}"
state=present extra_args="--exists-action w"
pip:
requirements: "{{ insights_requirements_base }}/{{ item }}"
virtualenv: "{{ insights_venv_dir }}"
state: present
extra_args: "--exists-action w"
become_user: "{{ insights_user }}"
with_items: insights_requirements
with_items: "{{ insights_requirements }}"
tags:
- install
- install:app-requirements
- name: create nodeenv
shell: >
creates={{ insights_nodeenv_dir }}
{{ insights_venv_dir }}/bin/nodeenv {{ insights_nodeenv_dir }} --prebuilt
shell: "{{ insights_venv_dir }}/bin/nodeenv {{ insights_nodeenv_dir }} --prebuilt"
args:
creates: "{{ insights_nodeenv_dir }}"
become_user: "{{ insights_user }}"
tags:
- install
......@@ -61,21 +62,19 @@
environment: "{{ insights_environment }}"
- name: install bower dependencies
shell: >
chdir={{ insights_code_dir }}
. {{ insights_venv_dir }}/bin/activate &&
. {{ insights_nodeenv_bin }}/activate && {{ insights_node_bin }}/bower install --production --config.interactive=false
shell: ". {{ insights_venv_dir }}/bin/activate && . {{ insights_nodeenv_bin }}/activate && {{ insights_node_bin }}/bower install --production --config.interactive=false"
args:
chdir: "{{ insights_code_dir }}"
become_user: "{{ insights_user }}"
tags:
- install
- install:app-requirements
- name: migrate
shell: >
chdir={{ insights_code_dir }}
DB_MIGRATION_USER='{{ COMMON_MYSQL_MIGRATE_USER }}'
DB_MIGRATION_PASS='{{ COMMON_MYSQL_MIGRATE_PASS }}'
{{ insights_venv_dir }}/bin/python {{ insights_manage }} migrate --noinput
shell: "DB_MIGRATION_USER='{{ COMMON_MYSQL_MIGRATE_USER }}' DB_MIGRATION_PASS='{{ COMMON_MYSQL_MIGRATE_PASS }}' {{ insights_venv_dir }}/bin/python {{ insights_manage }} migrate --noinput"
args:
chdir: "{{ insights_code_dir }}"
become_user: "{{ insights_user }}"
environment: "{{ insights_environment }}"
when: migrate_db is defined and migrate_db|lower == "yes"
......@@ -84,18 +83,18 @@
- migrate:db
- name: run r.js optimizer
shell: >
chdir={{ insights_code_dir }}
. {{ insights_nodeenv_bin }}/activate && {{ insights_node_bin }}/r.js -o build.js
shell: ". {{ insights_nodeenv_bin }}/activate && {{ insights_node_bin }}/r.js -o build.js"
args:
chdir: "{{ insights_code_dir }}"
become_user: "{{ insights_user }}"
tags:
- assets
- assets:gather
- name: run collectstatic
shell: >
chdir={{ insights_code_dir }}
{{ insights_venv_dir }}/bin/python {{ insights_manage }} {{ item }}
shell: "{{ insights_venv_dir }}/bin/python {{ insights_manage }} {{ item }}"
args:
chdir: "{{ insights_code_dir }}"
become_user: "{{ insights_user }}"
environment: "{{ insights_environment }}"
with_items:
......@@ -106,38 +105,42 @@
- assets:gather
- name: compile translations
shell: >
chdir={{ insights_code_dir }}/analytics_dashboard
. {{ insights_venv_dir }}/bin/activate && i18n_tool generate -v
shell: ". {{ insights_venv_dir }}/bin/activate && i18n_tool generate -v"
args:
chdir: "{{ insights_code_dir }}/analytics_dashboard"
become_user: "{{ insights_user }}"
tags:
- assets
- assets:gather
- name: write out the supervisor wrapper
template: >
src=edx/app/insights/insights.sh.j2
dest={{ insights_app_dir }}/{{ insights_service_name }}.sh
mode=0650 owner={{ supervisor_user }} group={{ common_web_user }}
template:
src: "edx/app/insights/insights.sh.j2"
dest: "{{ insights_app_dir }}/{{ insights_service_name }}.sh"
mode: 0650
owner: "{{ supervisor_user }}"
group: "{{ common_web_user }}"
tags:
- install
- install:configuration
- name: write supervisord config
template: >
src=edx/app/supervisor/conf.d.available/insights.conf.j2
dest="{{ supervisor_available_dir }}/{{ insights_service_name }}.conf"
owner={{ supervisor_user }} group={{ common_web_user }} mode=0644
template:
src: edx/app/supervisor/conf.d.available/insights.conf.j2
dest: "{{ supervisor_available_dir }}/{{ insights_service_name }}.conf"
owner: "{{ supervisor_user }}"
group: "{{ common_web_user }}"
mode: 0644
tags:
- install
- install:configuration
- name: enable supervisor script
file: >
src={{ supervisor_available_dir }}/{{ insights_service_name }}.conf
dest={{ supervisor_cfg_dir }}/{{ insights_service_name }}.conf
state=link
force=yes
file:
src: "{{ supervisor_available_dir }}/{{ insights_service_name }}.conf"
dest: "{{ supervisor_cfg_dir }}/{{ insights_service_name }}.conf"
state: link
force: yes
when: not disable_edx_services
tags:
- install
......@@ -151,10 +154,10 @@
- manage:start
- name: create symlinks from the venv bin dir
file: >
src="{{ insights_venv_dir }}/bin/{{ item }}"
dest="{{ COMMON_BIN_DIR }}/{{ item.split('.')[0] }}.{{ insights_service_name }}"
state=link
file:
src: "{{ insights_venv_dir }}/bin/{{ item }}"
dest: "{{ COMMON_BIN_DIR }}/{{ item.split('.')[0] }}.{{ insights_service_name }}"
state: link
with_items:
- python
- pip
......@@ -164,20 +167,20 @@
- install:base
- name: create manage.py symlink
file: >
src="{{ insights_manage }}"
dest="{{ COMMON_BIN_DIR }}/manage.{{ insights_service_name }}"
state=link
file:
src: "{{ insights_manage }}"
dest: "{{ COMMON_BIN_DIR }}/manage.{{ insights_service_name }}"
state: link
tags:
- install
- install:base
- name: restart insights
supervisorctl: >
state=restarted
supervisorctl_path={{ supervisor_ctl }}
config={{ supervisor_cfg }}
name={{ insights_service_name }}
supervisorctl:
state: restarted
supervisorctl_path: "{{ supervisor_ctl }}"
config: "{{ supervisor_cfg }}"
name: "{{ insights_service_name }}"
when: not disable_edx_services
become_user: "{{ supervisor_service_user }}"
tags:
......
......@@ -34,108 +34,125 @@
when: JENKINS_ADMIN_S3_PROFILE.secret_key is not defined
- name: add admin specific apt repositories
apt_repository: repo="{{ item }}" state=present update_cache=yes
with_items: jenkins_admin_debian_repos
apt_repository:
repo: "{{ item }}"
state: "present"
update_cache: "yes"
with_items: "{{ jenkins_admin_debian_repos }}"
- name: create the scripts directory
file: path={{ jenkins_admin_scripts_dir }} state=directory
owner={{ jenkins_user }} group={{ jenkins_group }} mode=755
file:
path: "{{ jenkins_admin_scripts_dir }}"
state: "directory"
owner: "{{ jenkins_user }}"
group: "{{ jenkins_group }}"
mode: 0755
- name: configure s3 plugin
template: >
src="./{{ jenkins_home }}/hudson.plugins.s3.S3BucketPublisher.xml.j2"
dest="{{ jenkins_home }}/hudson.plugins.s3.S3BucketPublisher.xml"
owner={{ jenkins_user }}
group={{ jenkins_group }}
mode=0644
template:
src: "./{{ jenkins_home }}/hudson.plugins.s3.S3BucketPublisher.xml.j2"
dest: "{{ jenkins_home }}/hudson.plugins.s3.S3BucketPublisher.xml"
owner: "{{ jenkins_user }}"
group: "{{ jenkins_group }}"
mode: 0644
- name: configure the boto profiles for jenkins
template: >
src="./{{ jenkins_home }}/boto.j2"
dest="{{ jenkins_home }}/.boto"
owner="{{ jenkins_user }}"
group="{{ jenkins_group }}"
mode="0600"
template:
src: "./{{ jenkins_home }}/boto.j2"
dest: "{{ jenkins_home }}/.boto"
owner: "{{ jenkins_user }}"
group: "{{ jenkins_group }}"
mode: 0600
tags:
- aws-config
- name: create the .aws directory
file: path={{ jenkins_home }}/.aws state=directory
owner={{ jenkins_user }} group={{ jenkins_group }} mode=700
file:
path: "{{ jenkins_home }}/.aws"
state: "directory"
owner: "{{ jenkins_user }}"
group: "{{ jenkins_group }}"
mode: 0700
tags:
- aws-config
- name: configure the awscli profiles for jenkins
template: >
src="./{{ jenkins_home }}/aws_config.j2"
dest="{{ jenkins_home }}/.aws/config"
owner="{{ jenkins_user }}"
group="{{ jenkins_group }}"
mode="0600"
template:
src: "./{{ jenkins_home }}/aws_config.j2"
dest: "{{ jenkins_home }}/.aws/config"
owner: "{{ jenkins_user }}"
group: "{{ jenkins_group }}"
mode: 0600
tags:
- aws-config
- name: create the ssh directory
file: >
path={{ jenkins_home }}/.ssh
owner={{ jenkins_user }}
group={{ jenkins_group }}
mode=0700
state=directory
file:
path: "{{ jenkins_home }}/.ssh"
owner: "{{ jenkins_user }}"
group: "{{ jenkins_group }}"
mode: 0700
state: directory
# Need to add Github to known_hosts to avoid
# being prompted when using git through ssh
- name: Add github.com to known_hosts if it does not exist
shell: >
ssh-keygen -f {{ jenkins_home }}/.ssh/known_hosts -H -F github.com | grep -q found || ssh-keyscan -H github.com > {{ jenkins_home }}/.ssh/known_hosts
shell: "ssh-keygen -f {{ jenkins_home }}/.ssh/known_hosts -H -F github.com | grep -q found || ssh-keyscan -H github.com > {{ jenkins_home }}/.ssh/known_hosts"
- name: create job directory
file: >
path="{{ jenkins_home }}/jobs"
owner="{{ jenkins_user }}"
group="{{ jenkins_group }}"
mode=0755
state=directory
file:
path: "{{ jenkins_home }}/jobs"
owner: "{{ jenkins_user }}"
group: "{{ jenkins_group }}"
mode: 0755
state: directory
- name: create admin job directories
file: >
path="{{ jenkins_home }}/jobs/{{ item }}"
owner={{ jenkins_user }}
group={{ jenkins_group }}
mode=0755
state=directory
file:
path: "{{ jenkins_home }}/jobs/{{ item }}"
owner: "{{ jenkins_user }}"
group: "{{ jenkins_group }}"
mode: 0755
state: directory
with_items: "{{ jenkins_admin_jobs }}"
- name: create admin job config files
template: >
src="./{{ jenkins_home }}/jobs/{{ item }}/config.xml.j2"
dest="{{ jenkins_home }}/jobs/{{ item }}/config.xml"
owner={{ jenkins_user }}
group={{ jenkins_group }}
mode=0644
template:
src: "./{{ jenkins_home }}/jobs/{{ item }}/config.xml.j2"
dest: "{{ jenkins_home }}/jobs/{{ item }}/config.xml"
owner: "{{ jenkins_user }}"
group: "{{ jenkins_group }}"
mode: 0644
with_items: "{{ jenkins_admin_jobs }}"
# adding chris-lea nodejs repo
- name: add ppas for current versions of nodejs
apt_repository: repo="{{ jenkins_chrislea_ppa }}"
apt_repository:
repo: "{{ jenkins_chrislea_ppa }}"
- name: install system packages for edxapp virtualenvs
apt: pkg={{','.join(jenkins_admin_debian_pkgs)}} state=present update_cache=yes
apt:
pkg: "{{ ','.join(jenkins_admin_debian_pkgs) }}"
state: "present"
update_cache: yes
# This is necessary so that ansible can run with
# sudo set to True (as the jenkins user) on jenkins
- name: grant sudo access to the jenkins user
copy: >
content="{{ jenkins_user }} ALL=({{ jenkins_user }}) NOPASSWD:ALL"
dest=/etc/sudoers.d/99-jenkins owner=root group=root
mode=0440 validate='visudo -cf %s'
copy:
content: "{{ jenkins_user }} ALL=({{ jenkins_user }}) NOPASSWD:ALL"
dest: "/etc/sudoers.d/99-jenkins"
owner: "root"
group: "root"
mode: 0440
validate: "visudo -cf %s"
- name: install global gem dependencies
gem: >
name={{ item.name }}
state=present
version={{ item.version }}
user_install=no
gem:
name: "{{ item.name }}"
state: present
version: "{{ item.version }}"
user_install: no
with_items: "{{ jenkins_admin_gem_pkgs }}"
- name: get s3 one time url
......@@ -152,7 +169,7 @@
get_url:
url: "{{ s3_one_time_url.url }}"
dest: "/tmp/{{ JENKINS_ADMIN_BACKUP_S3_KEY | basename }}"
mode: "0644"
mode: 0644
owner: "{{ jenkins_user }}"
when: JENKINS_ADMIN_BACKUP_BUCKET is defined and JENKINS_ADMIN_BACKUP_S3_KEY is defined
......
......@@ -336,7 +336,7 @@ The full list of seed job configuration variables is:
access to the secure repo. Default is `{{ JENKINS_ANALYTICS_GITHUB_CREDENTIAL_ID }}`.
See [Jenkins Credentials](#jenkins-credentials) below for details.
* `ANALYTICS_SCHEDULE_JOBS_DSL_REPO_URL`: Optional URL for the git repo that contains the analytics job DSLs.
Default is `git@github.com:edx-ops/edx-jenkins-job-dsl.git`.
Default is `git@github.com:edx/jenkins-job-dsl-internal.git`.
This repo is cloned directly into the seed job workspace.
* `ANALYTICS_SCHEDULE_JOBS_DSL_REPO_VERSION`: Optional branch/tagname to checkout for the job DSL repo.
Default is `master`.
......@@ -526,7 +526,7 @@ Example scm configuration:
removed_view_action: "IGNORE"
scm:
type: git
url: "git@github.com:edx-ops/edx-jenkins-job-dsl.git"
url: "git@github.com:edx/jenkins-job-dsl-internal.git"
credential_id: "github-deploy-key"
target_jobs: "jobs/analytics-edx-jenkins.edx.org/*Jobs.groovy"
additional_classpath: "src/main/groovy"
......
......@@ -90,7 +90,7 @@ ANALYTICS_SCHEDULE_SECURE_REPO_DEST: "analytics-secure-config"
ANALYTICS_SCHEDULE_SECURE_REPO_VERSION: "master"
ANALYTICS_SCHEDULE_SECURE_REPO_CREDENTIAL_ID: "{{ JENKINS_ANALYTICS_GITHUB_CREDENTIAL_ID }}"
ANALYTICS_SCHEDULE_SECURE_REPO_MASTER_SSH_CREDENTIAL_FILE: "aws.pem"
ANALYTICS_SCHEDULE_JOBS_DSL_REPO_URL: "git@github.com:edx-ops/edx-jenkins-job-dsl.git"
ANALYTICS_SCHEDULE_JOBS_DSL_REPO_URL: "git@github.com:edx/jenkins-job-dsl.git"
ANALYTICS_SCHEDULE_JOBS_DSL_REPO_VERSION: "master"
ANALYTICS_SCHEDULE_JOBS_DSL_REPO_CREDENTIAL_ID: "{{ JENKINS_ANALYTICS_GITHUB_CREDENTIAL_ID }}"
......
......@@ -26,15 +26,14 @@
dest: "{{ jenkins_cli_jar }}"
- name: execute command
shell: >
{{ jenkins_command_prefix|default('') }} java -jar {{ jenkins_cli_jar }} -s http://localhost:{{ jenkins_port }}
{{ jenkins_auth_realm.cli_auth }}
{{ jenkins_command_string }}
shell: "{{ jenkins_command_prefix|default('') }} java -jar {{ jenkins_cli_jar }} -s http://localhost:{{ jenkins_port }} {{ jenkins_auth_realm.cli_auth }} {{ jenkins_command_string }}"
register: jenkins_command_output
ignore_errors: "{{ jenkins_ignore_cli_errors|default (False) }}"
- name: "clean up --- remove the credentials dir"
file: name=jenkins_cli_root state=absent
file:
name: "{{ jenkins_cli_root }}"
state: absent
- name: "clean up --- remove cached Jenkins credentials"
command: rm -rf $HOME/.jenkins
......@@ -3,7 +3,7 @@
- name: install jenkins analytics extra system packages
apt:
pkg={{ item }} state=present update_cache=yes
with_items: JENKINS_ANALYTICS_EXTRA_PKGS
with_items: "{{ JENKINS_ANALYTICS_EXTRA_PKGS }}"
tags:
- jenkins
......@@ -170,9 +170,9 @@
- jenkins-seed-job
- name: generate seed job xml
shell: >
cd {{ jenkins_seed_job_root }} &&
GRADLE_OPTS="-Dorg.gradle.daemon=true" ./gradlew run -Pargs={{ jenkins_seed_job_script }}
shell: "GRADLE_OPTS=\"-Dorg.gradle.daemon=true\" ./gradlew run -Pargs={{ jenkins_seed_job_script }}"
args:
chdir: "{{ jenkins_seed_job_root }}"
become: yes
become_user: "{{ jenkins_user }}"
tags:
......
......@@ -72,7 +72,7 @@ jenkins_plugins:
- { name: "ssh-agent", version: "1.5" }
- { name: "ssh-credentials", version: "1.11" }
- { name: "ssh-slaves", version: "1.9" }
- { name: "shiningpanda", version: "0.21" }
- { name: "shiningpanda", version: "0.23" }
- { name: "tmpcleaner", version: "1.1" }
- { name: "token-macro", version: "1.10" }
- { name: "timestamper", version: "1.5.15" }
......
......@@ -143,7 +143,7 @@
# upstream, we may be able to use the regular plugin install process.
# Until then, we compile and install the forks ourselves.
- name: Checkout custom plugin repo
git_2_0_1:
git:
repo: "{{ item.repo_url }}"
dest: "/tmp/{{ item.repo_name }}"
version: "{{ item.version }}"
......
......@@ -3,14 +3,10 @@ jenkins_user: "jenkins"
jenkins_group: "jenkins"
jenkins_home: /home/jenkins
# repo for nodejs
jenkins_chrislea_ppa: "ppa:chris-lea/node.js"
jenkins_edx_platform_version: master
# System packages
jenkins_debian_pkgs:
- nodejs
- pkg-config
- libffi-dev
- python-dev
......@@ -22,21 +18,3 @@ packer_url: "https://releases.hashicorp.com/packer/0.8.6/packer_0.8.6_linux_amd6
# custom firefox
custom_firefox_version: 42.0
custom_firefox_url: "https://ftp.mozilla.org/pub/firefox/releases/{{ custom_firefox_version }}/linux-x86_64/en-US/firefox-{{ custom_firefox_version }}.tar.bz2"
# Pip-accel itself and other workarounds that need to be installed with pip
pip_accel_reqs:
# Install Shapely with pip as it does not install cleanly
# with pip-accel because it has a weird setup.py
- "Shapely==1.2.16"
# Install unittest2 which is needed by lettuce
# but also pip-accel has trouble with determining that.
# unittest2>=0.8.0 (from testtools>=0.9.34->python-subunit->lettuce==0.2.20)
- "unittest2>=0.8.0"
# There is a bug in pip 1.4.1 by which --exists-action is broken.
# This is fixed in pip 1.5.x, but alas pip-accel is not yet compatible with pip 1.5.x.
# Remove when we can upgrade to a version of pip-accel that supports pip 1.5.x.
- "git+https://github.com/jzoldak/pip.git@v1.4.1patch772#egg=pip"
# Install pip-accel itself (using pip)
- "pip-accel==0.21.1"
# pip-accel only makes the s3 functionality available if boto is installed
- "boto=={{ common_boto_version }}"
......@@ -6,37 +6,20 @@
# refers to the --depth-setting of git clone. A value of 1
# will truncate all history prior to the last revision.
- name: Create shallow clone of edx-platform
git_2_0_1: >
repo=https://github.com/edx/edx-platform.git
dest={{ jenkins_home }}/shallow-clone
version={{ jenkins_edx_platform_version }}
depth=1
git:
repo: https://github.com/edx/edx-platform.git
dest: "{{ jenkins_home }}/shallow-clone"
version: "{{ jenkins_edx_platform_version }}"
depth: 1
become_user: "{{ jenkins_user }}"
# pip-accel skipped due to conflicting versions of pip required
# by the pip-accel package and edx-platform
# - name: Pip installs that are needed for pip-accel to work for us
# pip: >
# name="{{ item }}"
# virtualenv={{ jenkins_home }}/edx-venv
# virtualenv_command=virtualenv-2.7
# become_user: "{{ jenkins_user }}"
# with_items: pip_accel_reqs
# Install the platform requirements using pip.
# Installing the platform requirements using pip-accel
# would allow the binary distributions to be downloaded from S3
# rather than compiled each time. This was previously enabled,
# but reverted back to pip because the current version of pip-accel
# (0.22.4) is only compatible with pip >= 1.4, < 1.5 and the current
# version of pip in edx-platform is 6.0.8.
- name: Install edx-platform requirements using pip
pip: >
requirements={{ jenkins_home }}/shallow-clone/requirements/edx/{{ item }}
extra_args="--exists-action=w"
virtualenv={{ jenkins_home }}/edx-venv
virtualenv_command=virtualenv
executable=pip
pip:
requirements: "{{ jenkins_home }}/shallow-clone/requirements/edx/{{ item }}"
extra_args: "--exists-action=w"
virtualenv: "{{ jenkins_home }}/edx-venv"
virtualenv_command: virtualenv
with_items:
- pre.txt
- github.txt
......@@ -54,12 +37,11 @@
become_user: "{{ jenkins_user }}"
- name: Install edx-platform post requirements using pip
pip: >
requirements={{ jenkins_home }}/shallow-clone/requirements/edx/{{ item }}
extra_args="--exists-action=w"
virtualenv={{ jenkins_home }}/edx-venv
virtualenv_command=virtualenv
executable=pip
pip:
requirements: "{{ jenkins_home }}/shallow-clone/requirements/edx/{{ item }}"
extra_args: "--exists-action=w"
virtualenv: "{{ jenkins_home }}/edx-venv"
virtualenv_command: virtualenv
with_items:
- post.txt
become_user: "{{ jenkins_user }}"
......@@ -70,9 +52,9 @@
# The edx-venv directory is deleted and then recreated
# cleanly from the archive by the jenkins build scripts.
- name: Create a clean virtualenv archive
command: >
tar -cpzf edx-venv_clean.tar.gz edx-venv
chdir={{ jenkins_home }}
command: "tar -cpzf edx-venv_clean.tar.gz edx-venv"
args:
chdir: "{{ jenkins_home }}"
become_user: "{{ jenkins_user }}"
# Remove the shallow-clone directory now that we are
......
......@@ -26,10 +26,6 @@
owner={{ jenkins_user }} group={{ jenkins_group }} mode=400
ignore_errors: yes
# adding chris-lea nodejs repo
- name: add ppas for current versions of nodejs
apt_repository: repo="{{ jenkins_chrislea_ppa }}"
- name: Install system packages
apt: pkg={{','.join(jenkins_debian_pkgs)}}
state=present update_cache=yes
......@@ -43,22 +39,9 @@
# Need to add Github to known_hosts to avoid
# being prompted when using git through ssh
- name: Add github.com to known_hosts if it does not exist
shell: >
ssh-keygen -f {{ jenkins_home }}/.ssh/known_hosts -H -F github.com | grep -q found || ssh-keyscan -H github.com > {{ jenkins_home }}/.ssh/known_hosts
shell: "ssh-keygen -f {{ jenkins_home }}/.ssh/known_hosts -H -F github.com | grep -q found || ssh-keyscan -H github.com > {{ jenkins_home }}/.ssh/known_hosts"
# Edit the /etc/hosts file so that the Preview button will work in Studio
- name: add preview.localhost to /etc/hosts
shell: sed -i -r 's/^127.0.0.1\s+.*$/127.0.0.1 localhost preview.localhost/' /etc/hosts
become: yes
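# For illustration: after the sed above the loopback entry reads
#   127.0.0.1 localhost preview.localhost
# so Studio's "Preview" links resolve back to the same box.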
# Set up configuration for pip-accel for caching python requirements
- name: Create directory for pip-accel config file
file: path={{ jenkins_home }}/.pip-accel state=directory
owner={{ jenkins_user }} group={{ jenkins_group }} mode=0777 recurse=yes
when: platform_worker is defined
- name: Create pip-accel config file
template:
src=pip-accel.conf.j2 dest={{ jenkins_home }}/.pip-accel/pip-accel.conf
owner={{ jenkins_user }} group={{ jenkins_group }} mode=0664
when: platform_worker is defined
[pip-accel]
auto-install = no
data-directory = ~/.pip-accel
download-cache = ~/.pip/download-cache
s3-bucket = edx-platform.pip-accel-cache
s3-prefix = precise64
s3-readonly = no
......@@ -12,28 +12,42 @@
- nginx
- name: Ensure {{ kibana_app_dir }} exists
file: path={{ kibana_app_dir }} state=directory owner=root group=root mode=0755
file:
path: "{{ kibana_app_dir }}"
state: directory
owner: root
group: root
mode: 0755
- name: Ensure subdirectories exist
file: path={{ kibana_app_dir }}/{{ item }} owner=root group=root mode=0755 state=directory
file:
path: "{{ kibana_app_dir }}/{{ item }}"
owner: root
group: root
mode: 0755
state: directory
with_items:
- htdocs
- share
- name: ensure we have the specified kibana release
get_url: url={{ kibana_url }} dest={{ kibana_app_dir }}/share/{{ kibana_file }}
get_url:
url: "{{ kibana_url }}"
dest: "{{ kibana_app_dir }}/share/{{ kibana_file }}"
- name: extract
shell: >
chdir={{ kibana_app_dir }}/share
tar -xzvf {{ kibana_app_dir }}/share/{{ kibana_file }}
creates={{ kibana_app_dir }}/share/{{ kibana_file|replace('.tar.gz','') }}
shell: "tar -xzvf {{ kibana_app_dir }}/share/{{ kibana_file }}"
args:
chdir: "{{ kibana_app_dir }}/share"
creates: "{{ kibana_app_dir }}/share/{{ kibana_file|replace('.tar.gz','') }}"
- name: install
shell: >
chdir={{ kibana_app_dir }}/share/{{ kibana_file|replace('.tar.gz','') }}
cp -R * {{ kibana_app_dir }}/htdocs/
shell: "cp -R * {{ kibana_app_dir }}/htdocs/"
args:
chdir: "{{ kibana_app_dir }}/share/{{ kibana_file|replace('.tar.gz','') }}"
- name: copy config
template: src=config.js.j2 dest={{ kibana_app_dir }}/htdocs/config.js
template:
src: config.js.j2
dest: "{{ kibana_app_dir }}/htdocs/config.js"
......@@ -8,7 +8,7 @@
module: ec2_lookup
region: "{{ region }}"
tags:
- Name: "{{ name_tag }}"
Name: "{{ name_tag }}"
register: tag_lookup
when: terminate_instance == true
......@@ -52,6 +52,7 @@
delete_on_termination: true
zone: "{{ zone }}"
instance_profile_name: "{{ instance_profile_name }}"
user_data: "{{ user_data }}"
register: ec2
- name: Add DNS name
......@@ -64,7 +65,7 @@
ttl: 300
record: "{{ dns_name }}.{{ dns_zone }}"
value: "{{ item.public_dns_name }}"
with_items: ec2.instances
with_items: "{{ ec2.instances }}"
- name: Add DNS names for services
local_action:
......@@ -77,7 +78,7 @@
record: "{{ item[1] }}-{{ dns_name }}.{{ dns_zone }}"
value: "{{ item[0].public_dns_name }}"
with_nested:
- ec2.instances
- "{{ ec2.instances }}"
- ['studio', 'ecommerce', 'preview', 'programs', 'discovery', 'credentials']
- name: Add new instance to host group
......@@ -85,7 +86,7 @@
module: add_host
hostname: "{{ item.public_ip }}"
groups: launched
with_items: ec2.instances
with_items: "{{ ec2.instances }}"
- name: Wait for SSH to come up
local_action:
......@@ -94,4 +95,8 @@
search_regex: OpenSSH
port: 22
delay: 10
with_items: ec2.instances
with_items: "{{ ec2.instances }}"
- name: Wait for python to install
pause:
minutes: "{{ launch_ec2_wait_time }}"
......@@ -8,63 +8,63 @@ localdev_xvfb_display: ":1"
localdev_accounts:
- {
user: "{{ edxapp_user|default('None') }}",
home: "{{ edxapp_app_dir }}",
home: "{{ edxapp_app_dir|default('None') }}",
env: "edxapp_env",
repo: "edx-platform"
}
- {
user: "{{ forum_user|default('None') }}",
home: "{{ forum_app_dir }}",
home: "{{ forum_app_dir|default('None') }}",
env: "forum_env",
repo: "cs_comments_service"
}
- {
user: "{{ notifier_user|default('None') }}",
home: "{{ notifier_app_dir }}",
home: "{{ notifier_app_dir|default('None') }}",
env: "notifier_env",
repo: ""
}
- {
user: "{{ ecommerce_user|default('None') }}",
home: "{{ ecommerce_home }}",
home: "{{ ecommerce_home|default('None') }}",
env: "ecommerce_env",
repo: "ecommerce"
}
- {
user: "{{ ecommerce_worker_user|default('None') }}",
home: "{{ ecommerce_worker_home }}",
home: "{{ ecommerce_worker_home|default('None') }}",
env: "ecommerce_worker_env",
repo: "ecommerce_worker"
}
- {
user: "{{ analytics_api_user|default('None') }}",
home: "{{ analytics_api_home }}",
home: "{{ analytics_api_home|default('None') }}",
env: "analytics_api_env",
repo: "analytics_api"
}
- {
user: "{{ insights_user|default('None') }}",
home: "{{ insights_home }}",
home: "{{ insights_home|default('None') }}",
env: "insights_env",
repo: "edx_analytics_dashboard"
}
- {
user: "{{ programs_user|default('None') }}",
home: "{{ programs_home }}",
home: "{{ programs_home|default('None') }}",
env: "programs_env",
repo: "programs"
}
- {
user: "{{ credentials_user|default('None') }}",
home: "{{ credentials_home }}",
home: "{{ credentials_home|default('None') }}",
env: "credentials_env",
repo: "credentials"
}
......
......@@ -72,11 +72,21 @@
line: ". {{ localdev_home }}/share_x11"
state: present
# Create a .bashrc.d directory to hold extra bash initializations
- name: Create .bashrc.d dir
file:
path: "{{ item.home }}/.bashrc.d"
owner: "{{ item.user }}"
group: "{{ common_web_group }}"
state: directory
with_items: "{{ localdev_accounts }}"
when: item.user != 'None'
# Create scripts to add paver autocomplete
- name: Add paver autocomplete
copy:
src: paver_autocomplete
dest: "{{ item.home }}/.paver_autocomplete"
src: paver_autocomplete.sh
dest: "{{ item.home }}/.bashrc.d/paver_autocomplete.sh"
owner: "{{ item.user }}"
group: "{{ common_web_group }}"
mode: "0755"
......
......@@ -28,8 +28,13 @@ else
export DISPLAY="{{ localdev_xvfb_display }}"
fi
cd "{{ item.home }}/{{ item.repo }}"
# Import ~/.bashrc.d modules
if [ -d {{ item.home }}/.bashrc.d ]; then
for BASHMODULE in {{ item.home }}/.bashrc.d/*; do
source $BASHMODULE
done
fi
source "{{ item.home }}/.paver_autocomplete"
cd "{{ item.home }}/{{ item.repo }}"
export JSCOVER_JAR="/usr/local/bin/JSCover-all-{{ localdev_jscover_version }}.jar"
......@@ -36,7 +36,7 @@
state: "present"
update_cache: true
cache_valid_time: 3600
with_items: locust_debian_pkgs
with_items: "{{ locust_debian_pkgs }}"
- name: Install application requirements
pip:
......
......@@ -49,26 +49,26 @@
- name: Install python requirements
pip: name={{ item }} state=present
with_items: logstash_python_requirements
with_items: "{{ logstash_python_requirements }}"
- name: Checkout logstash rotation scripts
git: repo={{ logstash_scripts_repo }} dest={{ logstash_app_dir }}/share/logstash-elasticsearch-scripts
when: LOGSTASH_ROTATE|bool
- name: Setup cron to run rotation
cron: >
user=root
name="Elasticsearch logstash index rotation"
hour={{ logstash_rotate_cron.hour }}
minute={{ logstash_rotate_cron.minute }}
job="/usr/bin/python {{ logstash_app_dir }}/share/logstash-elasticsearch-scripts/logstash_index_cleaner.py -d {{ LOGSTASH_DAYS_TO_KEEP }} > {{ logstash_log_dir }}/rotation_cron"
cron:
user: root
name: "Elasticsearch logstash index rotation"
hour: "{{ logstash_rotate_cron.hour }}"
minute: "{{ logstash_rotate_cron.minute }}"
job: "/usr/bin/python {{ logstash_app_dir }}/share/logstash-elasticsearch-scripts/logstash_index_cleaner.py -d {{ LOGSTASH_DAYS_TO_KEEP }} > {{ logstash_log_dir }}/rotation_cron"
when: LOGSTASH_ROTATE|bool
- name: Setup cron to run rotation
cron: >
user=root
name="Elasticsearch logstash index optimization"
hour={{ logstash_optimize_cron.hour }}
minute={{ logstash_optimize_cron.minute }}
job="/usr/bin/python {{ logstash_app_dir }}/share/logstash-elasticsearch-scripts/logstash_index_optimize.py -d {{ LOGSTASH_DAYS_TO_KEEP }} > {{ logstash_log_dir }}/optimize_cron"
cron:
user: root
name: "Elasticsearch logstash index optimization"
hour: "{{ logstash_optimize_cron.hour }}"
minute: "{{ logstash_optimize_cron.minute }}"
job: "/usr/bin/python {{ logstash_app_dir }}/share/logstash-elasticsearch-scripts/logstash_index_optimize.py -d {{ LOGSTASH_DAYS_TO_KEEP }} > {{ logstash_log_dir }}/optimize_cron"
when: LOGSTASH_ROTATE|bool
- name: copy galera cluster config
template: >
src="etc/mysql/conf.d/galera.cnf.j2"
dest="/etc/mysql/conf.d/galera.cnf"
owner="root"
group="root"
mode=0600
template:
src: "etc/mysql/conf.d/galera.cnf.j2"
dest: "/etc/mysql/conf.d/galera.cnf"
owner: "root"
group: "root"
mode: 0600
- name: check if we have already bootstrapped the cluster
stat: path=/etc/mysql/ansible_cluster_started
......@@ -15,18 +15,18 @@
when: not mariadb_bootstrap.stat.exists
- name: setup bootstrap on primary
lineinfile: >
dest="/etc/mysql/conf.d/galera.cnf"
regexp="^wsrep_cluster_address=gcomm://{{ hostvars.keys()|sort|join(',') }}$"
line="wsrep_cluster_address=gcomm://"
lineinfile:
dest: "/etc/mysql/conf.d/galera.cnf"
regexp: "^wsrep_cluster_address=gcomm://{{ hostvars.keys()|sort|join(',') }}$"
line: "wsrep_cluster_address=gcomm://"
when: ansible_hostname == hostvars[hostvars.keys()[0]].ansible_hostname and not mariadb_bootstrap.stat.exists
- name: fetch debian.cnf file so start-stop will work properly
fetch: >
src=/etc/mysql/debian.cnf
dest=/tmp/debian.cnf
fail_on_missing=yes
flat=yes
fetch:
src: /etc/mysql/debian.cnf
dest: /tmp/debian.cnf
fail_on_missing: yes
flat: yes
when: ansible_hostname == hostvars[hostvars.keys()[0]].ansible_hostname and not mariadb_bootstrap.stat.exists
register: mariadb_new_debian_cnf
......@@ -39,12 +39,12 @@
when: not mariadb_bootstrap.stat.exists
- name: reset galera cluster config since we are bootstrapped
template: >
src="etc/mysql/conf.d/galera.cnf.j2"
dest="/etc/mysql/conf.d/galera.cnf"
owner="root"
group="root"
mode=0600
template:
src: "etc/mysql/conf.d/galera.cnf.j2"
dest: "/etc/mysql/conf.d/galera.cnf"
owner: "root"
group: "root"
mode: 0600
when: not mariadb_bootstrap.stat.exists
- name: touch bootstrap file to confirm we are fully up
......@@ -53,6 +53,5 @@
# This is needed for mysql-check in haproxy or other mysql monitor
# scripts to prevent haproxy checks exceeding `max_connect_errors`.
- name: create haproxy monitor user
command: >
mysql -e "INSERT INTO mysql.user (Host,User) values ('{{ item }}','{{ MARIADB_HAPROXY_USER }}'); FLUSH PRIVILEGES;"
with_items: MARIADB_HAPROXY_HOSTS
command: "mysql -e \"INSERT INTO mysql.user (Host,User) values ('{{ item }}','{{ MARIADB_HAPROXY_USER }}'); FLUSH PRIVILEGES;\""
with_items: "{{ MARIADB_HAPROXY_HOSTS }}"
......@@ -23,31 +23,32 @@
- name: Install pre-req debian packages
apt: name={{ item }} state=present
with_items: mariadb_debian_pkgs
with_items: "{{ mariadb_debian_pkgs }}"
- name: Add mariadb apt key
apt_key: url="{{ COMMON_UBUNTU_APT_KEYSERVER }}{{ MARIADB_APT_KEY_ID }}"
apt_key:
url: "{{ COMMON_UBUNTU_APT_KEYSERVER }}{{ MARIADB_APT_KEY_ID }}"
- name: add the mariadb repo to the sources list
apt_repository: >
repo='{{ MARIADB_REPO }}'
state=present
apt_repository:
repo: "{{ MARIADB_REPO }}"
state: present
- name: install mariadb solo packages
apt: name={{ item }} update_cache=yes
with_items: mariadb_solo_packages
with_items: "{{ mariadb_solo_packages }}"
when: not MARIADB_CLUSTERED|bool
- name: install mariadb cluster packages
apt: name={{ item }} update_cache=yes
with_items: mariadb_cluster_packages
with_items: "{{ mariadb_cluster_packages }}"
when: MARIADB_CLUSTERED|bool
- name: remove bind-address
lineinfile: >
dest=/etc/mysql/my.cnf
regexp="^bind-address\s+=\s+127\.0\.0\.1$"
state=absent
lineinfile:
dest: /etc/mysql/my.cnf
regexp: '^bind-address\s+=\s+127\.0\.0\.1$'
state: absent
when: MARIADB_LISTEN_ALL|bool or MARIADB_CLUSTERED|bool
- include: cluster.yml
......@@ -57,37 +58,37 @@
service: name=mysql state=started
- name: create all databases
mysql_db: >
db={{ item }}
state=present
encoding=utf8
with_items: MARIADB_DATABASES
mysql_db:
db: "{{ item }}"
state: present
encoding: utf8
with_items: "{{ MARIADB_DATABASES }}"
when: MARIADB_CREATE_DBS|bool
- name: create all analytics dbs
mysql_db: >
db={{ item }}
state=present
encoding=utf8
with_items: MARIADB_ANALYTICS_DATABASES
mysql_db:
db: "{{ item }}"
state: present
encoding: utf8
with_items: "{{ MARIADB_ANALYTICS_DATABASES }}"
when: MARIADB_CREATE_DBS|bool and ANALYTICS_API_CONFIG is defined
- name: create all users/privs
mysql_user: >
name="{{ item.name }}"
password="{{ item.pass }}"
priv="{{ item.priv }}"
host="{{ item.host }}"
append_privs=yes
with_items: MARIADB_USERS
mysql_user:
name: "{{ item.name }}"
password: "{{ item.pass }}"
priv: "{{ item.priv }}"
host: "{{ item.host }}"
append_privs: yes
with_items: "{{ MARIADB_USERS }}"
when: MARIADB_CREATE_DBS|bool
- name: create all analytics users/privs
mysql_user: >
name="{{ item.name }}"
password="{{ item.pass }}"
priv="{{ item.priv }}"
host="{{ item.host }}"
append_privs=yes
with_items: MARIADB_ANALYTICS_USERS
mysql_user:
name: "{{ item.name }}"
password: "{{ item.pass }}"
priv: "{{ item.priv }}"
host: "{{ item.host }}"
append_privs: yes
with_items: "{{ MARIADB_ANALYTICS_USERS }}"
when: MARIADB_CREATE_DBS|bool and ANALYTICS_API_CONFIG is defined
......@@ -79,6 +79,27 @@
- install
- install:base
- name: Add mongod systemd configuration on 16.04
template:
src: "etc/systemd/system/mongod.service.j2"
dest: "/etc/systemd/system/mongod.service"
notify:
- restart mongo
when: ansible_distribution_release == 'xenial'
tags:
- install
- install:configuration
- name: enable mongod systemd unit on 16.04
systemd:
name: mongod
enabled: yes
daemon_reload: yes
when: ansible_distribution_release == 'xenial'
tags:
- install
- install:configuration
- name: Stop mongod service
service:
name: mongod
......@@ -118,27 +139,6 @@
- install
- install:configuration
- name: Add mongod systemd configuration on 16.04
template:
src: "etc/systemd/system/mongod.service.j2"
dest: "/etc/systemd/system/mongod.service"
notify:
- restart mongo
when: ansible_distribution_release == 'xenial'
tags:
- install
- install:configuration
- name: enable mongod systemd unit on 16.04
systemd_2_2:
name: mongod
enabled: yes
daemon_reload: yes
when: ansible_distribution_release == 'xenial'
tags:
- install
- install:configuration
- name: Start mongo service
service:
name: mongod
......
......@@ -55,7 +55,7 @@
install_recommends: yes
force: yes
update_cache: yes
with_items: mongodb_debian_pkgs
with_items: "{{ mongodb_debian_pkgs }}"
tags:
- install
- install:app-requirements
......@@ -292,8 +292,6 @@
register: replset_status
when: MONGO_CLUSTERED
tags:
- configure_replica_set
tags:
- "manage"
- "manage:db"
- "configure_replica_set"
......@@ -314,8 +312,6 @@
run_once: true
when: MONGO_CLUSTERED
tags:
- configure_replica_set
tags:
- "manage"
- "manage:db"
......@@ -330,7 +326,7 @@
roles: "{{ item.roles }}"
state: present
replica_set: "{{ MONGO_REPL_SET }}"
with_items: MONGO_USERS
with_items: "{{ MONGO_USERS }}"
run_once: true
when: MONGO_CLUSTERED
tags:
......@@ -346,7 +342,7 @@
password: "{{ item.password }}"
roles: "{{ item.roles }}"
state: present
with_items: MONGO_USERS
with_items: "{{ MONGO_USERS }}"
when: not MONGO_CLUSTERED
tags:
- "manage"
......
......@@ -11,31 +11,30 @@
when: MMSAPIKEY is not defined
- name: download mongo mms agent
get_url: >
url="{{ base_url }}/{{ item.dir }}/{{ item.agent }}_{{ item.version }}_{{ pkg_arch }}.{{ pkg_format }}"
dest="/tmp/{{ item.agent }}-{{ item.version }}.{{ pkg_format }}"
get_url:
url: "{{ base_url }}/{{ item.dir }}/{{ item.agent }}_{{ item.version }}_{{ pkg_arch }}.{{ pkg_format }}"
dest: "/tmp/{{ item.agent }}-{{ item.version }}.{{ pkg_format }}"
register: download_mms_deb
with_items:
agents
with_items: "{{ agents }}"
- name: install mongo mms agent
apt: >
deb="/tmp/{{ item.agent }}-{{ item.version }}.deb"
apt:
deb: "/tmp/{{ item.agent }}-{{ item.version }}.deb"
when: download_mms_deb.changed
notify: restart mms
with_items: "{{ agents }}"
- name: add key to monitoring-agent.config
lineinfile: >
dest="{{ item.config }}"
regexp="^mmsApiKey="
line="mmsApiKey={{ MMSAPIKEY }}"
lineinfile:
dest: "{{ item.config }}"
regexp: "^mmsApiKey="
line: "mmsApiKey={{ MMSAPIKEY }}"
notify: restart mms
with_items:
agents
with_items: "{{ agents }}"
- name: start mms service
service: name="{{ item.agent }}" state=started
with_items:
agents
service:
name: "{{ item.agent }}"
state: started
with_items: "{{ agents }}"
......@@ -24,7 +24,7 @@
fstype: "{{ (ansible_mounts | selectattr('device', 'equalto', item.device) | first | default({'fstype': None})).fstype }}"
state: unmounted
when: "{{ UNMOUNT_DISKS and (ansible_mounts | selectattr('device', 'equalto', item.device) | first | default({'fstype': None})).fstype != item.fstype }}"
with_items: volumes
with_items: "{{ volumes }}"
# Noop & reports "ok" if fstype is correct
# Errors if fstype is wrong and disk is mounted (hence above task)
......@@ -34,7 +34,7 @@
fstype: "{{ item.fstype }}"
# Necessary because AWS gives some ephemeral disks the wrong fstype by default
force: true
with_items: volumes
with_items: "{{ volumes }}"
# This can fail if one volume is mounted on a child directory as another volume
# and it attempts to unmount the parent first. This is generally fixable by rerunning.
......@@ -49,21 +49,21 @@
src: "{{ item.device }}"
fstype: "{{ item.fstype }}"
state: unmounted
when: >
when:
UNMOUNT_DISKS and
volumes | selectattr('device', 'equalto', item.device) | list | length != 0 and
(volumes | selectattr('device', 'equalto', item.device) | first).mount != item.mount
with_items: ansible_mounts
with_items: "{{ ansible_mounts }}"
# If there are disks we want to be unmounting, but we can't because UNMOUNT_DISKS is false
# that is an errorable condition, since it can easily allow us to double mount a disk.
- name: Check that we don't want to unmount disks when UNMOUNT_DISKS is false
fail: msg="Found disks mounted in the wrong place, but can't unmount them. This role will need to be re-run with -e 'UNMOUNT_DISKS=True' if you believe that is safe."
when: >
when:
not UNMOUNT_DISKS and
volumes | selectattr('device', 'equalto', item.device) | list | length != 0 and
(volumes | selectattr('device', 'equalto', item.device) | first).mount != item.mount
with_items: ansible_mounts
with_items: "{{ ansible_mounts }}"
- name: Mount disks
mount:
......@@ -72,4 +72,4 @@
state: mounted
fstype: "{{ item.fstype }}"
opts: "{{ item.options }}"
with_items: volumes
with_items: "{{ volumes }}"
......@@ -61,7 +61,7 @@
name: "{{ item }}"
install_recommends: yes
state: present
with_items: mysql_debian_pkgs
with_items: "{{ mysql_debian_pkgs }}"
- name: Start mysql
service:
......
......@@ -22,41 +22,37 @@
#
- name: Download newrelic NPI
get_url: >
dest="/tmp/{{ newrelic_npi_installer }}"
url="{{ NEWRELIC_NPI_URL }}"
get_url:
dest: "/tmp/{{ newrelic_npi_installer }}"
url: "{{ NEWRELIC_NPI_URL }}"
register: download_npi_installer
- name: create npi install directory {{ NEWRELIC_NPI_PREFIX }}
file: >
path="{{ NEWRELIC_NPI_PREFIX }}"
state=directory
mode=0755
owner="{{ NEWRELIC_USER }}"
file:
path: "{{ NEWRELIC_NPI_PREFIX }}"
state: directory
mode: 0755
owner: "{{ NEWRELIC_USER }}"
- name: install newrelic npi
shell: >
tar -xzf /tmp/{{ newrelic_npi_installer }} --strip-components=1 -C "{{NEWRELIC_NPI_PREFIX}}"
shell: "tar -xzf /tmp/{{ newrelic_npi_installer }} --strip-components=1 -C \"{{NEWRELIC_NPI_PREFIX}}\""
when: download_npi_installer.changed
become_user: "{{ NEWRELIC_USER }}"
- name: configure npi with the default user
shell: >
{{ NEWRELIC_NPI_PREFIX }}/bin/node {{ NEWRELIC_NPI_PREFIX }}/npi.js "set user {{ NEWRELIC_USER }}"
shell: "{{ NEWRELIC_NPI_PREFIX }}/bin/node {{ NEWRELIC_NPI_PREFIX }}/npi.js \"set user {{ NEWRELIC_USER }}\""
args:
chdir: "{{ NEWRELIC_NPI_PREFIX }}"
become_user: "{{ NEWRELIC_USER }}"
- name: configure npi with the license key
shell: >
./npi set license_key {{ NEWRELIC_LICENSE_KEY }}
shell: "./npi set license_key {{ NEWRELIC_LICENSE_KEY }}"
args:
chdir: "{{ NEWRELIC_NPI_PREFIX }}"
become_user: "{{ NEWRELIC_USER }}"
- name: configure npi with the distro
shell: >
./npi set distro {{ NEWRELIC_NPI_DISTRO }}
shell: "./npi set distro {{ NEWRELIC_NPI_DISTRO }}"
args:
chdir: "{{ NEWRELIC_NPI_PREFIX }}"
become_user: "{{ NEWRELIC_USER }}"
......
......@@ -76,6 +76,11 @@ server {
try_files $uri @proxy_to_app;
}
# Allow access to this API for POST back from payment processors.
location /payment {
try_files $uri @proxy_to_app;
}
{% include "robots.j2" %}
location @proxy_to_app {
......
......@@ -117,16 +117,17 @@ error_page {{ k }} {{ v }};
{% include "common-settings.j2" %}
{% if NGINX_EDXAPP_EMBARGO_CIDRS -%}
#only redirect to embargo when $embargo == true and $uri != /embargo
#only redirect to embargo when $embargo == true and $uri != $embargo_url
#this is a hack to do multiple conditionals
set $embargo_url "/embargo/blocked-message/courseware/embargo/";
if ( $embargo ) {
set $do_embargo "A";
}
if ( $uri != "/embargo" ) {
if ( $uri != $embargo_url ) {
set $do_embargo "${do_embargo}B";
}
if ( $do_embargo = "AB" ) {
return 302 /embargo;
return 302 $embargo_url;
}
{% endif -%}
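# For illustration: nginx "if" blocks cannot be combined with a boolean AND,
# so each condition above appends a letter to $do_embargo and the redirect
# fires only when both matched ("AB").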
......@@ -166,7 +167,7 @@ error_page {{ k }} {{ v }};
{% include "basic-auth.j2" %}
{% endif %}
if ( $arg_next = "favicon.ico" ) {
if ( $arg_next ~* "favicon.ico" ) {
return 403;
}
......@@ -174,7 +175,7 @@ error_page {{ k }} {{ v }};
}
{% if NGINX_EDXAPP_EMBARGO_CIDRS %}
location /embargo {
location /embargo/blocked-message/courseware/embargo/ {
try_files $uri @proxy_to_lms_app;
}
{% endif %}
......
......@@ -5,23 +5,24 @@
- name: create the nltk data directory and subdirectories
file: path={{ NLTK_DATA_DIR }}/{{ item.path|dirname }} state=directory
with_items: NLTK_DATA
with_items: "{{ NLTK_DATA }}"
tags:
- deploy
- name: download nltk data
get_url: >
dest={{ NLTK_DATA_DIR }}/{{ item.url|basename }}
url={{ item.url }}
with_items: NLTK_DATA
get_url:
dest: "{{ NLTK_DATA_DIR }}/{{ item.url|basename }}"
url: "{{ item.url }}"
with_items: "{{ NLTK_DATA }}"
register: nltk_download
tags:
- deploy
- name: unarchive nltk data
shell: >
unzip {{ NLTK_DATA_DIR }}/{{ item.url|basename }} chdir="{{ NLTK_DATA_DIR }}/{{ item.path|dirname }}"
with_items: NLTK_DATA
shell: "unzip {{ NLTK_DATA_DIR }}/{{ item.url|basename }}"
args:
chdir: "{{ NLTK_DATA_DIR }}/{{ item.path|dirname }}"
with_items: "{{ NLTK_DATA }}"
when: nltk_download|changed
tags:
- deploy
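# Hypothetical shape of an NLTK_DATA item, for illustration only:
# NLTK_DATA:
#   - path: "taggers/maxent_treebank_pos_tagger"
#     url: "https://example.com/nltk_data/maxent_treebank_pos_tagger.zip"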
---
- name: Checkout code
git_2_0_1:
git:
dest: "{{ NOTIFIER_CODE_DIR }}"
repo: "{{ NOTIFIER_SOURCE_REPO }}"
version: "{{ NOTIFIER_VERSION }}"
......@@ -38,7 +38,7 @@
when: NOTIFIER_GIT_IDENTITY != ""
- name: Checkout theme
git_2_0_1:
git:
dest: "{{ NOTIFIER_CODE_DIR }}/{{ NOTIFIER_THEME_NAME }}"
repo: "{{ NOTIFIER_THEME_REPO }}"
version: "{{ NOTIFIER_THEME_VERSION }}"
......
......@@ -19,38 +19,38 @@ oauth_client_setup_role_name: oauth_client_setup
oauth_client_setup_oauth2_clients:
- {
name: "{{ ecommerce_service_name | default('None') }}",
url_root: "{{ ECOMMERCE_ECOMMERCE_URL_ROOT }}",
id: "{{ ECOMMERCE_SOCIAL_AUTH_EDX_OIDC_KEY }}",
secret: "{{ ECOMMERCE_SOCIAL_AUTH_EDX_OIDC_SECRET }}",
logout_uri: "{{ ECOMMERCE_LOGOUT_URL }}"
url_root: "{{ ECOMMERCE_ECOMMERCE_URL_ROOT | default('None') }}",
id: "{{ ECOMMERCE_SOCIAL_AUTH_EDX_OIDC_KEY | default('None') }}",
secret: "{{ ECOMMERCE_SOCIAL_AUTH_EDX_OIDC_SECRET | default('None') }}",
logout_uri: "{{ ECOMMERCE_LOGOUT_URL | default('None') }}"
}
- {
name: "{{ INSIGHTS_OAUTH2_APP_CLIENT_NAME | default('None') }}",
url_root: "{{ INSIGHTS_BASE_URL }}",
id: "{{ INSIGHTS_OAUTH2_KEY }}",
secret: "{{ INSIGHTS_OAUTH2_SECRET }}",
logout_uri: "{{ INSIGHTS_LOGOUT_URL }}"
url_root: "{{ INSIGHTS_BASE_URL | default('None') }}",
id: "{{ INSIGHTS_OAUTH2_KEY | default('None') }}",
secret: "{{ INSIGHTS_OAUTH2_SECRET | default('None') }}",
logout_uri: "{{ INSIGHTS_LOGOUT_URL | default('None') }}"
}
- {
name: "{{ programs_service_name | default('None') }}",
url_root: "{{ PROGRAMS_URL_ROOT }}",
id: "{{ PROGRAMS_SOCIAL_AUTH_EDX_OIDC_KEY }}",
secret: "{{ PROGRAMS_SOCIAL_AUTH_EDX_OIDC_SECRET }}",
logout_uri: "{{ PROGRAMS_LOGOUT_URL }}"
url_root: "{{ PROGRAMS_URL_ROOT | default('None') }}",
id: "{{ PROGRAMS_SOCIAL_AUTH_EDX_OIDC_KEY | default('None') }}",
secret: "{{ PROGRAMS_SOCIAL_AUTH_EDX_OIDC_SECRET | default('None') }}",
logout_uri: "{{ PROGRAMS_LOGOUT_URL | default('None') }}"
}
- {
name: "{{ credentials_service_name | default('None') }}",
url_root: "{{ CREDENTIALS_URL_ROOT }}",
id: "{{ CREDENTIALS_SOCIAL_AUTH_EDX_OIDC_KEY }}",
secret: "{{ CREDENTIALS_SOCIAL_AUTH_EDX_OIDC_SECRET }}",
logout_uri: "{{ CREDENTIALS_LOGOUT_URL }}"
url_root: "{{ CREDENTIALS_URL_ROOT | default('None') }}",
id: "{{ CREDENTIALS_SOCIAL_AUTH_EDX_OIDC_KEY | default('None') }}",
secret: "{{ CREDENTIALS_SOCIAL_AUTH_EDX_OIDC_SECRET | default('None') }}",
logout_uri: "{{ CREDENTIALS_LOGOUT_URL | default('None') }}"
}
- {
name: "{{ discovery_service_name | default('None') }}",
url_root: "{{ DISCOVERY_URL_ROOT }}",
id: "{{ DISCOVERY_SOCIAL_AUTH_EDX_OIDC_KEY }}",
secret: "{{ DISCOVERY_SOCIAL_AUTH_EDX_OIDC_SECRET }}",
logout_uri: "{{ DISCOVERY_LOGOUT_URL }}"
url_root: "{{ DISCOVERY_URL_ROOT | default('None') }}",
id: "{{ DISCOVERY_SOCIAL_AUTH_EDX_OIDC_KEY | default('None') }}",
secret: "{{ DISCOVERY_SOCIAL_AUTH_EDX_OIDC_SECRET | default('None') }}",
logout_uri: "{{ DISCOVERY_LOGOUT_URL | default('None') }}"
}
#
......
......@@ -35,5 +35,5 @@
--logout_uri {{ item.logout_uri | default("") }}
become_user: "{{ edxapp_user }}"
environment: "{{ edxapp_environment }}"
with_items: oauth_client_setup_oauth2_clients
with_items: "{{ oauth_client_setup_oauth2_clients }}"
when: item.name != 'None'
......@@ -50,8 +50,17 @@
dest: "{{ COMMON_OBJECT_STORE_LOG_SYNC_SCRIPT }}"
when: COMMON_OBJECT_STORE_LOG_SYNC
# We want to ensure that OpenStack requirements are installed at the same time as edxapp
# requirements so that they're available during the initial migration.
- name: make sure Openstack Python requirements get installed
set_fact:
edxapp_requirements_files: "{{ edxapp_requirements_files + [openstack_requirements_file] }}"
# Install openstack python requirements into {{ edxapp_venv_dir }}
- name: Install python requirements
# Need to use command rather than pip so that we can maintain the context of our current working directory;
# some requirements are pathed relative to the edx-platform repo.
# Using the pip from inside the virtual environment implicitly installs everything into that virtual environment.
command: "{{ edxapp_venv_dir }}/bin/pip install {{ COMMON_PIP_VERBOSITY }} -i {{ COMMON_PYPI_MIRROR_URL }} --exists-action w -r {{ openstack_requirements_file }}"
args:
chdir: "{{ edxapp_code_dir }}"
sudo_user: "{{ edxapp_user }}"
environment: "{{ edxapp_environment }}"
when: edxapp_code_dir is defined
tags:
- install
- install:app-requirements
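# For contrast, a hedged sketch of the pip-module form this deliberately
# avoids; it would lose the working directory, breaking requirements that
# are pathed relative to the edx-platform checkout:
# - pip:
#     requirements: "{{ openstack_requirements_file }}"
#     virtualenv: "{{ edxapp_venv_dir }}"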
......@@ -21,15 +21,20 @@ PROGRAMS_SSL_NGINX_PORT: 48140
PROGRAMS_DEFAULT_DB_NAME: 'programs'
PROGRAMS_DATABASE_USER: 'programs001'
PROGRAMS_DATABASE_PASSWORD: 'password'
PROGRAMS_DATABASE_HOST: 'localhost'
PROGRAMS_DATABASE_PORT: 3306
PROGRAMS_DATABASES:
# rw user
default:
ENGINE: 'django.db.backends.mysql'
NAME: '{{ PROGRAMS_DEFAULT_DB_NAME }}'
USER: 'programs001'
PASSWORD: 'password'
HOST: 'localhost'
PORT: '3306'
USER: '{{ PROGRAMS_DATABASE_USER }}'
PASSWORD: '{{ PROGRAMS_DATABASE_PASSWORD }}'
HOST: '{{ PROGRAMS_DATABASE_HOST }}'
PORT: '{{ PROGRAMS_DATABASE_PORT }}'
ATOMIC_REQUESTS: true
CONN_MAX_AGE: 60
......
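Factoring the connection settings out of PROGRAMS_DATABASES means a deployment can point the service at an external database by overriding only the new variables; for example (the host and password below are placeholders):

PROGRAMS_DATABASE_USER: 'programs001'
PROGRAMS_DATABASE_PASSWORD: "{{ PROGRAMS_VAULTED_DB_PASSWORD }}"  # placeholder secure var
PROGRAMS_DATABASE_HOST: 'programs-db.example.com'                 # placeholder host
PROGRAMS_DATABASE_PORT: 3306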
......@@ -56,9 +56,9 @@
- migrate:db
- name: run collectstatic
shell: >
chdir={{ programs_code_dir }}
{{ programs_venv_dir }}/bin/python manage.py collectstatic --noinput
shell: "{{ programs_venv_dir }}/bin/python manage.py collectstatic --noinput"
args:
chdir: "{{ programs_code_dir }}"
become_user: "{{ programs_user }}"
environment: "{{ programs_environment }}"
when: not devstack
......@@ -68,9 +68,12 @@
# NOTE: this isn't used or needed when S3 is used for PROGRAMS_MEDIA_STORAGE_BACKEND
- name: create programs media dir
file: >
path="{{ item }}" state=directory mode=0775
owner="{{ programs_user }}" group="{{ common_web_group }}"
file:
path: "{{ item }}"
state: directory
mode: 0775
owner: "{{ programs_user }}"
group: "{{ common_web_group }}"
with_items:
- "{{ PROGRAMS_MEDIA_ROOT }}"
tags:
......
......@@ -171,8 +171,7 @@
- maintenance
- name: Make queues mirrored
shell: >
/usr/sbin/rabbitmqctl -p {{ item }} set_policy HA "" '{"ha-mode":"all","ha-sync-mode":"automatic"}'
shell: "/usr/sbin/rabbitmqctl -p {{ item }} set_policy HA \"\" '{\"ha-mode\":\"all\",\"ha-sync-mode\":\"automatic\"}'"
when: RABBITMQ_CLUSTERED_HOSTS|length > 1
with_items: "{{ RABBITMQ_VHOSTS }}"
tags:
......
......@@ -38,42 +38,47 @@
when: rbenv_ruby_version is not defined
- name: create rbenv user {{ rbenv_user }}
user: >
name={{ rbenv_user }} home={{ rbenv_dir }}
shell=/bin/false createhome=no
user:
name: "{{ rbenv_user }}"
home: "{{ rbenv_dir }}"
shell: /bin/false
createhome: no
when: rbenv_user != common_web_user
tags:
- install
- install:base
- name: create rbenv dir if it does not exist
file: >
path="{{ rbenv_dir }}" owner="{{ rbenv_user }}"
state=directory
file:
path: "{{ rbenv_dir }}"
owner: "{{ rbenv_user }}"
state: directory
tags:
- install
- install:base
- name: install build depends
apt: pkg={{ ",".join(rbenv_debian_pkgs) }} update_cache=yes state=present install_recommends=no
with_items: rbenv_debian_pkgs
with_items: "{{ rbenv_debian_pkgs }}"
tags:
- install
- install:base
- name: update rbenv repo
git_2_0_1: >
repo=https://github.com/sstephenson/rbenv.git
dest={{ rbenv_dir }}/.rbenv version={{ rbenv_version }}
accept_hostkey=yes
git:
repo: https://github.com/sstephenson/rbenv.git
dest: "{{ rbenv_dir }}/.rbenv"
version: "{{ rbenv_version }}"
accept_hostkey: yes
become_user: "{{ rbenv_user }}"
tags:
- install
- install:base
- name: ensure ruby_env exists
template: >
src=ruby_env.j2 dest={{ rbenv_dir }}/ruby_env
template:
src: ruby_env.j2
dest: "{{ rbenv_dir }}/ruby_env"
become_user: "{{ rbenv_user }}"
tags:
- install
......@@ -107,9 +112,10 @@
- install:base
- name: clone ruby-build repo
git: >
repo=https://github.com/sstephenson/ruby-build.git dest={{ tempdir.stdout }}/ruby-build
accept_hostkey=yes
git:
repo: https://github.com/sstephenson/ruby-build.git
dest: "{{ tempdir.stdout }}/ruby-build"
accept_hostkey: yes
when: tempdir.stdout is defined and (rbuild_present|failed or (installable_ruby_vers is defined and rbenv_ruby_version not in installable_ruby_vers))
become_user: "{{ rbenv_user }}"
tags:
......
......@@ -29,13 +29,13 @@
# file:
# path={{ item.mount_point }} owner={{ item.owner }}
# group={{ item.group }} mode={{ item.mode }} state="directory"
# with_items: my_role_s3fs_mounts
# with_items: "{{ my_role_s3fs_mounts }}"
#
# - name: mount s3 buckets
# mount:
# name={{ item.mount_point }} src={{ item.bucket }} fstype=fuse.s3fs
# opts=use_cache=/tmp,iam_role={{ task_iam_role }},allow_other state=mounted
# with_items: myrole_s3fs_mounts
# with_items: "{{ myrole_s3fs_mounts }}"
#
# Example play:
#
......
......@@ -57,47 +57,3 @@
with_items:
- unattended-upgrade --dry-run
- unattended-upgrade
#### Bash security vulnerability
- name: Check if we are vulnerable
shell: "executable=/bin/bash chdir=/tmp foo='() { echo vulnerable; }' bash -c foo"
register: test_vuln
ignore_errors: yes
- name: Apply bash security update if we are vulnerable
apt:
name: bash
state: latest
update_cache: yes
when: "'vulnerable' in test_vuln.stdout"
- name: Check again and fail if we are still vulnerable
shell: "executable=/bin/bash foo='() { echo vulnerable; }' bash -c foo"
when: "'vulnerable' in test_vuln.stdout"
register: test_vuln
failed_when: "'vulnerable' in test_vuln.stdout"
#### GHOST security vulnerability
- name: GHOST.c
copy:
src: "tmp/GHOST.c"
dest: "/tmp/GHOST.c"
owner: root
group: root
- name: Compile GHOST
shell: "gcc -o /tmp/GHOST /tmp/GHOST.c"
- name: Check if we are vulnerable
shell: "/tmp/GHOST"
register: test_ghost_vuln
ignore_errors: yes
- name: Apply glibc security update if we are vulnerable
apt:
name: libc6
state: latest
update_cache: yes
when: "'vulnerable' in test_ghost_vuln.stdout"
......@@ -15,12 +15,12 @@
file: path=/etc/shibboleth/metadata state=directory mode=2774 group=_shibd owner=_shibd
- name: Downloads metadata into metadata directory as backup
get_url: >
url={{ shib_metadata_backup_url }}
dest=/etc/shibboleth/metadata/idp-metadata.xml
mode=0640
group=_shibd
owner=_shibd
get_url:
url: "{{ shib_metadata_backup_url }}"
dest: "/etc/shibboleth/metadata/idp-metadata.xml"
mode: 0640
group: _shibd
owner: _shibd
when: shib_download_metadata
- name: writes out key and pem file
......
......@@ -9,39 +9,51 @@
- oinkmaster
- name: configure snort
template: >
src=etc/snort/snort.conf.j2 dest=/etc/snort/snort.conf
owner=root group=root mode=0644
template:
src: etc/snort/snort.conf.j2
dest: /etc/snort/snort.conf
owner: root
group: root
mode: 0644
- name: configure snort (debian)
template: >
src=etc/snort/snort.debian.conf.j2 dest=/etc/snort/snort.debian.conf
owner=root group=root mode=0644
template:
src: etc/snort/snort.debian.conf.j2
dest: /etc/snort/snort.debian.conf
owner: root
group: root
mode: 0644
- name: configure oinkmaster
template: >
src=etc/oinkmaster.conf.j2 dest=/etc/oinkmaster.conf
owner=root group=root mode=0644
template:
src: etc/oinkmaster.conf.j2
dest: /etc/oinkmaster.conf
owner: root
group: root
mode: 0644
- name: update snort
shell: oinkmaster -C /etc/oinkmaster.conf -o /etc/snort/rules/
become: yes
- name: snort service
service: >
name="snort"
state="started"
service:
name: "snort"
state: "started"
- name: open read permissions on snort logs
file: >
name="/var/log/snort"
state="directory"
mode="755"
file:
name: "/var/log/snort"
state: "directory"
mode: "755"
- name: install oinkmaster cronjob
template: >
src=etc/cron.daily/oinkmaster.j2 dest=/etc/cron.daily/oinkmaster
owner=root group=root mode=0755
template:
src: etc/cron.daily/oinkmaster.j2
dest: /etc/cron.daily/oinkmaster
owner: root
group: root
mode: 0755
......@@ -25,7 +25,7 @@
fail:
msg: Please define either "source" or "sourcetype", not both or neither
when: ('source' in item and 'sourcetype' in item) or ('source' not in item and 'sourcetype' not in item)
with_items: SPLUNK_FIELD_EXTRACTIONS
with_items: "{{ SPLUNK_FIELD_EXTRACTIONS }}"
- name: Make sure necessary dirs exist
file:
......@@ -144,7 +144,7 @@
owner: "{{ splunk_user }}"
group: "{{ splunk_user }}"
mode: 0700
with_items: SPLUNK_DASHBOARDS
with_items: "{{ SPLUNK_DASHBOARDS }}"
tags:
- install
- install:configuration
......
......@@ -116,7 +116,7 @@
group: splunk
mode: "0400"
when: "{{ item.ssl_cert is defined }}"
with_items: SPLUNKFORWARDER_SERVERS
with_items: "{{ SPLUNKFORWARDER_SERVERS }}"
- name: Write root CA to disk
copy:
......@@ -126,7 +126,7 @@
group: splunk
mode: "0400"
when: "{{ item.ssl_cert is defined }}"
with_items: SPLUNKFORWARDER_SERVERS
with_items: "{{ SPLUNKFORWARDER_SERVERS }}"
- name: Create inputs and outputs configuration
template:
......
......@@ -11,15 +11,15 @@ import time
# Services that should be checked for migrations.
MIGRATION_COMMANDS = {
'lms': "NO_EDXAPP_SUDO=1 /edx/bin/edxapp-migrate-lms --noinput --list",
'cms': "NO_EDXAPP_SUDO=1 /edx/bin/edxapp-migrate-cms --noinput --list",
'xqueue': "SERVICE_VARIANT=xqueue {python} {code_dir}/manage.py migrate --noinput --list --settings=xqueue.aws_settings",
'ecommerce': ". {env_file}; {python} {code_dir}/manage.py migrate --noinput --list",
'programs': ". {env_file}; {python} {code_dir}/manage.py migrate --noinput --list",
'insights': ". {env_file}; {python} {code_dir}/manage.py migrate --noinput --list",
'analytics_api': ". {env_file}; {python} {code_dir}/manage.py migrate --noinput --list",
'credentials': ". {env_file}; {python} {code_dir}/manage.py migrate --noinput --list",
'discovery': ". {env_file}; {python} {code_dir}/manage.py migrate --noinput --list",
'lms': "/edx/bin/edxapp-migrate-lms --noinput --list",
'cms': "/edx/bin/edxapp-migrate-cms --noinput --list",
'xqueue': "SERVICE_VARIANT=xqueue sudo -E -u xqueue {python} {code_dir}/manage.py migrate --noinput --list --settings=xqueue.aws_settings",
'ecommerce': ". {env_file}; sudo -E -u ecommerce {python} {code_dir}/manage.py showmigrations",
'programs': ". {env_file}; sudo -E -u programs {python} {code_dir}/manage.py showmigrations",
'insights': ". {env_file}; sudo -E -u insights {python} {code_dir}/manage.py showmigrations",
'analytics_api': ". {env_file}; sudo -E -u analytics_api {python} {code_dir}/manage.py showmigrations",
'credentials': ". {env_file}; sudo -E -u credentials {python} {code_dir}/manage.py showmigrations",
'discovery': ". {env_file}; sudo -E -u discovery {python} {code_dir}/manage.py showmigrations",
}
HIPCHAT_USER = "PreSupervisor"
......@@ -265,7 +265,7 @@ if __name__ == '__main__':
available_file = os.path.join(args.available, "{}.conf".format(service))
link_location = os.path.join(args.enabled, "{}.conf".format(service))
if os.path.exists(available_file):
subprocess.call("ln -sf {} {}".format(available_file, link_location), shell=True)
subprocess.call("sudo -u supervisor ln -sf {} {}".format(available_file, link_location), shell=True)
report.append("Enabling service: {}".format(service))
else:
raise Exception("No conf available for service: {}".format(link_location))
......
......@@ -3,8 +3,6 @@ description "Tasks before supervisord"
start on runlevel [2345]
task
setuid {{ supervisor_user }}
{% if programs_code_dir is defined %}
{% set programs_command = "--programs-env " + programs_home + "/programs_env --programs-code-dir " + programs_code_dir + " --programs-python " + COMMON_BIN_DIR + "/python.programs" %}
{% else %}
......
---
tanaguru_debian_pkgs:
- openjdk-7-jre
- unzip
- libmysql-java
- python-mysqldb
- tomcat7
- libspring-instrument-java
- xvfb
- mailutils
- postfix
tanaguru_download_link: "http://download.tanaguru.org/Tanaguru/tanaguru-3.1.0.i386.tar.gz"
# Go to this link to find your desired ESR Firefox:
# http://download-origin.cdn.mozilla.net/pub/firefox/releases/24.0esr/linux-x86_64/
# The default in our example is en-US
firefox_esr_link: "http://download-origin.cdn.mozilla.net/pub/firefox/releases/24.0esr/linux-x86_64/en-US/firefox-24.0esr.tar.bz2"
TANAGURU_DATABASE_NAME: 'tgdatabase'
TANAGURU_DATABASE_USER: 'tguser'
TANAGURU_DATABASE_PASSWORD: 'tgPassword'
TANAGURU_URL: 'http://localhost:8080/tanaguru/'
TANAGURU_ADMIN_EMAIL: 'admin@example.com'
TANAGURU_ADMIN_PASSWORD: 'tanaguru15'
tanaguru_parameters:
db_name: "{{ TANAGURU_DATABASE_NAME }}"
db_user: "{{ TANAGURU_DATABASE_USER }}"
db_password: "{{ TANAGURU_DATABASE_PASSWORD }}"
url: "{{ TANAGURU_URL }}"
admin_email: "{{ TANAGURU_ADMIN_EMAIL }}"
admin_passwd: "{{ TANAGURU_ADMIN_PASSWORD }}"
\ No newline at end of file
---
- name: Add the Partner repository
apt_repository:
repo: "{{ item }}"
state: present
with_items:
- "deb http://archive.canonical.com/ubuntu {{ ansible_distribution_release }} partner"
- "deb-src http://archive.canonical.com/ubuntu {{ ansible_distribution_release }} partner"
tags:
- install
- install:base
- name: Set Postfix options
debconf:
name: postfix
question: "{{ item.question }}"
value: "{{ item.value }}"
vtype: "string"
with_items:
- { question: "postfix/mailname", value: " " }
- { question: "postfix/main_mailer_type", value: "Satellite system" }
tags:
- install
- install:configuration
- name: Install the TanaGuru Prerequisites
apt:
name: "{{ item }}"
update_cache: yes
state: installed
with_items: "{{ tanaguru_debian_pkgs }}"
tags:
- install
- install:base
- name: Modify the my.cnf file for max_allowed_packet option
lineinfile:
dest: /etc/mysql/my.cnf
regexp: '^max_allowed_packet'
line: 'max_allowed_packet = 64M'
state: present
register: my_cnf
tags:
- install
- install:configuration
- name: Restart MySQL
service:
name: mysql
state: restarted
when: my_cnf.changed
- name: Create a soft link for tomcat jar and mysql connector
file:
dest: "{{ item.dest }}"
src: "{{ item.src }}"
state: link
with_items:
- { src: '/usr/share/java/spring3-instrument-tomcat.jar', dest: '/usr/share/tomcat7/lib/spring3-instrument-tomcat.jar' }
- { src: '/usr/share/java/mysql-connector-java.jar', dest: '/usr/share/tomcat7/lib/mysql-connector-java.jar'}
tags:
- install
- install:configuration
- name: Copy the xvfb template to /etc/init.d
template:
dest: /etc/init.d/xvfb
src: xvfb.j2
owner: root
group: root
mode: 0755
register: xvfb
tags:
- install
- install:configuration
- name: Restart xvfb
service:
name: xvfb
pattern: /etc/init.d/xvfb
state: restarted
when: xvfb.changed
- name: Configure xvfb to run at startup
command: update-rc.d xvfb defaults
ignore_errors: yes
when: xvfb.changed
- name: Download the latest ESR Firefox
get_url:
url: "{{ firefox_esr_link }}"
dest: "/tmp/{{ firefox_esr_link | basename }}"
tags:
- install
- install:base
- name: Unzip the downloaded Firefox tarball
unarchive:
src: "/tmp/{{ firefox_esr_link | basename }}"
dest: /opt
copy: no
tags:
- install
- install:base
- name: Download the latest TanaGuru tarball
get_url:
url: "{{ tanaguru_download_link }}"
dest: "/tmp/{{ tanaguru_download_link | basename }}"
tags:
- install
- install:base
- name: Unzip the downloaded TanaGuru tarball
unarchive:
src: "/tmp/{{ tanaguru_download_link | basename }}"
dest: "/tmp/"
copy: no
tags:
- install
- install:base
- name: Create MySQL database for TanaGuru
mysql_db:
name: "{{ tanaguru_parameters.db_name }}"
state: present
encoding: utf8
collation: utf8_general_ci
tags:
- install
- install:base
- name: Create MySQL user for TanaGuru
mysql_user:
name: "{{ tanaguru_parameters.db_user }}"
password: "{{ tanaguru_parameters.db_password }}"
host: localhost
priv: "{{ tanaguru_parameters.db_name }}.*:ALL"
state: present
tags:
- install
- install:base
- name: Check that tanaguru app is running
shell: >
/bin/ps aux | grep -i tanaguru
register: tanaguru_app
changed_when: no
tags:
- install
- name: Install the TanaGuru
shell: >
/bin/echo "yes" | ./install.sh --mysql-tg-user "{{ tanaguru_parameters.db_user }}" \
--mysql-tg-passwd "{{ tanaguru_parameters.db_password }}" \
--mysql-tg-db "{{ tanaguru_parameters.db_name }}" \
--tanaguru-url "{{ tanaguru_parameters.url }}" \
--tomcat-webapps /var/lib/tomcat7/webapps \
--tomcat-user tomcat7 \
--tg-admin-email "{{ tanaguru_parameters.admin_email }}" \
--tg-admin-passwd "{{ tanaguru_parameters.admin_passwd }}" \
--firefox-esr-path /opt/firefox/firefox \
--display-port ":99.1"
args:
chdir: "/tmp/{{ tanaguru_download_link | basename | regex_replace('.tar.gz$', '') }}"
when: "tanaguru_app.stdout.find('/etc/tanaguru/') == -1"
register: tanaguru_install
tags:
- install
- install:base
- name: Restart tomcat7
service:
name: tomcat7
state: restarted
when: tanaguru_install.changed
\ No newline at end of file
#!/bin/sh
set -e
RUN_AS_USER=tomcat7
OPTS=":99 -screen 1 1024x768x24 -nolisten tcp"
XVFB_DIR=/usr/bin
PIDFILE=/var/run/xvfb
case $1 in
start)
start-stop-daemon --chuid $RUN_AS_USER -b --start --exec $XVFB_DIR/Xvfb --make-pidfile --pidfile $PIDFILE -- $OPTS &
;;
stop)
start-stop-daemon --stop --user $RUN_AS_USER --pidfile $PIDFILE
rm -f $PIDFILE
;;
restart)
if start-stop-daemon --test --stop --user $RUN_AS_USER --pidfile $PIDFILE >/dev/null; then
$0 stop
fi;
$0 start
;;
*)
echo "Usage: $0 (start|restart|stop)"
exit 1
;;
esac
exit 0
\ No newline at end of file
......@@ -21,20 +21,20 @@
#
- name: Create clone of edx-platform
git_2_0_1: >
repo=https://github.com/edx/edx-platform.git
dest={{ test_build_server_repo_path }}/edx-platform-clone
version={{ test_edx_platform_version }}
git:
repo: "https://github.com/edx/edx-platform.git"
dest: "{{ test_build_server_repo_path }}/edx-platform-clone"
version: "{{ test_edx_platform_version }}"
become_user: "{{ test_build_server_user }}"
- name: get xargs limit
shell: "xargs --show-limits"
- name: Copy test-development-environment.sh to somewhere the jenkins user can access it
copy: >
src=test-development-environment.sh
dest="{{ test_build_server_repo_path }}"
mode=0755
copy:
src: test-development-environment.sh
dest: "{{ test_build_server_repo_path }}"
mode: 0755
- name: Validate build environment
shell: "bash test-development-environment.sh {{ item }}"
......
---
- name: import the test courses from github
shell: >
{{ demo_edxapp_venv_bin }}/python /edx/bin/manage.edxapp lms git_add_course --settings=aws "{{ item.github_url }}"
shell: "{{ demo_edxapp_venv_bin }}/python /edx/bin/manage.edxapp lms git_add_course --settings=aws \"{{ item.github_url }}\""
become_user: "{{ common_web_user }}"
when: item.install == True
with_items: TESTCOURSES_EXPORTS
with_items: "{{ TESTCOURSES_EXPORTS }}"
- name: enroll test users in the testcourses
shell: >
{{ demo_edxapp_venv_bin }}/python ./manage.py lms --settings=aws --service-variant lms enroll_user_in_course -e {{ item[0].email }} -c {{ item[1].course_id }}
chdir={{ demo_edxapp_code_dir }}
shell: "{{ demo_edxapp_venv_bin }}/python ./manage.py lms --settings=aws --service-variant lms enroll_user_in_course -e {{ item[0].email }} -c {{ item[1].course_id }}"
args:
chdir: "{{ demo_edxapp_code_dir }}"
become_user: "{{ common_web_user }}"
when: item[1].install == True
with_nested:
- demo_test_users
- TESTCOURSES_EXPORTS
- "{{ demo_test_users }}"
- "{{ TESTCOURSES_EXPORTS }}"
......@@ -10,7 +10,7 @@
owner: "{{ jenkins_user }}"
group: "{{ jenkins_group }}"
mode: 0644
with_items: hpi_files.stdout_lines
with_items: "{{ hpi_files.stdout_lines }}"
when: hpi_files
notify:
- restart Jenkins
......@@ -26,7 +26,7 @@
owner: "{{ jenkins_user }}"
group: "{{ jenkins_group }}"
mode: 0644
with_items: jpi_files.stdout_lines
with_items: "{{ jpi_files.stdout_lines }}"
when: jpi_files
notify:
- restart Jenkins
......
......@@ -138,6 +138,7 @@
- name: Get github key(s) and update the authorized_keys file
authorized_key:
user: "{{ item.name }}"
exclusive: yes
key: "https://github.com/{{ item.name }}.keys"
when: item.github is defined and item.get('state', 'present') == 'present'
with_items: "{{ user_info }}"
......
......@@ -45,7 +45,10 @@ XQUEUE_DJANGO_USERS:
lms: 'password'
XQUEUE_RABBITMQ_USER: 'edx'
XQUEUE_RABBITMQ_PASS: 'edx'
XQUEUE_RABBITMQ_VHOST: '/'
XQUEUE_RABBITMQ_HOSTNAME: 'localhost'
XQUEUE_RABBITMQ_PORT: 5672
XQUEUE_RABBITMQ_TLS: true
XQUEUE_LANG: 'en_US.UTF-8'
XQUEUE_MYSQL_DB_NAME: 'xqueue'
......@@ -82,6 +85,9 @@ xqueue_env_config:
SYSLOG_SERVER: "{{ XQUEUE_SYSLOG_SERVER }}"
LOG_DIR: "{{ COMMON_DATA_DIR }}/logs/xqueue"
RABBIT_HOST: "{{ XQUEUE_RABBITMQ_HOSTNAME }}"
RABBIT_PORT: "{{ XQUEUE_RABBITMQ_PORT }}"
RABBIT_VHOST: "{{ XQUEUE_RABBITMQ_VHOST }}"
RABBIT_TLS: "{{ XQUEUE_RABBITMQ_TLS }}"
LOCAL_LOGLEVEL: "{{ XQUEUE_LOCAL_LOGLEVEL }}"
UPLOAD_BUCKET: "{{ XQUEUE_UPLOAD_BUCKET }}"
UPLOAD_PATH_PREFIX: "{{ XQUEUE_UPLOAD_PATH_PREFIX }}"
......
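Taken together, these new variables let a deployment target a dedicated broker over TLS by overriding only defaults, e.g. (the hostname and vhost are placeholders; 5671 is the conventional AMQPS port, confirm it for your broker):

XQUEUE_RABBITMQ_HOSTNAME: 'rabbit.example.com'  # placeholder
XQUEUE_RABBITMQ_PORT: 5671                      # conventional AMQPS port (assumption)
XQUEUE_RABBITMQ_TLS: true
XQUEUE_RABBITMQ_VHOST: '/xqueue'                # placeholder vhost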
......@@ -46,7 +46,7 @@
# Do A Checkout
- name: "Git checkout xqueue repo into {{ xqueue_code_dir }}"
git_2_0_1:
git:
repo: "{{ xqueue_source_repo }}"
dest: "{{ xqueue_code_dir }}"
version: "{{ xqueue_version }}"
......@@ -153,11 +153,5 @@
- xqueue
- xqueue_consumer
tags:
- install
- install:configuration
- install:code
- install:app-requirements
- migrate
- migrate:db
- manage
- manage:app-users
- manage:start
......@@ -7,6 +7,7 @@
mode: "0600"
when: XQWATCHER_GIT_IDENTITY != 'none'
tags:
- deploy
- install
- install:code
......@@ -19,6 +20,7 @@
group: "{{ xqwatcher_user }}"
mode: "0644"
tags:
- deploy
- install
- install:configuration
......@@ -28,4 +30,4 @@
- include: deploy_courses.yml
tags:
- deploy-courses
\ No newline at end of file
- deploy-courses
......@@ -3,7 +3,7 @@
# a per queue basis.
- name: Checkout grader code
git_2_0_1:
git:
repo: "{{ item.GIT_REPO }}"
dest: "{{ xqwatcher_app_dir }}/data/{{ item.COURSE }}"
version: "{{ item.GIT_REF }}"
......
......@@ -50,7 +50,7 @@
config: "{{ supervisor_cfg }}"
state: restarted
when: not disable_edx_services
become_user: "{{ xqwatcher_user }}"
become_user: "{{ common_web_user }}"
tags:
- manage
- manage:update
......@@ -115,5 +115,3 @@
- include: code_jail.yml CODE_JAIL_COMPLAIN=false
- include: deploy.yml
tags:
- deploy
......@@ -21,12 +21,14 @@
template:
src: xserver_gunicorn.py.j2
dest: "{{ xserver_app_dir }}/xserver_gunicorn.py"
become_user: "{{ xserver_user }}"
owner: "{{ supervisor_user }}"
group: "{{ common_web_user }}"
mode: "0644"
notify:
- restart xserver
- name: Checkout code
git_2_0_1:
git:
dest: "{{ xserver_code_dir }}"
repo: "{{ xserver_source_repo }}"
version: "{{xserver_version}}"
......@@ -84,7 +86,7 @@
- restart xserver
- name: Checkout grader code
git_2_0_1:
git:
dest: "{{ XSERVER_GRADER_DIR }}"
repo: "{{ XSERVER_GRADER_SOURCE }}"
version: "{{ xserver_grader_version }}"
......
......@@ -46,11 +46,8 @@
- name: Set sandbox limits
template:
src: "{{ item }}"
src: "sandbox.conf.j2"
dest: "/etc/security/limits.d/sandbox.conf"
first_available_file:
- "{{ secure_dir }}/sandbox.conf.j2"
- "sandbox.conf.j2"
- name: Install system dependencies of xserver
apt:
......@@ -60,11 +57,8 @@
- name: Load python-sandbox apparmor profile
template:
src: "{{ item }}"
src: "usr.bin.python-sandbox.j2"
dest: "/etc/apparmor.d/edx_apparmor_sandbox"
first_available_file:
- "{{ secure_dir }}/files/edx_apparmor_sandbox.j2"
- "usr.bin.python-sandbox.j2"
- include: deploy.yml
tags:
......
......@@ -47,7 +47,7 @@
notify: restart xsy
- name: Checkout the code
git_2_0_1:
git:
dest: "{{ xsy_code_dir }}"
repo: "{{ xsy_source_repo }}"
version: "{{ xsy_version }}"
......
......@@ -13,6 +13,9 @@
EDXAPP_OAUTH_ENFORCE_SECURE: false
EDXAPP_LMS_BASE_SCHEME: http
ECOMMERCE_DJANGO_SETTINGS_MODULE: "ecommerce.settings.devstack"
# When provisioning your devstack, we apply security updates
COMMON_SECURITY_UPDATES: true
SECURITY_UPGRADE_ON_ANSIBLE: true
vars_files:
- roles/edxapp/vars/devstack.yml
roles:
......
-r github.txt
PyYAML==3.11
ansible==2.2.0.0
PyYAML==3.12
Jinja2==2.8
MarkupSafe==0.23
boto==2.33.0
ecdsa==0.11
paramiko==1.15.1
paramiko==2.0.2
pycrypto==2.6.1
wsgiref==0.1.2
docopt==0.6.1
......
yml_files:=$(shell find . -name "*.yml")
json_files:=$(shell find . -name "*.json")
jinja_files:=$(shell find . -name "*.j2")
# $(images) is calculated in the docker.mk file
test: test.syntax test.edx_east_roles
test.syntax: test.syntax.yml test.syntax.json test.syntax.jinja test.syntax.dockerfiles
test.syntax: test.syntax.yml test.syntax.json test.syntax.dockerfiles
test.syntax.yml: $(patsubst %,test.syntax.yml/%,$(yml_files))
......@@ -18,13 +17,13 @@ test.syntax.json: $(patsubst %,test.syntax.json/%,$(json_files))
test.syntax.json/%:
jsonlint -v $*
test.syntax.jinja: $(patsubst %,test.syntax.jinja/%,$(jinja_files))
test.syntax.jinja/%:
cd playbooks && python ../tests/jinja_check.py ../$*
test.syntax.dockerfiles:
python util/check_dockerfile_coverage.py "$(images)"
test.edx_east_roles:
tests/test_edx_east_roles.sh
clean: test.clean
test.clean:
rm -rf playbooks/edx-east/test_output
#!/usr/bin/env python
import os
import sys
from jinja2 import FileSystemLoader
from jinja2 import Environment as j
from jinja2.exceptions import UndefinedError
from ansible.utils.template import _get_filters, _get_extensions
from yaml.representer import RepresenterError
input_file = sys.argv[1]
if not os.path.exists(input_file):
print('{0}: deleted in diff'.format(input_file))
sys.exit(0)
# Setup jinja to include ansible filters
j_e = j(trim_blocks=True, extensions=_get_extensions())
j_e.loader = FileSystemLoader(['.', os.path.dirname(input_file)])
j_e.filters.update(_get_filters())
# Go ahead and catch errors for undefined variables and bad yaml
# from `to_nice_yaml` ansible filter
try:
j_e.from_string(file((input_file)).read()).render(func=lambda: None)
except (UndefinedError, RepresenterError), ex:
pass
except TypeError, ex:
if ex.message != 'Undefined is not JSON serializable':
raise Exception(ex.message)
pass
print('{}: ok'.format(input_file))
......@@ -7,7 +7,7 @@ import logging
import sys
import docker_images
TRAVIS_BUILD_DIR = os.environ.get("TRAVIS_BUILD_DIR")
TRAVIS_BUILD_DIR = os.environ.get("TRAVIS_BUILD_DIR", ".")
CONFIG_FILE_PATH = pathlib2.Path(TRAVIS_BUILD_DIR, "util", "parsefiles_config.yml")
LOGGER = logging.getLogger(__name__)
......
......@@ -33,6 +33,10 @@ if [[ -z "${UPGRADE_OS}" ]]; then
UPGRADE_OS=false
fi
if [[ -z "${RUN_ANSIBLE}" ]]; then
RUN_ANSIBLE=true
fi
#
# Bootstrapping constants
#
......@@ -45,7 +49,7 @@ ANSIBLE_DIR="/tmp/ansible"
CONFIGURATION_DIR="/tmp/configuration"
EDX_PPA="deb http://ppa.edx.org precise main"
EDX_PPA_KEY_SERVER="hkp://pgp.mit.edu:80"
EDX_PPA_KEY_ID="69464050"
EDX_PPA_KEY_ID="B41E5E3969464050"
cat << EOF
******************************************************************************
......@@ -75,9 +79,9 @@ then
elif grep -q 'Xenial Xerus' /etc/os-release
then
SHORT_DIST="xenial"
else
else
cat << EOF
This script is only known to work on Ubuntu Precise, Trusty and Xenial,
exiting. If you are interested in helping make installation possible
on other platforms, let us know.
......@@ -96,7 +100,7 @@ if [ "${UPGRADE_OS}" = true ]; then
echo "Upgrading the OS..."
apt-get upgrade -y
fi
# Required for add-apt-repository
apt-get install -y software-properties-common python-software-properties
......@@ -110,13 +114,14 @@ if [[ "precise" = "${SHORT_DIST}" || "trusty" = "${SHORT_DIST}" ]]; then
apt-key adv --keyserver "${EDX_PPA_KEY_SERVER}" --recv-keys "${EDX_PPA_KEY_ID}"
add-apt-repository -y "${EDX_PPA}"
fi
# Install the latest python 2.7, git and other common requirements
# NOTE: This will install the latest version of python 2.7,
# which may differ from what is pinned in virtual environments
apt-get update -y
apt-get install -y python2.7 python2.7-dev python-pip python-apt python-yaml python-jinja2 build-essential sudo git-core libmysqlclient-dev
apt-get install -y python2.7 python2.7-dev python-pip python-apt python-yaml python-jinja2 build-essential sudo git-core libmysqlclient-dev libffi-dev libssl-dev
# Workaround for a 16.04 bug, need to upgrade to latest and then
# potentially downgrade to the preferred version.
......@@ -135,35 +140,41 @@ PATH=/usr/local/bin:${PATH}
pip install setuptools=="${SETUPTOOLS_VERSION}"
pip install virtualenv=="${VIRTUAL_ENV_VERSION}"
# create a new virtual env
/usr/local/bin/virtualenv "${VIRTUAL_ENV}"
PATH="${PYTHON_BIN}":${PATH}
if [[ "true" == "${RUN_ANSIBLE}" ]]; then
# create a new virtual env
/usr/local/bin/virtualenv "${VIRTUAL_ENV}"
# Install the configuration repository to install
# edx_ansible role
git clone ${CONFIGURATION_REPO} ${CONFIGURATION_DIR}
cd ${CONFIGURATION_DIR}
git checkout ${CONFIGURATION_VERSION}
make requirements
PATH="${PYTHON_BIN}":${PATH}
cd "${CONFIGURATION_DIR}"/playbooks/edx-east
"${PYTHON_BIN}"/ansible-playbook edx_ansible.yml -i '127.0.0.1,' -c local -e "configuration_version=${CONFIGURATION_VERSION}"
# Install the configuration repository to install
# edx_ansible role
git clone ${CONFIGURATION_REPO} ${CONFIGURATION_DIR}
cd ${CONFIGURATION_DIR}
git checkout ${CONFIGURATION_VERSION}
make requirements
# cleanup
rm -rf "${ANSIBLE_DIR}"
rm -rf "${CONFIGURATION_DIR}"
rm -rf "${VIRTUAL_ENV}"
rm -rf "${HOME}/.ansible"
cd "${CONFIGURATION_DIR}"/playbooks/edx-east
"${PYTHON_BIN}"/ansible-playbook edx_ansible.yml -i '127.0.0.1,' -c local -e "configuration_version=${CONFIGURATION_VERSION}"
cat << EOF
******************************************************************************
# cleanup
rm -rf "${ANSIBLE_DIR}"
rm -rf "${CONFIGURATION_DIR}"
rm -rf "${VIRTUAL_ENV}"
rm -rf "${HOME}/.ansible"
Done bootstrapping, edx_ansible is now installed in /edx/app/edx_ansible.
Time to run some plays. Activate the virtual env with
cat << EOF
******************************************************************************
> . /edx/app/edx_ansible/venvs/edx_ansible/bin/activate
Done bootstrapping, edx_ansible is now installed in /edx/app/edx_ansible.
Time to run some plays. Activate the virtual env with
******************************************************************************
> . /edx/app/edx_ansible/venvs/edx_ansible/bin/activate
******************************************************************************
EOF
else
mkdir -p /edx/ansible/facts.d
echo '{ "ansible_bootstrap_run": true }' > /edx/ansible/facts.d/ansible_bootstrap.json
fi
......@@ -337,7 +337,9 @@ elb: $elb
EOF
if [[ $server_type != "full_edx_installation_from_scratch" ]]; then
extra_var_arg+=' -e instance_userdata="" -e launch_wait_time=0'
fi
# run the tasks to launch an ec2 instance from AMI
cat $extra_vars_file
run_ansible edx_provision.yml -i inventory.ini $extra_var_arg --user ubuntu
......
......@@ -18,9 +18,12 @@ weights:
- xqueue: 2
- trusty-common: 5
- precise-common: 4
- xenial-common: 6
- ecommerce: 6
- rabbitmq: 2
- automated: 1
- programs: 4
- mysql: 2
- elasticsearch: 7
- docker-tools: 3
- tools_jenkins: 8
......@@ -302,14 +302,129 @@ export ANSIBLE_ENABLE_SQS SQS_NAME SQS_REGION SQS_MSG_PREFIX PYTHONUNBUFFERED
export HIPCHAT_TOKEN HIPCHAT_ROOM HIPCHAT_MSG_PREFIX HIPCHAT_FROM
export HIPCHAT_MSG_COLOR DATADOG_API_KEY
if [[ ! -x /usr/bin/git || ! -x /usr/bin/pip ]]; then
echo "Installing pkg dependencies"
/usr/bin/apt-get update
/usr/bin/apt-get install -y git python-pip python-apt \\
git-core build-essential python-dev libxml2-dev \\
libxslt-dev curl libmysqlclient-dev --force-yes
#################################### Lifted from ansible-bootstrap.sh
if [[ -z "$ANSIBLE_REPO" ]]; then
ANSIBLE_REPO="https://github.com/edx/ansible.git"
fi
if [[ -z "$ANSIBLE_VERSION" ]]; then
ANSIBLE_VERSION="master"
fi
if [[ -z "$CONFIGURATION_REPO" ]]; then
CONFIGURATION_REPO="https://github.com/edx/configuration.git"
fi
if [[ -z "$CONFIGURATION_VERSION" ]]; then
CONFIGURATION_VERSION="master"
fi
if [[ -z "$UPGRADE_OS" ]]; then
UPGRADE_OS=false
fi
#
# Bootstrapping constants
#
VIRTUAL_ENV_VERSION="15.0.2"
PIP_VERSION="8.1.2"
SETUPTOOLS_VERSION="24.0.3"
EDX_PPA="deb http://ppa.edx.org precise main"
EDX_PPA_KEY_SERVER="hkp://pgp.mit.edu:80"
EDX_PPA_KEY_ID="B41E5E3969464050"
cat << EOF
******************************************************************************
Running the abbey with the following arguments:
ANSIBLE_REPO="$ANSIBLE_REPO"
ANSIBLE_VERSION="$ANSIBLE_VERSION"
CONFIGURATION_REPO="$CONFIGURATION_REPO"
CONFIGURATION_VERSION="$CONFIGURATION_VERSION"
******************************************************************************
EOF
if [[ $(id -u) -ne 0 ]] ;then
echo "Please run as root";
exit 1;
fi
if grep -q 'Precise Pangolin' /etc/os-release
then
SHORT_DIST="precise"
elif grep -q 'Trusty Tahr' /etc/os-release
then
SHORT_DIST="trusty"
elif grep -q 'Xenial Xerus' /etc/os-release
then
SHORT_DIST="xenial"
else
cat << EOF
This script is only known to work on Ubuntu Precise, Trusty and Xenial,
exiting. If you are interested in helping make installation possible
on other platforms, let us know.
EOF
exit 1;
fi
EDX_PPA="deb http://ppa.edx.org $SHORT_DIST main"
# Upgrade the OS
apt-get update -y
apt-key update -y
if [ "$UPGRADE_OS" = true ]; then
echo "Upgrading the OS..."
apt-get upgrade -y
fi
# Required for add-apt-repository
apt-get install -y software-properties-common python-software-properties
# Add git PPA
add-apt-repository -y ppa:git-core/ppa
# For older distributions we need to install a PPA for Python 2.7.10
if [[ "precise" = "$SHORT_DIST" || "trusty" = "$SHORT_DIST" ]]; then
# Add python PPA
apt-key adv --keyserver "$EDX_PPA_KEY_SERVER" --recv-keys "$EDX_PPA_KEY_ID"
add-apt-repository -y "$EDX_PPA"
fi
# Install the latest python 2.7, git and other common requirements
# NOTE: This will install the latest version of python 2.7,
# which may differ from what is pinned in virtual environments
apt-get update -y
apt-get install -y python2.7 python2.7-dev python-pip python-apt python-yaml python-jinja2 build-essential sudo git-core libmysqlclient-dev libffi-dev libssl-dev
# Workaround for a 16.04 bug, need to upgrade to latest and then
# potentially downgrade to the preferred version.
# https://github.com/pypa/pip/issues/3862
if [[ "xenial" = "$SHORT_DIST" ]]; then
pip install --upgrade pip
pip install --upgrade pip=="$PIP_VERSION"
else
pip install --upgrade pip=="$PIP_VERSION"
fi
# pip moves to /usr/local/bin when upgraded
hash -r #pip may have moved from /usr/bin/ to /usr/local/bin/. This clears bash's path cache.
PATH=/usr/local/bin:$PATH
pip install setuptools=="$SETUPTOOLS_VERSION"
pip install virtualenv=="$VIRTUAL_ENV_VERSION"
##################### END Lifted from ansible-bootstrap.sh
# python3 is required for certain other things
# (currently xqwatcher so it can run python2 and 3 grader code,
# but potentially more in the future). It's not available on Ubuntu 12.04,
......@@ -324,15 +439,6 @@ fi
# only runs on a build from scratch
/usr/bin/apt-get install -y python-httplib2 --force-yes
# Must upgrade to latest before pinning to work around bug
# https://github.com/pypa/pip/issues/3862
pip install --upgrade pip
hash -r #pip may have moved from /usr/bin/ to /usr/local/bin/. This clears bash's path cache.
pip install --upgrade pip==8.1.2
# upgrade setuptools early to avoid no distribution errors
pip install --upgrade setuptools==24.0.3
rm -rf $base_dir
mkdir -p $base_dir
cd $base_dir
......
......@@ -13,6 +13,24 @@ if ENV["VAGRANT_GUEST_IP"]
vm_guest_ip = ENV["VAGRANT_GUEST_IP"]
end
# These are versioning variables in the roles. Each can be overridden, first
# with OPENEDX_RELEASE, and then with a specific environment variable of the
# same name but upper-cased.
VERSION_VARS = [
'edx_platform_version',
'configuration_version',
'certs_version',
'forum_version',
'xqueue_version',
'demo_version',
'NOTIFIER_VERSION',
'ECOMMERCE_VERSION',
'ECOMMERCE_WORKER_VERSION',
'PROGRAMS_VERSION',
'ANALYTICS_API_VERSION',
'INSIGHTS_VERSION',
]
Vagrant.configure(VAGRANTFILE_API_VERSION) do |config|
# Creates a devstack from a base Ubuntu 12.04 image for virtualbox
......@@ -70,36 +88,15 @@ Vagrant.configure(VAGRANTFILE_API_VERSION) do |config|
ansible.verbose = "vvvv"
ansible.extra_vars = {}
if ENV['OPENEDX_RELEASE']
ansible.extra_vars = {
edx_platform_version: ENV['OPENEDX_RELEASE'],
configuration_version: ENV['OPENEDX_RELEASE'],
certs_version: ENV['OPENEDX_RELEASE'],
forum_version: ENV['OPENEDX_RELEASE'],
xqueue_version: ENV['OPENEDX_RELEASE'],
demo_version: ENV['OPENEDX_RELEASE'],
NOTIFIER_VERSION: ENV['OPENEDX_RELEASE'],
ECOMMERCE_VERSION: ENV['OPENEDX_RELEASE'],
ECOMMERCE_WORKER_VERSION: ENV['OPENEDX_RELEASE'],
PROGRAMS_VERSION: ENV['OPENEDX_RELEASE'],
ANALYTICS_API_VERSION: ENV['OPENEDX_RELEASE'],
INSIGHTS_VERSION: ENV['OPENEDX_RELEASE'],
}
end
if ENV['CONFIGURATION_VERSION']
ansible.extra_vars['configuration_version'] = ENV['CONFIGURATION_VERSION']
end
if ENV['EDX_PLATFORM_VERSION']
ansible.extra_vars['edx_platform_version'] = ENV['EDX_PLATFORM_VERSION']
end
if ENV['ECOMMERCE_VERSION']
ansible.extra_vars['ECOMMERCE_VERSION'] = ENV['ECOMMERCE_VERSION']
end
if ENV['ECOMMERCE_WORKER_VERSION']
ansible.extra_vars['ECOMMERCE_WORKER_VERSION'] = ENV['ECOMMERCE_WORKER_VERSION']
end
if ENV['PROGRAMS_VERSION']
ansible.extra_vars['PROGRAMS_VERSION'] = ENV['PROGRAMS_VERSION']
VERSION_VARS.each do |var|
if ENV['OPENEDX_RELEASE']
ansible.extra_vars[var] = ENV['OPENEDX_RELEASE']
end
env_var = var.upcase
if ENV[env_var]
ansible.extra_vars[var] = ENV[env_var]
end
end
end
end
......@@ -35,7 +35,7 @@ Vagrant.configure(VAGRANTFILE_API_VERSION) do |config|
config.vm.box = "precise64"
config.vm.box_url = "http://files.vagrantup.com/precise64.box"
config.vm.network :private_network, ip: vm_guest_ip
config.vm.network :private_network, ip: vm_guest_ip, nic_type: "virtio"
# If you want to run the box but don't need network ports, set VAGRANT_NO_PORTS=1.
# This is useful if you want to run more than one box at once.
......
......@@ -5,6 +5,18 @@ VAGRANTFILE_API_VERSION = "2"
MEMORY = 4096
CPU_COUNT = 2
# These are versioning variables in the roles. Each can be overridden, first
# with OPENEDX_RELEASE, and then with a specific environment variable of the
# same name but upper-cased.
VERSION_VARS = [
'edx_platform_version',
'configuration_version',
'certs_version',
'forum_version',
'xqueue_version',
'demo_version',
]
Vagrant.configure(VAGRANTFILE_API_VERSION) do |config|
config.vm.box = "precise64"
config.vm.box_url = "http://files.vagrantup.com/precise64.box"
......@@ -31,17 +43,20 @@ Vagrant.configure(VAGRANTFILE_API_VERSION) do |config|
# point Vagrant at the location of your playbook you want to run
ansible.playbook = "../../../playbooks/vagrant-fullstack.yml"
ansible.verbose = "vvv"
# set extra-vars here instead of in the vagrant play so that
# Set extra-vars here instead of in the vagrant play so that
# they are written out to /edx/etc/server-vars.yml which can
# be used later when running ansible locally
if ENV['OPENEDX_RELEASE']
ansible.extra_vars = {
edx_platform_version: ENV['OPENEDX_RELEASE'],
certs_version: ENV['OPENEDX_RELEASE'],
forum_version: ENV['OPENEDX_RELEASE'],
xqueue_version: ENV['OPENEDX_RELEASE'],
demo_version: ENV['OPENEDX_RELEASE'],
}
# be used later when running ansible locally.
ansible.extra_vars = {}
VERSION_VARS.each do |var|
if ENV['OPENEDX_RELEASE']
ansible.extra_vars[var] = ENV['OPENEDX_RELEASE']
end
env_var = var.upcase
if ENV[env_var]
ansible.extra_vars[var] = ENV[env_var]
end
end
end
end
......@@ -8,44 +8,23 @@ VAGRANTFILE_API_VERSION = "2"
MEMORY = 4096
CPU_COUNT = 2
$script = <<SCRIPT
if [ ! -d /edx/app/edx_ansible ]; then
echo "Error: Base box is missing provisioning scripts." 1>&2
exit 1
fi
OPENEDX_RELEASE=$1
export PYTHONUNBUFFERED=1
source /edx/app/edx_ansible/venvs/edx_ansible/bin/activate
cd /edx/app/edx_ansible/edx_ansible/playbooks
# Did we specify an openedx release?
if [ -n "$OPENEDX_RELEASE" ]; then
EXTRA_VARS="-e edx_platform_version=$OPENEDX_RELEASE \
-e certs_version=$OPENEDX_RELEASE \
-e forum_version=$OPENEDX_RELEASE \
-e xqueue_version=$OPENEDX_RELEASE \
-e demo_version=$OPENEDX_RELEASE \
-e NOTIFIER_VERSION=$OPENEDX_RELEASE \
-e ECOMMERCE_VERSION=$OPENEDX_RELEASE \
-e ECOMMERCE_WORKER_VERSION=$OPENEDX_RELEASE \
-e PROGRAMS_VERSION=$OPENEDX_RELEASE \
-e ANALYTICS_API_VERSION=$OPENEDX_RELEASE \
-e INSIGHTS_VERSION=$OPENEDX_RELEASE \
"
CONFIG_VER=$OPENEDX_RELEASE
# Need to ensure that the configuration repo is updated
# The vagrant-analyticstack.yml playbook will also do this, but only
# after loading the playbooks into memory. If these are out of date,
# this can cause problems (e.g. looking for templates that no longer exist).
/edx/bin/update configuration $CONFIG_VER
else
CONFIG_VER="master"
fi
ansible-playbook -i localhost, -c local run_role.yml -e role=edx_ansible -e configuration_version=$CONFIG_VER $EXTRA_VARS
ansible-playbook -i localhost, -c local vagrant-analytics.yml -e configuration_version=$CONFIG_VER $EXTRA_VARS -e ELASTICSEARCH_CLUSTER_MEMBERS=[]
SCRIPT
# These are versioning variables in the roles. Each can be overridden, first
# with OPENEDX_RELEASE, and then with a specific environment variable of the
# same name but upper-cased.
VERSION_VARS = [
'edx_platform_version',
'configuration_version',
'certs_version',
'forum_version',
'xqueue_version',
'demo_version',
'NOTIFIER_VERSION',
'ECOMMERCE_VERSION',
'ECOMMERCE_WORKER_VERSION',
'PROGRAMS_VERSION',
'ANALYTICS_API_VERSION',
'INSIGHTS_VERSION',
]
MOUNT_DIRS = {
:edx_platform => {:repo => "edx-platform", :local => "/edx/app/edxapp/edx-platform", :owner => "edxapp"},
......@@ -82,14 +61,41 @@ openedx_releases = {
openedx_releases.default = {
:name => "analyticstack", :file => "analyticstack-latest.box",
}
rel = ENV['OPENEDX_RELEASE']
openedx_release = ENV['OPENEDX_RELEASE']
# Build -e override lines for each overridable variable.
extra_vars_lines = ""
VERSION_VARS.each do |var|
rel = ENV[var.upcase] || openedx_release
if rel
extra_vars_lines += "-e #{var}=#{rel} \\\n"
end
end
$script = <<SCRIPT
if [ ! -d /edx/app/edx_ansible ]; then
echo "Error: Base box is missing provisioning scripts." 1>&2
exit 1
fi
export PYTHONUNBUFFERED=1
source /edx/app/edx_ansible/venvs/edx_ansible/bin/activate
cd /edx/app/edx_ansible/edx_ansible/playbooks
EXTRA_VARS="#{extra_vars_lines}"
CONFIG_VER="#{ENV['CONFIGURATION_VERSION'] || openedx_release || 'master'}"
ansible-playbook -i localhost, -c local run_role.yml -e role=edx_ansible -e configuration_version=$CONFIG_VER $EXTRA_VARS
ansible-playbook -i localhost, -c local vagrant-analytics.yml -e configuration_version=$CONFIG_VER $EXTRA_VARS -e ELASTICSEARCH_CLUSTER_MEMBERS=[]
SCRIPT
Vagrant.configure(VAGRANTFILE_API_VERSION) do |config|
reldata = openedx_releases[rel]
reldata = openedx_releases[openedx_release]
if Hash == reldata.class
boxname = openedx_releases[rel][:name]
boxfile = openedx_releases[rel].fetch(:file, "#{boxname}.box")
boxname = openedx_releases[openedx_release][:name]
boxfile = openedx_releases[openedx_release].fetch(:file, "#{boxname}.box")
else
boxname = reldata
boxfile = "#{boxname}.box"
......@@ -153,5 +159,5 @@ Vagrant.configure(VAGRANTFILE_API_VERSION) do |config|
# Assume that the base box has the edx_ansible role installed
# We can then tell the Vagrant instance to update itself.
config.vm.provision "shell", inline: $script, args: rel
config.vm.provision "shell", inline: $script
end
......@@ -155,6 +155,12 @@ Vagrant.configure(VAGRANTFILE_API_VERSION) do |config|
# Allow DNS to work for Ubuntu 12.10 host
# http://askubuntu.com/questions/238040/how-do-i-fix-name-service-for-vagrant-client
vb.customize ["modifyvm", :id, "--natdnshostresolver1", "on"]
# Virtio is faster, but the box needs to have support for it. We didn't
# have support in the boxes before Ficus.
if !(boxname.include?("dogwood") || boxname.include?("eucalyptus"))
vb.customize ['modifyvm', :id, '--nictype1', 'virtio']
end
end
# Use vagrant-vbguest plugin to make sure Guest Additions are in sync
......
......@@ -40,14 +40,14 @@ openedx_releases = {
}
openedx_releases.default = "eucalyptus-fullstack-2016-09-01"
rel = ENV['OPENEDX_RELEASE']
openedx_release = ENV['OPENEDX_RELEASE']
Vagrant.configure(VAGRANTFILE_API_VERSION) do |config|
reldata = openedx_releases[rel]
reldata = openedx_releases[openedx_release]
if Hash == reldata.class
boxname = openedx_releases[rel][:name]
boxfile = openedx_releases[rel].fetch(:file, "#{boxname}.box")
boxname = openedx_releases[openedx_release][:name]
boxfile = openedx_releases[openedx_release].fetch(:file, "#{boxname}.box")
else
boxname = reldata
boxfile = "#{boxname}.box"
......