OpenEdx / configuration

Commit 5c5b041f
Authored Jan 22, 2014 by John Jarvis

Merge pull request #671 from edx/jarv/ansible-1.4

removing role identifiers for ansible 1.4

Parents: 0553e057, 4499b90f

Showing 82 changed files with 792 additions and 786 deletions
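The change is mechanical but repo-wide: every task and handler drops its "rolename | " prefix from `name:`, and every `notify:` that points at a renamed handler is updated in the same stroke. Ansible 1.4 prints the owning role as part of task output on its own, so the hand-maintained prefixes would have displayed twice. A minimal before/after sketch of the pattern (hypothetical role and service names, not taken from this repo):

    # before: role name duplicated by hand in every task and notify
    - name: myrole | install system packages
      apt: pkg=foo state=present
      notify: myrole | restart myservice

    # after: plain names; Ansible 1.4 shows the role in output by itself
    - name: install system packages
      apt: pkg=foo state=present
      notify: restart myservice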
playbooks/library/ec2_local  +48 -56
playbooks/roles/analytics-server/handlers/main.yml  +2 -2
playbooks/roles/analytics-server/tasks/deploy.yml  +10 -10
playbooks/roles/analytics-server/tasks/main.yml  +11 -11
playbooks/roles/analytics/handlers/main.yml  +2 -2
playbooks/roles/analytics/tasks/deploy.yml  +10 -10
playbooks/roles/analytics/tasks/main.yml  +11 -11
playbooks/roles/ansible-role/tasks/main.yml  +4 -4
playbooks/roles/ansible-role/templates/handlers/main.yml.j2  +1 -1
playbooks/roles/ansible-role/templates/tasks/main.yml.j2  +2 -3
playbooks/roles/apache/handlers/main.yml  +1 -1
playbooks/roles/apache/tasks/apache_site.yml  +4 -4
playbooks/roles/apache/tasks/main.yml  +7 -7
playbooks/roles/automated/tasks/main.yml  +12 -12
playbooks/roles/browsers/tasks/main.yml  +11 -11
playbooks/roles/certs/handlers/main.yml  +1 -1
playbooks/roles/certs/tasks/deploy.yml  +19 -19
playbooks/roles/certs/tasks/main.yml  +10 -10
playbooks/roles/common/handlers/main.yml  +1 -1
playbooks/roles/common/tasks/main.yml  +14 -14
playbooks/roles/datadog/handlers/main.yml  +1 -1
playbooks/roles/datadog/tasks/main.yml  +8 -8
playbooks/roles/demo/tasks/deploy.yml  +6 -6
playbooks/roles/demo/tasks/main.yml  +1 -1
playbooks/roles/devpi/handlers/main.yml  +1 -1
playbooks/roles/devpi/tasks/main.yml  +16 -16
playbooks/roles/discern/handlers/main.yml  +1 -1
playbooks/roles/discern/tasks/deploy.yml  +24 -24
playbooks/roles/discern/tasks/main.yml  +14 -14
playbooks/roles/edx_ansible/tasks/deploy.yml  +8 -8
playbooks/roles/edx_ansible/tasks/main.yml  +3 -3
playbooks/roles/edxapp/handlers/main.yml  +2 -2
playbooks/roles/edxapp/tasks/deploy.yml  +0 -0
playbooks/roles/edxapp/tasks/main.yml  +21 -21
playbooks/roles/edxapp/tasks/python_sandbox_env.yml  +21 -21
playbooks/roles/edxapp/tasks/service_variant_config.yml  +15 -15
playbooks/roles/edxlocal/tasks/main.yml  +6 -6
playbooks/roles/elasticsearch/tasks/main.yml  +3 -3
playbooks/roles/forum/handlers/main.yml  +1 -1
playbooks/roles/forum/tasks/deploy.yml  +9 -9
playbooks/roles/forum/tasks/main.yml  +6 -6
playbooks/roles/forum/tasks/test.yml  +4 -4
playbooks/roles/gh_mirror/tasks/main.yml  +7 -7
playbooks/roles/gh_users/tasks/main.yml  +6 -6
playbooks/roles/gluster/tasks/main.yml  +11 -11
playbooks/roles/haproxy/handlers/main.yml  +3 -3
playbooks/roles/haproxy/tasks/main.yml  +10 -10
playbooks/roles/jenkins_master/handlers/main.yml  +3 -3
playbooks/roles/jenkins_master/tasks/main.yml  +34 -34
playbooks/roles/jenkins_worker/tasks/jscover.yml  +5 -5
playbooks/roles/jenkins_worker/tasks/python.yml  +10 -10
playbooks/roles/jenkins_worker/tasks/system.yml  +8 -8
playbooks/roles/launch_ec2/tasks/main.yml  +13 -13
playbooks/roles/legacy_ora/tasks/main.yml  +4 -4
playbooks/roles/local_dev/tasks/main.yml  +12 -12
playbooks/roles/mongo/tasks/main.yml  +14 -14
playbooks/roles/nginx/handlers/main.yml  +2 -2
playbooks/roles/nginx/tasks/main.yml  +26 -26
playbooks/roles/notifier/handlers/main.yml  +2 -2
playbooks/roles/notifier/tasks/deploy.yml  +13 -13
playbooks/roles/notifier/tasks/main.yml  +18 -18
playbooks/roles/ora/handlers/main.yml  +2 -2
playbooks/roles/ora/tasks/deploy.yml  +29 -29
playbooks/roles/ora/tasks/ease.yml  +18 -18
playbooks/roles/ora/tasks/main.yml  +15 -15
playbooks/roles/oraclejdk/tasks/main.yml  +5 -5
playbooks/roles/rabbitmq/tasks/main.yml  +22 -22
playbooks/roles/rbenv/tasks/main.yml  +18 -18
playbooks/roles/s3fs/tasks/main.yml  +10 -10
playbooks/roles/shibboleth/handlers/main.yml  +1 -1
playbooks/roles/shibboleth/tasks/main.yml  +10 -10
playbooks/roles/splunkforwarder/handlers/main.yml  +1 -1
playbooks/roles/splunkforwarder/tasks/main.yml  +18 -18
playbooks/roles/supervisor/tasks/main.yml  +12 -12
playbooks/roles/xqueue/handlers/main.yml  +1 -1
playbooks/roles/xqueue/tasks/deploy.yml  +17 -17
playbooks/roles/xqueue/tasks/main.yml  +8 -8
playbooks/roles/xserver/handlers/main.yml  +1 -1
playbooks/roles/xserver/tasks/deploy.yml  +21 -21
playbooks/roles/xserver/tasks/main.yml  +10 -10
requirements.txt  +4 -4
util/jenkins/ansible-provision.sh  +16 -1
playbooks/library/ec2_local
@@ -121,7 +121,7 @@ options:
     required: False
     default: 1
     aliases: []
-  monitor:
+  monitoring:
     version_added: "1.1"
     description:
       - enable detailed monitoring (CloudWatch) for instance
@@ -185,7 +185,7 @@ options:
     default: 'present'
     aliases: []
   root_ebs_size:
-    version_added: "1.4"
+    version_added: "1.5"
    desription:
      - size of the root volume in gigabytes
    required: false
@@ -193,7 +193,7 @@ options:
     aliases: []

 requirements: [ "boto" ]
-author: Seth Vidal, Tim Gerla, Lester Wade, John Jarvis
+author: Seth Vidal, Tim Gerla, Lester Wade
 '''

 EXAMPLES = '''
@@ -210,17 +210,6 @@ EXAMPLES = '''
     group: webserver
     count: 3

-# Basic provisioning example with setting the root volume size to 50GB
-- local_action:
-    module: ec2
-    keypair: mykey
-    instance_type: c1.medium
-    image: emi-40603AD1
-    wait: yes
-    group: webserver
-    count: 3
-    root_ebs_size: 50
-
 # Advanced example with tagging and CloudWatch
 - local_action:
     module: ec2
@@ -231,7 +220,8 @@ EXAMPLES = '''
     wait: yes
     wait_timeout: 500
     count: 5
-    instance_tags: '{"db":"postgres"}' monitoring=yes'
+    instance_tags: '{"db":"postgres"}'
+    monitoring=yes
@@ -243,7 +233,8 @@ local_action:
     wait: yes
     wait_timeout: 500
     count: 5
-    instance_tags: '{"db":"postgres"}' monitoring=yes'
+    instance_tags: '{"db":"postgres"}'
+    monitoring=yes

 # VPC example
 - local_action:
@@ -406,6 +397,7 @@ def create_instances(module, ec2):
     else:
         bdm = None
+
     # group_id and group_name are exclusive of each other
     if group_id and group_name:
         module.fail_json(msg = str("Use only one type of parameter (group_name) or (group_id)"))
@@ -416,9 +408,7 @@ def create_instances(module, ec2):
     if group_name:
         grp_details = ec2.get_all_security_groups()
         if type(group_name) == list:
-            # FIXME: this should be a nice list comprehension
-            # also not py 2.4 compliant
-            group_id = list(filter(lambda grp: str(grp.id) if str(tmp) in str(grp) else None, grp_details) for tmp in group_name)
+            group_id = [str(grp.id) for grp in grp_details if str(grp.name) in group_name]
         elif type(group_name) == str:
             for grp in grp_details:
                 if str(group_name) in str(grp):
@@ -501,7 +491,7 @@ def create_instances(module, ec2):
     if instance_tags:
         try:
-            ec2.create_tags(instids, module.from_json(instance_tags))
+            ec2.create_tags(instids, instance_tags)
         except boto.exception.EC2ResponseError as e:
             module.fail_json(msg = "%s: %s" % (e.error_code, e.error_message))
@@ -558,6 +548,10 @@ def terminate_instances(module, ec2, instance_ids):
     """

+    # Whether to wait for termination to complete before returning
+    wait = module.params.get('wait')
+    wait_timeout = int(module.params.get('wait_timeout'))
+
     changed = False
     instance_dict_array = []
@@ -576,8 +570,30 @@ def terminate_instances(module, ec2, instance_ids):
             module.fail_json(msg='Unable to terminate instance {0}, error: {1}'.format(inst.id, e))
         changed = True

-    return (changed, instance_dict_array, terminated_instance_ids)
+    # wait here until the instances are 'terminated'
+    if wait:
+        num_terminated = 0
+        wait_timeout = time.time() + wait_timeout
+        while wait_timeout > time.time() and num_terminated < len(terminated_instance_ids):
+            response = ec2.get_all_instances( \
+                instance_ids=terminated_instance_ids, \
+                filters={'instance-state-name':'terminated'})
+            try:
+                num_terminated = len(response.pop().instances)
+            except Exception, e:
+                # got a bad response of some sort, possibly due to
+                # stale/cached data. Wait a second and then try again
+                time.sleep(1)
+                continue
+
+            if num_terminated < len(terminated_instance_ids):
+                time.sleep(5)
+
+        # waiting took too long
+        if wait_timeout < time.time() and num_terminated < len(terminated_instance_ids):
+            module.fail_json(msg = "wait for instance termination timeout on %s" % time.asctime())
+
+    return (changed, instance_dict_array, terminated_instance_ids)


 def main():
@@ -593,16 +609,16 @@ def main():
             image = dict(),
             kernel = dict(),
             count = dict(default='1'),
-            monitoring = dict(choices=BOOLEANS, default=False),
+            monitoring = dict(type='bool', default=False),
             ramdisk = dict(),
-            wait = dict(choices=BOOLEANS, default=False),
+            wait = dict(type='bool', default=False),
             wait_timeout = dict(default=300),
             ec2_url = dict(),
-            aws_secret_key = dict(aliases=['ec2_secret_key', 'secret_key'], no_log=True),
-            aws_access_key = dict(aliases=['ec2_access_key', 'access_key']),
+            ec2_secret_key = dict(aliases=['aws_secret_key', 'secret_key'], no_log=True),
+            ec2_access_key = dict(aliases=['aws_access_key', 'access_key']),
             placement_group = dict(),
             user_data = dict(),
-            instance_tags = dict(),
+            instance_tags = dict(type='dict'),
             vpc_subnet_id = dict(),
             private_ip = dict(),
             instance_profile_name = dict(),
@@ -612,33 +628,9 @@ def main():
         )
     )

-    ec2_url = module.params.get('ec2_url')
-    aws_secret_key = module.params.get('aws_secret_key')
-    aws_access_key = module.params.get('aws_access_key')
-    region = module.params.get('region')
-
-    # allow eucarc environment variables to be used if ansible vars aren't set
-    if not ec2_url and 'EC2_URL' in os.environ:
-        ec2_url = os.environ['EC2_URL']
-    if not aws_secret_key:
-        if 'AWS_SECRET_KEY' in os.environ:
-            aws_secret_key = os.environ['AWS_SECRET_KEY']
-        elif 'EC2_SECRET_KEY' in os.environ:
-            aws_secret_key = os.environ['EC2_SECRET_KEY']
-    if not aws_access_key:
-        if 'AWS_ACCESS_KEY' in os.environ:
-            aws_access_key = os.environ['AWS_ACCESS_KEY']
-        elif 'EC2_ACCESS_KEY' in os.environ:
-            aws_access_key = os.environ['EC2_ACCESS_KEY']
-    if not region:
-        if 'AWS_REGION' in os.environ:
-            region = os.environ['AWS_REGION']
-        elif 'EC2_REGION' in os.environ:
-            region = os.environ['EC2_REGION']
+    # def get_ec2_creds(module):
+    #   return ec2_url, ec2_access_key, ec2_secret_key, region
+    ec2_url, aws_access_key, aws_secret_key, region = get_ec2_creds(module)

     # If we have a region specified, connect to its endpoint.
     if region:
@@ -672,8 +664,8 @@ def main():
     module.exit_json(changed=changed, instance_ids=new_instance_ids, instances=instance_dict_array)

-# this is magic, see lib/ansible/module_common.py
-#<<INCLUDE_ANSIBLE_MODULE_COMMON>>
+# import module snippets
+from ansible.module_utils.basic import *
+from ansible.module_utils.ec2 import *

 main()
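Net effect of the argument-spec changes above: `monitoring` and `wait` are validated as booleans rather than checked against `BOOLEANS`, `instance_tags` is a native dict instead of a JSON string, credentials move to `ec2_*` parameter names with the old `aws_*` names kept as aliases, and `wait`/`wait_timeout` now also govern termination. A usage sketch under those assumptions (key pair, AMI, and tag values are placeholders, not from this repo):

    - local_action:
        module: ec2_local
        keypair: mykey              # placeholder key pair name
        instance_type: m1.small
        image: ami-00000000         # placeholder AMI
        monitoring: yes             # validated as type='bool'
        wait: yes
        wait_timeout: 300
        count: 1
        instance_tags:              # now a dict, not '{"db":"postgres"}'
          db: postgres
        ec2_access_key: "{{ lookup('env', 'AWS_ACCESS_KEY') }}"  # aws_access_key still accepted as an alias
        ec2_secret_key: "{{ lookup('env', 'AWS_SECRET_KEY') }}"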
playbooks/roles/analytics-server/handlers/main.yml

@@ -15,8 +15,8 @@
 #
 #
-- name: analytics-server | stop the analytics service
+- name: stop the analytics service
   service: name=analytics state=stopped

-- name: analytics-server | start the analytics service
+- name: start the analytics service
   service: name=analytics state=started
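Handlers are looked up by exact name, so a handler rename only holds together if every `notify:` referencing it changes in the same commit, which is exactly what the task files below do. Sketched with this role's own names:

    # playbooks/roles/analytics-server/handlers/main.yml
    - name: start the analytics service
      service: name=analytics state=started

    # any task that triggers it must use the identical string
    - name: install application requirements
      pip: requirements={{ as_requirements_file }} virtualenv={{ as_venv_dir }} state=present
      notify: start the analytics service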
playbooks/roles/analytics-server/tasks/deploy.yml

 #
 # TODO: Needed while this repo is private
 #
-- name: analytics-server | upload ssh script
+- name: upload ssh script
   template:
     src=tmp/{{ as_role_name }}.git_ssh.sh.j2 dest={{ as_git_ssh }}
     force=yes owner=root group=adm mode=750

@@ -13,7 +13,7 @@
 #
 # TODO: Needed while this repo is private
 #
-- name: analytics-server | install read-only ssh key required for checkout
+- name: install read-only ssh key required for checkout
   copy:
     src={{ as_git_identity_path }} dest={{ as_git_identity_dest }}
     force=yes owner=ubuntu group=adm mode=0600

@@ -22,14 +22,14 @@
     - install
     - update

-- name: analytics-server | checkout code
+- name: checkout code
   git:
     dest={{ as_code_dir }} repo={{ as_source_repo }}
     version={{ as_version }} force=true
   environment:
     GIT_SSH: $as_git_ssh
-  notify: analytics-server | restart the analytics service
-  notify: analytics-server | start the analytics service
+  notify: restart the analytics service
+  notify: start the analytics service
   tags:
     - analytics-server
     - install

@@ -38,7 +38,7 @@
 #
 # TODO: Needed while this repo is private
 #
-- name: analytics-server | update src permissions
+- name: update src permissions
   file:
     path={{ as_code_dir }} state=directory owner={{ as_user }}
     group={{ as_web_user }} mode=2750 recurse=yes

@@ -50,7 +50,7 @@
 #
 # TODO: Needed while this repo is private
 #
-- name: analytics-server | remove read-only ssh key for the content repo
+- name: remove read-only ssh key for the content repo
   file: path={{ as_git_identity_dest }} state=absent
   tags:
     - analytics-server

@@ -60,20 +60,20 @@
 #
 # TODO: Needed while this repo is private
 #
-- name: analytics-server | remove ssh script
+- name: remove ssh script
   file: path={{ as_git_ssh }} state=absent
   tags:
     - analytics-server
     - install
     - update

-- name: analytics-server | install application requirements
+- name: install application requirements
   pip:
     requirements={{ as_requirements_file }}
     virtualenv={{ as_venv_dir }} state=present
   sudo: true
   sudo_user: "{{ as_user }}"
-  notify: analytics-server | start the analytics service
+  notify: start the analytics service
   tags:
     - analytics-server
     - install
playbooks/roles/analytics-server/tasks/main.yml

@@ -37,14 +37,14 @@
 #   - common
 #   - analytics-server
 #
-- name: analytics-server | install system packages
+- name: install system packages
   apt: pkg={{','.join(as_debian_pkgs)}} state=present
   tags:
     - analytics-server
     - install
     - update

-- name: analytics-server | create analytics-server user {{ as_user }}
+- name: create analytics-server user {{ as_user }}
   user:
     name={{ as_user }} state=present shell=/bin/bash
     home={{ as_home }} createhome=yes

@@ -53,7 +53,7 @@
     - install
     - update

-- name: analytics-server | setup the analytics-server env
+- name: setup the analytics-server env
   template:
     src=opt/wwc/analytics-server/{{ as_env }}.j2
     dest={{ as_home }}/{{ as_env }}

@@ -63,7 +63,7 @@
     - install
     - update

-- name: analytics-server | drop a bash_profile
+- name: drop a bash_profile
   copy: >
     src=../../common/files/bash_profile
     dest={{ as_home }}/.bash_profile

@@ -71,7 +71,7 @@
     group={{ as_user }}

 # Awaiting next ansible release.
-#- name: analytics-server | ensure .bashrc exists
+#- name: ensure .bashrc exists
 #  file: path={{ as_home }}/.bashrc state=touch
 #  sudo: true
 #  sudo_user: "{{ as_user }}"

@@ -80,7 +80,7 @@
 #    - install
 #    - update

-- name: analytics-server | ensure .bashrc exists
+- name: ensure .bashrc exists
   shell: touch {{ as_home }}/.bashrc
   sudo: true
   sudo_user: "{{ as_user }}"

@@ -89,7 +89,7 @@
     - install
     - update

-- name: analytics-server | add source of analytics-server_env to .bashrc
+- name: add source of analytics-server_env to .bashrc
   lineinfile:
     dest={{ as_home }}/.bashrc
     regexp='. {{ as_home }}/analytics-server_env'

@@ -99,7 +99,7 @@
     - install
     - update

-- name: analytics-server | add source venv to .bashrc
+- name: add source venv to .bashrc
   lineinfile:
     dest={{ as_home }}/.bashrc
     regexp='. {{ as_venv_dir }}/bin/activate'

@@ -109,7 +109,7 @@
     - install
     - update

-- name: analytics-server | install global python requirements
+- name: install global python requirements
   pip: name={{ item }}
   with_items: as_pip_pkgs
   tags:

@@ -117,7 +117,7 @@
     - install
     - update

-- name: analytics-server | create config
+- name: create config
   template:
     src=opt/wwc/analytics.auth.json.j2
     dest=/opt/wwc/analytics.auth.json

@@ -128,7 +128,7 @@
     - install
     - update

-- name: analytics-server | install service
+- name: install service
   template:
     src=etc/init/analytics.conf.j2 dest=/etc/init/analytics.conf
     owner=root group=root
playbooks/roles/analytics/handlers/main.yml

@@ -15,8 +15,8 @@
 #
 #
-- name: analytics | stop the analytics service
+- name: stop the analytics service
   service: name=analytics state=stopped

-- name: analytics | start the analytics service
+- name: start the analytics service
   service: name=analytics state=started
playbooks/roles/analytics/tasks/deploy.yml

 #
 # TODO: Needed while this repo is private
 #
-- name: analytics | upload ssh script
+- name: upload ssh script
   template:
     src=tmp/{{ analytics_role_name }}.git_ssh.sh.j2 dest={{ analytics_git_ssh }}
     force=yes owner=root group=adm mode=750

@@ -13,7 +13,7 @@
 #
 # TODO: Needed while this repo is private
 #
-- name: analytics | install read-only ssh key required for checkout
+- name: install read-only ssh key required for checkout
   copy:
     src={{ analytics_git_identity_path }} dest={{ analytics_git_identity_dest }}
     force=yes owner=ubuntu group=adm mode=0600

@@ -22,14 +22,14 @@
     - install
     - update

-- name: analytics | checkout code
+- name: checkout code
   git:
     dest={{ analytics_code_dir }} repo={{ analytics_source_repo }}
     version={{ analytics_version }} force=true
   environment:
     GIT_SSH: $analytics_git_ssh
-  notify: analytics | restart the analytics service
-  notify: analytics | start the analytics service
+  notify: restart the analytics service
+  notify: start the analytics service
   tags:
     - analytics
     - install

@@ -38,7 +38,7 @@
 #
 # TODO: Needed while this repo is private
 #
-- name: analytics | update src permissions
+- name: update src permissions
   file:
     path={{ analytics_code_dir }} state=directory owner={{ analytics_user }}
     group={{ analytics_web_user }} mode=2750 recurse=yes

@@ -50,7 +50,7 @@
 #
 # TODO: Needed while this repo is private
 #
-- name: analytics | remove read-only ssh key for the content repo
+- name: remove read-only ssh key for the content repo
   file: path={{ analytics_git_identity_dest }} state=absent
   tags:
     - analytics

@@ -60,20 +60,20 @@
 #
 # TODO: Needed while this repo is private
 #
-- name: analytics | remove ssh script
+- name: remove ssh script
   file: path={{ analytics_git_ssh }} state=absent
   tags:
     - analytics
     - install
     - update

-- name: analytics | install application requirements
+- name: install application requirements
   pip:
     requirements={{ analytics_requirements_file }}
     virtualenv={{ analytics_venv_dir }} state=present
   sudo: true
   sudo_user: "{{ analytics_user }}"
-  notify: analytics | start the analytics service
+  notify: start the analytics service
   tags:
     - analytics
     - install
playbooks/roles/analytics/tasks/main.yml

@@ -37,14 +37,14 @@
 #   - common
 #   - analytics
 #
-- name: analytics | install system packages
+- name: install system packages
   apt: pkg={{','.join(analytics_debian_pkgs)}} state=present
   tags:
     - analytics
     - install
     - update

-- name: analytics | create analytics user {{ analytics_user }}
+- name: create analytics user {{ analytics_user }}
   user:
     name={{ analytics_user }} state=present shell=/bin/bash
     home={{ analytics_home }} createhome=yes

@@ -53,7 +53,7 @@
     - install
     - update

-- name: analytics | setup the analytics env
+- name: setup the analytics env
   template:
     src=opt/wwc/analytics/{{ analytics_env }}.j2
     dest={{ analytics_home }}/{{ analytics_env }}

@@ -63,7 +63,7 @@
     - install
     - update

-- name: analytics | drop a bash_profile
+- name: drop a bash_profile
   copy: >
     src=../../common/files/bash_profile
     dest={{ analytics_home }}/.bash_profile

@@ -71,7 +71,7 @@
     group={{ analytics_user }}

 # Awaiting next ansible release.
-#- name: analytics | ensure .bashrc exists
+#- name: ensure .bashrc exists
 #  file: path={{ analytics_home }}/.bashrc state=touch
 #  sudo: true
 #  sudo_user: "{{ analytics_user }}"

@@ -80,7 +80,7 @@
 #    - install
 #    - update

-- name: analytics | ensure .bashrc exists
+- name: ensure .bashrc exists
   shell: touch {{ analytics_home }}/.bashrc
   sudo: true
   sudo_user: "{{ analytics_user }}"

@@ -89,7 +89,7 @@
     - install
     - update

-- name: analytics | add source of analytics_env to .bashrc
+- name: add source of analytics_env to .bashrc
   lineinfile:
     dest={{ analytics_home }}/.bashrc
     regexp='. {{ analytics_home }}/analytics_env'

@@ -99,7 +99,7 @@
     - install
     - update

-- name: analytics | add source venv to .bashrc
+- name: add source venv to .bashrc
   lineinfile:
     dest={{ analytics_home }}/.bashrc
     regexp='. {{ analytics_venv_dir }}/bin/activate'

@@ -109,7 +109,7 @@
     - install
     - update

-- name: analytics | install global python requirements
+- name: install global python requirements
   pip: name={{ item }}
   with_items: analytics_pip_pkgs
   tags:

@@ -117,7 +117,7 @@
     - install
     - update

-- name: analytics | create config
+- name: create config
   template:
     src=opt/wwc/analytics.auth.json.j2
     dest=/opt/wwc/analytics.auth.json

@@ -128,7 +128,7 @@
     - install
     - update

-- name: analytics | install service
+- name: install service
   template:
     src=etc/init/analytics.conf.j2 dest=/etc/init/analytics.conf
     owner=root group=root
playbooks/roles/ansible-role/tasks/main.yml

 ---
-- name: ansible-role | check if the role exists
+- name: check if the role exists
   command: test -d roles/{{ role_name }}
   register: role_exists
   ignore_errors: yes

-- name: ansible-role | prompt for overwrite
+- name: prompt for overwrite
   pause: prompt="Role {{ role_name }} exists. Overwrite? Touch any key to continue or <CTRL>-c, then a, to abort."
   when: role_exists | success

-- name: ansible-role | create role directories
+- name: create role directories
   file: path=roles/{{role_name}}/{{ item }} state=directory
   with_items:
     - tasks

@@ -19,7 +19,7 @@
     - templates
     - files

-- name: ansible-role | make an ansible role
+- name: make an ansible role
   template: src={{ item }}/main.yml.j2 dest=roles/{{ role_name }}/{{ item }}/main.yml
   with_items:
     - tasks
playbooks/roles/ansible-role/templates/handlers/main.yml.j2

@@ -7,5 +7,5 @@
 # Overview:
 #
 #
-- name: {{ role_name }} | notify me
+- name: notify me
   debug: msg="stub handler"
playbooks/roles/ansible-role/templates/tasks/main.yml.j2

@@ -14,6 +14,6 @@
 #
 #
-- name: {{ role_name }} | stub ansible task
+- name: stub ansible task
   debug: msg="This is a stub task created by the ansible-role role"
-  notify: {{ role_name }} | notify me
\ No newline at end of file
+  notify: notify me
playbooks/roles/apache/handlers/main.yml

 ---
-- name: apache | restart apache
+- name: restart apache
   service: name=apache2 state=restarted
playbooks/roles/apache/tasks/apache_site.yml

 # Requires nginx package
 ---
-- name: apache | Copying apache config {{ site_name }}
+- name: Copying apache config {{ site_name }}
   template: src={{ item }} dest=/etc/apache2/sites-available/{{ site_name }}
   first_available_file:
     - "{{ local_dir }}/apache/templates/{{ site_name }}.j2"
     # seems like paths in first_available_file must be relative to the playbooks dir
     - "roles/apache/templates/{{ site_name }}.j2"
-  notify: apache | restart apache
+  notify: restart apache
   when: apache_role_run is defined
   tags:
     - apache
     - update

-- name: apache | Creating apache2 config link {{ site_name }}
+- name: Creating apache2 config link {{ site_name }}
   file: src=/etc/apache2/sites-available/{{ site_name }} dest=/etc/apache2/sites-enabled/{{ site_name }} state={{ state }} owner=root group=root
-  notify: apache | restart apache
+  notify: restart apache
   when: apache_role_run is defined
   tags:
     - apache
playbooks/roles/apache/tasks/main.yml

 #Installs apache and runs the lms wsgi
 ---
-- name: apache | Installs apache and mod_wsgi from apt
+- name: Installs apache and mod_wsgi from apt
   apt: pkg={{item}} install_recommends=no state=present update_cache=yes
   with_items:
     - apache2
     - libapache2-mod-wsgi
-  notify: apache | restart apache
+  notify: restart apache
   tags:
     - apache
     - install

-- name: apache | disables default site
+- name: disables default site
   command: a2dissite 000-default
-  notify: apache | restart apache
+  notify: restart apache
   tags:
     - apache
     - install

-- name: apache | rewrite apache ports conf
+- name: rewrite apache ports conf
   template: dest=/etc/apache2/ports.conf src=ports.conf.j2 owner=root group=root
-  notify: apache | restart apache
+  notify: restart apache
   tags:
     - apache
     - install

-- name: apache | Register the fact that apache role has run
+- name: Register the fact that apache role has run
   command: echo True
   register: apache_role_run
   tags:
playbooks/roles/automated/tasks/main.yml

@@ -57,7 +57,7 @@
 - fail: automated_sudoers_dest required for role
   when: automated_sudoers_dest is not defined

-- name: automated | create automated user
+- name: create automated user
   user:
     name={{ automated_user }} state=present shell=/bin/rbash
     home={{ automated_home }} createhome=yes

@@ -66,7 +66,7 @@
     - install
     - update

-- name: automated | create sudoers file from file
+- name: create sudoers file from file
   copy:
     dest=/etc/sudoers.d/{{ automated_sudoers_dest }}
     src={{ automated_sudoers_file }} owner="root"

@@ -77,7 +77,7 @@
     - install
     - update

-- name: automated | create sudoers file from template
+- name: create sudoers file from template
   template:
     dest=/etc/sudoers.d/{{ automated_sudoers_dest }}
     src={{ automated_sudoers_template }} owner="root"

@@ -92,7 +92,7 @@
 # Prevent user from updating their PATH and
 # environment.
 #
-- name: automated | update shell file mode
+- name: update shell file mode
   file:
     path={{ automated_home }}/{{ item }} mode=0640
     state=file owner="root" group={{ automated_user }}

@@ -105,7 +105,7 @@
     - .profile
     - .bash_logout

-- name: automated | change ~automated ownership
+- name: change ~automated ownership
   file:
     path={{ automated_home }} mode=0750 state=directory
     owner="root" group={{ automated_user }}

@@ -119,7 +119,7 @@
 # and that links that were remove from the role are
 # removed.
 #
-- name: automated | remove ~automated/bin directory
+- name: remove ~automated/bin directory
   file: path={{ automated_home }}/bin state=absent
   ignore_errors: yes

@@ -128,7 +128,7 @@
     - install
     - update

-- name: automated | create ~automated/bin directory
+- name: create ~automated/bin directory
   file:
     path={{ automated_home }}/bin state=directory mode=0750
     owner="root" group={{ automated_user }}

@@ -137,7 +137,7 @@
     - install
     - update

-- name: automated | re-write .profile
+- name: re-write .profile
   copy:
     src=home/automator/.profile
     dest={{ automated_home }}/.profile

@@ -149,7 +149,7 @@
     - install
     - update

-- name: automated | re-write .bashrc
+- name: re-write .bashrc
   copy:
     src=home/automator/.bashrc
     dest={{ automated_home }}/.bashrc

@@ -161,7 +161,7 @@
     - install
     - update

-- name: automated | create .ssh directory
+- name: create .ssh directory
   file:
     path={{ automated_home }}/.ssh state=directory mode=0700
     owner={{ automated_user }} group={{ automated_user }}

@@ -170,7 +170,7 @@
     - install
     - update

-- name: automated | copy key to .ssh/authorized_keys
+- name: copy key to .ssh/authorized_keys
   copy:
     src=home/automator/.ssh/authorized_keys
     dest={{ automated_home }}/.ssh/authorized_keys mode=0600

@@ -180,7 +180,7 @@
     - install
     - update

-- name: automated | create allowed command links
+- name: create allowed command links
   file:
     src={{ item }} dest={{ automated_home }}/bin/{{ item.split('/').pop() }}
     state=link
playbooks/roles/browsers/tasks/main.yml

 # Install browsers required to run the JavaScript
 # and acceptance test suite locally without a display
 ---
-- name: browsers | install system packages
+- name: install system packages
   apt: pkg={{','.join(browser_deb_pkgs)}}
     state=present update_cache=yes

-- name: browsers | download browser debian packages from S3
+- name: download browser debian packages from S3
   get_url: dest="/tmp/{{ item.name }}" url="{{ item.url }}"
   register: download_deb
-  with_items: "{{ browser_s3_deb_pkgs }}"
+  with_items: browser_s3_deb_pkgs

-- name: browsers | install browser debian packages
+- name: install browser debian packages
   shell: gdebi -nq /tmp/{{ item.name }}
   when: download_deb.changed
-  with_items: "{{ browser_s3_deb_pkgs }}"
+  with_items: browser_s3_deb_pkgs

-- name: browsers | Install ChromeDriver
+- name: Install ChromeDriver
   get_url:
     url={{ chromedriver_url }}
     dest=/var/tmp/chromedriver_{{ chromedriver_version }}.zip

-- name: browsers | Install ChromeDriver 2
+- name: Install ChromeDriver 2
   shell: unzip /var/tmp/chromedriver_{{ chromedriver_version }}.zip
     chdir=/var/tmp

-- name: browsers | Install ChromeDriver 3
+- name: Install ChromeDriver 3
   shell: mv /var/tmp/chromedriver /usr/local/bin/chromedriver

-- name: browsers | Install Chromedriver 4
+- name: Install Chromedriver 4
   file: path=/usr/local/bin/chromedriver mode=0755

-- name: browsers | create xvfb upstart script
+- name: create xvfb upstart script
   template: src=xvfb.conf.j2 dest=/etc/init/xvfb.conf owner=root group=root

-- name: browsers | start xvfb
+- name: start xvfb
   shell: start xvfb
   ignore_errors: yes
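One change in this file goes beyond the name prefixes (it appears twice): `with_items: "{{ browser_s3_deb_pkgs }}"` becomes the bare `with_items: browser_s3_deb_pkgs`. Referencing the list variable by name was the documented loop style at the time; wrapping it in "{{ ... }}" risked the list arriving as its string representation under the Ansible 1.4-era template handling. A sketch of the preferred form (the package entry is hypothetical):

    vars:
      browser_s3_deb_pkgs:
        - { name: "firefox.deb", url: "https://example.com/firefox.deb" }  # placeholder entry

    tasks:
      - name: download browser debian packages from S3
        get_url: dest="/tmp/{{ item.name }}" url="{{ item.url }}"
        with_items: browser_s3_deb_pkgs   # bare list name, per this commit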
playbooks/roles/certs/handlers/main.yml

@@ -14,7 +14,7 @@
 # Overview:
 #
-- name: certs | restart certs
+- name: restart certs
   supervisorctl_local: >
     name=certs
     supervisorctl_path={{ supervisor_ctl }}
playbooks/roles/certs/tasks/deploy.yml

 ---
-- name: certs | create certificate application config
+- name: create certificate application config
   template: >
     src=certs.env.json.j2
     dest={{ certs_app_dir }}/env.json
   sudo_user: "{{ certs_user }}"
-  notify: certs | restart certs
+  notify: restart certs

-- name: certs | create certificate auth file
+- name: create certificate auth file
   template: >
     src=certs.auth.json.j2
     dest={{ certs_app_dir }}/auth.json
   sudo_user: "{{ certs_user }}"
-  notify: certs | restart certs
+  notify: restart certs

-- name: certs | writing supervisor script for certificates
+- name: writing supervisor script for certificates
   template: >
     src=certs.conf.j2 dest={{ supervisor_cfg_dir }}/certs.conf
     owner={{ supervisor_user }} mode=0644
-  notify: certs | restart certs
+  notify: restart certs

-- name: certs | create ssh script for git
+- name: create ssh script for git
   template: >
     src={{ certs_git_ssh|basename }}.j2 dest={{ certs_git_ssh }}
     owner={{ certs_user }} mode=750
-  notify: certs | restart certs
+  notify: restart certs

-- name: certs | install read-only ssh key for the certs repo
+- name: install read-only ssh key for the certs repo
   copy: >
     src={{ CERTS_LOCAL_GIT_IDENTITY }} dest={{ certs_git_identity }}
     force=yes owner={{ certs_user }} mode=0600
-  notify: certs | restart certs
+  notify: restart certs

-- name: certs | checkout certificates repo into {{ certs_code_dir }}
+- name: checkout certificates repo into {{ certs_code_dir }}
   git: dest={{ certs_code_dir }} repo={{ certs_repo }} version={{ certs_version }}
   sudo_user: "{{ certs_user }}"
   environment:
     GIT_SSH: "{{ certs_git_ssh }}"
-  notify: certs | restart certs
+  notify: restart certs

-- name: certs | remove read-only ssh key for the certs repo
+- name: remove read-only ssh key for the certs repo
   file: path={{ certs_git_identity }} state=absent
-  notify: certs | restart certs
+  notify: restart certs

 - name: install python requirements
   pip: requirements="{{ certs_requirements_file }}" virtualenv="{{ certs_venv_dir }}" state=present
   sudo_user: "{{ certs_user }}"
-  notify: certs | restart certs
+  notify: restart certs

 # call supervisorctl update. this reloads
 # the supervisorctl config and restarts
 # the services if any of the configurations
 # have changed.
 #
-- name: certs | update supervisor configuration
+- name: update supervisor configuration
   shell: "{{ supervisor_ctl }} -c {{ supervisor_cfg }} update"
   register: supervisor_update
   sudo_user: "{{ supervisor_service_user }}"
   changed_when: supervisor_update.stdout != ""
   when: start_services

-- name: certs | ensure certs has started
+- name: ensure certs has started
   supervisorctl_local: >
     name=certs
     supervisorctl_path={{ supervisor_ctl }}

@@ -69,12 +69,12 @@
   sudo_user: "{{ supervisor_service_user }}"
   when: start_services

-- name: certs | create a symlink for venv python
+- name: create a symlink for venv python
   file: >
     src="{{ certs_venv_bin }}/{{ item }}"
     dest={{ COMMON_BIN_DIR }}/{{ item }}.certs
     state=link
-  notify: certs | restart certs
+  notify: restart certs
   with_items:
     - python
     - pip
playbooks/roles/certs/tasks/main.yml

@@ -35,46 +35,46 @@
   fail: msg="You must set CERTS_LOCAL_GIT_IDENTITY var for this role!"
   when: not CERTS_LOCAL_GIT_IDENTITY

-- name: certs | create application user
+- name: create application user
   user: >
     name="{{ certs_user }}"
     home="{{ certs_app_dir }}"
     createhome=no
     shell=/bin/false
-  notify: certs | restart certs
+  notify: restart certs

-- name: certs | create certs app and data dirs
+- name: create certs app and data dirs
   file: >
     path="{{ item }}"
     state=directory
     owner="{{ certs_user }}"
     group="{{ common_web_group }}"
-  notify: certs | restart certs
+  notify: restart certs
   with_items:
     - "{{ certs_app_dir }}"
     - "{{ certs_venvs_dir }}"

-- name: certs | create certs gpg dir
+- name: create certs gpg dir
   file: >
     path="{{ certs_gpg_dir }}" state=directory
     owner="{{ common_web_user }}"
     mode=0700
-  notify: certs | restart certs
+  notify: restart certs

-- name: certs | copy the private gpg signing key
+- name: copy the private gpg signing key
   copy: >
     src={{ CERTS_LOCAL_PRIVATE_KEY }}
     dest={{ certs_app_dir }}/{{ CERTS_LOCAL_PRIVATE_KEY|basename }}
     owner={{ common_web_user }} mode=0600
-  notify: certs | restart certs
+  notify: restart certs
   register: certs_gpg_key

-- name: certs | load the gpg key
+- name: load the gpg key
   shell: >
     /usr/bin/gpg --homedir {{ certs_gpg_dir }} --import {{ certs_app_dir }}/{{ CERTS_LOCAL_PRIVATE_KEY|basename }}
   sudo_user: "{{ common_web_user }}"
   when: certs_gpg_key.changed
-  notify: certs | restart certs
+  notify: restart certs

 - include: deploy.yml tags=deploy
playbooks/roles/common/handlers/main.yml

 ---
-- name: common | restart rsyslogd
+- name: restart rsyslogd
   service: name=rsyslog state=restarted
   sudo: True
playbooks/roles/common/tasks/main.yml

 ---
-- name: common | Add user www-data
+- name: Add user www-data
   # This is the default user for nginx
   user: >
     name="{{ common_web_user }}"
     shell=/bin/false

-- name: common | Create common directories
+- name: Create common directories
   file: >
     path={{ item }} state=directory owner=root
     group=root mode=0755

@@ -16,57 +16,57 @@
     - "{{ COMMON_CFG_DIR }}"

 # Need to install python-pycurl to use Ansible's apt_repository module
-- name: common | Install python-pycurl
+- name: Install python-pycurl
   apt: pkg=python-pycurl state=present update_cache=yes

 # Ensure that we get a current version of Git
 # GitHub requires version 1.7.10 or later
 # https://help.github.com/articles/https-cloning-errors
-- name: common | Add git apt repository
+- name: Add git apt repository
   apt_repository: repo="{{ common_git_ppa }}"

-- name: common | Install role-independent useful system packages
+- name: Install role-independent useful system packages
   # do this before log dir setup; rsyslog package guarantees syslog user present
   apt: >
     pkg={{','.join(common_debian_pkgs)}} install_recommends=yes
     state=present update_cache=yes

-- name: common | Create common log directory
+- name: Create common log directory
   file: >
     path={{ COMMON_LOG_DIR }} state=directory owner=syslog
     group=syslog mode=0755

-- name: common | upload sudo config for key forwarding as root
+- name: upload sudo config for key forwarding as root
   copy: >
     src=ssh_key_forward dest=/etc/sudoers.d/ssh_key_forward
     validate='visudo -c -f %s' owner=root group=root mode=0440

-- name: common | pip install virtualenv
+- name: pip install virtualenv
   pip: >
     name="{{ item }}" state=present
     extra_args="-i {{ COMMON_PYPI_MIRROR_URL }}"
   with_items: common_pip_pkgs

-- name: common | Install rsyslog configuration for edX
+- name: Install rsyslog configuration for edX
   template: dest=/etc/rsyslog.d/99-edx.conf src=edx_rsyslog.j2 owner=root group=root mode=644
-  notify: common | restart rsyslogd
+  notify: restart rsyslogd

-- name: common | Install logrotate configuration for edX
+- name: Install logrotate configuration for edX
   template: dest=/etc/logrotate.d/edx-services src=edx_logrotate.j2 owner=root group=root mode=644

-- name: common | update /etc/hosts
+- name: update /etc/hosts
   template: src=hosts.j2 dest=/etc/hosts
   when: COMMON_HOSTNAME
   register: etc_hosts

-- name: common | update /etc/hostname
+- name: update /etc/hostname
   template: src=hostname.j2 dest=/etc/hostname
   when: COMMON_HOSTNAME
   register: etc_hostname

-- name: common | run hostname
+- name: run hostname
   shell: >
     hostname -F /etc/hostname
   when: COMMON_HOSTNAME and (etc_hosts.changed or etc_hostname.changed)
playbooks/roles/datadog/handlers/main.yml

 ---
-- name: datadog | restart the datadog service
+- name: restart the datadog service
   service: name=datadog-agent state=restarted
playbooks/roles/datadog/tasks/main.yml

@@ -15,43 +15,43 @@
 #   - datadog
 #
-- name: datadog | install debian needed pkgs
+- name: install debian needed pkgs
   apt: pkg={{ item }}
   with_items: datadog_debian_pkgs
   tags:
     - datadog

-- name: datadog | add apt key
+- name: add apt key
   apt_key: id=C7A7DA52 url={{datadog_apt_key}} state=present
   tags:
     - datadog

-- name: datadog | install apt repository
+- name: install apt repository
   apt_repository: repo='deb http://apt.datadoghq.com/ unstable main' update_cache=yes
   tags:
     - datadog

-- name: datadog | install datadog agent
+- name: install datadog agent
   apt: pkg="datadog-agent"
   tags:
     - datadog

-- name: datadog | bootstrap config
+- name: bootstrap config
   shell: cp /etc/dd-agent/datadog.conf.example /etc/dd-agent/datadog.conf creates=/etc/dd-agent/datadog.conf
   tags:
     - datadog

-- name: datadog | update api-key
+- name: update api-key
   lineinfile: >
     dest="/etc/dd-agent/datadog.conf"
     regexp="^api_key:.*"
     line="api_key:{{ datadog_api_key }}"
   notify:
-    - datadog | restart the datadog service
+    - restart the datadog service
   tags:
     - datadog

-- name: datadog | ensure started and enabled
+- name: ensure started and enabled
   service: name=datadog-agent state=started enabled=yes
   tags:
     - datadog
playbooks/roles/demo/tasks/deploy.yml

 ---
-- name: demo | check out the demo course
+- name: check out the demo course
   git: dest={{ demo_code_dir }} repo={{ demo_repo }} version={{ demo_version }}
   sudo_user: "{{ edxapp_user }}"
   register: demo_checkout

-- name: demo | import demo course
+- name: import demo course
   shell: >
     {{ edxapp_venv_bin }}/python ./manage.py cms --settings=aws import {{ edxapp_course_data_dir }} {{ demo_code_dir }}
     chdir={{ edxapp_code_dir }}
   sudo_user: "{{ common_web_user }}"
   when: demo_checkout.changed

-- name: demo | create some test users and enroll them in the course
+- name: create some test users and enroll them in the course
   shell: >
     {{ edxapp_venv_bin }}/python ./manage.py lms --settings=aws --service-variant lms create_user -e {{ item.email }} -p {{ item.password }} -m {{ item.mode }} -c {{ demo_course_id }}
     chdir={{ edxapp_code_dir }}

@@ -20,21 +20,21 @@
   with_items: demo_test_users
   when: demo_checkout.changed

-- name: demo | create staff user
+- name: create staff user
   shell: >
     {{ edxapp_venv_bin }}/python ./manage.py lms --settings=aws --service-variant lms create_user -e staff@example.com -p edx -s -c {{ demo_course_id }}
     chdir={{ edxapp_code_dir }}
   sudo_user: "{{ common_web_user }}"
   when: demo_checkout.changed

-- name: demo | add test users to the certificate whitelist
+- name: add test users to the certificate whitelist
   shell: >
     {{ edxapp_venv_bin }}/python ./manage.py lms --settings=aws --service-variant lms cert_whitelist -a {{ item.email }} -c {{ demo_course_id }}
     chdir={{ edxapp_code_dir }}
   with_items: demo_test_users
   when: demo_checkout.changed

-- name: demo | seed the forums for the demo course
+- name: seed the forums for the demo course
   shell: >
     {{ edxapp_venv_bin }}/python ./manage.py lms --settings=aws seed_permissions_roles {{ demo_course_id }}
     chdir={{ edxapp_code_dir }}
playbooks/roles/demo/tasks/main.yml

@@ -30,7 +30,7 @@
 #   - edxapp
 #   - demo
-- name: demo | create demo app and data dirs
+- name: create demo app and data dirs
   file: >
     path="{{ demo_app_dir }}" state=directory
     owner="{{ edxapp_user }}" group="{{ common_web_group }}"
playbooks/roles/devpi/handlers/main.yml

@@ -11,7 +11,7 @@
 # Defaults for role devpi
 #
 ---
-- name: devpi | restart devpi
+- name: restart devpi
   supervisorctl_local: >
     state=restarted
     supervisorctl_path={{ devpi_supervisor_ctl }}
playbooks/roles/devpi/tasks/main.yml

@@ -30,13 +30,13 @@
 #   - devpi
 ---
-- name: devpi | create devpi user
+- name: create devpi user
   user: >
     name={{ devpi_user }}
     shell=/bin/false createhome=no
-  notify: devpi | restart devpi
+  notify: restart devpi

-- name: devpi | create devpi application directories
+- name: create devpi application directories
   file: >
     path={{ item }}
     state=directory

@@ -45,9 +45,9 @@
   with_items:
     - "{{ devpi_app_dir }}"
     - "{{ devpi_venv_dir }}"
-  notify: devpi | restart devpi
+  notify: restart devpi

-- name: devpi | create the devpi data directory, needs write access by the service user
+- name: create the devpi data directory, needs write access by the service user
   file: >
     path={{ item }}
     state=directory

@@ -56,40 +56,40 @@
   with_items:
     - "{{ devpi_data_dir }}"
     - "{{ devpi_mirror_dir }}"
-  notify: devpi | restart devpi
+  notify: restart devpi

-- name: devpi | install devpi pip pkgs
+- name: install devpi pip pkgs
   pip: >
     name={{ item }}
     state=present
     virtualenv={{ devpi_venv_dir }}
   sudo_user: "{{ devpi_user }}"
   with_items: devpi_pip_pkgs
-  notify: devpi | restart devpi
+  notify: restart devpi

-- name: devpi | writing supervisor script
+- name: writing supervisor script
   template: >
     src=devpi.conf.j2 dest={{ devpi_supervisor_cfg_dir }}/devpi.conf
     owner={{ devpi_user }} group={{ devpi_user }} mode=0644
-  notify: devpi | restart devpi
+  notify: restart devpi

-- name: devpi | create a symlink for venv python, pip
+- name: create a symlink for venv python, pip
   file: >
     src="{{ devpi_venv_bin }}/{{ item }}"
     dest={{ COMMON_BIN_DIR }}/{{ item }}.devpi
     state=link
-  notify: devpi | restart devpi
+  notify: restart devpi
   with_items:
     - python
     - pip

-- name: devpi | create a symlink for venv supervisor
+- name: create a symlink for venv supervisor
   file: >
     src="{{ devpi_supervisor_venv_bin }}/supervisorctl"
     dest={{ COMMON_BIN_DIR }}/{{ item }}.devpi
     state=link

-- name: devpi | create a symlink for supervisor config
+- name: create a symlink for supervisor config
   file: >
     src="{{ devpi_supervisor_app_dir }}/supervisord.conf"
     dest={{ COMMON_CFG_DIR }}/supervisord.conf.devpi

@@ -100,12 +100,12 @@
 # the services if any of the configurations
 # have changed.
 #
-- name: devpi | update devpi supervisor configuration
+- name: update devpi supervisor configuration
   shell: "{{ devpi_supervisor_ctl }} -c {{ devpi_supervisor_cfg }} update"
   register: supervisor_update
   changed_when: supervisor_update.stdout != ""

-- name: devpi | ensure devpi is started
+- name: ensure devpi is started
   supervisorctl_local: >
     state=started
     supervisorctl_path={{ devpi_supervisor_ctl }}
playbooks/roles/discern/handlers/main.yml

 ---
-- name: discern | restart discern
+- name: restart discern
   supervisorctl_local: >
     name=discern
     supervisorctl_path={{ supervisor_ctl }}
playbooks/roles/discern/tasks/deploy.yml

 ---
-- name: discern | create supervisor scripts - discern, discern_celery
+- name: create supervisor scripts - discern, discern_celery
   template: >
     src={{ item }}.conf.j2 dest={{ supervisor_cfg_dir }}/{{ item }}.conf
     owner={{ supervisor_user }} mode=0644

@@ -8,56 +8,56 @@
   with_items: ['discern', 'discern_celery']

 #Upload config files for django (auth and env)
-- name: discern | create discern application config env.json file
+- name: create discern application config env.json file
   template: src=env.json.j2 dest={{ discern_app_dir }}/env.json
   sudo_user: "{{ discern_user }}"
   notify:
-    - discern | restart discern
+    - restart discern

-- name: discern | create discern auth file auth.json
+- name: create discern auth file auth.json
   template: src=auth.json.j2 dest={{ discern_app_dir }}/auth.json
   sudo_user: "{{ discern_user }}"
   notify:
-    - discern | restart discern
+    - restart discern

-- name: discern | git checkout discern repo into discern_code_dir
+- name: git checkout discern repo into discern_code_dir
   git: dest={{ discern_code_dir }} repo={{ discern_source_repo }} version={{ discern_version }}
   sudo_user: "{{ discern_user }}"
   notify:
-    - discern | restart discern
+    - restart discern

-- name: discern | git checkout ease repo into discern_ease_code_dir
+- name: git checkout ease repo into discern_ease_code_dir
   git: dest={{ discern_ease_code_dir}} repo={{ discern_ease_source_repo }} version={{ discern_ease_version }}
   sudo_user: "{{ discern_user }}"
   notify:
-    - discern | restart discern
+    - restart discern

 #Numpy has to be a pre-requirement in order for scipy to build
-- name: discern | install python pre-requirements for discern and ease
+- name: install python pre-requirements for discern and ease
   pip: requirements={{item}} virtualenv={{ discern_venv_dir }} state=present
   sudo_user: "{{ discern_user }}"
   notify:
-    - discern | restart discern
+    - restart discern
   with_items:
     - "{{ discern_pre_requirements_file }}"
     - "{{ discern_ease_pre_requirements_file }}"

-- name: discern | install python requirements for discern and ease
+- name: install python requirements for discern and ease
   pip: requirements={{item}} virtualenv={{ discern_venv_dir }} state=present
   sudo_user: "{{ discern_user }}"
   notify:
-    - discern | restart discern
+    - restart discern
   with_items:
     - "{{ discern_post_requirements_file }}"
     - "{{ discern_ease_post_requirements_file }}"

-- name: discern | install ease python package
+- name: install ease python package
   shell: >
     {{ discern_venv_dir }}/bin/activate; cd {{ discern_ease_code_dir }}; python setup.py install
   notify:
-    - discern | restart discern
+    - restart discern

-- name: discern | download and install nltk
+- name: download and install nltk
   shell: |
     set -e
     curl -o {{ discern_nltk_tmp_file }} {{ discern_nltk_download_url }}

@@ -68,30 +68,30 @@
     chdir={{ discern_data_dir }}
   sudo_user: "{{ discern_user }}"
   notify:
-    - discern | restart discern
+    - restart discern

 #Run this instead of using the ansible module because the ansible module only support syncdb of these three, and does not
 #support virtualenvs as of this comment
-- name: discern | django syncdb migrate and collectstatic for discern
+- name: django syncdb migrate and collectstatic for discern
   shell: >
     {{ discern_venv_dir }}/bin/python {{discern_code_dir}}/manage.py {{item}} --noinput --settings={{discern_settings}} --pythonpath={{discern_code_dir}}
     chdir={{ discern_code_dir }}
   sudo_user: "{{ discern_user }}"
   notify:
-    - discern | restart discern
+    - restart discern
   with_items:
     - syncdb
     - migrate
     - collectstatic

 #Have this separate from the other three because it doesn't take the noinput flag
-- name: discern | django update_index for discern
+- name: django update_index for discern
   shell: >
     {{ discern_venv_dir}}/bin/python {{discern_code_dir}}/manage.py update_index --settings={{discern_settings}} --pythonpath={{discern_code_dir}}
     chdir={{ discern_code_dir }}
   sudo_user: "{{ discern_user }}"
   notify:
-    - discern | restart discern
+    - restart discern

 # call supervisorctl update. this reloads

@@ -99,14 +99,14 @@
 # the services if any of the configurations
 # have changed.
 #
-- name: discern | update supervisor configuration
+- name: update supervisor configuration
   shell: "{{ supervisor_ctl }} -c {{ supervisor_cfg }} update"
   register: supervisor_update
   sudo_user: "{{ supervisor_service_user }}"
   when: start_services
   changed_when: supervisor_update.stdout != ""

-- name: discern | ensure discern, discern_celery has started
+- name: ensure discern, discern_celery has started
   supervisorctl_local: >
     name={{ item }}
     supervisorctl_path={{ supervisor_ctl }}

@@ -117,7 +117,7 @@
     - discern
     - discern_celery

-- name: discern | create a symlink for venv python
+- name: create a symlink for venv python
   file: >
     src="{{ discern_venv_bin }}/python"
     dest={{ COMMON_BIN_DIR }}/python.discern
playbooks/roles/discern/tasks/main.yml

 ---
-- name: discern | create application user
+- name: create application user
   user: >
     name="{{ discern_user }}"
     home="{{ discern_app_dir }}"
     createhome=no
     shell=/bin/false
   notify:
-    - discern | restart discern
+    - restart discern

-- name: discern | create discern app dirs owned by discern
+- name: create discern app dirs owned by discern
   file: >
     path="{{ item }}"
     state=directory
     owner="{{ discern_user }}"
     group="{{ common_web_group }}"
   notify:
-    - discern | restart discern
+    - restart discern
   with_items:
     - "{{ discern_app_dir }}"
     - "{{ discern_venvs_dir }}"

-- name: discern | create discern data dir, owned by {{ common_web_user }}
+- name: create discern data dir, owned by {{ common_web_user }}
   file: >
     path="{{ discern_data_dir }}" state=directory
     owner="{{ common_web_user }}" group="{{ discern_user }}"
     mode=0775
   notify:
-    - discern | restart discern
+    - restart discern

-- name: discern | install debian packages that discern needs
+- name: install debian packages that discern needs
   apt: pkg={{ item }} state=present
   notify:
-    - discern | restart discern
+    - restart discern
   with_items: discern_debian_pkgs

-- name: discern | install debian packages for ease that discern needs
+- name: install debian packages for ease that discern needs
   apt: pkg={{ item }} state=present
   notify:
-    - discern | restart discern
+    - restart discern
   with_items: discern_ease_debian_pkgs

-- name: discern | copy sudoers file for discern
+- name: copy sudoers file for discern
   copy: >
     src=sudoers-discern dest=/etc/sudoers.d/discern
     mode=0440 validate='visudo -cf %s' owner=root group=root
   notify:
-    - discern | restart discern
+    - restart discern

 #Needed if using redis to prevent memory issues
-- name: discern | change memory commit settings -- needed for redis
+- name: change memory commit settings -- needed for redis
   command: sysctl vm.overcommit_memory=1
   notify:
-    - discern | restart discern
+    - restart discern

 - include: deploy.yml tags=deploy
playbooks/roles/edx_ansible/tasks/deploy.yml

 ---
-- name: edx_ansible | git checkout edx_ansible repo into edx_ansible_code_dir
+- name: git checkout edx_ansible repo into edx_ansible_code_dir
   git: dest={{ edx_ansible_code_dir }} repo={{ edx_ansible_source_repo }} version={{ configuration_version }}
   sudo_user: "{{ edx_ansible_user }}"

-- name: edx_ansible | install edx_ansible venv requirements
+- name: install edx_ansible venv requirements
   pip: requirements="{{ edx_ansible_requirements_file }}" virtualenv="{{ edx_ansible_venv_dir }}" state=present
   sudo_user: "{{ edx_ansible_user }}"

-- name: edx_ansible | create update script
+- name: create update script
   template: >
     dest={{ edx_ansible_app_dir}}/update
     src=update.j2 owner={{ edx_ansible_user }} group={{ edx_ansible_user }} mode=755

-- name: edx_ansible | create a symlink for update.sh
+- name: create a symlink for update.sh
   file: >
     src={{ edx_ansible_app_dir }}/update
     dest={{ COMMON_BIN_DIR }}/update
     state=link

-- name: edx_ansible | dump all vars to yaml
+- name: dump all vars to yaml
   template: src=dumpall.yml.j2 dest={{ edx_ansible_var_file }} mode=0600

-- name: edx_ansible | clean up var file, removing all version vars
+- name: clean up var file, removing all version vars
   shell: sed -i -e "/{{item}}/d" {{ edx_ansible_var_file }}
   with_items:
     - edx_platform_version

@@ -37,10 +37,10 @@
     - ease_version
     - certs_version

-- name: edx_ansible | remove the special _original_file var
+- name: remove the special _original_file var
   shell: sed -i -e "/_original_file/d" {{ edx_ansible_var_file }}

-- name: edxapp | create a symlink for var file
+- name: create a symlink for var file
   file: >
     src={{ edx_ansible_var_file }}
     dest={{ COMMON_CFG_DIR }}/{{ edx_ansible_var_file|basename }}
playbooks/roles/edx_ansible/tasks/main.yml

@@ -23,14 +23,14 @@
 #
 #
 #
-- name: edx_ansible | create application user
+- name: create application user
   user: >
     name="{{ edx_ansible_user }}"
     home="{{ edx_ansible_app_dir }}"
     createhome=no
     shell=/bin/false

-- name: edx_ansible | create edx_ansible app and venv dir
+- name: create edx_ansible app and venv dir
   file: >
     path="{{ item }}"
     state=directory

@@ -41,7 +41,7 @@
     - "{{ edx_ansible_data_dir }}"
     - "{{ edx_ansible_venvs_dir }}"

-- name: edx_ansible | install a bunch of system packages on which edx_ansible relies
+- name: install a bunch of system packages on which edx_ansible relies
   apt: pkg={{','.join(edx_ansible_debian_pkgs)}} state=present

 - include: deploy.yml tags=deploy
playbooks/roles/edxapp/handlers/main.yml

 ---
-- name: edxapp | restart edxapp
+- name: restart edxapp
   supervisorctl_local: >
     state=restarted
     supervisorctl_path={{ supervisor_ctl }}

@@ -9,7 +9,7 @@
   sudo_user: "{{ supervisor_service_user }}"
   with_items: service_variants_enabled

-- name: edxapp | restart edxapp_workers
+- name: restart edxapp_workers
   supervisorctl_local: >
     name="edxapp_worker:{{ item.service_variant }}_{{ item.queue }}_{{ item.concurrency }}"
     supervisorctl_path={{ supervisor_ctl }}
playbooks/roles/edxapp/tasks/deploy.yml

(diff collapsed in the original view)
playbooks/roles/edxapp/tasks/main.yml

@@ -4,27 +4,27 @@
 ---
-- name: edxapp | Install logrotate configuration for tracking file
+- name: Install logrotate configuration for tracking file
   template: dest=/etc/logrotate.d/tracking.log src=edx_logrotate_tracking_log.j2 owner=root group=root mode=644
   notify:
-    - "edxapp | restart edxapp"
-    - "edxapp | restart edxapp_workers"
+    - "restart edxapp"
+    - "restart edxapp_workers"

-- name: edxapp | create application user
+- name: create application user
   user: >
     name="{{ edxapp_user }}" home="{{ edxapp_app_dir }}"
     createhome=no shell=/bin/false
   notify:
-    - "edxapp | restart edxapp"
-    - "edxapp | restart edxapp_workers"
+    - "restart edxapp"
+    - "restart edxapp_workers"

-- name: edxapp | create edxapp user dirs
+- name: create edxapp user dirs
   file: >
     path="{{ item }}" state=directory
     owner="{{ edxapp_user }}" group="{{ common_web_group }}"
   notify:
-    - "edxapp | restart edxapp"
-    - "edxapp | restart edxapp_workers"
+    - "restart edxapp"
+    - "restart edxapp_workers"
   with_items:
     - "{{ edxapp_app_dir }}"
     - "{{ edxapp_data_dir }}"

@@ -32,36 +32,36 @@
     - "{{ edxapp_theme_dir }}"
     - "{{ edxapp_staticfile_dir }}"

-- name: edxapp | create edxapp log dir
+- name: create edxapp log dir
   file: >
     path="{{ edxapp_log_dir }}" state=directory
     owner="{{ common_log_user }}" group="{{ common_log_user }}"
   notify:
-    - "edxapp | restart edxapp"
-    - "edxapp | restart edxapp_workers"
+    - "restart edxapp"
+    - "restart edxapp_workers"

-- name: edxapp | create web-writable edxapp data dirs
+- name: create web-writable edxapp data dirs
   file: >
     path="{{ item }}" state=directory
     owner="{{ common_web_user }}" group="{{ edxapp_user }}"
     mode="0775"
   notify:
-    - "edxapp | restart edxapp"
-    - "edxapp | restart edxapp_workers"
+    - "restart edxapp"
+    - "restart edxapp_workers"
   with_items:
     - "{{ edxapp_course_data_dir }}"
     - "{{ edxapp_upload_dir }}"

-- name: edxapp | install system packages on which LMS and CMS rely
+- name: install system packages on which LMS and CMS rely
   apt: pkg={{','.join(edxapp_debian_pkgs)}} state=present
   notify:
-    - "edxapp | restart edxapp"
-    - "edxapp | restart edxapp_workers"
+    - "restart edxapp"
+    - "restart edxapp_workers"

-- name: edxapp | create log directories for service variants
+- name: create log directories for service variants
   notify:
-    - "edxapp | restart edxapp"
-    - "edxapp | restart edxapp_workers"
+    - "restart edxapp"
+    - "restart edxapp_workers"
   file: >
     path={{ edxapp_log_dir }}/{{ item }} state=directory
     owner={{ common_log_user }} group={{ common_log_user }}
playbooks/roles/edxapp/tasks/python_sandbox_env.yml

-- name: edxapp | code sandbox | Create edxapp sandbox user
+- name: code sandbox | Create edxapp sandbox user
   user: name={{ edxapp_sandbox_user }} shell=/bin/false home={{ edxapp_sandbox_venv_dir }}
   notify:
-  - "edxapp | restart edxapp"
-  - "edxapp | restart edxapp_workers"
+  - "restart edxapp"
+  - "restart edxapp_workers"
   tags:
   - edxapp-sandbox

-- name: edxapp | code sandbox | Install apparmor utils system pkg
+- name: code sandbox | Install apparmor utils system pkg
   apt: pkg=apparmor-utils state=present
   notify:
-  - "edxapp | restart edxapp"
-  - "edxapp | restart edxapp_workers"
+  - "restart edxapp"
+  - "restart edxapp_workers"
   tags:
   - edxapp-sandbox

-- name: edxapp | code sandbox | write out apparmor code sandbox config
+- name: code sandbox | write out apparmor code sandbox config
   template: src=code.sandbox.j2 dest=/etc/apparmor.d/code.sandbox mode=0644 owner=root group=root
   notify:
-  - "edxapp | restart edxapp"
-  - "edxapp | restart edxapp_workers"
+  - "restart edxapp"
+  - "restart edxapp_workers"
   tags:
   - edxapp-sandbox

-- name: edxapp | code sandbox | write out sandbox user sudoers config
+- name: code sandbox | write out sandbox user sudoers config
   template: src=95-sandbox-sudoer.j2 dest=/etc/sudoers.d/95-{{ edxapp_sandbox_user }} mode=0440 owner=root group=root validate='visudo -c -f %s'
   notify:
-  - "edxapp | restart edxapp"
-  - "edxapp | restart edxapp_workers"
+  - "restart edxapp"
+  - "restart edxapp_workers"
   tags:
   - edxapp-sandbox

 # we boostrap and enable the apparmor service here. in deploy.yml we disable, deploy, then re-enable
 # so we need to enable it in main.yml
-- name: edxapp | code sandbox | start apparmor service
+- name: code sandbox | start apparmor service
   service: name=apparmor state=started
   notify:
-  - "edxapp | restart edxapp"
-  - "edxapp | restart edxapp_workers"
+  - "restart edxapp"
+  - "restart edxapp_workers"
   tags:
   - edxapp-sandbox

-- name: edxapp | code sandbox | (bootstrap) load code sandbox profile
+- name: code sandbox | (bootstrap) load code sandbox profile
   command: apparmor_parser -r /etc/apparmor.d/code.sandbox
   notify:
-  - "edxapp | restart edxapp"
-  - "edxapp | restart edxapp_workers"
+  - "restart edxapp"
+  - "restart edxapp_workers"
   tags:
   - edxapp-sandbox

-- name: edxapp | code sandbox | (bootstrap) put code sandbox into aa-enforce or aa-complain mode depending on EDXAPP_SANDBOX_ENFORCE
+- name: code sandbox | (bootstrap) put code sandbox into aa-enforce or aa-complain mode depending on EDXAPP_SANDBOX_ENFORCE
   command: /usr/sbin/{{ edxapp_aa_command }} /etc/apparmor.d/code.sandbox
   notify:
-  - "edxapp | restart edxapp"
-  - "edxapp | restart edxapp_workers"
+  - "restart edxapp"
+  - "restart edxapp_workers"
   tags:
   - edxapp-sandbox
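The last task above picks between aa-enforce and aa-complain through the `edxapp_aa_command` variable. One way that variable could be wired up in the role's defaults (an assumption for illustration; the defaults file is not part of this diff):

  # hypothetical defaults/main.yml entry
  edxapp_aa_command: "{% if EDXAPP_SANDBOX_ENFORCE %}aa-enforce{% else %}aa-complain{% endif %}"

aa-complain logs violations of /etc/apparmor.d/code.sandbox without blocking them, which is the usual first step before switching a profile to enforce mode.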
playbooks/roles/edxapp/tasks/service_variant_config.yml

@@ -5,8 +5,8 @@
   sudo_user: "{{ edxapp_user }}"
   with_items: service_variants_enabled
   notify:
-  - "edxapp | restart edxapp"
-  - "edxapp | restart edxapp_workers"
+  - "restart edxapp"
+  - "restart edxapp_workers"

 - name: "create {{ item }} auth file"
   template: >
@@ -14,8 +14,8 @@
     dest={{ edxapp_app_dir }}/{{ item }}.auth.json
   sudo_user: "{{ edxapp_user }}"
   notify:
-  - "edxapp | restart edxapp"
-  - "edxapp | restart edxapp_workers"
+  - "restart edxapp"
+  - "restart edxapp_workers"
   with_items: service_variants_enabled

 # write the supervisor scripts for the service variants
@@ -28,7 +28,7 @@
   when: celery_worker is not defined and not devstack
   sudo_user: "{{ supervisor_user }}"

-- name: edxapp | writing edxapp supervisor script
+- name: writing edxapp supervisor script
   template: >
     src=edxapp.conf.j2 dest={{ supervisor_cfg_dir }}/edxapp.conf
     owner={{ supervisor_user }}
@@ -37,7 +37,7 @@
 # write the supervisor script for celery workers
-- name: edxapp | writing celery worker supervisor script
+- name: writing celery worker supervisor script
   template: >
     src=workers.conf.j2 dest={{ supervisor_cfg_dir }}/workers.conf
     owner={{ supervisor_user }}
@@ -47,7 +47,7 @@
 # Gather assets using rake if possible
-- name: edxapp | gather {{ item }} static assets with rake
+- name: gather {{ item }} static assets with rake
   shell: >
     SERVICE_VARIANT={{ item }} rake {{ item }}:gather_assets:aws
     executable=/bin/bash
@@ -56,23 +56,23 @@
   when: celery_worker is not defined and not devstack and item != "lms-preview"
   with_items: service_variants_enabled
   notify:
-  - "edxapp | restart edxapp"
-  - "edxapp | restart edxapp_workers"
+  - "restart edxapp"
+  - "restart edxapp_workers"
   environment: "{{ edxapp_environment }}"

-- name: edxapp | syncdb and migrate
+- name: syncdb and migrate
   shell: SERVICE_VARIANT=lms {{ edxapp_venv_bin}}/django-admin.py syncdb --migrate --noinput --settings=lms.envs.aws --pythonpath={{ edxapp_code_dir }}
   when: migrate_db is defined and migrate_db|lower == "yes"
   sudo_user: "{{ edxapp_user }}"
   notify:
-  - "edxapp | restart edxapp"
-  - "edxapp | restart edxapp_workers"
+  - "restart edxapp"
+  - "restart edxapp_workers"

-- name: edxapp | db migrate
+- name: db migrate
   shell: SERVICE_VARIANT=lms {{ edxapp_venv_bin }}/django-admin.py migrate --noinput --settings=lms.envs.aws --pythonpath={{ edxapp_code_dir }}
   when: migrate_only is defined and migrate_only|lower == "yes"
   notify:
-  - "edxapp | restart edxapp"
-  - "edxapp | restart edxapp_workers"
+  - "restart edxapp"
+  - "restart edxapp_workers"
   sudo_user: "{{ edxapp_user }}"
playbooks/roles/edxlocal/tasks/main.yml

@@ -10,33 +10,33 @@
 # http://downloads.mysql.com/archives/mysql-5.1/mysql-5.1.62.tar.gz
 #
 ---
-- name: edxlocal| install packages needed for single server
+- name: install packages needed for single server
   apt: pkg={{','.join(edxlocal_debian_pkgs)}} install_recommends=yes state=present

-- name: edxlocal | create a database for edxapp
+- name: create a database for edxapp
   mysql_db: >
     db=edxapp
     state=present
     encoding=utf8

-- name: edxlocal | create a database for xqueue
+- name: create a database for xqueue
   mysql_db: >
     db=xqueue
     state=present
     encoding=utf8

-- name: edxlocal | create a database for ora
+- name: create a database for ora
   mysql_db: >
     db=ora
     state=present
     encoding=utf8

-- name: edxlocal | create a database for discern
+- name: create a database for discern
   mysql_db: >
     db=discern
     state=present
     encoding=utf8

-- name: edxlocal | install memcached
+- name: install memcached
   apt: pkg=memcached state=present
playbooks/roles/elasticsearch/tasks/main.yml

@@ -14,13 +14,13 @@
 #   - oraclejdk
 #   - elasticsearch

-- name: elasticsearch | download elasticsearch
+- name: download elasticsearch
   get_url: >
     url={{ elasticsearch_url }}
     dest=/var/tmp/{{ elasticsearch_file }}
     force=no

-- name: elasticsearch | install elasticsearch from local package
+- name: install elasticsearch from local package
   shell: >
     dpkg -i /var/tmp/elasticsearch-{{ elasticsearch_version }}.deb
     executable=/bin/bash
@@ -29,7 +29,7 @@
   - elasticsearch
   - install

-- name: elasticsearch | Ensure elasticsearch is enabled and started
+- name: Ensure elasticsearch is enabled and started
   service: name=elasticsearch state=started enabled=yes
   tags:
   - elasticsearch
...
playbooks/roles/forum/handlers/main.yml

 ---
-- name: forum | restart the forum service
+- name: restart the forum service
   supervisorctl_local: >
     name=forum
     supervisorctl_path={{ supervisor_ctl }}
...
playbooks/roles/forum/tasks/deploy.yml

 ---
-- name: forum | create the supervisor config
+- name: create the supervisor config
   template: >
     src=forum.conf.j2 dest={{ supervisor_cfg_dir }}/forum.conf
     owner={{ supervisor_user }}
@@ -9,41 +9,41 @@
   when: not devstack
   register: forum_supervisor

-- name: forum | create the supervisor wrapper
+- name: create the supervisor wrapper
   template: >
     src={{ forum_supervisor_wrapper|basename }}.j2
     dest={{ forum_supervisor_wrapper }}
     mode=0755
   sudo_user: "{{ forum_user }}"
   when: not devstack
-  notify: forum | restart the forum service
+  notify: restart the forum service

-- name: forum | git checkout forum repo into {{ forum_code_dir }}
+- name: git checkout forum repo into {{ forum_code_dir }}
   git: dest={{ forum_code_dir }} repo={{ forum_source_repo }} version={{ forum_version }}
   sudo_user: "{{ forum_user }}"
-  notify: forum | restart the forum service
+  notify: restart the forum service

 # TODO: This is done as the common_web_user
 # since the process owner needs write access
 # to the rbenv
-- name: forum | install comments service bundle
+- name: install comments service bundle
   shell: bundle install chdir={{ forum_code_dir }}
   sudo_user: "{{ common_web_user }}"
   environment: "{{ forum_environment }}"
-  notify: forum | restart the forum service
+  notify: restart the forum service

 # call supervisorctl update. this reloads
 # the supervisorctl config and restarts
 # the services if any of the configurations
 # have changed.
 #
-- name: forum | update supervisor configuration
+- name: update supervisor configuration
   shell: "{{ supervisor_ctl }} -c {{ supervisor_cfg }} update"
   register: supervisor_update
   changed_when: supervisor_update.stdout != ""
   when: start_services and not devstack

-- name: forum | ensure forum is started
+- name: ensure forum is started
   supervisorctl_local: >
     name=forum
     supervisorctl_path={{ supervisor_ctl }}
...
playbooks/roles/forum/tasks/main.yml

@@ -21,26 +21,26 @@
 #     rbenv_ruby_version: "{{ forum_ruby_version }}"
 #   - forum

-- name: forum | create application user
+- name: create application user
   user: >
     name="{{ forum_user }}" home="{{ forum_app_dir }}"
     createhome=no
     shell=/bin/false
-  notify: forum | restart the forum service
+  notify: restart the forum service

-- name: forum | create forum app dir
+- name: create forum app dir
   file: >
     path="{{ forum_app_dir }}" state=directory
     owner="{{ forum_user }}" group="{{ common_web_group }}"
-  notify: forum | restart the forum service
+  notify: restart the forum service

-- name: forum | setup the forum env
+- name: setup the forum env
   template: >
     src=forum_env.j2 dest={{ forum_app_dir }}/forum_env
     owner={{ forum_user }} group={{ common_web_user }}
     mode=0644
   notify:
-  - forum | restart the forum service
+  - restart the forum service

 - include: deploy.yml tags=deploy
playbooks/roles/forum/tasks/test.yml

 ---
-- name: forum | test that the required service are listening
+- name: test that the required service are listening
   wait_for: port={{ item.port }} host={{ item.host }} timeout=30
-  with_items: "{{ forum_services }}"
+  with_items: forum_services
   when: not devstack

-- name: forum | test that mongo replica set members are listing
+- name: test that mongo replica set members are listing
   wait_for: port={{ FORUM_MONGO_PORT }} host={{ item }} timeout=30
-  with_items: "{{ FORUM_MONGO_HOSTS }}"
+  with_items: FORUM_MONGO_HOSTS
   when: not devstack
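Note the second kind of change this commit makes besides the name prefixes: `with_items: "{{ forum_services }}"` becomes the bare-variable form `with_items: forum_services`, which Ansible of this era resolves directly as a list reference. The same rewrite recurs in the gh_mirror, jenkins, launch_ec2, local_dev, and mongo diffs below:

  # old form, wrapped in a template expression
  with_items: "{{ forum_services }}"
  # new form, variable referenced by name
  with_items: forum_services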
playbooks/roles/gh_mirror/tasks/main.yml

@@ -28,39 +28,39 @@
 ---
-- name: gh_mirror | install pip packages
+- name: install pip packages
   pip: name={{ item }} state=present
   with_items: gh_mirror_pip_pkgs

-- name: gh_mirror | install debian packages
+- name: install debian packages
   apt: >
     pkg={{ ",".join(gh_mirror_debian_pkgs) }}
     state=present
     update_cache=yes

-- name: gh_mirror | create gh_mirror user
+- name: create gh_mirror user
   user: >
     name={{ gh_mirror_user }}
     state=present

-- name: gh_mirror | create the gh_mirror data directory
+- name: create the gh_mirror data directory
   file: >
     path={{ gh_mirror_data_dir }}
     state=directory
     owner={{ gh_mirror_user }}
     group={{ gh_mirror_group }}

-- name: gh_mirror | create the gh_mirror app directory
+- name: create the gh_mirror app directory
   file: >
     path={{ gh_mirror_app_dir }}
     state=directory

-- name: gh_mirror | create org config
+- name: create org config
   template: src=orgs.yml.j2 dest={{ gh_mirror_app_dir }}/orgs.yml

 - name: copying sync scripts
   copy: src={{ item }} dest={{ gh_mirror_app_dir }}/{{ item }}
-  with_items: "{{ gh_mirror_app_files }}"
+  with_items: gh_mirror_app_files

 - name: creating cron job to update repos
   cron:
...
playbooks/roles/gh_users/tasks/main.yml

@@ -12,34 +12,34 @@
 #   - mark

-- name: gh_users | creating default .bashrc
+- name: creating default .bashrc
   template: >
     src=default.bashrc.j2 dest=/etc/skel/.bashrc
     mode=0644 owner=root group=root

-- name: gh_users | create gh group
+- name: create gh group
   group: name=gh state=present

 # TODO: give limited sudo access to this group
-- name: gh_users | grant full sudo access to gh group
+- name: grant full sudo access to gh group
   copy: >
     content="%gh ALL=(ALL) NOPASSWD:ALL"
     dest=/etc/sudoers.d/gh owner=root group=root
     mode=0440 validate='visudo -cf %s'

-- name: gh_users | create github users
+- name: create github users
   user: name={{ item }} groups=gh
         shell=/bin/bash
   with_items: gh_users

-- name: gh_users | create .ssh directory
+- name: create .ssh directory
   file: path=/home/{{ item }}/.ssh state=directory mode=0700
         owner={{ item }}
   with_items: gh_users

-- name: gh_users | copy github key[s] to .ssh/authorized_keys
+- name: copy github key[s] to .ssh/authorized_keys
   get_url: url=https://github.com/{{ item }}.keys
            dest=/home/{{ item }}/.ssh/authorized_keys mode=0600
...
playbooks/roles/gluster/tasks/main.yml

 ---
 # Install and configure simple glusterFS shared storage

-- name: gluster | all | Install common packages
+- name: all | Install common packages
   apt: name={{ item }} state=present
   with_items:
   - glusterfs-client
@@ -9,20 +9,20 @@
   - nfs-common
   tags: gluster

-- name: gluster | all | Install server packages
+- name: all | Install server packages
   apt: name=glusterfs-server state=present
   when: >
     "{{ ansible_default_ipv4.address }}" in "{{ gluster_peers|join(' ') }}"
   tags: gluster

-- name: gluster | all | enable server
+- name: all | enable server
   service: name=glusterfs-server state=started enabled=yes
   when: >
     "{{ ansible_default_ipv4.address }}" in "{{ gluster_peers|join(' ') }}"
   tags: gluster

 # Ignoring error below so that we can move the data folder and have it be a link
-- name: gluster | all | create folders
+- name: all | create folders
   file: path={{ item.path }} state=directory
   with_items: gluster_volumes
   when: >
@@ -30,39 +30,39 @@
   ignore_errors: yes
   tags: gluster

-- name: gluster | primary | create peers
+- name: primary | create peers
   command: gluster peer probe {{ item }}
   with_items: gluster_peers
   when: ansible_default_ipv4.address == gluster_primary_ip
   tags: gluster

-- name: gluster | primary | create volumes
+- name: primary | create volumes
   command: gluster volume create {{ item.name }} replica {{ item.replicas }} transport tcp {% for server in gluster_peers %}{{ server }}:{{ item.path }} {% endfor %}
   with_items: gluster_volumes
   when: ansible_default_ipv4.address == gluster_primary_ip
   ignore_errors: yes # There should be better error checking here
   tags: gluster

-- name: gluster | primary | start volumes
+- name: primary | start volumes
   command: gluster volume start {{ item.name }}
   with_items: gluster_volumes
   when: ansible_default_ipv4.address == gluster_primary_ip
   ignore_errors: yes # There should be better error checking here
   tags: gluster

-- name: gluster | primary | set security
+- name: primary | set security
   command: gluster volume set {{ item.name }} auth.allow {{ item.security }}
   with_items: gluster_volumes
   when: ansible_default_ipv4.address == gluster_primary_ip
   tags: gluster

-- name: gluster | primary | set performance cache
+- name: primary | set performance cache
   command: gluster volume set {{ item.name }} performance.cache-size {{ item.cache_size }}
   with_items: gluster_volumes
   when: ansible_default_ipv4.address == gluster_primary_ip
   tags: gluster

-- name: gluster | all | mount volume
+- name: all | mount volume
   mount: >
     name={{ item.mount_location }}
     src={{ gluster_primary_ip }}:{{ item.name }}
@@ -74,7 +74,7 @@
 # This required due to an annoying bug in Ubuntu and gluster where it tries to mount the system
 # before the network stack is up and can't lookup 127.0.0.1
-- name: gluster | all | sleep mount
+- name: all | sleep mount
   lineinfile: >
     dest=/etc/rc.local
     line='sleep 5; /bin/mount -a'
...
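The `primary | create volumes` command above builds its brick list with an inline Jinja loop. With hypothetical values gluster_peers = [10.0.0.1, 10.0.0.2] and a volume item {name: edx, replicas: 2, path: /mnt/brick}, it would render as:

  gluster volume create edx replica 2 transport tcp 10.0.0.1:/mnt/brick 10.0.0.2:/mnt/brick

The trailing space inside the {% for %} body is what separates the host:path pairs in the rendered command.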
playbooks/roles/haproxy/handlers/main.yml

@@ -14,11 +14,11 @@
 # Overview:
 #
 #
-- name: haproxy | restart haproxy
+- name: restart haproxy
   service: name=haproxy state=restarted

-- name: haproxy | reload haproxy
+- name: reload haproxy
   service: name=haproxy state=reloaded

-- name: haproxy | restart rsyslog
+- name: restart rsyslog
   service: name=rsyslog state=restarted
playbooks/roles/haproxy/tasks/main.yml

@@ -17,26 +17,26 @@
 # so it allows for a configuration template to be overriden
 # with a variable

-- name: haproxy | Install haproxy
+- name: Install haproxy
   apt: pkg=haproxy state={{ pkgs.haproxy.state }}
-  notify: haproxy | restart haproxy
+  notify: restart haproxy

-- name: haproxy | Server configuration file
+- name: Server configuration file
   template: >
     src={{ haproxy_template_dir }}/haproxy.cfg.j2 dest=/etc/haproxy/haproxy.cfg
     owner=root group=root mode=0644
-  notify: haproxy | reload haproxy
+  notify: reload haproxy

-- name: haproxy | Enabled in default
+- name: Enabled in default
   lineinfile: dest=/etc/default/haproxy regexp=^ENABLED=.$ line=ENABLED=1
-  notify: haproxy | restart haproxy
+  notify: restart haproxy

-- name: haproxy | install logrotate
+- name: install logrotate
   template: src=haproxy.logrotate.j2 dest=/etc/logrotate.d/haproxy mode=0644

-- name: haproxy | install rsyslog conf
+- name: install rsyslog conf
   template: src=haproxy.rsyslog.j2 dest=/etc/rsyslog.d/haproxy.conf mode=0644
-  notify: haproxy | restart rsyslog
+  notify: restart rsyslog

-- name: haproxy | make sure haproxy has started
+- name: make sure haproxy has started
   service: name=haproxy state=started
playbooks/roles/jenkins_master/handlers/main.yml

 ---
-- name: jenkins_master | restart Jenkins
+- name: restart Jenkins
   service: name=jenkins state=restarted

-- name: jenkins_master | start nginx
+- name: start nginx
   service: name=nginx state=started

-- name: jenkins_master | reload nginx
+- name: reload nginx
   service: name=nginx state=reloaded
playbooks/roles/jenkins_master/tasks/main.yml

 ---
-- name: jenkins_master | install jenkins specific system packages
+- name: install jenkins specific system packages
   apt:
     pkg={{','.join(jenkins_debian_pkgs)}}
     state=present update_cache=yes
   tags:
   - jenkins

-- name: jenkins_master | install jenkins extra system packages
+- name: install jenkins extra system packages
   apt:
     pkg={{','.join(JENKINS_EXTRA_PKGS)}}
     state=present update_cache=yes
   tags:
   - jenkins

-- name: jenkins_master | create jenkins group
+- name: create jenkins group
   group: name={{ jenkins_group }} state=present

-- name: jenkins_master | add the jenkins user to the group
+- name: add the jenkins user to the group
   user: name={{ jenkins_user }} append=yes groups={{ jenkins_group }}

 # Should be resolved in the next release, but until then we need to do this
 # https://issues.jenkins-ci.org/browse/JENKINS-20407
-- name: jenkins_master | workaround for JENKINS-20407
+- name: workaround for JENKINS-20407
   command: "mkdir -p /var/run/jenkins"

-- name: jenkins_master | download Jenkins package
+- name: download Jenkins package
   get_url: url="{{ jenkins_deb_url }}" dest="/tmp/{{ jenkins_deb }}"

-- name: jenkins_master | install Jenkins package
+- name: install Jenkins package
   command: dpkg -i --force-depends "/tmp/{{ jenkins_deb }}"

-- name: jenkins_master | stop Jenkins
+- name: stop Jenkins
   service: name=jenkins state=stopped

 # Move /var/lib/jenkins to Jenkins home (on the EBS)
-- name: jenkins_master | move /var/lib/jenkins
+- name: move /var/lib/jenkins
   command: mv /var/lib/jenkins {{ jenkins_home }}
            creates={{ jenkins_home }}

-- name: jenkins_master | set owner for Jenkins home
+- name: set owner for Jenkins home
   file: path={{ jenkins_home }} recurse=yes state=directory
         owner={{ jenkins_user }} group={{ jenkins_group }}

 # Symlink /var/lib/jenkins to {{ COMMON_DATA_DIR }}/jenkins
 # since Jenkins will expect its files to be in /var/lib/jenkins
-- name: jenkins_master | symlink /var/lib/jenkins
+- name: symlink /var/lib/jenkins
   file: src={{ jenkins_home }} dest=/var/lib/jenkins state=link
         owner={{ jenkins_user }} group={{ jenkins_group }}
   notify:
-  - jenkins_master | restart Jenkins
+  - restart Jenkins

-- name: jenkins_master | make plugins directory
+- name: make plugins directory
   sudo_user: jenkins
   shell: mkdir -p {{ jenkins_home }}/plugins

 # We first download the plugins to a temp directory and include
 # the version in the file name.  That way, if we increment
 # the version, the plugin will be updated in Jenkins
-- name: jenkins_master | download Jenkins plugins
+- name: download Jenkins plugins
   get_url: url=http://updates.jenkins-ci.org/download/plugins/{{ item.name }}/{{ item.version }}/{{ item.name }}.hpi
            dest=/tmp/{{ item.name }}_{{ item.version }}
-  with_items: "{{ jenkins_plugins }}"
+  with_items: jenkins_plugins

-- name: jenkins_master | install Jenkins plugins
+- name: install Jenkins plugins
   command: cp /tmp/{{ item.name }}_{{ item.version }} {{ jenkins_home }}/plugins/{{ item.name }}.hpi
-  with_items: "{{ jenkins_plugins }}"
+  with_items: jenkins_plugins

-- name: jenkins_master | set Jenkins plugin permissions
+- name: set Jenkins plugin permissions
   file: path={{ jenkins_home }}/plugins/{{ item.name }}.hpi
         owner={{ jenkins_user }} group={{ jenkins_group }} mode=700
-  with_items: "{{ jenkins_plugins }}"
+  with_items: jenkins_plugins
   notify:
-  - jenkins_master | restart Jenkins
+  - restart Jenkins

 # We had to fork some plugins to workaround
 # certain issues.  If these changes get merged
 # upstream, we may be able to use the regular plugin install process.
 # Until then, we compile and install the forks ourselves.
-- name: jenkins_master | checkout custom plugin repo
+- name: checkout custom plugin repo
   git: repo={{ item.repo_url }} dest=/tmp/{{ item.repo_name }} version={{ item.version }}
-  with_items: "{{ jenkins_custom_plugins }}"
+  with_items: jenkins_custom_plugins

-- name: jenkins_master | compile custom plugins
+- name: compile custom plugins
   command: mvn -Dmaven.test.skip=true install chdir=/tmp/{{ item.repo_name }}
-  with_items: "{{ jenkins_custom_plugins }}"
+  with_items: jenkins_custom_plugins

-- name: jenkins_master | install custom plugins
+- name: install custom plugins
   command: mv /tmp/{{ item.repo_name }}/target/{{ item.package }}
            {{ jenkins_home }}/plugins/{{ item.package }}
-  with_items: "{{ jenkins_custom_plugins }}"
+  with_items: jenkins_custom_plugins
   notify:
-  - jenkins_master | restart Jenkins
+  - restart Jenkins

-- name: jenkins_master | set custom plugin permissions
+- name: set custom plugin permissions
   file: path={{ jenkins_home }}/plugins/{{ item.package }}
         owner={{ jenkins_user }} group={{ jenkins_group }} mode=700
-  with_items: "{{ jenkins_custom_plugins }}"
+  with_items: jenkins_custom_plugins

 # Plugins that are bundled with Jenkins are "pinned".
 # Jenkins will overwrite updated plugins with its built-in version
 # unless we create a ".pinned" file for the plugin.
 # See https://issues.jenkins-ci.org/browse/JENKINS-13129
-- name: jenkins_master | create plugin pin files
+- name: create plugin pin files
   command: touch {{ jenkins_home }}/plugins/{{ item }}.jpi.pinned
            creates={{ jenkins_home }}/plugins/{{ item }}.jpi.pinned
-  with_items: "{{ jenkins_bundled_plugins }}"
+  with_items: jenkins_bundled_plugins

-- name: jenkins_master | setup nginix vhost
+- name: setup nginix vhost
   template:
     src=etc/nginx/sites-available/jenkins.j2
     dest=/etc/nginx/sites-available/jenkins

-- name: jenkins_master | enable jenkins vhost
+- name: enable jenkins vhost
   file:
     src=/etc/nginx/sites-available/jenkins
     dest=/etc/nginx/sites-enabled/jenkins
     state=link
-  notify: jenkins_master | start nginx
+  notify: start nginx
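On the pinning workaround: Jenkins treats an empty `<plugin>.jpi.pinned` marker next to `<plugin>.jpi` as an instruction not to overwrite that plugin with the copy bundled in jenkins.war at startup (see JENKINS-13129, linked in the comment above). The `creates=` argument is what keeps the raw `touch` idempotent, as in:

  - name: create plugin pin files
    command: touch {{ jenkins_home }}/plugins/{{ item }}.jpi.pinned
             creates={{ jenkins_home }}/plugins/{{ item }}.jpi.pinned
    with_items: jenkins_bundled_plugins

With `creates=`, the command is skipped entirely on runs where the marker file already exists, so the task reports "ok" rather than "changed".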
playbooks/roles/jenkins_worker/tasks/jscover.yml

 ---
-- name: jenkins_worker | Install Java
+- name: Install Java
   apt: pkg=openjdk-7-jre-headless state=present

-- name: jenkins_worker | Download JSCover
+- name: Download JSCover
   get_url: url={{ jscover_url }} dest=/var/tmp/jscover.zip

-- name: jenkins_worker | Unzip JSCover
+- name: Unzip JSCover
   shell: unzip /var/tmp/jscover.zip -d /var/tmp/jscover
          creates=/var/tmp/jscover

-- name: jenkins_worker | Install JSCover JAR
+- name: Install JSCover JAR
   command: cp /var/tmp/jscover/target/dist/JSCover-all.jar /usr/local/bin/JSCover-all-{{ jscover_version }}.jar
            creates=/usr/local/bin/JSCover-all-{{ jscover_version }}.jar

-- name: jenkins_worker | Set JSCover permissions
+- name: Set JSCover permissions
   file: path="/usr/local/bin/JSCover-all-{{ jscover_version }}.jar" state=file
         owner=root group=root mode=0755
playbooks/roles/jenkins_worker/tasks/python.yml

 ---
 # Install scripts requiring a GitHub OAuth token
-- name: jenkins_worker | Install requests Python library
+- name: Install requests Python library
   pip: name=requests state=present

-- fail: jenkins_worker | OAuth token not defined
+- fail: OAuth token not defined
   when: github_oauth_token is not defined

-- name: jenkins_worker | Install Python GitHub PR auth script
+- name: Install Python GitHub PR auth script
   template: src="github_pr_auth.py.j2" dest="/usr/local/bin/github_pr_auth.py"
             owner=root group=root
             mode=755

-- name: jenkins_worker | Install Python GitHub post status script
+- name: Install Python GitHub post status script
   template: src="github_post_status.py.j2" dest="/usr/local/bin/github_post_status.py"
             owner=root group=root
             mode=755

 # Create wheelhouse to enable fast virtualenv creation
-- name: jenkins_worker | Create wheel virtualenv
+- name: Create wheel virtualenv
   command: /usr/local/bin/virtualenv {{ jenkins_venv }} creates={{ jenkins_venv }}
   sudo_user: "{{ jenkins_user }}"

-- name: jenkins_worker | Install wheel
+- name: Install wheel
   pip: name=wheel virtualenv={{ jenkins_venv }} virtualenv_command=/usr/local/bin/virtualenv
   sudo_user: "{{ jenkins_user }}"

-- name: jenkins_worker | Create wheelhouse dir
+- name: Create wheelhouse dir
   file: path={{ jenkins_wheel_dir }} state=directory
         owner={{ jenkins_user }} group={{ jenkins_group }} mode=700

 # (need to install each one in the venv to satisfy dependencies)
-- name: jenkins_worker | Create wheel archives
+- name: Create wheel archives
   shell: "{{ jenkins_pip }} wheel --wheel-dir={{ jenkins_wheel_dir }} \"${item.pkg}\" && {{ jenkins_pip }} install --use-wheel --no-index --find-links={{ jenkins_wheel_dir }} \"${item.pkg}\" creates={{ jenkins_wheel_dir }}/${item.wheel}"
   sudo_user: "{{ jenkins_user }}"
-  with_items: "{{ jenkins_wheels }}"
+  with_items: jenkins_wheels

-- name: jenkins_worker | Add wheel_venv.sh script
+- name: Add wheel_venv.sh script
   template: src=wheel_venv.sh.j2 dest={{ jenkins_home }}/wheel_venv.sh
             owner={{ jenkins_user }} group={{ jenkins_group }} mode=700
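The "Create wheel archives" one-liner above does two things per package; split out for readability (same flags, paths as in the task):

  # 1. build a wheel into the shared wheelhouse
  {{ jenkins_pip }} wheel --wheel-dir={{ jenkins_wheel_dir }} "<pkg>"
  # 2. install it into the venv offline, from the wheelhouse only
  {{ jenkins_pip }} install --use-wheel --no-index --find-links={{ jenkins_wheel_dir }} "<pkg>"

The `creates={{ jenkins_wheel_dir }}/${item.wheel}` suffix then skips the whole task on later runs once the expected wheel file exists.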
playbooks/roles/jenkins_worker/tasks/system.yml

 ---
-- name: jenkins_worker | Create jenkins group
+- name: Create jenkins group
   group: name={{ jenkins_group }} state=present

 # The Jenkins account needs a login shell because Jenkins uses scp
-- name: jenkins_worker | Add the jenkins user to the group and configure shell
+- name: Add the jenkins user to the group and configure shell
   user: name={{ jenkins_user }} append=yes group={{ jenkins_group }} shell=/bin/bash

 # Because of a bug in the latest release of the EC2 plugin
 # we need to use a key generated by Amazon (not imported)
 # To satisfy this, we allow users to log in as Jenkins
 # using the same keypair the instance was started with.
-- name: jenkins_worker | Create .ssh directory
+- name: Create .ssh directory
   file: path={{ jenkins_home }}/.ssh state=directory
         owner={{ jenkins_user }} group={{ jenkins_group }}
   ignore_errors: yes

-- name: jenkins_worker | Copy ssh keys for jenkins
+- name: Copy ssh keys for jenkins
   command: cp /home/ubuntu/.ssh/authorized_keys /home/{{ jenkins_user }}/.ssh/authorized_keys
   ignore_errors: yes

-- name: jenkins_worker | Set key permissions
+- name: Set key permissions
   file: path={{ jenkins_home }}/.ssh/authorized_keys
         owner={{ jenkins_user }} group={{ jenkins_group }} mode=400
   ignore_errors: yes

-- name: jenkins_worker | Install system packages
+- name: Install system packages
   apt: pkg={{','.join(jenkins_debian_pkgs)}}
        state=present update_cache=yes

-- name: jenkins_worker | Add script to set up environment variables
+- name: Add script to set up environment variables
   template: src=jenkins_env.j2 dest={{ jenkins_home }}/jenkins_env
             owner={{ jenkins_user }} group={{ jenkins_group }} mode=0500

 # Need to add Github to known_hosts to avoid
 # being prompted when using git through ssh
-- name: jenkins_worker | Add github.com to known_hosts if it does not exist
+- name: Add github.com to known_hosts if it does not exist
   shell: >
     ssh-keygen -f {{ jenkins_home }}/.ssh/known_hosts -H -F github.com | grep -q found || ssh-keyscan -H github.com > {{ jenkins_home }}/.ssh/known_hosts
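Reading the known_hosts one-liner, per standard OpenSSH tool behavior: `ssh-keygen -F github.com` searches the (hashed, `-H`) known_hosts file and prints a "# Host github.com found" line on a hit; `grep -q found` turns that into an exit status; only on a miss does `ssh-keyscan -H github.com` fetch and write the host key. In effect:

  ssh-keygen -f {{ jenkins_home }}/.ssh/known_hosts -H -F github.com | grep -q found \
    || ssh-keyscan -H github.com > {{ jenkins_home }}/.ssh/known_hosts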
playbooks/roles/launch_ec2/tasks/main.yml

@@ -3,7 +3,7 @@
 # Will terminate an instance if one and only one already exists
 # with the same name

-- name: launch_ec2 | lookup tags for terminating existing instance
+- name: lookup tags for terminating existing instance
   local_action:
     module: ec2_lookup
     region: "{{ region }}"
@@ -12,7 +12,7 @@
   register: tag_lookup
   when: terminate_instance == true

-- name: launch_ec2 | checking for other instances
+- name: checking for other instances
   debug: msg="Too many results returned, not terminating!"
   when: terminate_instance == true and tag_lookup.instance_ids|length > 1
@@ -34,7 +34,7 @@
     state: absent
   when: terminate_instance == true and elb and tag_lookup.instance_ids|length == 1

-- name: launch_ec2 | Launch ec2 instance
+- name: Launch ec2 instance
   local_action:
     module: ec2_local
     keypair: "{{ keypair }}"
@@ -49,7 +49,7 @@
     instance_profile_name: "{{ instance_profile_name }}"
   register: ec2

-- name: launch_ec2 | Add DNS name
+- name: Add DNS name
   local_action:
     module: route53
     overwrite: yes
@@ -59,9 +59,9 @@
     ttl: 300
     record: "{{ dns_name }}.{{ dns_zone }}"
     value: "{{ item.public_dns_name }}"
-  with_items: "{{ ec2.instances }}"
+  with_items: ec2.instances

-- name: launch_ec2 | Add DNS name studio
+- name: Add DNS name studio
   local_action:
     module: route53
     overwrite: yes
@@ -71,9 +71,9 @@
     ttl: 300
     record: "studio.{{ dns_name }}.{{ dns_zone }}"
     value: "{{ item.public_dns_name }}"
-  with_items: "{{ ec2.instances }}"
+  with_items: ec2.instances

-- name: launch_ec2 | Add DNS name preview
+- name: Add DNS name preview
   local_action:
     module: route53
     overwrite: yes
@@ -83,17 +83,17 @@
     ttl: 300
     record: "preview.{{ dns_name }}.{{ dns_zone }}"
     value: "{{ item.public_dns_name }}"
-  with_items: "{{ ec2.instances }}"
+  with_items: ec2.instances

-- name: launch_ec2 | Add new instance to host group
+- name: Add new instance to host group
   local_action: >
     add_host
     hostname={{ item.public_ip }}
     groupname=launched
-  with_items: "{{ ec2.instances }}"
+  with_items: ec2.instances

-- name: launch_ec2 | Wait for SSH to come up
+- name: Wait for SSH to come up
   local_action: >
     wait_for
     host={{ item.public_dns_name }}
@@ -101,4 +101,4 @@
     port=22
     delay=60
     timeout=320
-  with_items: "{{ ec2.instances }}"
+  with_items: ec2.instances
playbooks/roles/legacy_ora/tasks/main.yml

@@ -16,14 +16,14 @@
 - fail: msg="secure_dir not defined. This is a path to the secure ora config file."
   when: secure_dir is not defined

-- name: legacy_ora | create ora application config
+- name: create ora application config
   copy: src={{secure_dir}}/files/{{COMMON_ENV_TYPE}}/legacy_ora/ora.env.json
         dest={{ora_app_dir}}/env.json
   sudo_user: "{{ ora_user }}"
   register: env_state

-- name: legacy_ora | create ora auth file
+- name: create ora auth file
   copy: src={{secure_dir}}/files/{{COMMON_ENV_TYPE}}/legacy_ora/ora.auth.json
         dest={{ora_app_dir}}/auth.json
@@ -31,13 +31,13 @@
   register: auth_state

 # Restart ORA Services
-- name: legacy_ora | restart edx-ora
+- name: restart edx-ora
   service: name=edx-ora
            state=restarted
   when: env_state.changed or auth_state.changed

-- name: legacy_ora | restart edx-ora-celery
+- name: restart edx-ora-celery
   service: name=edx-ora-celery
            state=restarted
...
playbooks/roles/local_dev/tasks/main.yml

 ---
-- name: local_dev | install useful system packages
+- name: install useful system packages
   apt: pkg={{','.join(local_dev_pkgs)}} install_recommends=yes
        state=present update_cache=yes

-- name: local_dev | set login shell for app accounts
+- name: set login shell for app accounts
   user: name={{ item.user }} shell="/bin/bash"
-  with_items: "{{ localdev_accounts }}"
+  with_items: localdev_accounts

 # Ensure forum user has permissions to access .gem and .rbenv
 # This is a little twisty: the forum role sets the owner and group to www-data
 # So we add the forum user to the www-data group and give group write permissions
-- name: local_dev | add forum user to www-data group
+- name: add forum user to www-data group
   user: name={{ forum_user }} groups={{ common_web_group }} append=yes

-- name: local_dev | set forum rbenv and gem permissions
+- name: set forum rbenv and gem permissions
   file: path={{ item }} state=directory mode=770
   with_items:
@@ -22,32 +22,32 @@
   - "{{ forum_app_dir }}/.rbenv"

 # Create scripts to configure environment
-- name: local_dev | create login scripts
+- name: create login scripts
   template: src=app_bashrc.j2 dest={{ item.home }}/.bashrc
             owner={{ item.user }} mode=755
-  with_items: "{{ localdev_accounts }}"
+  with_items: localdev_accounts

 # Default to the correct git config
 # No more accidentally force pushing to master! :)
-- name: local_dev | configure git
+- name: configure git
   copy: src=gitconfig dest={{ item.home }}/.gitconfig
         owner={{ item.user }} mode=700
-  with_items: "{{ localdev_accounts }}"
+  with_items: localdev_accounts

 # Configure X11 for application users
-- name: local_dev | preserve DISPLAY for sudo
+- name: preserve DISPLAY for sudo
   copy: src=x11_display dest=/etc/sudoers.d/x11_display
         owner=root group=root mode=0440

-- name: local_dev | login share X11 auth to app users
+- name: login share X11 auth to app users
   template: src=share_x11.j2 dest={{ localdev_home }}/share_x11
             owner={{ localdev_user }} mode=0700

-- name: local_dev | update bashrc with X11 share script
+- name: update bashrc with X11 share script
   lineinfile: dest={{ localdev_home }}/.bashrc
               regexp=". {{ localdev_home }}/share_x11"
...
playbooks/roles/mongo/tasks/main.yml

 ---
-- name: mongo | install python pymongo for mongo_user ansible module
+- name: install python pymongo for mongo_user ansible module
   pip: >
     name=pymongo state=present
     version=2.6.3 extra_args="-i {{ COMMON_PYPI_MIRROR_URL }}"

-- name: mongo | add the mongodb signing key
+- name: add the mongodb signing key
   apt_key: >
     id=7F0CEB10
     url=http://docs.mongodb.org/10gen-gpg-key.asc
     state=present

-- name: mongo | add the mongodb repo to the sources list
+- name: add the mongodb repo to the sources list
   apt_repository: >
     repo='deb http://downloads-distro.mongodb.org/repo/ubuntu-upstart dist 10gen'
     state=present

-- name: mongo | install mongo server and recommends
+- name: install mongo server and recommends
   apt: >
     pkg=mongodb-10gen={{ mongo_version }}
     state=present install_recommends=yes
     update_cache=yes

-- name: mongo | create mongo dirs
+- name: create mongo dirs
   file: >
     path="{{ item }}" state=directory
     owner="{{ mongo_user }}"
@@ -32,14 +32,14 @@
   - "{{ mongo_dbpath }}"
   - "{{ mongo_log_dir }}"

-- name: mongo | stop mongo service
+- name: stop mongo service
   service: name=mongodb state=stopped

-- name: mongo | move mongodb to {{ mongo_data_dir }}
+- name: move mongodb to {{ mongo_data_dir }}
   command: mv /var/lib/mongodb {{ mongo_data_dir}}/. creates={{ mongo_data_dir }}/mongodb

-- name: mongo | copy mongodb key file
+- name: copy mongodb key file
   copy: >
     src={{ secure_dir }}/files/mongo_key
     dest={{ mongo_key_file }}
@@ -48,27 +48,27 @@
     group=mongodb
   when: MONGO_CLUSTERED

-- name: mongo | copy configuration template
+- name: copy configuration template
   template: src=mongodb.conf.j2 dest=/etc/mongodb.conf backup=yes
   notify: restart mongo

-- name: mongo | start mongo service
+- name: start mongo service
   service: name=mongodb state=started

-- name: mongo | wait for mongo server to start
+- name: wait for mongo server to start
   wait_for: port=27017 delay=2

-- name: mongo | Create the file to initialize the mongod replica set
+- name: Create the file to initialize the mongod replica set
   template: src=repset_init.j2 dest=/tmp/repset_init.js
   when: MONGO_CLUSTERED

-- name: mongo | Initialize the replication set
+- name: Initialize the replication set
   shell: /usr/bin/mongo /tmp/repset_init.js
   when: MONGO_CLUSTERED

 # Ignore errors doesn't work because the module throws an exception
 # it doesn't catch.
-- name: mongo | create a mongodb user
+- name: create a mongodb user
   mongodb_user: >
     database={{ item.database }}
     name={{ item.user }}
...
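The replica set bootstrap above pipes a templated JS file through the mongo shell. The template itself (repset_init.j2) is not shown in this diff; a plausible shape, purely for illustration:

  // hypothetical rendered /tmp/repset_init.js
  rs.initiate({
    _id: "rs0",
    members: [
      { _id: 0, host: "mongo1:27017" },
      { _id: 1, host: "mongo2:27017" }
    ]
  })

rs.initiate() only needs to run against one member; the `when: MONGO_CLUSTERED` guard scopes all of the cluster-only tasks, and the key file copied earlier provides the shared cluster authentication secret.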
playbooks/roles/nginx/handlers/main.yml

 ---
-- name: nginx | restart nginx
+- name: restart nginx
   service: name=nginx state=restarted

-- name: nginx | reload nginx
+- name: reload nginx
   service: name=nginx state=reloaded
playbooks/roles/nginx/tasks/main.yml

@@ -2,7 +2,7 @@
 #  - common/tasks/main.yml
 ---
-- name: nginx | create nginx app dirs
+- name: create nginx app dirs
   file: >
     path="{{ item }}"
     state=directory
@@ -12,9 +12,9 @@
   - "{{ nginx_app_dir }}"
   - "{{ nginx_sites_available_dir }}"
   - "{{ nginx_sites_enabled_dir }}"
-  notify: nginx | restart nginx
+  notify: restart nginx

-- name: nginx | create nginx data dirs
+- name: create nginx data dirs
   file: >
     path="{{ item }}"
     state=directory
@@ -23,66 +23,66 @@
   with_items:
   - "{{ nginx_data_dir }}"
   - "{{ nginx_log_dir }}"
-  notify: nginx | restart nginx
+  notify: restart nginx

-- name: nginx | Install nginx packages
+- name: Install nginx packages
   apt: pkg={{','.join(nginx_debian_pkgs)}} state=present
-  notify: nginx | restart nginx
+  notify: restart nginx

-- name: nginx | Server configuration file
+- name: Server configuration file
   template: >
     src=nginx.conf.j2 dest=/etc/nginx/nginx.conf
     owner=root group={{ common_web_user }} mode=0644
-  notify: nginx | reload nginx
+  notify: reload nginx

-- name: nginx | Creating common nginx configuration
+- name: Creating common nginx configuration
   template: >
     src=edx-release.j2 dest={{ nginx_sites_available_dir }}/edx-release
     owner=root group=root mode=0600
-  notify: nginx | reload nginx
+  notify: reload nginx

-- name: nginx | Creating link for common nginx configuration
+- name: Creating link for common nginx configuration
   file: >
     src={{ nginx_sites_available_dir }}/edx-release
     dest={{ nginx_sites_enabled_dir }}/edx-release
     state=link owner=root group=root
-  notify: nginx | reload nginx
+  notify: reload nginx

-- name: nginx | Copying nginx configs for {{ nginx_sites }}
+- name: Copying nginx configs for {{ nginx_sites }}
   template: >
     src={{ item }}.j2 dest={{ nginx_sites_available_dir }}/{{ item }}
     owner=root group={{ common_web_user }} mode=0640
-  notify: nginx | reload nginx
+  notify: reload nginx
   with_items: nginx_sites

-- name: nginx | Creating nginx config links for {{ nginx_sites }}
+- name: Creating nginx config links for {{ nginx_sites }}
   file: >
     src={{ nginx_sites_available_dir }}/{{ item }}
     dest={{ nginx_sites_enabled_dir }}/{{ item }}
     state=link owner=root group=root
-  notify: nginx | reload nginx
+  notify: reload nginx
   with_items: nginx_sites

-- name: nginx | Write out htpasswd file
+- name: Write out htpasswd file
   htpasswd: >
     name={{ NGINX_HTPASSWD_USER }}
     password={{ NGINX_HTPASSWD_PASS }}
     path={{ nginx_htpasswd_file }}
   when: NGINX_HTPASSWD_USER and NGINX_HTPASSWD_PASS

-- name: nginx | Create nginx log file location (just in case)
+- name: Create nginx log file location (just in case)
   file: >
     path={{ nginx_log_dir}} state=directory
     owner={{ common_web_user }} group={{ common_web_user }}

-- name: nginx | copy ssl cert
+- name: copy ssl cert
   copy: >
     src={{ NGINX_SSL_CERTIFICATE }}
     dest=/etc/ssl/certs/{{ item|basename }}
     owner=root group=root mode=0644
   when: NGINX_ENABLE_SSL and NGINX_SSL_CERTIFICATE != 'ssl-cert-snakeoil.pem'

-- name: nginx | copy ssl key
+- name: copy ssl key
   copy: >
     src={{ NGINX_SSL_KEY }}
     dest=/etc/ssl/private/{{ item|basename }}
@@ -91,18 +91,18 @@
 # removing default link
-- name: nginx | Removing default nginx config and restart (enabled)
+- name: Removing default nginx config and restart (enabled)
   file: path={{ nginx_sites_enabled_dir }}/default state=absent
-  notify: nginx | reload nginx
+  notify: reload nginx

 # Note that nginx logs to /var/log until it reads its configuration, so /etc/logrotate.d/nginx is still good
-- name: nginx | Set up nginx access log rotation
+- name: Set up nginx access log rotation
   template: >
     dest=/etc/logrotate.d/nginx-access src=edx_logrotate_nginx_access.j2
     owner=root group=root mode=644

-- name: nginx | Set up nginx access log rotation
+- name: Set up nginx access log rotation
   template: >
     dest=/etc/logrotate.d/nginx-error src=edx_logrotate_nginx_error.j2
     owner=root group=root mode=644
@@ -110,10 +110,10 @@
 # If tasks that notify restart nginx don't change the state of the remote system
 # their corresponding notifications don't get run.  If nginx has been stopped for
 # any reason, this will ensure that it is started up again.
-- name: nginx | make sure nginx has started
+- name: make sure nginx has started
   service: name=nginx state=started
   when: start_services

-- name: nginx | make sure nginx has stopped
+- name: make sure nginx has stopped
   service: name=nginx state=stopped
   when: not start_services
playbooks/roles/notifier/handlers/main.yml

 ---
-- name: notifier | restart notifier-scheduler
+- name: restart notifier-scheduler
   supervisorctl_local: >
     name=notifier-scheduler
     state=restarted
     config={{ supervisor_cfg }}
     supervisorctl_path={{ supervisor_ctl }}

-- name: notifier | restart notifier-celery-workers
+- name: restart notifier-celery-workers
   supervisorctl_local: >
     name=notifier-celery-workers
     state=restarted
...
playbooks/roles/notifier/tasks/deploy.yml

 ---
-- name: notifier | checkout code
+- name: checkout code
   git: dest={{ NOTIFIER_CODE_DIR }} repo={{ NOTIFIER_SOURCE_REPO }}
        version={{ NOTIFIER_VERSION }}
   sudo: true
   sudo_user: "{{ NOTIFIER_USER }}"
   notify:
-  - notifier | restart notifier-scheduler
-  - notifier | restart notifier-celery-workers
+  - restart notifier-scheduler
+  - restart notifier-celery-workers

-- name: notifier | source repo group perms
+- name: source repo group perms
   file: path={{ NOTIFIER_SOURCE_REPO }} mode=2775 state=directory

-- name: notifier | install application requirements
+- name: install application requirements
   pip: requirements="{{ NOTIFIER_REQUIREMENTS_FILE }}"
        virtualenv="{{ NOTIFIER_VENV_DIR }}" state=present
   sudo: true
   sudo_user: "{{ NOTIFIER_USER }}"
   notify:
-  - notifier | restart notifier-scheduler
-  - notifier | restart notifier-celery-workers
+  - restart notifier-scheduler
+  - restart notifier-celery-workers

 # Syncdb for whatever reason always creates the file owned by www-data:www-data, and then
 # complains it can't write because it's running as notifier.  So this is to touch the file into
 # place with proper perms first.
-- name: notifier | fix permissions on notifer db file
+- name: fix permissions on notifer db file
   file: >
     path={{ NOTIFIER_DB_DIR }}/notifier.db state=touch owner={{ NOTIFIER_USER }} group={{ NOTIFIER_WEB_USER }}
     mode=0664
   sudo: true
   notify:
-  - notifier | restart notifier-scheduler
-  - notifier | restart notifier-celery-workers
+  - restart notifier-scheduler
+  - restart notifier-celery-workers
   tags:
   - deploy

-- name: notifier | syncdb
+- name: syncdb
   shell: >
     cd {{ NOTIFIER_CODE_DIR }} && {{ NOTIFIER_VENV_DIR }}/bin/python manage.py syncdb
   sudo: true
   sudo_user: "{{ NOTIFIER_USER }}"
   environment: notifier_env_vars
   notify:
-  - notifier | restart notifier-scheduler
-  - notifier | restart notifier-celery-workers
+  - restart notifier-scheduler
+  - restart notifier-celery-workers
playbooks/roles/notifier/tasks/main.yml

@@ -17,86 +17,86 @@
 #   - common
 #   - notifier
 #
-- name: notifier | install notifier specific system packages
+- name: install notifier specific system packages
   apt: pkg={{','.join(notifier_debian_pkgs)}} state=present

-- name: notifier | check if incommon ca is installed
+- name: check if incommon ca is installed
   command: test -e /usr/share/ca-certificates/incommon/InCommonServerCA.crt
   register: incommon_present
   ignore_errors: yes

-- name: common | create incommon ca directory
+- name: create incommon ca directory
   file: path="/usr/share/ca-certificates/incommon" mode=2775 state=directory
   when: incommon_present|failed

-- name: common | retrieve incommon server CA
+- name: retrieve incommon server CA
   shell: curl https://www.incommon.org/cert/repository/InCommonServerCA.txt -o /usr/share/ca-certificates/incommon/InCommonServerCA.crt
   when: incommon_present|failed

-- name: common | add InCommon ca cert
+- name: add InCommon ca cert
   lineinfile: dest=/etc/ca-certificates.conf
               regexp='incommon/InCommonServerCA.crt'
               line='incommon/InCommonServerCA.crt'

-- name: common | update ca certs globally
+- name: update ca certs globally
   shell: update-ca-certificates

-- name: notifier | create notifier user {{ NOTIFIER_USER }}
+- name: create notifier user {{ NOTIFIER_USER }}
   user: name={{ NOTIFIER_USER }} state=present shell=/bin/bash
         home={{ NOTIFIER_HOME }} createhome=yes

-- name: notifier | setup the notifier env
+- name: setup the notifier env
   template: src=notifier_env.j2 dest={{ NOTIFIER_HOME }}/notifier_env
             owner="{{ NOTIFIER_USER }}" group="{{ NOTIFIER_USER }}"

-- name: notifier | drop a bash_profile
+- name: drop a bash_profile
   copy: >
     src=../../common/files/bash_profile
     dest={{ NOTIFIER_HOME }}/.bash_profile
     owner={{ NOTIFIER_USER }}
     group={{ NOTIFIER_USER }}

-- name: notifier | ensure .bashrc exists
+- name: ensure .bashrc exists
   shell: touch {{ NOTIFIER_HOME }}/.bashrc
   sudo: true
   sudo_user: "{{ NOTIFIER_USER }}"

-- name: notifier | add source of notifier_env to .bashrc
+- name: add source of notifier_env to .bashrc
   lineinfile: dest={{ NOTIFIER_HOME }}/.bashrc
               regexp='. {{ NOTIFIER_HOME }}/notifier_env'
               line='. {{ NOTIFIER_HOME }}/notifier_env'

-- name: notifier | add source venv to .bashrc
+- name: add source venv to .bashrc
   lineinfile: dest={{ NOTIFIER_HOME }}/.bashrc
               regexp='. {{ NOTIFIER_VENV_DIR }}/bin/activate'
               line='. {{ NOTIFIER_VENV_DIR }}/bin/activate'

-- name: notifier | create notifier DB directory
+- name: create notifier DB directory
   file: path="{{ NOTIFIER_DB_DIR }}" mode=2775 state=directory owner={{ NOTIFIER_USER }} group={{ NOTIFIER_WEB_USER }}

-- name: notifier | create notifier/bin directory
+- name: create notifier/bin directory
   file: path="{{ NOTIFIER_HOME }}/bin" mode=2775 state=directory owner={{ NOTIFIER_USER }} group={{ NOTIFIER_USER }}

-- name: notifier | supervisord config for celery workers
+- name: supervisord config for celery workers
   template: >
     src=edx/app/supervisor/conf.d/notifier-celery-workers.conf.j2
     dest="{{ supervisor_cfg_dir }}/notifier-celery-workers.conf"
   sudo_user: "{{ supervisor_user }}"
-  notify: notifier | restart notifier-celery-workers
+  notify: restart notifier-celery-workers

-- name: notifier | supervisord config for scheduler
+- name: supervisord config for scheduler
   template: >
     src=edx/app/supervisor/conf.d/notifier-scheduler.conf.j2
     dest="{{ supervisor_cfg_dir }}/notifier-scheduler.conf"
   sudo_user: "{{ supervisor_user }}"
-  notify: notifier | restart notifier-scheduler
+  notify: restart notifier-scheduler

 - include: deploy.yml tags=deploy
playbooks/roles/ora/handlers/main.yml

 ---
-- name: ora | restart ora
+- name: restart ora
   supervisorctl_local: >
     name=ora
     supervisorctl_path={{ supervisor_ctl }}
@@ -7,7 +7,7 @@
     state=restarted
   when: start_services and ora_installed is defined and not devstack

-- name: ora | restart ora_celery
+- name: restart ora_celery
   supervisorctl_local: >
     name=ora_celery
     supervisorctl_path={{ supervisor_ctl }}
...
playbooks/roles/ora/tasks/deploy.yml

-- name: ora | create supervisor scripts - ora, ora_celery
+- name: create supervisor scripts - ora, ora_celery
   template: >
     src={{ item }}.conf.j2 dest={{ supervisor_cfg_dir }}/{{ item }}.conf
     owner={{ supervisor_user }} group={{ common_web_user }} mode=0644
   notify:
-  - ora | restart ora
-  - ora | restart ora_celery
+  - restart ora
+  - restart ora_celery
   with_items: ['ora', 'ora_celery']
   when: not devstack

 - include: ease.yml

-- name: ora | create ora application config
+- name: create ora application config
   template: src=ora.env.json.j2 dest={{ora_app_dir}}/ora.env.json
   sudo_user: "{{ ora_user }}"

-- name: ora | create ora auth file
+- name: create ora auth file
   template: src=ora.auth.json.j2 dest={{ora_app_dir}}/ora.auth.json
   sudo_user: "{{ ora_user }}"

-- name: ora | setup the ora env
+- name: setup the ora env
   notify:
-  - "ora | restart ora"
-  - "ora | restart ora_celery"
+  - "restart ora"
+  - "restart ora_celery"
   template: >
     src=ora_env.j2 dest={{ ora_app_dir }}/ora_env
     owner={{ ora_user }} group={{ common_web_user }}
     mode=0644

 # Do A Checkout
-- name: ora | git checkout ora repo into {{ ora_app_dir }}
+- name: git checkout ora repo into {{ ora_app_dir }}
   git: dest={{ ora_code_dir }} repo={{ ora_source_repo }} version={{ ora_version }}
   sudo_user: "{{ ora_user }}"
   notify:
-  - ora | restart ora
-  - ora | restart ora_celery
+  - restart ora
+  - restart ora_celery

 # TODO: Check git.py _run_if_changed() to see if the logic there to skip running certain
 # portions of the deploy needs to be incorporated here.

 # Install the python pre requirements into {{ ora_venv_dir }}
-- name: ora | install python pre-requirements
+- name: install python pre-requirements
   pip: requirements="{{ ora_pre_requirements_file }}" virtualenv="{{ ora_venv_dir }}" state=present
   sudo_user: "{{ ora_user }}"
   notify:
-  - ora | restart ora
-  - ora | restart ora_celery
+  - restart ora
+  - restart ora_celery

 # Install the python post requirements into {{ ora_venv_dir }}
-- name: ora | install python post-requirements
+- name: install python post-requirements
   pip: requirements="{{ ora_post_requirements_file }}" virtualenv="{{ ora_venv_dir }}" state=present
   sudo_user: "{{ ora_user }}"
   notify:
-  - ora | restart ora
-  - ora | restart ora_celery
+  - restart ora
+  - restart ora_celery

 #Needed if using redis to prevent memory issues
-- name: ora | change memory commit settings -- needed for redis
+- name: change memory commit settings -- needed for redis
   command: sysctl vm.overcommit_memory=1
   notify:
-  - ora | restart ora
-  - ora | restart ora_celery
+  - restart ora
+  - restart ora_celery

-- name: ora | syncdb and migrate
+- name: syncdb and migrate
   shell: SERVICE_VARIANT=ora {{ora_venv_dir}}/bin/django-admin.py syncdb --migrate --noinput --settings=edx_ora.aws --pythonpath={{ora_code_dir}}
   when: migrate_db is defined and migrate_db|lower == "yes"
   sudo_user: "{{ ora_user }}"
   notify:
-  - ora | restart ora
-  - ora | restart ora_celery
+  - restart ora
+  - restart ora_celery

-- name: ora | create users
+- name: create users
   shell: SERVICE_VARIANT=ora {{ora_venv_dir}}/bin/django-admin.py update_users --settings=edx_ora.aws --pythonpath={{ora_code_dir}}
   sudo_user: "{{ ora_user }}"
   notify:
-  - ora | restart ora
-  - ora | restart ora_celery
+  - restart ora
+  - restart ora_celery

 # call supervisorctl update. this reloads
@@ -83,13 +83,13 @@
 # the services if any of the configurations
 # have changed.
 #
-- name: ora | update supervisor configuration
+- name: update supervisor configuration
   shell: "{{ supervisor_ctl }} -c {{ supervisor_cfg }} update"
   register: supervisor_update
   when: start_services and not devstack
   changed_when: supervisor_update.stdout != ""

-- name: ora | ensure ora is started
+- name: ensure ora is started
   supervisorctl_local: >
     name=ora
     supervisorctl_path={{ supervisor_ctl }}
@@ -97,7 +97,7 @@
     state=started
   when: start_services and not devstack

-- name: ora | ensure ora_celery is started
+- name: ensure ora_celery is started
   supervisorctl_local: >
     name=ora_celery
     supervisorctl_path={{ supervisor_ctl }}
...
playbooks/roles/ora/tasks/ease.yml

 # Do A Checkout
-- name: ora | git checkout ease repo into its base dir
+- name: git checkout ease repo into its base dir
   git: dest={{ora_ease_code_dir}} repo={{ora_ease_source_repo}} version={{ora_ease_version}}
   sudo_user: "{{ ora_user }}"
   notify:
-  - ora | restart ora
-  - ora | restart ora_celery
+  - restart ora
+  - restart ora_celery

-- name: ora | install ease system packages
+- name: install ease system packages
   apt: pkg={{item}} state=present
   with_items: ora_ease_debian_pkgs
   notify:
-  - ora | restart ora
-  - ora | restart ora_celery
+  - restart ora
+  - restart ora_celery

 # Install the python pre requirements into {{ ora_ease_venv_dir }}
-- name: ora | install ease python pre-requirements
+- name: install ease python pre-requirements
   pip: requirements="{{ora_ease_pre_requirements_file}}" virtualenv="{{ora_ease_venv_dir}}" state=present
   sudo_user: "{{ ora_user }}"
   notify:
-  - ora | restart ora
-  - ora | restart ora_celery
+  - restart ora
+  - restart ora_celery

 # Install the python post requirements into {{ ora_ease_venv_dir }}
-- name: ora | install ease python post-requirements
+- name: install ease python post-requirements
   pip: requirements="{{ora_ease_post_requirements_file}}" virtualenv="{{ora_ease_venv_dir}}" state=present
   sudo_user: "{{ ora_user }}"
   notify:
-  - ora | restart ora
-  - ora | restart ora_celery
+  - restart ora
+  - restart ora_celery

-- name: ora | install ease python package
+- name: install ease python package
   shell: >
     . {{ ora_ease_venv_dir }}/bin/activate; cd {{ ora_ease_code_dir }}; python setup.py install
   sudo_user: "{{ ora_user }}"
   notify:
-  - ora | restart ora
-  - ora | restart ora_celery
+  - restart ora
+  - restart ora_celery

-- name: ora | download and install nltk
+- name: download and install nltk
   shell: |
     set -e
     curl -o {{ ora_nltk_tmp_file }} {{ ora_nltk_download_url }}
@@ -49,5 +49,5 @@
     chdir={{ ora_data_dir }}
   sudo_user: "{{ common_web_user }}"
   notify:
-  - ora | restart ora
-  - ora | restart ora_celery
+  - restart ora
+  - restart ora_celery
playbooks/roles/ora/tasks/main.yml

@@ -3,49 +3,49 @@
 #  - common/tasks/main.yml
 ---
-- name: ora | create application user
+- name: create application user
   user: >
     name="{{ ora_user }}" home="{{ ora_app_dir }}"
     createhome=no shell=/bin/false
   notify:
-  - ora | restart ora
-  - ora | restart ora_celery
+  - restart ora
+  - restart ora_celery

-- name: ora | create ora app dir
+- name: create ora app dir
   file: >
     path="{{ item }}" state=directory
     owner="{{ ora_user }}" group="{{ common_web_group }}"
   notify:
-  - ora | restart ora
-  - ora | restart ora_celery
+  - restart ora
+  - restart ora_celery
   with_items:
   - "{{ ora_venvs_dir }}"
   - "{{ ora_app_dir }}"

-- name: ora | create ora data dir, owned by {{ common_web_user }}
+- name: create ora data dir, owned by {{ common_web_user }}
   file: >
     path="{{ item }}" state=directory
     owner="{{ common_web_user }}" group="{{ common_web_group }}"
   notify:
-  - ora | restart ora
-  - ora | restart ora_celery
+  - restart ora
+  - restart ora_celery
   with_items:
   - "{{ ora_data_dir }}"
   - "{{ ora_data_course_dir }}"
   - "{{ ora_app_dir }}/ml_models"

-- name: ora | install debian packages that ora needs
+- name: install debian packages that ora needs
   apt: pkg={{item}} state=present
   notify:
-  - ora | restart ora
-  - ora | restart ora_celery
+  - restart ora
+  - restart ora_celery
   with_items: ora_debian_pkgs

-- name: ora | install debian packages for ease that ora needs
+- name: install debian packages for ease that ora needs
   apt: pkg={{item}} state=present
   notify:
-  - ora | restart ora
-  - ora | restart ora_celery
+  - restart ora
+  - restart ora_celery
   with_items: ora_ease_debian_pkgs

 - include: deploy.yml tags=deploy
...
playbooks/roles/oraclejdk/tasks/main.yml
...
@@ -12,12 +12,12 @@
  # - common
  # - oraclejdk
- - name: oraclejdk | check for Oracle Java version {{ oraclejdk_base }}
+ - name: check for Oracle Java version {{ oraclejdk_base }}
    command: test -d /usr/lib/jvm/{{ oraclejdk_base }}
    ignore_errors: true
    register: oraclejdk_present

- - name: oraclejdk | download Oracle Java
+ - name: download Oracle Java
    shell: >
      curl -b gpw_e24=http%3A%2F%2Fwww.oracle.com -O -L {{ oraclejdk_url }}
      executable=/bin/bash
...
@@ -25,7 +25,7 @@
      creates=/var/tmp/{{ oraclejdk_file }}
    when: oraclejdk_present|failed

- - name: oraclejdk | install Oracle Java
+ - name: install Oracle Java
    shell: >
      mkdir -p /usr/lib/jvm && tar -C /usr/lib/jvm -zxvf /var/tmp/{{ oraclejdk_file }}
      creates=/usr/lib/jvm/{{ oraclejdk_base }}
...
@@ -34,10 +34,10 @@
    sudo: true
    when: oraclejdk_present|failed

- - name: oraclejdk | create symlink expected by elasticsearch
+ - name: create symlink expected by elasticsearch
    file: src=/usr/lib/jvm/{{ oraclejdk_base }} dest={{ oraclejdk_link }} state=link
    when: oraclejdk_present|failed

- - name: oraclejdk | add JAVA_HOME for Oracle Java
+ - name: add JAVA_HOME for Oracle Java
    template: src=java.sh.j2 dest=/etc/profile.d/java.sh owner=root group=root mode=0755
    when: oraclejdk_present|failed
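The oraclejdk role is a probe-then-act sequence: a `test -d` registered into `oraclejdk_present` with `ignore_errors: true`, and every install step guarded by `when: oraclejdk_present|failed`. A minimal sketch of the same guard pattern, with illustrative paths and names:

    - name: check for an existing install
      command: test -d /opt/mytool        # a non-zero exit only marks the result as failed
      ignore_errors: true
      register: mytool_present

    - name: install only when the probe failed
      shell: /tmp/install-mytool.sh creates=/opt/mytool
      when: mytool_present|failed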
playbooks/roles/rabbitmq/tasks/main.yml
...
@@ -3,80 +3,80 @@
  # There is a bug with initializing multiple nodes in the HA cluster at once
  # http://rabbitmq.1065348.n5.nabble.com/Rabbitmq-boot-failure-with-quot-tables-not-present-quot-td24494.html
- - name: rabbitmq | trust rabbit repository
+ - name: trust rabbit repository
    apt_key: url={{rabbitmq_apt_key}} state=present

- - name: rabbitmq | install python-software-properties if debian
+ - name: install python-software-properties if debian
    apt: pkg={{",".join(rabbitmq_debian_pkgs)}} state=present

- - name: rabbitmq | add rabbit repository
+ - name: add rabbit repository
    apt_repository: repo="{{rabbitmq_repository}}" state=present

- - name: rabbitmq | install rabbitmq
+ - name: install rabbitmq
    apt: pkg={{rabbitmq_pkg}} state=present update_cache=yes

- - name: rabbitmq | stop rabbit cluster
+ - name: stop rabbit cluster
    service: name=rabbitmq-server state=stopped

  # in case there are lingering processes, ignore errors
  # silently
- - name: rabbitmq | send sigterm to any running rabbitmq processes
+ - name: send sigterm to any running rabbitmq processes
    shell: pkill -u rabbitmq || true

  # Defaulting to /var/lib/rabbitmq
- - name: rabbitmq | create cookie directory
+ - name: create cookie directory
    file: >
      path={{rabbitmq_cookie_dir}}
      owner=rabbitmq group=rabbitmq mode=0755 state=directory

- - name: rabbitmq | add rabbitmq erlang cookie
+ - name: add rabbitmq erlang cookie
    template: >
      src=erlang.cookie.j2 dest={{rabbitmq_cookie_location}}
      owner=rabbitmq group=rabbitmq mode=0400
    register: erlang_cookie

  # Defaulting to /etc/rabbitmq
- - name: rabbitmq | create rabbitmq config directory
+ - name: create rabbitmq config directory
    file: >
      path={{rabbitmq_config_dir}}
      owner=root group=root mode=0755 state=directory

- - name: rabbitmq | add rabbitmq environment configuration
+ - name: add rabbitmq environment configuration
    template: >
      src=rabbitmq-env.conf.j2 dest={{rabbitmq_config_dir}}/rabbitmq-env.conf
      owner=root group=root mode=0644

- - name: rabbitmq | add rabbitmq cluster configuration
+ - name: add rabbitmq cluster configuration
    template: >
      src=rabbitmq.config.j2 dest={{rabbitmq_config_dir}}/rabbitmq.config
      owner=root group=root mode=0644
    register: cluster_configuration

- - name: rabbitmq | install plugins
+ - name: install plugins
    rabbitmq_plugin: names={{",".join(rabbitmq_plugins)}} state=enabled

  # When rabbitmq starts up it creates a folder of metadata at '/var/lib/rabbitmq/mnesia'.
  # This folder should be deleted before clustering is setup because it retains data
  # that can conflict with the clustering information.
- - name: rabbitmq | remove mnesia configuration
+ - name: remove mnesia configuration
    file: path={{rabbitmq_mnesia_folder}} state=absent
    when: erlang_cookie.changed or cluster_configuration.changed or rabbitmq_refresh

- - name: rabbitmq | start rabbit nodes
+ - name: start rabbit nodes
    service: name=rabbitmq-server state=restarted

- - name: rabbitmq | wait for rabbit to start
+ - name: wait for rabbit to start
    wait_for: port={{ rabbitmq_management_port }} delay=2

- - name: rabbitmq | remove guest user
+ - name: remove guest user
    rabbitmq_user: user="guest" state=absent

- - name: rabbitmq | add vhosts
+ - name: add vhosts
    rabbitmq_vhost: name={{ item }} state=present
    with_items: RABBITMQ_VHOSTS

- - name: rabbitmq | add admin users
+ - name: add admin users
    rabbitmq_user: >
      user='{{item[0].name}}' password='{{item[0].password}}'
      read_priv='.*' write_priv='.*'
...
@@ -87,23 +87,23 @@
      - RABBITMQ_VHOSTS
    when: "'admins' in rabbitmq_auth_config"

- - name: rabbitmq | make queues mirrored
+ - name: make queues mirrored
    shell: "/usr/sbin/rabbitmqctl set_policy HA '^(?!amq\\.).*' '{\"ha-mode\": \"all\"}'"
    when: RABBITMQ_CLUSTERED or rabbitmq_clustered_hosts|length > 1

  #
  # Depends upon the management plugin
  #
- - name: rabbitmq | install admin tools
+ - name: install admin tools
    get_url: >
      url=http://localhost:{{ rabbitmq_management_port }}/cli/rabbitmqadmin
      dest=/usr/local/bin/rabbitmqadmin

- - name: rabbitmq | ensure rabbitmqadmin attributes
+ - name: ensure rabbitmqadmin attributes
    file: >
      path=/usr/local/bin/rabbitmqadmin owner=root
      group=root mode=0655

- - name: rabbitmq | stop rabbit nodes
+ - name: stop rabbit nodes
    service: name=rabbitmq-server state=restarted
    when: not start_services
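Two details worth noting in the rabbitmq role: the mnesia directory is wiped only when the registered erlang-cookie or cluster-config templates actually changed, and the mirrored-queues policy uses a negative lookahead so RabbitMQ's built-in `amq.*` queues are left alone. A standalone sketch of the policy task, assuming the same `rabbitmqctl set_policy` invocation as above:

    # 'HA' is the policy name; '^(?!amq\.).*' matches every queue except
    # the built-in amq.* queues
    - name: mirror all non-default queues
      shell: "/usr/sbin/rabbitmqctl set_policy HA '^(?!amq\\.).*' '{\"ha-mode\": \"all\"}'"
      when: rabbitmq_clustered_hosts|length > 1   # a single node has nothing to mirror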
playbooks/roles/rbenv/tasks/main.yml
...
@@ -34,95 +34,95 @@
  - fail: rbenv_ruby_version required for role
    when: rbenv_ruby_version is not defined

- - name: rbenv | create rbenv user {{ rbenv_user }}
+ - name: create rbenv user {{ rbenv_user }}
    user: >
      name={{ rbenv_user }} home={{ rbenv_dir }}
      shell=/bin/false createhome=no
    when: rbenv_user != common_web_user

- - name: rbenv | create rbenv dir if it does not exist
+ - name: create rbenv dir if it does not exist
    file: >
      path="{{ rbenv_dir }}" owner="{{ rbenv_user }}"
      state=directory

- - name: rbenv | install build depends
+ - name: install build depends
    apt: pkg={{ ",".join(rbenv_debian_pkgs) }} state=present install_recommends=no
    with_items: rbenv_debian_pkgs

- - name: rbenv | update rbenv repo
+ - name: update rbenv repo
    git: >
      repo=https://github.com/sstephenson/rbenv.git
      dest={{ rbenv_dir }}/.rbenv version={{ rbenv_version }}
    sudo_user: "{{ rbenv_user }}"

- - name: rbenv | ensure ruby_env exists
+ - name: ensure ruby_env exists
    template: >
      src=ruby_env.j2 dest={{ rbenv_dir }}/ruby_env
    sudo_user: "{{ rbenv_user }}"

- - name: rbenv | check ruby-build installed
+ - name: check ruby-build installed
    command: test -x /usr/local/bin/ruby-build
    register: rbuild_present
    ignore_errors: yes

- - name: rbenv | if ruby-build exists, which versions we can install
+ - name: if ruby-build exists, which versions we can install
    command: /usr/local/bin/ruby-build --definitions
    when: rbuild_present|success
    register: installable_ruby_vers
    ignore_errors: yes

  ### in this block, we (re)install ruby-build if it doesn't exist or if it can't install the requested version
- - name: rbenv | create temporary directory
+ - name: create temporary directory
    command: mktemp -d
    register: tempdir
    sudo_user: "{{ rbenv_user }}"
    when: rbuild_present|failed or (installable_ruby_vers is defined and rbenv_ruby_version not in installable_ruby_vers)

- - name: rbenv | clone ruby-build repo
+ - name: clone ruby-build repo
    git: repo=https://github.com/sstephenson/ruby-build.git dest={{ tempdir.stdout }}/ruby-build
    when: rbuild_present|failed or (installable_ruby_vers is defined and rbenv_ruby_version not in installable_ruby_vers)
    sudo_user: "{{ rbenv_user }}"

- - name: rbenv | install ruby-build
+ - name: install ruby-build
    command: ./install.sh chdir={{ tempdir.stdout }}/ruby-build
    when: rbuild_present|failed or (installable_ruby_vers is defined and rbenv_ruby_version not in installable_ruby_vers)

- - name: rbenv | remove temporary directory
+ - name: remove temporary directory
    file: path={{ tempdir.stdout }} state=absent
    when: rbuild_present|failed or (installable_ruby_vers is defined and rbenv_ruby_version not in installable_ruby_vers)

- - name: rbenv | check ruby {{ rbenv_ruby_version }} installed
+ - name: check ruby {{ rbenv_ruby_version }} installed
    shell: "rbenv versions | grep {{ rbenv_ruby_version }}"
    register: ruby_installed
    sudo_user: "{{ rbenv_user }}"
    environment: "{{ rbenv_environment }}"
    ignore_errors: yes

- - name: rbenv | install ruby {{ rbenv_ruby_version }}
+ - name: install ruby {{ rbenv_ruby_version }}
    shell: "rbenv install {{ rbenv_ruby_version }} creates={{ rbenv_dir }}/.rbenv/versions/{{ rbenv_ruby_version }}"
    when: ruby_installed|failed
    sudo_user: "{{ rbenv_user }}"
    environment: "{{ rbenv_environment }}"

- - name: rbenv | set global ruby {{ rbenv_ruby_version }}
+ - name: set global ruby {{ rbenv_ruby_version }}
    shell: "rbenv global {{ rbenv_ruby_version }}"
    sudo_user: "{{ rbenv_user }}"
    environment: "{{ rbenv_environment }}"

- - name: rbenv | install bundler
+ - name: install bundler
    shell: "gem install bundler -v {{ rbenv_bundler_version }}"
    sudo_user: "{{ rbenv_user }}"
    environment: "{{ rbenv_environment }}"

- - name: rbenv | remove rbenv version of rake
+ - name: remove rbenv version of rake
    file: path="{{ rbenv_dir }}/.rbenv/versions/{{ rbenv_ruby_version }}/bin/rake" state=absent

- - name: rbenv | install rake gem
+ - name: install rake gem
    shell: "gem install rake -v {{ rbenv_rake_version }}"
    sudo_user: "{{ rbenv_user }}"
    environment: "{{ rbenv_environment }}"

- - name: rbenv | rehash
+ - name: rehash
    shell: "rbenv rehash"
    sudo_user: "{{ rbenv_user }}"
    environment: "{{ rbenv_environment }}"
playbooks/roles/s3fs/tasks/main.yml
...
@@ -25,17 +25,17 @@
  #
  # The role would need to include tasks like the following
  #
- # - name: my_role | create s3fs mount points
+ # - name: create s3fs mount points
  #   file:
  #     path={{ item.mount_point }} owner={{ item.owner }}
  #     group={{ item.group }} mode={{ item.mode }} state="directory"
- #   with_items: "{{ my_role_s3fs_mounts }}"
+ #   with_items: my_role_s3fs_mounts
  #
- # - name: my_role | mount s3 buckets
+ # - name: mount s3 buckets
  #   mount:
  #     name={{ item.mount_point }} src={{ item.bucket }} fstype=fuse.s3fs
  #     opts=use_cache=/tmp,iam_role={{ task_iam_role }},allow_other state=mounted
- #   with_items: "{{ myrole_s3fs_mounts }}"
+ #   with_items: myrole_s3fs_mounts
  #
  # Example play:
  #
...
@@ -53,37 +53,37 @@
  # - s3fs
  #
- - name: s3fs | install system packages
+ - name: install system packages
    apt: pkg={{','.join(s3fs_debian_pkgs)}} state=present
    tags:
      - s3fs
      - install
      - update

- - name: s3fs | fetch package
+ - name: fetch package
    get_url:
      url={{ s3fs_download_url }}
      dest={{ s3fs_temp_dir }}

- - name: s3fs | extract package
+ - name: extract package
    shell:
      /bin/tar -xzf {{ s3fs_archive }}
      chdir={{ s3fs_temp_dir }}
      creates={{ s3fs_temp_dir }}/{{ s3fs_version }}/configure

- - name: s3fs | configure
+ - name: configure
    shell:
      ./configure
      chdir={{ s3fs_temp_dir }}/{{ s3fs_version }}
      creates={{ s3fs_temp_dir }}/{{ s3fs_version }}/config.status

- - name: s3fs | make
+ - name: make
    shell:
      /usr/bin/make
      chdir={{ s3fs_temp_dir }}/{{ s3fs_version }}
      creates={{ s3fs_temp_dir }}/{{ s3fs_version }}/src/s3cmd

- - name: s3fs | make install
+ - name: make install
    shell:
      /usr/bin/make install
      chdir={{ s3fs_temp_dir }}/{{ s3fs_version }}
...
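The example play referenced in the role's comments is collapsed in this diff. A hypothetical minimal consumer of the role, with every variable and host-group name invented for illustration, might look like:

    # illustrative only; not the collapsed example from the repo
    - name: mount course data buckets
      hosts: app_servers
      sudo: yes
      vars:
        my_role_s3fs_mounts:
          - { bucket: my-data-bucket, mount_point: /mnt/data, owner: app, group: app, mode: "0755" }
      roles:
        - s3fs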
playbooks/roles/shibboleth/handlers/main.yml
  ---
- - name: shibboleth | restart shibd
+ - name: restart shibd
    service: name=shibd state=restarted
playbooks/roles/shibboleth/tasks/main.yml
  #Install shibboleth
  ---
- - name: shibboleth | Installs shib and dependencies from apt
+ - name: Installs shib and dependencies from apt
    apt: pkg={{item}} install_recommends=no state=present update_cache=yes
    with_items:
      - shibboleth-sp2-schemas
...
@@ -9,46 +9,46 @@
      - libshibsp-doc
      - libapache2-mod-shib2
      - opensaml2-tools
-   notify: shibboleth | restart shibd
+   notify: restart shibd
    tags:
      - shib
      - install

- - name: shibboleth | Creates /etc/shibboleth/metadata directory
+ - name: Creates /etc/shibboleth/metadata directory
    file: path=/etc/shibboleth/metadata state=directory mode=2774 group=_shibd owner=_shibd
    tags:
      - shib
      - install

- - name: shibboleth | Downloads metadata into metadata directory as backup
+ - name: Downloads metadata into metadata directory as backup
    get_url: url=https://idp.stanford.edu/Stanford-metadata.xml dest=/etc/shibboleth/metadata/idp-metadata.xml mode=0640 group=_shibd owner=_shibd
    tags:
      - shib
      - install

- - name: shibboleth | writes out key and pem file
+ - name: writes out key and pem file
    template: src=sp.{{item}}.j2 dest=/etc/shibboleth/sp.{{item}} group=_shibd owner=_shibd mode=0600
    with_items:
      - key
      - pem
-   notify: shibboleth | restart shibd
+   notify: restart shibd
    tags:
      - shib
      - install

- - name: shibboleth | writes out configuration files
+ - name: writes out configuration files
    template: src={{item}}.j2 dest=/etc/shibboleth/{{item}} group=_shibd owner=_shibd mode=0644
    with_items:
      - attribute-map.xml
      - shibboleth2.xml
-   notify: shibboleth | restart shibd
+   notify: restart shibd
    tags:
      - shib
      - install

- - name: shibboleth | enables shib
+ - name: enables shib
    command: a2enmod shib2
-   notify: shibboleth | restart shibd
+   notify: restart shibd
    tags:
      - shib
      - install
...
playbooks/roles/splunkforwarder/handlers/main.yml
...
@@ -16,5 +16,5 @@
  #
  # Restart Splunk
- - name: splunkforwarder | restart splunkforwarder
+ - name: restart splunkforwarder
    service: name=splunk state=restarted
playbooks/roles/splunkforwarder/tasks/main.yml
...
@@ -22,83 +22,83 @@
  #
  # Install Splunk Forwarder
- - name: splunkforwarder | install splunkforwarder specific system packages
+ - name: install splunkforwarder specific system packages
    apt: pkg={{','.join(splunk_debian_pkgs)}} state=present
    tags:
      - splunk
      - install
      - update

- - name: splunkforwarder | download the splunk deb
+ - name: download the splunk deb
    get_url: >
      dest="/tmp/{{SPLUNKFORWARDER_DEB}}"
      url="{{SPLUNKFORWARDER_PACKAGE_LOCATION}}{{SPLUNKFORWARDER_DEB}}"
    register: download_deb

- - name: splunkforwarder | install splunk forwarder
+ - name: install splunk forwarder
    shell: gdebi -nq /tmp/{{SPLUNKFORWARDER_DEB}}
    when: download_deb.changed

  # Create splunk user
- - name: splunkforwarder | create splunk user
+ - name: create splunk user
    user: name=splunk createhome=no state=present append=yes groups=syslog
    when: download_deb.changed

  # Need to start splunk manually so that it can create various files
  # and directories that aren't created till the first run and are needed
  # to run some of the below commands.
- - name: splunkforwarder | start splunk manually
+ - name: start splunk manually
    shell: >
      {{splunkforwarder_output_dir}}/bin/splunk start --accept-license --answer-yes --no-prompt
      creates={{splunkforwarder_output_dir}}/var/lib/splunk
    when: download_deb.changed
    register: started_manually

- - name: splunkforwarder | stop splunk manually
+ - name: stop splunk manually
    shell: >
      {{splunkforwarder_output_dir}}/bin/splunk stop --accept-license --answer-yes --no-prompt
    when: download_deb.changed and started_manually.changed

- - name: splunkforwarder | create boot script
+ - name: create boot script
    shell: >
      {{splunkforwarder_output_dir}}/bin/splunk enable boot-start -user splunk --accept-license --answer-yes --no-prompt
      creates=/etc/init.d/splunk
    register: create_boot_script
    when: download_deb.changed
-   notify: splunkforwarder | restart splunkforwarder
+   notify: restart splunkforwarder

  # Update credentials
- - name: splunkforwarder | update admin pasword
+ - name: update admin pasword
    shell: "{{splunkforwarder_output_dir}}/bin/splunk edit user admin -password {{SPLUNKFORWARDER_PASSWORD}} -auth admin:changeme --accept-license --answer-yes --no-prompt"
    when: download_deb.changed
-   notify: splunkforwarder | restart splunkforwarder
+   notify: restart splunkforwarder

- - name: splunkforwarder | add chkconfig to init script
+ - name: add chkconfig to init script
    shell: 'sed -i -e "s/\/bin\/sh/\/bin\/sh\n# chkconfig: 235 98 55/" /etc/init.d/splunk'
    when: download_deb.changed and create_boot_script.changed
-   notify: splunkforwarder | restart splunkforwarder
+   notify: restart splunkforwarder

  # Ensure permissions on splunk content
- - name: splunkforwarder | ensure splunk forder permissions
+ - name: ensure splunk forder permissions
    file: path={{splunkforwarder_output_dir}} state=directory recurse=yes owner=splunk group=splunk
    when: download_deb.changed
-   notify: splunkforwarder | restart splunkforwarder
+   notify: restart splunkforwarder

  # Drop template files.
- - name: splunkforwarder | drop input configuration
+ - name: drop input configuration
    template:
      src=opt/splunkforwarder/etc/system/local/inputs.conf.j2
      dest=/opt/splunkforwarder/etc/system/local/inputs.conf
      owner=splunk
      group=splunk
      mode=644
-   notify: splunkforwarder | restart splunkforwarder
+   notify: restart splunkforwarder

- - name: splunkforwarder | create outputs config file
+ - name: create outputs config file
    template:
      src=opt/splunkforwarder/etc/system/local/outputs.conf.j2
      dest=/opt/splunkforwarder/etc/system/local/outputs.conf
      owner=splunk
      group=splunk
      mode=644
-   notify: splunkforwarder | restart splunkforwarder
+   notify: restart splunkforwarder
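The whole splunkforwarder install chain hangs off a single `register: download_deb`: user creation, the first manual start/stop, the boot script, and the password change run only on a play that actually fetched a new deb (`when: download_deb.changed`). A minimal sketch of that chained-change pattern, with illustrative names and URLs:

    - name: download installer
      get_url: url=http://example.com/pkg.deb dest=/tmp/pkg.deb
      register: download_pkg

    - name: run one-time setup only when a new package arrived
      shell: gdebi -nq /tmp/pkg.deb
      when: download_pkg.changed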
playbooks/roles/supervisor/tasks/main.yml
...
@@ -50,19 +50,19 @@
  # supervisor_service: upstart-service-name
  #
  ---
- - name: supervisor | create application user
+ - name: create application user
    user: >
      name="{{ supervisor_user }}"
      createhome=no
      shell=/bin/false

- - name: supervisor | create supervisor service user
+ - name: create supervisor service user
    user: >
      name="{{ supervisor_service_user }}"
      createhome=no
      shell=/bin/false

- - name: supervisor | create supervisor directories
+ - name: create supervisor directories
    file: >
      name={{ item }}
      state=directory
...
@@ -73,7 +73,7 @@
      - "{{ supervisor_venv_dir }}"
      - "{{ supervisor_cfg_dir }}"

- - name: supervisor | create supervisor directories
+ - name: create supervisor directories
    file: >
      name={{ item }}
      state=directory
...
@@ -84,29 +84,29 @@
      - "{{ supervisor_log_dir }}"

- - name: supervisor | install supervisor in its venv
+ - name: install supervisor in its venv
    pip: name=supervisor virtualenv="{{supervisor_venv_dir}}" state=present
    sudo_user: "{{ supervisor_user }}"

- - name: supervisor | create supervisor upstart job
+ - name: create supervisor upstart job
    template: >
      src=supervisor-upstart.conf.j2 dest=/etc/init/{{ supervisor_service }}.conf
      owner=root group=root

- - name: supervisor | create supervisor master config
+ - name: create supervisor master config
    template: >
      src=supervisord.conf.j2 dest={{ supervisor_cfg }}
      owner={{ supervisor_user }} group={{ supervisor_service_user }}
      mode=0644

- - name: supervisor | create a symlink for supervisortctl
+ - name: create a symlink for supervisortctl
    file: >
      src={{ supervisor_ctl }}
      dest={{ COMMON_BIN_DIR }}/{{ supervisor_ctl|basename }}
      state=link
    when: supervisor_service == "supervisor"

- - name: supervisor | create a symlink for supervisor cfg
+ - name: create a symlink for supervisor cfg
    file: >
      src={{ item }}
      dest={{ COMMON_CFG_DIR }}/{{ item|basename }}
...
@@ -116,7 +116,7 @@
      - "{{ supervisor_cfg }}"
      - "{{ supervisor_cfg_dir }}"

- - name: supervisor | start supervisor
+ - name: start supervisor
    service: >
      name={{supervisor_service}}
      state=started
...
@@ -124,7 +124,7 @@
  # calling update on supervisor too soon after it
  # starts will result in an error.
- - name: supervisor | wait for web port to be available
+ - name: wait for web port to be available
    wait_for: port={{ supervisor_http_bind_port }} timeout=5
    when: start_supervisor.changed
...
@@ -134,7 +134,7 @@
  # we don't use notifications for supervisor because
  # they don't work well with parameterized roles.
  # See https://github.com/ansible/ansible/issues/4853
- - name: supervisor | update supervisor configuration
+ - name: update supervisor configuration
    shell: "{{ supervisor_ctl }} -c {{ supervisor_cfg }} update"
    register: supervisor_update
    changed_when: supervisor_update.stdout != ""
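Because handler notifications interact badly with parameterized roles (the linked ansible/ansible#4853), this role runs `supervisorctl update` unconditionally and derives changed-state from its output: `supervisorctl update` prints one line per program group it added, removed, or restarted, and prints nothing when the config is already in sync. The same idempotent-shell pattern in isolation:

    - name: apply any pending supervisor config changes
      shell: "{{ supervisor_ctl }} -c {{ supervisor_cfg }} update"
      register: supervisor_update
      # silent output means nothing changed, so report ok instead of changed
      changed_when: supervisor_update.stdout != ""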
playbooks/roles/xqueue/handlers/main.yml
- - name: xqueue | restart xqueue
+ - name: restart xqueue
    supervisorctl_local: >
      name={{ item }}
      supervisorctl_path={{ supervisor_ctl }}
...
playbooks/roles/xqueue/tasks/deploy.yml
- - name: "xqueue | writing supervisor scripts - xqueue, xqueue consumer"
+ - name: "writing supervisor scripts - xqueue, xqueue consumer"
    template: >
      src={{ item }}.conf.j2 dest={{ supervisor_cfg_dir }}/{{ item }}.conf
      owner={{ supervisor_user }} group={{ common_web_user }} mode=0644
    with_items: ['xqueue', 'xqueue_consumer']

- - name: xqueue | create xqueue application config
+ - name: create xqueue application config
    template: src=xqueue.env.json.j2 dest={{ xqueue_app_dir }}/xqueue.env.json mode=0644
    sudo_user: "{{ xqueue_user }}"
    notify:
-     - xqueue | restart xqueue
+     - restart xqueue

- - name: xqueue | create xqueue auth file
+ - name: create xqueue auth file
    template: src=xqueue.auth.json.j2 dest={{ xqueue_app_dir }}/xqueue.auth.json mode=0644
    sudo_user: "{{ xqueue_user }}"
    notify:
-     - xqueue | restart xqueue
+     - restart xqueue

  # Do A Checkout
- - name: xqueue | git checkout xqueue repo into xqueue_code_dir
+ - name: git checkout xqueue repo into xqueue_code_dir
    git: dest={{ xqueue_code_dir }} repo={{ xqueue_source_repo }} version={{ xqueue_version }}
    sudo_user: "{{ xqueue_user }}"
    notify:
-     - xqueue | restart xqueue
+     - restart xqueue

  # Install the python pre requirements into {{ xqueue_venv_dir }}
- - name: xqueue | install python pre-requirements
+ - name: install python pre-requirements
    pip: requirements="{{ xqueue_pre_requirements_file }}" virtualenv="{{ xqueue_venv_dir }}" state=present
    sudo_user: "{{ xqueue_user }}"
    notify:
-     - xqueue | restart xqueue
+     - restart xqueue

  # Install the python post requirements into {{ xqueue_venv_dir }}
- - name: xqueue | install python post-requirements
+ - name: install python post-requirements
    pip: requirements="{{ xqueue_post_requirements_file }}" virtualenv="{{ xqueue_venv_dir }}" state=present
    sudo_user: "{{ xqueue_user }}"
    notify:
-     - xqueue | restart xqueue
+     - restart xqueue

- - name: xqueue | syncdb and migrate
+ - name: syncdb and migrate
    shell: >
      SERVICE_VARIANT=xqueue {{ xqueue_venv_bin }}/django-admin.py syncdb --migrate --noinput --settings=xqueue.aws_settings --pythonpath={{ xqueue_code_dir }}
    when: migrate_db is defined and migrate_db|lower == "yes"
    sudo_user: "{{ xqueue_user }}"
    notify:
-     - xqueue | restart xqueue
+     - restart xqueue

- - name: xqueue | create users
+ - name: create users
    shell: >
      SERVICE_VARIANT=xqueue {{ xqueue_venv_bin }}/django-admin.py update_users --settings=xqueue.aws_settings --pythonpath={{ xqueue_code_dir }}
    sudo_user: "{{ xqueue_user }}"
    notify:
-     - xqueue | restart xqueue
+     - restart xqueue

  # call supervisorctl update. this reloads
  # the supervisorctl config and restarts
  # the services if any of the configurations
  # have changed.
  #
- - name: xqueue | update supervisor configuration
+ - name: update supervisor configuration
    shell: "{{ supervisor_ctl }} -c {{ supervisor_cfg }} update"
    register: supervisor_update
    changed_when: supervisor_update.stdout != ""
    when: start_services

- - name: xqueue | ensure xqueue, consumer is running
+ - name: ensure xqueue, consumer is running
    supervisorctl_local: >
      name={{ item }}
      supervisorctl_path={{ supervisor_ctl }}
...
playbooks/roles/xqueue/tasks/main.yml
...
@@ -6,33 +6,33 @@
  #
  #
- - name: xqueue | create application user
+ - name: create application user
    user: >
      name="{{ xqueue_user }}"
      home="{{ xqueue_app_dir }}"
      createhome=no
      shell=/bin/false
    notify:
-     - xqueue | restart xqueue
+     - restart xqueue

- - name: xqueue | create xqueue app and venv dir
+ - name: create xqueue app and venv dir
    file: >
      path="{{ item }}"
      state=directory
      owner="{{ xqueue_user }}"
      group="{{ common_web_group }}"
    notify:
-     - xqueue | restart xqueue
+     - restart xqueue
    with_items:
      - "{{ xqueue_app_dir }}"
      - "{{ xqueue_venvs_dir }}"

- - name: xqueue | install a bunch of system packages on which xqueue relies
+ - name: install a bunch of system packages on which xqueue relies
    apt: pkg={{','.join(xqueue_debian_pkgs)}} state=present
    notify:
-     - xqueue | restart xqueue
+     - restart xqueue

- - name: xqueue | create xqueue db
+ - name: create xqueue db
    mysql_db: >
      name={{xqueue_auth_config.DATABASES.default.NAME}}
      login_host={{xqueue_auth_config.DATABASES.default.HOST}}
...
@@ -41,7 +41,7 @@
      state=present
      encoding=utf8
    notify:
-     - xqueue | restart xqueue
+     - restart xqueue
    when: xqueue_create_db is defined and xqueue_create_db|lower == "yes"

  - include: deploy.yml tags=deploy
...
playbooks/roles/xserver/handlers/main.yml
...
@@ -14,7 +14,7 @@
  # Overview:
  #
- - name: xserver | restart xserver
+ - name: restart xserver
    supervisorctl_local: >
      name=xserver
      supervisorctl_path={{ supervisor_ctl }}
...
playbooks/roles/xserver/tasks/deploy.yml
- - name: "xserver | writing supervisor script"
+ - name: "writing supervisor script"
    template: >
      src=xserver.conf.j2 dest={{ supervisor_cfg_dir }}/xserver.conf
      owner={{ supervisor_user }} group={{ common_web_user }} mode=0644

- - name: xserver | checkout code
+ - name: checkout code
    git: dest={{xserver_code_dir}} repo={{xserver_source_repo}} version={{xserver_version}}
    sudo_user: "{{ xserver_user }}"
-   notify: xserver | restart xserver
+   notify: restart xserver

- - name: xserver | install requirements
+ - name: install requirements
    pip: requirements="{{xserver_requirements_file}}" virtualenv="{{ xserver_venv_dir }}" state=present
    sudo_user: "{{ xserver_user }}"
-   notify: xserver | restart xserver
+   notify: restart xserver

- - name: xserver | install sandbox requirements
+ - name: install sandbox requirements
    pip: requirements="{{xserver_requirements_file}}" virtualenv="{{xserver_venv_sandbox_dir}}" state=present
    sudo_user: "{{ xserver_user }}"
-   notify: xserver | restart xserver
+   notify: restart xserver

- - name: xserver | create xserver application config
+ - name: create xserver application config
    template: src=xserver.env.json.j2 dest={{ xserver_app_dir }}/env.json
    sudo_user: "{{ xserver_user }}"
-   notify: xserver | restart xserver
+   notify: restart xserver

- - name: xserver | install read-only ssh key for the content repo that is required for grading
+ - name: install read-only ssh key for the content repo that is required for grading
    copy: >
      src={{ XSERVER_LOCAL_GIT_IDENTITY }} dest={{ xserver_git_identity }}
      owner={{ xserver_user }} group={{ xserver_user }} mode=0600
-   notify: xserver | restart xserver
+   notify: restart xserver

- - name: xserver | upload ssh script
+ - name: upload ssh script
    template: >
      src=git_ssh.sh.j2 dest=/tmp/git_ssh.sh
      owner={{ xserver_user }} mode=750
-   notify: xserver | restart xserver
+   notify: restart xserver

- - name: xserver | checkout grader code
+ - name: checkout grader code
    git: dest={{ XSERVER_GRADER_DIR }} repo={{ XSERVER_GRADER_SOURCE }} version={{ xserver_grader_version }}
    environment:
      GIT_SSH: /tmp/git_ssh.sh
-   notify: xserver | restart xserver
+   notify: restart xserver
    sudo_user: "{{ xserver_user }}"

- - name: xserver | remove read-only ssh key for the content repo
+ - name: remove read-only ssh key for the content repo
    file: path={{ xserver_git_identity }} state=absent
-   notify: xserver | restart xserver
+   notify: restart xserver

  # call supervisorctl update. this reloads
  # the supervisorctl config and restarts
  # the services if any of the configurations
  # have changed.
  #
- - name: xserver | update supervisor configuration
+ - name: update supervisor configuration
    shell: "{{ supervisor_ctl }} -c {{ supervisor_cfg }} update"
    register: supervisor_update
    when: start_services
    changed_when: supervisor_update.stdout != ""

- - name: xserver | ensure xserver is started
+ - name: ensure xserver is started
    supervisorctl_local: >
      name=xserver
      supervisorctl_path={{ supervisor_ctl }}
...
@@ -65,7 +65,7 @@
      state=started
    when: start_services

- - name: xserver | create a symlink for venv python
+ - name: create a symlink for venv python
    file: >
      src="{{ xserver_venv_bin }}/{{ item }}"
      dest={{ COMMON_BIN_DIR }}/{{ item }}.xserver
...
@@ -74,5 +74,5 @@
      - python
      - pip

- - name: xserver | enforce app-armor rules
+ - name: enforce app-armor rules
    command: aa-enforce {{ xserver_venv_sandbox_dir }}
playbooks/roles/xserver/tasks/main.yml
...
@@ -3,28 +3,28 @@
  # access to the edX 6.00x repo which is not public
  ---
- - name: xserver | checking for grader info
+ - name: checking for grader info
    fail: msg="You must define XSERVER_GRADER_DIR and XSERVER_GRADER_SOURCE to use this role!"
    when: not XSERVER_GRADER_DIR or not XSERVER_GRADER_SOURCE

- - name: xserver | checking for git identity
+ - name: checking for git identity
    fail: msg="You must define XSERVER_LOCAL_GIT_IDENTITY to use this role"
    when: not XSERVER_LOCAL_GIT_IDENTITY

- - name: xserver | create application user
+ - name: create application user
    user: >
      name="{{ xserver_user }}"
      home="{{ xserver_app_dir }}"
      createhome=no
      shell=/bin/false

- - name: xserver | create application sandbox user
+ - name: create application sandbox user
    user: >
      name="{{ xserver_sandbox_user }}"
      createhome=no
      shell=/bin/false

- - name: xserver | create xserver app and data dirs
+ - name: create xserver app and data dirs
    file: >
      path="{{ item }}"
      state=directory
...
@@ -36,27 +36,27 @@
      - "{{ xserver_data_dir }}"
      - "{{ xserver_data_dir }}/data"

- - name: xserver | create sandbox sudoers file
+ - name: create sandbox sudoers file
    template: src=99-sandbox.j2 dest=/etc/sudoers.d/99-sandbox owner=root group=root mode=0440

  # Make sure this line is in the common-session file.
- - name: xserver | ensure pam-limits module is loaded
+ - name: ensure pam-limits module is loaded
    lineinfile:
      dest=/etc/pam.d/common-session
      regexp="session required pam_limits.so"
      line="session required pam_limits.so"

- - name: xserver | set sandbox limits
+ - name: set sandbox limits
    template: src={{ item }} dest=/etc/security/limits.d/sandbox.conf
    first_available_file:
      - "{{ secure_dir }}/sandbox.conf.j2"
      - "sandbox.conf.j2"

- - name: xserver | install system dependencies of xserver
+ - name: install system dependencies of xserver
    apt: pkg={{ item }} state=present
    with_items: xserver_debian_pkgs

- - name: xserver | load python-sandbox apparmor profile
+ - name: load python-sandbox apparmor profile
    template: src={{ item }} dest=/etc/apparmor.d/edx_apparmor_sandbox
    first_available_file:
      - "{{ secure_dir }}/files/edx_apparmor_sandbox.j2"
...
requirements.txt
- ansible==1.3.2
- Jinja2==2.7.2
+ Jinja2==2.7.1
+ ansible==1.4.4
+ PyYAML==3.10
  MarkupSafe==0.18
  argparse==1.2.1
- boto==2.10.0
+ boto==2.23.0
  ecdsa==0.10
  paramiko==1.12.0
  pycrypto==2.6.1
...
util/jenkins/ansible-provision.sh
...
@@ -21,6 +21,16 @@
  export PYTHONUNBUFFERED=1
  export BOTO_CONFIG=/var/lib/jenkins/${aws_account}.boto

+ if [[ -n $WORKSPACE ]]; then
+     # setup a virtualenv in jenkins
+     if [[ ! -d ".venv" ]]; then
+         virtualenv .venv
+     fi
+     source .venv/bin/activate
+     pip install -r requirements.txt
+ fi

  if [[ -z $WORKSPACE ]]; then
      dir=$(dirname $0)
      source "$dir/ascii-convert.sh"
...
@@ -146,7 +156,12 @@ security_group: $security_group
  ami: $ami
  region: $region
  zone: $zone
- instance_tags: '{"environment": "$environment", "github_username": "$github_username", "Name": "$name_tag", "source": "jenkins", "owner": "$BUILD_USER"}'
+ instance_tags:
+   environment: $environment
+   github_username: $github_username
+   Name: $name_tag
+   source: jenkins
+   owner: $BUILD_USER
  root_ebs_size: $root_ebs_size
  name_tag: $name_tag
  gh_users:
...
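The provisioning script now writes `instance_tags` into the generated extra-vars file as a native YAML mapping instead of a quoted JSON string. Both forms load as the same dictionary, since YAML accepts inline JSON, but the mapping form avoids a fragile layer of shell-inside-JSON quoting when the interpolated values contain spaces or quotes. The two equivalent forms, with illustrative values:

    instance_tags: '{"environment": "stage", "source": "jenkins"}'

    instance_tags:
      environment: stage
      source: jenkins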