Commit 6c7c152a by Edward Zarecor

Merge branch 'master' into e0d/make-cache-size-configurable

parents e97dab46 000af8ca
......@@ -24,3 +24,6 @@ vagrant_ansible_inventory_default
## Make artifacts
.build
playbooks/edx-east/travis-test.yml
## Local virtualenv
/venv
- Role: rabbitmq
- Removed the RABBITMQ_CLUSTERED var and related tooling. The goal of the var was to make it possible to set up a cluster in the AWS environment without knowing all of the cluster's IPs beforehand. It relied on the `hostvars` Ansible variable, which no longer works correctly in 1.9. This may get fixed in the future, but for now the "magic" setup doesn't work.
- Changed `rabbitmq_clustered_hosts` to `RABBITMQ_CLUSTERED_HOSTS`.
- Role: edxapp
- Removed SUBDOMAIN_BRANDING and SUBDOMAIN_COURSE_LISTINGS variables
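The rabbitmq entries above rename a lowercase role var to an uppercase overridable one; a minimal sketch of how the renamed variable might be set in an overrides file (node names are placeholders, not values from this repo):

```yaml
# Hypothetical override illustrating the renamed variable; the
# rabbit@<hostname> entries are placeholders for real cluster members.
RABBITMQ_CLUSTERED_HOSTS:
  - "rabbit@rabbitmq-node-1"
  - "rabbit@rabbitmq-node-2"
```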
......
......@@ -6,6 +6,8 @@ The goal of the edx/configuration project is to provide a simple, but
flexible, way for anyone to stand up an instance of Open edX that is
fully configured and ready-to-go.
Before getting started, please look at the [Open edX Deployment Options](https://open.edx.org/deployment-options) to see which method for deploying Open edX is right for you.
Building the platform takes place in two phases:
* Infrastructure provisioning
......@@ -17,6 +19,9 @@ and are free to use one, but not the other. The provisioning phase
stands-up the required resources and tags them with role identifiers
so that the configuration tool can come in and complete the job.
__Note__: The Cloudformation templates used for infrastructure provisioning
are no longer maintained. We are working to move to a more modern and flexible tool.
The reference platform is provisioned using an Amazon
[CloudFormation](http://aws.amazon.com/cloudformation/) template.
When the stack has been fully created you will have a new AWS Virtual
......@@ -28,11 +33,9 @@ The configuration phase is managed by [Ansible](http://ansible.com/).
We have provided a number of playbooks that will configure each of
the edX services.
This project is a rewrite of the current edX provisioning and
configuration tools. We will be migrating features to this project
over time, so expect frequent changes.
__Important__:
The edX configuration scripts need to be run as root on your servers and will make changes to service configurations including, but not limited to, sshd, dhclient, sudo, apparmor and syslogd. Our scripts are made available as we use them and they implement our best practices. We strongly recommend that you review everything that these scripts will do before running them against your servers. We also recommend against running them against servers that are hosting other applications. No warranty is expressed or implied.
For more information, including installation instructions, please see the [Configuration Wiki](https://github.com/edx/configuration/wiki).
For more information, including installation instructions, please see the [OpenEdX Wiki](https://openedx.atlassian.net/wiki/display/OpenOPS/Open+edX+Operations+Home).
For info on any large recent changes, please see the [change log](https://github.com/edx/configuration/blob/master/CHANGELOG.md).
......@@ -26,7 +26,7 @@ pkg: docker.pkg
clean:
rm -rf .build
docker.test.shard: $(foreach image,$(shell echo $(images) | tr ' ' '\n' | sed -n '$(SHARD)~$(SHARDS)p'),$(docker_test)$(image))
docker.test.shard: $(foreach image,$(shell echo $(images) | tr ' ' '\n' | awk 'NR%$(SHARDS)==$(SHARD)'),$(docker_test)$(image))
docker.build: $(foreach image,$(images),$(docker_build)$(image))
docker.test: $(foreach image,$(images),$(docker_test)$(image))
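The awk-based shard filter in `docker.test.shard` can be exercised on its own; a sketch (image names are placeholders) showing that shard 1 of 3 keeps every third entry starting from the first:

```shell
# Mimic the Makefile's shard selection: keep list entries whose
# 1-based position modulo SHARDS equals SHARD.
images="a b c d e f g"
SHARDS=3
SHARD=1
shard=$(echo "$images" | tr ' ' '\n' | awk -v m="$SHARDS" -v s="$SHARD" 'NR%m==s')
echo $shard
```

With these inputs the filter keeps entries 1, 4, and 7 (`a d g`).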
......@@ -52,8 +52,8 @@ $(docker_push)%: $(docker_pkg)%
.build/%/Dockerfile.d: docker/build/%/Dockerfile Makefile
@mkdir -p .build/$*
$(eval FROM=$(shell grep "FROM" $< | sed --regexp-extended "s/FROM //" | sed --regexp-extended "s/:/@/g"))
$(eval EDXOPS_FROM=$(shell echo "$(FROM)" | sed --regexp-extended "s#edxops/([^@]+)(@.*)?#\1#"))
$(eval FROM=$(shell grep "^\s*FROM" $< | sed -E "s/FROM //" | sed -E "s/:/@/g"))
$(eval EDXOPS_FROM=$(shell echo "$(FROM)" | sed -E "s#edxops/([^@]+)(@.*)?#\1#"))
@echo "$(docker_build)$*: $(docker_pull)$(FROM)" > $@
@if [ "$(EDXOPS_FROM)" != "$(FROM)" ]; then \
echo "$(docker_test)$*: $(docker_test)$(EDXOPS_FROM:@%=)" >> $@; \
......@@ -65,10 +65,10 @@ $(docker_push)%: $(docker_pkg)%
.build/%/Dockerfile.test: docker/build/%/Dockerfile Makefile
@mkdir -p .build/$*
@sed --regexp-extended "s#FROM edxops/([^:]+)(:\S*)?#FROM \1:test#" $< > $@
@sed -E "s#FROM edxops/([^:]+)(:\S*)?#FROM \1:test#" $< > $@
.build/%/Dockerfile.pkg: docker/build/%/Dockerfile Makefile
@mkdir -p .build/$*
@sed --regexp-extended "s#FROM edxops/([^:]+)(:\S*)?#FROM \1:test#" $< > $@
@sed -E "s#FROM edxops/([^:]+)(:\S*)?#FROM \1:test#" $< > $@
-include $(foreach image,$(images),.build/$(image)/Dockerfile.d)
......@@ -4,7 +4,7 @@
Docker support for edX services is volatile and experimental.
We welcome interested testers and contributors. If you are
interested in paticipating, please join us on Slack at
interested in participating, please join us on Slack at
https://openedx.slack.com/messages/docker.
We do not and may never run these images in production.
......
# Build using: docker build -f Dockerfile.gocd-agent -t gocd-agent .
# FROM edxops/precise-common:latest
FROM gocd/gocd-agent:16.2.1
LABEL version="0.01" \
description="This custom go-agent docker file installs additional requirements for the edx pipeline"
RUN apt-get update && apt-get install -y -q \
python \
python-dev \
python-distribute \
python-pip
# TODO: replace this with a pip install command so we can version this properly
RUN git clone https://github.com/edx/tubular.git /opt/tubular
RUN pip install -r /opt/tubular/requirements.txt
RUN cd /opt/tubular && python setup.py install
\ No newline at end of file
## Usage
Start the container with this:
```docker run -ti -e GO_SERVER=your.go.server.ip_or_host gocd/gocd-agent```
If you need to start a few GoCD agents together, you can of course use the shell to do that. Start a few agents in the background, like this:
```for each in 1 2 3; do docker run -d --link angry_feynman:go-server gocd/gocd-agent; done```
## Getting into the container
Sometimes you need a shell inside the container (to create test repositories, etc.). Docker provides an easy way to do that:
```docker exec -i -t CONTAINER-ID /bin/bash```
To check the agent logs, you can do this:
```docker exec -i -t CONTAINER-ID tail -f /var/log/go-agent/go-agent.log```
## Agent Configuration
The go-agent expects its configuration to be found at ```/var/lib/go-agent/config/```. Sharing the
configuration between containers is done by mounting a volume at this location that contains any configuration files
necessary.
**Example docker run command:**
```docker run -ti -v /tmp/go-agent/conf:/var/lib/go-agent/config -e GO_SERVER=gocd.sandbox.edx.org 718d75c467c0 bash```
[How to set up auto registration for remote agents](https://docs.go.cd/current/advanced_usage/agent_auto_register.html)
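Per the auto-registration docs linked above, agents typically enroll automatically when an `autoregister.properties` file is present in the mounted config directory; a hedged sketch, where the key and environment values are placeholders matching this repo's own defaults:

```shell
# Pre-seed the GoCD agent config volume with an auto-register key
# so new agents enroll with the server without manual approval.
mkdir -p /tmp/go-agent/conf
cat > /tmp/go-agent/conf/autoregister.properties <<'EOF'
agent.auto.register.key=dev-only-override-this-key
agent.auto.register.environments=sandbox
EOF
```

The directory can then be mounted at `/var/lib/go-agent/config` as in the docker run example above.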
- name: Configure instance(s)
hosts: all
sudo: True
roles:
- jenkins_analytics
#
# Requires that MySQL-python be installed for the system python.
# This play will create databases and users for an application.
# It can be run like so:
#
# ansible-playbook -i 'localhost,' create_analytics_reports_dbs.yml -e@./db.yml
# ansible-playbook -c local -i 'localhost,' create_dbs_and_users.yml -e@./db.yml
#
# where the content of db.yml contains the following dictionaries
#
......@@ -50,7 +49,6 @@
# to system python.
- name: install python mysqldb module
pip: name={{item}} state=present
sudo: yes
with_items:
- MySQL-python
......
# Usage: ansible -i localhost, edx_service.yml -e@<PATH TO>/edx-secure/cloud_migrations/edx_service.yml -e@<PATH TO>/<DEPLOYMENT>-secure/cloud_migrations/vpcs/<ENVIRONMENT>-<DEPLOYMENT>.yml -e@<PATH TO>/edx-secure/cloud_migrations/idas/<CLUSTER>.yml
# Usage: ansible-playbook -i localhost, edx_service.yml -e@<PATH TO>/edx-secure/cloud_migrations/edx_service.yml -e@<PATH TO>/<DEPLOYMENT>-secure/cloud_migrations/vpcs/<ENVIRONMENT>-<DEPLOYMENT>.yml -e@<PATH TO>/edx-secure/cloud_migrations/idas/<CLUSTER>.yml
---
- name: Build application artifacts
......@@ -175,6 +175,7 @@
- name: Setup ELB DNS
route53:
profile: "{{ profile }}"
command: "create"
zone: "{{ dns_zone_name }}"
record: "{{ item.elb.name }}.{{ dns_zone_name }}"
......
......@@ -14,7 +14,7 @@
- name: stop certs service
service: name="certificates" state="stopped"
- name: checkout code
git: >
git_2_0_1: >
repo="{{ repo_url }}"
dest="{{ repo_path }}"
version="{{ certificates_version }}"
......
......@@ -34,6 +34,7 @@
- edxlocal
- role: mongo
when: "'localhost' in EDXAPP_MONGO_HOSTS"
- { role: 'rabbitmq', rabbitmq_ip: '127.0.0.1' }
- { role: 'edxapp', celery_worker: True }
- edxapp
- notifier
......@@ -42,7 +43,6 @@
- edx_notes_api
- demo
- oauth_client_setup
- { role: 'rabbitmq', rabbitmq_ip: '127.0.0.1' }
- oraclejdk
- role: elasticsearch
when: "'localhost' in EDXAPP_ELASTIC_SEARCH_CONFIG|map(attribute='host')"
......
# ansible-playbook -i 'admin.edx.org,' ./hotg.yml -e@/path/to/ansible/vars/edx.yml -e@/path/to/secure/ansible/vars/edx_admin.yml
- name: Install go-agent-docker-server
hosts: all
sudo: True
gather_facts: True
roles:
- aws
- go-agent-docker-server
......@@ -6,5 +6,4 @@
gather_facts: True
roles:
- aws
- supervisor
- go-server
......@@ -99,6 +99,11 @@
#depends on no other vars
depends_on: True
- db_host: "{{ EDXAPP_MYSQL_CSMH_REPLICA_HOST }}"
db_name: "{{ EDXAPP_MYSQL_CSMH_DB_NAME }}"
script_name: csmh-mysql.sh
depends_on: True
- db_host: "{{ AD_HOC_REPORTING_XQUEUE_MYSQL_REPLICA_HOST }}"
db_name: "{{ XQUEUE_MYSQL_DB_NAME }}"
script_name: xqueue-mysql.sh
......
......@@ -15,7 +15,7 @@
notify: restart alton
- name: checkout the code
git: >
git_2_0_1: >
dest="{{ alton_code_dir }}" repo="{{ alton_source_repo }}"
version="{{ alton_version }}" accept_hostkey=yes
sudo_user: "{{ alton_user }}"
......
......@@ -66,8 +66,8 @@
- name: migrate
shell: >
chdir={{ analytics_api_code_dir }}
DB_MIGRATION_USER={{ COMMON_MYSQL_MIGRATE_USER }}
DB_MIGRATION_PASS={{ COMMON_MYSQL_MIGRATE_PASS }}
DB_MIGRATION_USER='{{ COMMON_MYSQL_MIGRATE_USER }}'
DB_MIGRATION_PASS='{{ COMMON_MYSQL_MIGRATE_PASS }}'
{{ analytics_api_home }}/venvs/{{ analytics_api_service_name }}/bin/python ./manage.py migrate --noinput
sudo_user: "{{ analytics_api_user }}"
environment: "{{ analytics_api_environment }}"
......
{
"connection_user": "hadoop",
"credentials_file_url": "/edx/etc/edx-analytics-pipeline/output.json",
"exporter_output_bucket": "",
"geolocation_data": "/var/tmp/geolocation-data.mmdb",
"hive_user": "hadoop",
"host": "localhost",
"identifier": "local-devstack",
"manifest_input_format": "org.edx.hadoop.input.ManifestTextInputFormat",
"oddjob_jar": "hdfs://localhost:9000/edx-analytics-pipeline/packages/edx-analytics-hadoop-util.jar",
"tasks_branch": "origin/HEAD",
"tasks_log_path": "/tmp/acceptance/",
"tasks_output_url": "hdfs://localhost:9000/acceptance-test-output/",
"tasks_repo": "/edx/app/analytics_pipeline/analytics_pipeline",
"vertica_creds_url": "",
"wheel_url": "https://edx-wheelhouse.s3-website-us-east-1.amazonaws.com/Ubuntu/precise"
}
......@@ -10,9 +10,9 @@
#
#
# Tasks for role analytics_pipeline
#
#
# Overview:
#
#
# Prepare the machine to run the edX Analytics Data Pipeline. The pipeline currently "installs itself"
# via an ansible playbook that is not included in the edx/configuration repo. However, in order to
# run the pipeline in a devstack environment, some configuration needs to be performed. In a production
......@@ -24,7 +24,7 @@
# hadoop_master: ensures hadoop services are installed
# hive: the pipeline makes extensive use of hive, so that needs to be installed as well
# sqoop: similarly to hive, the pipeline uses this tool extensively
#
#
# Example play:
#
# - name: Deploy all dependencies of edx-analytics-pipeline to the node
......@@ -83,7 +83,7 @@
- install:configuration
- name: util library source checked out
git: >
git_2_0_1: >
dest={{ analytics_pipeline_util_library.path }} repo={{ analytics_pipeline_util_library.repo }}
version={{ analytics_pipeline_util_library.version }}
tags:
......@@ -174,3 +174,22 @@
tags:
- install
- install:configuration
- name: store configuration for acceptance tests
copy: >
src=acceptance.json
dest=/var/tmp/acceptance.json
mode=644
tags:
- install
- install:configuration
- name: grant access to table storing test data in output database
mysql_user: >
user={{ ANALYTICS_PIPELINE_OUTPUT_DATABASE.username }}
password={{ ANALYTICS_PIPELINE_OUTPUT_DATABASE.password }}
priv=acceptance%.*:ALL
append_privs=yes
tags:
- install
- install:configuration
......@@ -42,7 +42,7 @@
{{ role_name|upper }}_VERSION: "master"
{{ role_name|upper }}_DJANGO_SETTINGS_MODULE: "{{ role_name }}.settings.production"
{{ role_name|upper }}_URL_ROOT: 'http://{{ role_name }}:18{{ port_suffix }}'
{{ role_name|upper }}_OAUTH_URL_ROOT: 'http://127.0.0.1:8000'
{{ role_name|upper }}_OAUTH_URL_ROOT: '{{ EDXAPP_LMS_ISSUER | default("http://127.0.0.1:8000/oauth2") }}'
{{ role_name|upper }}_SECRET_KEY: 'Your secret key here'
{{ role_name|upper }}_TIME_ZONE: 'UTC'
......@@ -63,7 +63,7 @@
SOCIAL_AUTH_EDX_OIDC_KEY: '{{ '{{' }} {{ role_name|upper }}_SOCIAL_AUTH_EDX_OIDC_KEY }}'
SOCIAL_AUTH_EDX_OIDC_SECRET: '{{ '{{' }} {{ role_name|upper }}_SOCIAL_AUTH_EDX_OIDC_SECRET }}'
SOCIAL_AUTH_EDX_OIDC_ID_TOKEN_DECRYPTION_KEY: '{{ '{{' }} {{ role_name|upper }}_SOCIAL_AUTH_EDX_OIDC_SECRET }}'
SOCIAL_AUTH_EDX_OIDC_URL_ROOT: '{{ '{{' }} {{ role_name|upper }}_OAUTH_URL_ROOT }}/oauth2'
SOCIAL_AUTH_EDX_OIDC_URL_ROOT: '{{ '{{' }} {{ role_name|upper }}_OAUTH_URL_ROOT }}'
SOCIAL_AUTH_REDIRECT_IS_HTTPS: '{{ '{{' }} {{ role_name|upper }}_SOCIAL_AUTH_REDIRECT_IS_HTTPS }}'
STATIC_ROOT: "{{ '{{' }} COMMON_DATA_DIR }}/{{ '{{' }} {{ role_name }}_service_name }}/staticfiles"
......
......@@ -16,13 +16,17 @@
# logs by security group.
# !! The buckets defined below MUST exist prior to enabling !!
# this feature and the instance IAM role must have write permissions
# to the buckets
# to the buckets, or you must specify the access and secret keys below.
AWS_S3_LOGS: false
# If there are any issues with the s3 sync an error
# log will be sent to the following address.
# This relies on your server being able to send mail
AWS_S3_LOGS_NOTIFY_EMAIL: dummy@example.com
AWS_S3_LOGS_FROM_EMAIL: dummy@example.com
# Credentials for S3 access in case the instance role doesn't have write
# permissions to S3
AWS_S3_LOGS_ACCESS_KEY_ID: ""
AWS_S3_LOGS_SECRET_KEY: ""
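A hedged sketch of overriding these defaults in a server-vars file when the instance role lacks S3 write access (all values are placeholders; real keys belong in a secure store, not source control):

```yaml
# Hypothetical override enabling S3 log sync with explicit credentials.
AWS_S3_LOGS: true
AWS_S3_LOGS_ACCESS_KEY_ID: "AKIA-EXAMPLE-KEY-ID"
AWS_S3_LOGS_SECRET_KEY: "example-secret-key"
AWS_S3_LOGS_NOTIFY_EMAIL: "ops@example.com"
AWS_S3_LOGS_FROM_EMAIL: "noreply@example.com"
```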
#
# vars are namespaced with the module name.
......@@ -50,7 +54,7 @@ aws_s3_sync_script: "{{ aws_dirs.home.path }}/send-logs-to-s3"
aws_s3_logfile: "{{ aws_dirs.logs.path }}/s3-log-sync.log"
aws_region: "us-east-1"
# default path to the aws binary
aws_s3cmd: "{{ COMMON_BIN_DIR }}/s3cmd"
aws_s3cmd: "/usr/local/bin/s3cmd"
aws_cmd: "/usr/local/bin/aws"
#
# OS packages
......@@ -63,7 +67,6 @@ aws_pip_pkgs:
- https://s3.amazonaws.com/cloudformation-examples/aws-cfn-bootstrap-latest.tar.gz
- awscli==1.4.2
- boto=="{{ common_boto_version }}"
- s3cmd==1.6.1
aws_redhat_pkgs: []
aws_s3cmd_version: s3cmd-1.5.0-beta1
aws_s3cmd_url: "http://files.edx.org/s3cmd/{{ aws_s3cmd_version }}.tar.gz"
......@@ -70,23 +70,6 @@
extra_args="-i {{ COMMON_PYPI_MIRROR_URL }}"
with_items: aws_pip_pkgs
- name: get s3cmd
get_url: >
url={{ aws_s3cmd_url }}
dest={{ aws_dirs.data.path }}/
- name: untar s3cmd
shell: >
tar xf {{ aws_dirs.data.path }}/{{ aws_s3cmd_version }}.tar.gz
creates={{ aws_dirs.data.path }}/{{ aws_s3cmd_version }}/s3cmd
chdir={{ aws_dirs.home.path }}
- name: create symlink for s3cmd
file: >
src={{ aws_dirs.home.path }}/{{ aws_s3cmd_version }}/s3cmd
dest={{ aws_s3cmd }}
state=link
- name: create s3 log sync script
template: >
dest={{ aws_s3_sync_script }}
......
......@@ -116,5 +116,11 @@ availability_zone=$(ec2metadata --availability-zone)
# region isn't available via the metadata service
region=${availability_zone:0:${{ lb }}#availability_zone{{ rb }} - 1}
{% if AWS_S3_LOGS_ACCESS_KEY_ID %}
auth_opts="--access_key {{ AWS_S3_LOGS_ACCESS_KEY_ID }} --secret_key {{ AWS_S3_LOGS_SECRET_KEY }}"
{% else %}
auth_opts=""
{% endif %}
s3_path="${2}/$sec_grp/"
$noop {{ aws_s3cmd }} --multipart-chunk-size-mb 5120 --disable-multipart sync $directory "s3://${bucket_path}/${sec_grp}/${instance_id}-${ip}/"
$noop {{ aws_s3cmd }} $auth_opts --multipart-chunk-size-mb 5120 --disable-multipart sync $directory "s3://${bucket_path}/${sec_grp}/${instance_id}-${ip}/"
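The region computation above uses bash substring slicing behind Jinja-escaped braces (this repo appears to use `{{ lb }}`/`{{ rb }}` to emit literal `{` and `}`); stripped of the templating, it reduces to:

```shell
# Derive the region by dropping the trailing zone letter from an
# availability zone, e.g. "us-east-1a" -> "us-east-1".
availability_zone="us-east-1a"
region=${availability_zone:0:${#availability_zone}-1}
echo "$region"
```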
......@@ -15,6 +15,13 @@
when: download_deb.changed
with_items: browser_s3_deb_pkgs
# Because the source location has been deprecated, we need to
# ensure it does not interfere with subsequent apt commands
- name: remove google chrome debian source list
file:
path: /etc/apt/sources.list.d/google-chrome.list
state: absent
- name: download ChromeDriver
get_url:
url={{ chromedriver_url }}
......
......@@ -41,7 +41,7 @@
when: CERTS_GIT_IDENTITY != "none"
- name: checkout certificates repo into {{ certs_code_dir }}
git: >
git_2_0_1: >
dest={{ certs_code_dir }} repo={{ CERTS_REPO }} version={{ certs_version }}
accept_hostkey=yes
sudo_user: "{{ certs_user }}"
......@@ -51,7 +51,7 @@
when: CERTS_GIT_IDENTITY != "none"
- name: checkout certificates repo into {{ certs_code_dir }}
git: >
git_2_0_1: >
dest={{ certs_code_dir }} repo={{ CERTS_REPO }} version={{ certs_version }}
accept_hostkey=yes
sudo_user: "{{ certs_user }}"
......
......@@ -46,8 +46,9 @@ CREDENTIALS_CACHES:
LOCATION: '{{ CREDENTIALS_MEMCACHE }}'
CREDENTIALS_DJANGO_SETTINGS_MODULE: "credentials.settings.production"
CREDENTIALS_URL_ROOT: 'http://credentials:18150'
CREDENTIALS_OAUTH_URL_ROOT: 'http://127.0.0.1:8000'
CREDENTIALS_DOMAIN: 'credentials'
CREDENTIALS_URL_ROOT: 'http://{{ CREDENTIALS_DOMAIN }}:18150'
CREDENTIALS_OAUTH_URL_ROOT: '{{ EDXAPP_LMS_ISSUER | default("http://127.0.0.1:8000/oauth2") }}'
CREDENTIALS_SECRET_KEY: 'SET-ME-TO-A-UNIQUE-LONG-RANDOM-STRING'
CREDENTIALS_TIME_ZONE: 'UTC'
......@@ -87,6 +88,9 @@ CREDENTIALS_STATIC_URL: '/static/'
# Example settings to use Amazon S3 as a storage backend with django storages:
# https://django-storages.readthedocs.org/en/latest/backends/amazon-S3.html#amazon-s3
#
# Note: AWS_S3_CUSTOM_DOMAIN is required; otherwise boto will generate non-working
# querystring URLs for assets (see https://github.com/boto/boto/issues/1477)
#
# CREDENTIALS_BUCKET: mybucket
# credentials_s3_domain: s3.amazonaws.com
# CREDENTIALS_MEDIA_ROOT: 'media'
......@@ -94,7 +98,7 @@ CREDENTIALS_STATIC_URL: '/static/'
#
# CREDENTIALS_FILE_STORAGE_BACKEND:
# AWS_STORAGE_BUCKET_NAME: '{{ CREDENTIALS_BUCKET }}'
# AWS_CUSTOM_DOMAIN: '{{ CREDENTIALS_BUCKET }}.{{ credentials_s3_domain }}'
# AWS_S3_CUSTOM_DOMAIN: '{{ CREDENTIALS_BUCKET }}.{{ credentials_s3_domain }}'
# AWS_ACCESS_KEY_ID: 'XXXAWS_ACCESS_KEYXXX'
# AWS_SECRET_ACCESS_KEY: 'XXXAWS_SECRET_KEYXXX'
# AWS_QUERYSTRING_AUTH: False
......@@ -117,9 +121,14 @@ CREDENTIALS_FILE_STORAGE_BACKEND:
STATIC_ROOT: '{{ CREDENTIALS_STATIC_ROOT }}'
MEDIA_URL: '{{ CREDENTIALS_MEDIA_URL }}'
STATIC_URL: '{{ CREDENTIALS_STATIC_URL }}'
STATICFILES_STORAGE: 'django.contrib.staticfiles.storage.StaticFilesStorage'
STATICFILES_STORAGE: 'django.contrib.staticfiles.storage.ManifestStaticFilesStorage'
DEFAULT_FILE_STORAGE: 'django.core.files.storage.FileSystemStorage'
# Note: the protocol in CORS whitelist values is necessary so nginx can match the correct origin
CREDENTIALS_CORS_WHITELIST:
- "http://{{ CREDENTIALS_DOMAIN }}"
- "https://{{ CREDENTIALS_DOMAIN }}"
CREDENTIALS_VERSION: "master"
CREDENTIALS_REPOS:
- PROTOCOL: "{{ COMMON_GIT_PROTOCOL }}"
......@@ -146,11 +155,11 @@ CREDENTIALS_SERVICE_CONFIG:
TIME_ZONE: '{{ CREDENTIALS_TIME_ZONE }}'
LANGUAGE_CODE: '{{ CREDENTIALS_LANGUAGE_CODE }}'
OAUTH2_PROVIDER_URL: '{{ CREDENTIALS_OAUTH_URL_ROOT }}/oauth2'
OAUTH2_PROVIDER_URL: '{{ CREDENTIALS_OAUTH_URL_ROOT }}'
SOCIAL_AUTH_EDX_OIDC_KEY: '{{ CREDENTIALS_SOCIAL_AUTH_EDX_OIDC_KEY }}'
SOCIAL_AUTH_EDX_OIDC_SECRET: '{{ CREDENTIALS_SOCIAL_AUTH_EDX_OIDC_SECRET }}'
SOCIAL_AUTH_EDX_OIDC_ID_TOKEN_DECRYPTION_KEY: '{{ CREDENTIALS_SOCIAL_AUTH_EDX_OIDC_SECRET }}'
SOCIAL_AUTH_EDX_OIDC_URL_ROOT: '{{ CREDENTIALS_OAUTH_URL_ROOT }}/oauth2'
SOCIAL_AUTH_EDX_OIDC_URL_ROOT: '{{ CREDENTIALS_OAUTH_URL_ROOT }}'
SOCIAL_AUTH_REDIRECT_IS_HTTPS: '{{ CREDENTIALS_SOCIAL_AUTH_REDIRECT_IS_HTTPS }}'
# db config
......
......@@ -38,6 +38,9 @@
state: present
sudo_user: "{{ credentials_user }}"
with_items: "{{ credentials_requirements }}"
tags:
- install
- install:app-requirements
- name: create nodeenv
shell: >
......
......@@ -15,6 +15,11 @@ upstream credentials_app_server {
{% endfor %}
}
map $http_origin $cors_header {
default "";
'~*^({{ CREDENTIALS_CORS_WHITELIST|join('|')|replace('.', '\.') }})$' "$http_origin";
}
server {
server_name {{ CREDENTIALS_HOSTNAME }};
......@@ -39,6 +44,8 @@ server {
location ~ ^{{ CREDENTIALS_STATIC_URL }}(?P<file>.*) {
root {{ CREDENTIALS_STATIC_ROOT }};
add_header Access-Control-Allow-Origin $cors_header always;
add_header Cache-Control "max-age=31536000";
try_files /$file =404;
}
......
---
- name: check out the demo course
git: >
git_2_0_1: >
dest={{ demo_code_dir }} repo={{ demo_repo }} version={{ demo_version }}
accept_hostkey=yes
sudo_user: "{{ demo_edxapp_user }}"
......
......@@ -55,7 +55,10 @@ DISCOVERY_CACHES:
DISCOVERY_VERSION: "master"
DISCOVERY_DJANGO_SETTINGS_MODULE: "course_discovery.settings.production"
DISCOVERY_URL_ROOT: 'http://discovery:18381'
DISCOVERY_OAUTH_URL_ROOT: 'http://127.0.0.1:8000'
DISCOVERY_OAUTH_URL_ROOT: '{{ EDXAPP_LMS_ISSUER | default("http://127.0.0.1:8000/oauth2") }}'
DISCOVERY_EDX_DRF_EXTENSIONS:
OAUTH2_USER_INFO_URL: '{{ DISCOVERY_OAUTH_URL_ROOT }}/user_info'
DISCOVERY_SECRET_KEY: 'Your secret key here'
DISCOVERY_TIME_ZONE: 'UTC'
......@@ -79,7 +82,7 @@ DISCOVERY_SERVICE_CONFIG:
SOCIAL_AUTH_EDX_OIDC_KEY: '{{ DISCOVERY_SOCIAL_AUTH_EDX_OIDC_KEY }}'
SOCIAL_AUTH_EDX_OIDC_SECRET: '{{ DISCOVERY_SOCIAL_AUTH_EDX_OIDC_SECRET }}'
SOCIAL_AUTH_EDX_OIDC_ID_TOKEN_DECRYPTION_KEY: '{{ DISCOVERY_SOCIAL_AUTH_EDX_OIDC_SECRET }}'
SOCIAL_AUTH_EDX_OIDC_URL_ROOT: '{{ DISCOVERY_OAUTH_URL_ROOT }}/oauth2'
SOCIAL_AUTH_EDX_OIDC_URL_ROOT: '{{ DISCOVERY_OAUTH_URL_ROOT }}'
SOCIAL_AUTH_REDIRECT_IS_HTTPS: '{{ DISCOVERY_SOCIAL_AUTH_REDIRECT_IS_HTTPS }}'
STATIC_ROOT: "{{ COMMON_DATA_DIR }}/{{ discovery_service_name }}/staticfiles"
......@@ -95,6 +98,8 @@ DISCOVERY_SERVICE_CONFIG:
ECOMMERCE_API_URL: '{{ DISCOVERY_ECOMMERCE_API_URL }}'
COURSES_API_URL: '{{ DISCOVERY_COURSES_API_URL }}'
EDX_DRF_EXTENSIONS: '{{ DISCOVERY_EDX_DRF_EXTENSIONS }}'
DISCOVERY_REPOS:
- PROTOCOL: "{{ COMMON_GIT_PROTOCOL }}"
......
......@@ -54,8 +54,8 @@
- name: migrate
shell: >
chdir={{ ecommerce_code_dir }}
DB_MIGRATION_USER={{ COMMON_MYSQL_MIGRATE_USER }}
DB_MIGRATION_PASS={{ COMMON_MYSQL_MIGRATE_PASS }}
DB_MIGRATION_USER='{{ COMMON_MYSQL_MIGRATE_USER }}'
DB_MIGRATION_PASS='{{ COMMON_MYSQL_MIGRATE_PASS }}'
{{ ecommerce_venv_dir }}/bin/python ./manage.py migrate --noinput
sudo_user: "{{ ecommerce_user }}"
environment: "{{ ecommerce_environment }}"
......
---
- name: git checkout edx_ansible repo into edx_ansible_code_dir
git: >
git_2_0_1: >
dest={{ edx_ansible_code_dir }} repo={{ edx_ansible_source_repo }} version={{ configuration_version }}
accept_hostkey=yes
sudo_user: "{{ edx_ansible_user }}"
......
......@@ -12,7 +12,7 @@ IFS=","
-v add verbosity to edx_ansible run
-h this
<repo> - must be one of edx-platform, edx-workers, xqueue, cs_comments_service, xserver, configuration, read-only-certificate-code, edx-analytics-data-api, edx-ora2, insights, ecommerce, programs, course_discovery
<repo> - must be one of edx-platform, edx-workers, xqueue, cs_comments_service, credentials, xserver, configuration, read-only-certificate-code, edx-analytics-data-api, edx-ora2, insights, ecommerce, programs, course_discovery
<version> - can be a commit or tag
EO
......@@ -48,6 +48,7 @@ edx_ansible_cmd="{{ edx_ansible_venv_bin }}/ansible-playbook -i localhost, -c lo
repos_to_cmd["edx-platform"]="$edx_ansible_cmd edxapp.yml -e 'edx_platform_version=$2'"
repos_to_cmd["edx-workers"]="$edx_ansible_cmd edxapp.yml -e 'edx_platform_version=$2' -e 'celery_worker=true'"
repos_to_cmd["xqueue"]="$edx_ansible_cmd xqueue.yml -e 'xqueue_version=$2' -e 'elb_pre_post=false'"
repos_to_cmd["credentials"]="$edx_ansible_cmd credentials.yml -e 'credentials_version=$2'"
repos_to_cmd["cs_comments_service"]="$edx_ansible_cmd forum.yml -e 'forum_version=$2'"
repos_to_cmd["xserver"]="$edx_ansible_cmd xserver.yml -e 'xserver_version=$2'"
repos_to_cmd["configuration"]="$edx_ansible_cmd edx_ansible.yml -e 'configuration_version=$2'"
......
......@@ -108,7 +108,6 @@ edx_notes_api_requirements_base: "{{ edx_notes_api_code_dir }}/requirements"
# Application python requirements
edx_notes_api_requirements:
- base.txt
- optional.txt
#
# OS packages
......
......@@ -55,8 +55,8 @@
- name: migrate
shell: >
chdir={{ edx_notes_api_code_dir }}
DB_MIGRATION_USER={{ COMMON_MYSQL_MIGRATE_USER }}
DB_MIGRATION_PASS={{ COMMON_MYSQL_MIGRATE_PASS }}
DB_MIGRATION_USER='{{ COMMON_MYSQL_MIGRATE_USER }}'
DB_MIGRATION_PASS='{{ COMMON_MYSQL_MIGRATE_PASS }}'
{{ edx_notes_api_home }}/venvs/{{ edx_notes_api_service_name }}/bin/python {{ edx_notes_api_manage }} migrate --noinput --settings="notesserver.settings.yaml_config"
sudo_user: "{{ edx_notes_api_user }}"
environment:
......
......@@ -163,7 +163,7 @@
- install:code
- name: checkout code over ssh
git: >
git_2_0_1: >
repo=git@{{ item.DOMAIN }}:{{ item.PATH }}/{{ item.REPO }}
dest={{ item.DESTINATION }} version={{ item.VERSION }}
accept_hostkey=yes key_file={{ edx_service_home }}/.ssh/{{ item.REPO }}
......@@ -176,7 +176,7 @@
- install:code
- name: checkout code over https
git: >
git_2_0_1: >
repo=https://{{ item.DOMAIN }}/{{ item.PATH }}/{{ item.REPO }}
dest={{ item.DESTINATION }} version={{ item.VERSION }}
sudo_user: "{{ edx_service_user }}"
......
......@@ -35,7 +35,7 @@
# Example play:
#
# export AWS_PROFILE=sandbox
# ansible-playbook -c local -i 'localhost,' edx_service_rds.yml -e@~/vpc-test.yml -e@~/e0dTest-edx.yml -e 'cluster=test'
# ansible-playbook -i 'localhost,' edx_service_rds.yml -e@/path/to/secure-repo/cloud_migrations/vpcs/vpc-file.yml -e@/path/to/secure-repo/cloud_migrations/dbs/e-d-c-rds.yml
#
# TODO:
# - handle db deletes and updates
......
......@@ -133,6 +133,7 @@ EDXAPP_CAS_ATTRIBUTE_PACKAGE: ""
EDXAPP_ENABLE_AUTO_AUTH: false
# Settings for enabling and configuring third party authorization
EDXAPP_ENABLE_THIRD_PARTY_AUTH: false
EDXAPP_ENABLE_OAUTH2_PROVIDER: false
EDXAPP_ENABLE_EDXNOTES: false
......@@ -142,9 +143,6 @@ EDXAPP_ENABLE_CREDIT_API: false
# Settings for enabling and JWT auth for DRF API's
EDXAPP_ENABLE_JWT_AUTH: false
EDXAPP_MODULESTORE_MAPPINGS:
'preview\.': 'draft-preferred'
EDXAPP_GIT_REPO_DIR: '/edx/var/edxapp/course_repos'
EDXAPP_GIT_REPO_EXPORT_DIR: '/edx/var/edxapp/export_course_repos'
......@@ -198,6 +196,7 @@ EDXAPP_FEATURES:
ENABLE_CREDIT_ELIGIBILITY: "{{ EDXAPP_ENABLE_CREDIT_ELIGIBILITY }}"
ENABLE_SPECIAL_EXAMS: false
ENABLE_JWT_AUTH: "{{ EDXAPP_ENABLE_JWT_AUTH }}"
ENABLE_OAUTH2_PROVIDER: "{{ EDXAPP_ENABLE_OAUTH2_PROVIDER }}"
EDXAPP_BOOK_URL: ""
# This needs to be set to localhost
......@@ -634,6 +633,26 @@ EDXAPP_LMS_SPLIT_DOC_STORE_CONFIG:
EDXAPP_CMS_DOC_STORE_CONFIG:
<<: *edxapp_generic_default_docstore
edxapp_databases:
# edxapp's edxapp-migrate scripts and the edxapp_migrate play
# will ensure that any DB not named read_replica will be migrated
# for both the lms and cms.
read_replica:
ENGINE: 'django.db.backends.mysql'
NAME: "{{ EDXAPP_MYSQL_REPLICA_DB_NAME }}"
USER: "{{ EDXAPP_MYSQL_REPLICA_USER }}"
PASSWORD: "{{ EDXAPP_MYSQL_REPLICA_PASSWORD }}"
HOST: "{{ EDXAPP_MYSQL_REPLICA_HOST }}"
PORT: "{{ EDXAPP_MYSQL_REPLICA_PORT }}"
default:
ENGINE: 'django.db.backends.mysql'
NAME: "{{ EDXAPP_MYSQL_DB_NAME }}"
USER: "{{ EDXAPP_MYSQL_USER }}"
PASSWORD: "{{ EDXAPP_MYSQL_PASSWORD }}"
HOST: "{{ EDXAPP_MYSQL_HOST }}"
PORT: "{{ EDXAPP_MYSQL_PORT }}"
ATOMIC_REQUESTS: True
edxapp_generic_auth_config: &edxapp_generic_auth
EVENT_TRACKING_SEGMENTIO_EMIT_WHITELIST: "{{ EDXAPP_EVENT_TRACKING_SEGMENTIO_EMIT_WHITELIST }}"
ECOMMERCE_API_SIGNING_KEY: "{{ EDXAPP_ECOMMERCE_API_SIGNING_KEY }}"
......@@ -662,24 +681,7 @@ edxapp_generic_auth_config: &edxapp_generic_auth
ssl: "{{ EDXAPP_MONGO_USE_SSL }}"
ADDITIONAL_OPTIONS: "{{ EDXAPP_CONTENTSTORE_ADDITIONAL_OPTS }}"
DOC_STORE_CONFIG: *edxapp_generic_default_docstore
DATABASES:
# edxapp's edxapp-migrate scripts and the edxapp_migrate play
# will ensure that any DB not named read_replica will be migrated
# for both the lms and cms.
read_replica:
ENGINE: 'django.db.backends.mysql'
NAME: "{{ EDXAPP_MYSQL_REPLICA_DB_NAME }}"
USER: "{{ EDXAPP_MYSQL_REPLICA_USER }}"
PASSWORD: "{{ EDXAPP_MYSQL_REPLICA_PASSWORD }}"
HOST: "{{ EDXAPP_MYSQL_REPLICA_HOST }}"
PORT: "{{ EDXAPP_MYSQL_REPLICA_PORT }}"
default:
ENGINE: 'django.db.backends.mysql'
NAME: "{{ EDXAPP_MYSQL_DB_NAME }}"
USER: "{{ EDXAPP_MYSQL_USER }}"
PASSWORD: "{{ EDXAPP_MYSQL_PASSWORD }}"
HOST: "{{ EDXAPP_MYSQL_HOST }}"
PORT: "{{ EDXAPP_MYSQL_PORT }}"
DATABASES: "{{ edxapp_databases }}"
ANALYTICS_API_KEY: "{{ EDXAPP_ANALYTICS_API_KEY }}"
EMAIL_HOST_USER: "{{ EDXAPP_EMAIL_HOST_USER }}"
EMAIL_HOST_PASSWORD: "{{ EDXAPP_EMAIL_HOST_PASSWORD }}"
......@@ -822,7 +824,6 @@ generic_env_config: &edxapp_generic_env
CAS_SERVER_URL: "{{ EDXAPP_CAS_SERVER_URL }}"
CAS_EXTRA_LOGIN_PARAMS: "{{ EDXAPP_CAS_EXTRA_LOGIN_PARAMS }}"
CAS_ATTRIBUTE_CALLBACK: "{{ EDXAPP_CAS_ATTRIBUTE_CALLBACK }}"
HOSTNAME_MODULESTORE_DEFAULT_MAPPINGS: "{{ EDXAPP_MODULESTORE_MAPPINGS }}"
UNIVERSITY_EMAIL: "{{ EDXAPP_UNIVERSITY_EMAIL }}"
PRESS_EMAIL: "{{ EDXAPP_PRESS_EMAIL }}"
SOCIAL_MEDIA_FOOTER_URLS: "{{ EDXAPP_SOCIAL_MEDIA_FOOTER_URLS }}"
......
......@@ -63,7 +63,7 @@
# Do A Checkout
- name: checkout edx-platform repo into {{ edxapp_code_dir }}
git: >
git_2_0_1: >
dest={{ edxapp_code_dir }}
repo={{ edx_platform_repo }}
version={{ edx_platform_version }}
......@@ -90,7 +90,7 @@
# (yes, lowercase) to a Stanford-style theme and set
# edxapp_theme_name (again, lowercase) to its name.
- name: checkout Stanford-style theme
git: >
git_2_0_1: >
dest={{ edxapp_app_dir }}/themes/{{ edxapp_theme_name }}
repo={{ edxapp_theme_source_repo }}
version={{ edxapp_theme_version }}
......@@ -109,7 +109,7 @@
# EDXAPP_COMPREHENSIVE_THEME_DIR to the directory you want to check
# out to.
- name: checkout comprehensive theme
git: >
git_2_0_1: >
dest={{ EDXAPP_COMPREHENSIVE_THEME_DIR }}
repo={{ EDXAPP_COMPREHENSIVE_THEME_SOURCE_REPO }}
version={{ EDXAPP_COMPREHENSIVE_THEME_VERSION }}
......@@ -118,7 +118,7 @@
sudo_user: "{{ edxapp_user }}"
environment:
GIT_SSH: "{{ edxapp_git_ssh }}"
register: edxapp_theme_checkout
register: edxapp_comprehensive_theme_checkout
tags:
- install
- install:code
......
......@@ -9,6 +9,7 @@ edxlocal_databases:
- "{{ ORA_MYSQL_DB_NAME | default(None) }}"
- "{{ XQUEUE_MYSQL_DB_NAME | default(None) }}"
- "{{ EDXAPP_MYSQL_DB_NAME | default(None) }}"
- "{{ EDXAPP_MYSQL_CSMH_DB_NAME | default(None) }}"
- "{{ EDX_NOTES_API_MYSQL_DB_NAME | default(None) }}"
- "{{ PROGRAMS_DEFAULT_DB_NAME | default(None) }}"
- "{{ ANALYTICS_API_DEFAULT_DB_NAME | default(None) }}"
......@@ -43,6 +44,11 @@ edxlocal_database_users:
pass: "{{ EDXAPP_MYSQL_PASSWORD | default(None) }}"
}
- {
db: "{{ EDXAPP_MYSQL_CSMH_DB_NAME | default(None) }}",
user: "{{ EDXAPP_MYSQL_CSMH_USER | default(None) }}",
pass: "{{ EDXAPP_MYSQL_CSMH_PASSWORD | default(None) }}"
}
- {
db: "{{ PROGRAMS_DEFAULT_DB_NAME | default(None) }}",
user: "{{ PROGRAMS_DATABASES.default.USER | default(None) }}",
pass: "{{ PROGRAMS_DATABASES.default.PASSWORD | default(None) }}"
......
......@@ -21,6 +21,7 @@
name: "{{ item.user }}"
password: "{{ item.pass }}"
priv: "{{ item.db }}.*:ALL"
append_privs: yes
when: item.db != None and item.db != ''
with_items: "{{ edxlocal_database_users }}"
......
......@@ -33,7 +33,7 @@ script.disable_dynamic: true
# to perform discovery when new nodes (master or data) are started:
#
# discovery.zen.ping.unicast.hosts: ["host1", "host2:port", "host3[portX-portY]"]
{%- if ELASTICSEARCH_CLUSTER_MEMBERS|length > 1 -%}
{% if ELASTICSEARCH_CLUSTER_MEMBERS|length > 1 -%}
discovery.zen.ping.unicast.hosts: ['{{ELASTICSEARCH_CLUSTER_MEMBERS|join("\',\'") }}']
......
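The `join` filter in the template above renders the cluster members as a single-quoted list. A minimal Python sketch of what that filter produces (the member addresses here are hypothetical, not from the source):

```python
def unicast_hosts(members):
    # Equivalent of the Jinja expression:
    # ['{{ ELASTICSEARCH_CLUSTER_MEMBERS|join("','") }}']
    # Each member is wrapped in single quotes and comma-separated.
    return "['" + "','".join(members) + "']"

print(unicast_hosts(["10.0.0.1", "10.0.0.2"]))
```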
......@@ -39,7 +39,7 @@
- install:configuration
- name: git checkout forum repo into {{ forum_code_dir }}
git: >
git_2_0_1: >
dest={{ forum_code_dir }} repo={{ forum_source_repo }} version={{ forum_version }}
accept_hostkey=yes
sudo_user: "{{ forum_user }}"
......
# Tasks to run if cloning repos to edx-platform.
- name: clone all course repos
git: dest={{ GITRELOAD_REPODIR }}/{{ item.name }} repo={{ item.url }} version={{ item.commit }}
git_2_0_1: dest={{ GITRELOAD_REPODIR }}/{{ item.name }} repo={{ item.url }} version={{ item.commit }}
sudo_user: "{{ common_web_user }}"
with_items: GITRELOAD_REPOS
......
## In order to use this role you must use a specific set of AMIs
[This role is for use with the AWS ECS AMIs listed here](https://docs.aws.amazon.com/AmazonECS/latest/developerguide/launch_container_instance.html)
---
#
# edX Configuration
#
# github: https://github.com/edx/configuration
# wiki: https://github.com/edx/configuration/wiki
# code style: https://github.com/edx/configuration/wiki/Ansible-Coding-Conventions
# license: https://github.com/edx/configuration/blob/master/LICENSE.TXT
#
#
# Defaults for role go-agent-docker-server
#
# key for go-agents to autoregister with the go-server
GO_SERVER_AUTO_REGISTER_KEY: "dev-only-override-this-key"
GO_AGENT_DOCKER_RESOURCES: "tubular,python"
GO_AGENT_DOCKER_ENVIRONMENT: "sandbox"
GO_AGENT_DOCKER_CONF_HOME: "/tmp/go-agent/conf"
\ No newline at end of file
---
#
# edX Configuration
#
# github: https://github.com/edx/configuration
# wiki: https://github.com/edx/configuration/wiki
# code style: https://github.com/edx/configuration/wiki/Ansible-Coding-Conventions
# license: https://github.com/edx/configuration/blob/master/LICENSE.TXT
#
#
#
# Tasks for role go-agent-docker-server
#
# Overview:
#
# Deploys configuration for Docker-based go-agents.
#
# Dependencies:
# - openjdk7
#
# Example play:
#
# - name: Configure instance(s)
# hosts: go-server
# sudo: True
# vars_files:
# - "{{ secure_dir }}/admin/sandbox.yml"
# gather_facts: True
# roles:
# - common
#
- name: install go-server configuration
template:
src: edx/app/go-agent-docker-server/autoregister.properties.j2
dest: "{{ GO_AGENT_DOCKER_CONF_HOME }}/autoregister.properties"
mode: 0600
owner: root
group: root
agent.auto.register.key={{ GO_SERVER_AUTO_REGISTER_KEY }}
agent.auto.register.resources={{ GO_AGENT_DOCKER_RESOURCES }}
agent.auto.register.environments={{ GO_AGENT_DOCKER_ENVIRONMENT }}
\ No newline at end of file
......@@ -18,9 +18,9 @@ GO_AGENT_HOME: "/var/lib/go-agent/"
GO_AGENT_CONF_HOME: "/etc/default/"
# Java version settings
GO_AGENT_ORACLEJDK_VERSION: "7u51"
GO_AGENT_ORACLEJDK_BASE: "jdk1.7.0_51"
GO_AGENT_ORACLEJDK_BUILD: "b13"
GO_AGENT_ORACLEJDK_VERSION: "7u80"
GO_AGENT_ORACLEJDK_BASE: "jdk1.7.0_80"
GO_AGENT_ORACLEJDK_BUILD: "b15"
GO_AGENT_ORACLEJDK_LINK: "/usr/lib/jvm/java-7-oracle"
# java tuning
......@@ -34,4 +34,4 @@ GO_AGENT_APT_NAME: "go-agent"
# go-agent configuration settings
# override the server ip and port to connect an agent to its go-server master.
GO_AGENT_SERVER_IP: 127.0.0.1
GO_AGENT_SERVER_PORT: 8153
\ No newline at end of file
GO_AGENT_SERVER_PORT: 8153
......@@ -14,14 +14,13 @@ GO_SERVER_SERVICE_NAME: "go-server"
GO_SERVER_USER: "go"
GO_SERVER_GROUP: "{{ GO_SERVER_USER }}"
GO_SERVER_VERSION: "16.1.0-2855"
GO_SERVER_HOME: "/var/lib/go-server/"
GO_SERVER_HOME: "/var/lib/go-server"
GO_SERVER_CONF_HOME: "/etc/go/"
# Java version settings
GO_SERVER_ORACLEJDK_VERSION: "7u51"
GO_SERVER_ORACLEJDK_BASE: "jdk1.7.0_51"
GO_SERVER_ORACLEJDK_BUILD: "b13"
GO_SERVER_ORACLEJDK_VERSION: "7u80"
GO_SERVER_ORACLEJDK_BASE: "jdk1.7.0_80"
GO_SERVER_ORACLEJDK_BUILD: "b15"
GO_SERVER_ORACLEJDK_LINK: "/usr/lib/jvm/java-7-oracle"
# java tuning
......@@ -42,3 +41,6 @@ GO_SERVER_OAUTH_LOGIN_JAR_DESTINATION: "{{ GO_SERVER_HOME }}/plugins/external/"
GO_SERVER_PASSWORD_FILE_NAME: "password.txt"
GO_SERVER_ADMIN_USERS: ["admin"]
GO_SERVER_CRUISE_CONTROL_DB_DESTIONATION: "/var/lib/go-server/db/h2db/cruise.h2.db"
# key for go-agents to autoregister with the go-server
GO_SERVER_AUTO_REGISTER_KEY: "dev-only-override-this-key"
......@@ -43,7 +43,15 @@
name: "{{ GO_SERVER_APT_NAME }}={{ GO_SERVER_VERSION }}"
update_cache: yes
- name: install go-server-oauth-login
- name: create go-server plugin directory
file:
path: "{{ GO_SERVER_OAUTH_LOGIN_JAR_DESTINATION }}"
state: directory
mode: 0776
owner: "{{ GO_SERVER_USER }}"
group: "{{ GO_SERVER_GROUP }}"
- name: install go-server oauth plugin
get_url:
url: "{{ GO_SERVER_OAUTH_LOGIN_JAR_URL }}"
dest: "{{ GO_SERVER_OAUTH_LOGIN_JAR_DESTINATION }}"
......@@ -68,14 +76,6 @@
owner: "{{ GO_SERVER_USER }}"
group: "{{ GO_SERVER_GROUP }}"
- name: copy go-server cruise database
copy:
src: cruise.h2.db
dest: "{{ GO_SERVER_CRUISE_CONTROL_DB_DESTIONATION }}"
mode: 0660
owner: "{{ GO_SERVER_USER }}"
group: "{{ GO_SERVER_GROUP }}"
- name: restart go-server
service:
name: "{{ GO_SERVER_SERVICE_NAME }}"
......
<cruise xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xsi:noNamespaceSchemaLocation="cruise-config.xsd" schemaVersion="77">
<server artifactsdir="artifacts" siteUrl="http://{{ ansible_fqdn }}:8153" secureSiteUrl="https://{{ ansible_fqdn }}:8154" commandRepositoryLocation="default" serverId="d3a0287d-7698-4afe-a687-c165e8295918">
<server artifactsdir="artifacts" siteUrl="http://{{ ansible_fqdn }}:8153" secureSiteUrl="https://{{ ansible_fqdn }}:8154" commandRepositoryLocation="default" serverId="d3a0287d-7698-4afe-a687-c165e8295918" agentAutoRegisterKey="{{ GO_SERVER_AUTO_REGISTER_KEY }}">
<security>
<passwordFile path="{{ GO_SERVER_CONF_HOME }}/{{ GO_SERVER_PASSWORD_FILE_NAME }}" />
<admins>
......
......@@ -9,7 +9,7 @@
#
##
# Defaults for role hadoop_common
#
#
HADOOP_COMMON_VERSION: 2.3.0
HADOOP_COMMON_USER_HOME: "{{ COMMON_APP_DIR }}/hadoop"
......@@ -60,3 +60,23 @@ hadoop_common_debian_pkgs:
- maven
hadoop_common_redhat_pkgs: []
#
# MapReduce/Yarn memory config (defaults for m1.medium)
# http://docs.aws.amazon.com/ElasticMapReduce/latest/DeveloperGuide/TaskConfiguration_H2.html
#
# mapred_site_config:
# mapreduce.map.memory.mb: 768
# mapreduce.map.java.opts: '-Xmx512M'
# mapreduce.reduce.memory.mb: 1024
# mapreduce.reduce.java.opts: '-Xmx768M'
# yarn_site_config:
# yarn.app.mapreduce.am.resource.mb: 1024
# yarn.scheduler.minimum-allocation-mb: 32
# yarn.scheduler.maximum-allocation-mb: 2048
# yarn.nodemanager.resource.memory-mb: 2048
# yarn.nodemanager.vmem-pmem-ratio: 2.1
mapred_site_config: {}
yarn_site_config: {}
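Both `mapred_site_config` and `yarn_site_config` are expanded into Hadoop `<property>` elements by the corresponding `*.xml.j2` templates. A rough Python sketch of that expansion (standing in for the Jinja loop, not the actual templating engine):

```python
def render_properties(config):
    # Each key/value pair in the config dict becomes one Hadoop
    # <property> element, mirroring the mapred-site.xml.j2 and
    # yarn-site.xml.j2 template loops.
    elements = []
    for key, value in config.items():
        elements.append(
            "  <property>\n"
            "    <name>%s</name>\n"
            "    <value>%s</value>\n"
            "  </property>" % (key, value)
        )
    return "\n".join(elements)

print(render_properties({"mapreduce.map.memory.mb": 768}))
```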
......@@ -6,4 +6,14 @@
<name>mapreduce.framework.name</name>
<value>yarn</value>
</property>
{% if mapred_site_config is defined %}
{% for key,value in mapred_site_config.iteritems() %}
<property>
<name>{{ key }}</name>
<value>{{ value }}</value>
</property>
{% endfor %}
{% endif %}
</configuration>
\ No newline at end of file
......@@ -5,9 +5,19 @@
<name>yarn.nodemanager.aux-services</name>
<value>mapreduce_shuffle</value>
</property>
<property>
<name>yarn.nodemanager.aux-services.mapreduce.shuffle.class</name>
<value>org.apache.hadoop.mapred.ShuffleHandler</value>
</property>
{% if yarn_site_config is defined %}
{% for key,value in yarn_site_config.iteritems() %}
<property>
<name>{{ key }}</name>
<value>{{ value }}</value>
</property>
{% endfor %}
{% endif %}
</configuration>
\ No newline at end of file
......@@ -14,7 +14,7 @@
mode=0755
- name: check out the harprofiler
git: >
git_2_0_1: >
dest={{ harprofiler_dir }}
repo={{ harprofiler_github_url }} version={{ harprofiler_version }}
accept_hostkey=yes
......
......@@ -71,8 +71,8 @@
- name: migrate
shell: >
chdir={{ insights_code_dir }}
DB_MIGRATION_USER={{ COMMON_MYSQL_MIGRATE_USER }}
DB_MIGRATION_PASS={{ COMMON_MYSQL_MIGRATE_PASS }}
DB_MIGRATION_USER='{{ COMMON_MYSQL_MIGRATE_USER }}'
DB_MIGRATION_PASS='{{ COMMON_MYSQL_MIGRATE_PASS }}'
{{ insights_home }}/venvs/{{ insights_service_name }}/bin/python {{ insights_manage }} migrate --noinput
sudo_user: "{{ insights_user }}"
environment: "{{ insights_environment }}"
......
# Jenkins Analytics
A role that sets up Jenkins for scheduling analytics tasks.
This role performs the following steps:
* Installs Jenkins using `jenkins_master`.
* Configures `config.xml` to enable security and use
Linux Auth Domain.
* Creates Jenkins credentials.
* Enables the use of Jenkins CLI.
* Installs a seed job from configured repository, launches it and waits
for it to finish.
## Configuration
When you are using Vagrant you **need** to set the `VAGRANT_JENKINS_LOCAL_VARS_FILE`
environment variable. This variable must point to a file containing
all required variables from this section.
This file needs to contain, at least, the following variables
(see the next few sections for more information about them):
* `JENKINS_ANALYTICS_USER_PASSWORD_HASHED`
* `JENKINS_ANALYTICS_USER_PASSWORD_PLAIN`
* `JENKINS_ANALYTICS_GITHUB_KEY` or `JENKINS_ANALYTICS_CREDENTIALS`
### End-user editable configuration
#### Jenkins user password
You'll need to override the default `jenkins` user password; please do,
as this sets the **shell** password for that user.
You'll need to set both a plain password and a hashed one.
To obtain a hashed password use the `mkpasswd` command, for example:
`mkpasswd --method=sha-512`. (Note: a hashed password is required
so that Ansible reports a clean "changed"/"unchanged" status
for this step.)
* `JENKINS_ANALYTICS_USER_PASSWORD_HASHED`: hashed password
* `JENKINS_ANALYTICS_USER_PASSWORD_PLAIN`: plain password
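The `mkpasswd` output follows the modular crypt format, `$6$<salt>$<digest>` for SHA-512. As a small illustration (stdlib-only Python, using the example hash that appears later in this README):

```python
def parse_sha512_crypt(hashed):
    # A SHA-512 crypt hash has the shape: $6$<salt>$<digest>
    # Splitting on "$" yields ["", "6", salt, digest].
    parts = hashed.split("$")
    assert parts[1] == "6", "not a SHA-512 crypt hash"
    return {"salt": parts[2], "digest": parts[3]}

example = ("$6$rAVyI.p2wXVDKk5w$y0G1MQehmHtvaPgdtbrnvAsBqYQ99g939vxrdLXtPQCh/"
           "e7GJVwbnqIKZpve8EcMLTtq.7sZwTBYV9Tdjgf1k.")
print(parse_sha512_crypt(example)["salt"])
```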
#### Jenkins seed job configuration
This section will be filled in as part of PR [#2830](https://github.com/edx/configuration/pull/2830).
For now, go with the defaults.
#### Jenkins credentials
Jenkins contains its own credential store. To fill it with credentials,
please use the `JENKINS_ANALYTICS_CREDENTIALS` variable. This variable
is a list of objects, each object representing a single credential.
For now, passwords and ssh keys are supported.
If you only need credentials to access GitHub repositories,
you can override `JENKINS_ANALYTICS_GITHUB_KEY`,
which should contain the contents of the private key used
to check out those repositories.
Each credential has a unique ID, which is used to match
the credential to the task(s) for which it is needed.
Examples of credentials variables:
JENKINS_ANALYTICS_GITHUB_KEY: "{{ lookup('file', 'path to keyfile') }}"
JENKINS_ANALYTICS_CREDENTIALS:
# id is a scope-unique credential identifier
- id: test-password
# Scope must be global. To have other scopes you'll need to modify addCredentials.groovy
scope: GLOBAL
# Username associated with this password
username: jenkins
type: username-password
description: Autogenerated by ansible
password: 'password'
# id is a scope-unique credential identifier
- id: github-deploy-key
scope: GLOBAL
# Username this ssh-key is attached to
username: git
# Type of credential, see other entries for example
type: ssh-private-key
passphrase: 'foobar'
description: Generated by ansible
privatekey: |
-----BEGIN RSA PRIVATE KEY-----
Proc-Type: 4,ENCRYPTED
DEK-Info: AES-128-CBC,....
Key contents
-----END RSA PRIVATE KEY-----
#### Other useful variables
* `JENKINS_ANALYTICS_CONCURRENT_JOBS_COUNT`: Configures the number of
executors (that is, the number of concurrent jobs this Jenkins
instance can execute). Defaults to `2`.
### General configuration
The following variables are used by this role:
Variables used by the command that waits for Jenkins to start up
after running the `jenkins_master` role:
jenkins_connection_retries: 60
jenkins_connection_delay: 0.5
#### Auth realm
The Jenkins auth realm encapsulates user management in Jenkins, that is:
* What users can log in
* What credentials they use to log in
The realm type is stored in the `jenkins_auth_realm.name` variable.
In the future we will try to enable other auth realms, while
preserving the ability to run the CLI.
##### Unix Realm
For now only the `unix` realm is supported -- it requires every Jenkins
user to have a shell account on the server.
Unix realm requires the following settings:
* `service`: Jenkins uses PAM configuration for this service. `su` is
a safe choice as it doesn't require a user to have the ability to log in
remotely.
* `plain_password`: plaintext password; **you should change** the default values.
* `hashed_password`: hashed password
Example realm configuration:
jenkins_auth_realm:
name: unix
service: su
plain_password: jenkins
hashed_password: $6$rAVyI.p2wXVDKk5w$y0G1MQehmHtvaPgdtbrnvAsBqYQ99g939vxrdLXtPQCh/e7GJVwbnqIKZpve8EcMLTtq.7sZwTBYV9Tdjgf1k.
#### Seed job configuration
The seed job is configured in the `jenkins_seed_job` variable, which has the
following attributes:
* `name`: Name of the job in Jenkins.
* `time_trigger`: A Jenkins cron entry defining how often this job should run.
* `removed_job_action`: what to do when a job created by a previous run of the seed job
is missing from the current run. This can be either `DELETE` or `IGNORE`.
* `removed_view_action`: what to do when a view created by a previous run of the seed job
is missing from the current run. This can be either `DELETE` or `IGNORE`.
* `scm`: Scm object is used to define seed job repository and related settings.
It has the following properties:
* `scm.type`: It must have a value of `git`.
* `scm.url`: URL for the repository.
* `scm.credential_id`: Id of a credential to use when authenticating to the
repository.
This setting is optional. If it is missing or falsy, credentials will be omitted.
Please note that when you use an ssh repository URL, you'll need to set up a key
regardless of whether the repository is public or private (establishing an ssh
connection requires a valid key pair).
* `scm.target_jobs`: A shell glob expression relative to repo root selecting
jobs to import.
* `scm.additional_classpath`: A path relative to repo root, pointing to a
directory that contains additional groovy scripts used by the seed jobs.
Example scm configuration:
jenkins_seed_job:
name: seed
time_trigger: "H * * * *"
removed_job_action: "DELETE"
removed_view_action: "IGNORE"
scm:
type: git
url: "git@github.com:edx-ops/edx-jenkins-job-dsl.git"
credential_id: "github-deploy-key"
target_jobs: "jobs/analytics-edx-jenkins.edx.org/*Jobs.groovy"
additional_classpath: "src/main/groovy"
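The `target_jobs` glob selects which DSL scripts the seed job imports. A rough illustration using Python's `fnmatch` (a stand-in: the job-dsl plugin actually uses Ant-style globs, and the file names below are hypothetical):

```python
import fnmatch

pattern = "jobs/analytics-edx-jenkins.edx.org/*Jobs.groovy"
candidates = [
    "jobs/analytics-edx-jenkins.edx.org/AnalyticsJobs.groovy",
    "jobs/analytics-edx-jenkins.edx.org/README.md",
    "jobs/other-folder/OtherJobs.groovy",
]
# Keep only paths matching the glob; fnmatchcase avoids
# platform-dependent case folding.
selected = [p for p in candidates if fnmatch.fnmatchcase(p, pattern)]
print(selected)
```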
Known issues
------------
1. The playbook named `execute_ansible_cli.yaml` should be converted to an
Ansible module (it is already used in a module-ish way).
2. The anonymous user has discover and get-job permissions, as without them the
`get-job` and `build <<job>>` commands wouldn't work.
Granting anonymous these permissions is a workaround for a
transient Jenkins issue (reported a [couple][1] [of][2] [times][3]).
3. We force the unix authentication method -- that is, every user who can log in
to Jenkins also needs to have a shell account on the master.
Dependencies
------------
- `jenkins_master`
[1]: https://issues.jenkins-ci.org/browse/JENKINS-12543
[2]: https://issues.jenkins-ci.org/browse/JENKINS-11024
[3]: https://issues.jenkins-ci.org/browse/JENKINS-22143
---
# See README.md for variable descriptions
JENKINS_ANALYTICS_USER_PASSWORD_HASHED: $6$rAVyI.p2wXVDKk5w$y0G1MQehmHtvaPgdtbrnvAsBqYQ99g939vxrdLXtPQCh/e7GJVwbnqIKZpve8EcMLTtq.7sZwTBYV9Tdjgf1k.
JENKINS_ANALYTICS_USER_PASSWORD_PLAIN: jenkins
JENKINS_ANALYTICS_CREDENTIALS:
- id: github-deploy-key
scope: GLOBAL
username: git
type: ssh-private-key
passphrase: null
description: Autogenerated by ansible
privatekey: "{{ JENKINS_ANALYTICS_GITHUB_KEY }}"
JENKINS_ANALYTICS_CONCURRENT_JOBS_COUNT: 2
jenkins_credentials_root: '/tmp/credentials'
jenkins_credentials_file_dest: "{{ jenkins_credentials_root }}/credentials.json"
jenkins_credentials_script: "{{ jenkins_credentials_root }}/addCredentials.groovy"
jenkins_connection_retries: 240
jenkins_connection_delay: 1
jenkins_auth_realm:
name: unix
service: su
# Change this default password: (see README.md to see how you can do it)
plain_password: "{{ JENKINS_ANALYTICS_USER_PASSWORD_PLAIN }}"
hashed_password: "{{ JENKINS_ANALYTICS_USER_PASSWORD_HASHED }}"
jenkins_seed_job:
name: analytics-seed-job
time_trigger: "H * * * *"
removed_job_action: "DELETE"
removed_view_action: "IGNORE"
scm:
type: git
url: "git@github.com:edx-ops/edx-jenkins-job-dsl.git"
credential_id: "github-deploy-key"
target_jobs: "jobs/analytics-edx-jenkins.edx.org/*Jobs.groovy"
additional_classpath: "src/main/groovy"
---
- fail: msg=for now we can execute commands iff jenkins auth realm is unix
when: jenkins_auth_realm.name != "unix"
- set_fact:
jenkins_cli_root: "/tmp/jenkins-cli/{{ ansible_ssh_user }}"
- set_fact:
jenkins_cli_jar: "{{ jenkins_cli_root }}/jenkins_cli.jar"
jenkins_cli_pass: "{{ jenkins_cli_root }}/jenkins_cli_pass"
- name: create cli dir
file: name={{ jenkins_cli_root }} state=directory mode="700"
- name: create pass file
template: src=jenkins-pass-file.j2 dest={{ jenkins_cli_pass }} mode="600"
- name: Wait for Jenkins CLI
uri:
url: "http://localhost:{{ jenkins_port }}/cli/"
method: GET
return_content: yes
status_code: 200,403
register: result
until: (result.status is defined) and ((result.status == 403) or (result.status == 200))
retries: "{{ jenkins_connection_retries }}"
delay: "{{ jenkins_connection_delay }}"
changed_when: false
- name: get cli
get_url:
url: "http://localhost:{{ jenkins_port }}/jnlpJars/jenkins-cli.jar"
dest: "{{ jenkins_cli_jar }}"
- name: login
command: java -jar {{ jenkins_cli_jar }} -s http://localhost:{{ jenkins_port }}
login --username={{ jenkins_user }}
--password-file={{ jenkins_cli_pass }}
- name: execute command
shell: >
{{ jenkins_command_prefix|default('') }} java -jar {{ jenkins_cli_jar }} -s http://localhost:{{ jenkins_port }}
{{ jenkins_command_string }}
register: jenkins_command_output
ignore_errors: "{{ jenkins_ignore_cli_errors|default (False) }}"
- name: "clean up --- remove the credentials dir"
file: name={{ jenkins_cli_root }} state=absent
- name: "clean up --- remove cached Jenkins credentials"
command: rm -rf $HOME/.jenkins
---
- fail: msg=included unix realm by accident
when: jenkins_auth_realm.name != "unix"
- fail: msg=Please change default password for jenkins user
when: jenkins_auth_realm.plain_password == 'jenkins'
- user:
name: "{{ jenkins_user }}"
groups: shadow
append: yes
password: "{{ jenkins_auth_realm.hashed_password }}"
update_password: always
- name: template config.xml
template:
src: jenkins.config.main.xml
dest: "{{ jenkins_home }}/config.xml"
owner: "{{ jenkins_user }}"
group: "{{ jenkins_group }}"
# Unconditionally restart Jenkins, this has two side-effects:
# 1. Jenkins uses new auth realm
# 2. We guarantee that jenkins is started (this is not certain
# as Jenkins is started by handlers from jenkins_master,
# these handlers are launched after this role).
- name: restart Jenkins
service: name=jenkins state=restarted
# Upload Jenkins credentials
- name: create credentials dir
file: name={{ jenkins_credentials_root }} state=directory
- name: upload groovy script
template:
src: addCredentials.groovy
dest: "{{ jenkins_credentials_script }}"
mode: "600"
- name: upload credentials file
template:
src: credentials_file.json.j2
dest: "{{ jenkins_credentials_file_dest }}"
mode: "600"
owner: "{{ jenkins_user }}"
- name: add credentials
include: execute_jenkins_cli.yaml
vars:
jenkins_command_string: "groovy {{ jenkins_credentials_script }}"
- name: clean up
file: name={{ jenkins_credentials_root }} state=absent
# Upload seed job
- name: upload job file
template: src=seed_job_template.xml dest=/tmp/{{ jenkins_seed_job.name }} mode="600"
- name: check if job is present
include: execute_jenkins_cli.yaml
vars:
jenkins_command_string: "get-job {{ jenkins_seed_job.name }}"
jenkins_ignore_cli_errors: yes
- set_fact:
get_job_output: "{{ jenkins_command_output }}"
# Upload seed job to Jenkins
- name: Create seed job if absent
include: execute_jenkins_cli.yaml
vars:
jenkins_command_string: "create-job {{ jenkins_seed_job.name }}"
jenkins_command_prefix: "cat /tmp/{{ jenkins_seed_job.name }} | "
when: get_job_output.rc != 0
- name: update seed job
include: execute_jenkins_cli.yaml
vars:
jenkins_command_string: "update-job {{ jenkins_seed_job.name }}"
jenkins_command_prefix: "cat /tmp/{{ jenkins_seed_job.name }} | "
when: get_job_output.rc == 0
# Build the seed job
- name: Build the seed job
include: execute_jenkins_cli.yaml
vars:
jenkins_command_string: "build {{ jenkins_seed_job.name }} -s"
/**
* This script can be run via the Jenkins CLI as follows:
*
* java -jar /var/jenkins/war/WEB-INF/jenkins-cli.jar -s http://localhost:8080 groovy addCredentials.groovy
*
* For a given json file, this script will create a set of credentials.
* The script can be run safely multiple times and it will update each changed credential
* (deleting credentials is not currently supported).
*
* This is useful in conjunction with the job-dsl to bootstrap a barebone Jenkins instance.
*
* This script will currently fail if the plugins it requires have not been installed:
*
* credentials-plugin
* credentials-ssh-plugin
*/
import com.cloudbees.plugins.credentials.Credentials
import com.cloudbees.plugins.credentials.CredentialsScope
import com.cloudbees.plugins.credentials.common.IdCredentials
import com.cloudbees.plugins.credentials.domains.Domain
import hudson.model.*
import com.cloudbees.plugins.credentials.SystemCredentialsProvider
import com.cloudbees.plugins.credentials.impl.UsernamePasswordCredentialsImpl
import com.cloudbees.jenkins.plugins.sshcredentials.impl.BasicSSHUserPrivateKey
import groovy.json.JsonSlurper;
boolean addUsernamePassword(scope, id, username, password, description) {
provider = SystemCredentialsProvider.getInstance()
provider.getCredentials().add(new UsernamePasswordCredentialsImpl(scope, id, description, username, password))
provider.save()
return true
}
boolean addSSHUserPrivateKey(scope, id, username, privateKey, passphrase, description) {
provider = SystemCredentialsProvider.getInstance()
source = new BasicSSHUserPrivateKey.DirectEntryPrivateKeySource(privateKey)
provider.getCredentials().add(new BasicSSHUserPrivateKey(scope, id, username, source, passphrase, description))
provider.save()
return true
}
def jsonFile = new File("{{ jenkins_credentials_file_dest }}");
if (!jsonFile.exists()){
throw new RuntimeException("Credentials file does not exist on remote host");
}
def jsonSlurper = new JsonSlurper()
def credentialList = jsonSlurper.parse(new FileReader(jsonFile))
credentialList.each { credential ->
if (credential.scope != "GLOBAL"){
throw new RuntimeException("Sorry for now only global scope is supported");
}
scope = CredentialsScope.valueOf(credential.scope)
def provider = SystemCredentialsProvider.getInstance();
def toRemove = [];
for (Credentials current_credentials: provider.getCredentials()){
if (current_credentials instanceof IdCredentials){
if (current_credentials.getId() == credential.id){
toRemove.add(current_credentials);
}
}
}
toRemove.each {curr ->provider.getCredentials().remove(curr)};
if (credential.type == "username-password") {
addUsernamePassword(scope, credential.id, credential.username, credential.password, credential.description)
}
if (credential.type == "ssh-private-key") {
if (credential.passphrase != null && credential.passphrase.trim().length() == 0){
credential.passphrase = null;
}
addSSHUserPrivateKey(scope, credential.id, credential.username, credential.privatekey, credential.passphrase, credential.description)
}
}
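The removal loop above is what makes the script safe to re-run: any existing credential with the same id is dropped before the updated one is added. The same idea in Python terms (a sketch of the logic only, not the Jenkins credentials API):

```python
def upsert_credential(store, credential):
    # Drop any existing credential with the same id, then append the
    # new one -- re-running with the same input leaves exactly one copy.
    kept = [c for c in store if c.get("id") != credential["id"]]
    kept.append(credential)
    return kept

store = [{"id": "github-deploy-key", "username": "old-user"}]
store = upsert_credential(store, {"id": "github-deploy-key", "username": "git"})
store = upsert_credential(store, {"id": "github-deploy-key", "username": "git"})
print(len(store))
```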
{{ JENKINS_ANALYTICS_CREDENTIALS|to_json }}
\ No newline at end of file
{{ jenkins_auth_realm.plain_password }}
\ No newline at end of file
<?xml version='1.0' encoding='UTF-8'?>
<hudson>
<disabledAdministrativeMonitors/>
<version>1.638</version>
<numExecutors>{{ JENKINS_ANALYTICS_CONCURRENT_JOBS_COUNT }}</numExecutors>
<mode>NORMAL</mode>
<useSecurity>true</useSecurity>
{% if jenkins_auth_realm.name == "unix" %}
<authorizationStrategy class="hudson.security.GlobalMatrixAuthorizationStrategy">
<permission>com.cloudbees.plugins.credentials.CredentialsProvider.Create:jenkins</permission>
<permission>com.cloudbees.plugins.credentials.CredentialsProvider.Delete:jenkins</permission>
<permission>com.cloudbees.plugins.credentials.CredentialsProvider.ManageDomains:jenkins</permission>
<permission>com.cloudbees.plugins.credentials.CredentialsProvider.Update:jenkins</permission>
<permission>com.cloudbees.plugins.credentials.CredentialsProvider.View:jenkins</permission>
<permission>hudson.model.Computer.Build:jenkins</permission>
<permission>hudson.model.Computer.Configure:jenkins</permission>
<permission>hudson.model.Computer.Connect:jenkins</permission>
<permission>hudson.model.Computer.Create:jenkins</permission>
<permission>hudson.model.Computer.Delete:jenkins</permission>
<permission>hudson.model.Computer.Disconnect:jenkins</permission>
<permission>hudson.model.Hudson.Administer:jenkins</permission>
<permission>hudson.model.Hudson.ConfigureUpdateCenter:jenkins</permission>
<permission>hudson.model.Hudson.Read:jenkins</permission>
<permission>hudson.model.Hudson.RunScripts:jenkins</permission>
<permission>hudson.model.Hudson.UploadPlugins:jenkins</permission>
<permission>hudson.model.Item.Build:jenkins</permission>
<permission>hudson.model.Item.Cancel:jenkins</permission>
<permission>hudson.model.Item.Configure:jenkins</permission>
<permission>hudson.model.Item.Create:jenkins</permission>
<permission>hudson.model.Item.Delete:jenkins</permission>
<permission>hudson.model.Item.Discover:anonymous</permission>
<permission>hudson.model.Item.Discover:jenkins</permission>
<permission>hudson.model.Item.Move:jenkins</permission>
<permission>hudson.model.Item.Read:anonymous</permission>
<permission>hudson.model.Item.Read:jenkins</permission>
<permission>hudson.model.Item.Workspace:jenkins</permission>
<permission>hudson.model.Run.Delete:jenkins</permission>
<permission>hudson.model.Run.Update:jenkins</permission>
<permission>hudson.model.View.Configure:jenkins</permission>
<permission>hudson.model.View.Create:jenkins</permission>
<permission>hudson.model.View.Delete:jenkins</permission>
<permission>hudson.model.View.Read:jenkins</permission>
<permission>hudson.scm.SCM.Tag:jenkins</permission>
</authorizationStrategy>
<securityRealm class="hudson.security.PAMSecurityRealm" plugin="pam-auth@1.2">
<serviceName>{{ jenkins_auth_realm.service }}</serviceName>
</securityRealm>
{% endif %}
<disableRememberMe>false</disableRememberMe>
<projectNamingStrategy class="jenkins.model.ProjectNamingStrategy$DefaultProjectNamingStrategy"/>
<workspaceDir>${JENKINS_HOME}/workspace/${ITEM_FULLNAME}</workspaceDir>
<buildsDir>${ITEM_ROOTDIR}/builds</buildsDir>
<markupFormatter class="hudson.markup.EscapedMarkupFormatter"/>
<jdks/>
<viewsTabBar class="hudson.views.DefaultViewsTabBar"/>
<myViewsTabBar class="hudson.views.DefaultMyViewsTabBar"/>
<clouds/>
<quietPeriod>5</quietPeriod>
<scmCheckoutRetryCount>0</scmCheckoutRetryCount>
<views>
<hudson.model.AllView>
<owner class="hudson" reference="../../.."/>
<name>All</name>
<filterExecutors>false</filterExecutors>
<filterQueue>false</filterQueue>
<properties class="hudson.model.View$PropertyList"/>
</hudson.model.AllView>
</views>
<primaryView>All</primaryView>
<slaveAgentPort>0</slaveAgentPort>
<label>312312321</label>
<nodeProperties/>
<globalNodeProperties/>
</hudson>
<?xml version='1.0' encoding='UTF-8'?>
<project>
<actions/>
<description>
Seed job autogenerated by ansible, it will be overridden.
</description>
<keepDependencies>false</keepDependencies>
<properties>
<jenkins.advancedqueue.AdvancedQueueSorterJobProperty plugin="PrioritySorter@2.9">
<useJobPriority>false</useJobPriority>
<priority>-1</priority>
</jenkins.advancedqueue.AdvancedQueueSorterJobProperty>
</properties>
<scm class="hudson.plugins.git.GitSCM" plugin="git@2.4.0">
<configVersion>2</configVersion>
<userRemoteConfigs>
<hudson.plugins.git.UserRemoteConfig>
<url>{{ jenkins_seed_job.scm.url}}</url>
{% if jenkins_seed_job.scm.credential_id is defined and jenkins_seed_job.scm.credential_id %}
<credentialsId>{{ jenkins_seed_job.scm.credential_id }}</credentialsId>
{% endif %}
</hudson.plugins.git.UserRemoteConfig>
</userRemoteConfigs>
<branches>
<hudson.plugins.git.BranchSpec>
<name>master</name>
</hudson.plugins.git.BranchSpec>
</branches>
<doGenerateSubmoduleConfigurations>false</doGenerateSubmoduleConfigurations>
<browser class="hudson.plugins.git.browser.AssemblaWeb">
<url></url>
</browser>
<submoduleCfg class="list"/>
<extensions/>
</scm>
<canRoam>true</canRoam>
<disabled>false</disabled>
<blockBuildWhenDownstreamBuilding>false</blockBuildWhenDownstreamBuilding>
<blockBuildWhenUpstreamBuilding>false</blockBuildWhenUpstreamBuilding>
<triggers>
<hudson.triggers.TimerTrigger>
<spec>{{ jenkins_seed_job.time_trigger }}</spec>
</hudson.triggers.TimerTrigger>
</triggers>
<concurrentBuild>false</concurrentBuild>
<builders>
<hudson.plugins.gradle.Gradle plugin="gradle@1.24">
<description></description>
<switches></switches>
<tasks>clean test</tasks>
<rootBuildScriptDir></rootBuildScriptDir>
<buildFile></buildFile>
<gradleName>(x)</gradleName>
<useWrapper>true</useWrapper>
<makeExecutable>false</makeExecutable>
<fromRootBuildScriptDir>true</fromRootBuildScriptDir>
<useWorkspaceAsHome>false</useWorkspaceAsHome>
</hudson.plugins.gradle.Gradle>
<javaposse.jobdsl.plugin.ExecuteDslScripts plugin="job-dsl@1.43">
<targets>{{ jenkins_seed_job.scm.target_jobs }}</targets>
<usingScriptText>false</usingScriptText>
<ignoreExisting>false</ignoreExisting>
<removedJobAction>{{ jenkins_seed_job.removed_job_action }}</removedJobAction>
<removedViewAction>{{ jenkins_seed_job.removed_view_action }}</removedViewAction>
<lookupStrategy>JENKINS_ROOT</lookupStrategy>
<additionalClasspath>{{ jenkins_seed_job.scm.additional_classpath }}</additionalClasspath>
</javaposse.jobdsl.plugin.ExecuteDslScripts>
</builders>
<publishers/>
<buildWrappers/>
</project>
......@@ -19,7 +19,9 @@ jenkins_plugins:
- { name: "build-name-setter", version: "1.3" }
- { name: "build-pipeline-plugin", version: "1.4" }
- { name: "build-timeout", version: "1.14.1" }
- { name: "build-user-vars-plugin", version: "1.5" }
- { name: "buildgraph-view", version: "1.1.1" }
- { name: "cloudbees-folder", version: "5.2.1" }
- { name: "cobertura", version: "1.9.6" }
- { name: "copyartifact", version: "1.32.1" }
- { name: "copy-to-slave", version: "1.4.3" }
......@@ -34,15 +36,19 @@ jenkins_plugins:
- { name: "github", version: "1.14.0" }
- { name: "github-api", version: "1.69" }
- { name: "github-oauth", version: "0.20" }
- { name: "github-sqs-plugin", version: "1.6" }
- { name: "github-sqs-plugin", version: "1.5" }
- { name: "gradle", version: "1.24" }
- { name: "grails", version: "1.7" }
- { name: "groovy-postbuild", version: "2.2" }
- { name: "htmlpublisher", version: "1.3" }
- { name: "javadoc", version: "1.3" }
- { name: "jobConfigHistory", version: "2.10" }
- { name: "job-dsl", version: "1.43" }
- { name: "junit", version: "1.3" }
- { name: "ldap", version: "1.11" }
- { name: "mailer", version: "1.16" }
- { name: "mapdb-api", version: "1.0.6.0" }
- { name: "mask-passwords", version: "2.8" }
- { name: "matrix-auth", version: "1.2" }
- { name: "matrix-project", version: "1.4" }
- { name: "monitoring", version: "1.56.0" }
......
......@@ -99,7 +99,7 @@
path: "{{ jenkins_home }}/plugins/{{ item.item.name }}.hpi"
owner: "{{ jenkins_user }}"
group: "{{ jenkins_group }}"
mode: 644
mode: "644"
with_items: jenkins_plugin_downloads.results
when: item.changed
notify:
......@@ -110,7 +110,7 @@
# upstream, we may be able to use the regular plugin install process.
# Until then, we compile and install the forks ourselves.
- name: checkout custom plugin repo
git: >
git_2_0_1: >
repo={{ item.repo_url }} dest=/tmp/{{ item.repo_name }} version={{ item.version }}
accept_hostkey=yes
with_items: jenkins_custom_plugins
......@@ -131,7 +131,7 @@
- name: set custom plugin permissions
file: path={{ jenkins_home }}/plugins/{{ item.item.package }}
owner={{ jenkins_user }} group={{ jenkins_group }} mode=700
owner={{ jenkins_user }} group={{ jenkins_group }} mode="700"
with_items: jenkins_custom_plugins_checkout.results
when: item.changed
......
......@@ -16,6 +16,10 @@ jenkins_debian_pkgs:
# packer direct download URL
packer_url: "https://releases.hashicorp.com/packer/0.8.6/packer_0.8.6_linux_amd64.zip"
# custom firefox
custom_firefox_version: 42.0
custom_firefox_url: "https://ftp.mozilla.org/pub/firefox/releases/{{ custom_firefox_version }}/linux-x86_64/en-US/firefox-{{ custom_firefox_version }}.tar.bz2"
# Pip-accel itself and other workarounds that need to be installed with pip
pip_accel_reqs:
# Install Shapely with pip as it does not install cleanly
......
......@@ -6,7 +6,7 @@
# refers to the --depth-setting of git clone. A value of 1
# will truncate all history prior to the last revision.
- name: Create shallow clone of edx-platform
git: >
git_2_0_1: >
repo=https://github.com/edx/edx-platform.git
dest={{ jenkins_home }}/shallow-clone
version={{ jenkins_edx_platform_version }}
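The `--depth` behavior described in the comment above can be demonstrated with plain git, independent of the Ansible task (a throwaway sketch using temp directories; all paths here are hypothetical):

```shell
# Sketch: a clone with --depth 1 keeps only the last revision's history.
set -e
tmp=$(mktemp -d)
git init -q "$tmp/origin-repo"
git -C "$tmp/origin-repo" -c user.email=ci@example.com -c user.name=ci \
    commit -q --allow-empty -m "first"
git -C "$tmp/origin-repo" -c user.email=ci@example.com -c user.name=ci \
    commit -q --allow-empty -m "second"
# file:// forces the git transport so --depth is honored for a local repo
git clone -q --depth 1 "file://$tmp/origin-repo" "$tmp/shallow-clone"
git -C "$tmp/shallow-clone" rev-list --count HEAD   # prints 1
rm -rf "$tmp"
```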
......@@ -74,7 +74,23 @@
chdir={{ jenkins_home }}
sudo_user: "{{ jenkins_user }}"
# Remove the shallow-clone directory now that we archive
# Remove the shallow-clone directory now that we are
# done with it
- name: Remove shallow-clone
file: path={{ jenkins_home }}/shallow-clone state=absent
# Although firefox is installed through the browsers role, install
# a newer copy under the jenkins home directory. This will allow
# platform pull requests to use a custom firefox path to a different
# version
- name: Install custom firefox to jenkins home
get_url:
url: "{{ custom_firefox_url }}"
dest: "{{ jenkins_home }}/firefox-{{ custom_firefox_version }}.tar.bz2"
- name: unpack custom firefox version
unarchive:
src: "{{ jenkins_home }}/firefox-{{ custom_firefox_version }}.tar.bz2"
dest: "{{ jenkins_home }}"
creates: "{{ jenkins_home }}/firefox"
copy: no
# Courtesy of Gregory Nicholas
_subcommand_opts()
{
local awkfile command cur usage
command=$1
cur=${COMP_WORDS[COMP_CWORD]}
awkfile=/tmp/paver-option-awkscript-$$.awk
echo '
BEGIN {
opts = "";
}
{
for (i = 1; i <= NF; i = i + 1) {
# Match short options (-a, -S, -3)
# or long options (--long-option, --another_option)
# in output from paver help [subcommand]
if ($i ~ /^(-[A-Za-z0-9]|--[A-Za-z][A-Za-z0-9_-]*)/) {
opt = $i;
# remove trailing , and = characters.
match(opt, "[,=]");
if (RSTART > 0) {
opt = substr(opt, 0, RSTART);
}
opts = opts " " opt;
}
}
}
END {
print opts
}' > $awkfile
usage=`paver help $command`
options=`echo "$usage"|awk -f $awkfile`
COMPREPLY=( $(compgen -W "$options" -- "$cur") )
}
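For clarity, the option extraction performed by the embedded awk script above can be sketched in Python (a hypothetical re-implementation for illustration only, not part of this repo):

```python
import re

# Match short options (-a) or long options (--long-option, --another_option)
# at the start of a word, mirroring the awk regex above.
OPT_RE = re.compile(r'^(-[A-Za-z0-9]|--[A-Za-z][A-Za-z0-9_-]*)')

def extract_options(usage_text):
    """Collect option names from `paver help <cmd>` output,
    trimming trailing ',' or '=' just as the awk script does."""
    opts = []
    for word in usage_text.split():
        if OPT_RE.match(word):
            cut = re.search(r'[,=]', word)
            opts.append(word[:cut.start()] if cut else word)
    return opts

print(extract_options("-h, --help  --test_dir=DIR"))
# ['-h', '--help', '--test_dir']
```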
_paver()
{
local cur prev
COMPREPLY=()
# Variable to hold the current word
cur="${COMP_WORDS[COMP_CWORD]}"
prev="${COMP_WORDS[COMP_CWORD - 1]}"
# Build a list of the available tasks from: `paver --help --quiet`
local cmds=$(paver -hq | awk '/^ ([a-zA-Z][a-zA-Z0-9_]+)/ {print $1}')
subcmd="${COMP_WORDS[1]}"
# Generate possible matches and store them in the
# array variable COMPREPLY
if [[ -n $subcmd ]]
then
case $subcmd in
test_system)
_test_system_args
if [[ -n $COMPREPLY ]]
then
return 0
fi
;;
test_bokchoy)
_test_bokchoy_args
if [[ -n $COMPREPLY ]]
then
return 0
fi
;;
*)
;;
esac
if [[ ${#COMP_WORDS[*]} == 3 ]]
then
_subcommand_opts $subcmd
return 0
else
if [[ "$cur" == -* ]]
then
_subcommand_opts $subcmd
return 0
else
COMPREPLY=( $(compgen -o nospace -- "$cur") )
fi
fi
fi
if [[ ${#COMP_WORDS[*]} == 2 ]]
then
COMPREPLY=( $(compgen -W "${cmds}" -- "$cur") )
fi
}
_test_system_args()
{
local cur prev
cur="${COMP_WORDS[COMP_CWORD]}"
prev="${COMP_WORDS[COMP_CWORD - 1]}"
case "$prev" in
-s|--system)
COMPREPLY=( $(compgen -W "lms cms" -- "$cur") )
return 0
;;
*)
;;
esac
}
_test_bokchoy_args()
{
local bokchoy_tests cur prev
cur="${COMP_WORDS[COMP_CWORD]}"
prev="${COMP_WORDS[COMP_CWORD - 1]}"
case "$prev" in
-d|--test_dir)
bokchoy_tests=`find common/test/acceptance -name \*.py| sed 's:common/test/acceptance/::'`
COMPREPLY=( $(compgen -o filenames -W "$bokchoy_tests" -- $cur) )
return 0
;;
-t|--test_spec)
bokchoy_tests=`find common/test/acceptance/tests -name \*.py| sed 's:common/test/acceptance/::'`
COMPREPLY=( $(compgen -o filenames -W "$bokchoy_tests" -- $cur) )
return 0
;;
*)
;;
esac
}
# Assign the auto-completion function for our command.
complete -F _paver -o default paver
......@@ -60,9 +60,11 @@
# Create scripts to add paver autocomplete
- name: add paver autocomplete
template:
src=paver_autocomplete dest={{ item.home }}/.paver_autocomplete
owner={{ item.user }} mode=755
copy:
src: paver_autocomplete
dest: "{{ item.home }}/.paver_autocomplete"
owner: "{{ item.user }}"
mode: 0755
with_items: localdev_accounts
when: item.user != 'None'
ignore_errors: yes
......
# Courtesy of Gregory Nicholas
_paver()
{
local cur
COMPREPLY=()
# Variable to hold the current word
cur="${COMP_WORDS[COMP_CWORD]}"
# Build a list of the available tasks from: `paver --help --quiet`
local cmds=$(paver -hq | awk '/^ ([a-zA-Z][a-zA-Z0-9_]+)/ {print $1}')
# Generate possible matches and store them in the
# array variable COMPREPLY
COMPREPLY=( $(compgen -W "${cmds}" -- "$cur") )
}
# Assign the auto-completion function for our command.
complete -F _paver paver
\ No newline at end of file
......@@ -56,3 +56,16 @@ locust_debian_pkgs:
- gfortran
locust_redhat_pkgs: []
# ulimit variables
ulimit_config:
- domain: '*'
type: soft
item: nofile
value: 4096
- domain: '*'
type: hard
item: nofile
value: 4096
ulimit_conf_file: "/etc/security/limits.conf"
......@@ -72,3 +72,9 @@
name={{ locust_service_name }}
when: not disable_edx_services
sudo_user: "{{ supervisor_service_user }}"
- name: increase the system file descriptor limit (a session logout and login is required for the change to take effect)
lineinfile:
dest: "{{ ulimit_conf_file }}"
line: "{{ item.domain }} {{ item.type }} {{ item.item }} {{ item.value }}"
with_items: "{{ ulimit_config }}"
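Given the defaults above, each `ulimit_config` entry is rendered as `<domain> <type> <item> <value>` on its own line, so the stock configuration appends lines like these to `/etc/security/limits.conf` (a sketch of the rendered output, assuming the default values):

```
* soft nofile 4096
* hard nofile 4096
```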
......@@ -58,7 +58,7 @@ server {
{% endif %}
location ~ ^/static/(?P<file>.*) {
root {{ COMMON_DATA_DIR }}/{{ programs_service_name }};
root {{ PROGRAMS_DATA_DIR }};
try_files /staticfiles/$file =404;
# Request that the browser use SSL for these connections. Repeated here
......@@ -71,6 +71,13 @@ server {
add_header Cache-Control "public, max-age=3600";
}
location ~ ^/media/(?P<file>.*) {
root {{ PROGRAMS_DATA_DIR }};
try_files /media/$file =404;
# django / app always assigns new filenames so these can be cached forever.
add_header Cache-Control "public, max-age=31536000";
}
location / {
try_files $uri @proxy_to_app;
}
......
---
- name: checkout code
git:
git_2_0_1:
dest={{ NOTIFIER_CODE_DIR }} repo={{ NOTIFIER_SOURCE_REPO }}
version={{ NOTIFIER_VERSION }}
accept_hostkey=yes
......@@ -31,7 +31,7 @@
when: NOTIFIER_GIT_IDENTITY != ""
- name: checkout theme
git: >
git_2_0_1: >
dest={{ NOTIFIER_CODE_DIR }}/{{ NOTIFIER_THEME_NAME }}
repo={{ NOTIFIER_THEME_REPO }}
version={{ NOTIFIER_THEME_VERSION }}
......
......@@ -9,3 +9,6 @@ oraclejdk_arch: "x64"
oraclejdk_file: "jdk-{{ oraclejdk_version }}-{{ oraclejdk_platform }}-{{ oraclejdk_arch }}.tar.gz"
oraclejdk_url: "http://download.oracle.com/otn-pub/java/jdk/{{ oraclejdk_version }}-{{ oraclejdk_build }}/{{ oraclejdk_file }}"
oraclejdk_link: "/usr/lib/jvm/java-8-oracle"
oraclejdk_debian_pkgs:
- curl
......@@ -12,6 +12,10 @@
# - common
# - oraclejdk
- name: install debian needed pkgs
apt: pkg={{ item }}
with_items: oraclejdk_debian_pkgs
- name: download Oracle Java
shell: >
curl -b gpw_e24=http%3A%2F%2Fwww.oracle.com -b oraclelicense=accept-securebackup-cookie -O -L {{ oraclejdk_url }}
......
......@@ -55,6 +55,43 @@ PROGRAMS_PLATFORM_NAME: 'Your Platform Name Here'
# See: https://github.com/ottoyiu/django-cors-headers/.
PROGRAMS_CORS_ORIGIN_WHITELIST: []
PROGRAMS_DATA_DIR: '{{ COMMON_DATA_DIR }}/{{ programs_service_name }}'
PROGRAMS_MEDIA_ROOT: '{{ PROGRAMS_DATA_DIR }}/media'
PROGRAMS_MEDIA_URL: '/media/'
# Example settings to use Amazon S3 as a storage backend for user-uploaded files
# https://django-storages.readthedocs.org/en/latest/backends/amazon-S3.html#amazon-s3
#
# This is only for user-uploaded files and does not cover static assets that ship
# with the code.
#
# Note, AWS_S3_CUSTOM_DOMAIN is required, otherwise boto will generate non-working
# querystring URLs for assets (see https://github.com/boto/boto/issues/1477)
#
# Note, set AWS_S3_CUSTOM_DOMAIN to the cloudfront domain instead, when that is in use.
#
# PROGRAMS_BUCKET: mybucket
# programs_s3_domain: s3.amazonaws.com
# PROGRAMS_MEDIA_ROOT: 'media' # NOTE use '$source_ip/media' for an edx sandbox
#
# PROGRAMS_MEDIA_STORAGE_BACKEND:
# DEFAULT_FILE_STORAGE: 'programs.apps.core.s3utils.MediaS3BotoStorage'
# MEDIA_ROOT: '{{ PROGRAMS_MEDIA_ROOT }}'
# MEDIA_URL: 'https://{{ PROGRAMS_BUCKET }}.{{ programs_s3_domain }}/{{ PROGRAMS_MEDIA_ROOT }}/'
# AWS_STORAGE_BUCKET_NAME: '{{ PROGRAMS_BUCKET }}'
# AWS_S3_CUSTOM_DOMAIN: '{{ PROGRAMS_BUCKET }}.{{ programs_s3_domain }}'
# AWS_QUERYSTRING_AUTH: false
# AWS_QUERYSTRING_EXPIRE: false
# AWS_DEFAULT_ACL: ''
# AWS_HEADERS:
# Cache-Control: max-age=31536000
#
#
PROGRAMS_MEDIA_STORAGE_BACKEND:
DEFAULT_FILE_STORAGE: 'django.core.files.storage.FileSystemStorage'
MEDIA_ROOT: '{{ PROGRAMS_MEDIA_ROOT }}'
MEDIA_URL: '{{ PROGRAMS_MEDIA_URL }}'
PROGRAMS_SERVICE_CONFIG:
SECRET_KEY: '{{ PROGRAMS_SECRET_KEY }}'
TIME_ZONE: '{{ PROGRAMS_TIME_ZONE }}'
......@@ -66,7 +103,7 @@ PROGRAMS_SERVICE_CONFIG:
SOCIAL_AUTH_EDX_OIDC_URL_ROOT: '{{ PROGRAMS_SOCIAL_AUTH_EDX_OIDC_URL_ROOT }}'
SOCIAL_AUTH_REDIRECT_IS_HTTPS: '{{ PROGRAMS_SOCIAL_AUTH_REDIRECT_IS_HTTPS }}'
STATIC_ROOT: "{{ COMMON_DATA_DIR }}/{{ programs_service_name }}/staticfiles"
STATIC_ROOT: '{{ PROGRAMS_DATA_DIR }}/staticfiles'
# db config
DATABASE_OPTIONS:
connect_timeout: 10
......@@ -76,7 +113,10 @@ PROGRAMS_SERVICE_CONFIG:
CORS_ORIGIN_WHITELIST: '{{ PROGRAMS_CORS_ORIGIN_WHITELIST }}'
PUBLIC_URL_ROOT: '{{ PROGRAMS_URL_ROOT }}'
ORGANIZATIONS_API_URL_ROOT: '{{ PROGRAMS_ORGANIZATIONS_API_URL_ROOT }}'
MEDIA_STORAGE_BACKEND: '{{ PROGRAMS_MEDIA_STORAGE_BACKEND }}'
PROGRAMS_REPOS:
- PROTOCOL: "{{ COMMON_GIT_PROTOCOL }}"
......@@ -130,6 +170,7 @@ programs_requirements:
#
programs_debian_pkgs:
- libjpeg-dev
- libmysqlclient-dev
- libssl-dev
......
......@@ -88,6 +88,14 @@
- "compress"
when: not devstack
# NOTE this isn't used or needed when s3 is used for PROGRAMS_MEDIA_STORAGE_BACKEND
- name: create programs media dir
file: >
path="{{ item }}" state=directory mode=0775
owner="{{ programs_user }}" group="{{ common_web_group }}"
with_items:
- "{{ PROGRAMS_MEDIA_ROOT }}"
- name: write out the supervisor wrapper
template:
src: "edx/app/programs/programs.sh.j2"
......
......@@ -19,11 +19,11 @@ RABBIT_USERS:
- name: 'celery'
password: 'celery'
RABBITMQ_CLUSTERED: !!null
RABBITMQ_VHOSTS:
- '/'
RABBITMQ_CLUSTERED_HOSTS: []
# Internal role variables below this line
# option to force deletion of the mnesia dir
......@@ -56,7 +56,5 @@ rabbitmq_auth_config:
erlang_cookie: "{{ RABBIT_ERLANG_COOKIE }}"
admins: "{{ RABBIT_USERS }}"
rabbitmq_clustered_hosts: []
rabbitmq_plugins:
- rabbitmq_management
......@@ -134,7 +134,7 @@
- name: make queues mirrored
shell: >
/usr/sbin/rabbitmqctl -p {{ item }} set_policy HA "" '{"ha-mode":"all","ha-sync-mode":"automatic"}'
when: RABBITMQ_CLUSTERED or rabbitmq_clustered_hosts|length > 1
when: RABBITMQ_CLUSTERED_HOSTS|length > 1
with_items: RABBITMQ_VHOSTS
tags:
- ha
......
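With the default `RABBITMQ_VHOSTS` of `['/']`, the mirrored-queue task above expands to roughly the following command (a sketch; it assumes a running broker and a cluster of two or more hosts):

```
# Mirror every queue on the '/' vhost across all cluster nodes,
# with automatic synchronisation of new mirrors.
/usr/sbin/rabbitmqctl -p / set_policy HA "" '{"ha-mode":"all","ha-sync-mode":"automatic"}'
```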
......@@ -2,19 +2,8 @@
[{rabbit, [
{log_levels, [{connection, info}]},
{% if RABBITMQ_CLUSTERED -%}
{%- set hosts= [] -%}
{%- for host in hostvars.keys() -%}
{% do hosts.append("rabbit@ip-" + host.replace('.','-')) %}
{%- endfor %}
{cluster_nodes, {['{{ hosts|join("\',\'") }}'], disc}}
{%- else -%}
{# If rabbitmq_clustered_hosts is set, use that instead assuming an aws stack.
{#
Note: these names should include the node name prefix, e.g. 'rabbit@hostname'
#}
{cluster_nodes, {['{{ rabbitmq_clustered_hosts|join("\',\'") }}'], disc}}
{%- endif %}
{cluster_nodes, {['{{ RABBITMQ_CLUSTERED_HOSTS|join("\',\'") }}'], disc}}
]}].
......@@ -59,7 +59,7 @@
- install:base
- name: update rbenv repo
git: >
git_2_0_1: >
repo=https://github.com/sstephenson/rbenv.git
dest={{ rbenv_dir }}/.rbenv version={{ rbenv_version }}
accept_hostkey=yes
......
......@@ -19,6 +19,7 @@ MIGRATION_COMMANDS = {
'insights': ". {env_file}; {python} {code_dir}/manage.py migrate --noinput --list",
'analytics_api': ". {env_file}; {python} {code_dir}/manage.py migrate --noinput --list",
'credentials': ". {env_file}; {python} {code_dir}/manage.py migrate --noinput --list",
'discovery': ". {env_file}; {python} {code_dir}/manage.py migrate --noinput --list",
}
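Each `MIGRATION_COMMANDS` entry is a `str.format` template that appears to be filled from the per-service argument dict assembled in `__main__` (assumed mechanics; the paths below are hypothetical examples, not values from this repo):

```python
# Sketch: filling a MIGRATION_COMMANDS template with a service's config
# yields the shell command used to check for unapplied migrations.
MIGRATION_COMMANDS = {
    'discovery': ". {env_file}; {python} {code_dir}/manage.py migrate --noinput --list",
}

service_config = {
    'python': '/edx/bin/python.discovery',           # hypothetical path
    'env_file': '/edx/app/discovery/discovery_env',  # hypothetical path
    'code_dir': '/edx/app/discovery/discovery',      # hypothetical path
}

cmd = MIGRATION_COMMANDS['discovery'].format(**service_config)
print(cmd)
# . /edx/app/discovery/discovery_env; /edx/bin/python.discovery /edx/app/discovery/discovery/manage.py migrate --noinput --list
```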
HIPCHAT_USER = "PreSupervisor"
......@@ -91,7 +92,7 @@ if __name__ == '__main__':
ecom_migration_args.add_argument("--ecommerce-env",
help="Location of the ecommerce environment file.")
ecom_migration_args.add_argument("--ecommerce-code-dir",
help="Location to of the ecommerce code.")
help="Location of the ecommerce code.")
programs_migration_args = parser.add_argument_group("programs_migrations",
"Args for running programs migration checks.")
......@@ -100,7 +101,7 @@ if __name__ == '__main__':
programs_migration_args.add_argument("--programs-env",
help="Location of the programs environment file.")
programs_migration_args.add_argument("--programs-code-dir",
help="Location to of the programs code.")
help="Location of the programs code.")
credentials_migration_args = parser.add_argument_group("credentials_migrations",
"Args for running credentials migration checks.")
......@@ -109,7 +110,16 @@ if __name__ == '__main__':
credentials_migration_args.add_argument("--credentials-env",
help="Location of the credentials environment file.")
credentials_migration_args.add_argument("--credentials-code-dir",
help="Location to of the credentials code.")
help="Location of the credentials code.")
discovery_migration_args = parser.add_argument_group("discovery_migrations",
"Args for running discovery migration checks.")
discovery_migration_args.add_argument("--discovery-python",
help="Path to python to use for executing migration check.")
discovery_migration_args.add_argument("--discovery-env",
help="Location of the discovery environment file.")
discovery_migration_args.add_argument("--discovery-code-dir",
help="Location of the discovery code.")
insights_migration_args = parser.add_argument_group("insights_migrations",
"Args for running insights migration checks.")
......@@ -118,7 +128,7 @@ if __name__ == '__main__':
insights_migration_args.add_argument("--insights-env",
help="Location of the insights environment file.")
insights_migration_args.add_argument("--insights-code-dir",
help="Location to of the insights code.")
help="Location of the insights code.")
analyticsapi_migration_args = parser.add_argument_group("analytics_api_migrations",
"Args for running analytics_api migration checks.")
......@@ -127,7 +137,7 @@ if __name__ == '__main__':
analyticsapi_migration_args.add_argument("--analytics-api-env",
help="Location of the analytics_api environment file.")
analyticsapi_migration_args.add_argument("--analytics-api-code-dir",
help="Location to of the analytics_api code.")
help="Location of the analytics_api code.")
hipchat_args = parser.add_argument_group("hipchat",
"Args for hipchat notification.")
......@@ -233,6 +243,7 @@ if __name__ == '__main__':
"ecommerce": {'python': args.ecommerce_python, 'env_file': args.ecommerce_env, 'code_dir': args.ecommerce_code_dir},
"programs": {'python': args.programs_python, 'env_file': args.programs_env, 'code_dir': args.programs_code_dir},
"credentials": {'python': args.credentials_python, 'env_file': args.credentials_env, 'code_dir': args.credentials_code_dir},
"discovery": {'python': args.discovery_python, 'env_file': args.discovery_env, 'code_dir': args.discovery_code_dir},
"insights": {'python': args.insights_python, 'env_file': args.insights_env, 'code_dir': args.insights_code_dir},
"analytics_api": {'python': args.analytics_api_python, 'env_file': args.analytics_api_env, 'code_dir': args.analytics_api_code_dir}
}
......
......@@ -17,4 +17,11 @@ setuid {{ supervisor_user }}
{% set credentials_command = "" %}
{% endif %}
exec {{ supervisor_venv_dir }}/bin/python {{ supervisor_app_dir }}/pre_supervisor_checks.py --available={{ supervisor_available_dir }} --enabled={{ supervisor_cfg_dir }} {% if SUPERVISOR_HIPCHAT_API_KEY is defined %}--hipchat-api-key {{ SUPERVISOR_HIPCHAT_API_KEY }} --hipchat-room {{ SUPERVISOR_HIPCHAT_ROOM }} {% endif %} {% if edxapp_code_dir is defined %}--edxapp-python {{ COMMON_BIN_DIR }}/python.edxapp --edxapp-code-dir {{ edxapp_code_dir }} --edxapp-env {{ edxapp_app_dir }}/edxapp_env{% endif %} {% if xqueue_code_dir is defined %}--xqueue-code-dir {{ xqueue_code_dir }} --xqueue-python {{ COMMON_BIN_DIR }}/python.xqueue {% endif %} {% if ecommerce_code_dir is defined %}--ecommerce-env {{ ecommerce_home }}/ecommerce_env --ecommerce-code-dir {{ ecommerce_code_dir }} --ecommerce-python {{ COMMON_BIN_DIR }}/python.ecommerce {% endif %} {% if insights_code_dir is defined %}--insights-env {{ insights_home }}/insights_env --insights-code-dir {{ insights_code_dir }} --insights-python {{ COMMON_BIN_DIR }}/python.insights {% endif %} {% if analytics_api_code_dir is defined %}--analytics-api-env {{ analytics_api_home }}/analytics_api_env --analytics-api-code-dir {{ analytics_api_code_dir }} --analytics-api-python {{ COMMON_BIN_DIR }}/python.analytics_api {% endif %} {{ programs_command }} {{ credentials_command }}
{% if discovery_code_dir is defined %}
{% set discovery_command = "--discovery-env " + discovery_home + "/discovery_env --discovery-code-dir " + discovery_code_dir + " --discovery-python " + COMMON_BIN_DIR + "/python.discovery" %}
{% else %}
{% set discovery_command = "" %}
{% endif %}
exec {{ supervisor_venv_dir }}/bin/python {{ supervisor_app_dir }}/pre_supervisor_checks.py --available={{ supervisor_available_dir }} --enabled={{ supervisor_cfg_dir }} {% if SUPERVISOR_HIPCHAT_API_KEY is defined %}--hipchat-api-key {{ SUPERVISOR_HIPCHAT_API_KEY }} --hipchat-room {{ SUPERVISOR_HIPCHAT_ROOM }} {% endif %} {% if edxapp_code_dir is defined %}--edxapp-python {{ COMMON_BIN_DIR }}/python.edxapp --edxapp-code-dir {{ edxapp_code_dir }} --edxapp-env {{ edxapp_app_dir }}/edxapp_env{% endif %} {% if xqueue_code_dir is defined %}--xqueue-code-dir {{ xqueue_code_dir }} --xqueue-python {{ COMMON_BIN_DIR }}/python.xqueue {% endif %} {% if ecommerce_code_dir is defined %}--ecommerce-env {{ ecommerce_home }}/ecommerce_env --ecommerce-code-dir {{ ecommerce_code_dir }} --ecommerce-python {{ COMMON_BIN_DIR }}/python.ecommerce {% endif %} {% if insights_code_dir is defined %}--insights-env {{ insights_home }}/insights_env --insights-code-dir {{ insights_code_dir }} --insights-python {{ COMMON_BIN_DIR }}/python.insights {% endif %} {% if analytics_api_code_dir is defined %}--analytics-api-env {{ analytics_api_home }}/analytics_api_env --analytics-api-code-dir {{ analytics_api_code_dir }} --analytics-api-python {{ COMMON_BIN_DIR }}/python.analytics_api {% endif %} {{ programs_command }} {{ discovery_command }} {{ credentials_command }}
......@@ -21,7 +21,7 @@
#
- name: Create clone of edx-platform
git: >
git_2_0_1: >
repo=https://github.com/edx/edx-platform.git
dest={{ test_build_server_repo_path }}/edx-platform-clone
version={{ test_edx_platform_version }}
......
......@@ -43,7 +43,7 @@
# Do A Checkout
- name: git checkout xqueue repo into xqueue_code_dir
git: >
git_2_0_1: >
dest={{ xqueue_code_dir }} repo={{ xqueue_source_repo }} version={{ xqueue_version }}
accept_hostkey=yes
sudo_user: "{{ xqueue_user }}"
......
......@@ -3,7 +3,7 @@
# a per queue basis.
- name: checkout grader code
git: >
git_2_0_1: >
dest={{ xqwatcher_app_dir }}/data/{{ item.COURSE }} repo={{ item.GIT_REPO }}
version={{ item.GIT_REF }}
ssh_opts="{{ xqwatcher_course_git_ssh_opts }}"
......
......@@ -19,7 +19,7 @@
- restart xserver
- name: checkout code
git: >
git_2_0_1: >
dest={{ xserver_code_dir }} repo={{ xserver_source_repo }} version={{xserver_version}}
accept_hostkey=yes
sudo_user: "{{ xserver_user }}"
......@@ -58,7 +58,7 @@
notify: restart xserver
- name: checkout grader code
git: >
git_2_0_1: >
dest={{ XSERVER_GRADER_DIR }} repo={{ XSERVER_GRADER_SOURCE }} version={{ xserver_grader_version }}
accept_hostkey=yes
environment:
......
......@@ -42,7 +42,7 @@
notify: restart xsy
- name: checkout the code
git: >
git_2_0_1: >
dest="{{ xsy_code_dir }}" repo="{{ xsy_source_repo }}"
version="{{ xsy_version }}" accept_hostkey=yes
sudo_user: "{{ xsy_user }}"
......
---
#EDXAPP_PREVIEW_LMS_BASE: preview-${deploy_host}
#EDXAPP_LMS_BASE: ${deploy_host}
#EDXAPP_CMS_BASE: studio-${deploy_host}
#EDXAPP_SITE_NAME: ${deploy_host}
#CERTS_DOWNLOAD_URL: "http://${deploy_host}:18090"
#CERTS_VERIFY_URL: "http://${deploy_host}:18090"
#edx_internal: True
#COMMON_USER_INFO:
# - name: ${github_username}
# github: true
# type: admin
#USER_CMD_PROMPT: '[$name_tag] '
#COMMON_ENABLE_NEWRELIC_APP: $enable_newrelic
#COMMON_ENABLE_DATADOG: $enable_datadog
#FORUM_NEW_RELIC_ENABLE: $enable_newrelic
#ENABLE_PERFORMANCE_COURSE: $performance_course
#ENABLE_DEMO_TEST_COURSE: $demo_test_course
#ENABLE_EDX_DEMO_COURSE: $edx_demo_course
#EDXAPP_NEWRELIC_LMS_APPNAME: sandbox-${dns_name}-edxapp-lms
#EDXAPP_NEWRELIC_CMS_APPNAME: sandbox-${dns_name}-edxapp-cms
#EDXAPP_NEWRELIC_WORKERS_APPNAME: sandbox-${dns_name}-edxapp-workers
#XQUEUE_NEWRELIC_APPNAME: sandbox-${dns_name}-xqueue
#FORUM_NEW_RELIC_APP_NAME: sandbox-${dns_name}-forums
#SANDBOX_USERNAME: $github_username
#EDXAPP_ECOMMERCE_PUBLIC_URL_ROOT: "https://ecommerce-${deploy_host}"
#EDXAPP_ECOMMERCE_API_URL: "https://ecommerce-${deploy_host}/api/v2"
#
#ECOMMERCE_ECOMMERCE_URL_ROOT: "https://ecommerce-${deploy_host}"
#ECOMMERCE_LMS_URL_ROOT: "https://${deploy_host}"
#ECOMMERCE_SOCIAL_AUTH_REDIRECT_IS_HTTPS: true
#
#PROGRAMS_LMS_URL_ROOT: "https://${deploy_host}"
#PROGRAMS_URL_ROOT: "https://programs-${deploy_host}"
#PROGRAMS_SOCIAL_AUTH_REDIRECT_IS_HTTPS: true
#
#CREDENTIALS_LMS_URL_ROOT: "https://${deploy_host}"
#CREDENTIALS_URL_ROOT: "https://credentials-${deploy_host}"
#CREDENTIALS_SOCIAL_AUTH_REDIRECT_IS_HTTPS: true
#COURSE_DISCOVERY_ECOMMERCE_API_URL: "https://ecommerce-${deploy_host}/api/v2"
#
#DISCOVERY_OAUTH_URL_ROOT: "https://${deploy_host}"
#DISCOVERY_URL_ROOT: "https://discovery-${deploy_host}"
#DISCOVERY_SOCIAL_AUTH_REDIRECT_IS_HTTPS: true
## These flags are used to toggle role installation
## in the plays that install each server cluster
#COMMON_NEWRELIC_LICENSE: ''
#COMMON_AWS_SYNC: True
#NEWRELIC_LICENSE_KEY: ''
#NEWRELIC_LOGWATCH: []
# - logwatch-cms-errors.j2
# - logwatch-lms-errors.j2
#COMMON_ENABLE_NEWRELIC: True
## Datadog Settings
#datadog_api_key: ""
#COMMON_DATADOG_API_KEY: ""
#DATADOG_API_KEY: ""
## NGINX settings:
#NGINX_ENABLE_SSL: True
#NGINX_SSL_CERTIFICATE: '/path/to/ssl.crt'
#NGINX_SSL_KEY: '/path/to/ssl.key'
#NGINX_SERVER_ERROR_IMG: https://files.edx.org/images-public/edx-sad-small.png
#EDXAPP_XBLOCK_FS_STORAGE_BUCKET: 'your-xblock-storage-bucket'
#EDXAPP_XBLOCK_FS_STORAGE_PREFIX: 'sandbox-edx/'
#EDXAPP_LMS_SSL_NGINX_PORT: 443
#EDXAPP_CMS_SSL_NGINX_PORT: 443
#EDXAPP_LMS_NGINX_PORT: 80
#EDXAPP_LMS_PREVIEW_NGINX_PORT: 80
#EDXAPP_CMS_NGINX_PORT: 80
#EDXAPP_WORKERS:
# lms: 2
# cms: 2
#XSERVER_GRADER_DIR: "/edx/var/xserver/data/content-mit-600x~2012_Fall"
#XSERVER_GRADER_SOURCE: "git@github.com:/MITx/6.00x.git"
#CERTS_BUCKET: "verify-test.example.org"
#migrate_db: "yes"
#openid_workaround: True
#rabbitmq_ip: "127.0.0.1"
#rabbitmq_refresh: True
#COMMON_HOSTNAME: edx-server
#COMMON_DEPLOYMENT: edx
#COMMON_ENVIRONMENT: sandbox
#ora_gunicorn_workers: 1
#AS_WORKERS: 1
#ANALYTICS_WORKERS: 1
#ANALYTICS_API_GUNICORN_WORKERS: 1
#XQUEUE_WORKERS_PER_QUEUE: 2
## Settings for Grade downloads
#EDXAPP_GRADE_STORAGE_TYPE: 's3'
#EDXAPP_GRADE_BUCKET: 'your-grade-bucket'
#EDXAPP_GRADE_ROOT_PATH: 'sandbox'
#EDXAPP_SEGMENT_IO: 'true'
#EDXAPP_SEGMENT_IO_LMS: 'true'
#EDXAPP_SEGMENT_IO_KEY: 'your segment.io key'
#EDXAPP_SEGMENT_IO_LMS_KEY: 'your segment.io key'
#EDXAPP_YOUTUBE_API_KEY: "Your Youtube API Key"
#
#EDXAPP_FEATURES:
# AUTH_USE_OPENID_PROVIDER: true
# CERTIFICATES_ENABLED: true
# ENABLE_DISCUSSION_SERVICE: true
# ENABLE_DISCUSSION_HOME_PANEL: true
# ENABLE_INSTRUCTOR_ANALYTICS: false
# SUBDOMAIN_BRANDING: false
# SUBDOMAIN_COURSE_LISTINGS: false
# PREVIEW_LMS_BASE: "{{ EDXAPP_PREVIEW_LMS_BASE }}"
# ENABLE_S3_GRADE_DOWNLOADS: true
# USE_CUSTOM_THEME: "{{ edxapp_use_custom_theme }}"
# ENABLE_MKTG_SITE: "{{ EDXAPP_ENABLE_MKTG_SITE }}"
# AUTOMATIC_AUTH_FOR_TESTING: "{{ EDXAPP_ENABLE_AUTO_AUTH }}"
# ENABLE_THIRD_PARTY_AUTH: "{{ EDXAPP_ENABLE_THIRD_PARTY_AUTH }}"
# AUTOMATIC_VERIFY_STUDENT_IDENTITY_FOR_TESTING: true
# ENABLE_PAYMENT_FAKE: true
# ENABLE_VIDEO_UPLOAD_PIPELINE: true
# SEPARATE_VERIFICATION_FROM_PAYMENT: true
# ENABLE_COMBINED_LOGIN_REGISTRATION: true
# ENABLE_CORS_HEADERS: true
# ENABLE_MOBILE_REST_API: true
# ENABLE_OAUTH2_PROVIDER: true
# LICENSING: true
# CERTIFICATES_HTML_VIEW: true
#
#EDXAPP_CORS_ORIGIN_WHITELIST:
# - "example.org"
# - "www.example.org"
# - "{{ ECOMMERCE_ECOMMERCE_URL_ROOT }}"
#
#EDXAPP_VIDEO_UPLOAD_PIPELINE:
# BUCKET: "your-video-bucket"
# ROOT_PATH: "edx-video-upload-pipeline/unprocessed"
#
#EDXAPP_CC_PROCESSOR_NAME: "CyberSource2"
#EDXAPP_CC_PROCESSOR:
# CyberSource2:
# PURCHASE_ENDPOINT: "/shoppingcart/payment_fake/"
# SECRET_KEY: ""
# ACCESS_KEY: ""
# PROFILE_ID: ""
#
#EDXAPP_PROFILE_IMAGE_BACKEND:
# class: storages.backends.s3boto.S3BotoStorage
# options:
# location: /{{ ansible_ec2_public_ipv4 }}
# bucket: your-profile-image-bucket
# custom_domain: yourcloudfrontdomain.cloudfront.net
# headers:
# Cache-Control: max-age-{{ EDXAPP_PROFILE_IMAGE_MAX_AGE }}
#EDXAPP_PROFILE_IMAGE_SECRET_KEY: "SECRET KEY HERE"
#
##TODO: remove once ansible_provision.sh stops sucking or is burned to the ground
#EDXAPP_PROFILE_IMAGE_MAX_AGE: 31536000
#
## send logs to s3
#AWS_S3_LOGS: true
#AWS_S3_LOGS_NOTIFY_EMAIL: devops+logs@example.com
#AWS_S3_LOGS_FROM_EMAIL: devops@example.com
#EDX_ANSIBLE_DUMP_VARS: true
#configuration_version: release
#CERTS_AWS_KEY: 'AWS SECRET KEY HERE'
#CERTS_AWS_ID: 'AWS KEY ID HERE'
#CERTS_REPO: "git@github.com:/edx/certificates"
#XSERVER_GIT_IDENTITY: |
# -----BEGIN RSA PRIVATE KEY-----
# ssh private key here
# -----END RSA PRIVATE KEY-----
#CERTS_GIT_IDENTITY: "{{ XSERVER_GIT_IDENTITY }}"
#EDXAPP_INSTALL_PRIVATE_REQUIREMENTS: true
#EDXAPP_USE_GIT_IDENTITY: true
#_local_git_identity: |
# -----BEGIN RSA PRIVATE KEY-----
# ssh private key here
# -----END RSA PRIVATE KEY-----
#
#EDXAPP_GIT_IDENTITY: "{{ _local_git_identity }}"
#
################################################################
##
## Analytics API Settings
##
#ANALYTICS_API_PIP_EXTRA_ARGS: "--use-wheel --no-index --find-links=http://edx-wheelhouse.s3-website-us-east-1.amazonaws.com/Ubuntu/precise/Python-2.7"
#ANALYTICS_API_GIT_IDENTITY: "{{ _local_git_identity }}"
#
#TESTCOURSES_EXPORTS:
# - github_url: "https://github.com/edx/demo-performance-course.git"
# install: "{{ ENABLE_PERFORMANCE_COURSE }}"
# course_id: "course-v1:DemoX+PERF101+course"
# - github_url: "https://github.com/edx/demo-test-course.git"
# install: "{{ ENABLE_DEMO_TEST_COURSE }}"
# course_id: "course-v1:edX+Test101+course"
# - github_url: "https://github.com/edx/edx-demo-course.git"
# install: "{{ ENABLE_EDX_DEMO_COURSE }}"
# course_id: "course-v1:edX+DemoX+Demo_Course"
#
#EDXAPP_FILE_UPLOAD_STORAGE_BUCKET_NAME: edxuploads-sandbox
#EDXAPP_AWS_STORAGE_BUCKET_NAME: edxuploads-sandbox
#
#EDXAPP_SESSION_COOKIE_SECURE: true
#
## Celery Flower configuration
## By default, we now turn on Google OAuth2 configuration
## This disables that on sandboxes so you can use flower to manage your
## local celery processes.
#FLOWER_AUTH_REGEX: ""
#
################################################################
##
## LOCUST Settings
##
#LOCUST_GIT_IDENTITY: "{{ _local_git_identity }}"
......@@ -14,6 +14,7 @@
EDXAPP_LMS_BASE: 127.0.0.1:8000
EDXAPP_OAUTH_ENFORCE_SECURE: false
EDXAPP_LMS_BASE_SCHEME: http
ECOMMERCE_DJANGO_SETTINGS_MODULE: "ecommerce.settings.devstack"
roles:
- common
- vhost
......@@ -25,10 +26,12 @@
- oraclejdk
- elasticsearch
- forum
- ecommerce
- ecomworker
- { role: 'rabbitmq', rabbitmq_ip: '127.0.0.1' }
- analytics_api
- insights
- local_dev
- demo
- analytics_api
- analytics_pipeline
- insights
- oauth_client_setup
......@@ -28,7 +28,7 @@
serial: 1
gather_facts: True
vars:
rabbitmq_clustered_hosts:
RABBITMQ_CLUSTERED_HOSTS:
- "rabbit@cluster1"
- "rabbit@cluster2"
- "rabbit@cluster3"
......
......@@ -22,13 +22,13 @@
- mysql
- edxlocal
- mongo
- { role: 'rabbitmq', rabbitmq_ip: '127.0.0.1' }
- edxapp
- oraclejdk
- elasticsearch
- forum
- ecommerce
- ecomworker
- { role: 'rabbitmq', rabbitmq_ip: '127.0.0.1' }
- programs
- role: notifier
NOTIFIER_DIGEST_TASK_INTERVAL: "5"
......
......@@ -32,10 +32,10 @@
- mysql
- edxlocal
- mongo
- { role: 'rabbitmq', rabbitmq_ip: '127.0.0.1' }
- edxapp
- { role: 'edxapp', celery_worker: True }
- demo
- { role: 'rabbitmq', rabbitmq_ip: '127.0.0.1' }
- oraclejdk
- elasticsearch
- forum
......