OpenEdx / configuration · Commits · ca7081a8

Commit ca7081a8
Authored May 06, 2015 by Fred Smith
Committed by Fred Smith, May 06, 2015
Clean up out of date files
Parent c8e57049
Showing 5 changed files with 0 additions and 412 deletions:

    cloud_migrations/vpc-migrate-analytics_api.yml    +0 -203
    cloud_migrations/vpc-migrate-xqwatcher.yml         +0 -181
    doc/cfn-output-example.png                         +0 -0
    git-hooks/post-checkout.in                         +0 -14
    git-hooks/pre-commit.in                            +0 -14
cloud_migrations/vpc-migrate-analytics_api.yml (deleted, file mode 100644 → 0)
#
# Overview:
# This play needs to be run per environment-deployment, and you will need to
# provide the boto profile and vpc_id as arguments:
#
# ansible-playbook -i 'localhost,' ./vpc-migrate-analytics_api-edge-stage.yml \
#   -e 'profile=edge vpc_id=vpc-416f9b24'
#
# Caveats
#
# - This requires ansible 1.6
# - Requires the following branch of Ansible, /e0d/add-instance-profile, from
#   https://github.com/e0d/ansible.git
# - This play isn't fully idempotent because of an ec2 module update issue
#   with ASGs. This can be worked around by deleting the ASG and re-running
#   the play.
# - The instance_profile_name will need to be created in advance, as there
#   isn't a way to do so from ansible.
#
# Prerequisites:
# Create an IAM EC2 role
#
- name: Add resources for the Analytics API
  hosts: localhost
  connection: local
  gather_facts: False
  tasks:
    # Fails intermittently with the following error:
    # The specified rule does not exist in this security group
    - name: Create instance security group
      ec2_group:
        profile: "{{ profile }}"
        description: "Open up SSH access"
        name: "{{ security_group }}"
        vpc_id: "{{ vpc_id }}"
        region: "{{ ec2_region }}"
        rules:
          - proto: tcp
            from_port: "{{ sec_group_ingress_from_port }}"
            to_port: "{{ sec_group_ingress_to_port }}"
            cidr_ip: "{{ item }}"
      with_items: sec_group_ingress_cidrs
      register: created_sec_group
      ignore_errors: True

    - name: debug
      debug:
        msg: "Registered created_sec_group: {{ created_sec_group }}"

    # Needs ansible 1.7 for VPC support of ELBs
    # - name: Create elb security group
    #   ec2_group:
    #     profile: "{{ profile }}"
    #     description: "ELB security group"
    #     name: "ELB-{{ security_group }}"
    #     vpc_id: "{{ vpc_id }}"
    #     region: "{{ ec2_region }}"
    #     rules:
    #       - proto: tcp
    #         from_port: "443"
    #         to_port: "443"
    #         cidr_ip: "0.0.0.0/0"
    #   register: created_elb_sec_group
    #   ignore_errors: True

    # Needs 1.7 for VPC support
    # - name: "Create ELB"
    #   ec2_elb_lb:
    #     profile: "{{ profile }}"
    #     region: "{{ ec2_region }}"
    #     zones:
    #       - us-east-1b
    #       - us-east-1c
    #     name: "{{ edp }}"
    #     state: present
    #     security_group_ids: "{{ created_elb_sec_group.group_id }}"
    #     listeners:
    #       - protocol: https
    #         load_balancer_port: 443
    #         instance_protocol: http  # optional, defaults to value of protocol setting
    #         instance_port: 80
    #         # ssl certificate required for https or ssl
    #         ssl_certificate_id: "{{ ssl_cert }}"

    # instance_profile_name was added by me in my fork
    - name: Create the launch configuration
      ec2_lc:
        profile: "{{ profile }}"
        region: "{{ ec2_region }}"
        name: "{{ lc_name }}"
        image_id: "{{ lc_ami }}"
        key_name: "{{ key_name }}"
        security_groups: "{{ created_sec_group.results[0].group_id }}"
        instance_type: "{{ instance_type }}"
        instance_profile_name: "{{ instance_profile_name }}"
        volumes:
          - device_name: "/dev/sda1"
            volume_size: "{{ instance_volume_size }}"

    - name: Create ASG
      ec2_asg:
        profile: "{{ profile }}"
        region: "{{ ec2_region }}"
        name: "{{ asg_name }}"
        launch_config_name: "{{ lc_name }}"
        load_balancers: "{{ elb_name }}"
        availability_zones:
          - us-east-1b
          - us-east-1c
        min_size: 0
        max_size: 2
        desired_capacity: 1
        vpc_zone_identifier: "{{ subnets|join(',') }}"
        instance_tags:
          Name: "{{ env }}-{{ deployment }}-{{ play }}"
          autostack: "true"
          environment: "{{ env }}"
          deployment: "{{ deployment }}"
          play: "{{ play }}"
          services: "{{ play }}"
      register: asg

    - name: debug
      debug:
        msg: "DEBUG: {{ asg }}"

    - name: Create scale up policy
      ec2_scaling_policy:
        state: present
        profile: "{{ profile }}"
        region: "{{ ec2_region }}"
        name: "{{ edp }}-ScaleUpPolicy"
        adjustment_type: "ChangeInCapacity"
        asg_name: "{{ asg_name }}"
        scaling_adjustment: 1
        min_adjustment_step: 1
        cooldown: 60
      register: scale_up_policy

    - name: debug
      debug:
        msg: "Registered scale_up_policy: {{ scale_up_policy }}"

    - name: Create scale down policy
      ec2_scaling_policy:
        state: present
        profile: "{{ profile }}"
        region: "{{ ec2_region }}"
        name: "{{ edp }}-ScaleDownPolicy"
        adjustment_type: "ChangeInCapacity"
        asg_name: "{{ asg_name }}"
        scaling_adjustment: -1
        min_adjustment_step: 1
        cooldown: 60
      register: scale_down_policy

    - name: debug
      debug:
        msg: "Registered scale_down_policy: {{ scale_down_policy }}"

    #
    # Sometimes the scaling policy reports itself changed, but
    # does not return data about the policy. It's bad enough
    # that consistent data isn't returned when things
    # have and have not changed; this makes writing idempotent
    # tasks difficult.
    - name: create high-cpu alarm
      ec2_metric_alarm:
        state: present
        region: "{{ ec2_region }}"
        name: "cpu-high"
        metric: "CPUUtilization"
        namespace: "AWS/EC2"
        statistic: Average
        comparison: ">="
        threshold: 90.0
        period: 300
        evaluation_periods: 2
        unit: "Percent"
        description: "Scale-up if CPU > 90% for 10 minutes"
        dimensions: {"AutoScalingGroupName": "{{ asg_name }}"}
        alarm_actions: ["{{ scale_up_policy.arn }}"]
      when: scale_up_policy.arn is defined

    - name: create low-cpu alarm
      ec2_metric_alarm:
        state: present
        region: "{{ ec2_region }}"
        name: "cpu-low"
        metric: "CPUUtilization"
        namespace: "AWS/EC2"
        statistic: Average
        comparison: "<="
        threshold: 50.0
        period: 300
        evaluation_periods: 2
        unit: "Percent"
        description: "Scale-down if CPU < 50% for 10 minutes"
        dimensions: {"AutoScalingGroupName": "{{ asg_name }}"}
        alarm_actions: ["{{ scale_down_policy.arn }}"]
      when: scale_down_policy.arn is defined
\ No newline at end of file
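The header's idempotency caveat says the workaround is to delete the ASG and re-run the play. A minimal teardown sketch with the AWS CLI, assuming the CLI is configured with the same boto profile; the group name below is a placeholder for whatever asg_name renders to:

    # Hypothetical workaround for the ASG idempotency caveat: remove the
    # existing group so the play can be re-run from a clean state.
    aws autoscaling delete-auto-scaling-group \
        --auto-scaling-group-name "<asg_name>" \
        --force-delete \
        --profile edge

--force-delete removes the group along with any instances still attached instead of waiting for them to terminate.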
cloud_migrations/vpc-migrate-xqwatcher.yml (deleted, file mode 100644 → 0)
#
# Overview:
# This play needs to be run per environment-deployment, and you will need to
# provide the boto profile and vpc_id as arguments:
#
# ansible-playbook -i 'localhost,' ./vpc-migrate-xqwatcher-edge-stage.yml \
#   -e 'profile=edge vpc_id=vpc-416f9b24'
#
# Caveats
#
# - This requires ansible 1.6
# - Requires the following branch of Ansible, /e0d/add-instance-profile, from
#   https://github.com/e0d/ansible.git
# - This play isn't fully idempotent because of an ec2 module update issue
#   with ASGs. This can be worked around by deleting the ASG and re-running
#   the play.
# - The instance_profile_name will need to be created in advance, as there
#   isn't a way to do so from ansible.
#
# Prerequisites:
# Create an IAM EC2 role
#
- name: Add resources for the XQWatcher
  hosts: localhost
  connection: local
  gather_facts: False
  tasks:
    # ignore_errors is used here because this module is not idempotent.
    # If tags already exist, the task will fail with the following message:
    # Tags already exists in subnet
    - name: Update subnet tags
      ec2_tag:
        resource: "{{ item }}"
        region: "{{ ec2_region }}"
        state: present
        tags:
          Name: "{{ edp }}-subnet"
          play: xqwatcher
          immutable_metadata: "{'purpose':'{{ environment }}-{{ deployment }}-internal-{{ play }}','target':'ec2'}"
      with_items: subnets
      ignore_errors: True

    # Fails intermittently with the following error:
    # The specified rule does not exist in this security group
    - name: Create security group
      ec2_group:
        profile: "{{ profile }}"
        description: "Open up SSH access"
        name: "{{ security_group }}"
        vpc_id: "{{ vpc_id }}"
        region: "{{ ec2_region }}"
        rules:
          - proto: tcp
            from_port: "{{ sec_group_ingress_from_port }}"
            to_port: "{{ sec_group_ingress_to_port }}"
            cidr_ip: "{{ item }}"
      with_items: sec_group_ingress_cidrs
      register: created_sec_group
      ignore_errors: True

    - name: debug
      debug:
        msg: "Registered created_sec_group: {{ created_sec_group }}"

    # instance_profile_name was added by me in my fork
    - name: Create the launch configuration
      ec2_lc:
        profile: "{{ profile }}"
        region: "{{ ec2_region }}"
        name: "{{ lc_name }}"
        image_id: "{{ lc_ami }}"
        key_name: "{{ key_name }}"
        security_groups: "{{ created_sec_group.results[0].group_id }}"
        instance_type: "{{ instance_type }}"
        instance_profile_name: "{{ instance_profile_name }}"
        volumes:
          - device_name: "/dev/sda1"
            volume_size: "{{ instance_volume_size }}"

    - name: Create ASG
      ec2_asg:
        profile: "{{ profile }}"
        region: "{{ ec2_region }}"
        name: "{{ asg_name }}"
        launch_config_name: "{{ lc_name }}"
        min_size: 0
        max_size: 0
        desired_capacity: 0
        vpc_zone_identifier: "{{ subnets|join(',') }}"
        instance_tags:
          Name: "{{ env }}-{{ deployment }}-{{ play }}"
          autostack: "true"
          environment: "{{ env }}"
          deployment: "{{ deployment }}"
          play: "{{ play }}"
          services: "{{ play }}"
      register: asg

    - name: debug
      debug:
        msg: "DEBUG: {{ asg }}"

    - name: Create scale up policy
      ec2_scaling_policy:
        state: present
        profile: "{{ profile }}"
        region: "{{ ec2_region }}"
        name: "{{ edp }}-ScaleUpPolicy"
        adjustment_type: "ChangeInCapacity"
        asg_name: "{{ asg_name }}"
        scaling_adjustment: 1
        min_adjustment_step: 1
        cooldown: 60
      register: scale_up_policy
      tags:
        - foo

    - name: debug
      debug:
        msg: "Registered scale_up_policy: {{ scale_up_policy }}"

    - name: Create scale down policy
      ec2_scaling_policy:
        state: present
        profile: "{{ profile }}"
        region: "{{ ec2_region }}"
        name: "{{ edp }}-ScaleDownPolicy"
        adjustment_type: "ChangeInCapacity"
        asg_name: "{{ asg_name }}"
        scaling_adjustment: -1
        min_adjustment_step: 1
        cooldown: 60
      register: scale_down_policy

    - name: debug
      debug:
        msg: "Registered scale_down_policy: {{ scale_down_policy }}"

    #
    # Sometimes the scaling policy reports itself changed, but
    # does not return data about the policy. It's bad enough
    # that consistent data isn't returned when things
    # have and have not changed; this makes writing idempotent
    # tasks difficult.
    - name: create high-cpu alarm
      ec2_metric_alarm:
        state: present
        region: "{{ ec2_region }}"
        name: "cpu-high"
        metric: "CPUUtilization"
        namespace: "AWS/EC2"
        statistic: Average
        comparison: ">="
        threshold: 90.0
        period: 300
        evaluation_periods: 2
        unit: "Percent"
        description: "Scale-up if CPU > 90% for 10 minutes"
        dimensions: {"AutoScalingGroupName": "{{ asg_name }}"}
        alarm_actions: ["{{ scale_up_policy.arn }}"]
      when: scale_up_policy.arn is defined

    - name: create low-cpu alarm
      ec2_metric_alarm:
        state: present
        region: "{{ ec2_region }}"
        name: "cpu-low"
        metric: "CPUUtilization"
        namespace: "AWS/EC2"
        statistic: Average
        comparison: "<="
        threshold: 50.0
        period: 300
        evaluation_periods: 2
        unit: "Percent"
        description: "Scale-down if CPU < 50% for 10 minutes"
        dimensions: {"AutoScalingGroupName": "{{ asg_name }}"}
        alarm_actions: ["{{ scale_down_policy.arn }}"]
      when: scale_down_policy.arn is defined
\ No newline at end of file
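Because the subnet-tagging task is not idempotent and its failures are ignored, a quick way to confirm the tags actually landed is to read them back. A rough verification sketch with the AWS CLI; the subnet ID, region, and profile are illustrative placeholders:

    # Hypothetical check that the Name/play/immutable_metadata tags were
    # applied to a migrated subnet (substitute a real ID from the subnets list).
    aws ec2 describe-tags \
        --filters "Name=resource-id,Values=subnet-0123abcd" \
        --region us-east-1 \
        --profile edge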
doc/cfn-output-example.png (deleted, file mode 100644 → 0, 220 KB)
git-hooks/post-checkout.in (deleted, file mode 100755 → 0)
#!/bin/sh

dir=`git rev-parse --show-toplevel`
if [ -z $dir ]; then
    exit 1
fi

echo -n "Setting up hooks from git-hooks.."
$dir/util/sync_hooks.sh > /dev/null
if [ $? -eq 0 ]; then
    echo ". done."
else
    exit 1
fi
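The .in suffix and the call to util/sync_hooks.sh suggest these files are templates that get copied into a clone's .git/hooks directory; a minimal manual installation sketch under that assumption:

    # Hypothetical manual install of the hook template (the repo's own
    # mechanism appears to be util/sync_hooks.sh, which this hook re-invokes
    # on every checkout).
    cp git-hooks/post-checkout.in .git/hooks/post-checkout
    chmod +x .git/hooks/post-checkout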
git-hooks/pre-commit.in (deleted, file mode 100755 → 0)
#!/bin/sh

dir=`git rev-parse --show-toplevel`
if [ -z $dir ]; then
    exit 1
fi

echo -n "Checking JSON parses.."
$dir/util/json_lint.sh
if [ $? -eq 0 ]; then
    echo ". it does!"
else
    exit 1
fi
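The same check the hook performed can be run by hand against the working tree; a small sketch, assuming util/json_lint.sh is still present at the repository root:

    # Hypothetical manual run of the JSON lint check the pre-commit hook wrapped.
    "$(git rev-parse --show-toplevel)/util/json_lint.sh" && echo "JSON parses"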