I was having problems where concurrent installs could trample on each
other. The instance that immediately affected me was the output caching
from ec2.py: the output of that command differs between staging and
prod, yet both were being written to /tmp/ansible_ec2.cache and .index.
The fix here is to write to a local temp directory instead. This commit
adds empty tmp dirs to ensure they exist in every repo.
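As a sketch of that change, assuming the ec2.ini here exposes the same cache_path option as the stock ec2.py inventory script (the regions/cache_max_age values below are illustrative, not from this commit):

```ini
# prod-ec2.ini (sketch): point the inventory cache at a repo-local tmp
# dir so staging and prod checkouts stop sharing files under /tmp.
[ec2]
regions = all
cache_path = ./tmp
cache_max_age = 300
```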
While less likely, you could also get collisions on named ssh sockets:
those are named with just the instance name, which can be reused
across VPCs. Putting them in the ./tmp dir as well prevents that.
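A minimal sketch of the corresponding ansible.cfg setting, assuming the standard ssh_connection section (the exact socket-name pattern below is an assumption; Ansible uses doubled %% to escape % in ini files):

```ini
# prod-ansible.cfg (sketch): keep ControlPersist sockets under ./tmp so
# two checkouts can't collide on a socket named after the same instance.
[ssh_connection]
control_path = ./tmp/ansible-ssh-%%h-%%p-%%r
```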
Note that for consistency I did away with the plain ec2.ini file;
there are now prod- and stage- variants instead. This is cleaner, but
it means you'll need to change your install command to look something
like this:
ANSIBLE_EC2_INI=prod-ec2.ini ANSIBLE_CONFIG=prod-ansible.cfg ansible-playbook -c ssh -u ubuntu -i ./ec2.py prod-app.yml
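If you switch between environments often, a small wrapper can derive all three prod-/stage- file names from one variable. This is only a sketch, not part of this commit: the DEPLOY_ENV name is hypothetical, and the echo stands in for the real ansible-playbook invocation.

```shell
#!/bin/sh
# Hypothetical helper: derive the prod-/stage- variant file names from
# a single DEPLOY_ENV variable so the settings can't drift out of sync.
DEPLOY_ENV="${DEPLOY_ENV:-stage}"   # set DEPLOY_ENV=prod for production

EC2_INI="${DEPLOY_ENV}-ec2.ini"
CFG="${DEPLOY_ENV}-ansible.cfg"
PLAYBOOK="${DEPLOY_ENV}-app.yml"

# Print the command this would run; swap `echo` for the real call.
echo "ANSIBLE_EC2_INI=$EC2_INI ANSIBLE_CONFIG=$CFG" \
     "ansible-playbook -c ssh -u ubuntu -i ./ec2.py $PLAYBOOK"
```

Run with `DEPLOY_ENV=prod` to get the prod- variants; with nothing set it defaults to the stage- files.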
Conflicts:
playbooks/edx-west/ansible.cfg