Commit 265d9adb by Andrew Newdigate

Merge branch 'devel' of github.com:ansible/ansible into devel

Conflicts:
	library/monitoring/pagerduty
parents 9d97c560 a37a8424
@@ -39,3 +39,8 @@ debian/
*.swp
*.swo
credentials.yml
# test output
.coverage
results.xml
coverage.xml
/test/units/cover-html
@@ -6,17 +6,91 @@ Ansible Changes By Release
Major features/changes:
* The deprecated legacy variable templating system has been finally removed. Use {{ foo }} always, not $foo or ${foo}.
* Any data file can also be JSON. Use sparingly -- with great power comes great responsibility. Starting a file with "{" or "[" denotes JSON.
* Added 'gathering' param for ansible.cfg to change the default gather_facts policy.
* Accelerate improvements:
- multiple users can connect with different keys, when `accelerate_multi_key = yes` is specified in the ansible.cfg.
- daemon lifetime is now based on the time from the last activity, not the time from the daemon's launch.
* ansible-playbook now accepts --force-handlers to run handlers even if tasks result in failures
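The two ansible.cfg settings mentioned above would land in the config roughly as follows. This is a hypothetical fragment: the section placement and the `smart` value are assumptions, so check the shipped example config for the authoritative names.

```ini
[defaults]
# default gather_facts policy introduced by the 'gathering' param
gathering = smart

[accelerate]
# allow multiple users to connect to one accelerate daemon with different keys
accelerate_multi_key = yes
```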
New Modules:
* files: replace
* packaging: cpanm (Perl)
* packaging: portage
* packaging: composer (PHP)
* packaging: homebrew_tap (OS X)
* packaging: homebrew_cask (OS X)
* packaging: apt_rpm
* packaging: layman
* monitoring: logentries
* monitoring: rollbar_deployment
* monitoring: librato_annotation
* notification: nexmo (SMS)
* notification: twilio (SMS)
* notification: slack (Slack.com)
* notification: typetalk (Typetalk.in)
* notification: sns (Amazon)
* system: debconf
* system: ufw
* system: locale_gen
* system: alternatives
* system: capabilities
* net_infrastructure: bigip_facts
* net_infrastructure: dnssimple
* net_infrastructure: lldp
* web_infrastructure: apache2_module
* cloud: digital_ocean_domain
* cloud: digital_ocean_sshkey
* cloud: rax_identity
* cloud: rax_cbs (cloud block storage)
* cloud: rax_cbs_attachments
* cloud: ec2_asg (configure autoscaling groups)
* cloud: ec2_scaling_policy
* cloud: ec2_metric_alarm
Other notable changes:
* example callback plugin added for hipchat
* added example inventory plugin for vcenter/vsphere
* added example inventory plugin for doing really trivial inventory from SSH config files
* libvirt module now supports destroyed and paused as states
* s3 module can specify metadata
* security token additions to ec2 modules
* setup module code moved into module_utils/, facts now accessible by other modules
* synchronize module sets relative dirs based on inventory or role path
* misc bugfixes and other parameters
* the ec2_key module now has wait/wait_timeout parameters
* added version_compare filter (see docs)
* added ability for module documentation YAML to utilize shared module snippets for common args
* apt module now accepts "deb" parameter to install local dpkg files
* regex_replace filter plugin added
* ... to be filled in from changelogs ...
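The version_compare filter mentioned above compares versions numerically rather than as plain strings. A minimal sketch of the idea (this is not the filter's actual implementation, which lives in Ansible's filter plugins; the helper names are mine):

```python
def version_tuple(v):
    # "1.10" must sort after "1.9", which plain string comparison gets wrong
    return tuple(int(part) for part in v.split("."))

def version_compare(value, version, op=">="):
    # dispatch table mimicking the operator strings the filter accepts
    ops = {">=": lambda a, b: a >= b, ">": lambda a, b: a > b,
           "<=": lambda a, b: a <= b, "<": lambda a, b: a < b,
           "==": lambda a, b: a == b}
    return ops[op](version_tuple(value), version_tuple(version))
```

In a playbook the equivalent check would be written as a Jinja2 filter expression; see the docs bullet above for the supported syntax.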
## 1.5.4 "Love Walks In" - April 1, 2014
- Security fix for safe_eval, which further hardens the checking of the evaluation function.
- Changing order of variable precedence for system facts, to ensure that inventory variables take precedence over any facts that may be set on a host.
## 1.5.3 "Love Walks In" - March 13, 2014
- Fix validate_certs and run_command errors from previous release
- Fixes to the git module related to host key checking
## 1.5.2 "Love Walks In" - March 11, 2014
- Fix module errors in airbrake and apt from previous release
## 1.5.1 "Love Walks In" - March 10, 2014
- Force command action to not be executed by the shell unless specifically enabled.
- Validate SSL certs accessed through urllib*.
- Implement new default cipher class AES256 in ansible-vault.
- Misc bug fixes.
## 1.5 "Love Walks In" - February 28, 2014
Major features/changes:
......
@@ -66,8 +66,10 @@ Functions and Methods
* In general, functions should not be 'too long' and should describe a meaningful amount of work
* When code gets too nested, that's usually the sign the loop body could benefit from being a function
* Parts of our existing code are not the best examples of this at times.
* Functions should have names that describe what they do, along with docstrings
* Functions should be named with_underscores
* "Don't repeat yourself" is generally a good philosophy
Variables
=========
@@ -76,6 +78,16 @@ Variables
* Ansible python code uses identifiers like 'ClassesLikeThis' and variables_like_this
* Module parameters should also use_underscores and not runtogether
Module Security
===============
* Modules must take steps to avoid passing user input to the shell and always check return codes
* always use module.run_command instead of subprocess or Popen or os.system -- this is mandatory
* if you need the shell you must pass use_unsafe_shell=True to module.run_command
* if you do not need the shell, avoid using the shell
* any variables that can come from user input with use_unsafe_shell=True must be wrapped by pipes.quote(x)
* downloads of https:// resource urls must import module_utils.urls and use the fetch_url method
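The quoting rule above can be illustrated outside of a module. `pipes.quote` is the Python 2-era spelling; `shlex.quote` is the equivalent in modern Python, used here so the sketch runs standalone (the helper function name is mine, not part of Ansible):

```python
from shlex import quote  # plays the same role as pipes.quote in the guideline

def build_safe_command(user_path):
    # user_path is untrusted input headed for use_unsafe_shell=True;
    # quoting keeps "; rm -rf /" from being parsed as a second command
    return "ls -l %s" % quote(user_path)

print(build_safe_command("notes; rm -rf /"))  # ls -l 'notes; rm -rf /'
```

Inside a real module the same string would be handed to module.run_command with use_unsafe_shell=True, per the rules above.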
Misc Preferences
================
@@ -149,16 +161,19 @@ All contributions to the core repo should preserve original licenses and new contributions

Module Documentation
====================

All module pull requests must include a DOCUMENTATION docstring (YAML format, see other modules for examples) as well as an EXAMPLES docstring, which is free form.

When adding new modules, any new parameter must have a "version_added" attribute. When submitting a new module, the module should have a "version_added" attribute in the pull request as well, set to the current development version.

Be sure to check grammar and spelling.

It's frequently the case that modules get submitted with YAML that isn't valid, so you can run "make webdocs" from the checkout to preview your module's documentation. If it fails to build, take a look at your DOCUMENTATION string; you might have a Python syntax error in there too.
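A minimal, hypothetical DOCUMENTATION/EXAMPLES pair of the kind described above (the module name and option are invented for illustration; field names follow the convention in existing modules):

```python
# Illustrative stub only -- not a real Ansible module.
DOCUMENTATION = '''
---
module: example_module
short_description: Illustrative module stub
version_added: "1.6"
options:
  name:
    description: Name to operate on
    required: true
'''

EXAMPLES = '''
- example_module: name=demo
'''
```

Running "make webdocs" against a module containing these docstrings is the quickest way to catch YAML mistakes before review.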
Python Imports
==============
......
@@ -29,13 +29,9 @@ content up on places like github to share with others.

Sharing A Feature Idea
----------------------
Ideas are very welcome and the best place to share them is the [Ansible project mailing list](https://groups.google.com/forum/#!forum/ansible-project) ([Subscribe](https://groups.google.com/forum/#!forum/ansible-project/join)) or #ansible on irc.freenode.net.

While you can file a feature request on GitHub, pull requests are a much better way to get your feature added than submitting a feature request. Open source is all about itch scratching, and it's less likely that someone else will have the same itches as yourself. We keep code reasonably simple on purpose so it's easy to dive in and make additions, but be sure to read the "Contributing Code" section below too -- as it doesn't hurt to have a discussion about a feature first -- we're inclined to have preferences about how incoming features might be implemented, and that can save confusion later.
Helping with Documentation
--------------------------
@@ -58,18 +54,24 @@ The Ansible project keeps its source on github at
and takes contributions through
[github pull requests](https://help.github.com/articles/using-pull-requests).
It is usually a good idea to join the ansible-devel list to discuss any large features prior to submission, and this especially helps in avoiding duplicate work or efforts where we decide, upon seeing a pull request for the first time, that revisions are needed. (This is not usually needed for module development.)

Note that we do keep Ansible to a particular aesthetic, so if you are unclear about whether a feature is a good fit or not, having the discussion on the development list is often a lot easier than having to modify a pull request later.

When submitting patches, be sure to run the unit tests first (“make tests”) and always use “git rebase” vs “git merge” (aliasing git pull to git pull --rebase is a great idea) to avoid merge commits in your submissions. There are also integration tests that can be run in the "tests/integration" directory.

In order to keep the history clean and better audit incoming code, we will require resubmission of pull requests that contain merge commits. Use "git pull --rebase" vs "git pull" and "git rebase" vs "git merge". Also be sure to use topic branches to keep your additions on different branches, such that they won't pick up stray commits later.

We’ll then review your contributions and engage with you about questions and so on. As we have a very large and active community, it may take awhile to get your contributions in! See the notes about priorities in a later section for understanding our work queue.

Patches should be made against the 'devel' branch.
Contributions can be for new features like modules, or to fix bugs you or others have found. If you
are interested in writing new modules to be included in the core Ansible distribution, please refer
@@ -87,6 +89,8 @@ required. You're now live!

Reporting A Bug
---------------
Ansible practices responsible disclosure - if this is a security-related bug, email security@ansible.com instead of filing a ticket or posting to the Google Group, and you will receive a prompt response.
Bugs should be reported to [github.com/ansible/ansible](http://github.com/ansible/ansible) after
signing up for a free github account. Before reporting a bug, please use the bug/issue search
to see if the issue has already been reported.
@@ -108,6 +112,44 @@ the mailing list or IRC first. As we are a very high volume project, if you determine that
you do have a bug, please be sure to open the issue yourself to ensure we have a record of
it. Don’t rely on someone else in the community to file the bug report for you.
It may take some time to get to your report, see "A Note About Priorities" below.
A Note About Priorities
=======================
Ansible was one of the top 5 projects with the most OSS contributors on GitHub in 2013, and well over
600 people have added code to the project. As a result, we have a LOT of incoming activity to process.
In the interest of transparency, we're telling you how we do this.
In our bug tracker you'll notice some labels - P1, P2, P3, P4, and P5. These are our internal
priority orders that we use to sort tickets.
With some exceptions for easy merges (like documentation typos for instance),
we're going to spend most of our time working on P1 and P2 items first, including pull requests.
These usually relate to important
bugs or features affecting large segments of the userbase. So if you see something categorized
"P3 or P4" and it doesn't appear to get a lot of immediate attention, this is why.
These labels don't really have definitions - they are a simple ordering. However, something
affecting a major module (yum, apt, etc) is likely to be prioritized higher than a module
affecting a smaller number of users.
Since we place a strong emphasis on testing and code review, it may take a few months for a minor feature to get merged.
Don't worry though -- we'll also take periodic sweeps through the lower priority queues and give
them some attention as well, particularly in the area of new module changes. So it doesn't necessarily
mean that we'll be exhausting all of the higher-priority queues before getting to your ticket.
Release Numbering
=================
Releases ending in ".0" are major releases and this is where all new features land. Releases ending
in another integer, like "0.X.1" and "0.X.2" are dot releases, and these are only going to contain
bugfixes. Typically we don't do dot releases for minor releases, but may occasionally decide to cut
dot releases containing a large number of smaller fixes if it's still a fairly long time before
the next release comes out.
Online Resources
================
@@ -166,10 +208,9 @@ Community Code of Conduct
-------------------------

Ansible’s community welcomes users of all types, backgrounds, and skill levels. Please treat others as you expect to be treated, keep discussions positive, and avoid discrimination, profanity, allegations of Cthulhu worship, or engaging in controversial debates (except vi vs emacs is cool).

Posts to mailing lists should remain focused around Ansible and IT automation. Abuse of these community guidelines will not be tolerated and may result in banning from community resources.
Contributors License Agreement
------------------------------
......
@@ -20,7 +20,7 @@ OS = $(shell uname -s)
# Manpages are currently built with asciidoc -- would like to move to markdown
# This doesn't evaluate until it's called. The -D argument is the
# directory of the target file ($@), kinda like `dirname`.
MANPAGES := docs/man/man1/ansible.1 docs/man/man1/ansible-playbook.1 docs/man/man1/ansible-pull.1 docs/man/man1/ansible-doc.1 docs/man/man1/ansible-galaxy.1 docs/man/man1/ansible-vault.1
ifneq ($(shell which a2x 2>/dev/null),)
ASCII2MAN = a2x -D $(dir $@) -d manpage -f manpage $<
ASCII2HTMLMAN = a2x -D docs/html/man/ -d manpage -f xhtml
@@ -172,3 +172,4 @@ deb: debian
webdocs: $(MANPAGES)
	(cd docsite/; make docs)
docs: $(MANPAGES)
[![PyPI version](https://badge.fury.io/py/ansible.png)](http://badge.fury.io/py/ansible) [![PyPI downloads](https://pypip.in/d/ansible/badge.png)](https://pypi.python.org/pypi/ansible)
Ansible
=======
......
@@ -14,6 +14,11 @@ Active Development
Previous
++++++++
1.6 "The Cradle Will Rock" - NEXT
1.5.3 "Love Walks In" -------- 03-13-2014
1.5.2 "Love Walks In" -------- 03-11-2014
1.5.1 "Love Walks In" -------- 03-10-2014
1.5     "Love Walks In" -------- 02-28-2014
1.4.5   "Could This Be Magic?" - 02-12-2014
1.4.4   "Could This Be Magic?" - 01-06-2014
......
@@ -128,14 +128,11 @@ class Cli(object):
            this_path = os.path.expanduser(options.vault_password_file)
            try:
                f = open(this_path, "rb")
                tmp_vault_pass = f.read().strip()
                f.close()
            except (OSError, IOError), e:
                raise errors.AnsibleError("Could not read %s: %s" % (this_path, e))

            if not options.ask_vault_pass:
                vault_pass = tmp_vault_pass
@@ -160,8 +157,6 @@ class Cli(object):
        if options.su_user or options.ask_su_pass:
            options.su = True

        options.sudo_user = options.sudo_user or C.DEFAULT_SUDO_USER
        options.su_user = options.su_user or C.DEFAULT_SU_USER

        if options.tree:
if options.tree: if options.tree:
......
@@ -98,7 +98,7 @@ def get_man_text(doc):
    if 'option_keys' in doc and len(doc['option_keys']) > 0:
        text.append("Options (= is mandatory):\n")

    for o in sorted(doc['option_keys']):
        opt = doc['options'][o]
        if opt.get('required', False):
@@ -146,10 +146,15 @@ def get_snippet_text(doc):
    text.append("- name: %s" % (desc))
    text.append("  action: %s" % (doc['module']))

    for o in sorted(doc['options'].keys()):
        opt = doc['options'][o]
        desc = tty_ify("".join(opt['description']))

        if opt.get('required', False):
            s = o + "="
        else:
            s = o

        text.append("      %-20s # %s" % (s, desc))
    text.append('')
......
@@ -170,7 +170,7 @@ def build_option_parser(action):
        parser.set_usage("usage: %prog init [options] role_name")
        parser.add_option(
            '-p', '--init-path', dest='init_path', default="./",
            help='The path in which the skeleton role will be created. '
                 'The default is the current working directory.')
    elif action == "install":
        parser.set_usage("usage: %prog install [options] [-r FILE | role_name(s)[,version] | tar_file(s)]")
@@ -192,7 +192,7 @@ def build_option_parser(action):
    if action != "init":
        parser.add_option(
            '-p', '--roles-path', dest='roles_path', default=C.DEFAULT_ROLES_PATH,
            help='The path to the directory containing your roles. '
                 'The default is the roles_path configured in your '
                 'ansible.cfg file (/etc/ansible/roles if not configured)')
@@ -655,7 +655,7 @@ def execute_install(args, options, parser):
        if role_name == "" or role_name.startswith("#"):
            continue
        elif ',' in role_name:
            role_name, role_version = role_name.split(',', 1)
            role_name = role_name.strip()
            role_version = role_version.strip()
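The rolename[,version] splitting above is easy to check in isolation. A minimal standalone sketch (the function name is mine, not part of ansible-galaxy):

```python
def parse_role_line(line):
    # Mirrors the install-file parsing: "name,version" or bare "name";
    # blank lines and "#" comments are skipped.
    line = line.strip()
    if line == "" or line.startswith("#"):
        return None
    if ',' in line:
        name, version = line.split(',', 1)
        return name.strip(), version.strip()
    return line, None
```

Using split(',', 1) rather than a plain split keeps any further commas inside the version specifier intact.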
......
@@ -78,6 +78,8 @@ def main(args):
        help="one-step-at-a-time: confirm each task before running")
    parser.add_option('--start-at-task', dest='start_at',
        help="start the playbook at the task matching this name")
    parser.add_option('--force-handlers', dest='force_handlers', action='store_true',
        help="run handlers even if a task fails")

    options, args = parser.parse_args(args)
@@ -122,14 +124,11 @@ def main(args):
        this_path = os.path.expanduser(options.vault_password_file)
        try:
            f = open(this_path, "rb")
            tmp_vault_pass = f.read().strip()
            f.close()
        except (OSError, IOError), e:
            raise errors.AnsibleError("Could not read %s: %s" % (this_path, e))

        if not options.ask_vault_pass:
            vault_pass = tmp_vault_pass
@@ -137,7 +136,7 @@ def main(args):
    for extra_vars_opt in options.extra_vars:
        if extra_vars_opt.startswith("@"):
            # Argument is a YAML file (JSON is a subset of YAML)
            extra_vars = utils.combine_vars(extra_vars, utils.parse_yaml_from_file(extra_vars_opt[1:], vault_password=vault_pass))
        elif extra_vars_opt and extra_vars_opt[0] in '[{':
            # Arguments as YAML
            extra_vars = utils.combine_vars(extra_vars, utils.parse_yaml(extra_vars_opt))
@@ -194,7 +193,8 @@ def main(args):
            su=options.su,
            su_pass=su_pass,
            su_user=options.su_user,
            vault_password=vault_pass,
            force_handlers=options.force_handlers
        )
        if options.listhosts or options.listtasks or options.syntax:
@@ -206,12 +206,12 @@ def main(args):
            playnum += 1
            play = ansible.playbook.Play(pb, play_ds, play_basedir)
            label = play.name
            hosts = pb.inventory.list_hosts(play.hosts)

            # Filter all tasks by given tags
            if pb.only_tags != 'all':
                if options.subset and not hosts:
                    continue
                matched_tags, unmatched_tags = play.compare_tags(pb.only_tags)

                # Remove skipped tasks
@@ -223,6 +223,13 @@ def main(args):
                if unknown_tags:
                    continue

            if options.listhosts:
                print ' play #%d (%s): host count=%d' % (playnum, label, len(hosts))
                for host in hosts:
                    print ' %s' % host

            if options.listtasks:
                print ' play #%d (%s):' % (playnum, label)
                for task in play.tasks():
......
@@ -44,6 +44,8 @@ import subprocess
import sys
import datetime
import socket
import random
import time
from ansible import utils
from ansible.utils import cmd_functions
from ansible import errors
@@ -102,6 +104,8 @@ def main(args):
        help='purge checkout after playbook run')
    parser.add_option('-o', '--only-if-changed', dest='ifchanged', default=False, action='store_true',
        help='only run the playbook if the repository has been updated')
    parser.add_option('-s', '--sleep', dest='sleep', default=None,
        help='sleep for random interval (between 0 and n number of seconds) before starting. this is a useful way to disperse git requests')
    parser.add_option('-f', '--force', dest='force', default=False,
        action='store_true',
        help='run the playbook even if the repository could '
@@ -117,6 +121,8 @@ def main(args):
        'Defaults to behavior of repository module.')
    parser.add_option('-i', '--inventory-file', dest='inventory',
        help="location of the inventory host file")
    parser.add_option('-e', '--extra-vars', dest="extra_vars", action="append",
        help="set additional variables as key=value or YAML/JSON", default=[])
    parser.add_option('-v', '--verbose', default=False, action="callback",
        callback=increment_debug,
        help='Pass -vvvv to ansible-playbook')
@@ -126,6 +132,8 @@ def main(args):
        'Default is %s.' % DEFAULT_REPO_TYPE)
    parser.add_option('--vault-password-file', dest='vault_password_file',
        help="vault password file")
    parser.add_option('-K', '--ask-sudo-pass', default=False, dest='ask_sudo_pass', action='store_true',
        help='ask for sudo password')

    options, args = parser.parse_args(args)

    hostname = socket.getfqdn()
@@ -162,7 +170,18 @@ def main(args):
        inv_opts, base_opts, options.module_name, repo_opts
    )

    if options.sleep:
        try:
            secs = random.randint(0, int(options.sleep))
        except ValueError:
            parser.error("%s is not a number." % options.sleep)
            return 1

        print >>sys.stderr, "Sleeping for %d seconds..." % secs
        time.sleep(secs)

    # RUN THE CHECKOUT COMMAND
    rc, out, err = cmd_functions.run_cmd(cmd, live=True)

    if rc != 0:
@@ -185,6 +204,10 @@ def main(args):
        cmd += " --vault-password-file=%s" % options.vault_password_file
    if options.inventory:
        cmd += ' -i "%s"' % options.inventory
    for ev in options.extra_vars:
        cmd += ' -e "%s"' % ev
    if options.ask_sudo_pass:
        cmd += ' -K'
    os.chdir(options.dest)

    # RUN THE PLAYBOOK COMMAND
......
@@ -52,7 +52,7 @@ def build_option_parser(action):
        sys.exit()

    # options for all actions
    #parser.add_option('-c', '--cipher', dest='cipher', default="AES256", help="cipher to use")
    parser.add_option('--debug', dest='debug', action="store_true", help="debug")
    parser.add_option('--vault-password-file', dest='password_file',
        help="vault password file")
...@@ -105,7 +105,6 @@ def _read_password(filename):
f = open(filename, "rb")
data = f.read()
f.close()
# get rid of newline chars
data = data.strip()
return data
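_read_password above is just read-and-strip; an equivalent sketch using a context manager (which also sidesteps the easy-to-miss close() call):

```python
def read_vault_password(filename):
    """Read a vault password file and strip surrounding newline
    characters; returns bytes, like the helper above."""
    with open(filename, "rb") as f:
        data = f.read()
    return data.strip()
```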
...@@ -119,7 +118,7 @@ def execute_create(args, options, parser):
else:
password = _read_password(options.password_file)
cipher = 'AES256'
if hasattr(options, 'cipher'):
cipher = options.cipher
...@@ -133,7 +132,7 @@ def execute_decrypt(args, options, parser):
else:
password = _read_password(options.password_file)
cipher = 'AES256'
if hasattr(options, 'cipher'):
cipher = options.cipher
...@@ -161,15 +160,12 @@ def execute_edit(args, options, parser):
def execute_encrypt(args, options, parser):
if len(args) > 1:
raise errors.AnsibleError("'encrypt' does not accept more than one filename")
if not options.password_file:
password, new_password = utils.ask_vault_passwords(ask_vault_pass=True, confirm_vault=True)
else:
password = _read_password(options.password_file)
cipher = 'AES256'
if hasattr(options, 'cipher'):
cipher = options.cipher
......
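Each vault sub-command repeats the same default-then-override dance for the cipher. A compact equivalent (a sketch; the extra truthiness check also guards against an attribute set to None, which the inline version does not):

```python
def pick_cipher(options, default="AES256"):
    """Return options.cipher when present and non-empty,
    otherwise the AES256 default used by the vault commands."""
    if hasattr(options, "cipher") and options.cipher:
        return options.cipher
    return default
```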
'\" t
.\" Title: ansible-galaxy
.\" Author: [see the "AUTHOR" section]
.\" Generator: DocBook XSL Stylesheets v1.78.1 <http://docbook.sf.net/>
.\" Date: 03/16/2014
.\" Manual: System administration commands
.\" Source: Ansible 1.6
.\" Language: English
.\"
.TH "ANSIBLE\-GALAXY" "1" "03/16/2014" "Ansible 1\&.6" "System administration commands"
.\" -----------------------------------------------------------------
.\" * Define some portability stuff
.\" -----------------------------------------------------------------
.\" ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
.\" http://bugs.debian.org/507673
.\" http://lists.gnu.org/archive/html/groff/2009-02/msg00013.html
.\" ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
.ie \n(.g .ds Aq \(aq
.el .ds Aq '
.\" -----------------------------------------------------------------
.\" * set default formatting
.\" -----------------------------------------------------------------
.\" disable hyphenation
.nh
.\" disable justification (adjust text to left margin only)
.ad l
.\" -----------------------------------------------------------------
.\" * MAIN CONTENT STARTS HERE *
.\" -----------------------------------------------------------------
.SH "NAME"
ansible-galaxy \- manage roles using galaxy\&.ansible\&.com
.SH "SYNOPSIS"
.sp
ansible\-galaxy [init|info|install|list|remove] [\-\-help] [options] \&...
.SH "DESCRIPTION"
.sp
\fBAnsible Galaxy\fR is a shared repository for Ansible roles (added in ansible version 1\&.2)\&. The ansible\-galaxy command can be used to manage these roles, or to create a skeleton framework for roles you\(cqd like to upload to Galaxy\&.
.SH "COMMON OPTIONS"
.PP
\fB\-h\fR, \fB\-\-help\fR
.RS 4
Show a help message related to the given sub\-command\&.
.RE
.SH "INSTALL"
.sp
The \fBinstall\fR sub\-command is used to install roles\&.
.SS "USAGE"
.sp
$ ansible\-galaxy install [options] [\-r FILE | role_name(s)[,version] | tar_file(s)]
.sp
Roles can be installed in several different ways:
.sp
.RS 4
.ie n \{\
\h'-04'\(bu\h'+03'\c
.\}
.el \{\
.sp -1
.IP \(bu 2.3
.\}
A username\&.rolename[,version] \- this will install a single role\&. The Galaxy API will be contacted to provide the information about the role, and the corresponding \&.tar\&.gz will be downloaded from
\fBgithub\&.com\fR\&. If the version is omitted, the most recent version available will be installed\&.
.RE
.sp
.RS 4
.ie n \{\
\h'-04'\(bu\h'+03'\c
.\}
.el \{\
.sp -1
.IP \(bu 2.3
.\}
A file name, using
\fB\-r\fR
\- this will install multiple roles listed one per line\&. The format of each line is the same as above: username\&.rolename[,version]
.RE
.sp
.RS 4
.ie n \{\
\h'-04'\(bu\h'+03'\c
.\}
.el \{\
.sp -1
.IP \(bu 2.3
.\}
A \&.tar\&.gz of a valid role you\(cqve downloaded directly from
\fBgithub\&.com\fR\&. This is mainly useful when the system running Ansible does not have access to the Galaxy API, for instance when behind a firewall or proxy\&.
.RE
.SS "OPTIONS"
.PP
\fB\-f\fR, \fB\-\-force\fR
.RS 4
Force overwriting an existing role\&.
.RE
.PP
\fB\-i\fR, \fB\-\-ignore\-errors\fR
.RS 4
Ignore errors and continue with the next specified role\&.
.RE
.PP
\fB\-n\fR, \fB\-\-no\-deps\fR
.RS 4
Don\(cqt download roles listed as dependencies\&.
.RE
.PP
\fB\-p\fR \fIROLES_PATH\fR, \fB\-\-roles\-path=\fR\fIROLES_PATH\fR
.RS 4
The path to the directory containing your roles\&. The default is the
\fBroles_path\fR
configured in your
\fBansible\&.cfg\fR
file (/etc/ansible/roles if not configured)
.RE
.PP
\fB\-r\fR \fIROLE_FILE\fR, \fB\-\-role\-file=\fR\fIROLE_FILE\fR
.RS 4
A file containing a list of roles to be imported, as specified above\&. This option cannot be used if a rolename or \&.tar\&.gz have been specified\&.
.RE
.SH "REMOVE"
.sp
The \fBremove\fR sub\-command is used to remove one or more roles\&.
.SS "USAGE"
.sp
$ ansible\-galaxy remove role1 role2 \&...
.SS "OPTIONS"
.PP
\fB\-p\fR \fIROLES_PATH\fR, \fB\-\-roles\-path=\fR\fIROLES_PATH\fR
.RS 4
The path to the directory containing your roles\&. The default is the
\fBroles_path\fR
configured in your
\fBansible\&.cfg\fR
file (/etc/ansible/roles if not configured)
.RE
.SH "INIT"
.sp
The \fBinit\fR command is used to create an empty role suitable for uploading to https://galaxy\&.ansible\&.com (or for roles in general)\&.
.SS "USAGE"
.sp
$ ansible\-galaxy init [options] role_name
.SS "OPTIONS"
.PP
\fB\-f\fR, \fB\-\-force\fR
.RS 4
Force overwriting an existing role\&.
.RE
.PP
\fB\-p\fR \fIINIT_PATH\fR, \fB\-\-init\-path=\fR\fIINIT_PATH\fR
.RS 4
The path in which the skeleton role will be created\&. The default is the current working directory\&.
.RE
.SH "LIST"
.sp
The \fBlist\fR sub\-command is used to show what roles are currently installed\&. You can specify a role name, and if installed only that role will be shown\&.
.SS "USAGE"
.sp
$ ansible\-galaxy list [role_name]
.SS "OPTIONS"
.PP
\fB\-p\fR \fIROLES_PATH\fR, \fB\-\-roles\-path=\fR\fIROLES_PATH\fR
.RS 4
The path to the directory containing your roles\&. The default is the
\fBroles_path\fR
configured in your
\fBansible\&.cfg\fR
file (/etc/ansible/roles if not configured)
.RE
.SH "AUTHOR"
.sp
Ansible was originally written by Michael DeHaan\&. See the AUTHORS file for a complete list of contributors\&.
.SH "COPYRIGHT"
.sp
Copyright \(co 2014, Michael DeHaan
.sp
Ansible is released under the terms of the GPLv3 License\&.
.SH "SEE ALSO"
.sp
\fBansible\fR(1), \fBansible\-pull\fR(1), \fBansible\-doc\fR(1)
.sp
Extensive documentation is available in the documentation site: http://docs\&.ansible\&.com\&. IRC and mailing list info can be found in file CONTRIBUTING\&.md, available in: https://github\&.com/ansible/ansible
ansible-galaxy(1)
===================
:doctype: manpage
:man source: Ansible
:man version: %VERSION%
:man manual: System administration commands
NAME
----
ansible-galaxy - manage roles using galaxy.ansible.com
SYNOPSIS
--------
ansible-galaxy [init|info|install|list|remove] [--help] [options] ...
DESCRIPTION
-----------
*Ansible Galaxy* is a shared repository for Ansible roles (added in
ansible version 1.2). The ansible-galaxy command can be used to manage
these roles, or to create a skeleton framework for roles you'd like
to upload to Galaxy.
COMMON OPTIONS
--------------
*-h*, *--help*::
Show a help message related to the given sub-command.
INSTALL
-------
The *install* sub-command is used to install roles.
USAGE
~~~~~
$ ansible-galaxy install [options] [-r FILE | role_name(s)[,version] | tar_file(s)]
Roles can be installed in several different ways:
* A username.rolename[,version] - this will install a single role. The Galaxy
API will be contacted to provide the information about the role, and the
corresponding .tar.gz will be downloaded from *github.com*. If the version
is omitted, the most recent version available will be installed.
* A file name, using *-r* - this will install multiple roles listed one per
line. The format of each line is the same as above: username.rolename[,version]
* A .tar.gz of a valid role you've downloaded directly from *github.com*. This
is mainly useful when the system running Ansible does not have access to
the Galaxy API, for instance when behind a firewall or proxy.
OPTIONS
~~~~~~~
*-f*, *--force*::
Force overwriting an existing role.
*-i*, *--ignore-errors*::
Ignore errors and continue with the next specified role.
*-n*, *--no-deps*::
Don't download roles listed as dependencies.
*-p* 'ROLES_PATH', *--roles-path=*'ROLES_PATH'::
The path to the directory containing your roles. The default is the *roles_path*
configured in your *ansible.cfg* file (/etc/ansible/roles if not configured)
*-r* 'ROLE_FILE', *--role-file=*'ROLE_FILE'::
A file containing a list of roles to be imported, as specified above. This
option cannot be used if a rolename or .tar.gz have been specified.
REMOVE
------
The *remove* sub-command is used to remove one or more roles.
USAGE
~~~~~
$ ansible-galaxy remove role1 role2 ...
OPTIONS
~~~~~~~
*-p* 'ROLES_PATH', *--roles-path=*'ROLES_PATH'::
The path to the directory containing your roles. The default is the *roles_path*
configured in your *ansible.cfg* file (/etc/ansible/roles if not configured)
INIT
----
The *init* command is used to create an empty role suitable for uploading
to https://galaxy.ansible.com (or for roles in general).
USAGE
~~~~~
$ ansible-galaxy init [options] role_name
OPTIONS
~~~~~~~
*-f*, *--force*::
Force overwriting an existing role.
*-p* 'INIT_PATH', *--init-path=*'INIT_PATH'::
The path in which the skeleton role will be created. The default is the current
working directory.
LIST
----
The *list* sub-command is used to show what roles are currently installed.
You can specify a role name, and if installed only that role will be shown.
USAGE
~~~~~
$ ansible-galaxy list [role_name]
OPTIONS
~~~~~~~
*-p* 'ROLES_PATH', *--roles-path=*'ROLES_PATH'::
The path to the directory containing your roles. The default is the *roles_path*
configured in your *ansible.cfg* file (/etc/ansible/roles if not configured)
AUTHOR
------
Ansible was originally written by Michael DeHaan. See the AUTHORS file
for a complete list of contributors.
COPYRIGHT
---------
Copyright © 2014, Michael DeHaan
Ansible is released under the terms of the GPLv3 License.
SEE ALSO
--------
*ansible*(1), *ansible-pull*(1), *ansible-doc*(1)
Extensive documentation is available in the documentation site:
<http://docs.ansible.com>. IRC and mailing list info can be found
in file CONTRIBUTING.md, available in: <https://github.com/ansible/ansible>
...@@ -91,6 +91,66 @@ Prompt for the password to use for playbook plays that request sudo access, if a ...@@ -91,6 +91,66 @@ Prompt for the password to use for playbook plays that request sudo access, if a
Desired sudo user (default=root)\&. Desired sudo user (default=root)\&.
.RE .RE
.PP .PP
\fB\-S\fR, \fB\-\-su\fR
.RS 4
Run operations with su\&.
.RE
.PP
\fB\-\-ask\-su\-pass\fR
.RS 4
Prompt for the password to use for playbook plays that request su access, if any\&.
.RE
.PP
\fB\-R\fR, \fISU_USER\fR, \fB\-\-su\-user=\fR\fISU_USER\fR
.RS 4
Desired su user (default=root)\&.
.RE
.PP
\fB\-\-ask\-vault\-pass\fR
.RS 4
Ask for vault password\&.
.RE
.PP
\fB\-\-vault\-password\-file=\fR\fIVAULT_PASSWORD_FILE\fR
.RS 4
Vault password file\&.
.RE
.PP
\fB\-\-force\-handlers\fR
.RS 4
Run play handlers even if a task fails\&.
.RE
.PP
\fB\-\-list\-hosts\fR
.RS 4
Outputs a list of matching hosts without executing anything else\&.
.RE
.PP
\fB\-\-list\-tasks\fR
.RS 4
List all tasks that would be executed\&.
.RE
.PP
\fB\-\-start\-at\-task=\fR\fISTART_AT\fR
.RS 4
Start the playbook at the task matching this name\&.
.RE
.PP
\fB\-\-step\fR
.RS 4
One step at a time: confirm each task before running\&.
.RE
.PP
\fB\-\-syntax\-check\fR
.RS 4
Perform a syntax check on the playbook, but do not execute it\&.
.RE
.PP
\fB\-\-private\-key\fR
.RS 4
Use this file to authenticate the connection\&.
.RE
.PP
\fB\-t\fR, \fITAGS\fR, \fB\-\-tags=\fR\fITAGS\fR
.RS 4
Only run plays and tasks tagged with these values\&.
...@@ -147,6 +207,13 @@ is mostly useful for crontab or kickstarts\&.
.RS 4
Further limits the selected host/group patterns\&.
.RE
.PP
\fB\-\-version\fR
.RS 4
Show program's version number and exit\&.
.RE
.SH "ENVIRONMENT"
.sp
The following environment variables may be specified\&.
......
...@@ -76,11 +76,11 @@ access, if any.
Desired sudo user (default=root).
*-t*, 'TAGS', *--tags=*'TAGS'::
Only run plays and tasks tagged with these values.
*--skip-tags=*'SKIP_TAGS'::
Only run plays and tasks whose tags do not match these values.
......
'\" t
.\" Title: ansible-vault
.\" Author: [see the "AUTHOR" section]
.\" Generator: DocBook XSL Stylesheets v1.78.1 <http://docbook.sf.net/>
.\" Date: 03/17/2014
.\" Manual: System administration commands
.\" Source: Ansible 1.6
.\" Language: English
.\"
.TH "ANSIBLE\-VAULT" "1" "03/17/2014" "Ansible 1\&.6" "System administration commands"
.\" -----------------------------------------------------------------
.\" * Define some portability stuff
.\" -----------------------------------------------------------------
.\" ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
.\" http://bugs.debian.org/507673
.\" http://lists.gnu.org/archive/html/groff/2009-02/msg00013.html
.\" ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
.ie \n(.g .ds Aq \(aq
.el .ds Aq '
.\" -----------------------------------------------------------------
.\" * set default formatting
.\" -----------------------------------------------------------------
.\" disable hyphenation
.nh
.\" disable justification (adjust text to left margin only)
.ad l
.\" -----------------------------------------------------------------
.\" * MAIN CONTENT STARTS HERE *
.\" -----------------------------------------------------------------
.SH "NAME"
ansible-vault \- manage encrypted YAML data\&.
.SH "SYNOPSIS"
.sp
ansible\-vault [create|decrypt|edit|encrypt|rekey] [\-\-help] [options] file_name
.SH "DESCRIPTION"
.sp
\fBansible\-vault\fR can encrypt any structured data file used by Ansible\&. This can include \fBgroup_vars/\fR or \fBhost_vars/\fR inventory variables, variables loaded by \fBinclude_vars\fR or \fBvars_files\fR, or variable files passed on the ansible\-playbook command line with \fB\-e @file\&.yml\fR or \fB\-e @file\&.json\fR\&. Role variables and defaults are also included!
.sp
Because Ansible tasks, handlers, and so on are also data, these can also be encrypted with vault\&. If you\(cqd like not to betray even which variables you are using, you can go as far as to keep an individual task file entirely encrypted\&.
.SH "COMMON OPTIONS"
.sp
The following options are available to all sub\-commands:
.PP
\fB\-\-vault\-password\-file=\fR\fIFILE\fR
.RS 4
A file containing the vault password to be used during the encryption/decryption steps\&. Be sure to keep this file secured if it is used\&.
.RE
.PP
\fB\-h\fR, \fB\-\-help\fR
.RS 4
Show a help message related to the given sub\-command\&.
.RE
.PP
\fB\-\-debug\fR
.RS 4
Enable debugging output for troubleshooting\&.
.RE
.SH "CREATE"
.sp
\fB$ ansible\-vault create [options] FILE\fR
.sp
The \fBcreate\fR sub\-command is used to initialize a new encrypted file\&.
.sp
First you will be prompted for a password\&. The password used with vault currently must be the same for all files you wish to use together at the same time\&.
.sp
After providing a password, the tool will launch whatever editor you have defined with $EDITOR, and defaults to vim\&. Once you are done with the editor session, the file will be saved as encrypted data\&.
.sp
The default cipher is AES (which is shared\-secret based)\&.
.SH "EDIT"
.sp
\fB$ ansible\-vault edit [options] FILE\fR
.sp
The \fBedit\fR sub\-command is used to modify a file which was previously encrypted using ansible\-vault\&.
.sp
This command will decrypt the file to a temporary file and allow you to edit the file, saving it back when done and removing the temporary file\&.
.SH "REKEY"
.sp
\fB$ ansible\-vault rekey [options] FILE_1 [FILE_2, \&..., FILE_N]\fR
.sp
The \fBrekey\fR command is used to change the password on vault\-encrypted files\&. This command can update multiple files at once, and will prompt for both the old and new passwords before modifying any data\&.
.SH "ENCRYPT"
.sp
\fB$ ansible\-vault encrypt [options] FILE_1 [FILE_2, \&..., FILE_N]\fR
.sp
The \fBencrypt\fR sub\-command is used to encrypt pre\-existing data files\&. As with the \fBrekey\fR command, you can specify multiple files in one command\&.
.SH "DECRYPT"
.sp
\fB$ ansible\-vault decrypt [options] FILE_1 [FILE_2, \&..., FILE_N]\fR
.sp
The \fBdecrypt\fR sub\-command is used to remove all encryption from data files\&. The files will be stored as plain\-text YAML once again, so be sure that you do not run this command on data files with active passwords or other sensitive data\&. In most cases, users will want to use the \fBedit\fR sub\-command to modify the files securely\&.
.SH "AUTHOR"
.sp
Ansible was originally written by Michael DeHaan\&. See the AUTHORS file for a complete list of contributors\&.
.SH "COPYRIGHT"
.sp
Copyright \(co 2014, Michael DeHaan
.sp
Ansible is released under the terms of the GPLv3 License\&.
.SH "SEE ALSO"
.sp
\fBansible\fR(1), \fBansible\-pull\fR(1), \fBansible\-doc\fR(1)
.sp
Extensive documentation is available in the documentation site: http://docs\&.ansible\&.com\&. IRC and mailing list info can be found in file CONTRIBUTING\&.md, available in: https://github\&.com/ansible/ansible
ansible-vault(1)
================
:doctype: manpage
:man source: Ansible
:man version: %VERSION%
:man manual: System administration commands
NAME
----
ansible-vault - manage encrypted YAML data.
SYNOPSIS
--------
ansible-vault [create|decrypt|edit|encrypt|rekey] [--help] [options] file_name
DESCRIPTION
-----------
*ansible-vault* can encrypt any structured data file used by Ansible. This can include
*group_vars/* or *host_vars/* inventory variables, variables loaded by *include_vars* or
*vars_files*, or variable files passed on the ansible-playbook command line with
*-e @file.yml* or *-e @file.json*. Role variables and defaults are also included!
Because Ansible tasks, handlers, and so on are also data, these can also be encrypted with
vault. If you'd like not to betray even which variables you are using, you can go as far
as to keep an individual task file entirely encrypted.
COMMON OPTIONS
--------------
The following options are available to all sub-commands:
*--vault-password-file=*'FILE'::
A file containing the vault password to be used during the encryption/decryption
steps. Be sure to keep this file secured if it is used.
*-h*, *--help*::
Show a help message related to the given sub-command.
*--debug*::
Enable debugging output for troubleshooting.
CREATE
------
*$ ansible-vault create [options] FILE*
The *create* sub-command is used to initialize a new encrypted file.
First you will be prompted for a password. The password used with vault currently
must be the same for all files you wish to use together at the same time.
After providing a password, the tool will launch whatever editor you have defined
with $EDITOR, and defaults to vim. Once you are done with the editor session, the
file will be saved as encrypted data.
The default cipher is AES (which is shared-secret based).
EDIT
----
*$ ansible-vault edit [options] FILE*
The *edit* sub-command is used to modify a file which was previously encrypted
using ansible-vault.
This command will decrypt the file to a temporary file and allow you to edit the
file, saving it back when done and removing the temporary file.
REKEY
-----
*$ ansible-vault rekey [options] FILE_1 [FILE_2, ..., FILE_N]*
The *rekey* command is used to change the password on vault-encrypted files.
This command can update multiple files at once, and will prompt for both the
old and new passwords before modifying any data.
ENCRYPT
-------
*$ ansible-vault encrypt [options] FILE_1 [FILE_2, ..., FILE_N]*
The *encrypt* sub-command is used to encrypt pre-existing data files. As with the
*rekey* command, you can specify multiple files in one command.
DECRYPT
-------
*$ ansible-vault decrypt [options] FILE_1 [FILE_2, ..., FILE_N]*
The *decrypt* sub-command is used to remove all encryption from data files. The files
will be stored as plain-text YAML once again, so be sure that you do not run this
command on data files with active passwords or other sensitive data. In most cases,
users will want to use the *edit* sub-command to modify the files securely.
AUTHOR
------
Ansible was originally written by Michael DeHaan. See the AUTHORS file
for a complete list of contributors.
COPYRIGHT
---------
Copyright © 2014, Michael DeHaan
Ansible is released under the terms of the GPLv3 License.
SEE ALSO
--------
*ansible*(1), *ansible-pull*(1), *ansible-doc*(1)
Extensive documentation is available in the documentation site:
<http://docs.ansible.com>. IRC and mailing list info can be found
in file CONTRIBUTING.md, available in: <https://github.com/ansible/ansible>
...@@ -123,7 +123,7 @@ a lot shorter than this::
for arg in arguments:
# ignore any arguments without an equals in it
if "=" in arg:
(key, value) = arg.split("=")
......
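The `"=" in arg` membership test above is the idiomatic replacement for `find() != -1`. A fuller sketch of the same pattern adds one refinement worth knowing: split with a maxsplit of 1 so that values may themselves contain an equals sign:

```python
def parse_kv(arguments):
    """Parse key=value argument strings into a dict, skipping
    arguments without an equals sign."""
    result = {}
    for arg in arguments:
        if "=" in arg:
            key, value = arg.split("=", 1)  # keep '=' inside values
            result[key] = value
    return result
```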
...@@ -140,16 +140,16 @@ Then you can use the facts inside your template, like this::
.. _programatic_access_to_a_variable:
How do I access a variable name programmatically?
+++++++++++++++++++++++++++++++++++++++++++++++++
An example may come up where we need to get the ipv4 address of an arbitrary interface, where the interface to be used may be supplied
via a role parameter or other input. Variable names can be built by adding strings together, like so::
{{ hostvars[inventory_hostname]['ansible_' + which_interface]['ipv4']['address'] }}
The trick about going through hostvars is necessary because it's a dictionary of the entire namespace of variables. 'inventory_hostname'
is a magic variable that indicates the current host you are looping over in the host loop.
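Stripped of Jinja2, the hostvars trick is ordinary dynamic dictionary indexing. In plain Python, with toy facts standing in for real inventory data:

```python
# minimal stand-in for the hostvars namespace
hostvars = {
    "web1": {
        "ansible_eth0": {"ipv4": {"address": "10.0.0.5"}},
        "ansible_eth1": {"ipv4": {"address": "192.168.1.5"}},
    }
}
inventory_hostname = "web1"
which_interface = "eth1"

# the same lookup the Jinja2 expression performs
address = hostvars[inventory_hostname]["ansible_" + which_interface]["ipv4"]["address"]
```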
.. _first_host_in_a_group: .. _first_host_in_a_group:
...@@ -179,17 +179,7 @@ Notice how we interchanged the bracket syntax for dots -- that can be done anywh
How do I copy files recursively onto a target host?
+++++++++++++++++++++++++++++++++++++++++++++++++++
The "copy" module has a recursive parameter, though if you want to do something more efficient for a large number of files, take a look at the "synchronize" module instead, which wraps rsync. See the module index for info on both of these modules.
.. _shell_env: .. _shell_env:
...@@ -256,7 +246,7 @@ Great question! Documentation for Ansible is kept in the main project git repos
How do I keep secret data in my playbook?
+++++++++++++++++++++++++++++++++++++++++
If you would like to keep secret data in your Ansible content and still share it publicly or keep things in source control, see :doc:`playbooks_vault`.
.. _i_dont_see_my_question: .. _i_dont_see_my_question:
......
...@@ -129,7 +129,7 @@ it will be automatically discoverable via a dynamic group like so::
- ping
Using this philosophy can be a great way to manage groups dynamically, without
having to maintain separate inventory.
.. _aws_pull: .. _aws_pull:
......
Google Cloud Platform Guide
===========================
.. _gce_intro:
Introduction
------------
.. note:: This section of the documentation is under construction. We are in the process of adding more examples about all of the GCE modules and how they work together. Upgrades via github pull requests are welcome!
Ansible contains modules for managing Google Compute Engine resources, including creating instances, controlling network access, working with persistent disks, and managing
load balancers. Additionally, there is an inventory plugin that can automatically suck down all of your GCE instances into Ansible dynamic inventory, and create groups by tag and other properties.
The GCE modules all require the apache-libcloud module, which you can install from pip:
.. code-block:: bash
$ pip install apache-libcloud
.. note:: If you're using Ansible on Mac OS X, libcloud also needs to access a CA cert chain. You'll need to download one (you can get one `here <http://curl.haxx.se/docs/caextract.html>`_).
Credentials
-----------
To work with the GCE modules, you'll first need to get some credentials. You can create a new one from the `console <https://console.developers.google.com/>`_ by going to the "APIs and Auth" section. Once you've created a new client ID and downloaded the generated private key (in the `pkcs12 format <http://en.wikipedia.org/wiki/PKCS_12>`_), you'll need to convert the key by running the following command:
.. code-block:: bash
$ openssl pkcs12 -in pkey.pkcs12 -passin pass:notasecret -nodes -nocerts | openssl rsa -out pkey.pem
There are two different ways to provide credentials to Ansible so that it can talk with Google Cloud for provisioning and configuration actions:
* by providing to the modules directly
* by populating a ``secrets.py`` file
Calling Modules By Passing Credentials
``````````````````````````````````````
For the GCE modules you can specify the credentials as arguments:
* ``service_account_email``: email associated with the project
* ``pem_file``: path to the pem file
* ``project_id``: id of the project
For example, to create a new instance using the cloud module, you can use the following configuration:
.. code-block:: yaml
- name: Create instance(s)
hosts: localhost
connection: local
gather_facts: no
vars:
service_account_email: unique-id@developer.gserviceaccount.com
pem_file: /path/to/project.pem
project_id: project-id
machine_type: n1-standard-1
image: debian-7
tasks:
- name: Launch instances
gce:
instance_names: dev
machine_type: "{{ machine_type }}"
image: "{{ image }}"
service_account_email: "{{ service_account_email }}"
pem_file: "{{ pem_file }}"
project_id: "{{ project_id }}"
Calling Modules with secrets.py
```````````````````````````````
Create a file ``secrets.py`` that looks like the following, and put it in a folder that is on your ``$PYTHONPATH``:
.. code-block:: python
GCE_PARAMS = ('i...@project.googleusercontent.com', '/path/to/project.pem')
GCE_KEYWORD_PARAMS = {'project': 'project-name'}
Now the modules can be used as above, but the account information can be omitted.
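Under the hood this works because the modules can import anything on ``$PYTHONPATH``. A rough sketch of that fallback (the ``module_name`` parameter is for illustration only; the real modules hard-code the ``secrets`` name):

```python
import importlib

def load_gce_credentials(module_name="secrets"):
    """Fetch GCE_PARAMS / GCE_KEYWORD_PARAMS from a module found on
    PYTHONPATH, returning empty defaults when it is missing."""
    try:
        mod = importlib.import_module(module_name)
    except ImportError:
        return None, {}
    return (getattr(mod, "GCE_PARAMS", None),
            getattr(mod, "GCE_KEYWORD_PARAMS", {}))
```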
GCE Dynamic Inventory
---------------------
The best way to interact with your hosts is to use the gce inventory plugin, which dynamically queries GCE and tells Ansible what nodes can be managed.
Note that when using the inventory script ``gce.py``, you also need to populate the ``gce.ini`` file that you can find in the plugins/inventory directory of the ansible checkout.
To use the GCE dynamic inventory script, copy ``gce.py`` from ``plugins/inventory`` into your inventory directory and make it executable. You can specify credentials for ``gce.py`` using the ``GCE_INI_PATH`` environment variable -- the default is to look for gce.ini in the same directory as the inventory script.
Let's see if inventory is working:
.. code-block:: bash
$ ./gce.py --list
You should see output describing the hosts you have, if any, running in Google Compute Engine.
Now let's see if we can use the inventory script to talk to Google.
.. code-block:: bash
$ GCE_INI_PATH=~/.gce.ini ansible all -i gce.py -m setup
hostname | success >> {
"ansible_facts": {
"ansible_all_ipv4_addresses": [
"x.x.x.x"
],
As with all dynamic inventory plugins in Ansible, you can configure the inventory path in ansible.cfg. The recommended way to use the inventory is to create an ``inventory`` directory, and place both the ``gce.py`` script and a file containing ``localhost`` in it. This can allow for cloud inventory to be used alongside local inventory (such as a physical datacenter) or machines running in different providers.
Executing ``ansible`` or ``ansible-playbook`` and specifying the ``inventory`` directory instead of an individual file will cause ansible to evaluate each file in that directory for inventory.
Let's once again use our inventory script to see if it can talk to Google Cloud:
.. code-block:: bash

    $ ansible all -i inventory/ -m setup
    hostname | success >> {
        "ansible_facts": {
            "ansible_all_ipv4_addresses": [
                "x.x.x.x"
            ],
The output should be similar to the previous command. If you want less output and just want to check for SSH connectivity, use ``-m ping`` instead.
Use Cases
---------
For the following use case, let's use this small shell script as a wrapper.
.. code-block:: bash

    #!/bin/bash
    PLAYBOOK="$1"

    if [ -z "$PLAYBOOK" ]; then
        echo "You need to pass a playbook as argument to this script."
        exit 1
    fi

    export SSL_CERT_FILE=$(pwd)/cacert.pem
    export ANSIBLE_HOST_KEY_CHECKING=False

    if [ ! -f "$SSL_CERT_FILE" ]; then
        curl -O http://curl.haxx.se/ca/cacert.pem
    fi

    ansible-playbook -v -i inventory/ "$PLAYBOOK"
Create an instance
``````````````````
The GCE module provides the ability to provision instances within Google Compute Engine. The provisioning task is typically performed from your Ansible control server against Google Cloud's API.
A playbook would look like this:
.. code-block:: yaml

    - name: Create instance(s)
      hosts: localhost
      gather_facts: no
      connection: local

      vars:
        machine_type: n1-standard-1 # default
        image: debian-7
        service_account_email: unique-id@developer.gserviceaccount.com
        pem_file: /path/to/project.pem
        project_id: project-id

      tasks:
        - name: Launch instances
          gce:
            instance_names: dev
            machine_type: "{{ machine_type }}"
            image: "{{ image }}"
            service_account_email: "{{ service_account_email }}"
            pem_file: "{{ pem_file }}"
            project_id: "{{ project_id }}"
            tags: webserver
          register: gce

        - name: Wait for SSH to come up
          wait_for: host={{ item.public_ip }} port=22 delay=10 timeout=60
          with_items: gce.instance_data

        - name: Add the instances to an in-memory group
          add_host: hostname={{ item.public_ip }} groupname=new_instances
          with_items: gce.instance_data

    - name: Manage new instances
      hosts: new_instances
      connection: ssh
      roles:
        - base_configuration
        - production_server
Note that use of the "add_host" module above creates a temporary, in-memory group. This means that a play in the same playbook can then manage machines
in the 'new_instances' group, if so desired. Any sort of arbitrary configuration is possible at this point.
Configuring instances in a group
````````````````````````````````
All of the created instances in GCE are grouped by tag. Since this is a cloud, it's probably best to ignore hostnames and just focus on group management.
Normally we'd also use roles here, but the following example is a simple one. Here we will also use the "gce_net" module to open up access to port 80 on
these nodes.
The variables in the 'vars' section could also be kept in a 'vars_files' file or something encrypted with Ansible-vault, if you so choose. This is just
a basic example of what is possible::
    - name: Setup web servers
      hosts: tag_webserver
      gather_facts: no

      vars:
        machine_type: n1-standard-1 # default
        image: debian-7
        service_account_email: unique-id@developer.gserviceaccount.com
        pem_file: /path/to/project.pem
        project_id: project-id

      tasks:
        - name: Install lighttpd
          apt: pkg=lighttpd state=installed
          sudo: True

        - name: Allow HTTP
          local_action: gce_net
          args:
            fwname: "all-http"
            name: "default"
            allowed: "tcp:80"
            state: "present"
            service_account_email: "{{ service_account_email }}"
            pem_file: "{{ pem_file }}"
            project_id: "{{ project_id }}"
By pointing your browser to the IP of the server, you should see a page welcoming you.
Upgrades to this documentation are welcome, hit the github link at the top right of this page if you would like to make additions!
Here's another example, from the same template::

    {% endfor %}

This loops over all of the hosts in the group called ``monitoring``, and adds an ACCEPT line for
each monitoring hosts' default IPV4 address to the current machine's iptables configuration, so that Nagios can monitor those hosts.

You can learn a lot more about Jinja2 and its capabilities `here <http://jinja.pocoo.org/docs/>`_, and you
can read more about Ansible variables in general in the :doc:`playbooks_variables` section.
The Rolling Upgrade

Now you have a fully-deployed site with web servers, a load balancer, and monitoring. How do you update it? This is where Ansible's
orchestration features come into play. While some applications use the term 'orchestration' to mean basic ordering or command-blasting, Ansible
refers to orchestration as 'conducting machines like an orchestra', and has a pretty sophisticated engine for it.

Ansible has the capability to do operations on multi-tier applications in a coordinated way, making it easy to orchestrate a sophisticated zero-downtime rolling upgrade of our web application. This is implemented in a separate playbook, called ``rolling_upgrade.yml``.

The next part is the update play. The first part looks like this::

      user: root
      serial: 1

This is just a normal play definition, operating on the ``webservers`` group. The ``serial`` keyword tells Ansible how many servers to operate on at once. If it's not specified, Ansible will parallelize these operations up to the default "forks" limit specified in the configuration file. But for a zero-downtime rolling upgrade, you may not want to operate on that many hosts at once. If you had just a handful of webservers, you may want to set ``serial`` to 1, for one host at a time. If you have 100, maybe you could set ``serial`` to 10, for ten at a time.

Here is the next part of the update play::
Introduction
````````````

Vagrant is a tool to manage virtual machine environments, and allows you to
configure and use reproducible work environments on top of various
virtualization and cloud platforms. It also has integration with Ansible as a
provisioner for these virtual machines, and the two tools work together well.
This section is new and evolving. The idea here is to explore particular use cases.

   guide_aws
   guide_rax
   guide_gce
   guide_vagrant
   guide_rolling_upgrade

Pending topics may include: Docker, Jenkins, Google Compute Engine, Linode/Digital Ocean, Continuous Deployment, and more.
Ansible Guru

While many users should be able to get on fine with the documentation, mailing list, and IRC, sometimes you want a bit more.

`Ansible Guru <http://ansible.com/ansible-guru>`_ is an offering from Ansible, Inc that helps users who would like more dedicated help with Ansible, including building playbooks, best practices, architecture suggestions, and more -- all from our awesome support and services team. It also includes some useful discounts and also some free T-shirts, though you shouldn't get it just for the free shirts! It's a great way to train up to becoming an Ansible expert.

For those interested, click through the link above. You can sign up in minutes!
We believe simplicity is relevant to all sizes of environments and design for busy users of all types.

Ansible manages machines in an agentless manner. There is never a question of how to
upgrade remote daemons or the problem of not being able to manage systems because daemons are uninstalled. As OpenSSH is one of the most peer reviewed open source components, the security exposure of using the tool is greatly reduced. Ansible is decentralized -- it relies on your existing OS credentials to control access to remote machines; if needed it can easily connect with Kerberos, LDAP, and other centralized authentication management systems.

This documentation covers the current released version of Ansible (1.5.3) and also some development version features (1.6). For recent features, in each section, the version of Ansible where the feature is added is indicated. Ansible, Inc releases a new major release of Ansible approximately every 2 months. The core application evolves somewhat conservatively, valuing simplicity in language design and setup, while the community around new modules and plugins being developed and contributed moves very very quickly, typically adding 20 or so new modules in each release.

.. _an_introduction:
Be sure to use a high enough ``--forks`` value if you want to get all of your jobs started
very quickly. After the time limit (in seconds) runs out (``-B``), the process on
the remote nodes will be terminated.

Typically you'll only be backgrounding long-running
shell commands or software upgrades. Backgrounding the copy module does not do a background file transfer. :doc:`Playbooks <playbooks>` also support polling, and have a simplified syntax for this.

.. _checking_facts:
is very very conservative::

    forks=5

.. _gathering:

gathering
=========

New in 1.6, the 'gathering' setting controls the default policy of facts gathering (variables discovered about remote systems).

The value 'implicit' is the default, meaning facts will be gathered per play unless 'gather_facts: False' is set in the play. The value 'explicit' is the inverse; facts will not be gathered unless directly requested in the play.

The value 'smart' means each new host that has no facts discovered will be scanned, but if the same host is addressed in multiple plays it will not be contacted again in the playbook run. This option can be useful for those wishing to save fact gathering time.
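Like the other defaults above, this setting lives in the ``[defaults]`` section of ``ansible.cfg``. For example, to opt into the 'smart' policy (a minimal sketch using one of the three values described above)::

    [defaults]
    gathering = smart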
hash_behaviour
==============
Most users will not need to use this feature. See :doc:`developing_plugins` for more details

.. _module_lang:

module_lang
===========

This is to set the default language to communicate between the module and the system. By default, the value is 'C'.

.. _module_name:

module_name
===========
choose to establish a convention to checkout roles in /opt/mysite/roles like so::

    roles_path = /opt/mysite/roles

Additional paths can be provided separated by colon characters, in the same way as other pathstrings::

    roles_path = /opt/mysite/roles:/opt/othersite/roles

Roles will be first searched for in the playbook directory. Should a role not be found, it will indicate all the possible paths
that were searched.
This setting controls the timeout for the socket connect call, and should be kept relatively low.

Note, this value can be set to less than one second, however it is probably not a good idea to do so unless you're on a very fast and reliable LAN. If you're connecting to systems over the internet, it may be necessary to increase this timeout.

.. _accelerate_daemon_timeout:

accelerate_daemon_timeout
=========================

.. versionadded:: 1.6

This setting controls the timeout for the accelerated daemon, as measured in minutes. The default daemon timeout is 30 minutes::

    accelerate_daemon_timeout = 30

Note, prior to 1.6, the timeout was hard-coded from the time of the daemon's launch. For version 1.6+, the timeout is now based on the last activity to the daemon and is configurable via this option.

.. _accelerate_multi_key:

accelerate_multi_key
====================

.. versionadded:: 1.6

If enabled, this setting allows multiple private keys to be uploaded to the daemon. Any clients connecting to the daemon must also enable this option::

    accelerate_multi_key = yes

New clients first connect to the target node over SSH to upload the key, which is done via a local socket file, so they must have the same access as the user that launched the daemon originally.
It is expected that many Ansible users with a reasonable amount of physical hardware may also be Cobbler users.

While primarily used to kickoff OS installations and manage DHCP and DNS, Cobbler has a generic
layer that allows it to represent data for multiple configuration management systems (even at the same time), and has
been referred to as a 'lightweight CMDB' by some admins.

To tie Ansible's inventory to Cobbler (optional), copy `this script <https://raw.github.com/ansible/ansible/devel/plugins/inventory/cobbler.py>`_ to /etc/ansible and `chmod +x` the file. cobblerd will now need
to be running when you are using Ansible and you'll need to use Ansible's ``-i`` command line option (e.g. ``-i /etc/ansible/cobbler.py``).
This particular script will communicate with Cobbler using Cobbler's XMLRPC API.

First test the script by running ``/etc/ansible/cobbler.py`` directly. You should see some JSON data output, but it may not have anything in it just yet.
You may also wish to install from ports, run:

.. code-block:: bash

    $ sudo make -C /usr/ports/sysutils/ansible install

.. _from_brew:

Latest Releases Via Homebrew (Mac OSX)
++++++++++++++++++++++++++++++++++++++

To install on a Mac, make sure you have Homebrew, then run:

.. code-block:: bash

    $ brew update
    $ brew install ansible

.. _from_pip:

Latest Releases Via Pip
+++++++++++++++++++++++
handle executing system commands.

Let's review how we execute three different modules from the command line::

    ansible webservers -m service -a "name=httpd state=started"
    ansible webservers -m ping
    ansible webservers -m command -a "/sbin/reboot -t now"
You Might Not Need This!

Are you running Ansible 1.5 or later? If so, you may not need accelerate mode due to a new feature called "SSH pipelining" and should read the :ref:`pipelining` section of the documentation.

For users on 1.5 and later, accelerate mode only makes sense if you (A) are managing from an Enterprise Linux 6 or earlier host
and still are on paramiko, or (B) can't enable TTYs with sudo as described in the pipelining docs.

If you can use pipelining, Ansible will reduce the amount of files transferred over the wire,

As noted above, accelerated mode also supports running tasks via sudo, however there are two caveats:

* You must remove requiretty from your sudoers options.
* Prompting for the sudo password is not yet supported, so the NOPASSWD option is required for sudo'ed commands.

As of Ansible version `1.6`, you can also allow the use of multiple keys for connections from multiple Ansible management nodes. To do so, add the following option
to your `ansible.cfg` configuration::

    accelerate_multi_key = yes

When enabled, the daemon will open a UNIX socket file (by default `$ANSIBLE_REMOTE_TEMP/.ansible-accelerate/.local.socket`). New connections over SSH can
use this socket file to upload new keys to the daemon.
The top level of the directory would contain files and directories like so::

            foo.sh        # <-- script files for use with the script resource
        vars/             #
            main.yml      # <-- variables associated with this role
        meta/             #
            main.yml      # <-- role dependencies

    webtier/              # same kind of structure as "common" was above, done for the webtier role
    monitoring/           # ""

What about just the first 10, and then the next 10?::

And of course just basic ad-hoc stuff is also possible::

    ansible boston -i production -m ping
    ansible boston -i production -m command -a '/sbin/reboot'

And there are some useful commands to know (at least in 1.1 and higher)::
The environment can also be stored in a variable, and accessed like so::

    - hosts: all
      remote_user: root

      # here we make a variable named "proxy_env" that is a dictionary
      vars:
        proxy_env:
          http_proxy: http://proxy.example.com:8080
Assuming you load balance your checkout location, ansible-pull scales essentially infinitely.

Run ``ansible-pull --help`` for details.

There's also a `clever playbook <https://github.com/ansible/ansible-examples/blob/master/language_features/ansible_pull.yml>`_ available to configure ansible-pull via a crontab from push mode.

.. _tips_and_tricks:

package is installed. Try it!

To see what hosts would be affected by a playbook before you run it, you
can do this::

    ansible-playbook playbook.yml --list-hosts

.. seealso::
in Ansible, and are typically used to load variables or templates with information.

.. note:: This is considered an advanced feature, and many users will probably not rely on these features.

.. note:: Lookups occur on the local computer, not on the remote computer.

.. contents:: Topics

.. _getting_file_contents:
that matches a given criteria, and some of the filenames are determined by variables::

    - name: INTERFACES | Create Ansible header for /etc/network/interfaces
      template: src={{ item }} dest=/etc/foo.conf
      with_first_found:
        - "{{ansible_virtualization_type}}_foo.conf"
        - "default_foo.conf"

This tool also has a long form version that allows for configurable search paths. Here's an example::
Inside a template you automatically have access to all of the variables that are in scope, and
it's more than that -- you can also read variables about other hosts. We'll show how to do that in a bit.

.. note:: ansible allows Jinja2 loops and conditionals in templates, but in playbooks, we do not use them. Ansible
   playbooks are pure machine-parseable YAML. This is a rather important feature as it means it is possible to code-generate
   pieces of files, or to have other ecosystem tools read Ansible files. Not everyone will need this but it can unlock
   possibilities.
To get the symmetric difference of 2 lists (items exclusive to each list)::

    {{ list1 | symmetric_difference(list2) }}

.. _version_comparison_filters:

Version Comparison Filters
--------------------------

.. versionadded:: 1.6

To compare a version number, such as checking if the ``ansible_distribution_version``
version is greater than or equal to '12.04', you can use the ``version_compare`` filter::

    {{ ansible_distribution_version | version_compare('12.04', '>=') }}

If ``ansible_distribution_version`` is greater than or equal to 12.04, this filter will return True, otherwise
it will return False.

The ``version_compare`` filter accepts the following operators::

    <, lt, <=, le, >, gt, >=, ge, ==, =, eq, !=, <>, ne

This filter also accepts a 3rd parameter, ``strict``, which defines whether strict version parsing should
be used. The default is ``False``, and if set to ``True`` will use more strict version parsing::

    {{ sample_version_var | version_compare('1.0', operator='lt', strict=True) }}
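The point of the filter is that versions are compared component-wise rather than as plain strings: '12.10' is newer than '12.04' even though it sorts earlier alphabetically. For intuition only, here is a simplified sketch of that idea in plain Python (not Ansible's actual implementation, which builds on Python's version-comparison classes):

```python
def version_tuple(v):
    """Split a dotted version string into a tuple of ints, so '12.04' < '12.10'."""
    return tuple(int(part) for part in v.split('.'))

# Rough equivalent of {{ ansible_distribution_version | version_compare('12.04', '>=') }}
print(version_tuple('12.10') >= version_tuple('12.04'))  # True
print(version_tuple('10.04') >= version_tuple('12.04'))  # False
```

Note how a plain string comparison would get the first case wrong, since '12.04' > '12.10' lexicographically.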
.. _random_filter:

Random Number Filter
--------------------

.. versionadded:: 1.6

To get a random number from 0 to the supplied end::

    {{ 59 | random }} * * * * root /script/from/cron

To get a random number from 0 to 100 in steps of 10::

    {{ 100 | random(step=10) }}  => 70

To get a random number from 1 to 100 in steps of 10::

    {{ 100 | random(1, 10) }}  => 31
    {{ 100 | random(start=1, step=10) }}  => 51
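Assuming the end/start/step semantics shown above, the selection behaves like Python's ``random.randrange`` over the candidate values; a sketch of that assumption (not Ansible's code):

```python
import random

# Equivalent in spirit to {{ 100 | random(start=1, step=10) }}:
# choose one of 1, 11, 21, ..., 91
value = random.randrange(1, 100, 10)
print(value)
```

This is why the sample results above (31, 51) are always one more than a multiple of ten.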
.. _other_useful_filters:

Other Useful Filters
--------------------

To concatenate a list into a string::

    {{ list | join(" ") }}

To get the last name of a file path, like 'foo.txt' out of '/etc/asdf/foo.txt'::

    {{ path | basename }}
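Since Ansible is written in Python, these two filters map onto standard Python behaviour (``os.path.basename`` and joining list items with a separator); for intuition:

```python
import os.path

# Equivalent of {{ path | basename }} with path='/etc/asdf/foo.txt'
print(os.path.basename('/etc/asdf/foo.txt'))  # foo.txt

# Equivalent of {{ list | join(" ") }} with list=['a', 'b', 'c']
print(" ".join(['a', 'b', 'c']))  # a b c
```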
doesn't know it is a boolean value::

    - debug: msg=test
      when: some_string_value | bool

To replace text in a string with regex, use the "regex_replace" filter::

    # convert "ansible" to "able"
    {{ 'ansible' | regex_replace('^a.*i(.*)$', 'a\\1') }}

    # convert "foobar" to "bar"
    {{ 'foobar' | regex_replace('^f.*o(.*)$', '\\1') }}
A few useful filters are typically added with each new Ansible release. The development documentation shows
how to extend Ansible filters by writing your own as plugins, though in general, we encourage new ones
to be added to core so everyone can make use of them.
If multiple variables of the same name are defined in different places, they win in a certain order, which is:

* -e variables always win
* then comes "most everything else"
* then comes variables defined in inventory
* then comes facts discovered about a system
* then "role defaults", which are the most "defaulty" and lose in priority to everything.

.. note:: In versions prior to 1.5.4, facts discovered about a system were in the "most everything else" category above.

That seems a little theoretical. Let's show some examples and where you would choose to put what based on the kind of
control you might want over values.
See :doc:`playbooks_roles` for more info about this::

    ---
    # file: roles/x/defaults/main.yml
    # if not overridden in inventory or as a parameter, this is the value that will be used
    http_port: 80

if you are writing a role and want to ensure the value in the role is absolutely used in that role, and is not going to be overridden
@@ -14,7 +14,7 @@ What Can Be Encrypted With Vault

The vault feature can encrypt any structured data file used by Ansible. This can include "group_vars/" or "host_vars/" inventory variables, variables loaded by "include_vars" or "vars_files", or variable files passed on the ansible-playbook command line with "-e @file.yml" or "-e @file.json". Role variables and defaults are also included!

-Because Ansible tasks, handlers, and so on are also data, these two can also be encrypted with vault. If you'd like to not betray what variables you are even using, you can go as far to keep an individual task file entirely encrypted. However, that might be a little much and could annoy your coworkers :)
+Because Ansible tasks, handlers, and so on are also data, these can also be encrypted with vault. If you'd like to not betray what variables you are even using, you can go as far to keep an individual task file entirely encrypted. However, that might be a little much and could annoy your coworkers :)

.. _creating_files:
...
@@ -22,8 +22,17 @@ sudo_user = root
#ask_pass = True
transport = smart
remote_port = 22
+module_lang = C
+
+# plays will gather facts by default, which contain information about
+# the remote system.
+#
+# smart - gather by default, but don't regather if already gathered
+# implicit - gather by default, turn off with gather_facts: False
+# explicit - do not gather by default, must say gather_facts: True
+gathering = implicit

-# additional paths to search for roles in, colon seperated
+# additional paths to search for roles in, colon separated
#roles_path = /etc/ansible/roles

# uncomment this to disable SSH key host checking
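The three `gathering` policies described above boil down to a small decision. A hypothetical sketch of that logic (the helper name and signature are invented for illustration; this is not Ansible's implementation):

```python
def should_gather(policy, gather_facts=None, cached=False):
    """Decide whether a play should run fact gathering.

    policy       -- 'smart', 'implicit', or 'explicit' (the gathering setting)
    gather_facts -- the play's explicit gather_facts value, or None if unset
    cached       -- whether facts for this host were already gathered
    """
    if gather_facts is not None:
        return gather_facts          # an explicit play setting wins in this sketch
    if policy == 'smart':
        return not cached            # gather by default, but don't regather
    if policy == 'explicit':
        return False                 # only gather when the play asks for it
    return True                      # 'implicit': gather by default

print(should_gather('smart', cached=True))   # False
print(should_gather('implicit'))             # True
```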
@@ -82,7 +91,7 @@ ansible_managed = Ansible managed: {file} modified on %Y-%m-%d %H:%M:%S by {uid}
# to revert the behavior to pre-1.3.
#error_on_undefined_vars = False

-# set plugin path directories here, seperate with colons
+# set plugin path directories here, separate with colons
action_plugins = /usr/share/ansible_plugins/action_plugins
callback_plugins = /usr/share/ansible_plugins/callback_plugins
connection_plugins = /usr/share/ansible_plugins/connection_plugins
@@ -98,6 +107,20 @@ filter_plugins = /usr/share/ansible_plugins/filter_plugins
# set to 1 if you don't want colors, or export ANSIBLE_NOCOLOR=1
#nocolor = 1
+# the CA certificate path used for validating SSL certs. This path
+# should exist on the controlling node, not the target nodes
+# common locations:
+# RHEL/CentOS: /etc/pki/tls/certs/ca-bundle.crt
+# Fedora     : /etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem
+# Ubuntu     : /usr/share/ca-certificates/cacert.org/cacert.org.crt
+#ca_file_path =
+
+# the http user-agent string to use when fetching urls. Some web server
+# operators block the default urllib user agent as it is frequently used
+# by malicious attacks/scripts, so we set it to something unique to
+# avoid issues.
+#http_user_agent = ansible-agent
[paramiko_connection]

# uncomment this line to cause the paramiko connection plugin to not record new host
@@ -145,3 +168,14 @@ filter_plugins = /usr/share/ansible_plugins/filter_plugins
accelerate_port = 5099
accelerate_timeout = 30
accelerate_connect_timeout = 5.0
+
+# The daemon timeout is measured in minutes. This time is measured
+# from the last activity to the accelerate daemon.
+accelerate_daemon_timeout = 30
+
+# If set to yes, accelerate_multi_key will allow multiple
+# private keys to be uploaded to it, though each user must
+# have access to the system via SSH to add a new key. The default
+# is "no".
+#accelerate_multi_key = yes
@@ -17,7 +17,7 @@ and do not wish to install them from your operating system package manager, you
can install them from pip

    $ easy_install pip               # if pip is not already available
-   $ pip install pyyaml jinja2
+   $ pip install pyyaml jinja2 nose passlib pycrypto

From there, follow ansible instructions on docs.ansible.com as normal.
...
@@ -185,7 +185,7 @@ def process_module(module, options, env, template, outputname, module_map):
    fname = module_map[module]

    # ignore files with extensions
-    if os.path.basename(fname).find(".") != -1:
+    if "." in os.path.basename(fname):
        return

    # use ansible core library to parse out doc metadata YAML and plaintext examples
...
@@ -93,6 +93,10 @@ def boilerplate_module(modfile, args, interpreter):
            # Argument is a YAML file (JSON is a subset of YAML)
            complex_args = utils.combine_vars(complex_args, utils.parse_yaml_from_file(args[1:]))
            args=''
+        elif args.startswith("{"):
+            # Argument is a YAML document (not a file)
+            complex_args = utils.combine_vars(complex_args, utils.parse_yaml(args))
+            args=''

    inject = {}
    if interpreter:
...
@@ -115,6 +115,12 @@ def log_unflock(runner):
        except OSError:
            pass

+def set_playbook(callback, playbook):
+    ''' used to notify callback plugins of playbook context '''
+    callback.playbook = playbook
+    for callback_plugin in callback_plugins:
+        callback_plugin.playbook = playbook
+
def set_play(callback, play):
    ''' used to notify callback plugins of context '''
    callback.play = play
@@ -250,7 +256,7 @@ def regular_generic_msg(hostname, result, oneline, caption):

def banner_cowsay(msg):
-    if msg.find(": [") != -1:
+    if ": [" in msg:
        msg = msg.replace("[","")
        if msg.endswith("]"):
            msg = msg[:-1]
...
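Many hunks in this commit swap `s.find(sub) != -1` for the more idiomatic `sub in s`. The two spell the same membership test:

```python
msg = "TASK: [setup]"

# old style: str.find returns -1 when the substring is absent
old_style = msg.find(": [") != -1

# new style: the `in` operator says the same thing without the -1 sentinel
new_style = ": [" in msg

print(old_style, new_style)  # True True
```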
@@ -15,7 +15,6 @@
# You should have received a copy of the GNU General Public License
# along with Ansible. If not, see <http://www.gnu.org/licenses/>.

-import os
import sys
import constants
@@ -37,7 +36,7 @@ else:
        # curses returns an error (e.g. could not find terminal)
        ANSIBLE_COLOR=False

-if os.getenv("ANSIBLE_FORCE_COLOR") is not None:
+if constants.ANSIBLE_FORCE_COLOR:
    ANSIBLE_COLOR=True

# --- begin "pretty"
...
@@ -93,8 +93,8 @@ else:
    DIST_MODULE_PATH = '/usr/share/ansible/'

# check all of these extensions when looking for yaml files for things like
-# group variables
-YAML_FILENAME_EXTENSIONS = [ "", ".yml", ".yaml" ]
+# group variables -- really anything we can load
+YAML_FILENAME_EXTENSIONS = [ "", ".yml", ".yaml", ".json" ]

# sections in config file
DEFAULTS='defaults'
@@ -134,6 +134,7 @@ DEFAULT_SU = get_config(p, DEFAULTS, 'su', 'ANSIBLE_SU', False, boolean=True)
DEFAULT_SU_FLAGS = get_config(p, DEFAULTS, 'su_flags', 'ANSIBLE_SU_FLAGS', '')
DEFAULT_SU_USER = get_config(p, DEFAULTS, 'su_user', 'ANSIBLE_SU_USER', 'root')
DEFAULT_ASK_SU_PASS = get_config(p, DEFAULTS, 'ask_su_pass', 'ANSIBLE_ASK_SU_PASS', False, boolean=True)
+DEFAULT_GATHERING = get_config(p, DEFAULTS, 'gathering', 'ANSIBLE_GATHERING', 'implicit').lower()
DEFAULT_ACTION_PLUGIN_PATH = get_config(p, DEFAULTS, 'action_plugins', 'ANSIBLE_ACTION_PLUGINS', '/usr/share/ansible_plugins/action_plugins')
DEFAULT_CALLBACK_PLUGIN_PATH = get_config(p, DEFAULTS, 'callback_plugins', 'ANSIBLE_CALLBACK_PLUGINS', '/usr/share/ansible_plugins/callback_plugins')
@@ -143,6 +144,7 @@ DEFAULT_VARS_PLUGIN_PATH = get_config(p, DEFAULTS, 'vars_plugins', '
DEFAULT_FILTER_PLUGIN_PATH = get_config(p, DEFAULTS, 'filter_plugins', 'ANSIBLE_FILTER_PLUGINS', '/usr/share/ansible_plugins/filter_plugins')
DEFAULT_LOG_PATH = shell_expand_path(get_config(p, DEFAULTS, 'log_path', 'ANSIBLE_LOG_PATH', ''))
+ANSIBLE_FORCE_COLOR = get_config(p, DEFAULTS, 'force_color', 'ANSIBLE_FORCE_COLOR', None, boolean=True)
ANSIBLE_NOCOLOR = get_config(p, DEFAULTS, 'nocolor', 'ANSIBLE_NOCOLOR', None, boolean=True)
ANSIBLE_NOCOWS = get_config(p, DEFAULTS, 'nocows', 'ANSIBLE_NOCOWS', None, boolean=True)
DISPLAY_SKIPPED_HOSTS = get_config(p, DEFAULTS, 'display_skipped_hosts', 'DISPLAY_SKIPPED_HOSTS', True, boolean=True)
@@ -160,9 +162,11 @@ ZEROMQ_PORT = get_config(p, 'fireball_connection', 'zeromq_po
ACCELERATE_PORT = get_config(p, 'accelerate', 'accelerate_port', 'ACCELERATE_PORT', 5099, integer=True)
ACCELERATE_TIMEOUT = get_config(p, 'accelerate', 'accelerate_timeout', 'ACCELERATE_TIMEOUT', 30, integer=True)
ACCELERATE_CONNECT_TIMEOUT = get_config(p, 'accelerate', 'accelerate_connect_timeout', 'ACCELERATE_CONNECT_TIMEOUT', 1.0, floating=True)
+ACCELERATE_DAEMON_TIMEOUT = get_config(p, 'accelerate', 'accelerate_daemon_timeout', 'ACCELERATE_DAEMON_TIMEOUT', 30, integer=True)
ACCELERATE_KEYS_DIR = get_config(p, 'accelerate', 'accelerate_keys_dir', 'ACCELERATE_KEYS_DIR', '~/.fireball.keys')
ACCELERATE_KEYS_DIR_PERMS = get_config(p, 'accelerate', 'accelerate_keys_dir_perms', 'ACCELERATE_KEYS_DIR_PERMS', '700')
ACCELERATE_KEYS_FILE_PERMS = get_config(p, 'accelerate', 'accelerate_keys_file_perms', 'ACCELERATE_KEYS_FILE_PERMS', '600')
+ACCELERATE_MULTI_KEY = get_config(p, 'accelerate', 'accelerate_multi_key', 'ACCELERATE_MULTI_KEY', False, boolean=True)
PARAMIKO_PTY = get_config(p, 'paramiko_connection', 'pty', 'ANSIBLE_PARAMIKO_PTY', True, boolean=True)

# characters included in auto-generated passwords
...
@@ -99,12 +99,40 @@ class Inventory(object):
                self.host_list = os.path.join(self.host_list, "")
                self.parser = InventoryDirectory(filename=host_list)
                self.groups = self.parser.groups.values()
-            elif utils.is_executable(host_list):
+            else:
+                # check to see if the specified file starts with a
+                # shebang (#!/), so if an error is raised by the parser
+                # class we can show a more apropos error
+                shebang_present = False
+                try:
+                    inv_file = open(host_list)
+                    first_line = inv_file.readlines()[0]
+                    inv_file.close()
+                    if first_line.startswith('#!'):
+                        shebang_present = True
+                except:
+                    pass
+
+                if utils.is_executable(host_list):
+                    try:
                        self.parser = InventoryScript(filename=host_list)
                        self.groups = self.parser.groups.values()
+                    except:
+                        if not shebang_present:
+                            raise errors.AnsibleError("The file %s is marked as executable, but failed to execute correctly. " % host_list + \
+                                "If this is not supposed to be an executable script, correct this with `chmod -x %s`." % host_list)
+                        else:
+                            raise
-            else:
+                else:
+                    try:
                        self.parser = InventoryParser(filename=host_list)
                        self.groups = self.parser.groups.values()
+                    except:
+                        if shebang_present:
+                            raise errors.AnsibleError("The file %s looks like it should be an executable inventory script, but is not marked executable. " % host_list + \
+                                "Perhaps you want to correct this with `chmod +x %s`?" % host_list)
+                        else:
+                            raise

            utils.plugins.vars_loader.add_directory(self.basedir(), with_subdir=True)
        else:
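The shebang probe added above just reads the first line of the inventory file. Roughly the same check in isolation (simplified, with a narrower `except` than the patch uses):

```python
import tempfile

def has_shebang(path):
    """Return True if the file's first line starts with '#!'."""
    try:
        with open(path) as f:
            first_line = f.readline()
        return first_line.startswith('#!')
    except IOError:
        return False

# an inventory source that is really a script announces itself on line one
with tempfile.NamedTemporaryFile('w', delete=False) as script:
    script.write('#!/bin/sh\necho "[webservers]"\n')

print(has_shebang(script.name))  # True
```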
@@ -208,12 +236,14 @@ class Inventory(object):
        """

        # The regex used to match on the range, which can be [x] or [x-y].
-        pattern_re = re.compile("^(.*)\[([0-9]+)(?:(?:-)([0-9]+))?\](.*)$")
+        pattern_re = re.compile("^(.*)\[([-]?[0-9]+)(?:(?:-)([0-9]+))?\](.*)$")
        m = pattern_re.match(pattern)
        if m:
            (target, first, last, rest) = m.groups()
            first = int(first)
            if last:
+                if first < 0:
+                    raise errors.AnsibleError("invalid range: negative indices cannot be used as the first item in a range")
                last = int(last)
            else:
                last = first
@@ -245,10 +275,13 @@ class Inventory(object):
            right = 0
        left=int(left)
        right=int(right)
+        try:
            if left != right:
                return hosts[left:right]
            else:
                return [ hosts[left] ]
+        except IndexError:
+            raise errors.AnsibleError("no hosts matching the pattern '%s' were found" % pat)

    def _create_implicit_localhost(self, pattern):
        new_host = Host(pattern)
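The `try/except IndexError` added here matters only for the `hosts[left]` branch: Python slices clamp out-of-range bounds to an empty list, while plain subscripts raise. For example:

```python
hosts = ['web1', 'web2']

print(hosts[5:9])   # [] -- out-of-range slices never raise

try:
    hosts[5]        # out-of-range subscripts do
except IndexError:
    print("no hosts matching the pattern were found")
```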
@@ -363,9 +396,9 @@ class Inventory(object):
        vars_results = [ plugin.run(host, vault_password=vault_password) for plugin in self._vars_plugins ]
        for updated in vars_results:
            if updated is not None:
-                vars.update(updated)
+                vars = utils.combine_vars(vars, updated)

-        vars.update(host.get_variables())
+        vars = utils.combine_vars(vars, host.get_variables())
        if self.parser is not None:
            vars = utils.combine_vars(vars, self.parser.get_host_variables(host))
        return vars
...
@@ -41,10 +41,7 @@ def detect_range(line = None):
    Returns True if the given line contains a pattern, else False.
    '''
-    if (line.find("[") != -1 and
-        line.find(":") != -1 and
-        line.find("]") != -1 and
-        line.index("[") < line.index(":") < line.index("]")):
+    if 0 <= line.find("[") < line.find(":") < line.find("]"):
        return True
    else:
        return False
...
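The rewritten condition packs two Python behaviors into one line: `str.find` returns -1 for a missing substring, and chained comparisons require every link to hold, so the test passes only when `[`, `:`, and `]` all exist and appear in that order. A standalone copy of the expression:

```python
def detect_range(line):
    """True when the line holds a [x:y] host range, e.g. db[01:66].example.com."""
    return 0 <= line.find("[") < line.find(":") < line.find("]")

print(detect_range("db[01:66].example.com"))  # True
print(detect_range("plainhost.example.com"))  # False -- find() returned -1
print(detect_range("odd]order[:"))            # False -- wrong ordering
```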
@@ -16,6 +16,7 @@
# along with Ansible. If not, see <http://www.gnu.org/licenses/>.

import ansible.constants as C
+from ansible import utils

class Host(object):
    ''' a single ansible host '''
@@ -56,7 +57,7 @@ class Host(object):
        results = {}
        groups = self.get_groups()
        for group in sorted(groups, key=lambda g: g.depth):
-            results.update(group.get_variables())
+            results = utils.combine_vars(results, group.get_variables())
        results.update(self.vars)
        results['inventory_hostname'] = self.name
        results['inventory_hostname_short'] = self.name.split('.')[0]
...
@@ -23,6 +23,7 @@ from ansible.inventory.group import Group
from ansible.inventory.expand_hosts import detect_range
from ansible.inventory.expand_hosts import expand_hostname_range
from ansible import errors
+from ansible import utils
import shlex
import re
import ast
@@ -47,6 +48,20 @@ class InventoryParser(object):
        self._parse_group_variables()
        return self.groups
+    @staticmethod
+    def _parse_value(v):
+        if "#" not in v:
+            try:
+                return ast.literal_eval(v)
+            # Using explicit exceptions.
+            # Likely a string that literal_eval does not like. We will then just set it.
+            except ValueError:
+                # For some reason this was thought to be malformed.
+                pass
+            except SyntaxError:
+                # Is this a hash with an equals at the end?
+                pass
+        return v
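The new `_parse_value` leans on `ast.literal_eval`, which evaluates Python literals safely and raises `ValueError` or `SyntaxError` for anything else. The same idea outside the class (a simplified stand-alone copy):

```python
import ast

def parse_value(v):
    """Coerce an INI value to a Python literal when possible, else keep the string."""
    if "#" not in v:
        try:
            return ast.literal_eval(v)
        except (ValueError, SyntaxError):
            # not a literal (e.g. a bare hostname) -- keep it as a string
            pass
    return v

print(parse_value("1024"))              # 1024 (an int)
print(parse_value("'quoted'"))          # quoted (string, quotes stripped)
print(parse_value("web1.example.com"))  # web1.example.com (left alone)
```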
    # [webservers]
    # alpha
@@ -65,10 +80,10 @@ class InventoryParser(object):
        active_group_name = 'ungrouped'

        for line in self.lines:
-            line = line.split("#")[0].strip()
+            line = utils.before_comment(line).strip()
            if line.startswith("[") and line.endswith("]"):
                active_group_name = line.replace("[","").replace("]","")
-                if line.find(":vars") != -1 or line.find(":children") != -1:
+                if ":vars" in line or ":children" in line:
                    active_group_name = active_group_name.rsplit(":", 1)[0]
                    if active_group_name not in self.groups:
                        new_group = self.groups[active_group_name] = Group(name=active_group_name)
@@ -94,11 +109,11 @@ class InventoryParser(object):
                # FQDN foo.example.com
                if hostname.count(".") == 1:
                    (hostname, port) = hostname.rsplit(".", 1)
-            elif (hostname.find("[") != -1 and
-                  hostname.find("]") != -1 and
-                  hostname.find(":") != -1 and
-                  (hostname.rindex("]") < hostname.rindex(":")) or
-                  (hostname.find("]") == -1 and hostname.find(":") != -1)):
+            elif ("[" in hostname and
+                  "]" in hostname and
+                  ":" in hostname and
+                  (hostname.rindex("]") < hostname.rindex(":")) or
+                  ("]" not in hostname and ":" in hostname)):
                (hostname, port) = hostname.rsplit(":", 1)

            hostnames = []
@@ -122,12 +137,7 @@ class InventoryParser(object):
                        (k,v) = t.split("=", 1)
                    except ValueError, e:
                        raise errors.AnsibleError("Invalid ini entry: %s - %s" % (t, str(e)))
-                    try:
-                        host.set_variable(k,ast.literal_eval(v))
-                    except:
-                        # most likely a string that literal_eval
-                        # doesn't like, so just set it
-                        host.set_variable(k,v)
+                    host.set_variable(k, self._parse_value(v))
                self.groups[active_group_name].add_host(host)

    # [southeast:children]
@@ -141,7 +151,7 @@ class InventoryParser(object):
            line = line.strip()
            if line is None or line == '':
                continue
-            if line.startswith("[") and line.find(":children]") != -1:
+            if line.startswith("[") and ":children]" in line:
                line = line.replace("[","").replace(":children]","")
                group = self.groups.get(line, None)
                if group is None:
@@ -166,7 +176,7 @@ class InventoryParser(object):
        group = None
        for line in self.lines:
            line = line.strip()
-            if line.startswith("[") and line.find(":vars]") != -1:
+            if line.startswith("[") and ":vars]" in line:
                line = line.replace("[","").replace(":vars]","")
                group = self.groups.get(line, None)
                if group is None:
@@ -178,16 +188,11 @@ class InventoryParser(object):
            elif line == '':
                pass
            elif group:
-                if line.find("=") == -1:
+                if "=" not in line:
                    raise errors.AnsibleError("variables assigned to group must be in key=value form")
                else:
                    (k, v) = [e.strip() for e in line.split("=", 1)]
-                    # When the value is a single-quoted or double-quoted string
-                    if re.match(r"^(['\"]).*\1$", v):
-                        # Unquote the string
-                        group.set_variable(k, re.sub(r"^['\"]|['\"]$", '', v))
-                    else:
-                        group.set_variable(k, v)
+                    group.set_variable(k, self._parse_value(v))

    def get_host_variables(self, host):
        return {}
@@ -86,7 +86,7 @@ def _load_vars_from_path(path, results, vault_password=None):
    if stat.S_ISDIR(pathstat.st_mode):
        # support organizing variables across multiple files in a directory
-        return True, _load_vars_from_folder(path, results)
+        return True, _load_vars_from_folder(path, results, vault_password=vault_password)

    # regular file
    elif stat.S_ISREG(pathstat.st_mode):
@@ -105,7 +105,7 @@ def _load_vars_from_path(path, results, vault_password=None):
    raise errors.AnsibleError("Expected a variable file or directory "
                              "but found a non-file object at path %s" % (path, ))

-def _load_vars_from_folder(folder_path, results):
+def _load_vars_from_folder(folder_path, results, vault_password=None):
    """
    Load all variables within a folder recursively.
    """
@@ -123,9 +123,10 @@ def _load_vars_from_folder(folder_path, results):
    # filesystem lists them.
    names.sort()

-    paths = [os.path.join(folder_path, name) for name in names]
+    # do not parse hidden files or dirs, e.g. .svn/
+    paths = [os.path.join(folder_path, name) for name in names if not name.startswith('.')]
    for path in paths:
-        _found, results = _load_vars_from_path(path, results)
+        _found, results = _load_vars_from_path(path, results, vault_password=vault_password)
    return results
...
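The hidden-file guard is a one-line list-comprehension filter; the same pattern in isolation:

```python
import os

names = ['.svn', 'all.yml', '.gitkeep', 'webservers.yml']
names.sort()

# skip anything that starts with a dot, e.g. version-control metadata
paths = [os.path.join('group_vars', name) for name in names if not name.startswith('.')]
print(paths)
```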
@@ -95,7 +95,7 @@ class ModuleReplacer(object):
        for line in lines:

-            if line.find(REPLACER) != -1:
+            if REPLACER in line:
                output.write(self.slurp(os.path.join(self.snippet_path, "basic.py")))
                snippet_names.append('basic')
            elif line.startswith('from ansible.module_utils.'):
@@ -103,7 +103,7 @@ class ModuleReplacer(object):
                import_error = False
                if len(tokens) != 3:
                    import_error = True
-                if line.find(" import *") == -1:
+                if " import *" not in line:
                    import_error = True
                if import_error:
                    raise errors.AnsibleError("error importing module in %s, expecting format like 'from ansible.module_utils.basic import *'" % module_path)
...
# This code is part of Ansible, but is an independent component.
# This particular file snippet, and this file snippet only, is BSD licensed.
# Modules you write using this snippet, which is embedded dynamically by Ansible
# still belong to the author of the module, and may assign their own license
# to the complete work.
#
# Copyright (c), Michael DeHaan <michael.dehaan@gmail.com>, 2012-2013
# All rights reserved.
#
# Redistribution and use in source and binary forms, with or without modification,
# are permitted provided that the following conditions are met:
#
# * Redistributions of source code must retain the above copyright
# notice, this list of conditions and the following disclaimer.
# * Redistributions in binary form must reproduce the above copyright notice,
# this list of conditions and the following disclaimer in the documentation
# and/or other materials provided with the distribution.
#
# THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS" AND
# ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED
# WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE DISCLAIMED.
# IN NO EVENT SHALL THE COPYRIGHT HOLDER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT,
# INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO,
# PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS
# INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT
# LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE
# USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
try:
    from distutils.version import LooseVersion
    HAS_LOOSE_VERSION = True
@@ -14,33 +42,44 @@ AWS_REGIONS = ['ap-northeast-1',
               'us-west-2']

-def ec2_argument_keys_spec():
+def aws_common_argument_spec():
    return dict(
+        ec2_url=dict(),
        aws_secret_key=dict(aliases=['ec2_secret_key', 'secret_key'], no_log=True),
        aws_access_key=dict(aliases=['ec2_access_key', 'access_key']),
+        validate_certs=dict(default=True, type='bool'),
+        security_token=dict(no_log=True),
+        profile=dict(),
    )
+    return spec

def ec2_argument_spec():
-    spec = ec2_argument_keys_spec()
+    spec = aws_common_argument_spec()
    spec.update(
        dict(
            region=dict(aliases=['aws_region', 'ec2_region'], choices=AWS_REGIONS),
-            validate_certs=dict(default=True, type='bool'),
-            ec2_url=dict(),
        )
    )
    return spec
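The refactor above composes argument specs by starting from a shared AWS dict and layering service-specific options on with `dict.update`. A trimmed-down sketch of the composition pattern (only a subset of the real options is shown):

```python
def aws_common_argument_spec():
    # options shared by every AWS module
    return dict(
        aws_access_key=dict(),
        aws_secret_key=dict(no_log=True),
        validate_certs=dict(default=True, type='bool'),
    )

def ec2_argument_spec():
    # start from the common spec and layer EC2-only options on top
    spec = aws_common_argument_spec()
    spec.update(dict(region=dict(aliases=['aws_region', 'ec2_region'])))
    return spec

print(sorted(ec2_argument_spec()))
# ['aws_access_key', 'aws_secret_key', 'region', 'validate_certs']
```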
def get_ec2_creds(module): def boto_supports_profile_name():
return hasattr(boto.ec2.EC2Connection, 'profile_name')
def get_aws_connection_info(module):

    # Check module args for credentials, then check environment vars
    # access_key

    ec2_url = module.params.get('ec2_url')
    access_key = module.params.get('aws_access_key')
    secret_key = module.params.get('aws_secret_key')
    security_token = module.params.get('security_token')
    region = module.params.get('region')
    profile_name = module.params.get('profile')
    validate_certs = module.params.get('validate_certs')

    if not ec2_url:
        if 'EC2_URL' in os.environ:
@@ -48,21 +87,27 @@ def get_ec2_creds(module):
        elif 'AWS_URL' in os.environ:
            ec2_url = os.environ['AWS_URL']

    if not access_key:
        if 'EC2_ACCESS_KEY' in os.environ:
            access_key = os.environ['EC2_ACCESS_KEY']
        elif 'AWS_ACCESS_KEY_ID' in os.environ:
            access_key = os.environ['AWS_ACCESS_KEY_ID']
        elif 'AWS_ACCESS_KEY' in os.environ:
            access_key = os.environ['AWS_ACCESS_KEY']
        else:
            # in case access_key came in as empty string
            access_key = None

    if not secret_key:
        if 'EC2_SECRET_KEY' in os.environ:
            secret_key = os.environ['EC2_SECRET_KEY']
        elif 'AWS_SECRET_ACCESS_KEY' in os.environ:
            secret_key = os.environ['AWS_SECRET_ACCESS_KEY']
        elif 'AWS_SECRET_KEY' in os.environ:
            secret_key = os.environ['AWS_SECRET_KEY']
        else:
            # in case secret_key came in as empty string
            secret_key = None
    if not region:
        if 'EC2_REGION' in os.environ:
@@ -75,35 +120,71 @@ def get_ec2_creds(module):
        if not region:
            region = boto.config.get('Boto', 'ec2_region')

    if not security_token:
        if 'AWS_SECURITY_TOKEN' in os.environ:
            security_token = os.environ['AWS_SECURITY_TOKEN']
        else:
            # in case security_token came in as empty string
            security_token = None

    boto_params = dict(aws_access_key_id=access_key,
                       aws_secret_access_key=secret_key,
                       security_token=security_token)

    # profile_name only works as a key in boto >= 2.24
    # so only set profile_name if passed as an argument
    if profile_name:
        if not boto_supports_profile_name():
            module.fail_json("boto does not support profile_name before 2.24")
        boto_params['profile_name'] = profile_name

    if validate_certs and HAS_LOOSE_VERSION and LooseVersion(boto.Version) >= LooseVersion("2.6.0"):
        boto_params['validate_certs'] = validate_certs

    return region, ec2_url, boto_params

def get_ec2_creds(module):
    ''' for compatibility mode with old modules that don't/can't yet
        use ec2_connect method '''
    region, ec2_url, boto_params = get_aws_connection_info(module)
    return ec2_url, boto_params['aws_access_key_id'], boto_params['aws_secret_access_key'], region
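`get_aws_connection_info()` resolves each credential from module arguments first, then falls back through an ordered list of environment variables. The pattern can be sketched on its own (a hedged illustration, not the module's API; the `AKIDEXAMPLE` value is made up):

```python
import os

def first_env(*names):
    # Return the value of the first environment variable that is set,
    # else None -- the same fallback order the function above applies.
    for name in names:
        if name in os.environ:
            return os.environ[name]
    return None

# Simulate a host where only the AWS-style variable is set.
os.environ.pop('EC2_ACCESS_KEY', None)
os.environ['AWS_ACCESS_KEY_ID'] = 'AKIDEXAMPLE'

access_key = first_env('EC2_ACCESS_KEY', 'AWS_ACCESS_KEY_ID', 'AWS_ACCESS_KEY')
```

The ordering matters: the legacy `EC2_*` names win over the newer `AWS_*` names, preserving behavior for users who set both.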
def boto_fix_security_token_in_profile(conn, profile_name):
    ''' monkey patch for boto issue boto/boto#2100 '''
    profile = 'profile ' + profile_name
    if boto.config.has_option(profile, 'aws_security_token'):
        conn.provider.set_security_token(boto.config.get(profile, 'aws_security_token'))
    return conn

def connect_to_aws(aws_module, region, **params):
    conn = aws_module.connect_to_region(region, **params)
    if params.get('profile_name'):
        conn = boto_fix_security_token_in_profile(conn, params['profile_name'])
    return conn
def ec2_connect(module):

    """ Return an ec2 connection"""

    region, ec2_url, boto_params = get_aws_connection_info(module)

    # If we have a region specified, connect to its endpoint.
    if region:
        try:
            ec2 = connect_to_aws(boto.ec2, region, **boto_params)
        except boto.exception.NoAuthHandlerFound, e:
            module.fail_json(msg=str(e))
    # Otherwise, no region so we fallback to the old connection method
    elif ec2_url:
        try:
            ec2 = boto.connect_ec2_endpoint(ec2_url, **boto_params)
        except boto.exception.NoAuthHandlerFound, e:
            module.fail_json(msg=str(e))
    else:
        module.fail_json(msg="Either region or ec2_url must be specified")

    return ec2
# This code is part of Ansible, but is an independent component.
# This particular file snippet, and this file snippet only, is BSD licensed.
# Modules you write using this snippet, which is embedded dynamically by Ansible
# still belong to the author of the module, and may assign their own license
# to the complete work.
#
# Copyright (c), Franck Cuny <franck.cuny@gmail.com>, 2014
# All rights reserved.
#
# Redistribution and use in source and binary forms, with or without modification,
# are permitted provided that the following conditions are met:
#
# * Redistributions of source code must retain the above copyright
# notice, this list of conditions and the following disclaimer.
# * Redistributions in binary form must reproduce the above copyright notice,
# this list of conditions and the following disclaimer in the documentation
# and/or other materials provided with the distribution.
#
# THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS" AND
# ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED
# WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE DISCLAIMED.
# IN NO EVENT SHALL THE COPYRIGHT HOLDER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT,
# INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO,
# PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS
# INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT
# LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE
# USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
#
USER_AGENT_PRODUCT="Ansible-gce"
USER_AGENT_VERSION="v1"
...
# This code is part of Ansible, but is an independent component.
# This particular file snippet, and this file snippet only, is BSD licensed.
# Modules you write using this snippet, which is embedded dynamically by Ansible
# still belong to the author of the module, and may assign their own license
# to the complete work.
#
# Copyright (c), Michael DeHaan <michael.dehaan@gmail.com>, 2012-2013
# All rights reserved.
#
# Redistribution and use in source and binary forms, with or without modification,
# are permitted provided that the following conditions are met:
#
# * Redistributions of source code must retain the above copyright
# notice, this list of conditions and the following disclaimer.
# * Redistributions in binary form must reproduce the above copyright notice,
# this list of conditions and the following disclaimer in the documentation
# and/or other materials provided with the distribution.
#
# THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS" AND
# ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED
# WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE DISCLAIMED.
# IN NO EVENT SHALL THE COPYRIGHT HOLDER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT,
# INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO,
# PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS
# INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT
# LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE
# USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
import hmac
from hashlib import sha1
HASHED_KEY_MAGIC = "|1|"
def add_git_host_key(module, url, accept_hostkey=True, create_dir=True):

    """ idempotently add a git url hostkey """
@@ -8,7 +40,7 @@ def add_git_host_key(module, url, accept_hostkey=True):
    known_host = check_hostkey(module, fqdn)
    if not known_host:
        if accept_hostkey:
            rc, out, err = add_host_key(module, fqdn, create_dir=create_dir)
            if rc != 0:
                module.fail_json(msg="failed to add %s hostkey: %s" % (fqdn, out + err))
        else:
@@ -30,41 +62,94 @@ def get_fqdn(repo_url):

    return result
def check_hostkey(module, fqdn):
    return not not_in_host_file(module, fqdn)

# this is a variant of code found in connection_plugins/paramiko.py and we should modify
# the paramiko code to import and use this.

def not_in_host_file(self, host):

    if 'USER' in os.environ:
        user_host_file = os.path.expandvars("~${USER}/.ssh/known_hosts")
    else:
        user_host_file = "~/.ssh/known_hosts"
    user_host_file = os.path.expanduser(user_host_file)

    host_file_list = []
    host_file_list.append(user_host_file)
    host_file_list.append("/etc/ssh/ssh_known_hosts")
    host_file_list.append("/etc/ssh/ssh_known_hosts2")

    hfiles_not_found = 0
    for hf in host_file_list:
        if not os.path.exists(hf):
            hfiles_not_found += 1
            continue
        try:
            host_fh = open(hf)
        except IOError, e:
            hfiles_not_found += 1
            continue
        else:
            data = host_fh.read()
            host_fh.close()

        for line in data.split("\n"):
            if line is None or " " not in line:
                continue
            tokens = line.split()
            if tokens[0].find(HASHED_KEY_MAGIC) == 0:
                # this is a hashed known host entry
                try:
                    (kn_salt, kn_host) = tokens[0][len(HASHED_KEY_MAGIC):].split("|", 2)
                    hash = hmac.new(kn_salt.decode('base64'), digestmod=sha1)
                    hash.update(host)
                    if hash.digest() == kn_host.decode('base64'):
                        return False
                except:
                    # invalid hashed host key, skip it
                    continue
            else:
                # standard host file entry
                if host in tokens[0]:
                    return False

    return True
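The hashed branch above implements OpenSSH's hashed `known_hosts` scheme: the stored token is `|1|base64(salt)|base64(HMAC-SHA1(salt, hostname))`. A minimal Python 3 sketch of the same check (the function name and the synthetic entry are illustrative, not part of the module; the Python 2 code above uses `str.decode('base64')` where Python 3 needs the `base64` module):

```python
import base64
import hmac
import os
from hashlib import sha1

HASHED_KEY_MAGIC = "|1|"

def host_matches_hashed_entry(host, token):
    # Recompute HMAC-SHA1 of the hostname with the entry's salt as the key
    # and compare it to the stored digest.
    kn_salt, kn_host = token[len(HASHED_KEY_MAGIC):].split("|", 2)
    mac = hmac.new(base64.b64decode(kn_salt), host.encode(), sha1)
    return mac.digest() == base64.b64decode(kn_host)

# Build a synthetic hashed entry for 'github.com' and verify the round trip.
salt = os.urandom(20)
digest = hmac.new(salt, b'github.com', sha1).digest()
token = (HASHED_KEY_MAGIC
         + base64.b64encode(salt).decode()
         + "|"
         + base64.b64encode(digest).decode())
```

Because the hostname only ever appears inside the HMAC, a stolen `known_hosts` file does not reveal which hosts a user connects to; matching requires recomputing the MAC per candidate host, exactly as the loop above does.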
def add_host_key(module, fqdn, key_type="rsa", create_dir=False):

    """ use ssh-keyscan to add the hostkey """

    result = False
    keyscan_cmd = module.get_bin_path('ssh-keyscan', True)

    if 'USER' in os.environ:
        user_ssh_dir = os.path.expandvars("~${USER}/.ssh/")
        user_host_file = os.path.expandvars("~${USER}/.ssh/known_hosts")
    else:
        user_ssh_dir = "~/.ssh/"
        user_host_file = "~/.ssh/known_hosts"
    user_ssh_dir = os.path.expanduser(user_ssh_dir)

    if not os.path.exists(user_ssh_dir):
        if create_dir:
            try:
                os.makedirs(user_ssh_dir, 0700)
            except:
                module.fail_json(msg="failed to create host key directory: %s" % user_ssh_dir)
        else:
            module.fail_json(msg="%s does not exist" % user_ssh_dir)
    elif not os.path.isdir(user_ssh_dir):
        module.fail_json(msg="%s is not a directory" % user_ssh_dir)

    this_cmd = "%s -t %s %s" % (keyscan_cmd, key_type, fqdn)

    rc, out, err = module.run_command(this_cmd)
    module.append_to_file(user_host_file, out)

    return rc, out, err
# This code is part of Ansible, but is an independent component.
# This particular file snippet, and this file snippet only, is BSD licensed.
# Modules you write using this snippet, which is embedded dynamically by Ansible
# still belong to the author of the module, and may assign their own license
# to the complete work.
#
# Copyright (c), Michael DeHaan <michael.dehaan@gmail.com>, 2012-2013
# All rights reserved.
#
# Redistribution and use in source and binary forms, with or without modification,
# are permitted provided that the following conditions are met:
#
# * Redistributions of source code must retain the above copyright
# notice, this list of conditions and the following disclaimer.
# * Redistributions in binary form must reproduce the above copyright notice,
# this list of conditions and the following disclaimer in the documentation
# and/or other materials provided with the distribution.
#
# THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS" AND
# ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED
# WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE DISCLAIMED.
# IN NO EVENT SHALL THE COPYRIGHT HOLDER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT,
# INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO,
# PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS
# INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT
# LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE
# USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
import os
def rax_argument_spec():
    return dict(
...
import os
import re
import types
import ConfigParser
import shlex

class RegistrationBase(object):
    def __init__(self, module, username=None, password=None):
        self.module = module
        self.username = username
        self.password = password

    def configure(self):
        raise NotImplementedError("Must be implemented by a sub-class")

    def enable(self):
        # Remove any existing redhat.repo
        redhat_repo = '/etc/yum.repos.d/redhat.repo'
        if os.path.isfile(redhat_repo):
            os.unlink(redhat_repo)

    def register(self):
        raise NotImplementedError("Must be implemented by a sub-class")

    def unregister(self):
        raise NotImplementedError("Must be implemented by a sub-class")

    def unsubscribe(self):
        raise NotImplementedError("Must be implemented by a sub-class")

    def update_plugin_conf(self, plugin, enabled=True):
        plugin_conf = '/etc/yum/pluginconf.d/%s.conf' % plugin
        if os.path.isfile(plugin_conf):
            cfg = ConfigParser.ConfigParser()
            cfg.read([plugin_conf])
            if enabled:
                cfg.set('main', 'enabled', 1)
            else:
                cfg.set('main', 'enabled', 0)
            fd = open(plugin_conf, 'rwa+')
            cfg.write(fd)
            fd.close()

    def subscribe(self, **kwargs):
        raise NotImplementedError("Must be implemented by a sub-class")
class Rhsm(RegistrationBase):
    def __init__(self, module, username=None, password=None):
        RegistrationBase.__init__(self, module, username, password)
        self.config = self._read_config()
        self.module = module

    def _read_config(self, rhsm_conf='/etc/rhsm/rhsm.conf'):
        '''
            Load RHSM configuration from /etc/rhsm/rhsm.conf.
            Returns:
             * ConfigParser object
        '''

        # Read RHSM defaults ...
        cp = ConfigParser.ConfigParser()
        cp.read(rhsm_conf)

        # Add support for specifying a default value w/o having to standup some configuration
        # Yeah, I know this should be subclassed ... but, oh well
        def get_option_default(self, key, default=''):
            sect, opt = key.split('.', 1)
            if self.has_section(sect) and self.has_option(sect, opt):
                return self.get(sect, opt)
            else:
                return default

        cp.get_option = types.MethodType(get_option_default, cp, ConfigParser.ConfigParser)

        return cp
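The `_read_config` trick above bolts a default-aware `section.option` getter onto a single `ConfigParser` instance rather than subclassing. The same idea in Python 3 syntax (note `configparser` is lowercase and `types.MethodType` takes only the function and the instance; the config text here is made up):

```python
import types
import configparser  # ConfigParser in the Python 2 code above

def get_option_default(self, key, default=''):
    # 'section.option' lookup that falls back to a default,
    # mirroring the nested helper in _read_config() above.
    sect, opt = key.split('.', 1)
    if self.has_section(sect) and self.has_option(sect, opt):
        return self.get(sect, opt)
    return default

cp = configparser.ConfigParser()
cp.read_string("[server]\nhostname = subscription.example.com\n")

# Bind the helper to this one instance; Python 2's three-argument
# MethodType(func, obj, cls) form above becomes two arguments here.
cp.get_option = types.MethodType(get_option_default, cp)
```

Attaching the method per-instance keeps the patch local to the one parser RHSM builds, at the cost of being invisible to other `ConfigParser` users, which is exactly the trade-off the original comment acknowledges.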
    def enable(self):
        '''
            Enable the system to receive updates from subscription-manager.
            This involves updating affected yum plugins and removing any
            conflicting yum repositories.
        '''
        RegistrationBase.enable(self)
        self.update_plugin_conf('rhnplugin', False)
        self.update_plugin_conf('subscription-manager', True)

    def configure(self, **kwargs):
        '''
            Configure the system as directed for registration with RHN
            Raises:
              * Exception - if error occurs while running command
        '''
        args = ['subscription-manager', 'config']

        # Pass supplied **kwargs as parameters to subscription-manager. Ignore
        # non-configuration parameters and replace '_' with '.'. For example,
        # 'server_hostname' becomes '--server.hostname'.
        for k, v in kwargs.items():
            if re.search(r'^(system|rhsm)_', k):
                args.append('--%s=%s' % (k.replace('_', '.'), v))

        self.module.run_command(args, check_rc=True)
    @property
    def is_registered(self):
        '''
            Determine whether the current system is registered.
            Returns:
              * Boolean - whether the current system is currently registered to
                          RHN.
        '''
        # Quick version...
        if False:
            return os.path.isfile('/etc/pki/consumer/cert.pem') and \
                   os.path.isfile('/etc/pki/consumer/key.pem')

        args = ['subscription-manager', 'identity']
        rc, stdout, stderr = self.module.run_command(args, check_rc=False)
        if rc == 0:
            return True
        else:
            return False
    def register(self, username, password, autosubscribe, activationkey):
        '''
            Register the current system to the provided RHN server
            Raises:
              * Exception - if error occurs while running command
        '''
        args = ['subscription-manager', 'register']

        # Generate command arguments
        if activationkey:
            args.append('--activationkey "%s"' % activationkey)
        else:
            if autosubscribe:
                args.append('--autosubscribe')
            if username:
                args.extend(['--username', username])
            if password:
                args.extend(['--password', password])

        # Do the needful...
        rc, stderr, stdout = self.module.run_command(args, check_rc=True)

    def unsubscribe(self):
        '''
            Unsubscribe a system from all subscribed channels
            Raises:
              * Exception - if error occurs while running command
        '''
        args = ['subscription-manager', 'unsubscribe', '--all']
        rc, stderr, stdout = self.module.run_command(args, check_rc=True)

    def unregister(self):
        '''
            Unregister a currently registered system
            Raises:
              * Exception - if error occurs while running command
        '''
        args = ['subscription-manager', 'unregister']
        rc, stderr, stdout = self.module.run_command(args, check_rc=True)

    def subscribe(self, regexp):
        '''
            Subscribe current system to available pools matching the specified
            regular expression
            Raises:
              * Exception - if error occurs while running command
        '''

        # Available pools ready for subscription
        available_pools = RhsmPools(self.module)

        for pool in available_pools.filter(regexp):
            pool.subscribe()
class RhsmPool(object):
    '''
        Convenience class for housing subscription information
    '''

    def __init__(self, module, **kwargs):
        self.module = module
        for k, v in kwargs.items():
            setattr(self, k, v)

    def __str__(self):
        return str(self.__getattribute__('_name'))

    def subscribe(self):
        args = "subscription-manager subscribe --pool %s" % self.PoolId
        rc, stdout, stderr = self.module.run_command(args, check_rc=True)
        if rc == 0:
            return True
        else:
            return False
class RhsmPools(object):
    """
        This class is used for manipulating pools subscriptions with RHSM
    """
    def __init__(self, module):
        self.module = module
        self.products = self._load_product_list()

    def __iter__(self):
        return self.products.__iter__()

    def _load_product_list(self):
        """
            Loads the list of all available pools for the system into a data structure
        """
        args = "subscription-manager list --available"
        rc, stdout, stderr = self.module.run_command(args, check_rc=True)

        products = []
        for line in stdout.split('\n'):
            # Remove leading+trailing whitespace
            line = line.strip()
            # An empty line implies the end of an output group
            if len(line) == 0:
                continue
            # If a colon ':' is found, parse
            elif ':' in line:
                (key, value) = line.split(':', 1)
                key = key.strip().replace(" ", "")  # To unify
                value = value.strip()
                if key in ['ProductName', 'SubscriptionName']:
                    # Remember the name for later processing
                    products.append(RhsmPool(self.module, _name=value, key=value))
                elif products:
                    # Associate value with most recently recorded product
                    products[-1].__setattr__(key, value)
                # FIXME - log some warning?
                #else:
                #    warnings.warn("Unhandled subscription key/value: %s/%s" % (key, value))
        return products

    def filter(self, regexp='^$'):
        '''
            Return a list of RhsmPools whose name matches the provided regular expression
        '''
        r = re.compile(regexp)
        for product in self.products:
            if r.search(product._name):
                yield product
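`_load_product_list` above parses the `Key: Value` blocks that `subscription-manager list --available` prints, starting a new record whenever a product or subscription name appears. A stripped-down sketch of the same parse using plain dicts (the sample output is fabricated for illustration):

```python
def parse_pools(stdout):
    # Mirror RhsmPools._load_product_list(): strip whitespace, split each
    # 'Key: Value' line, normalize the key, and group values under the most
    # recently seen Product/Subscription Name.
    products = []
    for line in stdout.split('\n'):
        line = line.strip()
        if not line:
            continue
        if ':' in line:
            key, value = line.split(':', 1)
            key = key.strip().replace(" ", "")  # 'Pool Id' -> 'PoolId'
            value = value.strip()
            if key in ('ProductName', 'SubscriptionName'):
                products.append({'_name': value})
            elif products:
                products[-1][key] = value
    return products

sample = """
Subscription Name: Example Enterprise Linux
Pool Id:           1234567890abcdef
Available:         10
"""
pools = parse_pools(sample)
```

The key normalization (`'Pool Id'` to `'PoolId'`) is what lets the original code address parsed fields as attributes like `self.PoolId` in `RhsmPool.subscribe()`.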
@@ -85,7 +85,7 @@ class Task(object):
        elif x.startswith("with_"):
            if isinstance(ds[x], basestring) and ds[x].lstrip().startswith("{{"):
                utils.warning("It is unnecessary to use '{{' in loops, leave variables in loop expressions bare.")

            plugin_name = x.replace("with_", "")
            if plugin_name in utils.plugins.lookup_loader:
@@ -97,7 +97,7 @@ class Task(object):
        elif x in ['changed_when', 'failed_when', 'when']:
            if isinstance(ds[x], basestring) and ds[x].lstrip().startswith("{{"):
                utils.warning("It is unnecessary to use '{{' in conditionals, leave variables in loop expressions bare.")
        elif x.startswith("when_"):
            utils.deprecated("The 'when_' conditional has been removed. Switch to using the regular unified 'when' statements as described on docs.ansible.com.", "1.5", removed=True)
@@ -206,8 +206,12 @@ class Task(object):
        self.changed_when = ds.get('changed_when', None)
        self.failed_when = ds.get('failed_when', None)

        self.async_seconds = ds.get('async', 0)  # not async by default
        self.async_seconds = template.template_from_string(play.basedir, self.async_seconds, self.module_vars)
        self.async_seconds = int(self.async_seconds)
        self.async_poll_interval = ds.get('poll', 10)  # default poll = 10 seconds
        self.async_poll_interval = template.template_from_string(play.basedir, self.async_poll_interval, self.module_vars)
        self.async_poll_interval = int(self.async_poll_interval)
        self.notify = ds.get('notify', [])
        self.first_available_file = ds.get('first_available_file', None)
...
@@ -31,18 +31,43 @@ class ActionModule(object):
    def __init__(self, runner):
        self.runner = runner

    def _assemble_from_fragments(self, src_path, delimiter=None, compiled_regexp=None):
        ''' assemble a file from a directory of fragments '''
        tmpfd, temp_path = tempfile.mkstemp()
        tmp = os.fdopen(tmpfd, 'w')
        delimit_me = False
        add_newline = False

        for f in sorted(os.listdir(src_path)):
            if compiled_regexp and not compiled_regexp.search(f):
                continue
            fragment = "%s/%s" % (src_path, f)
            if not os.path.isfile(fragment):
                continue
            fragment_content = file(fragment).read()

            # always put a newline between fragments if the previous fragment didn't end with a newline.
            if add_newline:
                tmp.write('\n')

            # delimiters should only appear between fragments
            if delimit_me:
                if delimiter:
                    # un-escape anything like newlines
                    delimiter = delimiter.decode('unicode-escape')
                    tmp.write(delimiter)
                    # always make sure there's a newline after the
                    # delimiter, so lines don't run together
                    if delimiter[-1] != '\n':
                        tmp.write('\n')

            tmp.write(fragment_content)
            delimit_me = True
            if fragment_content.endswith('\n'):
                add_newline = False
            else:
                add_newline = True

        tmp.close()
        return temp_path
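The delimiter and newline bookkeeping in `_assemble_from_fragments` is easier to see operating on plain strings. A hedged, string-only sketch of the same logic (illustrative, not the real plugin, which streams fragments from disk into a temp file):

```python
def assemble(fragments, delimiter=None):
    # Same rules as the plugin above: a newline is inserted when the previous
    # fragment lacked one, and the delimiter appears only *between* fragments,
    # always followed by a newline.
    out = []
    delimit_me = False
    add_newline = False
    for fragment_content in fragments:
        if add_newline:
            out.append('\n')
        if delimit_me and delimiter:
            # un-escape sequences like '\n' in the user-supplied delimiter
            d = delimiter.encode().decode('unicode-escape')
            out.append(d)
            if not d.endswith('\n'):
                out.append('\n')
        out.append(fragment_content)
        delimit_me = True
        add_newline = not fragment_content.endswith('\n')
    return ''.join(out)

result = assemble(['alpha\n', 'beta'], delimiter='# ---')
```

Tracking `add_newline` from the *previous* fragment is what keeps two fragments from running together on one line without ever introducing doubled blank lines.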
@@ -52,6 +77,7 @@ class ActionModule(object):
        options = {}
        if complex_args:
            options.update(complex_args)

        options.update(utils.parse_kv(module_args))

        src = options.get('src', None)
@@ -59,6 +85,7 @@ class ActionModule(object):
        delimiter = options.get('delimiter', None)
        remote_src = utils.boolean(options.get('remote_src', 'yes'))

        if src is None or dest is None:
            result = dict(failed=True, msg="src and dest are required")
            return ReturnData(conn=conn, comm_ok=False, result=result)
...
@@ -33,7 +33,7 @@ class ActionModule(object):
            module_name = 'command'
            module_args += " #USE_SHELL"

        if "tmp" not in tmp:
            tmp = self.runner._make_tmp_path(conn)

        (module_path, is_new_style, shebang) = self.runner._copy_module(conn, tmp, module_name, module_args, inject, complex_args=complex_args)
...
@@ -54,6 +54,16 @@ class ActionModule(object):
        raw = utils.boolean(options.get('raw', 'no'))
        force = utils.boolean(options.get('force', 'yes'))

        # content with newlines is going to be escaped to safely load in yaml
        # now we need to unescape it so that the newlines are evaluated properly
        # when writing the file to disk
        if content:
            if isinstance(content, unicode):
                try:
                    content = content.decode('unicode-escape')
                except UnicodeDecodeError:
                    pass
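When `content` arrives from YAML with literal `\n` sequences, the block above unescapes them before the file is written. In Python 3 the same round trip needs an explicit encode first, since `str` has no `decode` method (a small illustrative sketch, not the plugin's code):

```python
# A value as it might arrive from parsed module args: a literal
# backslash-n, not a real newline.
raw = 'line one\\nline two'

# Python 2's content.decode('unicode-escape') becomes encode-then-decode.
unescaped = raw.encode('ascii').decode('unicode-escape')
```

The `UnicodeDecodeError` guard in the original matters for content that merely *looks* escaped; genuinely binary or already-decoded content is left untouched rather than corrupted.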
        if (source is None and content is None and not 'first_available_file' in inject) or dest is None:
            result = dict(failed=True, msg="src (or content) and dest are required")
            return ReturnData(conn=conn, result=result)
@@ -325,7 +335,7 @@ class ActionModule(object):
            src = open(source)
            src_contents = src.read(8192)
            st = os.stat(source)
            if "\x00" in src_contents:
                diff['src_binary'] = 1
            elif st[stat.ST_SIZE] > utils.MAX_FILE_SIZE_FOR_DIFF:
                diff['src_larger'] = utils.MAX_FILE_SIZE_FOR_DIFF
...
@@ -83,6 +83,7 @@ class ActionModule(object):
            inv_group = ansible.inventory.Group(name=group)
            inventory.add_group(inv_group)
        for host in hosts:
            if host in self.runner.inventory._vars_per_host:
                del self.runner.inventory._vars_per_host[host]
            inv_host = inventory.get_host(host)
            if not inv_host:
...
@@ -77,11 +77,11 @@ class ActionModule(object):
        # Is 'prompt' a key in 'args'?
        elif 'prompt' in args:
            self.pause_type = 'prompt'
            self.prompt = "[%s]\n%s:\n" % (hosts, args['prompt'])
        # Is 'args' empty, then this is the default prompted pause
        elif len(args.keys()) == 0:
            self.pause_type = 'prompt'
            self.prompt = "[%s]\nPress enter to continue:\n" % hosts
        # I have no idea what you're trying to do. But it's so wrong.
        else:
            raise ae("invalid pause type given. must be one of: %s" % \
...
@@ -128,7 +128,7 @@ class ActionModule(object):
        result = handler.run(conn, tmp, 'raw', module_args, inject)

        # clean up after
        if "tmp" in tmp and not C.DEFAULT_KEEP_REMOTE_FILES:
            self.runner._low_level_exec_command(conn, 'rm -rf %s >/dev/null 2>&1' % tmp, tmp)

        result.result['changed'] = True
...
@@ -26,27 +26,55 @@ class ActionModule(object):
    def __init__(self, runner):
        self.runner = runner
        self.inject = None

    def _get_absolute_path(self, path=None):
        if 'vars' in self.inject:
            if '_original_file' in self.inject['vars']:
                # roles
                path = utils.path_dwim_relative(self.inject['_original_file'], 'files', path, self.runner.basedir)
            elif 'inventory_dir' in self.inject['vars']:
                # non-roles
                abs_dir = os.path.abspath(self.inject['vars']['inventory_dir'])
                path = os.path.join(abs_dir, path)

        return path

    def _process_origin(self, host, path, user):

        if not host in ['127.0.0.1', 'localhost']:
            if user:
                return '%s@%s:%s' % (user, host, path)
            else:
                return '%s:%s' % (host, path)
        else:
            if not ':' in path:
                if not path.startswith('/'):
                    path = self._get_absolute_path(path=path)
            return path

    def _process_remote(self, host, path, user):
        transport = self.runner.transport
        return_data = None

        if not host in ['127.0.0.1', 'localhost'] or transport != "local":
            if user:
                return_data = '%s@%s:%s' % (user, host, path)
            else:
                return_data = '%s:%s' % (host, path)
        else:
            return_data = path

        if not ':' in return_data:
            if not return_data.startswith('/'):
                return_data = self._get_absolute_path(path=return_data)

        return return_data

    def setup(self, module_name, inject):
        ''' Always default to localhost as delegate if None defined '''

        self.inject = inject

        # Store original transport and sudo values.
        self.original_transport = inject.get('ansible_connection', self.runner.transport)
        self.original_sudo = self.runner.sudo
...@@ -65,6 +93,8 @@ class ActionModule(object): ...@@ -65,6 +93,8 @@ class ActionModule(object):
''' generates params and passes them on to the rsync module ''' ''' generates params and passes them on to the rsync module '''
self.inject = inject
# load up options # load up options
options = {} options = {}
if complex_args: if complex_args:
...@@ -122,6 +152,7 @@ class ActionModule(object): ...@@ -122,6 +152,7 @@ class ActionModule(object):
if process_args or use_delegate: if process_args or use_delegate:
user = None user = None
if utils.boolean(options.get('set_remote_user', 'yes')):
if use_delegate: if use_delegate:
user = inject['hostvars'][conn.delegate].get('ansible_ssh_user') user = inject['hostvars'][conn.delegate].get('ansible_ssh_user')
...@@ -167,12 +198,15 @@ class ActionModule(object): ...@@ -167,12 +198,15 @@ class ActionModule(object):
if rsync_path: if rsync_path:
options['rsync_path'] = '"' + rsync_path + '"' options['rsync_path'] = '"' + rsync_path + '"'
module_items = ' '.join(['%s=%s' % (k, v) for (k, module_args = ""
v) in options.items()])
if self.runner.noop_on_check(inject): if self.runner.noop_on_check(inject):
module_items += " CHECKMODE=True" module_args = "CHECKMODE=True"
# run the module and store the result
result = self.runner._execute_module(conn, tmp, 'synchronize', module_args, complex_args=options, inject=inject)
# reset the sudo property
self.runner.sudo = self.original_sudo
return self.runner._execute_module(conn, tmp, 'synchronize', return result
module_items, inject=inject)
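The origin/remote processing above builds rsync-style endpoints: `user@host:path` when a remote user is set, `host:path` otherwise, and local paths pass through. A minimal standalone sketch of that rule (`format_rsync_path` is an illustrative name, not part of the module):

```python
def format_rsync_path(host, path, user=None):
    """Build an rsync-style remote path the way the synchronize
    action does: user@host:path when a user is given, host:path
    otherwise. Illustrative helper only."""
    if user:
        return '%s@%s:%s' % (user, host, path)
    return '%s:%s' % (host, path)
```

Local-delegate paths skip this formatting entirely, which is why the real code branches on `127.0.0.1`/`localhost` first.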
...
@@ -85,7 +85,7 @@ class ActionModule(object):

         # template the source data locally & get ready to transfer
         try:
-            resultant = template.template_from_file(self.runner.basedir, source, inject)
+            resultant = template.template_from_file(self.runner.basedir, source, inject, vault_password=self.runner.vault_pass)
         except Exception, e:
             result = dict(failed=True, msg=str(e))
             return ReturnData(conn=conn, comm_ok=False, result=result)

@@ -123,6 +123,7 @@ class ActionModule(object):
                 return ReturnData(conn=conn, comm_ok=True, result=dict(changed=True), diff=dict(before_header=dest, after_header=source, before=dest_contents, after=resultant))
             else:
                 res = self.runner._execute_module(conn, tmp, 'copy', module_args, inject=inject, complex_args=complex_args)
+                if res.result.get('changed', False):
                     res.diff = dict(before=dest_contents, after=resultant)
                 return res
         else:
...
# Based on local.py (c) 2012, Michael DeHaan <michael.dehaan@gmail.com>
# Based on chroot.py (c) 2013, Maykel Moya <mmoya@speedyrails.com>
# (c) 2013, Michael Scherer <misc@zarb.org>
#
# This file is part of Ansible
#
# Ansible is free software: you can redistribute it and/or modify
# it under the terms of the GNU General Public License as published by
# the Free Software Foundation, either version 3 of the License, or
# (at your option) any later version.
#
# Ansible is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License
# along with Ansible. If not, see <http://www.gnu.org/licenses/>.

import distutils.spawn
import os
import subprocess
from ansible import errors
from ansible.callbacks import vvv


class Connection(object):
    ''' Local lxc based connections '''

    def _search_executable(self, executable):
        cmd = distutils.spawn.find_executable(executable)
        if not cmd:
            raise errors.AnsibleError("%s command not found in PATH" % executable)
        return cmd

    def _check_domain(self, domain):
        p = subprocess.Popen([self.cmd, '-q', '-c', 'lxc:///', 'dominfo', domain],
                             stdout=subprocess.PIPE, stderr=subprocess.PIPE)
        p.communicate()
        if p.returncode:
            raise errors.AnsibleError("%s is not a lxc defined in libvirt" % domain)

    def __init__(self, runner, host, port, *args, **kwargs):
        self.lxc = host

        self.cmd = self._search_executable('virsh')
        self._check_domain(host)

        self.runner = runner
        self.host = host
        # port is unused, since this is local
        self.port = port

    def connect(self, port=None):
        ''' connect to the lxc; nothing to do here '''
        vvv("THIS IS A LOCAL LXC DIR", host=self.lxc)
        return self

    def _generate_cmd(self, executable, cmd):
        if executable:
            local_cmd = [self.cmd, '-q', '-c', 'lxc:///', 'lxc-enter-namespace', self.lxc, '--', executable, '-c', cmd]
        else:
            local_cmd = '%s -q -c lxc:/// lxc-enter-namespace %s -- %s' % (self.cmd, self.lxc, cmd)
        return local_cmd

    def exec_command(self, cmd, tmp_path, sudo_user, sudoable=False, executable='/bin/sh'):
        ''' run a command on the chroot '''

        # We enter lxc as root so sudo stuff can be ignored
        local_cmd = self._generate_cmd(executable, cmd)

        vvv("EXEC %s" % (local_cmd), host=self.lxc)
        p = subprocess.Popen(local_cmd, shell=isinstance(local_cmd, basestring),
                             cwd=self.runner.basedir,
                             stdin=subprocess.PIPE,
                             stdout=subprocess.PIPE, stderr=subprocess.PIPE)
        stdout, stderr = p.communicate()
        return (p.returncode, '', stdout, stderr)

    def _normalize_path(self, path, prefix):
        if not path.startswith(os.path.sep):
            path = os.path.join(os.path.sep, path)
        normpath = os.path.normpath(path)
        return os.path.join(prefix, normpath[1:])

    def put_file(self, in_path, out_path):
        ''' transfer a file from local to lxc '''
        out_path = self._normalize_path(out_path, '/')
        vvv("PUT %s TO %s" % (in_path, out_path), host=self.lxc)

        local_cmd = [self.cmd, '-q', '-c', 'lxc:///', 'lxc-enter-namespace', self.lxc, '--', '/bin/tee', out_path]
        vvv("EXEC %s" % (local_cmd), host=self.lxc)

        p = subprocess.Popen(local_cmd, cwd=self.runner.basedir,
                             stdin=subprocess.PIPE,
                             stdout=subprocess.PIPE, stderr=subprocess.PIPE)
        stdout, stderr = p.communicate(open(in_path, 'rb').read())

    def fetch_file(self, in_path, out_path):
        ''' fetch a file from lxc to local '''
        in_path = self._normalize_path(in_path, '/')
        vvv("FETCH %s TO %s" % (in_path, out_path), host=self.lxc)

        local_cmd = [self.cmd, '-q', '-c', 'lxc:///', 'lxc-enter-namespace', self.lxc, '--', '/bin/cat', in_path]
        vvv("EXEC %s" % (local_cmd), host=self.lxc)

        p = subprocess.Popen(local_cmd, cwd=self.runner.basedir,
                             stdin=subprocess.PIPE,
                             stdout=subprocess.PIPE, stderr=subprocess.PIPE)
        stdout, stderr = p.communicate()
        open(out_path, 'wb').write(stdout)

    def close(self):
        ''' terminate the connection; nothing to do here '''
        pass
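`_normalize_path` is the plugin's guard against path traversal: it anchors the caller-supplied path at the filesystem root, collapses any `..` segments, then re-roots the result under the given prefix. A standalone sketch of the same logic (function name reused here purely for illustration):

```python
import os

def normalize_path(path, prefix):
    # Anchor the path at the root so os.path.normpath can collapse
    # any leading '..' segments, then re-root it under the prefix.
    # This is the jailing trick the connection plugin uses in
    # put_file/fetch_file so a path can't escape the container root.
    if not path.startswith(os.path.sep):
        path = os.path.join(os.path.sep, path)
    normpath = os.path.normpath(path)
    return os.path.join(prefix, normpath[1:])
```

Even a hostile path like `../../etc/passwd` ends up inside the prefix, because the `..` segments are normalized away before the prefix is applied.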
...@@ -23,8 +23,11 @@ import types ...@@ -23,8 +23,11 @@ import types
import pipes import pipes
import glob import glob
import re import re
import operator as py_operator
from ansible import errors from ansible import errors
from ansible.utils import md5s from ansible.utils import md5s
from distutils.version import LooseVersion, StrictVersion
from random import SystemRandom
def to_nice_yaml(*a, **kw): def to_nice_yaml(*a, **kw):
'''Make verbose, human readable yaml''' '''Make verbose, human readable yaml'''
...@@ -42,8 +45,6 @@ def failed(*a, **kw): ...@@ -42,8 +45,6 @@ def failed(*a, **kw):
''' Test if task result yields failed ''' ''' Test if task result yields failed '''
item = a[0] item = a[0]
if type(item) != dict: if type(item) != dict:
print "DEBUG: GOT A"
print item
raise errors.AnsibleFilterError("|failed expects a dictionary") raise errors.AnsibleFilterError("|failed expects a dictionary")
rc = item.get('rc',0) rc = item.get('rc',0)
failed = item.get('failed',False) failed = item.get('failed',False)
...@@ -129,6 +130,15 @@ def search(value, pattern='', ignorecase=False): ...@@ -129,6 +130,15 @@ def search(value, pattern='', ignorecase=False):
''' Perform a `re.search` returning a boolean ''' ''' Perform a `re.search` returning a boolean '''
return regex(value, pattern, ignorecase, 'search') return regex(value, pattern, ignorecase, 'search')
def regex_replace(value='', pattern='', replacement='', ignorecase=False):
''' Perform a `re.sub` returning a string '''
if ignorecase:
flags = re.I
else:
flags = 0
_re = re.compile(pattern, flags=flags)
return _re.sub(replacement, value)
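The new `regex_replace` filter is a thin wrapper over `re.sub` with an optional case-insensitivity flag. The same shape, runnable on its own:

```python
import re

def regex_replace(value='', pattern='', replacement='', ignorecase=False):
    """Compile the pattern (with re.I when ignorecase is set) and
    substitute, mirroring the filter added above."""
    flags = re.I if ignorecase else 0
    return re.compile(pattern, flags=flags).sub(replacement, value)
```

In a template this would be used as `{{ 'ansible' | regex_replace('^a', 'A') }}`.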
 def unique(a):
     return set(a)

@@ -144,6 +154,37 @@ def symmetric_difference(a, b):

 def union(a, b):
     return set(a).union(b)

+def version_compare(value, version, operator='eq', strict=False):
+    ''' Perform a version comparison on a value '''
+    op_map = {
+        '==': 'eq', '=':  'eq', 'eq': 'eq',
+        '<':  'lt', 'lt': 'lt',
+        '<=': 'le', 'le': 'le',
+        '>':  'gt', 'gt': 'gt',
+        '>=': 'ge', 'ge': 'ge',
+        '!=': 'ne', '<>': 'ne', 'ne': 'ne'
+    }
+
+    if strict:
+        Version = StrictVersion
+    else:
+        Version = LooseVersion
+
+    if operator in op_map:
+        operator = op_map[operator]
+    else:
+        raise errors.AnsibleFilterError('Invalid operator type')
+
+    try:
+        method = getattr(py_operator, operator)
+        return method(Version(str(value)), Version(str(version)))
+    except Exception, e:
+        raise errors.AnsibleFilterError('Version comparison: %s' % e)
+
+def rand(end, start=0, step=1):
+    r = SystemRandom()
+    return r.randrange(start, end, step)
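The `version_compare` filter maps a symbolic operator (`<`, `>=`, `!=`, ...) onto the matching function in the stdlib `operator` module and applies it to two parsed versions. A simplified stand-in of that dispatch, comparing dotted release numbers as integer tuples instead of distutils `LooseVersion`/`StrictVersion` objects (so `1.10` correctly sorts after `1.6`):

```python
import operator as py_operator

OP_MAP = {'==': 'eq', '=': 'eq', 'eq': 'eq',
          '<': 'lt', 'lt': 'lt', '<=': 'le', 'le': 'le',
          '>': 'gt', 'gt': 'gt', '>=': 'ge', 'ge': 'ge',
          '!=': 'ne', '<>': 'ne', 'ne': 'ne'}

def version_compare(value, version, op='eq'):
    # Translate the symbolic operator to an operator-module name,
    # then compare the two versions as tuples of ints. This is a
    # sketch of the filter's dispatch, not the distutils-backed
    # implementation above.
    if op not in OP_MAP:
        raise ValueError('Invalid operator type')
    method = getattr(py_operator, OP_MAP[op])
    to_tuple = lambda v: tuple(int(x) for x in str(v).split('.'))
    return method(to_tuple(value), to_tuple(version))
```

Tuple comparison is why the integer split matters: a plain string compare would rank `'1.10' < '1.6'`.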
 class FilterModule(object):
     ''' Ansible core jinja2 filters '''

@@ -198,6 +239,7 @@ class FilterModule(object):
             'match': match,
             'search': search,
             'regex': regex,
+            'regex_replace': regex_replace,

             # list
             'unique' : unique,

@@ -205,5 +247,11 @@ class FilterModule(object):
             'difference': difference,
             'symmetric_difference': symmetric_difference,
             'union': union,

+            # version comparison
+            'version_compare': version_compare,
+
+            # random numbers
+            'random': rand,
         }
@@ -16,6 +16,7 @@
 # along with Ansible. If not, see <http://www.gnu.org/licenses/>.

 from ansible import utils
+import os
 import urllib2

 try:
     import json

@@ -24,6 +25,8 @@ except ImportError:
 # this can be made configurable, but should not use ansible.cfg
 ANSIBLE_ETCD_URL = 'http://127.0.0.1:4001'
+if os.getenv('ANSIBLE_ETCD_URL') is not None:
+    ANSIBLE_ETCD_URL = os.environ['ANSIBLE_ETCD_URL']

 class etcd():
     def __init__(self, url=ANSIBLE_ETCD_URL):
...
@@ -32,6 +32,17 @@ class LookupModule(object):
         ret = []
         for term in terms:
+            '''
+            http://docs.python.org/2/library/subprocess.html#popen-constructor
+
+            The shell argument (which defaults to False) specifies whether to use the
+            shell as the program to execute. If shell is True, it is recommended to pass
+            args as a string rather than as a sequence
+
+            https://github.com/ansible/ansible/issues/6550
+            '''
+            term = str(term)
+
             p = subprocess.Popen(term, cwd=self.basedir, shell=True, stdin=subprocess.PIPE, stdout=subprocess.PIPE)
             (stdout, stderr) = p.communicate()
             if p.returncode == 0:
...
@@ -30,18 +30,21 @@ class AsyncPoller(object):
         self.hosts_to_poll = []
         self.completed = False

-        # Get job id and which hosts to poll again in the future
-        jid = None
+        # flag to determine if at least one host was contacted
+        self.active = False

         # True to work with & below
         skipped = True
         for (host, res) in results['contacted'].iteritems():
             if res.get('started', False):
                 self.hosts_to_poll.append(host)
                 jid = res.get('ansible_job_id', None)
+                self.runner.vars_cache[host]['ansible_job_id'] = jid
+                self.active = True
             else:
                 skipped = skipped & res.get('skipped', False)
                 self.results['contacted'][host] = res
         for (host, res) in results['dark'].iteritems():
+            self.runner.vars_cache[host]['ansible_job_id'] = ''
             self.results['dark'][host] = res

         if not skipped:

@@ -49,14 +52,13 @@ class AsyncPoller(object):
                 raise errors.AnsibleError("unexpected error: unable to determine jid")
             if len(self.hosts_to_poll)==0:
                 raise errors.AnsibleError("unexpected error: no hosts to poll")
-        self.jid = jid

     def poll(self):
         """ Poll the job status.

            Returns the changes in this iteration."""
         self.runner.module_name = 'async_status'
-        self.runner.module_args = "jid=%s" % self.jid
+        self.runner.module_args = "jid={{ansible_job_id}}"
         self.runner.pattern = "*"
         self.runner.background = 0
         self.runner.complex_args = None

@@ -75,13 +77,14 @@ class AsyncPoller(object):
                     self.results['contacted'][host] = res
                     poll_results['contacted'][host] = res
                     if res.get('failed', False) or res.get('rc', 0) != 0:
-                        self.runner.callbacks.on_async_failed(host, res, self.jid)
+                        self.runner.callbacks.on_async_failed(host, res, self.runner.vars_cache[host]['ansible_job_id'])
                     else:
-                        self.runner.callbacks.on_async_ok(host, res, self.jid)
+                        self.runner.callbacks.on_async_ok(host, res, self.runner.vars_cache[host]['ansible_job_id'])
         for (host, res) in results['dark'].iteritems():
             self.results['dark'][host] = res
             poll_results['dark'][host] = res
-            self.runner.callbacks.on_async_failed(host, res, self.jid)
+            if host in self.hosts_to_poll:
+                self.runner.callbacks.on_async_failed(host, res, self.runner.vars_cache[host].get('ansible_job_id','XX'))

         self.hosts_to_poll = hosts
         if len(hosts)==0:

@@ -92,7 +95,7 @@ class AsyncPoller(object):
     def wait(self, seconds, poll_interval):
         """ Wait a certain time for job completion, check status every poll_interval. """
         # jid is None when all hosts were skipped
-        if self.jid is None:
+        if not self.active:
             return self.results

         clock = seconds - poll_interval

@@ -103,7 +106,7 @@ class AsyncPoller(object):
             for (host, res) in poll_results['polled'].iteritems():
                 if res.get('started'):
-                    self.runner.callbacks.on_async_poll(host, res, self.jid, clock)
+                    self.runner.callbacks.on_async_poll(host, res, self.runner.vars_cache[host]['ansible_job_id'], clock)

             clock = clock - poll_interval
...
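The change above replaces the single shared `self.jid` with a per-host job id stored in the runner's variable cache, so `async_status` can be invoked with the templated `jid={{ansible_job_id}}` and each host polls its own job. A minimal sketch of that bookkeeping, using a plain dict in place of the runner's `vars_cache` (helper name and shapes are mine, for illustration):

```python
def record_job_ids(results, vars_cache):
    # Each started host gets its own ansible_job_id in the cache;
    # unreachable ("dark") hosts get an empty id so later templating
    # of jid={{ansible_job_id}} never hits an undefined variable.
    hosts_to_poll = []
    for host, res in results.get('contacted', {}).items():
        if res.get('started', False):
            vars_cache.setdefault(host, {})['ansible_job_id'] = res.get('ansible_job_id')
            hosts_to_poll.append(host)
    for host in results.get('dark', {}):
        vars_cache.setdefault(host, {})['ansible_job_id'] = ''
    return hosts_to_poll
```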
@@ -23,6 +23,8 @@ import ast
 import yaml
 import traceback

+from ansible import utils
+
 # modules that are ok that they do not have documentation strings
 BLACKLIST_MODULES = [
     'async_wrapper', 'accelerate', 'async_status'

@@ -34,6 +36,10 @@ def get_docstring(filename, verbose=False):
     in the given file.

     Parse DOCUMENTATION from YAML and return the YAML doc or None
     together with EXAMPLES, as plain text.
+
+    DOCUMENTATION can be extended using documentation fragments
+    loaded by the PluginLoader from the module_docs_fragments
+    directory.
     """

     doc = None

@@ -46,6 +52,41 @@ def get_docstring(filename, verbose=False):
             if isinstance(child, ast.Assign):
                 if 'DOCUMENTATION' in (t.id for t in child.targets):
                     doc = yaml.safe_load(child.value.s)
+                    fragment_slug = doc.get('extends_documentation_fragment',
+                                            'doesnotexist').lower()
+
+                    # Allow the module to specify a var other than DOCUMENTATION
+                    # to pull the fragment from, using dot notation as a separator
+                    if '.' in fragment_slug:
+                        fragment_name, fragment_var = fragment_slug.split('.', 1)
+                        fragment_var = fragment_var.upper()
+                    else:
+                        fragment_name, fragment_var = fragment_slug, 'DOCUMENTATION'
+
+                    if fragment_slug != 'doesnotexist':
+                        fragment_class = utils.plugins.fragment_loader.get(fragment_name)
+                        assert fragment_class is not None
+
+                        fragment_yaml = getattr(fragment_class, fragment_var, '{}')
+                        fragment = yaml.safe_load(fragment_yaml)
+
+                        if fragment.has_key('notes'):
+                            notes = fragment.pop('notes')
+                            if notes:
+                                if not doc.has_key('notes'):
+                                    doc['notes'] = []
+                                doc['notes'].extend(notes)
+
+                        if 'options' not in fragment.keys():
+                            raise Exception("missing options in fragment, possibly misformatted?")
+
+                        for key, value in fragment.items():
+                            if not doc.has_key(key):
+                                doc[key] = value
+                            else:
+                                doc[key].update(value)
+
                 if 'EXAMPLES' in (t.id for t in child.targets):
                     plainexamples = child.value.s[1:] # Skip first empty line
     except:
...
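The fragment-merge rules added above are: `notes` entries from the fragment are appended to the module's notes, a fragment without `options` is an error, and every other fragment key either fills in a missing doc key or updates the existing mapping. Distilled into a standalone helper (the function name is mine; the real code does this inline in `get_docstring`):

```python
def merge_fragment(doc, fragment):
    # 'notes' are additive: append fragment notes to the module's own.
    notes = fragment.pop('notes', None)
    if notes:
        doc.setdefault('notes', []).extend(notes)

    # a fragment is expected to contribute options
    if 'options' not in fragment:
        raise Exception("missing options in fragment, possibly misformatted?")

    # remaining keys fill gaps or update existing mappings
    for key, value in fragment.items():
        if key not in doc:
            doc[key] = value
        else:
            doc[key].update(value)
    return doc
```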
# (c) 2014, Will Thames <will@thames.id.au>
#
# This file is part of Ansible
#
# Ansible is free software: you can redistribute it and/or modify
# it under the terms of the GNU General Public License as published by
# the Free Software Foundation, either version 3 of the License, or
# (at your option) any later version.
#
# Ansible is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License
# along with Ansible. If not, see <http://www.gnu.org/licenses/>.


class ModuleDocFragment(object):

    # AWS only documentation fragment
    DOCUMENTATION = """
options:
  ec2_url:
    description:
      - Url to use to connect to EC2 or your Eucalyptus cloud (by default the module will use EC2 endpoints). Must be specified if region is not used. If not set then the value of the EC2_URL environment variable, if any, is used
    required: false
    default: null
    aliases: []
  aws_secret_key:
    description:
      - AWS secret key. If not set then the value of the AWS_SECRET_KEY environment variable is used.
    required: false
    default: null
    aliases: [ 'ec2_secret_key', 'secret_key' ]
  aws_access_key:
    description:
      - AWS access key. If not set then the value of the AWS_ACCESS_KEY environment variable is used.
    required: false
    default: null
    aliases: [ 'ec2_access_key', 'access_key' ]
  validate_certs:
    description:
      - When set to "no", SSL certificates will not be validated for boto versions >= 2.6.0.
    required: false
    default: "yes"
    choices: ["yes", "no"]
    aliases: []
    version_added: "1.5"
  profile:
    description:
      - uses a boto profile. Only works with boto >= 2.24.0
    required: false
    default: null
    aliases: []
    version_added: "1.6"
  security_token:
    description:
      - security token to authenticate against AWS
    required: false
    default: null
    aliases: []
    version_added: "1.6"
requirements:
  - boto
notes:
  - The following environment variables can be used C(AWS_ACCESS_KEY) or
    C(EC2_ACCESS_KEY) or C(AWS_ACCESS_KEY_ID),
    C(AWS_SECRET_KEY) or C(EC2_SECRET_KEY) or C(AWS_SECRET_ACCESS_KEY),
    C(AWS_REGION) or C(EC2_REGION), C(AWS_SECURITY_TOKEN)
  - Ansible uses the boto configuration file (typically ~/.boto) if no
    credentials are provided. See http://boto.readthedocs.org/en/latest/boto_config_tut.html
  - C(AWS_REGION) or C(EC2_REGION) can typically be used to specify the
    AWS region, when required, but this can also be configured in the boto config file
"""
# (c) 2014, Matt Martz <matt@sivel.net>
#
# This file is part of Ansible
#
# Ansible is free software: you can redistribute it and/or modify
# it under the terms of the GNU General Public License as published by
# the Free Software Foundation, either version 3 of the License, or
# (at your option) any later version.
#
# Ansible is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License
# along with Ansible. If not, see <http://www.gnu.org/licenses/>.


class ModuleDocFragment(object):

    # Standard files documentation fragment
    DOCUMENTATION = """
options:
  path:
    description:
      - 'path to the file being managed. Aliases: I(dest), I(name)'
    required: true
    default: []
    aliases: ['dest', 'name']
  state:
    description:
      - If C(directory), all immediate subdirectories will be created if they
        do not exist. If C(file), the file will NOT be created if it does not
        exist, see the M(copy) or M(template) module if you want that behavior.
        If C(link), the symbolic link will be created or changed. Use C(hard)
        for hardlinks. If C(absent), directories will be recursively deleted,
        and files or symlinks will be unlinked. If C(touch) (new in 1.4), an empty file will
        be created if the C(path) does not exist, while an existing file or
        directory will receive updated file access and modification times (similar
        to the way C(touch) works from the command line).
    required: false
    default: file
    choices: [ file, link, directory, hard, touch, absent ]
  src:
    required: false
    default: null
    choices: []
    description:
      - path of the file to link to (applies only to C(state=link) or C(state=hard)). Will accept
        absolute, relative and nonexisting (with C(force)) paths. Relative paths are not expanded.
  recurse:
    required: false
    default: "no"
    choices: [ "yes", "no" ]
    version_added: "1.1"
    description:
      - recursively set the specified file attributes (applies only to state=directory)
"""
# (c) 2014, Matt Martz <matt@sivel.net>
#
# This file is part of Ansible
#
# Ansible is free software: you can redistribute it and/or modify
# it under the terms of the GNU General Public License as published by
# the Free Software Foundation, either version 3 of the License, or
# (at your option) any later version.
#
# Ansible is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License
# along with Ansible. If not, see <http://www.gnu.org/licenses/>.


class ModuleDocFragment(object):

    # Standard Rackspace only documentation fragment
    DOCUMENTATION = """
options:
  api_key:
    description:
      - Rackspace API key (overrides I(credentials))
    aliases:
      - password
  credentials:
    description:
      - File to find the Rackspace credentials in (ignored if I(api_key) and
        I(username) are provided)
    default: null
    aliases:
      - creds_file
  env:
    description:
      - Environment as configured in ~/.pyrax.cfg,
        see U(https://github.com/rackspace/pyrax/blob/master/docs/getting_started.md#pyrax-configuration)
    version_added: 1.5
  region:
    description:
      - Region to create an instance in
    default: DFW
  username:
    description:
      - Rackspace username (overrides I(credentials))
  verify_ssl:
    description:
      - Whether or not to require SSL validation of API endpoints
    version_added: 1.5
requirements:
  - pyrax
notes:
  - The following environment variables can be used, C(RAX_USERNAME),
    C(RAX_API_KEY), C(RAX_CREDS_FILE), C(RAX_CREDENTIALS), C(RAX_REGION).
  - C(RAX_CREDENTIALS) and C(RAX_CREDS_FILE) point to a credentials file
    appropriate for pyrax. See U(https://github.com/rackspace/pyrax/blob/master/docs/getting_started.md#authenticating)
  - C(RAX_USERNAME) and C(RAX_API_KEY) obviate the use of a credentials file
  - C(RAX_REGION) defines a Rackspace Public Cloud region (DFW, ORD, LON, ...)
"""

    # Documentation fragment including attributes to enable communication
    # of other OpenStack clouds. Not all rax modules support this.
    OPENSTACK = """
options:
  api_key:
    description:
      - Rackspace API key (overrides I(credentials))
    aliases:
      - password
  auth_endpoint:
    description:
      - The URI of the authentication service
    default: https://identity.api.rackspacecloud.com/v2.0/
    version_added: 1.5
  credentials:
    description:
      - File to find the Rackspace credentials in (ignored if I(api_key) and
        I(username) are provided)
    default: null
    aliases:
      - creds_file
  env:
    description:
      - Environment as configured in ~/.pyrax.cfg,
        see U(https://github.com/rackspace/pyrax/blob/master/docs/getting_started.md#pyrax-configuration)
    version_added: 1.5
  identity_type:
    description:
      - Authentication mechanism to use, such as rackspace or keystone
    default: rackspace
    version_added: 1.5
  region:
    description:
      - Region to create an instance in
    default: DFW
  tenant_id:
    description:
      - The tenant ID used for authentication
    version_added: 1.5
  tenant_name:
    description:
      - The tenant name used for authentication
    version_added: 1.5
  username:
    description:
      - Rackspace username (overrides I(credentials))
  verify_ssl:
    description:
      - Whether or not to require SSL validation of API endpoints
    version_added: 1.5
requirements:
  - pyrax
notes:
  - The following environment variables can be used, C(RAX_USERNAME),
    C(RAX_API_KEY), C(RAX_CREDS_FILE), C(RAX_CREDENTIALS), C(RAX_REGION).
  - C(RAX_CREDENTIALS) and C(RAX_CREDS_FILE) point to a credentials file
    appropriate for pyrax. See U(https://github.com/rackspace/pyrax/blob/master/docs/getting_started.md#authenticating)
  - C(RAX_USERNAME) and C(RAX_API_KEY) obviate the use of a credentials file
  - C(RAX_REGION) defines a Rackspace Public Cloud region (DFW, ORD, LON, ...)
"""
@@ -30,7 +30,7 @@ _basedirs = []

 def push_basedir(basedir):
     # avoid pushing the same absolute dir more than once
-    basedir = os.path.abspath(basedir)
+    basedir = os.path.realpath(basedir)
     if basedir not in _basedirs:
         _basedirs.insert(0, basedir)

@@ -99,7 +99,7 @@ class PluginLoader(object):
         ret = []
         ret += self._extra_dirs
         for basedir in _basedirs:
-            fullpath = os.path.abspath(os.path.join(basedir, self.subdir))
+            fullpath = os.path.realpath(os.path.join(basedir, self.subdir))
             if os.path.isdir(fullpath):
                 files = glob.glob("%s/*" % fullpath)
                 for file in files:

@@ -111,7 +111,7 @@ class PluginLoader(object):
         # look in any configured plugin paths, allow one level deep for subcategories
         configured_paths = self.config.split(os.pathsep)
         for path in configured_paths:
-            path = os.path.abspath(os.path.expanduser(path))
+            path = os.path.realpath(os.path.expanduser(path))
             contents = glob.glob("%s/*" % path)
             for c in contents:
                 if os.path.isdir(c) and c not in ret:

@@ -131,7 +131,7 @@ class PluginLoader(object):
         ''' Adds an additional directory to the search path '''

         self._paths = None
-        directory = os.path.abspath(directory)
+        directory = os.path.realpath(directory)

         if directory is not None:
             if with_subdir:

@@ -240,4 +240,9 @@ filter_loader = PluginLoader(
     'filter_plugins'
 )

+fragment_loader = PluginLoader(
+    'ModuleDocFragment',
+    'ansible.utils.module_docs_fragments',
+    os.path.join(os.path.dirname(__file__), 'module_docs_fragments'),
+    '',
+)
 def isprintable(instring):
+    if isinstance(instring, str):
         #http://stackoverflow.com/a/3637294
         import string
         printset = set(string.printable)
         isprintable = set(instring).issubset(printset)
         return isprintable
+    else:
+        return True

 def count_newlines_from_end(str):
     i = len(str)
...
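The guarded `isprintable` above only applies the `string.printable` set-membership check to byte strings (`str` in Python 2) and lets everything else (e.g. unicode) through. The same shape, runnable standalone; note that on Python 3, where `str` is unicode, the check applies to ordinary strings instead:

```python
import string

def isprintable(instring):
    # A string is "printable" when every character belongs to
    # string.printable; non-string inputs pass through as True,
    # mirroring the unicode bypass in the original.
    if isinstance(instring, str):
        return set(instring).issubset(set(string.printable))
    return True
```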
...@@ -88,8 +88,14 @@ def lookup(name, *args, **kwargs): ...@@ -88,8 +88,14 @@ def lookup(name, *args, **kwargs):
vars = kwargs.get('vars', None) vars = kwargs.get('vars', None)
if instance is not None: if instance is not None:
# safely catch run failures per #5059
try:
ran = instance.run(*args, inject=vars, **kwargs) ran = instance.run(*args, inject=vars, **kwargs)
return ",".join(ran) except Exception, e:
ran = None
if ran:
ran = ",".join(ran)
return ran
else: else:
raise errors.AnsibleError("lookup plugin (%s) not found" % name) raise errors.AnsibleError("lookup plugin (%s) not found" % name)
@@ -193,7 +199,7 @@ class J2Template(jinja2.environment.Template):
    def new_context(self, vars=None, shared=False, locals=None):
        return jinja2.runtime.Context(self.environment, vars.add_locals(locals), self.name, self.blocks)

-def template_from_file(basedir, path, vars):
+def template_from_file(basedir, path, vars, vault_password=None):
    ''' run a file through the templating engine '''
    fail_on_undefined = C.DEFAULT_UNDEFINED_VAR_BEHAVIOR
@@ -310,7 +316,13 @@ def template_from_string(basedir, data, vars, fail_on_undefined=False):
    if os.path.exists(filesdir):
        basedir = filesdir

+   # 6227
+   if isinstance(data, unicode):
+       try:
            data = data.decode('utf-8')
+       except UnicodeEncodeError, e:
+           pass

    try:
        t = environment.from_string(data)
    except Exception, e:
@@ -332,7 +344,10 @@ def template_from_string(basedir, data, vars, fail_on_undefined=False):
        res = jinja2.utils.concat(rf)
    except TypeError, te:
        if 'StrictUndefined' in str(te):
-           raise errors.AnsibleUndefinedVariable("unable to look up a name or access an attribute in template string")
+           raise errors.AnsibleUndefinedVariable(
+               "Unable to look up a name or access an attribute in template string. " + \
+               "Make sure your variable name does not contain invalid characters like '-'."
+           )
        else:
            raise errors.AnsibleError("an unexpected type error occured. Error was %s" % te)
    return res
...
@@ -196,7 +196,7 @@ def main():
        template_parameters=dict(required=False, type='dict', default={}),
        state=dict(default='present', choices=['present', 'absent']),
        template=dict(default=None, required=True),
-       disable_rollback=dict(default=False),
+       disable_rollback=dict(default=False, type='bool'),
        tags=dict(default=None)
    )
)
@@ -250,7 +250,7 @@ def main():
        operation = 'CREATE'
    except Exception, err:
        error_msg = boto_exception(err)
-       if 'AlreadyExistsException' in error_msg:
+       if 'AlreadyExistsException' in error_msg or 'already exists' in error_msg:
            update = True
        else:
            module.fail_json(msg=error_msg)
...
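Adding `type='bool'` to `disable_rollback` matters because playbook values arrive as strings, and any non-empty string is truthy in Python. A sketch of the kind of coercion `type='bool'` performs (the spellings mirror Ansible's accepted booleans; this is an illustration, not the `module_utils.basic` implementation):

```python
def coerce_bool(value):
    # map common yes/no spellings onto real booleans
    truthy = ('y', 'yes', 'on', '1', 'true', 1, True)
    falsy = ('n', 'no', 'off', '0', 'false', 0, False)
    if isinstance(value, str):
        value = value.lower()
    if value in truthy:
        return True
    if value in falsy:
        return False
    raise ValueError("cannot interpret %r as a boolean" % (value,))

print(bool("false"))         # True -- why the raw string was dangerous
print(coerce_bool("false"))  # False
print(coerce_bool("yes"))    # True
```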
@@ -20,7 +20,7 @@ DOCUMENTATION = '''
module: digital_ocean
short_description: Create/delete a droplet/SSH_key in DigitalOcean
description:
-     - Create/delete a droplet in DigitalOcean and optionally waits for it to be 'running', or deploy an SSH key.
+     - Create/delete a droplet in DigitalOcean and optionally wait for it to be 'running', or deploy an SSH key.
version_added: "1.3"
options:
  command:
@@ -35,10 +35,10 @@ options:
    choices: ['present', 'active', 'absent', 'deleted']
  client_id:
    description:
-     - Digital Ocean manager id.
+     - DigitalOcean manager id.
  api_key:
    description:
-     - Digital Ocean api key.
+     - DigitalOcean api key.
  id:
    description:
      - Numeric, the droplet id you want to operate on.
@@ -47,34 +47,40 @@ options:
      - String, this is the name of the droplet - must be formatted by hostname rules, or the name of a SSH key.
  unique_name:
    description:
-     - Bool, require unique hostnames. By default, digital ocean allows multiple hosts with the same name. Setting this to "yes" allows only one host per name. Useful for idempotence.
+     - Bool, require unique hostnames. By default, DigitalOcean allows multiple hosts with the same name. Setting this to "yes" allows only one host per name. Useful for idempotence.
    version_added: "1.4"
    default: "no"
    choices: [ "yes", "no" ]
  size_id:
    description:
-     - Numeric, this is the id of the size you would like the droplet created at.
+     - Numeric, this is the id of the size you would like the droplet created with.
  image_id:
    description:
      - Numeric, this is the id of the image you would like the droplet created with.
  region_id:
    description:
-     - "Numeric, this is the id of the region you would like your server"
+     - "Numeric, this is the id of the region you would like your server to be created in."
  ssh_key_ids:
    description:
-     - Optional, comma separated list of ssh_key_ids that you would like to be added to the server
+     - Optional, comma separated list of ssh_key_ids that you would like to be added to the server.
  virtio:
    description:
-     - "Bool, turn on virtio driver in droplet for improved network and storage I/O"
+     - "Bool, turn on virtio driver in droplet for improved network and storage I/O."
    version_added: "1.4"
    default: "yes"
    choices: [ "yes", "no" ]
  private_networking:
    description:
-     - "Bool, add an additional, private network interface to droplet for inter-droplet communication"
+     - "Bool, add an additional, private network interface to droplet for inter-droplet communication."
    version_added: "1.4"
    default: "no"
    choices: [ "yes", "no" ]
+ backups_enabled:
+   description:
+     - Optional, Boolean, enables backups for your droplet.
+   version_added: "1.6"
+   default: "no"
+   choices: [ "yes", "no" ]
  wait:
    description:
      - Wait for the droplet to be in state 'running' before returning. If wait is "no" an ip_address may not be returned.
@@ -164,11 +170,11 @@ try:
    import dopy
    from dopy.manager import DoError, DoManager
except ImportError, e:
-   print "failed=True msg='dopy >= 0.2.2 required for this module'"
+   print "failed=True msg='dopy >= 0.2.3 required for this module'"
    sys.exit(1)

-if dopy.__version__ < '0.2.2':
-   print "failed=True msg='dopy >= 0.2.2 required for this module'"
+if dopy.__version__ < '0.2.3':
+   print "failed=True msg='dopy >= 0.2.3 required for this module'"
    sys.exit(1)

class TimeoutError(DoError):
@@ -229,8 +235,8 @@ class Droplet(JsonfyMixIn):
        cls.manager = DoManager(client_id, api_key)

    @classmethod
-   def add(cls, name, size_id, image_id, region_id, ssh_key_ids=None, virtio=True, private_networking=False):
-       json = cls.manager.new_droplet(name, size_id, image_id, region_id, ssh_key_ids, virtio, private_networking)
+   def add(cls, name, size_id, image_id, region_id, ssh_key_ids=None, virtio=True, private_networking=False, backups_enabled=False):
+       json = cls.manager.new_droplet(name, size_id, image_id, region_id, ssh_key_ids, virtio, private_networking, backups_enabled)
        droplet = cls(json)
        return droplet
@@ -333,7 +339,8 @@ def core(module):
            region_id=getkeyordie('region_id'),
            ssh_key_ids=module.params['ssh_key_ids'],
            virtio=module.params['virtio'],
-           private_networking=module.params['private_networking']
+           private_networking=module.params['private_networking'],
+           backups_enabled=module.params['backups_enabled'],
        )

    if droplet.is_powered_on():
@@ -348,7 +355,7 @@ def core(module):
    elif state in ('absent', 'deleted'):
        # First, try to find a droplet by id.
-       droplet = Droplet.find(id=getkeyordie('id'))
+       droplet = Droplet.find(module.params['id'])

        # If we couldn't find the droplet and the user is allowing unique
        # hostnames, then check to see if a droplet with the specified
@@ -392,8 +399,9 @@ def main():
        image_id = dict(type='int'),
        region_id = dict(type='int'),
        ssh_key_ids = dict(default=''),
-       virtio = dict(type='bool', choices=BOOLEANS, default='yes'),
-       private_networking = dict(type='bool', choices=BOOLEANS, default='no'),
+       virtio = dict(type='bool', default='yes'),
+       private_networking = dict(type='bool', default='no'),
+       backups_enabled = dict(type='bool', default='no'),
        id = dict(aliases=['droplet_id'], type='int'),
        unique_name = dict(type='bool', default='no'),
        wait = dict(type='bool', default=True),
...
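One caveat with the version gate above: `dopy.__version__ < '0.2.3'` is a plain string comparison, which happens to work for these releases but fails lexically once a version component reaches two digits. A sketch of the safer tuple comparison, assuming purely numeric dotted versions:

```python
def version_tuple(v):
    # '0.2.3' -> (0, 2, 3); numeric components compare correctly
    return tuple(int(part) for part in v.split('.'))

print('0.10.0' < '0.2.3')  # True -- lexical comparison gets this wrong
print(version_tuple('0.10.0') < version_tuple('0.2.3'))  # False
```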
#!/usr/bin/python
# -*- coding: utf-8 -*-
# This file is part of Ansible
#
# Ansible is free software: you can redistribute it and/or modify
# it under the terms of the GNU General Public License as published by
# the Free Software Foundation, either version 3 of the License, or
# (at your option) any later version.
#
# Ansible is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License
# along with Ansible. If not, see <http://www.gnu.org/licenses/>.
DOCUMENTATION = '''
---
module: digital_ocean_domain
short_description: Create/delete a DNS record in DigitalOcean
description:
- Create/delete a DNS record in DigitalOcean.
version_added: "1.6"
options:
state:
description:
- Indicate desired state of the target.
default: present
choices: ['present', 'active', 'absent', 'deleted']
client_id:
description:
- Digital Ocean manager id.
api_key:
description:
- Digital Ocean api key.
id:
description:
- Numeric, the droplet id you want to operate on.
name:
description:
- String, this is the name of the droplet - must be formatted by hostname rules, or the name of a SSH key, or the name of a domain.
ip:
description:
- The IP address to point a domain at.
notes:
- Two environment variables can be used, DO_CLIENT_ID and DO_API_KEY.
'''
EXAMPLES = '''
# Create a domain record
- digital_ocean_domain: >
state=present
name=my.digitalocean.domain
ip=127.0.0.1
# Create a droplet and a corresponding domain record
- digital_ocean_droplet: >
state=present
name=test_droplet
size_id=1
region_id=2
image_id=3
register: test_droplet
- digital_ocean_domain: >
state=present
name={{ test_droplet.name }}.my.domain
ip={{ test_droplet.ip_address }}
'''
import sys
import os
import time
try:
from dopy.manager import DoError, DoManager
except ImportError as e:
print "failed=True msg='dopy required for this module'"
sys.exit(1)
class TimeoutError(DoError):
def __init__(self, msg, id):
super(TimeoutError, self).__init__(msg)
self.id = id
class JsonfyMixIn(object):
def to_json(self):
return self.__dict__
class DomainRecord(JsonfyMixIn):
manager = None
def __init__(self, json):
self.__dict__.update(json)
update_attr = __init__
def update(self, data = None, record_type = None):
json = self.manager.edit_domain_record(self.domain_id,
self.id,
record_type if record_type is not None else self.record_type,
data if data is not None else self.data)
self.__dict__.update(json)
return self
def destroy(self):
json = self.manager.destroy_domain_record(self.domain_id, self.id)
return json
class Domain(JsonfyMixIn):
manager = None
def __init__(self, domain_json):
self.__dict__.update(domain_json)
def destroy(self):
self.manager.destroy_domain(self.id)
def records(self):
json = self.manager.all_domain_records(self.id)
return map(DomainRecord, json)
@classmethod
def add(cls, name, ip):
json = cls.manager.new_domain(name, ip)
return cls(json)
@classmethod
def setup(cls, client_id, api_key):
cls.manager = DoManager(client_id, api_key)
DomainRecord.manager = cls.manager
@classmethod
def list_all(cls):
domains = cls.manager.all_domains()
return map(cls, domains)
@classmethod
def find(cls, name=None, id=None):
if name is None and id is None:
return False
domains = Domain.list_all()
if id is not None:
for domain in domains:
if domain.id == id:
return domain
if name is not None:
for domain in domains:
if domain.name == name:
return domain
return False
def core(module):
def getkeyordie(k):
v = module.params[k]
if v is None:
module.fail_json(msg='Unable to load %s' % k)
return v
try:
# params['client_id'] will be None even if client_id is not passed in
client_id = module.params['client_id'] or os.environ['DO_CLIENT_ID']
api_key = module.params['api_key'] or os.environ['DO_API_KEY']
except KeyError, e:
module.fail_json(msg='Unable to load %s' % e.message)
changed = True
state = module.params['state']
Domain.setup(client_id, api_key)
if state in ('present', 'active'):
domain = Domain.find(id=module.params["id"])
if not domain:
domain = Domain.find(name=getkeyordie("name"))
if not domain:
domain = Domain.add(getkeyordie("name"),
getkeyordie("ip"))
module.exit_json(changed=True, domain=domain.to_json())
else:
records = domain.records()
at_record = None
for record in records:
if record.name == "@":
at_record = record
        if at_record is not None and at_record.data != getkeyordie("ip"):
            at_record.update(data=getkeyordie("ip"), record_type='A')
            module.exit_json(changed=True, domain=Domain.find(id=at_record.domain_id).to_json())
module.exit_json(changed=False, domain=domain.to_json())
elif state in ('absent', 'deleted'):
domain = None
if "id" in module.params:
domain = Domain.find(id=module.params["id"])
if not domain and "name" in module.params:
domain = Domain.find(name=module.params["name"])
if not domain:
module.exit_json(changed=False, msg="Domain not found.")
event_json = domain.destroy()
module.exit_json(changed=True, event=event_json)
def main():
module = AnsibleModule(
argument_spec = dict(
state = dict(choices=['active', 'present', 'absent', 'deleted'], default='present'),
client_id = dict(aliases=['CLIENT_ID'], no_log=True),
api_key = dict(aliases=['API_KEY'], no_log=True),
name = dict(type='str'),
id = dict(aliases=['droplet_id'], type='int'),
ip = dict(type='str'),
),
required_one_of = (
['id', 'name'],
),
)
try:
core(module)
except TimeoutError as e:
module.fail_json(msg=str(e), id=e.id)
except (DoError, Exception) as e:
module.fail_json(msg=str(e))
# import module snippets
from ansible.module_utils.basic import *
main()
#!/usr/bin/python
# -*- coding: utf-8 -*-
# This file is part of Ansible
#
# Ansible is free software: you can redistribute it and/or modify
# it under the terms of the GNU General Public License as published by
# the Free Software Foundation, either version 3 of the License, or
# (at your option) any later version.
#
# Ansible is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License
# along with Ansible. If not, see <http://www.gnu.org/licenses/>.
DOCUMENTATION = '''
---
module: digital_ocean_sshkey
short_description: Create/delete an SSH key in DigitalOcean
description:
- Create/delete an SSH key.
version_added: "1.6"
options:
state:
description:
- Indicate desired state of the target.
default: present
choices: ['present', 'absent']
client_id:
description:
- Digital Ocean manager id.
api_key:
description:
- Digital Ocean api key.
id:
description:
- Numeric, the SSH key id you want to operate on.
name:
description:
- String, this is the name of an SSH key to create or destroy.
ssh_pub_key:
description:
- The public SSH key you want to add to your account.
notes:
- Two environment variables can be used, DO_CLIENT_ID and DO_API_KEY.
'''
EXAMPLES = '''
# Ensure a SSH key is present
# If a key matches this name, will return the ssh key id and changed = False
# If no existing key matches this name, a new key is created, the ssh key id is returned and changed = True
- digital_ocean_sshkey: >
state=present
name=my_ssh_key
ssh_pub_key='ssh-rsa AAAA...'
client_id=XXX
api_key=XXX
'''
import sys
import os
import time
try:
from dopy.manager import DoError, DoManager
except ImportError as e:
print "failed=True msg='dopy required for this module'"
sys.exit(1)
class TimeoutError(DoError):
def __init__(self, msg, id):
super(TimeoutError, self).__init__(msg)
self.id = id
class JsonfyMixIn(object):
def to_json(self):
return self.__dict__
class SSH(JsonfyMixIn):
manager = None
def __init__(self, ssh_key_json):
self.__dict__.update(ssh_key_json)
update_attr = __init__
def destroy(self):
self.manager.destroy_ssh_key(self.id)
return True
@classmethod
def setup(cls, client_id, api_key):
cls.manager = DoManager(client_id, api_key)
@classmethod
def find(cls, name):
if not name:
return False
keys = cls.list_all()
for key in keys:
if key.name == name:
return key
return False
@classmethod
def list_all(cls):
json = cls.manager.all_ssh_keys()
return map(cls, json)
@classmethod
def add(cls, name, key_pub):
json = cls.manager.new_ssh_key(name, key_pub)
return cls(json)
def core(module):
def getkeyordie(k):
v = module.params[k]
if v is None:
module.fail_json(msg='Unable to load %s' % k)
return v
try:
# params['client_id'] will be None even if client_id is not passed in
client_id = module.params['client_id'] or os.environ['DO_CLIENT_ID']
api_key = module.params['api_key'] or os.environ['DO_API_KEY']
except KeyError, e:
module.fail_json(msg='Unable to load %s' % e.message)
changed = True
state = module.params['state']
SSH.setup(client_id, api_key)
name = getkeyordie('name')
if state == 'present':
key = SSH.find(name)
if key:
module.exit_json(changed=False, ssh_key=key.to_json())
key = SSH.add(name, getkeyordie('ssh_pub_key'))
module.exit_json(changed=True, ssh_key=key.to_json())
elif state == 'absent':
key = SSH.find(name)
if not key:
module.exit_json(changed=False, msg='SSH key with the name of %s is not found.' % name)
key.destroy()
module.exit_json(changed=True)
def main():
module = AnsibleModule(
argument_spec = dict(
state = dict(choices=['present', 'absent'], default='present'),
client_id = dict(aliases=['CLIENT_ID'], no_log=True),
api_key = dict(aliases=['API_KEY'], no_log=True),
name = dict(type='str'),
id = dict(aliases=['droplet_id'], type='int'),
ssh_pub_key = dict(type='str'),
),
required_one_of = (
['id', 'name'],
),
)
try:
core(module)
except TimeoutError as e:
module.fail_json(msg=str(e), id=e.id)
except (DoError, Exception) as e:
module.fail_json(msg=str(e))
# import module snippets
from ansible.module_utils.basic import *
main()
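Both new DigitalOcean modules follow the same idempotent "present" shape: find by name first, report changed=False on a hit, and create only on a miss. A sketch of that pattern against a hypothetical in-memory store standing in for the API:

```python
def ensure_present(store, name, value):
    # find-first, create-on-miss: a repeated call is a no-op
    if name in store:
        return False, store[name]  # changed=False, existing resource
    store[name] = value
    return True, value             # changed=True, newly created

keys = {}
print(ensure_present(keys, 'my_ssh_key', 'ssh-rsa AAAA...'))
print(ensure_present(keys, 'my_ssh_key', 'ssh-rsa AAAA...'))
```

The second call reports no change, which is exactly what lets a playbook run repeatedly without side effects.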
@@ -148,7 +148,7 @@ options:
      - Set the state of the container
    required: false
    default: present
-   choices: [ "present", "stopped", "absent", "killed", "restarted" ]
+   choices: [ "present", "running", "stopped", "absent", "killed", "restarted" ]
    aliases: []
  privileged:
    description:
@@ -169,6 +169,20 @@ options:
    default: null
    aliases: []
    version_added: "1.5"
+ stdin_open:
+   description:
+     - Keep stdin open
+   required: false
+   default: false
+   aliases: []
+   version_added: "1.6"
+ tty:
+   description:
+     - Allocate a pseudo-tty
+   required: false
+   default: false
+   aliases: []
+   version_added: "1.6"
author: Cove Schneider, Joshua Conner, Pavel Antonov
requirements: [ "docker-py >= 0.3.0" ]
'''
@@ -287,6 +301,7 @@ import sys
from urlparse import urlparse
try:
    import docker.client
+   import docker.utils
    from requests.exceptions import *
except ImportError, e:
    HAS_DOCKER_PY = False
@@ -331,7 +346,7 @@ class DockerManager:
        if self.module.params.get('volumes'):
            self.binds = {}
            self.volumes = {}
-           vols = self.parse_list_from_param('volumes')
+           vols = self.module.params.get('volumes')
            for vol in vols:
                parts = vol.split(":")
                # host mount (e.g. /mnt:/tmp, bind mounts host's /tmp to /mnt in the container)
@@ -345,46 +360,32 @@ class DockerManager:
        self.lxc_conf = None
        if self.module.params.get('lxc_conf'):
            self.lxc_conf = []
-           options = self.parse_list_from_param('lxc_conf')
+           options = self.module.params.get('lxc_conf')
            for option in options:
                parts = option.split(':')
                self.lxc_conf.append({"Key": parts[0], "Value": parts[1]})

        self.exposed_ports = None
        if self.module.params.get('expose'):
-           expose = self.parse_list_from_param('expose')
-           self.exposed_ports = self.get_exposed_ports(expose)
+           self.exposed_ports = self.get_exposed_ports(self.module.params.get('expose'))

        self.port_bindings = None
        if self.module.params.get('ports'):
-           ports = self.parse_list_from_param('ports')
-           self.port_bindings = self.get_port_bindings(ports)
+           self.port_bindings = self.get_port_bindings(self.module.params.get('ports'))

        self.links = None
        if self.module.params.get('links'):
-           links = self.parse_list_from_param('links')
-           self.links = dict(map(lambda x: x.split(':'), links))
+           self.links = dict(map(lambda x: x.split(':'), self.module.params.get('links')))

        self.env = None
        if self.module.params.get('env'):
-           env = self.parse_list_from_param('env')
-           self.env = dict(map(lambda x: x.split("="), env))
+           self.env = dict(map(lambda x: x.split("="), self.module.params.get('env')))

        # connect to docker server
        docker_url = urlparse(module.params.get('docker_url'))
        self.client = docker.Client(base_url=docker_url.geturl())

-   def parse_list_from_param(self, param_name, delimiter=','):
-       """
-       Get a list from a module parameter, whether it's specified as a delimiter-separated string or is already in list form.
-       """
-       param_list = self.module.params.get(param_name)
-       if not isinstance(param_list, list):
-           param_list = param_list.split(delimiter)
-       return param_list

    def get_exposed_ports(self, expose_list):
        """
        Parse the ports and protocols (TCP/UDP) to expose in the docker-py `create_container` call from the docker CLI-style syntax.
@@ -409,7 +410,9 @@ class DockerManager:
        """
        binds = {}
        for port in ports:
-           parts = port.split(':')
+           # ports could potentially be an array like [80, 443], so we make sure they're strings
+           # before splitting
+           parts = str(port).split(':')
            container_port = parts[-1]
            if '/' not in container_port:
                container_port = int(parts[-1])
@@ -522,15 +525,19 @@ class DockerManager:
            'command': self.module.params.get('command'),
            'ports': self.exposed_ports,
            'volumes': self.volumes,
-           'volumes_from': self.module.params.get('volumes_from'),
            'mem_limit': _human_to_bytes(self.module.params.get('memory_limit')),
            'environment': self.env,
-           'dns': self.module.params.get('dns'),
            'hostname': self.module.params.get('hostname'),
            'detach': self.module.params.get('detach'),
            'name': self.module.params.get('name'),
+           'stdin_open': self.module.params.get('stdin_open'),
+           'tty': self.module.params.get('tty'),
        }

+       if docker.utils.compare_version('1.10', self.client.version()['ApiVersion']) < 0:
+           params['dns'] = self.module.params.get('dns')
+           params['volumes_from'] = self.module.params.get('volumes_from')

        def do_create(count, params):
            results = []
            for _ in range(count):
@@ -558,6 +565,11 @@ class DockerManager:
            'privileged': self.module.params.get('privileged'),
            'links': self.links,
        }
+       if docker.utils.compare_version('1.10', self.client.version()['ApiVersion']) >= 0:
+           params['dns'] = self.module.params.get('dns')
+           params['volumes_from'] = self.module.params.get('volumes_from')

        for i in containers:
            self.client.start(i['Id'], **params)
            self.increment_counter('started')
@@ -616,12 +628,12 @@ def main():
        count = dict(default=1),
        image = dict(required=True),
        command = dict(required=False, default=None),
-       expose = dict(required=False, default=None),
-       ports = dict(required=False, default=None),
+       expose = dict(required=False, default=None, type='list'),
+       ports = dict(required=False, default=None, type='list'),
        publish_all_ports = dict(default=False, type='bool'),
-       volumes = dict(default=None),
+       volumes = dict(default=None, type='list'),
        volumes_from = dict(default=None),
-       links = dict(default=None),
+       links = dict(default=None, type='list'),
        memory_limit = dict(default=0),
        memory_swap = dict(default=0),
        docker_url = dict(default='unix://var/run/docker.sock'),
@@ -629,13 +641,15 @@ def main():
        password = dict(),
        email = dict(),
        hostname = dict(default=None),
-       env = dict(),
+       env = dict(type='list'),
        dns = dict(),
        detach = dict(default=True, type='bool'),
-       state = dict(default='present', choices=['absent', 'present', 'stopped', 'killed', 'restarted']),
+       state = dict(default='running', choices=['absent', 'present', 'running', 'stopped', 'killed', 'restarted']),
        debug = dict(default=False, type='bool'),
        privileged = dict(default=False, type='bool'),
-       lxc_conf = dict(default=None),
+       stdin_open = dict(default=False, type='bool'),
+       tty = dict(default=False, type='bool'),
+       lxc_conf = dict(default=None, type='list'),
        name = dict(default=None)
    )
)
@@ -662,12 +676,20 @@ def main():
    changed = False

    # start/stop containers
-   if state == "present":
-       # make sure a container with `name` is running
-       if name and "/" + name not in map(lambda x: x.get('Name'), running_containers):
+   if state in [ "running", "present" ]:
+       # make sure a container with `name` exists, if not create and start it
+       if name and "/" + name not in map(lambda x: x.get('Name'), deployed_containers):
            containers = manager.create_containers(1)
+           if state == "present": #otherwise it get (re)started later anyways..
                manager.start_containers(containers)
+           running_containers = manager.get_running_containers()
+           deployed_containers = manager.get_deployed_containers()

+       if state == "running":
+           # make sure a container with `name` is running
+           if name and "/" + name not in map(lambda x: x.get('Name'), running_containers):
+               manager.start_containers(deployed_containers)

        # start more containers if we don't have enough
        elif delta > 0:
@@ -681,6 +703,8 @@ def main():
        manager.remove_containers(containers_to_stop)

        facts = manager.get_running_containers()
+   else:
+       facts = manager.get_deployed_containers()
    # stop and remove containers
    elif state == "absent":
...
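The `dns`/`volumes_from` moves above hinge on comparing Docker Remote API versions, for which the patch pulls in `docker.utils.compare_version`; that helper compares versions numerically, which a plain string comparison cannot do. As a sketch of why that matters and how such a gate behaves, here is a local stand-in (`api_at_least` and `placement` are illustrative, not docker-py's exact implementation or the module's exact cutoff):

```python
def api_at_least(current, minimum):
    # numeric component-wise comparison, e.g. '1.10' >= '1.9'
    to_tuple = lambda v: tuple(int(p) for p in v.split('.'))
    return to_tuple(current) >= to_tuple(minimum)

def placement(api_version):
    # hypothetical helper: which call receives dns/volumes_from
    return 'create_container' if api_at_least(api_version, '1.10') else 'start'

print(api_at_least('1.10', '1.9'))  # True (a string compare would say False)
print(placement('1.9'))             # start
print(placement('1.10'))            # create_container
```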